Remote Debugging and What It Means for Java Applications
Drawn by the promise of managed infrastructure, reduced operational cost, and resiliency, cloud computing has seen phenomenal adoption over the past decade. As software development marches toward the cloud, it becomes clear that this shift requires us to rethink our debugging strategies: systems that leverage cloud computing and distributed architectures expose gaps that the traditional methods of logging and breakpoints cannot fill.
For example, a major limitation of breakpoints is that the codebase must run in debug mode. As a result, we are not replicating the actual state of our systems once multi-threading, distributed services, and dependencies on remote services in a cloud-native, multi-service architecture are taken into account. Logs offer little respite either, as they can be cumbersome and even costly to produce and store.
Considering this new era of technology and software, we see a movement to redefine debugging practices to better suit it. This is where remote debugging comes in: a technique that aims to address the pitfalls of traditional debugging in cloud and remote landscapes. It allows a developer to debug a system that does not live in their local environment by establishing a connection between the developer's machine and the service to be debugged, which sits on a remote server.
This article therefore expands on why remote debugging is needed, the disadvantages that may arise, and how to go about live debugging Java applications.
When to Adopt Remote Debugging
Over the years, software architecture has been progressively decomposed into autonomous entities that enable better development practices in terms of team agility and autonomy. In infrastructure terms, this usually means different parts of the codebase run on separate servers, conventionally as container instances, FaaS functions, or pods.
However, this also means that, even though the entire system operates as a single entity in production, the part of the codebase being worked on during development may be disconnected from the other parts of the system and from crucial resources.
As a result, writing test cases that depend on these unavailable resources becomes difficult. This is especially true when considering the architectures that developers would gravitate towards when developing for the cloud. This can include a combination of concepts such as hybrid monoliths on the cloud, event-driven serverless configurations, active/active multi-region set-ups, and many more.
There are various techniques that mitigate this pain point, but these techniques are usually cumbersome and expensive in terms of operational cost. Some of these techniques are listed below:
- Leveraging local server plugins: Some libraries let you run embedded servers for external tools in your testing environment. For example, Maven boasts many such plugins, such as the embedded Redis plugin available under the Ozimov group.
- Relying on inbuilt libraries: Some tools in your cloud stack, such as Hadoop, offer development libraries of their own; the MRUnit library, for example, can be leveraged when testing Hadoop MapReduce jobs. This is an extension of the library-based approach above, but with the libraries provided by the tools themselves, which eases concerns about how trustworthy the test results are.
- Setting up local resources: A comparatively higher-maintenance and more cost-intensive approach than the others listed above. It involves maintaining an entire local environment with replicas of the production resources. This is effective but hard to maintain, especially given the potential drift between the local and production environments. There are ways to mitigate these pains, mainly IaC, but at the end of the day the approach fails to scale and makes the entire process susceptible to incidents, which is exactly what DevOps practices aim to avoid.
As a result of these pain points, remote debugging comes into play. Connecting to remote systems and leveraging Non-Breaking Breakpoints lets developers set non-intrusive breakpoints at any line of a codebase running in any environment. The remote debugger then captures crucial insights such as metrics and snapshots containing variable states at the point of the Non-Breaking Breakpoint. Better yet, developers obtain all of these insights without disrupting the flow of their systems.
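To make the idea concrete, here is a minimal, hypothetical sketch in plain Java of what a non-breaking breakpoint conceptually does: when execution reaches a point of interest, the state of the local variables is recorded as a snapshot and execution continues uninterrupted. Real tools achieve this with instrumentation rather than hand-written capture code; the `applyDiscount` method and its variables are invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SnapshotPoint {
    // Snapshots collected at the "breakpoint", available for later inspection.
    public static final ConcurrentLinkedQueue<Map<String, Object>> snapshots =
            new ConcurrentLinkedQueue<>();

    public static int applyDiscount(int price, int percent) {
        int discounted = price - (price * percent / 100);
        // "Non-breaking breakpoint": record variable state and keep executing,
        // instead of suspending the thread the way a classic breakpoint would.
        snapshots.add(Map.of("price", price, "percent", percent, "discounted", discounted));
        return discounted;
    }
}
```

The thread never pauses, so throughput is unaffected; the cost is only the capture itself, which is exactly the trade-off non-breaking breakpoints make.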
The Mechanics of Java Remote Debugging
As noted, the benefits of remote debugging suit this ever-changing world of software development. The main aim of the technique is to connect the debugging environment to the target system, which resides on a remote instance.
When diving deeper into how this connection is made for Java applications, we encounter the Java Platform Debugger Architecture (JPDA).
JPDA, developed by Sun Microsystems, is a multi-tiered architecture that lets users connect to remote Java applications through their local IDE and perform the necessary debugging on the remote system. JPDA consists of three main interfaces: the Java Virtual Machine Tool Interface (JVM TI), the Java Debug Wire Protocol (JDWP), and the Java Debug Interface (JDI).
The high-level roles these three components play are straightforward. To reiterate, the goal is to connect the local debugger's environment to the remote VM on which the target application is running. The connection itself is the responsibility of the JDWP, which defines the format of the messages exchanged between the debugger and the remote system. The JDWP deliberately does not define a transport mechanism, which keeps it flexible and allows the back-end VM and the front-end debugger interface to communicate over different transports.
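For example, the standard way to expose a JVM to remote debuggers over the socket transport is the JDWP agent option below (the jar name is a placeholder; `address=*:5005` is the JDK 9+ form, while older JDKs use `address=5005`):

```shell
# Start the target JVM with the JDWP agent listening on TCP port 5005.
# dt_socket selects the socket transport; suspend=n lets the app run
# normally instead of waiting for a debugger to attach.
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -jar app.jar
```

An IDE on the developer's machine can then attach to `host:5005` as its remote debug target.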
The JDI can be thought of as simply a Java API that captures requests and relays information to and from the debugger. The debugger front end does not have to use the JDI: it can be written in any language, as long as it speaks the JDWP correctly. On the opposite end is the JVM TI, a native programming interface that communicates with the services in the VM and can observe and control the execution of Java applications.
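Since the JDI ships with the JDK (in the `jdk.jdi` module), we can inspect the available connection mechanisms directly. The short program below, which should run on any modern JDK, lists the attaching connectors the local `VirtualMachineManager` knows about; on a typical JDK this includes a socket-based connector using the `dt_socket` transport:

```java
import com.sun.jdi.Bootstrap;
import com.sun.jdi.connect.AttachingConnector;

public class ListConnectors {
    public static void main(String[] args) {
        // The VirtualMachineManager is the JDI entry point; each
        // AttachingConnector represents one way to reach a remote JVM.
        for (AttachingConnector c : Bootstrap.virtualMachineManager().attachingConnectors()) {
            System.out.println(c.name() + " via transport " + c.transport().name());
        }
    }
}
```

An IDE's "attach to remote JVM" action is, under the hood, picking one of these connectors and supplying its host and port arguments.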
To understand how the three components operate in tandem to enable remote Java debugging, we can consider the functioning of JPDA from two perspectives: the debugger and the debuggee.
Regardless of whether an interface relates to the debugger or the debuggee, there are two forms of activity in each: events and requests. Requests are conventionally generated on the debugger side, whereas events are generated on the debuggee side. These events are the debuggee's responses to requests for information about the current state of the debuggee application sitting on the remote VM.
Non-Breaking Breakpoints Flow
As mentioned earlier, Non-Breaking Breakpoints are a valuable component of remote debugging, as they provide all the necessary insights without disrupting the running application. This proves especially useful when the target system is running in production yet is crucial for debugging another service being developed locally. That is just one use case among the several challenges the technique helps overcome. In this section, let's discuss how the JPDA performs in setting and responding to Non-Breaking Breakpoints.
The first step is setting the Non-Breaking Breakpoint in the local debugger UI. In doing so, the debugger calls the relevant methods of the JDI, which then generates a debugging state-change request. This request is converted into a byte stream as defined by the JDWP. As mentioned, the JDWP imposes no specific transport, so those setting up the JPDA can choose one; in this case, it could be a socket.
Over the JDWP, the JDI finally sends this request to the back end, where the stream of bytes is first deciphered. Once the request is received, the relevant JVM TI functions are triggered to set the breakpoint in the Java application.
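In JDI terms, the front-end half of this step looks roughly like the sketch below: look up the loaded class, resolve a code location, and register a `BreakpointRequest` with the event request manager. Setting the suspend policy to `SUSPEND_NONE` is one way to approximate a non-breaking breakpoint with the stock API. The method assumes an already-attached `VirtualMachine` and a class compiled with line-number information:

```java
import com.sun.jdi.AbsentInformationException;
import com.sun.jdi.Location;
import com.sun.jdi.ReferenceType;
import com.sun.jdi.VirtualMachine;
import com.sun.jdi.request.BreakpointRequest;
import com.sun.jdi.request.EventRequest;

public class BreakpointSketch {
    // Registers a breakpoint that does not suspend any thread when hit.
    public static BreakpointRequest setNonSuspendingBreakpoint(
            VirtualMachine vm, String className, int line)
            throws AbsentInformationException {
        // Resolve the class and the executable location for the given line.
        ReferenceType type = vm.classesByName(className).get(0);
        Location loc = type.locationsOfLine(line).get(0);
        // Ask the event request manager to install the breakpoint.
        BreakpointRequest req = vm.eventRequestManager().createBreakpointRequest(loc);
        req.setSuspendPolicy(EventRequest.SUSPEND_NONE); // do not pause the app
        req.enable();
        return req;
    }
}
```

Everything after `enable()` happens transparently: the JDI serializes the request over the JDWP, and the back end translates it into JVM TI calls.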
When the application executing in the remote VM finally hits a breakpoint, events containing system information are generated. The VM passes these events back to the front end by calling the event-handling functions of the JVM TI and passing along the breakpoint itself. This sets off a chain of operations for filtering and queuing the events, which are finally sent as a stream of bytes over the JDWP.
The front end then decodes the messages received over the JDWP and calls the event functions of the JDI, which kicks off JDI events. These events are then processed for their debugging information, which is displayed in the debugger console.
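On the debugger side, consuming those events typically looks like the loop below: take event sets off the JDI event queue, react to the ones of interest, and resume. This is a sketch against an already-attached `VirtualMachine`, not a complete debugger:

```java
import com.sun.jdi.VirtualMachine;
import com.sun.jdi.event.BreakpointEvent;
import com.sun.jdi.event.Event;
import com.sun.jdi.event.EventSet;
import com.sun.jdi.event.VMDisconnectEvent;

public class EventLoopSketch {
    public static void drainEvents(VirtualMachine vm) throws InterruptedException {
        while (true) {
            // Blocks until the back end ships an event set over the JDWP.
            EventSet events = vm.eventQueue().remove();
            for (Event e : events) {
                if (e instanceof BreakpointEvent) {
                    // A real debugger would inspect frames and variables here.
                    System.out.println("Hit " + ((BreakpointEvent) e).location());
                } else if (e instanceof VMDisconnectEvent) {
                    return; // target VM went away; stop the loop
                }
            }
            events.resume(); // resume any threads the event set suspended
        }
    }
}
```

With `SUSPEND_NONE` breakpoints, the `resume()` call is effectively a no-op, since no threads were paused in the first place.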
As can be seen, the JPDA is a sophisticated system that enables us to perform these debugging techniques. Owing to this complexity and its low-level nature, performing remote debugging directly through the JPDA can be difficult.
Additionally, beyond the operational challenges of using the JPDA directly, there are other concerns. One of the main ones is security, as the technique requires opening ports into your remote VMs. Logging concerns are also not effectively mitigated, since logs may be written by the application itself depending on the implementation; if the application experiences an incident, those logs may become unavailable or inaccessible, defeating their debugging purpose.
However, all is not lost. The software development industry is swift at meeting the needs of effective development practices. Hence we are now seeing a new era of debugging tools that are providing live debugging solutions. Sidekick is one such solution in the market.
Aptly named, the solution provides much-needed respite for developers debugging cloud and distributed systems by offering all the necessary live debugging operations and insights in an easy-to-use and effective manner, empowering developers to carry out their debugging strategies in a fast-paced environment that continuously witnesses strides in software technology.
And if you have yet to take your first step into Sidekick, you can begin your journey here.