Profiling Java code changes execution times

I’m trying to optimize my code, but it’s giving me problems. I’ve got this list of objects:

List<DataDescriptor> descriptors;

public class DataDescriptor {
    public int id;
    public String name;
}

There are 1700 objects with unique ids (0-1699) and a name; the name is used to decode what type of data I get later on.

The method that I try to optimize works like that:

    public void processData(ArrayList<DataDescriptor> descriptors, ArrayList<IncomingData> incomingDataList) {
        for (IncomingData data : incomingDataList) {
            DataDescriptor desc = descriptors.get(data.getDataDescriptorId());

            if (desc.getName().equals("datatype_1")) {
                // ...
            } else if (desc.getName().equals("datatype_2")) {
                // ...
            // ... datatype_3 through datatype_15 ...
            } else if (desc.getName().equals("datatype_16")) {
                // ...
            }
        }
    }

This method is called about a million times when processing a data file, and every time incomingDataList contains about 60 elements, so this set of if/elses is executed about 60 million times.

This takes about 15 seconds on my desktop (i7-8700).

Changing the code to test integer ids instead of strings obviously shaves off a few seconds, which is nice, but I hoped for more 🙂 I tried profiling with VisualVM, but for this method (with string testing) it says that 66% of the time is spent in “Self time” (which I believe would be all this string testing? and why doesn’t it say it’s in the String.equals method?) and 33% is spent on descriptors.get – which is a simple get from an ArrayList, and I don’t think I can optimize it any further, other than trying to change how the data is structured in memory (still, this is Java, so I don’t know how much that would help).
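For reference, the int-id variant mentioned above can be sketched like this (class and method names are illustrative, not from the original code): precompute an int type code per descriptor id once, then switch on that int in the hot loop instead of calling String.equals.

```java
import java.util.List;

public class IntDispatch {
    // Hypothetical helper: build a lookup table of int type codes, one per
    // descriptor id, so the hot path never touches the name strings.
    static int[] buildTypeCodes(List<String> namesById) {
        int[] codes = new int[namesById.size()];
        for (int id = 0; id < codes.length; id++) {
            String name = namesById.get(id);
            // map "datatype_1".."datatype_16" to 1..16; anything else to 0
            codes[id] = name.startsWith("datatype_")
                    ? Integer.parseInt(name.substring("datatype_".length()))
                    : 0;
        }
        return codes;
    }

    // Hot path: one array load plus an int switch per incoming element.
    static int handle(int[] typeCodes, int descriptorId) {
        switch (typeCodes[descriptorId]) {
            case 1:  return 1;   // handle datatype_1
            case 2:  return 2;   // handle datatype_2
            // ... cases 3 through 15 elided, as in the question ...
            case 16: return 16;  // handle datatype_16
            default: return -1;  // unknown type
        }
    }
}
```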

I wrote a “simple benchmark” app to isolate the String vs int comparisons. As expected, comparing integers was about 10x faster than String.equals when I simply ran the application, but when I profiled it in VisualVM (I wanted to check whether ArrayList.get would also be so slow in the benchmark), strangely both methods took exactly the same amount of time. When using VisualVM’s Sample instead of Profile, the application finished with the expected results (ints being 10x faster), but VisualVM showed that in its sample both types of comparison took the same amount of time.
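A minimal version of such a String-vs-int benchmark might look like the sketch below (plain timing, not JMH, so results are very sensitive to JIT warmup and dead-code elimination; all names and sizes here are made up to mirror the question):

```java
public class CompareBench {
    // Count matches via String.equals.
    static long countStrings(String[] names, String target, int reps) {
        long hits = 0;
        for (int r = 0; r < reps; r++)
            for (String n : names)
                if (n.equals(target)) hits++;
        return hits;
    }

    // Count matches via plain int comparison.
    static long countInts(int[] ids, int target, int reps) {
        long hits = 0;
        for (int r = 0; r < reps; r++)
            for (int id : ids)
                if (id == target) hits++;
        return hits;
    }

    public static void main(String[] args) {
        String[] names = new String[1700];
        int[] ids = new int[1700];
        for (int i = 0; i < 1700; i++) {
            ids[i] = i % 16 + 1;
            names[i] = "datatype_" + ids[i];
        }

        long t0 = System.nanoTime();
        long h1 = countStrings(names, "datatype_7", 10_000);
        long strMs = (System.nanoTime() - t0) / 1_000_000;

        t0 = System.nanoTime();
        long h2 = countInts(ids, 7, 10_000);
        long intMs = (System.nanoTime() - t0) / 1_000_000;

        // Print the hit counts so the JIT cannot eliminate the loops as dead code.
        System.out.println("hits=" + h1 + "/" + h2 + " string=" + strMs + "ms int=" + intMs + "ms");
    }
}
```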

What is the reason for getting such totally different results when profiling and when not? I know there are a lot of factors – there is the JIT, and profiling may interfere with it, etc. – but in the end, how do you profile and optimize Java code when the profiling tools change how the code runs (if that’s the case)?



Profilers can be divided into two categories: instrumenting and sampling. VisualVM includes both, and each has its disadvantages.

Instrumenting profilers use bytecode instrumentation to modify classes. They insert special tracing code into every method entry and exit, which allows them to record all executed methods and their running times. However, this approach carries a big overhead: first, because the tracing code itself can take a lot of time (sometimes even more than the original code); second, because the instrumented code becomes more complicated and prevents certain JIT optimizations that could be applied to the original code.
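To make that concrete, here is roughly what the injected tracing amounts to, written by hand in plain Java for illustration (real instrumenting profilers rewrite the bytecode instead, e.g. with a library like ASM, and record far more than this):

```java
public class TracingExample {
    // What an injected tracing hook might record per method:
    // call count and total time spent.
    static long calls = 0;
    static long totalNanos = 0;

    // The original method body.
    static int square(int x) {
        return x * x;
    }

    // The same method as an instrumenting profiler might rewrite it:
    // tracing code runs at entry and at every exit path.
    static int squareInstrumented(int x) {
        long start = System.nanoTime();   // injected at method entry
        calls++;
        try {
            return square(x);
        } finally {
            totalNanos += System.nanoTime() - start;   // injected at method exit
        }
    }
}
```

Note that the tracing bookkeeping here can easily cost more than the one multiplication it wraps, which is exactly the overhead problem described above.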

Sampling profilers are different. They do not modify your application; instead they periodically take a snapshot of what the application is doing, i.e. the stack traces of the currently running threads. The more often a method appears in these stack traces, the longer (statistically) its total execution time.
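A toy sampler built on the standard API illustrates the idea (this sketch uses Thread.getAllStackTraces() and counts only the topmost frame of each stack; the 10 ms interval and 100 samples are arbitrary choices):

```java
import java.util.HashMap;
import java.util.Map;

public class ToySampler {
    // Take one snapshot of all threads and count which method is on
    // top of each stack at this moment.
    static void sampleOnce(Map<String, Integer> histogram) {
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            StackTraceElement[] stack = e.getValue();
            if (stack.length == 0) continue;
            String top = stack[0].getClassName() + "." + stack[0].getMethodName();
            histogram.merge(top, 1, Integer::sum);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> histogram = new HashMap<>();
        for (int i = 0; i < 100; i++) {   // 100 samples, 10 ms apart
            sampleOnce(histogram);
            Thread.sleep(10);
        }
        // Methods with high counts are (statistically) where time is spent.
        histogram.forEach((method, count) -> System.out.println(count + "\t" + method));
    }
}
```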

Sampling profilers typically have much smaller overhead; furthermore, this overhead is manageable, since it directly depends on the profiling interval, i.e. how often the profiler takes thread snapshots.

The problem with sampling profilers is that the JDK’s public API for getting stack traces is flawed. The JVM does not take a stack trace at an arbitrary moment in time; rather, it stops a thread at one of the predefined places where it knows how to reliably walk the stack. These places are called safepoints. Safepoints are located at method exits (excluding inlined methods) and inside loops (excluding short counted loops). That’s why, if you have a long linear piece of code or a short counted loop, you’ll never see it in a sampling profiler that relies on the JVM’s standard getStackTrace API.
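For illustration, HotSpot typically compiles an int-counted loop like the first method below with the safepoint poll removed from the loop body, while the long-indexed version is not a counted loop and keeps a poll inside it. A safepoint-based sampler can therefore attribute all of the first loop’s time to its caller and never show the loop itself. (This is a sketch of the shape of such code; the safepoint placement is the JIT’s behavior, not something the code asserts.)

```java
public class SafepointDemo {
    // Int-counted loop: HotSpot can eliminate the safepoint poll inside it,
    // so a safepoint-based sampler may never catch a thread "in" this loop.
    static long sumInt(int[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i++) {
            sum += data[i];
        }
        return sum;
    }

    // The same loop with a long induction variable is not a counted loop,
    // so the JIT keeps a safepoint poll in it and samplers can observe it.
    static long sumLong(int[] data) {
        long sum = 0;
        for (long i = 0; i < data.length; i++) {
            sum += data[(int) i];
        }
        return sum;
    }
}
```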

This problem is known as Safepoint Bias. It is described well in a great post by Nitsan Wakart. VisualVM is not the only victim. Many other profilers, including commercial tools, also suffer from the same issue, because the original problem is in the JVM rather than in a particular profiling tool.

Java Flight Recorder is much better, since it does not rely on safepoints. However, it has its own flaws: for example, it cannot get a stack trace while a thread is executing certain JVM intrinsic methods, like System.arraycopy. This is especially disappointing, since arraycopy is a frequent bottleneck in Java applications.

Try async-profiler. The goal of the project is exactly to solve the above issues. It should provide a fair view of the application performance, while having a very small overhead. async-profiler works on Linux and macOS. If you are on Windows, JFR is still your best bet.
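Typical async-profiler usage, per its README, looks roughly like this (the pid and output file name are placeholders; check the README of the version you download, as newer releases ship an `asprof` binary instead of `profiler.sh`):

```shell
# Find the target JVM's pid
jps

# Profile CPU for 30 seconds and write a flame graph
./profiler.sh -d 30 -e cpu -f flamegraph.html <pid>
```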

User contributions licensed under: CC BY-SA