I am evaluating data from a text file in a rather large algorithm.
If the text file contains more than a certain number of datapoints (the minimum I need is roughly 1.3 million), it gives the following error:
    Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.regex.Matcher.<init>(Unknown Source)
        at java.util.regex.Pattern.matcher(Unknown Source)
        at java.lang.String.replaceAll(Unknown Source)
        at java.util.Scanner.processFloatToken(Unknown Source)
        at java.util.Scanner.nextDouble(Unknown Source)
I’m running it in Eclipse with the following settings for the installed JRE 6 (standard VM):
-Xms20m -Xmx1024m -XX:MinHeapFreeRatio=20 -XX:MaxHeapFreeRatio=40 -XX:NewSize=10m -XX:MaxNewSize=10m -XX:SurvivorRatio=6 -XX:TargetSurvivorRatio=80 -XX:+CMSClassUnloadingEnabled
Note that it works fine if I only run through part of the textfile.
I’ve read a lot about this subject, and it seems that I must have either a memory leak somewhere or I’m storing too much data in arrays (which I think I am).
My problem is: how can I work around this? Can I change my settings so that the computation still completes, or do I really need more computational power?
The really critical VM argument is -Xmx1024m, which tells the VM to use at most 1024 megabytes of heap. The simplest solution is to use a bigger number there: try -Xmx4096m, or any value your machine’s RAM can accommodate.
I’m not sure you’re getting much benefit from any of the other VM arguments. For the most part, if you tell Java how much heap to use, it will choose sensible values for the remaining parameters itself. I’d suggest removing everything except the -Xmx argument and seeing how that performs.
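In Eclipse that means trimming the VM arguments box in the run configuration down to the one flag; the equivalent command-line launch would look like this (Main and data.txt are placeholder names for your own class and input file):

```shell
# Keep only the heap-size flag; the JVM picks sensible defaults for the rest.
java -Xmx4096m Main data.txt
```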
A better solution is to try to improve your algorithm, but I haven’t yet read through it in enough detail to offer any suggestions.
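One general observation from the stack trace: the error arises inside Scanner.nextDouble, which runs a regex over every token, and the usual aggravating factor is keeping all parsed values in memory at once. If your values are whitespace-separated and you can process each one as it is read instead of collecting them into arrays, memory stays flat regardless of file size. A minimal sketch of that streaming style (StreamingStats and sumFile are made-up names, and the running sum stands in for whatever per-value work your algorithm does):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class StreamingStats {
    // Reads whitespace-separated doubles from a file and accumulates a
    // running sum and count, holding at most one line in memory at a time.
    static double[] sumFile(String path) throws IOException {
        double sum = 0;
        long count = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                for (String token : line.trim().split("\\s+")) {
                    if (token.isEmpty()) {
                        continue; // a blank line splits into one empty token
                    }
                    sum += Double.parseDouble(token); // process immediately, store nothing
                    count++;
                }
            }
        }
        return new double[] { sum, count };
    }
}
```

If the algorithm genuinely needs random access to all 1.3 million values, a primitive double[] sized up front is still far cheaper than boxed Doubles in a collection.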