In my program, I am reading a series of text files from disk. From each text file I extract some data and store the results as JSON on disk, so each input file has its own JSON file. In addition, I store some of the data in a separate, shared JSON file that aggregates relevant data from multiple files. My problem is that this shared JSON grows larger with every file parsed and eventually uses too much memory. I am on a 32-bit machine with 4 GB of RAM and cannot increase the Java VM's heap size any further.
Another constraint to consider is that I often refer back to the old JSON. For instance, say I pull out ObjX from FileY. In pseudo code, the following happens (using Jackson for JSON serialization/deserialization):
// In the main method (mapper is a Jackson ObjectMapper)
JsonNode fileYJson = mapper.readTree(fileY);
JsonNode objX = fileYJson.get(someKey);
sharedJson.add(objX);

// In the sharedJson object
List<JsonNode> objList;

void add(JsonNode obj) {
    if (!objList.contains(obj)) {
        objList.add(obj);
    }
}
The only thing I can think to do is use streaming JSON, but the problem is that I frequently need to access JSON that came before, so I don't know that streaming will work. Also, my data types are not only strings, which I believe prevents me from using Jackson's streaming capabilities. Does anyone know of a good solution?
Answer
If you're getting to the point where your data structures are so large that you're running out of memory, you'll have to start using something else. I would recommend a database, which can significantly speed up data retrieval and storage. More importantly, it makes the limit on your data the size of your hard drive instead of the size of your RAM.
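The principle behind that recommendation, keeping only a small key index in memory while the full records live on disk, can be sketched with plain JDK classes. This is an illustrative stand-in (the `DiskStore` class and its file layout are hypothetical, not part of any library); a real database adds indexing, transactions, and queries on top of the same idea:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

// Sketch: only the keys stay in RAM; each (potentially large) JSON value
// is spilled to its own file under a storage directory.
class DiskStore {
    private final Path dir;
    private final Set<String> keys = new HashSet<>(); // small in-memory index

    DiskStore(Path dir) throws IOException {
        this.dir = Files.createDirectories(dir);
    }

    // Mirrors the question's add(): skip duplicates, otherwise persist to disk.
    boolean add(String key, String json) throws IOException {
        if (!keys.add(key)) {
            return false; // already stored
        }
        Files.write(dir.resolve(key + ".json"), json.getBytes("UTF-8"));
        return true;
    }

    // "Refer back to the old JSON" by re-reading it from disk on demand.
    String get(String key) throws IOException {
        if (!keys.contains(key)) {
            return null;
        }
        return new String(Files.readAllBytes(dir.resolve(key + ".json")), "UTF-8");
    }
}

public class Demo {
    public static void main(String[] args) throws IOException {
        DiskStore store = new DiskStore(Files.createTempDirectory("shared-json"));
        System.out.println(store.add("ObjX", "{\"value\": 1}")); // true
        System.out.println(store.add("ObjX", "{\"value\": 1}")); // false (duplicate)
        System.out.println(store.get("ObjX"));
    }
}
```

With a database the `keys` set disappears too (the database maintains the index itself), which is why it scales to data far larger than RAM.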
Try this page for an introduction to Java and Databases.