I implemented a program that calculates stock values and compares them against the values on given dates. I have a CSV file containing dates and corresponding double values. Currently, for every single date I open the file, parse it, and search for that date's value, and I may look up the same date multiple times. That's why I want to cache these values in a HashMap: if the value for a given date is not present as a key, I would open the CSV file and search for it. Before implementing this, I want to check whether storing all the values in the HashMap will fit into Java's heap, which is 8 MB in my case. My CSV file has 1200 lines of dates and corresponding double values. Is there any way to estimate this before writing the code? Or do you have any better ideas for making the program more efficient?
Thanks.
Answer
Loading this data into a HashMap is orders of magnitude more efficient than reading the file on every lookup. There is no hard and fast rule for determining exactly how much heap space a parsed CSV file will occupy; it depends on the system architecture, the string encodings used, the data types, and so on. That said, a rough back-of-envelope estimate: 1200 entries, each holding a date key and a boxed Double value, come to something on the order of a hundred bytes per entry on a typical 64-bit JVM, so roughly 120 KB in total, which fits comfortably within an 8 MB heap.
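For reference, here is a minimal sketch of the caching approach you describe. It assumes the CSV has one record per line in the form `2021-03-15,142.57` (ISO dates, no header row); the class and file names are just placeholders:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

public class PriceCache {
    private final Map<LocalDate, Double> prices = new HashMap<>();

    // Load the entire CSV once up front; assumes lines like "2021-03-15,142.57"
    public PriceCache(String csvPath) throws IOException {
        for (String line : Files.readAllLines(Paths.get(csvPath))) {
            if (line.trim().isEmpty()) continue;   // skip blank lines
            String[] parts = line.split(",");
            prices.put(LocalDate.parse(parts[0]), Double.parseDouble(parts[1]));
        }
    }

    // O(1) lookup instead of re-opening and scanning the file for every query
    public Double getPrice(LocalDate date) {
        return prices.get(date);   // null if the date is not in the file
    }
}
```

With only 1200 rows, loading everything eagerly at startup is simpler than the lazy fill-on-miss scheme you sketched, and it means each lookup never touches the disk.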
The simplest and most reliable approach is to run a JVM profiler (VisualVM or the jconsole tool that ships with the JDK both work) and compare heap usage just before and just after loading a file of the relevant size into the HashMap.
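If you want a quick approximation without attaching a profiler, you can also diff the heap usage reported by `Runtime`. This is only a rough sketch: `System.gc()` is merely a hint, so treat the number as an estimate, and the `prices.csv` path refers to the hypothetical `PriceCache` class above:

```java
import java.time.LocalDate;

public class HeapCheck {
    public static void main(String[] args) throws Exception {
        Runtime rt = Runtime.getRuntime();
        System.gc();   // encourage a collection for a cleaner baseline (not guaranteed)
        long before = rt.totalMemory() - rt.freeMemory();

        PriceCache cache = new PriceCache("prices.csv");   // hypothetical file name

        long after = rt.totalMemory() - rt.freeMemory();
        System.out.printf("Approx. heap used by cache: %d KB%n", (after - before) / 1024);

        // Use the cache afterwards so it stays live and is not collected early
        System.out.println(cache.getPrice(LocalDate.parse("2021-03-15")));
    }
}
```

Run it with a fixed heap (e.g. `-Xmx8m`) to confirm directly that loading the file succeeds under your constraint.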