I’ve set up a GlassFish cluster with 1 DAS and 2 Node Agents.

The system has TimedObjects whose batch jobs run once a day. Per the GlassFish architecture, only one cluster instance is allowed to trigger the timeout event of each Timer created by the TimerService.
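For context, here is a minimal sketch of the kind of timer setup I mean (the bean name and schedule are placeholders, not my real code):

```java
import javax.annotation.Resource;
import javax.ejb.ScheduleExpression;
import javax.ejb.Stateless;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerService;

@Stateless
public class DailyBatchBean {

    @Resource
    private TimerService timerService;

    // Creates a calendar timer that fires once a day; in a cluster,
    // only one instance receives the @Timeout callback for this timer.
    public void scheduleDailyBatch() {
        ScheduleExpression everyDayAt2am = new ScheduleExpression().hour("2").minute("0");
        timerService.createCalendarTimer(everyDayAt2am);
    }

    // The batch job runs here, on whichever instance owns the timer.
    @Timeout
    public void runBatch(Timer timer) {
        // ... process the day's batch ...
    }
}
```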
My problem is the heap size of the cluster instance that triggers the batch job. VisualVM shows that one instance always has an elastic heap size (it grows when the server is under load and shrinks afterwards), but the other instance always stays at the maximum heap size and never shrinks.
I can accept that the heap grows to the maximum because the batch job is huge. The only question I have is: why does it not decrease after the job is done?
VisualVM shows that the “Used Heap Memory” of the instance that triggers the timeout event does decrease after the batch job, so why is its “Heap Size” not scaled down accordingly?
Thank you for your advice!!! ^^
Answer
Presumably something is still referencing the memory. I suggest getting a copy of MAT (the Eclipse Memory Analyzer) and taking a heap dump. From there you can see what has been allocated and what is referencing it.
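If it helps, here is a minimal sketch of one way to capture an HPROF dump for MAT from inside the instance’s own JVM (the class and file names are just illustrative; running `jmap -dump:live,format=b,file=heap.hprof <pid>` against the instance, or using VisualVM’s heap-dump button, works just as well):

```java
import java.io.IOException;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public final class HeapDumper {

    // Must run inside the GlassFish instance's own JVM (e.g. from a servlet
    // or an MBean), since it dumps the current process's heap.
    public static void dumpHeap(String outputFile) throws IOException {
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // live = true dumps only reachable objects, which is what you want
        // when looking for lingering references after the batch job finishes.
        diagnostic.dumpHeap(outputFile, true);
    }
}
```

Open the resulting .hprof file in MAT and look at the dominator tree to see which objects retain the most heap and what is still holding on to them.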