I have defined a work manager with max threads = 16, and it is not Growable. However, in the logs I can see thread numbers as high as 180+.
[WorkManager.Transformer : 180] [WorkManager.Transformer : 181] [WorkManager.Transformer : 182] [WorkManager.Transformer : 183]
I suspect that, because of this, threads are being starved waiting for resources and throwing a JDBC ConnectionWaitTimeoutException after 180 seconds.
Why are so many threads getting spawned when the max limit is set to 16? What else can I check?
The thread index is the number of total threads that have been created for that thread pool over the lifetime of the server, and it does not necessarily indicate how many threads are currently active. If the pool’s minimum and maximum values are not the same, the pool will delete threads down to its configured minimum after a period of unuse, then create new threads if demand rises above that minimum level. The newly-created thread’s index is simply the next number that hasn’t been used yet.
For example, if you have a pool with minimum size 1 and maximum size 5, and you dispatch five work items to it simultaneously, it will create threads with the names “Pool : 0” through “Pool : 4”. When those work items finish, after some amount of time all but one of those threads will be deleted, as specified by the minimum pool size. If you again dispatch five work items to the pool, four new threads named “Pool : 5” through “Pool : 8” will be created, but you still have only five active threads even though the highest index number is 8.
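You can reproduce this naming behavior with a plain `java.util.concurrent.ThreadPoolExecutor` (this is just an illustration of the pattern, not the work manager's actual implementation; the class name `PoolNaming` and the timing constants are made up for the demo):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolNaming {
    // Runs the two-round scenario from the answer: min 1 / max 5 pool,
    // five work items, a pause long enough for idle threads to die, then
    // five more work items. Returns {threadsEverCreated, largestPoolSize}.
    static int[] run() throws InterruptedException {
        AtomicInteger index = new AtomicInteger(0);
        // Name each new thread with the next unused index, as the work manager does.
        ThreadFactory factory = r -> new Thread(r, "Pool : " + index.getAndIncrement());
        // Minimum 1, maximum 5; idle threads above the minimum die after 100 ms.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 5, 100, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>(), factory);

        Runnable work = () -> {
            System.out.println(Thread.currentThread().getName());
            try { Thread.sleep(200); } catch (InterruptedException e) { }
        };

        for (int i = 0; i < 5; i++) pool.execute(work); // "Pool : 0" .. "Pool : 4"
        Thread.sleep(1000);                             // idle threads above the minimum are reclaimed
        for (int i = 0; i < 5; i++) pool.execute(work); // one reused thread plus "Pool : 5" .. "Pool : 8"
        Thread.sleep(1000);

        int[] result = { index.get(), pool.getLargestPoolSize() };
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = run();
        System.out.println("threads created over the run: " + r[0]);  // 9
        System.out.println("most threads alive at once:   " + r[1]);  // 5
    }
}
```

Nine thread names are handed out over the run, but the pool never holds more than five live threads at once, which is exactly the gap between the index in the log and the configured maximum.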
If you’re concerned that this isn’t actually the case, you can collect a javacore from the process (`kill -3 <pid>`, or request it through the server’s administrative console) and simply count the number of threads with “WorkManager.Transformer” in their name – I’m guessing it’ll be 16 or fewer. The javacore will also be useful in determining what’s causing your resource issues, as you’ll be able to see each thread’s stack along with any locks or other resources it’s waiting on.
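The counting step can be a one-line `grep`. In IBM J9 javacores, live threads appear on `3XMTHREADINFO` lines with the thread name in quotes; the sample file below is a made-up excerpt (the exact layout varies by JVM version), used here only to show the pipeline:

```shell
# Hypothetical javacore excerpt: each live thread gets a 3XMTHREADINFO line.
cat > javacore.sample.txt <<'EOF'
3XMTHREADINFO      "WorkManager.Transformer : 180" J9VMThread:0x01, state:R
3XMTHREADINFO      "WorkManager.Transformer : 181" J9VMThread:0x02, state:B
3XMTHREADINFO      "WebContainer : 3" J9VMThread:0x03, state:R
EOF

# Count the Transformer threads alive at the moment of the dump.
grep -c 'WorkManager.Transformer' javacore.sample.txt
```

Against a real javacore you would point the `grep` at the actual `javacore.<date>.<pid>.txt` file the JVM wrote; the count it prints is live threads, regardless of how high their indices go.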