It seems impossible to create a cached thread pool with a limit on the number of threads it can create.
Here is how the static Executors.newCachedThreadPool is implemented in the standard Java library:

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
So, using that template to create a fixed-size cached thread pool:
new ThreadPoolExecutor(0, 3, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
Now if you use this and submit 3 tasks, everything will be fine. Submitting any further tasks will result in RejectedExecutionExceptions.
Trying this:
new ThreadPoolExecutor(0, 3, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

will result in all tasks executing sequentially. That is, the thread pool will never create more than one thread to handle your tasks.
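Both observed behaviors can be reproduced with a short program; the class name and the latch used to keep the workers busy are my own, added for illustration:

```java
import java.util.concurrent.*;

public class CachedPoolDemo {
    public static void main(String[] args) throws Exception {
        // With a SynchronousQueue, the 4th task is rejected once 3 threads are busy.
        ExecutorService sync = new ThreadPoolExecutor(
                0, 3, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
        CountDownLatch block = new CountDownLatch(1);
        for (int i = 0; i < 3; i++) {
            sync.execute(() -> { try { block.await(); } catch (InterruptedException ignored) {} });
        }
        try {
            sync.execute(() -> {});
        } catch (RejectedExecutionException e) {
            System.out.println("4th task rejected");
        }
        block.countDown();
        sync.shutdown();

        // With an unbounded LinkedBlockingQueue, the pool never grows past
        // one thread (on JDK 6+; see the addendum in the answer below for core size 0).
        ThreadPoolExecutor linked = new ThreadPoolExecutor(
                0, 3, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        CountDownLatch block2 = new CountDownLatch(1);
        for (int i = 0; i < 5; i++) {
            linked.execute(() -> { try { block2.await(); } catch (InterruptedException ignored) {} });
        }
        System.out.println("pool size with unbounded queue: " + linked.getPoolSize());
        block2.countDown();
        linked.shutdown();
    }
}
```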
Is this a bug in the execute method of ThreadPoolExecutor? Or is it intentional? Or is there some other way?
Edit: I want something exactly like the cached thread pool (it creates threads on demand and then kills them after some timeout), but with a limit on the number of threads it can create and the ability to continue queuing additional tasks once it has hit its thread limit. According to sjlee's response this is impossible. Looking at the execute() method of ThreadPoolExecutor, it is indeed impossible. I would need to subclass ThreadPoolExecutor and override execute() somewhat like SwingWorker does, but what SwingWorker does in its execute() is a complete hack.
Answer
ThreadPoolExecutor has the following key behaviors, and your problems can be explained by them.
When tasks are submitted:
- If the thread pool has not reached the core size, it creates new threads.
- If the core size has been reached and there are no idle threads, it queues tasks.
- If the core size has been reached, there are no idle threads, and the queue is full, it creates new threads (until it reaches the max size).
- If the max size has been reached, there are no idle threads, and the queue is full, the rejection policy kicks in.
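All four rules can be observed step by step with a deliberately tiny pool (core 1, max 2, queue capacity 1). The numbers, class name, and latch here are my own, chosen purely for illustration:

```java
import java.util.concurrent.*;

public class PoolRulesDemo {
    public static void main(String[] args) throws Exception {
        CountDownLatch gate = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { gate.await(); } catch (InterruptedException ignored) {}
        };
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(1));

        pool.execute(blocker); // rule 1: below core size -> new thread
        System.out.println("after task 1: threads=" + pool.getPoolSize()
                + " queued=" + pool.getQueue().size());

        pool.execute(blocker); // rule 2: core size reached -> task is queued
        System.out.println("after task 2: threads=" + pool.getPoolSize()
                + " queued=" + pool.getQueue().size());

        pool.execute(blocker); // rule 3: queue full -> grow toward max size
        System.out.println("after task 3: threads=" + pool.getPoolSize()
                + " queued=" + pool.getQueue().size());

        try {
            pool.execute(blocker); // rule 4: max size reached, queue full -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("task 4 rejected");
        }

        gate.countDown();
        pool.shutdown();
    }
}
```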
In the first example, note that the SynchronousQueue essentially has a size of 0. Therefore, the moment you reach the max size (3), the rejection policy kicks in (#4).
In the second example, the queue of choice is a LinkedBlockingQueue, which has an unlimited size. Therefore, you get stuck with behavior #2.
You cannot really tinker much with the cached type or the fixed type, as their behavior is almost completely determined.
If you want to have a bounded and dynamic thread pool, you need to use a positive core size and max size combined with a queue of a finite size. For example,
new ThreadPoolExecutor(
    10,                // core size
    50,                // max size
    10 * 60,           // idle timeout
    TimeUnit.SECONDS,
    new ArrayBlockingQueue<Runnable>(20)); // queue with a fixed capacity
Addendum: this is a fairly old answer, and it appears that the JDK has changed its behavior when it comes to a core size of 0. Since JDK 1.6, if the core size is 0 and the pool has no threads, ThreadPoolExecutor will add a thread to execute the task. Therefore, a core size of 0 is an exception to the rules above. Thanks to Steve for bringing that to my attention.
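Relatedly, since Java 6 there is a way to get behavior close to what the question asks for without subclassing: set the core size equal to the max size and call allowCoreThreadTimeOut(true), so the pool grows on demand up to the limit, queues further tasks, and reclaims idle threads after the timeout. A sketch, with arbitrary numbers of my own choosing:

```java
import java.util.concurrent.*;

public class BoundedCachedPool {
    public static void main(String[] args) throws Exception {
        // Core == max, so the pool never exceeds 3 threads and extra tasks
        // wait in the unbounded queue; allowCoreThreadTimeOut lets even
        // those 3 threads die after 60 seconds of idleness.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                3, 3, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);

        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id
                    + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("max threads used: " + pool.getLargestPoolSize());
    }
}
```

Whether this is "exactly like the cached thread pool" depends on how much the unbounded queue matters to you; the thread-creation and idle-reclamation behavior, at least, matches what the question describes.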