I’ve seen articles saying that we should try to limit the scope of a transaction, e.g. instead of doing this:
@Transactional
public void save(User user) {
    queryData();
    addData();
    updateData();
}
We should exclude queryData from the transaction by using Spring’s TransactionTemplate (or just move it out of the transactional method):
@Autowired
private TransactionTemplate transactionTemplate;

public void save(final User user) {
    queryData();
    transactionTemplate.execute(status -> {
        addData();
        updateData();
        return Boolean.TRUE;
    });
}
But my understanding is that, since JDBC always needs a transaction for every operation, the second approach opens and closes 2 transactions: 1 for queryData (opened implicitly by JDBC) and another, opened by our class, for the code inside transactionTemplate.execute. If so, won’t this waste resources, now that 1 transaction has been split into 2?
Answer
Once a transaction starts, it occupies one DB connection. So we generally want the transaction to complete as fast as possible, and we want to delay starting it until we really need to access the DB, so that the connection pool has free connections available for other requests for as long as possible.
So if part of the workflow within your function takes some time to finish and that part does not need to access the DB, it is true that it is better to limit the scope of the transaction so that this part of the code is excluded, as sketched below.
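For example, a minimal sketch of that idea (the service, callRemoteService() and saveResult() are made up for illustration and are not from the question):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.support.TransactionTemplate;

@Service
public class ReportService {

    @Autowired
    private TransactionTemplate transactionTemplate;

    public void process(long id) {
        // Slow, non-DB work (e.g. a remote call) runs outside the
        // transaction, so no connection is held while we wait for it.
        String result = callRemoteService(id);

        // Only the DB work runs inside the transaction: the connection
        // is taken from the pool as late as possible and returned as
        // soon as the commit finishes.
        transactionTemplate.execute(status -> {
            saveResult(result);
            return Boolean.TRUE;
        });
    }

    // Hypothetical helpers, just for illustration.
    private String callRemoteService(long id) { return "result-" + id; }
    private void saveResult(String result) { /* DB insert/update here */ }
}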
But in your example, since both pieces of work run one after the other and both need to access the DB, I don’t see any point in separating them into two different transactions.
Also, in the case of Hibernate, it is very normal to load and update entities in the same transaction, so that you do not have to deal with detached entities, which is what happens when the entities you update were loaded in another, already-closed transaction. Dealing with detached entities is not easy if you are not familiar with Hibernate.
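For example, a minimal sketch of loading and updating in one transaction (User, UserRepository and the name field are assumed here, not taken from the question; UserRepository stands in for a Spring Data JPA repository):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    @Autowired
    private UserRepository userRepository; // hypothetical Spring Data repository

    // Loading and updating inside one transaction: the entity stays
    // managed for the whole method, so the change is flushed
    // automatically at commit and no detached-entity handling
    // (e.g. merge) is needed.
    @Transactional
    public void rename(long id, String newName) {
        User user = userRepository.findById(id).orElseThrow();
        user.setName(newName); // dirty checking persists this at commit
    }
}

If the entity had instead been loaded in an earlier, already-closed transaction, it would be detached here and you would have to reattach or merge it before the update could be persisted.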