Jul 11th, 2011, 01:43 AM
Transaction scenario to understand
I am using DataSourceTransactionManager in my project. The project mostly uses JdbcTemplate (or a DataSource to get a connection directly for DB access in a few places). I am using annotation-driven transactions. All of this datasource-related work is read-only, i.e. no DB saves/updates are done. I am using DBCP pooling.
This project has one dependency which is responsible for storing/retrieving data from the database using Hibernate, i.e. it is essentially a DB write-operation API. This dependent project, however, uses HibernateTransactionManager, also annotation-driven.
Below are a few of my doubts (I am probably a bit lazy to dig into the Spring code):
1. How are transactions in my project and the dependent project related? If I start a transaction in my project and a call is made to the dependent project's method, which has transaction propagation REQUIRED, what will happen? Will the same transaction continue, or will a nested one start?
2. If in my project I choose to avoid transactions completely, since I am not really doing any save/update operations, would that be alright?
Please share your thoughts.
Jul 11th, 2011, 01:34 PM
Originally Posted by itsyuvaraj
Remember to use @Transactional(readOnly=true) around operations which never modify state in the DB.
JdbcTemplate and HibernateTemplate both use the same underlying JDBC connection (by default, in a minimal-configuration mode), which means they can also share the same transaction. If you define only one "transactionManager" based on JDBC, this is what you get.
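As a sketch, that minimal single-manager configuration might look like this in Spring Java-config style (the driver class and URL are hypothetical placeholders):

```java
import javax.sql.DataSource;

import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

// A minimal sketch: one JDBC-based "transactionManager" over one
// "dataSource" means JdbcTemplate and Hibernate work bound to the same
// DataSource can share a connection and therefore a transaction.
@Configuration
public class TxConfig {

    @Bean
    public DataSource dataSource() {
        BasicDataSource ds = new BasicDataSource(); // DBCP pool
        ds.setDriverClassName("com.example.Driver"); // hypothetical
        ds.setUrl("jdbc:example://localhost/db");    // hypothetical
        return ds;
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new DataSourceTransactionManager(dataSource());
    }
}
```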
1) REQUIRED makes execution create a new transaction if none currently exists, or join the existing transaction if one already exists.
By default (that is, if not otherwise configured) both JdbcTemplate and HibernateTemplate will seek out "transactionManager", and this in turn will seek out "dataSource", causing operations to share the same JDBC connection.
It all depends on how the dependent project is configured, for example whether it expects a parent Spring BeanFactory to provide "transactionManager".
It is not advisable to mess with the isolationLevel/autoCommit from under a dependent project; if it expects standard settings then you should provide standard settings.
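Point 1 can be sketched like this, assuming both projects resolve to the same transaction manager and DataSource (ReportService and WriterService are hypothetical names):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReportService {

    @Autowired
    private WriterService writerService; // the dependent project's API

    @Transactional // default propagation is REQUIRED
    public void run() {
        // A transaction starts here because none exists yet.
        // The call below JOINS that same transaction (it does not start
        // a new or nested one), because its propagation is REQUIRED and
        // a transaction is already active on this thread.
        writerService.save();
    }
}

@Service
class WriterService {

    @Transactional(propagation = Propagation.REQUIRED)
    public void save() {
        // Hibernate write work in the dependent project
    }
}
```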
2) There is no such thing as avoiding a transaction. All operations on a transaction-capable SQL server are done inside a transaction; however, there is an implicit-transaction mode that options such as autoCommit=true will force, where each SQL statement is exactly one transaction. There is also no notion of "I am _not really_ doing any save/update": either you are or you are not, so pick one of "sometimes" or "never". "Sometimes" and "often" have exactly the same implications to consider.
The only concern you have with your data is that a series of read-only operations against SQL might need to be consistent with each other. For example, you have foreign-key constraints, so you look up a row, see the FK exists, then use it to look up a related row (say, in another table). But if that row disappears, your code may still expect it to exist due to the FK constraint. This is where your transaction isolation level still has an effect for read-only JDBC transactions. With the usual isolation level, putting the two SQL SELECTs inside the same JDBC transaction means the FK constraint will ensure the second SQL query always returns the expected row. Using two transactions (each of which commits/completes) allows any kind of change to the SQL data in between, so you cannot expect the row to exist. This is OK if your program handles that situation in your raw JDBC access code.
But if you are using Hibernate this is probably not what you want, since Hibernate expects the SQL data model to honour the constraints and to be inside the transaction; this is the basis for features such as lazy loading.
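The two-SELECT example above might look like this as a sketch (table and column names are hypothetical; the guarantee assumes the two queries share one transaction at an isolation level of REPEATABLE_READ or stronger):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class OrderDao {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Transactional(readOnly = true)
    public String customerNameForOrder(long orderId) {
        // First SELECT: read the foreign-key value.
        Long customerId = jdbcTemplate.queryForObject(
                "SELECT customer_id FROM orders WHERE id = ?",
                Long.class, orderId);
        // Second SELECT: inside the same transaction (at a suitable
        // isolation level) the FK-referenced row cannot have vanished
        // between the two queries.
        return jdbcTemplate.queryForObject(
                "SELECT name FROM customers WHERE id = ?",
                String.class, customerId);
    }
}
```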
One point you should consider: a side-effect of using @Transactional and having a transactional context is that the underlying JDBC connection is retained by the application thread of execution while the JDBC transaction is still open. If you truly want one transaction per JDBC statement then you don't need Spring to provide any transaction support at all; the application code will then rapidly obtain and release JDBC connections across a series of JdbcTemplate invocations by a single application thread.
But I would think that lowering the isolationLevel is really what you want in order to achieve the performance gain, while keeping the notional transaction boundaries inside your project, using the default REQUIRED mode to handle nested application code behaviour.
So if you still want to keep transaction-less JDBC but transaction-aware Hibernate usage (the only sane combination, IMHO), then you can set up 2 dataSources and 2 transactionManagers:
* in the JDBC dataSource: set autoCommit=true, set isolationLevel=READ_COMMITTED, and rename it "jdbcDataSource"
* in the JDBC transactionManager: inject "jdbcDataSource" into it, rename it "jdbcTransactionManager", and create a "no operation" implementation that supports the API but does nothing
* ensure your jdbcTemplate is looking up the renamed jdbcDataSource/jdbcTransactionManager (you need to override the defaults)
* in the Hibernate dataSource: keep the defaults (autoCommit=false, isolationLevel=REPEATABLE_READ)
* in the Hibernate transactionManager: keep the defaults
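As a sketch, the two-dataSource / two-transactionManager arrangement above could be wired like this (DBCP shown; the Hibernate SessionFactory wiring is elided, and the "no operation" manager is replaced by a real one as a stand-in):

```java
import java.sql.Connection;

import javax.sql.DataSource;

import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class DualTxConfig {

    @Bean
    public DataSource jdbcDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDefaultAutoCommit(true); // one implicit transaction per statement
        ds.setDefaultTransactionIsolation(
                Connection.TRANSACTION_READ_COMMITTED);
        return ds;
    }

    @Bean
    public PlatformTransactionManager jdbcTransactionManager() {
        // The post suggests a "no operation" implementation here; a real
        // DataSourceTransactionManager is shown purely as a placeholder.
        return new DataSourceTransactionManager(jdbcDataSource());
    }

    @Bean
    public DataSource dataSource() { // the Hibernate side keeps defaults
        BasicDataSource ds = new BasicDataSource();
        ds.setDefaultAutoCommit(false);
        ds.setDefaultTransactionIsolation(
                Connection.TRANSACTION_REPEATABLE_READ);
        return ds;
    }

    // "transactionManager" would be a HibernateTransactionManager built
    // on a SessionFactory that uses dataSource() (not shown here).
}
```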
I would not recommend mixing application program calls unless you fully understand the caveats. You are better off writing a Hibernate-compatible read-only query for use in the Hibernate use case. This keeps the unexpected surprises down.
If you do decide you must mix calls to SQL data access:
* remember transactions are not in effect for your JDBC calls (autoCommit=true), so all side effects take place immediately
* 2 JDBC connections can be in use when you mix calls (for the same thread), and
* each connection may not be able to see the other's data (until Hibernate has committed)
* it is possible to dead-lock a single thread in your application (this could be solved by changing the JDBC isolationLevel to READ_UNCOMMITTED): Hibernate performs an optimistic read of (or any write to) some data in SQL, then JDBC comes along and reads the same data
But the largest advice I can give is: don't do it, unless you have clear performance numbers illustrating that there is a major performance win for you (you particularly want to profile your application using JDBC in lower isolationLevel modes; if REPEATABLE_READ is the default then go down to READ_COMMITTED and then down to READ_UNCOMMITTED). I would guess that READ_COMMITTED inside normal JDBC transactions will dissolve any performance problem you have in a heavily read-only application, since two (or more) read-only SQL transactions will never block each other (even if they are doing transactions).
What you probably want is both JDBC and Hibernate sharing the same transaction; as the read-write-via-Hibernate use case is rare, the performance numbers may not be critical.
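In plain Spring, sharing one transaction between JdbcTemplate and Hibernate is mostly a matter of defining a single HibernateTransactionManager and pointing both at the same DataSource; a sketch (the SessionFactory wiring is assumed to exist elsewhere):

```java
import org.hibernate.SessionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.hibernate3.HibernateTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class SharedTxConfig {

    @Bean
    public PlatformTransactionManager transactionManager(SessionFactory sf) {
        // HibernateTransactionManager also exposes the Hibernate Session's
        // JDBC connection to plain JDBC code (e.g. JdbcTemplate) that uses
        // the same DataSource, so both run inside one transaction.
        return new HibernateTransactionManager(sf);
    }
}
```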
[ sorry if I have not been clear enough; there are probably a number of errors/problems in my explanations, but there is only so much time to contribute to the forum and this turned into a particularly long response. Please feel free to ask new questions to clarify; it is hoped this simply makes you think about transactions differently ]
Jul 12th, 2011, 03:24 AM
Thanks dlmiles. Incredible to see such a lengthy reply; thanks for your time.
Well, I agree with your points; in fact I did think of them. The one I should mind now is the read-only transaction and isolation level. I wasn't much concerned with isolation level till now, but I will have to be. Thanks.
BTW, in my case the instances of BasicDataSource (i.e. the datasources) are different. In fact there are 2 databases in use, i.e. 2 JDBC data sources. Project 1 has 2 datasources, and the same goes for project 2 (which is the Hibernate data layer). As explained, project 1 performs only read operations using its own datasources, and project 2, the dependency project, performs the write/update operations. Both have separate connection pools as well. I agree that sharing these resources and using, say, only a Hibernate-driven approach could diminish the complexity and the issues that have appeared; however, the fact is that it is not possible to do right away.
On top of all this, the read/write operations on the 2 databases (by JdbcTemplate or HibernateTemplate) are called by different threads, in any order.
Jul 14th, 2011, 08:31 AM
Well, I concluded that a JTA transaction manager has to be used, and I applied the same.
I have another question:
If there are two transaction managers, one local and the other XA, synchronizing over the same datasource, what would the expected behavior be? Will they each do their job, or will JTA get preference? Or can the behavior be modelled as two threads synchronizing on a resource?
Does anybody have some input?
Jul 15th, 2011, 05:51 PM
I'm sorry, I don't know enough about XA to be able to say anything useful.
1) If you have overlapping transactions from the same origin (same context, or same thread, or whatever) then you really need to understand all the implications better.
2) I don't think mixing XA with non-XA is a good idea, especially if they might end up crossing the streams (to cite the 1980s Ghostbusters movie line: never cross the streams). I would guess you need to use an XA-aware transaction coordinator ("transactionManager"), and that it needs to use XA-aware transaction sources (XA-aware JDBC). I would also guess that by going XA you get some dead-lock detection, and therefore avoidance, facility.
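For what it is worth, the usual Spring arrangement when going XA is a single JtaTransactionManager in front of XA-capable resources, rather than mixing it with local managers over the same datasource; a sketch (container/JNDI details vary by environment):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.jta.JtaTransactionManager;

@Configuration
public class JtaConfig {

    @Bean
    public PlatformTransactionManager transactionManager() {
        // Looks up the container's JTA UserTransaction (typically via
        // JNDI) and coordinates all enlisted XA resources. Avoid also
        // defining DataSourceTransactionManager / HibernateTransactionManager
        // beans over the same datasources.
        return new JtaTransactionManager();
    }
}
```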
Hopefully someone here can correct my thinking where it may be wrong.