Not exactly - such an error occurs after the service API returns, during the commit (i.e. the transaction would not be successfully committed).
Originally Posted by constv
More or less - he has not explicitly mentioned Hibernate. And Hibernate (and ORM in general) is not required to get this kind of behavior. You may encounter it even with plain old JDBC, for example, if you have something like the following in the database (Oracle):
Like during Hibernate's flushing, which normally happens behind the scenes and is performed by Hibernate at its own will unless we force it? Is this what ramoq was asking?
Such a constraint is checked not on insert/update but only on commit.
CREATE TABLE games
  (scores NUMBER,
   CONSTRAINT unq_num UNIQUE (scores) DEFERRABLE INITIALLY DEFERRED);
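With such a deferred constraint, the point above can be sketched in plain JDBC: the duplicate-key violation is raised by commit(), not by the inserts, so the catch block must wrap the commit as well. This is a minimal sketch, assuming an Oracle-style deferrable unique constraint; the class and method names are mine, not from any framework.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class CommitBoundary {

    // Both inserts succeed; the ORA-00001/ORA-02091 error only surfaces
    // when the deferred constraint is finally checked at commit time.
    public static void insertDuplicates(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("INSERT INTO games (scores) VALUES (1)"); // ok
            stmt.executeUpdate("INSERT INTO games (scores) VALUES (1)"); // still ok!
        }
    }

    /** Commits, translating a commit-time SQLException into a rollback.
     *  Returns true on success, false if the commit itself failed. */
    public static boolean commitOrRollback(Connection conn) {
        try {
            conn.commit();              // the constraint violation lands here
            return true;
        } catch (SQLException e) {
            try {
                conn.rollback();        // transaction is dead; clean up
            } catch (SQLException ignored) {
                // nothing more we can do at this level
            }
            return false;
        }
    }
}
```

The practical consequence is exactly the one discussed above: code that only guards the insert/update calls will miss this failure entirely.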
I have witnessed first-hand a 100-fold performance degradation due to excessive flushes.
Hmm... I agree with you that forced flushing every time you execute a query is expensive and most likely would be huge overkill.
First of all, a framework may not be involved at all; secondly, while I mostly cannot recover from this condition (though it depends on business requirements), I can - and rather should - provide a meaningful message to the client. DUPLICATED_KEY does not suffice: it may be a duplicated user id, a duplicated SSN, and so on. There should be some piece of code that is aware of the DAO implementation details, and it is better if that code is independent of how the service API was called - e.g. a handler exception resolver would not always do.
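To illustrate the point about DUPLICATED_KEY not being enough: a small DAO-layer resolver can map the name of the violated constraint, extracted from the driver message, to a client-facing message. This is a hedged sketch - the constraint names and messages below are hypothetical examples, not from the discussion.

```java
import java.util.Map;

public class ConstraintMessageResolver {

    // Hypothetical mapping from constraint names (a DAO implementation
    // detail) to messages that actually mean something to the client.
    private static final Map<String, String> MESSAGES = Map.of(
        "UNQ_USER_ID",  "A user with this id already exists.",
        "UNQ_USER_SSN", "A user with this SSN already exists.");

    /** Returns a meaningful message for a known constraint,
     *  or a generic fallback for anything unrecognized. */
    public static String resolve(String driverMessage) {
        for (Map.Entry<String, String> entry : MESSAGES.entrySet()) {
            if (driverMessage.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return "The operation violated a data integrity rule.";
    }
}
```

Only code at this layer knows that UNQ_USER_SSN means "duplicate SSN", which is exactly why a generic DUPLICATED_KEY signal from higher up cannot produce a useful message on its own.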
It seems to me that dealing with such conditions should - normally - be trusted to the framework itself (Hibernate, in this case) and if such abnormal condition ever happens, there's little you can do on the application/service API side.
So, from my point of view, there should be some wrapper (manual or via AOP) that calls an exception resolution service, which in turn calls an exception resolver built into the DAO layer.
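The wrapper idea can be sketched with a JDK dynamic proxy standing in for full AOP: the proxy wraps the service interface and routes any RuntimeException through an exception-resolution callback before rethrowing. The interface and resolver names here are my own illustration, under the assumption that DAO failures surface as unchecked exceptions.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;

public class ResolvingProxy {

    /** Callback that translates a low-level exception into a meaningful one. */
    public interface ExceptionResolver {
        RuntimeException resolve(RuntimeException original);
    }

    /** Hypothetical service interface used for the demonstration. */
    public interface UserService {
        void register(String ssn);
    }

    @SuppressWarnings("unchecked")
    public static <T> T wrap(Class<T> iface, T target, ExceptionResolver resolver) {
        InvocationHandler handler = (proxy, method, args) -> {
            try {
                return method.invoke(target, args);
            } catch (InvocationTargetException ite) {
                Throwable cause = ite.getCause();
                if (cause instanceof RuntimeException) {
                    // The resolution service decides what the caller sees.
                    throw resolver.resolve((RuntimeException) cause);
                }
                throw cause;
            }
        };
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}
```

The appeal of this shape is exactly the independence mentioned above: the translation happens regardless of how the service API was invoked, rather than relying on, say, a web-tier handler exception resolver.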
In other words, I am using Hibernate because it abstracts certain things that I supposedly don't need to worry about. If that framework all of a sudden does something out of whack and screws up the integrity of the whole system, that's the framework's malfunction. That error will eventually manifest itself as a critical database exception and should go straight to the top handler for critical/fatal errors. However, I tend to think that the likelihood of such conditions hugely depends on the quality of the data access/database design in general. Perhaps your system should not be designed so that it heavily relies on huge volumes of potentially stale cached objects, etc. Don't you agree? Stuff like that should not be happening in well-designed systems, and if it does, it should be treated as a marginal critical condition and looked into from the standpoint of possible design tweaking rather than error handling. That's my take on it. (I know, I have seen some horrendous Hibernate implementations - usually in cases where ORM was totally inappropriate in the first place. As you have said before, people sometimes misuse good technologies. I think ORM should be used when a very clear and straightforward mapping between your object model and database schema can be achieved without making one ridiculously complex to fit the other. But that's a different topic.)