We have the following issue. Table and column names have been changed because of company policy.
There is a table with a primary key populated from a sequence and a unique key on a combination of business columns. The id column is populated by a trigger.
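Simplified, the setup looks roughly like this (the table, constraint and column names are the sanitized ones used below; the column types, the exact set of unique-key columns and the trigger body are assumptions for illustration):

create sequence xxx_s2_owner.t_seq;

create table xxx_s2_owner.t (
  id number not null,
  a  varchar2(50),
  b  varchar2(50),
  c  varchar2(50),
  d  varchar2(50),
  constraint pk_t primary key (id),
  constraint uk_t unique (a, b, c)  -- unique key on the business columns
);

create or replace trigger xxx_s2_owner.trg_t_id
  before insert on xxx_s2_owner.t
  for each row
begin
  :new.id := xxx_s2_owner.t_seq.nextval;  -- id populated from the sequence
end;
/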
We insert into this table from Java using batch inserts, with a statement like:
insert /*+ ignore_row_on_dupkey_index(t, uk_t) */ into xxx_s2_owner.t (a, b, c, d) values(?, ?, ?, 'SOME_CONSTANT')
and we get the following result:
org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback; uncategorized SQLException for SQL [insert /*+ ignore_row_on_dupkey_index(t, uk_t) */ into xxx_s2_owner.t (a, b, c, d) values(?, ?, ?, 'SOME_CONSTANT')]; SQL state [99999]; error code [38911]; ORA-38911: unique constraint (XXX_S2_OWNER.UK_T) violated
; nested exception is java.sql.BatchUpdateException: ORA-38911: unique constraint (XXX_S2_OWNER.UK_T) violated
The issue is that we ask Oracle to ignore duplicates, but instead we get an error on the attempt to insert a duplicate row, just with a different error code: ORA-38911 instead of ORA-00001. That is how the change_dupkey_error_index hint is supposed to behave, not ignore_row_on_dupkey_index.
We tried replacing the Java batch insert with a FORALL insert inside a PL/SQL procedure, but the result is the same. We did find a workaround, however: if we make the index on the id column non-unique and drop the primary key constraint, the error goes away. It looks as if Oracle somehow gets confused by the presence of two unique indexes.
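The PL/SQL attempt looked roughly like this (a sketch only; the collection names and how they are populated are assumptions):

declare
  type t_vc is table of varchar2(50) index by pls_integer;
  l_a t_vc;
  l_b t_vc;
  l_c t_vc;
begin
  -- ... fill l_a, l_b, l_c from the source data ...
  forall i in 1 .. l_a.count
    insert /*+ ignore_row_on_dupkey_index(t, uk_t) */
    into xxx_s2_owner.t (a, b, c, d)
    values (l_a(i), l_b(i), l_c(i), 'SOME_CONSTANT');
end;
/

And the workaround, sketched with an assumed index name:

alter table xxx_s2_owner.t drop primary key drop index;   -- remove the PK and its unique index
create index xxx_s2_owner.ix_t_id on xxx_s2_owner.t (id); -- keep only a non-unique index on id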
I've raised an SR with Oracle, but so far they haven't told me anything useful, so I decided to ask the community. Our Oracle version is 11.2.0.4. One more detail that may be important: we have a number of identical schemas on this database (SIT1, SIT2, etc.).
Has anyone else seen this issue?