After solving the problem described in this thread:
2152449
I had what strikes me as a somewhat odd experience when using columns of type NCLOB.
First of all, I should say that this is not a problem I need a solution for; it's just meant as information exchange and perhaps a starting point for discussion.
Last week I tried several different configurations on the database and code side after I ran into the following problem:
In my database I have a table with a column of type NCLOB, and I wanted to write UTF-16 characters into this column. I know I need UTF-16 as the national character set in my DB to do this, and I have to use the correct parameters on the code side (see the link I posted), which I did. To make sure everything worked, I copied some UTF-16 characters from the character map utility that comes with Windows. I tried to write them into the DB using SQL Developer, and I tried it from the code side, and neither worked properly. Some characters kept their original values, others did not. Most characters were changed to code 191 ('¿'). I found out that the characters that kept their values also exist in the database character set (the DB character set was Western European 1252, and characters like € and ‰ kept their values).
My next step was to create a new DB with the database character set AL32UTF8. All other settings stayed the same. With this database the same code worked without any problems: no conversions or anything like that, and inserting values worked from both sides, from SQL Developer and from code.
I also tried different configurations on the code side, switching the character set form for the OCILobLocator between SQLCS_IMPLICIT and SQLCS_NCHAR. I always used OCI_UTF16ID as the character set id.
On the DB with the Western European 1252 database character set, I had big problems when writing data using SQLCS_NCHAR. It looks like every byte is treated as a single-byte character and then converted to two bytes by appending a byte with the value 0. Of course this is horrible when reading the data back, because there are lots of zero bytes scattered through a chain of UTF-16 characters, and I ended up with a zero terminator in the middle of the string. With the character set form SQLCS_IMPLICIT I did not have this problem.
On the AL32UTF8 database everything worked fine, no matter whether I used SQLCS_IMPLICIT or SQLCS_NCHAR.
Of course I also tried changing the character set form when reading an OCILobLocator. With SQLCS_IMPLICIT I get a mess of single-byte characters, but with SQLCS_NCHAR I get my UTF-16 characters back, even when SQLCS_IMPLICIT was used as the character set form when writing the data.
My current settings on the code side are SQLCS_NCHAR for reading and SQLCS_IMPLICIT for writing; the character set id is always OCI_UTF16ID. I chose SQLCS_IMPLICIT for writing because it made no difference on the database with the AL32UTF8 database character set, and it also worked on the other one, even though some characters were lost there.
My conclusion from all of this: the database character set has an influence on NCLOB columns.
And now I ask myself: why? Isn't the NCLOB type supposed to depend only on the national character set?
NVARCHAR2 columns always work fine, just so you know.