What table column size is needed to accommodate Unicode characters?
Hi guys,
I have encountered something which I don't understand, and I hope the gurus here can shed some light on it for me.
I am running a non-Unicode database and decided to port the data over to a Unicode database.
So:
1) I exported the schema out --> data.dmp
2) then I created the Unicode database and created a user
3) then I imported the schema into the new database
During the imp run I could see a message that character set conversion would take place.
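For reference, the commands were roughly along these lines (a sketch only; the username/password and log file names are placeholders, data.dmp is the real dump file):

# export from the non-Unicode database
exp myuser/mypass FILE=data.dmp OWNER=myuser LOG=exp.log
# import into the new Unicode database after creating the user
imp myuser/mypass FILE=data.dmp FROMUSER=myuser TOUSER=myuser LOG=imp.log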
While importing the data into the Unicode database, I encountered an error saying a column size was too small.
So I went to check the row whose column value was too large to fit in the table.
I realised it contained some [][][][] data, so I went to the live non-Unicode database and found the same row. Indeed, it has the same [][][][] rubbish data, which makes me think someone has inserted a language other than English into the database.
But regardless, I modified the column to a larger size, and now the row can be accommodated. However, the data still shows as [][][].
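The change was something like this (the table and column names here are made up for illustration, and the new size was just a guess):

ALTER TABLE customers MODIFY (remarks VARCHAR2(100));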
q1) Why is that? Since my database is now Unicode, this column's [][][] data should already have been converted to Unicode during the import, but I still cannot tell what language it is.
q2) Why could the [][][] data fit into the column on the non-Unicode database, while on the Unicode database the same column size had to be increased?
q3) While doing more research on Unicode, I read that a Unicode character takes up 2 bytes per character. A lot of my table data is exactly the same length as the column size.
E.g. Name VARCHAR2(5);
value - 'Peter'
Now if the data is converted to Unicode and each character takes 2 bytes instead of 1, isn't 'Peter' going to take up 10 bytes (2 bytes per character)?
Why is it that the data still fits into the column?
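For what it's worth, this is the kind of check I have been looking at, assuming the new database character set is AL32UTF8 (an assumption on my part):

-- LENGTH counts characters, LENGTHB counts bytes
SELECT LENGTH('Peter')  AS char_len,
       LENGTHB('Peter') AS byte_len
FROM dual;
-- in an AL32UTF8 database this returns 5 and 5,
-- because plain ASCII letters are encoded as 1 byte each in UTF-8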
q4) Now that the Unicode database is up, I will be supporting characters from different languages around the world. How big should I set my column sizes? As long as the longest possible name? Or what?
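One option I came across while reading up on this (just a sketch, and the table/column names are invented) is to declare column lengths in character semantics instead of byte semantics, so the limit is counted in characters no matter how many bytes each character needs:

-- the length limit is 50 characters, not 50 bytes
-- (still capped at 4000 bytes overall for a VARCHAR2)
CREATE TABLE people (
    name VARCHAR2(50 CHAR)
);
-- or change the default for the session before creating tables
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;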
Thanks guys!