Database version : 11.2.0.3.0
NLS_CHARACTERSET : AL32UTF8
OS : Red Hat Enterprise Linux Server release 6.3 (Santiago)
I have not worked with multi-language character data or manipulated it before. The basic idea is to write UTF8 data to a Unicode file using UTL_FILE. The database is fairly empty; at least the tables I am working with have no rows in them. First, I inserted a row with only English characters in the columns (something like the insert shown below).
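For illustration only -- table_name and col1..col3 are the same placeholders used in the sample code at the end of this post, and the values are made up:
{pre}
-- Placeholder insert for the first test row (English-only data)
INSERT INTO table_name (col1, col2, col3)
VALUES ('02CAN', 'MLKR', '001');
COMMIT;
{/pre}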
I used utl_file.fopen_nchar to open the file and utl_file.put_line_nchar to write the row to it on the Linux box (see the sample code at the end of this post). When I open the file, I see the English characters as expected (e.g. "02CANMLKR001").
Next, I updated the row so that some of the columns contain Chinese characters (roughly as in the update shown below) and ran the same script. It wrote the file, but when I "vi" the file I see "02CANè¹æ001" (some odd symbols in place of the Chinese characters, along with the regular English characters).
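Roughly like this -- the Chinese text and the WHERE predicate are placeholders, not the actual values:
{pre}
-- Placeholder update for the second test; N'' marks a national-character-set
-- (Unicode) string literal
UPDATE table_name
   SET col2 = N'中文'
 WHERE col1 = '02CAN';   -- predicate assumed for this example
COMMIT;
{/pre}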
When I FTP the file to Windows and open it with Notepad or Notepad++, it does show the Chinese characters. In TextPad, it shows ? in place of the Chinese characters, and the file properties report the file type as UNIX/UTF-8.
My question: "Is my code working and writing the file in Unicode? If not, what changes are required?" -- I know the question is a little vague, but any questions or suggestions that move towards an answer would really help.
sample code:
{pre}
DECLARE
   l_file_handle   UTL_FILE.file_type;
   l_file_name     VARCHAR2 (50) := 'test.dat';
   l_rec           VARCHAR2 (250);
BEGIN
   -- Open the file for writing via the OUTPUT_DIR directory object
   l_file_handle := UTL_FILE.fopen_nchar ('OUTPUT_DIR', l_file_name, 'W');

   -- The table holds a single row, so SELECT ... INTO works here
   SELECT col1 || col2 || col3 INTO l_rec FROM table_name;

   UTL_FILE.put_line_nchar (l_file_handle, l_rec);
   UTL_FILE.fclose (l_file_handle);
END;
/
{/pre}
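In case it is relevant, this is a sketch of the query I can run to confirm the character sets in play (it assumes access to NLS_DATABASE_PARAMETERS; NLS_NCHAR_CHARACTERSET may also matter since the *_nchar routines work with national-character-set data):
{pre}
-- Check the database and national character sets
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
{/pre}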