DB version: 11.2.0.4
OS: Oracle Linux 6.8
I have a table with millions of rows and a size of 52GB. Auditors at my firm need to check some data, so they need all the data in this table (pkms_detail_staging) spooled into multiple CSV files, each containing only 500,000 rows.
pkms_detail_staging is a staging table with only 4 columns.
So I need to create a script that churns out CSV files with 500K rows each until all the rows in this table have been written out. The last batch might not have a full 500K rows, but those rows must be covered too. One of the columns is of DATE datatype, so sorting on that column is also possible.
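For example, sorting on that column could give every row a stable batch number (order_dt is a placeholder; I'm not showing the real column names here):

-- untested sketch: number the rows by the DATE column (placeholder name order_dt)
-- and derive a batch number, 500K rows per batch
SELECT CEIL(ROW_NUMBER() OVER (ORDER BY order_dt) / 500000) AS batch_no,
       t.*
FROM   pkms_detail_staging t;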
I have created the basic structure of it as shown below. Can somebody help me add the missing code?
All suggestions welcome.
#!/bin/bash
sqlplus -s fmsprd/fmsd.123PRD@prodfms-scan.xyz.com/soaprd <<HEREDOC
alter session set parallel_force_local = true;
alter session enable parallel query;
set serveroutput on size unlimited
-- batch number has to change in the file name for every 500K rows
spool order-data-batch1_500K.csv
DECLARE
    TYPE pkms_detail_t IS TABLE OF pkms_detail_staging%ROWTYPE;
    l_pkms_detail pkms_detail_t;
    CURSOR c_data IS
        SELECT *
        FROM   pkms_detail_staging;
BEGIN
    OPEN c_data;
    LOOP
        FETCH c_data
        BULK COLLECT INTO l_pkms_detail LIMIT 500000;
        EXIT WHEN l_pkms_detail.COUNT = 0;
        -- add the logic here to print all 4 columns of this table in batches of 500K rows
        DBMS_OUTPUT.put_line(l_pkms_detail.COUNT || ' rows');
    END LOOP;
    CLOSE c_data;
END;
/
spool off
exit
HEREDOC
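One direction I'm considering, in case it helps: SPOOL can't switch to a new file from inside a running PL/SQL block (it's a client-side SQL*Plus command, while the block runs on the server), so the per-batch file name would probably have to come from UTL_FILE instead. A rough, untested sketch (it assumes a directory object named EXPORT_DIR on the database server, and placeholder column names col1, col2, col3 and order_dt):

DECLARE
    TYPE pkms_detail_t IS TABLE OF pkms_detail_staging%ROWTYPE;
    l_rows  pkms_detail_t;
    l_file  UTL_FILE.FILE_TYPE;
    l_batch PLS_INTEGER := 0;
    CURSOR c_data IS
        SELECT * FROM pkms_detail_staging
        ORDER BY order_dt;  -- placeholder column name
BEGIN
    OPEN c_data;
    LOOP
        FETCH c_data BULK COLLECT INTO l_rows LIMIT 500000;
        EXIT WHEN l_rows.COUNT = 0;
        l_batch := l_batch + 1;
        -- one file per 500K-row batch; EXPORT_DIR is an assumed directory object
        l_file := UTL_FILE.FOPEN('EXPORT_DIR',
                                 'order-data-batch' || l_batch || '_500K.csv',
                                 'w', 32767);
        FOR i IN 1 .. l_rows.COUNT LOOP
            UTL_FILE.PUT_LINE(l_file,
                l_rows(i).col1 || ',' || l_rows(i).col2 || ',' ||
                l_rows(i).col3 || ',' ||
                TO_CHAR(l_rows(i).order_dt, 'YYYY-MM-DD HH24:MI:SS'));
        END LOOP;
        UTL_FILE.FCLOSE(l_file);
    END LOOP;
    CLOSE c_data;
END;
/

The caveat with UTL_FILE is that the files land on the database server, not on the machine running the script. If they have to end up on the client, the alternative would be to drive the batch loop from bash and run one SQL*Plus spool per ROW_NUMBER range.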