
SQL & PL/SQL


Extract data from a table with 100 million rows

User_XN10JDec 5 2021

Hi all ,

I'm trying to download data from Oracle SQL db to my local machine through sqlplus.
It's just a simple select statement:

Query 1: select column1 || '|' || column2 from table_name;
Query 2: select column1 || '|' || column2 from table_name where column2 in ('value1','value2');

Query 1 takes about 75 minutes to write the result set to CSV, which is extremely slow for around 10 GB of data.
So I decided to select in chunks based on column2, as the table is uniformly distributed on column2: each value accounts for about 5% of the rows. Now I run 20 select statements concurrently, one per column2 value, in 20 different sqlplus sessions, and append the results to a single file.
This has reduced the time from 75 minutes to 17 minutes.
Still, when I run just one query (query 2) in one session, it takes around 7-8 minutes, but running 20 queries concurrently increases each query's elapsed time drastically.
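The chunk-and-merge pattern described above can be sketched in Python. This is only an illustration of the pattern, not Oracle-specific code: sqlite is used as a stand-in database, and the table/column names from the question are placeholders.

```python
import os
import sqlite3
import tempfile
from concurrent.futures import ThreadPoolExecutor

def extract_chunk(db_path, value, out_dir):
    # Each worker opens its own connection and writes its own chunk file,
    # so sessions never contend on a single output file handle.
    conn = sqlite3.connect(db_path)
    out_path = os.path.join(out_dir, f"chunk_{value}.csv")
    with open(out_path, "w") as f:
        for (line,) in conn.execute(
            "SELECT column1 || '|' || column2 FROM table_name WHERE column2 = ?",
            (value,),
        ):
            f.write(line + "\n")
    conn.close()
    return out_path

def parallel_extract(db_path, values, out_file, workers=20):
    with tempfile.TemporaryDirectory() as out_dir:
        # One task per distinct column2 value, run concurrently.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            paths = list(pool.map(lambda v: extract_chunk(db_path, v, out_dir), values))
        # Concatenate the chunk files into the final extract, in value order.
        with open(out_file, "w") as out:
            for p in paths:
                with open(p) as chunk:
                    out.write(chunk.read())
```

Writing per-chunk files and concatenating at the end avoids interleaved writes from 20 sessions appending to one file at the same time.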
Some of the important parameters pertaining to the question:
Oracle version - 11g
Max utilisation - 400
Index is not configured - if I index the table on column2, how much improvement can I expect? I know it's impossible to predict, but estimates are welcome.
SQL*Plus configuration -
Page size - 1000
Line size - 1000
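For what it's worth, an index on column2 would typically help only the filtered query (query 2); the full extract (query 1) reads every row anyway, so a full table scan is usually the right plan there. A sketch of the index, if you decide to try it (names as in the question):

```sql
-- Hypothetical: index on the chunking column.
CREATE INDEX table_name_col2_ix ON table_name (column2);
-- Gather fresh statistics so the 11g optimizer can cost the index properly.
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TABLE_NAME');
```

Even for query 2, at roughly 5% of a 100-million-row table per value, the optimizer may still prefer a full scan; the only reliable way to know is to test on your data.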
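On the SQL*Plus side, spool throughput is usually dominated by fetch size and formatting overhead. A hedged starting point, using standard SQL*Plus SET options (file name and query are placeholders; tune ARRAYSIZE for your network):

```sql
-- Larger fetch size means fewer client/server round trips (max is 5000).
SET ARRAYSIZE 5000
-- No page breaks or column headings in the extract.
SET PAGESIZE 0
SET LINESIZE 1000
SET FEEDBACK OFF
-- Trim trailing blanks so lines are not padded out to LINESIZE.
SET TRIMSPOOL ON
-- Only suppresses screen output when run from a script file, not interactively.
SET TERMOUT OFF

SPOOL extract.csv
SELECT column1 || '|' || column2 FROM table_name;
SPOOL OFF
```

With PAGESIZE 1000 (as configured above), SQL*Plus still emits headings and page breaks every 1000 lines into the spool file; PAGESIZE 0 suppresses them entirely.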
I am new to this, and I'm sure I'm doing something horribly wrong here. Could you please point me in the right direction: what could be the issue, and what should I check to make the process more efficient?

This post has been answered by User_H3J7U on Dec 6 2021
Post Details
Added on Dec 5 2021
9 comments
9,018 views