The "Sqoop export to Oracle DB fails for non-trivial file sizes" discussion is archived but because "Exports to MySQL work OK, but only with --direct argument" the export load is more than supported by Sqoop. The direct mode is designed for greater load. And with --direct some data is exported.
Both with and without --direct argument; in the second case, some records make it to the database table (exact number varies, seemingly randomly). No records are exported in the first case
Tune the following Sqoop arguments on the JDBC connection or in the Sqoop mapping to optimize performance:
1. Enable JDBC batching using the --batch parameter.
sqoop export --connect <<JDBC URL>> --username <<SQOOP_USER_NAME>> --password <<SQOOP_PASSWORD>> --table <<TABLE_NAME>> --export-dir <<FOLDER_URI>> --batch
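For illustration, a filled-in version of this command might look like the following; the JDBC URL, credentials, table name, and export directory are hypothetical placeholders, not values from the original report:
sqoop export --connect jdbc:oracle:thin:@dbhost.example.com:1521/ORCL --username sqoop_user --password sqoop_pass --table SALES_EXPORT --export-dir /user/hive/warehouse/sales_export --batch
With --batch, Sqoop executes the generated INSERT statements through the JDBC batching API rather than one at a time, which reduces per-row round trips to the database.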
2. Use the sqoop.export.records.per.statement property to specify the number of records used in each INSERT statement:
sqoop export -Dsqoop.export.records.per.statement=10 --connect <<JDBC URL>> --username <<SQOOP_USER_NAME>> --password <<SQOOP_PASSWORD>> --table <<TABLE_NAME>> --export-dir <<FOLDER_URI>>
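Note that -D properties are generic Hadoop arguments and must appear immediately after the tool name, before any Sqoop-specific options, as in the template above. Reusing the same hypothetical values:
sqoop export -Dsqoop.export.records.per.statement=10 --connect jdbc:oracle:thin:@dbhost.example.com:1521/ORCL --username sqoop_user --password sqoop_pass --table SALES_EXPORT --export-dir /user/hive/warehouse/sales_export
Each INSERT statement then carries up to ten rows instead of one.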
3. Use the sqoop.export.statements.per.transaction property to set how many INSERT statements are executed per transaction:
sqoop export -Dsqoop.export.statements.per.transaction=10 --connect <<JDBC URL>> --username <<SQOOP_USER_NAME>> --password <<SQOOP_PASSWORD>> --table <<TABLE_NAME>> --export-dir <<FOLDER_URI>>
Sqoop defaults to 100 for both the sqoop.export.records.per.statement and sqoop.export.statements.per.transaction properties, which allows up to 100 × 100 = 10,000 rows to accumulate in a single transaction. Reducing both to 10 caps each transaction at 100 rows, shrinking the transaction buffers and helping to prevent out-of-memory conditions.
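As a starting point, both properties can be lowered in a single command; the connection details below are again hypothetical:
sqoop export -Dsqoop.export.records.per.statement=10 -Dsqoop.export.statements.per.transaction=10 --connect jdbc:oracle:thin:@dbhost.example.com:1521/ORCL --username sqoop_user --password sqoop_pass --table SALES_EXPORT --export-dir /user/hive/warehouse/sales_export
If memory pressure persists, the values can be reduced further and the effect on throughput measured.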