Dear Experts,
I am using a simple shell script to update 100 million records in the background, but it failed without reporting any error.
If we want to run large UPDATE or DELETE statements against a high-volume table, what would be the best approach?
I need some assistance to commit after every 10 lakh (1 million) rows; I have put a rough sketch of what I mean after the execution plan below.
#!/bin/bash
# Run the update in the background using nohup
nohup sqlplus -s hrms2plus/hrms2plus@hrms2pp <<EOF > update_attendance.log 2>&1 &
-- Update statement: flags every row of the table in one huge transaction
UPDATE hrms2plus.HRMS2_EMP_ATTENDANCE_REGISTER
SET MIGRATION_FLAG = 'HRMS1';
-- Commit changes
COMMIT;
-- Exit SQL*Plus
EXIT;
EOF
# End of script
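To find out what actually went wrong, I plan to re-run with SQL*Plus error reporting switched on, so any ORA- error lands in update_attendance.log. These are standard SQL*Plus directives, added at the top of the heredoc:

WHENEVER SQLERROR EXIT SQL.SQLCODE
WHENEVER OSERROR EXIT FAILURE
SET ECHO ON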
HRMS2PLUS@HRMS2PP >SELECT * FROM table(DBMS_XPLAN.DISPLAY);
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 1759083947
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | 107M| 14G| 590K (1)| 00:00:24 |
| 1 | UPDATE | HRMS2_EMP_ATTENDANCE_REGISTER | | | | |
| 2 | TABLE ACCESS FULL| HRMS2_EMP_ATTENDANCE_REGISTER | 107M| 14G| 590K (1)| 00:00:24 |
----------------------------------------------------------------------------------------------------
9 rows selected.
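For the batched commit itself, this is the kind of PL/SQL loop I have in mind (just a sketch, untested; it assumes rows still to be migrated do not already have MIGRATION_FLAG = 'HRMS1', otherwise the loop would never finish):

DECLARE
  c_batch CONSTANT PLS_INTEGER := 1000000;  -- 10 lakh rows per commit
BEGIN
  LOOP
    UPDATE hrms2plus.HRMS2_EMP_ATTENDANCE_REGISTER
       SET MIGRATION_FLAG = 'HRMS1'
     WHERE (MIGRATION_FLAG IS NULL OR MIGRATION_FLAG <> 'HRMS1')
       AND ROWNUM <= c_batch;
    EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to update
    COMMIT;                      -- release undo after each batch
  END LOOP;
  COMMIT;                        -- close out any open transaction
END;
/

Each pass repeats the full table scan, though, so would an index on MIGRATION_FLAG help here, or is something like DBMS_PARALLEL_EXECUTE (or CTAS and swap) the better route for 107 million rows?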
Thanks