Loading a 300 million row table (job scheduled 2 times a week)
SRD7 — Sep 18 2011 — edited Oct 10 2011

Hi -
I have a 300 million row table that I'd like to sync between Source and Target 2 times a week. In my current scenario I got a one-time full dump of the data from the Source and imported it into the Target; from there onwards my ODI job needs to keep the Source and Target in sync.
Can anyone please tell me how to set up the ODI job so that it syncs the data between the Source and Target? I have a filter where I select only the last 7 days of data and do a merge onto the Target. During the merge it compares against the existing data: if a row exists it updates it, and if the row doesn't exist it inserts it.
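For reference, the merge I run is roughly equivalent to the statement below (table names, the key column, and the date column are placeholders, not my actual schema):

```sql
MERGE INTO target_tbl t
USING (
    -- filter: only rows loaded in the last 7 days
    SELECT *
    FROM   source_tbl s
    WHERE  s.load_date >= SYSDATE - 7
) src
ON (t.id = src.id)                        -- hypothetical primary key
WHEN MATCHED THEN
    UPDATE SET t.col1 = src.col1,
               t.load_date = src.load_date
WHEN NOT MATCHED THEN
    INSERT (id, col1, load_date)
    VALUES (src.id, src.col1, src.load_date);
```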
The problem with this approach is that it takes too much time to compare the selected dataset with the target dataset, since the Target already has 300 million rows.
Please suggest any other approaches using ODI.
Limitation: I can't set up replication for this table.
Appreciate your input.
Thanks