Hi, we have databases on version 11.2 and some on 19c, and we are trying to move all of them to 19c. During one analysis we found tables across databases holding historical data that is very old and really should have been purged, since we no longer need it. For some of them, we already have jobs/procedures created and scheduled to purge the data (for example, for partitioned tables those jobs drop partitions, and for non-partitioned tables a few of them delete rows based on a set retention period). But some tables have no retention period defined, or are simply not included in that purge procedure, so in those cases the data has kept growing for years without our notice, which occupies space and also impacts performance.
Keeping in mind that DELETE statements have a lot of other implications (fragmentation etc.), and that each of those tables has a different structure based on the application design, we wanted to get the experts' view on how feasible it is to build a common job/procedure that would take care of purging all types of table data, whether partitioned or non-partitioned. The idea is to just pass the table name, the column name on which the purge is based, and a few other attributes (such as whether the table is partitioned) to the procedure; a rough sketch of what we have in mind is below. It could then be called/scheduled for any database and configured through an API as a one-time setup in a button click, and we are all set. Is it worth working towards this common data-purging solution, or is there some issue we are not able to foresee? Or does Oracle provide a ready-made automatic data purge solution for such a requirement?
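For illustration, here is a minimal sketch of the kind of procedure we are imagining. The name GENERIC_PURGE, all parameter names, and the assumption of range partitioning on a date column are purely hypothetical; batching, logging, locking and error handling are deliberately left out:

CREATE OR REPLACE PROCEDURE generic_purge (
    p_owner          IN VARCHAR2,
    p_table_name     IN VARCHAR2,
    p_date_column    IN VARCHAR2,
    p_retention_days IN NUMBER,
    p_is_partitioned IN VARCHAR2 DEFAULT 'N'  -- 'Y' = drop old partitions, 'N' = delete rows
) AS
    l_cutoff DATE := TRUNC(SYSDATE) - p_retention_days;
BEGIN
    IF p_is_partitioned = 'Y' THEN
        -- Drop range partitions whose upper bound is older than the cutoff.
        -- HIGH_VALUE is a LONG, so it has to be evaluated dynamically.
        FOR part IN (SELECT partition_name, high_value
                       FROM all_tab_partitions
                      WHERE table_owner = UPPER(p_owner)
                        AND table_name  = UPPER(p_table_name)
                      ORDER BY partition_position) LOOP
            IF part.high_value <> 'MAXVALUE' THEN
                DECLARE
                    l_high DATE;
                BEGIN
                    EXECUTE IMMEDIATE 'SELECT ' || part.high_value || ' FROM dual' INTO l_high;
                    IF l_high <= l_cutoff THEN
                        EXECUTE IMMEDIATE 'ALTER TABLE ' || p_owner || '.' || p_table_name ||
                                          ' DROP PARTITION ' || part.partition_name ||
                                          ' UPDATE INDEXES';
                    END IF;
                END;
            END IF;
        END LOOP;
    ELSE
        -- Non-partitioned: plain DELETE in one shot (batching/commit-interval logic omitted).
        EXECUTE IMMEDIATE 'DELETE FROM ' || p_owner || '.' || p_table_name ||
                          ' WHERE ' || p_date_column || ' < :cutoff'
          USING l_cutoff;
        COMMIT;
    END IF;
END generic_purge;
/

The tables to purge and their retention periods would come from a small configuration table, and the scheduler job would just loop over it and call the procedure for each row.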