Hi folks,
Hope you're all doing well!
Maybe this is a silly question, but it's something I've been wondering about and haven't found a clear answer to yet.
Here's the context: when using an RMAN recovery catalog, we all know that relevant information from the controlfile gets synchronized with the catalog. So far, so good. I'm regularly taking backups, and the catalog tracks the backup pieces, SCNs, timestamps, and so on.
Now, let’s switch to the restore perspective:
We have an automated recovery server that performs “auto restore and recover” operations. I’ve written a script that restores mission-critical databases with predefined parameters to regularly test the reliability of our backups.
To make this work, we connect like this:
rman target / catalog /@mycat cmdfile=autorestore.sh
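For context, the command file is essentially a standard point-in-time restore/recover run. A simplified sketch (the UNTIL clause and the one-day offset here are illustrative, not our exact parameters):

  RUN {
    # illustrative recovery window, not our real setting
    SET UNTIL TIME "SYSDATE - 1";
    RESTORE DATABASE;
    RECOVER DATABASE;
  }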
Everything runs fine except for one behavior during the archivelog restore phase:
At the exact moment the test process restores archivelogs, we observe that those same archivelogs get marked as EXPIRED on the live production database (which shares the same recovery catalog).
Yep, you're reading that right: the archivelogs being restored by the recovery process are simultaneously considered expired on the production system.
So here's my actual question:
Why does the catalog update the status of archivelogs (that are actively being restored elsewhere) to EXPIRED on the production system?
On the prod side, this has real consequences: the backup routines for archivelogs stop working correctly until we clean up or revalidate them manually.
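For reference, the manual fix we currently run on production is essentially the standard crosscheck/cleanup sequence (a sketch; we apply tighter filters than ALL in practice):

  # re-check each cataloged archivelog against disk; logs that are
  # actually present flip back from EXPIRED to AVAILABLE
  CROSSCHECK ARCHIVELOG ALL;
  # optionally drop catalog records for logs that are genuinely gone
  DELETE EXPIRED ARCHIVELOG ALL;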
Any ideas on what's happening behind the scenes? Is this expected behavior, perhaps because the catalog doesn't distinguish between restore and backup operations when it updates archivelog status?
Cheers,
Dennis