Has anybody thought about improving the performance of database patching? I'm sure having plenty of server memory and database SGA, faster CPUs, and probably fewer database components helps. But has anyone profiled, even very roughly, where most of the patching time is spent?
We recently applied the April 2021 patch to a 2-node 19c RAC database using opatchauto. This is a cluster with 20 CPUs (2.60GHz) and 640 GB of memory per node, with the database well tuned and HugePages properly configured. Each node takes 50+ minutes to patch. According to the opatchauto log (in GI_Home/cfgtoollogs/opatchauto), the database binary patch took 13 minutes and the GI binary patch took 19 minutes. But looking at the opatchauto binary log, I can't tell which part of the binary patch took significantly longer. The datapatch step didn't take long.
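For rough profiling of where the time goes, one option is to measure the gaps between consecutive timestamped lines in the log and look at the largest ones. This is a minimal sketch of that idea; it assumes log lines start with a `YYYY-MM-DD HH:MM:SS` timestamp, which may not match the actual opatchauto log format (the sample lines below are made up, not real opatchauto output), so the slicing and format string would need adjusting for the real file.

```python
from datetime import datetime

def largest_gaps(log_lines, ts_format="%Y-%m-%d %H:%M:%S", top=3):
    """Return the largest time gaps between consecutive timestamped
    log lines, paired with the line that preceded each gap."""
    entries = []
    for line in log_lines:
        # Assume the first 19 characters hold the timestamp; skip
        # lines that don't parse (continuation lines, stack traces, ...).
        try:
            ts = datetime.strptime(line[:19], ts_format)
        except ValueError:
            continue
        entries.append((ts, line.rstrip()))
    gaps = []
    for (prev_ts, prev_line), (ts, _) in zip(entries, entries[1:]):
        gaps.append(((ts - prev_ts).total_seconds(), prev_line))
    return sorted(gaps, reverse=True)[:top]

# Demo on a made-up excerpt (timestamps and messages are illustrative only).
sample = [
    "2021-05-01 10:00:00 Patching component oracle.rdbms...",
    "2021-05-01 10:01:00 Patching component oracle.has.crs...",
    "2021-05-01 10:12:30 Relinking binaries...",
    "2021-05-01 10:13:00 Patch applied successfully.",
]
for seconds, line in largest_gaps(sample):
    print(f"{seconds:8.0f}s after: {line}")
```

In the demo the 11.5-minute gap after the second line would surface first, which is the kind of signal I'd hope to pull out of the real binary-patch log.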
This database cluster does have a bunch of unnecessary components installed that we never care about or use: ACFS, AFD, MGMTDB, and, inside the database, OLAP, Database Vault, Label Security, and Spatial. But I don't know whether uninstalling any of them would make a noticeable difference. I know patching performance isn't a big deal since it happens only once in a while. But for sensitive applications, it makes business sense to speed up patching as much as we can, even if it's done in a rolling fashion.