
PRVG-10130 and PRKC-1025 when trying to apply JAN2018 PSU to 12cR1 GI

Dear DBA Frank, May 11 2018 (edited May 15 2018)

12.1.0.2, 2-node RAC cluster. I am attempting to apply the JAN2018 PSU to my Grid Infrastructure (GI). Last week, I applied it successfully to another 2-node cluster at the same version. This week, I am getting the same blocking problem on two similar clusters.

root@racnode2: ~ # crsctl query crs softwarepatch

Oracle Clusterware patch level on node racnode2 is [735250889]
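
For what it's worth, the patch level can be cross-checked on each node with the standard crsctl queries below; as far as I know, the -f flag (available since 12.1) also prints the cluster-wide active patch level:

crsctl query crs softwarepatch     # software patch level recorded on the local node
crsctl query crs activeversion -f  # active version plus the cluster active patch level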

According to the readme.html of the PSU (Patch 27010872 - Oracle Grid Infrastructure Patch Set Update 12.1.0.2.180116), since I am in the case of a non-shared GI home with non-shared Oracle homes, I must simply run opatchauto apply:

Case 1: GI Home and the Database Homes That Are Not Shared and ACFS File System Is Not Configured

As root user, execute the following command on each node of the cluster:

# <GI_HOME>/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/27010872  

Here is what I get (with or without the -analyze option):

root@racnode2: /app/grid/12.1.0/OPatch # ./opatchauto apply /mnt/oracle/Linux64/12.1.0.2/PSU012018_JAN2018/27010872 -analyze

OPatchauto session is initiated at Fri May 11 10:10:21 2018

System initialization log file is /app/grid/12.1.0/cfgtoollogs/opatchautodb/systemconfig2018-05-11_10-10-28AM.log.

Clusterware is either not running or not configured. You have the following 2 options:

1. Configure and start the Clusterware on this node and re-run the tool

2. Run the tool with '-oh <GI_HOME>' to first patch the Grid Home, then invoke tool with '-database <oracle database name>' or '-oh <RAC_HOME>' to patch the RAC home

OPATCHAUTO-72029: CLusterware home not configured.

OPATCHAUTO-72029: Clusterware is either not running or not configured or cluster is software only GI

OPATCHAUTO-72029: If only Grid Infrastructure software is installed, please run opatchauto with '-oh' option . Alternatively configure and start the Clusterware before running opatchauto.

OPatchauto session completed at Fri May 11 10:10:30 2018

Time taken to complete the session 0 minute, 10 seconds

Topology creation failed.

root@racnode2: /app/grid/12.1.0/OPatch # cat /app/grid/12.1.0/cfgtoollogs/opatchautodb/systemconfig2018-05-11_10-10-28AM.log

2018-05-11 10:10:28,895 INFO  [1] com.oracle.glcm.patch.auto.db.product.inventory.ClusterInformationLoader - crsType: CRS

2018-05-11 10:10:28,911 INFO  [1] com.oracle.glcm.patch.auto.db.product.inventory.ClusterInformationLoader - running: false

2018-05-11 10:10:28,913 INFO  [1] com.oracle.glcm.patch.auto.db.product.validation.validators.DatabaseHomeOptionValidation - Clusterware is either not running or not configured. You have the following 2 options:

1. Configure and start the Clusterware on this node and re-run the tool

2. Run the tool with '-oh <GI_HOME>' to first patch the Grid Home, then invoke tool with '-database <oracle database name>' or '-oh <RAC_HOME>' to patch the RAC home

2018-05-11 10:10:28,913 INFO  [1] com.oracle.glcm.patch.auto.db.product.validation.validators.DatabaseHomeOptionValidation - Clusterware is either not running or not configured. You have the following 2 options:

1. Configure and start the Clusterware on this node and re-run the tool

2. Run the tool with '-oh <GI_HOME>' to first patch the Grid Home, then invoke tool with '-database <oracle database name>' or '-oh <RAC_HOME>' to patch the RAC home

2018-05-11 10:10:28,929 INFO  [1] com.oracle.glcm.patch.auto.db.product.validation.DBValidationController - Validation failed with reason  :: CLUSTERHOME_NOT_RUNNIG

2018-05-11 10:10:28,944 INFO  [1] com.oracle.glcm.patch.auto.db.product.inventory.ClusterInformationLoader - crsType: CRS

2018-05-11 10:10:28,944 INFO  [1] com.oracle.glcm.patch.auto.db.product.inventory.ClusterInformationLoader - running: false
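
Since opatchauto insists that Clusterware is either not running or not configured, the obvious first step is to verify the state of the stack itself. These are the standard checks I know of (run as root on each node; I am not pasting the output here):

crsctl check crs           # status of the CRS, CSS and EVM daemons on the local node
crsctl check cluster -all  # high-level stack status on every node of the cluster
crsctl stat res -t -init   # state of the lower-stack (init) resources, e.g. ora.crsd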

I then add the -oh option, as suggested in the error message above:

root@racnode2: /app/grid/12.1.0/OPatch # ./opatchauto apply /mnt/oracle/Linux64/12.1.0.2/PSU012018_JAN2018/27010872 -oh /app/grid/12.1.0

OPatchauto session is initiated at Fri May 11 10:12:00 2018

System initialization log file is /app/grid/12.1.0/cfgtoollogs/opatchautodb/systemconfig2018-05-11_10-12-05AM.log.

Failed:

Verifying shared storage accessibility

Checking shared storage accessibility...

ERROR:  /app/grid/12.1.0

PRVG-10130 : Unable to determine whether file path "/app/grid/12.1.0" is shared by nodes "racnode2,racnode1"

PRKC-1025 : Failed to create a file under the filepath /app/grid/12.1.0 because the filepath is not executable or writable

Shared storage check failed on nodes "racnode2,racnode1"

Verification of shared storage accessibility was unsuccessful on all the specified nodes.

NODE_STATUS::racnode2:EFAIL

The result of cluvfy command contain EFAIL NODE_STATUS::racnode2:EFAIL

OPATCHAUTO-72050: System instance creation failed.

OPATCHAUTO-72050: Failed while retrieving system information.

OPATCHAUTO-72050: Please check log file for more details.

OPatchauto session completed at Fri May 11 10:12:14 2018

Time taken to complete the session 0 minute, 15 seconds

Topology creation failed.

root@racnode2: /app/grid/12.1.0/OPatch # cat /app/grid/12.1.0/cfgtoollogs/opatchautodb/systemconfig2018-05-11_10-12-05AM.log

2018-05-11 10:12:05,314 INFO  [1] com.oracle.glcm.patch.auto.db.product.inventory.ClusterInformationLoader - crsType: CRS

2018-05-11 10:12:05,327 INFO  [1] com.oracle.glcm.patch.auto.db.product.inventory.ClusterInformationLoader - running: false

2018-05-11 10:12:05,348 INFO  [1] com.oracle.glcm.patch.auto.db.product.inventory.ClusterInformationLoader - crsType: CRS

2018-05-11 10:12:05,349 INFO  [1] com.oracle.glcm.patch.auto.db.product.inventory.ClusterInformationLoader - running: false

2018-05-11 10:12:06,333 INFO  [1] com.oracle.glcm.patch.auto.db.product.driver.crs.CrsProductDriver - crsType: CRS

2018-05-11 10:12:06,334 INFO  [1] com.oracle.glcm.patch.auto.db.product.driver.crs.CrsProductDriver - running: false

2018-05-11 10:12:06,335 INFO  [1] com.oracle.glcm.patch.auto.db.product.driver.crs.CrsProductDriver - Checking for Exadata. Looking for /opt/oracle.cellos/ORACLE_CELL_OS_IS_SETUP file.

2018-05-11 10:12:06,336 INFO  [1] com.oracle.glcm.patch.auto.db.product.driver.crs.CrsProductDriver - This is not an Exadata environment

2018-05-11 10:12:07,080 INFO  [1] com.oracle.glcm.patch.auto.db.product.driver.crs.CrsProductDriver - crsType: CRS

2018-05-11 10:12:07,082 INFO  [1] com.oracle.glcm.patch.auto.db.product.driver.crs.CrsProductDriver - running: false

2018-05-11 10:12:14,253 WARNING [1] oracle.dbsysmodel.driver.sdk.util.OsysUtility - Failed:

Verifying shared storage accessibility

Checking shared storage accessibility...

ERROR:  /app/grid/12.1.0

PRVG-10130 : Unable to determine whether file path "/app/grid/12.1.0" is shared by nodes "racnode2,racnode1"

PRKC-1025 : Failed to create a file under the filepath /app/grid/12.1.0 because the filepath is not executable or writable

Shared storage check failed on nodes "racnode2,racnode1"

Verification of shared storage accessibility was unsuccessful on all the specified nodes.

NODE_STATUS::racnode2:EFAIL

The result of cluvfy command contain EFAIL NODE_STATUS::racnode2:EFAIL

2018-05-11 10:12:14,253 SEVERE [1] com.oracle.glcm.patch.auto.db.integration.model.productsupport.topology.TopologyCreator - Not able to retrieve system instance details :: Unable to determine if "/app/grid/12.1.0" is a shared oracle home.

Failed:

Verifying shared storage accessibility

Checking shared storage accessibility...

ERROR:  /app/grid/12.1.0

PRVG-10130 : Unable to determine whether file path "/app/grid/12.1.0" is shared by nodes "racnode2,racnode1"

PRKC-1025 : Failed to create a file under the filepath /app/grid/12.1.0 because the filepath is not executable or writable

Shared storage check failed on nodes "racnode2,racnode1"

Verification of shared storage accessibility was unsuccessful on all the specified nodes.

NODE_STATUS::racnode2:EFAIL

The result of cluvfy command contain EFAIL NODE_STATUS::racnode2:EFAIL
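
To me, PRKC-1025 suggests the verification step tried to create a test file under /app/grid/12.1.0 and could not, i.e. a permission problem on the path rather than a genuine shared-storage issue. A couple of checks worth running on both nodes (plain shell; 'grid' as the GI software owner is an assumption about my setup, substitute your own):

ls -ld /app /app/grid /app/grid/12.1.0   # ownership and permissions along the whole path
# try creating a file the way the check presumably does ('grid' = GI owner in my sketch)
sudo -u grid touch /app/grid/12.1.0/permtest && sudo -u grid rm /app/grid/12.1.0/permtest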

I have had this behavior on 2 clusters. I am stuck, and a Google search for PRVG-10130 and PRKC-1025 returns nothing that matches my situation exactly. Does anyone have any ideas or suggestions?
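
In case it helps anyone reproduce this outside of opatchauto: if I read the cluvfy usage correctly, the same shared-storage check can be run directly with the 'comp ssa' component (node names and paths are from my environment, so treat this as a sketch):

/app/grid/12.1.0/bin/cluvfy comp ssa -n racnode1,racnode2 -s /app/grid/12.1.0 -t software -verbose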

This post has been answered by Dear DBA Frank on May 15 2018