
12.1.0.2 grid installation (cluster) - first vip won't start

User_A0C9W — Jun 8 2017 (edited Jun 9 2017)

Hi everyone,

I'm facing some issues while trying to configure a cluster.

I get the following error message in the CRS configuration log ($GRID_HOME/cfgtoollogs/crsconfig/rootcrs_zlab01_2017-06-08_04-23-57PM.log):

...

2017-06-08 16:32:39: nodes_to_start=zlab01

2017-06-08 16:34:45: exit value of start nodeapps/vip is 1

2017-06-08 16:34:45: output for start nodeapps is  PRKO-2421 : Network resource is already started on node(s): zlab01 PRCR-1079 : Failed to start resource ora.zlab01.vip CRS-2674: Start of 'ora.zlab01.vip' on 'zlab01' failed CRS-2632: There are no more servers to try to place resource 'ora.zlab01.vip' on that would satisfy its placement policy PRKO-2422 : ONS is already started on node(s): zlab01

2017-06-08 16:34:45: output of startnodeapp after removing already started mesgs is PRCR-1079 : Failed to start resource ora.zlab01.vip CRS-2674: Start of 'ora.zlab01.vip' on 'zlab01' failed CRS-2632: There are no more servers to try to place resource 'ora.zlab01.vip' on that would satisfy its placement policy

2017-06-08 16:34:45: Executing cmd: /opt/oracle/product/grid/bin/clsecho -p has -f clsrsc -m 180 "/opt/oracle/product/grid/bin/srvctl start nodeapps -n zlab01" "1"

2017-06-08 16:34:45: Executing cmd: /opt/oracle/product/grid/bin/clsecho -p has -f clsrsc -m 180 "/opt/oracle/product/grid/bin/srvctl start nodeapps -n zlab01" "1"

2017-06-08 16:34:45: Command output:

>  CLSRSC-180: An error occurred while executing the command '/opt/oracle/product/grid/bin/srvctl start nodeapps -n zlab01' (error code 1)

>End Command output

2017-06-08 16:34:45: CLSRSC-180: An error occurred while executing the command '/opt/oracle/product/grid/bin/srvctl start nodeapps -n zlab01' (error code 1)

2017-06-08 16:34:45: Invoking "/opt/oracle/product/grid/bin/cluutil -ckpt -oraclebase /opt/oracle -writeckpt -name ROOTCRS_NODECONFIG -state FAIL"

2017-06-08 16:34:45: trace file=/opt/oracle/crsdata/zlab01/crsconfig/cluutil7.log

2017-06-08 16:34:45: Running as user grid: /opt/oracle/product/grid/bin/cluutil -ckpt -oraclebase /opt/oracle -writeckpt -name ROOTCRS_NODECONFIG -state FAIL

2017-06-08 16:34:45: s_run_as_user2: Running /bin/su grid -c ' echo CLSRSC_START; /opt/oracle/product/grid/bin/cluutil -ckpt -oraclebase /opt/oracle -writeckpt -name ROOTCRS_NODECONFIG -state FAIL '

2017-06-08 16:34:46: Removing file /var/tmp/sba6FsyQc

2017-06-08 16:34:46: Successfully removed file: /var/tmp/sba6FsyQc

2017-06-08 16:34:46: pipe exit code: 0

2017-06-08 16:34:46: /bin/su successfully executed

2017-06-08 16:34:46: Succeeded in writing the checkpoint:'ROOTCRS_NODECONFIG' with status:FAIL

2017-06-08 16:34:46: Install node: zlab01

2017-06-08 16:34:46: Current first node: zlab01

2017-06-08 16:34:46: Executing cmd: /opt/oracle/product/grid/bin/clsecho -p has -f clsrsc -m 287

2017-06-08 16:34:46: Executing cmd: /opt/oracle/product/grid/bin/clsecho -p has -f clsrsc -m 287

2017-06-08 16:34:46: Command output:

>  CLSRSC-287: FirstNode configuration failed

2017-06-08 16:34:46: ###### Begin DIE Stack Trace ######

2017-06-08 16:34:46:     Package         File                 Line Calling

2017-06-08 16:34:46:     --------------- -------------------- ---- ----------

2017-06-08 16:34:46:  1: main            rootcrs.pl            267 crsutils::dietrap

2017-06-08 16:34:46:  2: crsinstall      crsinstall.pm        2470 main::__ANON__

2017-06-08 16:34:46:  3: crsinstall      crsinstall.pm        2393 crsinstall::configNode

2017-06-08 16:34:46:  4: crsinstall      crsinstall.pm         412 crsinstall::perform_configNode

2017-06-08 16:34:46:  5: crsinstall      crsinstall.pm         296 crsinstall::CRSInstall

2017-06-08 16:34:46:  6: main            rootcrs.pl            410 crsinstall::new

2017-06-08 16:34:46: ####### End DIE Stack Trace #######

At first, I thought I had hit a known issue, but even after patching the grid home to version 12.1.0.2.170418 I am still facing the same problem.
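For reference, this is roughly how the installed patch level can be confirmed from the grid home (run as the grid software owner; the prompt format is just from my environment, the home path is the one shown in the logs above):

grid@zlab01.domain.my $ /opt/oracle/product/grid/OPatch/opatch lspatches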

Content of $ORACLE_BASE/diag/crs/zlab01/crs/trace/alert.log:

2017-06-08 16:34:40.278 [ORAROOTAGENT(4078)]CRS-5818: Aborted command 'start' for resource 'ora.zlab01.vip'. Details at (:CRSAGF00113:) {1:51244:429} in /opt/oracle/diag/crs/zlab01/crs/trace/crsd_orarootagent_root.trc.

2017-06-08 16:34:41.203 [ORAROOTAGENT(4078)]CRS-5017: The resource action "ora.zlab01.vip start" encountered the following error:

2017-06-08 16:34:41.203+Action for VIP aborted. For details refer to "(:CLSN00107:)" in "/opt/oracle/diag/crs/zlab01/crs/trace/crsd_orarootagent_root.trc".

2017-06-08 16:34:44.282 [CRSD(3937)]CRS-2757: Command 'Start' timed out waiting for response from the resource 'ora.zlab01.vip'. Details at (:CRSPE00163:) {1:51244:429} in /opt/oracle/diag/crs/zlab01/crs/trace/crsd.trc.

Content of $ORACLE_BASE/diag/crs/zlab01/crs/trace/crsd_orarootagent_root.trc :

2017-06-07 12:06:46.482644 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] (:CLSN00107:) clsn_agent::start {

2017-06-07 12:06:46.482712 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] Agent::getResAgent resname:ora.net1.network version:12.1.0.2.0

2017-06-07 12:06:46.482886 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] Agent::Agent pAgent:130f090 resName:130f118 ora.net1.network

2017-06-07 12:06:46.483033 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] Agent::setResType setResType type:ora.network.type

2017-06-07 12:06:46.483134 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] NetworkAgent::init enter {

2017-06-07 12:06:46.521045 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] /usr/sbin/ifconfig -a | /usr/bin/grep  groupname returned non existent IPMP

2017-06-07 12:06:46.646306 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] VendorType=4

2017-06-07 12:06:46.646358 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] Checking if vnic0 Interface is fine

2017-06-07 12:06:46.646552 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] ifName = vnic0

2017-06-07 12:06:46.647033 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] ifname=vnic0

2017-06-07 12:06:46.647049 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] subnetmask=255.255.255.0

2017-06-07 12:06:46.647068 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] subnetnumber=172.24.19.0

2017-06-07 12:06:46.647086 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] Interface Name = vnic0

2017-06-07 12:06:46.647315 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] Primary IP = 172.24.19.12

2017-06-07 12:06:46.647329 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] NetworkAgent::init exit }

2017-06-07 12:06:46.675636 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] Ocr Context init default level 24182896

2017-06-07 12:06:46.676944 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] Ocr Context init default level 24182896

2017-06-07 12:06:46.689459 :CLSDYNAM:8: [ora.net1.network]{1:16600:320} [start] (:CLSN00107:) clsn_agent::start }

2017-06-07 12:06:46.689476 :    AGFW:8: {1:16600:320} Command: start for resource: ora.net1.network zlab01 1 completed with status: SUCCESS

2017-06-07 12:06:46.690264 :CLSFRAME:1:  TM [MultiThread] is changing desired thread # to 3. Current # is 2

2017-06-07 12:06:46.690636 :    AGFW:9: {1:16600:320} Agent sending reply for: RESOURCE_START[ora.net1.network zlab01 1] ID 4098:763

2017-06-07 12:06:46.691123 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] start { acquire thndMX:16ceee0

2017-06-07 12:06:46.691134 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] start pThnd:168c9d8

2017-06-07 12:06:46.691230 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] start 2 release thndMX:16ceee0 }

2017-06-07 12:06:46.691549 :    AGFW:9: {1:16600:320} ora.net1.network zlab01 1 state changed from: STARTING to: ONLINE

2017-06-07 12:06:46.691608 :    AGFW:9: {1:16600:320} Started implicit monitor for [ora.net1.network zlab01 1] interval=1000 delay=1000

2017-06-07 12:06:46.691752 :    AGFW:9: {1:16600:320} Agent sending last reply for: RESOURCE_START[ora.net1.network zlab01 1] ID 4098:763

2017-06-07 12:06:46.771222 : USRTHRD:11: {1:16600:320} Thread:[VipRelocate:] isRunning is reset to false here

2017-06-07 12:06:46.874514 :    AGFW:9: {1:16600:334} Agent received the message: RESOURCE_ADD[ora.zlab01.vip 1 1] ID 4356:804

2017-06-07 12:06:46.875138 : CLSDADR:9: (:CLSD0023:)dynamic component [R3212657] mapped to [CLSDYNAM]

2017-06-07 12:06:46.876922 :    AGFW:9: {1:16600:334} ID = [ora.zlab01.vip 1 1] Intermediate Timeout = [0]

2017-06-07 12:06:46.876946 :    AGFW:9: {1:16600:334} Added new resource: ora.zlab01.vip 1 1 to the agfw

2017-06-07 12:06:46.877596 :    AGFW:9: {1:16600:334} Agent sending last reply for: RESOURCE_ADD[ora.zlab01.vip 1 1] ID 4356:804

2017-06-07 12:06:46.901183 :    AGFW:9: {1:16600:334} Agent received the message: RESOURCE_START[ora.zlab01.vip 1 1] ID 4098:806

2017-06-07 12:06:46.901203 :    AGFW:9: {1:16600:334} Preparing START command for: ora.zlab01.vip 1 1

2017-06-07 12:06:46.901214 :    AGFW:9: {1:16600:334} ora.zlab01.vip 1 1 state changed from: UNKNOWN to: STARTING

2017-06-07 12:06:46.902781 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] (:CLSN00107:) clsn_agent::start {

2017-06-07 12:06:46.902828 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] Agent::getResAgent resname:ora.zlab01.vip version:12.1.0.2.0

2017-06-07 12:06:46.902888 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] Agent::Agent pAgent:1894010 resName:1894098 ora.zlab01.vip

2017-06-07 12:06:46.903019 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] Agent::setResType setResType type:ora.cluster_vip_net1.type

2017-06-07 12:06:46.941705 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] /usr/sbin/ifconfig -a | /usr/bin/grep  groupname returned non existent IPMP

2017-06-07 12:06:47.012427 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] InterfaceName = vnic0

2017-06-07 12:06:47.012442 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] InterfaceNameList = vnic0

2017-06-07 12:06:47.012453 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] NetworkMask = 255.255.255.0

2017-06-07 12:06:47.012463 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] NetworkAuto = static

2017-06-07 12:06:47.012491 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] AddressType = IPV4

2017-06-07 12:06:47.012501 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] SubNet Number = 172.24.19.0

2017-06-07 12:06:47.012522 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] Checking if vnic0 Interface is fine

2017-06-07 12:06:47.012852 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] ifName = vnic0

2017-06-07 12:06:47.013196 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] ifname=vnic0

2017-06-07 12:06:47.013208 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] subnetmask=255.255.255.0

2017-06-07 12:06:47.013219 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] subnetnumber=172.24.19.0

2017-06-07 12:06:47.013235 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] InterfaceName = vnic0

2017-06-07 12:06:47.015770 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] HostName zlab01-vip.domain.my translated to 172.24.19.14

2017-06-07 12:06:47.015917 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] Interface Name = vnic0

2017-06-07 12:06:47.015928 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] Ip Address = 172.24.19.14

2017-06-07 12:06:47.016047 : USRTHRD:10: {1:16600:334} Thread:OcrCreateKeyValueThread[] start { acquire thndMX:1894458

2017-06-07 12:06:47.016058 : USRTHRD:10: {1:16600:334} Thread:OcrCreateKeyValueThread[] start pThnd:168c8b8

2017-06-07 12:06:47.016103 : USRTHRD:10: {1:16600:334} Thread:OcrCreateKeyValueThread[] start 2 release thndMX:1894458 }

2017-06-07 12:06:47.016220 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] VipActions::startIp {

2017-06-07 12:06:47.048111 : USRTHRD:12: {1:16600:334} Ocr Context init default level 24182896

2017-06-07 12:06:47.179968 : USRTHRD:12: {1:16600:334} Thread:OcrCreateKeyValueThread[] isRunning is reset to false here

2017-06-07 12:06:47.516838 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] Failed to check 172.24.19.14 on vnic0

2017-06-07 12:06:47.516852 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] (null) category: 0, operation: , loc: , OS error: 0, other:

2017-06-07 12:06:47.544635 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] /usr/sbin/ifconfig -a | /usr/bin/grep  groupname returned non existent IPMP

2017-06-07 12:06:47.668685 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] VendorType=4

2017-06-07 12:06:47.692764 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] stop {

2017-06-07 12:06:47.692772 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread acquire lock:16ceee0

2017-06-07 12:06:47.692777 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread join pThnd:168c9d8

2017-06-07 12:06:47.692799 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread close pThnd:168c9d8

2017-06-07 12:06:47.692808 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread destroy pThnd:168c9d8

2017-06-07 12:06:47.692819 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] stop release thndMx:16ceee0 }

2017-06-07 12:06:47.692824 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] start { acquire thndMX:16ceee0

2017-06-07 12:06:47.692829 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] start pThnd:168c9d8

2017-06-07 12:06:47.692871 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] start 2 release thndMX:16ceee0 }

2017-06-07 12:06:47.693083 :CLSFRAME:1:  TM [MultiThread] is changing desired thread # to 4. Current # is 3

2017-06-07 12:06:47.701542 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] /usr/sbin/ifconfig -a | /usr/bin/grep  groupname returned non existent IPMP

2017-06-07 12:06:47.716754 : USRTHRD:13: {1:16600:320} Thread:[VipRelocate:] isRunning is reset to false here

2017-06-07 12:06:47.825855 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] VendorType=4

2017-06-07 12:06:47.825927 : default:10: ICMP Ping from 172.24.19.12 to 172.24.19.14

2017-06-07 12:06:47.927096 : default:10: gipcWait failed.

2017-06-07 12:06:48.027398 : default:10: gipcWait failed.

2017-06-07 12:06:48.127783 : default:10: gipcWait failed.

2017-06-07 12:06:48.228131 : default:10: gipcWait failed.

2017-06-07 12:06:48.328539 : default:10: gipcWait failed.

2017-06-07 12:06:48.428985 : default:10: gipcWait failed.

2017-06-07 12:06:48.529452 : default:10: gipcWait failed.

2017-06-07 12:06:48.629893 : default:10: gipcWait failed.

2017-06-07 12:06:48.632569 : default:10: ICMP Ping from 172.24.19.12 to 172.24.19.14

2017-06-07 12:06:48.693964 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] stop {

2017-06-07 12:06:48.693977 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread acquire lock:16ceee0

2017-06-07 12:06:48.693986 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread join pThnd:168c9d8

2017-06-07 12:06:48.694000 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread close pThnd:168c9d8

2017-06-07 12:06:48.694007 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread destroy pThnd:168c9d8

2017-06-07 12:06:48.694016 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] stop release thndMx:16ceee0 }

2017-06-07 12:06:48.694023 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] start { acquire thndMX:16ceee0

2017-06-07 12:06:48.694031 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] start pThnd:168c9d8

2017-06-07 12:06:48.694079 : USRTHRD:8: {1:16600:320} Thread:[VipRelocate:] start 2 release thndMX:16ceee0 }

2017-06-07 12:06:48.718200 : USRTHRD:16: {1:16600:320} Thread:[VipRelocate:] isRunning is reset to false here

2017-06-07 12:06:48.733402 : default:10: gipcWait failed.

2017-06-07 12:06:48.833979 : default:10: gipcWait failed.

2017-06-07 12:06:48.934393 : default:10: gipcWait failed.

2017-06-07 12:06:49.034797 : default:10: gipcWait failed.

2017-06-07 12:06:49.135200 : default:10: gipcWait failed.

2017-06-07 12:06:49.235526 : default:10: gipcWait failed.

2017-06-07 12:06:49.335859 : default:10: gipcWait failed.

2017-06-07 12:06:49.436412 : default:10: gipcWait failed.

2017-06-07 12:06:49.439114 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] Adding 172.24.19.14 on vnic0

2017-06-07 12:06:49.508848 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] VipActions::startIp }

2017-06-07 12:06:49.693987 : USRTHRD:14: {1:16600:320} Thread:[VipRelocate:] stop {

2017-06-07 12:06:49.694002 : USRTHRD:14: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread acquire lock:16ceee0

2017-06-07 12:06:49.694010 : USRTHRD:14: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread join pThnd:168c9d8

2017-06-07 12:06:49.694043 : USRTHRD:14: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread close pThnd:168c9d8

2017-06-07 12:06:49.694051 : USRTHRD:14: {1:16600:320} Thread:[VipRelocate:] Thread stop:thread destroy pThnd:168c9d8

2017-06-07 12:06:49.694060 : USRTHRD:14: {1:16600:320} Thread:[VipRelocate:] stop release thndMx:16ceee0 }

2017-06-07 12:06:49.694067 : USRTHRD:14: {1:16600:320} Thread:[VipRelocate:] start { acquire thndMX:16ceee0

2017-06-07 12:06:49.694075 : USRTHRD:14: {1:16600:320} Thread:[VipRelocate:] start pThnd:168c9d8

2017-06-07 12:06:49.694119 : USRTHRD:14: {1:16600:320} Thread:[VipRelocate:] start 2 release thndMX:16ceee0 }

2017-06-07 12:06:49.718599 : USRTHRD:17: {1:16600:320} Thread:[VipRelocate:] isRunning is reset to false here

2017-06-07 12:06:50.009755 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] Failed to check 172.24.19.14 on vnic0

2017-06-07 12:06:50.009777 :CLSDYNAM:10: [ora.zlab01.vip]{1:16600:334} [start] (null) category: 0, operation: , loc: , OS error: 0, other:

...

2017-06-08 16:34:44.781687 :    AGFW:9: {1:51244:429} ora.zlab01.vip 1 1 state changed from: STARTING to: OFFLINE

2017-06-08 16:34:44.781839 :    AGFW:9: {1:51244:429} Agent sending last reply for: RESOURCE_START[ora.zlab01.vip 1 1] ID 4098:1255

2017-06-08 16:34:44.782041 : USRTHRD:69: {1:51244:429} VipAgent::sendFail2Srv {

2017-06-08 16:34:44.784778 : USRTHRD:69: {1:51244:429} VipAgent::sendFail2Srv }

2017-06-08 16:34:44.784806 : USRTHRD:69: {1:51244:429} Thread:[SendFail2SrvThread:] isRunning is reset to false here

2017-06-08 16:34:44.786887 :    AGFW:9: {1:51244:429} Agent received the message: RESOURCE_CLEAN[ora.zlab01.vip 1 1] ID 4100:1319

2017-06-08 16:34:44.786907 :    AGFW:9: {1:51244:429} Preparing CLEAN command for: ora.zlab01.vip 1 1

2017-06-08 16:34:44.786919 :    AGFW:9: {1:51244:429} ora.zlab01.vip 1 1 state changed from: OFFLINE to: CLEANING

2017-06-08 16:34:44.787655 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [clean] (:CLSN00106:) clsn_agent::clean {

2017-06-08 16:34:44.787705 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [clean] clean {

2017-06-08 16:34:44.789179 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [clean] VipActions::stopIp {

2017-06-08 16:34:44.789726 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [clean] VipActions::stop not needed IP is down

2017-06-08 16:34:44.789742 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [clean] VipActions::stopIp }

2017-06-08 16:34:44.789833 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [clean] VipAgent::stop() called

2017-06-08 16:34:44.790029 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [clean] clean  }

2017-06-08 16:34:44.790058 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [clean] (:CLSN00106:) clsn_agent::clean }

2017-06-08 16:34:44.790072 :    AGFW:63: {1:51244:429} Command: clean for resource: ora.zlab01.vip 1 1 completed with status: SUCCESS

2017-06-08 16:34:44.790565 :    AGFW:9: {1:51244:429} Agent sending reply for: RESOURCE_CLEAN[ora.zlab01.vip 1 1] ID 4100:1319

2017-06-08 16:34:45.291202 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [check] Failed to check 172.24.19.14 on vnic0

2017-06-08 16:34:45.291222 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [check] (null) category: 0, operation: , loc: , OS error: 0, other:

2017-06-08 16:34:45.291241 :CLSDYNAM:63: [ora.zlab01.vip]{1:51244:429} [check] VipAgent::checkIp returned false

2017-06-08 16:34:45.291431 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] stop {

2017-06-08 16:34:45.291440 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] Thread stop:thread acquire lock:180f520

2017-06-08 16:34:45.291447 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] Thread stop:thread join pThnd:1757848

2017-06-08 16:34:45.291459 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] Thread stop:thread close pThnd:1757848

2017-06-08 16:34:45.291467 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] Thread stop:thread destroy pThnd:1757848

2017-06-08 16:34:45.291475 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] stop release thndMx:180f520 }

2017-06-08 16:34:45.291483 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] start { acquire thndMX:180f520

2017-06-08 16:34:45.291490 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] start pThnd:1757848

2017-06-08 16:34:45.291498 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] thread spawn pThnd:1757848 thndType:1

2017-06-08 16:34:45.291549 : USRTHRD:63: {1:51244:429} Thread:[SendFail2SrvThread:] thread spawned release thndMX:180f520 }

2017-06-08 16:34:45.291937 :    AGFW:9: {1:51244:429} ora.zlab01.vip 1 1 state changed from: CLEANING to: OFFLINE

2017-06-08 16:34:45.292001 :    AGFW:9: {1:51244:429} Agent sending last reply for: RESOURCE_CLEAN[ora.zlab01.vip 1 1] ID 4100:1319

2017-06-08 16:34:45.292319 : USRTHRD:70: {1:51244:429} VipAgent::sendFail2Srv {

2017-06-08 16:34:45.299733 : USRTHRD:70: {1:51244:429} VipAgent::sendFail2Srv }

2017-06-08 16:34:45.299761 : USRTHRD:70: {1:51244:429} Thread:[SendFail2SrvThread:] isRunning is reset to false here

A quick summary of the setup (a configuration check is sketched below the list):

  • 2 nodes (zlab01 & zlab02)
  • public network on 172.24.19.0/24
  • private interconnect on 192.168.0.0/24
  • VIPs 172.24.19.14, .15 and .16
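To double-check what is actually registered in the cluster for the network and the node VIP, these are the standard srvctl/crsctl queries I would run (resource and node names are taken from the log above):

grid@zlab01.domain.my $ /opt/oracle/product/grid/bin/srvctl config network

grid@zlab01.domain.my $ /opt/oracle/product/grid/bin/srvctl config vip -n zlab01

grid@zlab01.domain.my $ /opt/oracle/product/grid/bin/crsctl status resource ora.zlab01.vip -f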

### Network config

root@zlab01.domain.my # ifconfig -a

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 3

        inet 127.0.0.1 netmask ff000000

vnic0: flags=140201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,L3PROTECT,PHYSRUNNING> mtu 1500 index 1

        inet 172.24.19.12 netmask ffffff00 broadcast 172.24.19.255

        ether 2:8:20:41:ee:5f

vnic1: flags=140001000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,L3PROTECT,PHYSRUNNING> mtu 9000 index 2

        inet 192.168.0.1 netmask ffffff00 broadcast 192.168.0.255

        ether 2:8:20:71:82:f0

lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 3

        inet6 ::1/128

vnic1: flags=160002000840<RUNNING,MULTICAST,IPv6,L3PROTECT,PHYSRUNNING> mtu 9000 index 2

        inet6 ::/0

        ether 2:8:20:71:82:f0
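Given the "Failed to check 172.24.19.14 on vnic0" messages in the trace, two basic checks I have in mind (standard Solaris 11 tools; vnic0 and the VIP address are taken from the output above): whether the VIP address already answers somewhere on the subnet, and what link protection is configured on the vnic, since it is flagged L3PROTECT:

root@zlab01.domain.my # ping 172.24.19.14

root@zlab01.domain.my # arp -a | /usr/bin/grep 172.24.19.14

root@zlab01.domain.my # dladm show-linkprop -p protection,allowed-ips vnic0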

If someone has a clue as to why I'm facing this, any advice on how to solve it would be much appreciated.

Thanks!

Regards

David
