Dear Team,
I have deployed a self-hosted engine 4.3.6 (hardware: 3 physical KVM nodes, OEL 7.8, with Gluster storage), but the final stage of the oVirt appliance deployment (stage 5) failed.
I can see that the oVirt appliance VM was created, but when I check the VM status it shows "bad". After several reboots on its own, the oVirt appliance eventually comes up.
However, when I reboot the appliance again, the same issue occurs. Is this a bug, or is there a workable solution?
hosted-engine --vm-status
--== Host kvm-1.ora.local (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : kvm-1.ora.local
Host ID : 1
Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Up"}
Score : 3400
stopped : False
Local maintenance : False
crc32 : 9c33127f
local_conf_timestamp : 126978
Host timestamp : 126978
Extra metadata (valid at timestamp):
	metadata_parse_version=1
	metadata_feature_version=1
	timestamp=126978 (Fri Aug 7 21:35:02 2020)
	host-id=1
	score=3400
	vm_conf_refresh_time=126978 (Fri Aug 7 21:35:02 2020)
	conf_on_shared_storage=True
	maintenance=False
	state=EngineStarting
	stopped=False
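For what it's worth, the "Engine status" line above is a JSON blob, and the combination "vm": "up" with "health": "bad" means the engine VM itself is running but the HA agent's liveness probe of the engine web service is failing. A minimal sketch of reading that field (the JSON string below is copied from the output above):

```python
import json

# "Engine status" value copied verbatim from hosted-engine --vm-status above
status = json.loads(
    '{"reason": "failed liveliness check", "health": "bad", '
    '"vm": "up", "detail": "Up"}'
)

# The VM itself is running ...
print(status["vm"])      # → up
# ... but the engine service inside it is not answering the health check
print(status["health"])  # → bad
print(status["reason"])  # → failed liveliness check
```

To the best of my understanding, together with state=EngineStarting this usually means the engine application inside the VM has not (yet) come up, so checking the engine's health page from the host (e.g. with curl against the engine FQDN) would be a reasonable next diagnostic step.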