I currently have two OVM 3.2.8 server pools running on HP DL380p Gen8 hardware that I am frustrated with. I have had a support request open with Oracle for 5+ months where, if I reboot some VMs, they come back up on the same hardware with no working network.

Our configuration uses two HP NC552SFP (SKU 614203-B21) 10GbE adapters in each server, connected using LACP (bonding mode 4). After some time the second port of the LACP connection goes into a "noOperMem trunk" state at the switch. This doesn't kill the currently working network connections; the failure only seems to show up, at random, on a reboot or live migrate. I can have some VMs on a host that are working fine, while others that I live migrate there, on the same VLAN and bridge, show as fully connected but can't even reach the default gateway. If I live migrate them to a different host they may or may not work.

This is highly frustrating when doing OS maintenance on VMs. For example, I patched 24 VMs last Thursday and 8 of them rebooted with no working network. They stayed that way until I either live migrated them to various other hosts, where they would just "start working", or, for some of the larger-memory ones, shut them down and moved them to different hosts. That extra work cost me an extra 3 hours of my time. I'm at the point where I'm ready to throw new adapters at all 16 of our hosts to resolve the issue if someone has a good, solid recommendation on what to use. Support has said it is a bug, but it has been with development for almost 2 months now with no progress visible on my side.
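In case it helps anyone compare notes, here is a rough sketch of the host-side check I'm describing: it reads /proc/net/bonding/<bond> and flags when the slaves of an 802.3ad bond end up with different Aggregator IDs, which I believe is what the switch-side "noOperMem" state corresponds to on the Linux end. The bond name "bond0" is just an assumed default, not necessarily what your OVM host names its bonds.

    #!/usr/bin/env python
    # Sketch: warn if the slaves of an 802.3ad bond are split across
    # different aggregators. Bond name defaults to "bond0" (an assumption).
    import sys

    def parse_bond(path):
        slaves = []        # list of (slave_name, aggregator_id)
        current = None
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line.startswith("Slave Interface:"):
                    current = line.split(":", 1)[1].strip()
                elif line.startswith("Aggregator ID:") and current:
                    slaves.append((current, line.split(":", 1)[1].strip()))
                    current = None
        return slaves

    def main():
        bond = sys.argv[1] if len(sys.argv) > 1 else "bond0"
        slaves = parse_bond("/proc/net/bonding/%s" % bond)
        if not slaves:
            print("no 802.3ad slave info found for %s" % bond)
            return 1
        for name, agg in slaves:
            print("%s: Aggregator ID %s" % (name, agg))
        if len(set(agg for _, agg in slaves)) > 1:
            print("WARNING: slaves are in different aggregators (LACP negotiation problem?)")
            return 2
        print("OK: all slaves share one aggregator")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Run it on the dom0 of each host (e.g. "python check_bond.py bond1" if that's what your bond is called); the warning case is the one that lines up with the second port dropping out of the trunk.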
The bug may not be publicly viewable - https://support.oracle.com/epmos/faces/BugDisplay?id=19833108
Matt