
OVM 2.x P2V using Knoppix vs linux p2v

J Peters-Oracle, May 17 2012 (edited May 17 2012)
Regarding OVM 2.x P2V.

I have yet to use OVM 3.0.3 P2V. FYI, "lomount" is not included in the "xen-tools" package on OVM 3.0.3; "kpartx" is the
preferred method for loopback-mounting disk images there.

In general, the OVM P2V method (boot the source host from the OVM CDROM with "linux p2v") works well for simple Linux migrations.

Windows P2V migration takes more effort, and there are better tools available for it, e.g. VMware vCenter Converter. If you really want OVM, doing a P2V to VMware first and then a V2V from VMware to OVM is also a possibility. Converting the VMware VM with VirtualBox is another option.

Limitations of OVM P2V migration:

1) No resizing of filesystems. The entire block device is migrated to a disk image file.

2) P2V is offline. The source host is booted from the OVM CDROM, so it is out of service for the duration of the transfer.

3) HVM only. Personally I prefer HVM for all my VMs; some may beg to differ.

In a nutshell, OVM P2V boots an HTTP server on the source host which serves up a "vm.cfg" file and
the block device. You download both to your OVM host via "wget".
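For reference, a minimal HVM "vm.cfg" in the Xen config syntax that OVM 2.x uses might look like the sketch below. The names, paths, and sizes here are my own assumptions for illustration, not the verbatim output of "linux p2v"; check the file the P2V HTTP server actually serves.

```python
# Hypothetical OVM 2.x HVM guest config (values are assumptions).
kernel = '/usr/lib/xen/boot/hvmloader'
builder = 'hvm'
device_model = '/usr/lib/xen/bin/qemu-dm'
name = 'migrated_host'
memory = 2048
vcpus = 2
disk = ['file:/OVS/running_pool/migrated_host/System.img,hda,w']
vif = ['bridge=xenbr0,type=ioemu']
boot = 'c'
vnc = 1
```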

In general I have found Knoppix more portable for migrating legacy HP, IBM and Dell Linux hosts. The ability to force the network interface's duplex setting is quite useful when you have issues with auto-negotiation. Knoppix is also quite forgiving with oddball legacy servers compared to the JeOS environment that the OVM CDROM provides.

I also prefer to create the target disk image files on the OVM host, typically a "System.img" for "root" and "swap" and a "u01.img" for "/u01". Mounting the disk images from Dom0 is also employed, e.g. mkdir root ; lomount -diskimage System.img -partition 1 root
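A minimal sketch of creating those target images on the OVM host. The 12G and 50G sizes are assumptions; match them to the source host. Note the images still need a partition table and filesystems (e.g. fdisk/mkfs via losetup or kpartx from Dom0) before "lomount" can mount a partition:

```shell
# Create sparse target disk images on the OVM host (sizes are assumptions).
dd if=/dev/zero of=System.img bs=1M count=0 seek=12288   # 12G sparse image
dd if=/dev/zero of=u01.img    bs=1M count=0 seek=51200   # 50G sparse image
# After partitioning and formatting System.img, loopback-mount partition 1
# from Dom0 with the OVM 2.x lomount tool:
mkdir root
lomount -diskimage System.img -partition 1 root
```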

The source filesystem is mounted from Knoppix and migrated using "tar" and SSH, e.g.
mount /dev/sda1 /mnt/root ; cd /mnt/root ; tar czpf - . | ssh root@$OVM_HOST '( cd /OVS/$VM_NAME/root ; tar xvzpf - )'

Prior to starting the migrated VM you will have to update "grub.conf" and "device.map". I also find it useful to clear out the "blkid.tab" cache. Rebuilding the "initrd" is also recommended.

You can use a combination of "linux p2v" and Knoppix migration, e.g. "root" via "linux p2v" and "ssh | tar" for the data migration. It would also behoove you to verify the interface is plumbed properly, e.g. full duplex and Gbit. You could also boot the migrated DomU from its "root" and then transfer the data into it, rather than using the Dom0 "lomount" option; Dom0 could get pegged doing this migration.

The largest "linux p2v" block device migration I have performed was ~500G from a legacy HP server with a 100Mbit interface. Example
of a "linux p2v" 270G disk image transfer from a legacy HP server:

Transfer completed:

root@$OVM_HOST# tail -5 nohup.out
284519100K .......... .......... .......... .......... .......... 99% 2.54M 0s
284519150K .......... .......... .......... .. 100% 3.84M=16d24h

09:37:18 (206 KB/s) - `System-c0d1.img' saved 291347642880/291347642880

I believe the interface autonegotiated to 10Mbit and half-duplex; "16d" == 16 days.

If I had used Knoppix I could have forced 100Mbit and full duplex.
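The 16-day figure is consistent with the byte count and rate in the log; a quick shell arithmetic check (integer division, so it rounds down):

```shell
# 291347642880 bytes at ~206 KB/s, converted to days (integer math).
bytes=291347642880
rate=$((206 * 1024))            # ~206 KB/s in bytes per second
secs=$((bytes / rate))
echo "$((secs / 86400)) days"   # prints "15 days", i.e. just under 16 days
```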

Another advantage of the "ssh | tar" transfer is that only the contents of the filesystem are transferred, not the
entire block device: if /u01 uses 40G of a 50G block device, only the 40G is transferred rather than the full 50G. If the legacy
system uses LVM for its block devices, the "ssh | tar" method will migrate it to a simple filesystem; there is no need for LVM inside a VM.
Using LVM in Dom0 to manage block devices for VMs is another story.