
Hard Partitioning of Guests (Pinning the vCPU for a DomU)

epretorious-work, Sep 27 2018 (edited Oct 11 2018)

First: There are about a dozen OVM 3.4.5 hosts in our environment, organized into two separate pools - each host with...

  • two 12-core CPUs,
  • 1 TB of RAM, and
  • iSCSI shared storage.

Second: We're hosting a few different Oracle RAC systems as Oracle Linux 7 (i.e., OL7) hardware-assisted VMs (i.e., HVMs), each with a dozen or more vCPUs and several hundred gigabytes of RAM allocated.

Third: As I understand it, RAC keeps the database in memory, and that causes quite a bit of memory thrashing.

Fourth: We're experiencing a memory leak on the Management Domain (i.e., Dom0).

After doing some reading on the InterWebs, it seems that splitting a guest's vCPUs across different physical CPUs (and, therefore, different memory banks) might not be best practice. For example, on a host with two guests, each of them an Oracle RAC system, it can easily be seen that each guest occupies threads (i.e., vCPUs) on both sockets...

[root@ovm01 ~]# xenpm get-cpu-topology
CPU     core    socket  node
CPU0     0       0       0
CPU1     0       0       0
CPU2     1       0       0
CPU3     1       0       0
CPU4     2       0       0
CPU5     2       0       0
CPU6     3       0       0
CPU7     3       0       0
CPU8     4       0       0
CPU9     4       0       0
CPU10    5       0       0
CPU11    5       0       0
CPU12    8       0       0
CPU13    8       0       0
CPU14    9       0       0
CPU15    9       0       0
CPU16    10      0       0
CPU17    10      0       0
CPU18    11      0       0
CPU19    11      0       0
CPU20    12      0       0
CPU21    12      0       0
CPU22    13      0       0
CPU23    13      0       0
CPU24    0       1       1
CPU25    0       1       1
CPU26    1       1       1
CPU27    1       1       1
CPU28    2       1       1
CPU29    2       1       1
CPU30    3       1       1
CPU31    3       1       1
CPU32    4       1       1
CPU33    4       1       1
CPU34    5       1       1
CPU35    5       1       1
CPU36    8       1       1
CPU37    8       1       1
CPU38    9       1       1
CPU39    9       1       1
CPU40    10      1       1
CPU41    10      1       1
CPU42    11      1       1
CPU43    11      1       1
CPU44    12      1       1
CPU45    12      1       1
CPU46    13      1       1
CPU47    13      1       1

[root@ovm01 ~]# for x in $(xm list | grep -v Name | awk '{ print $1}' ) ; do \
> xm vcpu-list $x | awk '{ printf "%-3s %-4s %-32s\n",$4,$3,$1}' | sort -g ; \
> done ;
CPU VCPU Name
23  10   0004fb0000060000434e47395bb7fc9e
24  0    0004fb0000060000434e47395bb7fc9e
27  5    0004fb0000060000434e47395bb7fc9e
28  6    0004fb0000060000434e47395bb7fc9e
29  9    0004fb0000060000434e47395bb7fc9e
30  1    0004fb0000060000434e47395bb7fc9e
32  2    0004fb0000060000434e47395bb7fc9e
36  11   0004fb0000060000434e47395bb7fc9e
38  3    0004fb0000060000434e47395bb7fc9e
41  7    0004fb0000060000434e47395bb7fc9e
42  8    0004fb0000060000434e47395bb7fc9e
45  4    0004fb0000060000434e47395bb7fc9e
CPU VCPU Name
0   6    0004fb00000600007d847ba5ee0aaef7
3   12   0004fb00000600007d847ba5ee0aaef7
5   14   0004fb00000600007d847ba5ee0aaef7
14  10   0004fb00000600007d847ba5ee0aaef7
15  9    0004fb00000600007d847ba5ee0aaef7
18  16   0004fb00000600007d847ba5ee0aaef7
20  7    0004fb00000600007d847ba5ee0aaef7
21  1    0004fb00000600007d847ba5ee0aaef7
21  4    0004fb00000600007d847ba5ee0aaef7
22  17   0004fb00000600007d847ba5ee0aaef7
24  13   0004fb00000600007d847ba5ee0aaef7
25  2    0004fb00000600007d847ba5ee0aaef7
26  15   0004fb00000600007d847ba5ee0aaef7
28  11   0004fb00000600007d847ba5ee0aaef7
31  8    0004fb00000600007d847ba5ee0aaef7
34  0    0004fb00000600007d847ba5ee0aaef7
44  5    0004fb00000600007d847ba5ee0aaef7
46  3    0004fb00000600007d847ba5ee0aaef7
...snip...
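
(As an aside: if I'm reading the xm(1) man page correctly, pinning can also be changed on the fly with xm vcpu-pin, along the lines of the sketch below, using the first guest from the listing above. As far as I can tell, though, that doesn't survive a stop+start of the guest, which is why I'm after a persistent method.)

# Runtime re-pinning sketch (my reading of the xm man page; not persistent across restarts):
[root@ovm01 ~]# xm vcpu-pin 0004fb0000060000434e47395bb7fc9e 0 24       # pin vCPU 0 to physical CPU 24
[root@ovm01 ~]# xm vcpu-pin 0004fb0000060000434e47395bb7fc9e all 24-47  # pin all vCPUs to socket 1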

So I decided to explore pinning each guest's vCPUs to a single socket in order to keep that guest's RAM consolidated on the same socket. I've read the whitepaper "Hard Partitioning with Oracle VM Server for x86", but I need some clarification:

  1. The section titled "Oracle VM 3: Configuring Hard Partitioning" advises using the Oracle VM Utilities (i.e., ovm_vmcontrol) to assign CPUs to a guest (a sketch of what I think that invocation looks like follows this list).
  2. But the section titled "Oracle VM 2: Configuring Hard Partitioning" advises editing the guest's configuration file (i.e., vm.cfg) and adding the line...

cpus='0-24'

...or some other range as desired.
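
If I'm reading the whitepaper correctly, the OVM 3 method would look something like the sketch below, run from the host where the Oracle VM Utilities are installed. The credentials and VM name here are placeholders, and the exact flags are my reading of the whitepaper's examples.

# Sketch of the OVM 3 / ovm_vmcontrol method (placeholder credentials and VM name):
# "vcpuset" pins the guest's vCPUs to the listed physical CPUs; "vcpuget" reads the pinning back.
[root@ovmm ~]# ovm_vmcontrol -u admin -p <password> -h localhost -v racnode1 -c vcpuset -s 24-47
[root@ovmm ~]# ovm_vmcontrol -u admin -p <password> -h localhost -v racnode1 -c vcpuget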

Is there really that much difference between OVM 2 and OVM 3? That is, do I need to use the Oracle VM Utilities to pin CPUs to a guest/DomU, or can I just edit the guest's vm.cfg file manually and stop+start the guest?
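
For what it's worth, here's the manual procedure I have in mind, assuming (and this is exactly the assumption I'm asking about) that OVM 3 honors the cpus= line in vm.cfg the same way OVM 2 does. I'm using xm directly just for illustration, with the repository path elided:

# Manual procedure sketch (assumes OVM 3 honors cpus= in vm.cfg like OVM 2 does):
[root@ovm01 ~]# xm shutdown 0004fb0000060000434e47395bb7fc9e   # stop the guest
  ...edit /OVS/Repositories/.../vm.cfg and add: cpus='24-47'   # pin to socket 1
[root@ovm01 ~]# xm create /OVS/Repositories/.../vm.cfg         # start the guest again
[root@ovm01 ~]# xm vcpu-list 0004fb0000060000434e47395bb7fc9e  # verify the new CPU affinity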

TIA,

Eric P.

Portland, Oregon
