Increase disk space on Oracle Linux running on VMware

jkinninger, Jun 9 2020 (edited Jun 10 2020)

We are looking at migrating our Oracle databases off AIX (pSeries) and onto Oracle Linux running on VMware. One question that came up was about expanding disks. With AIX on pSeries, all of our storage sits on Pure Storage, so we can simply grow a disk on the array and then grow it on the host.

What we were thinking is having two disks (VMDK files) per server. The first disk would contain the OS partitions, while the second disk would hold the databases, which we would want to set up in a similar fashion. From what I have used in the past and read in Google searches, we would need to alter the partition using fdisk: delete it and recreate it with the additional space. With Windows you can rescan and then expand the disk from within the GUI, so I didn't know whether System Storage Manager (SSM) has any features that would let us manage storage in Oracle Linux the way we do in AIX. I also know you can add additional disks and then add them to the pool, but ideally we would just increase the disk in vCenter and then run a few commands on the Linux server to grow the specific mount that houses the database files, if that makes sense.
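For what it's worth, this is the rough sequence I have pieced together so far for growing an existing LVM-backed mount after increasing the VMDK in vCenter; I'm not certain it's right, which is part of why I'm asking. The device, VG and LV names below are just placeholders for illustration:

# Make the kernel rescan the disk so it sees the larger VMDK (sdb is a placeholder)
echo 1 > /sys/class/block/sdb/device/rescan
# If the PV sits on a partition, grow the partition first (growpart comes from cloud-utils-growpart)
growpart /dev/sdb 1
# Tell LVM the physical volume got bigger
pvresize /dev/sdb1
# Hand the new space to the logical volume and resize the filesystem in the same step (-r)
lvextend -r -l +100%FREE /dev/vg_oradata/lv_db01

If that is basically all there is to it, then the one-disk-per-VG approach we use today should translate directly.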

Here is the AIX environment we have:

AIX uses its own version of LVM (logical volume manager). All of our VGs (volume groups) contain only one PV (physical volume), i.e. one disk. Since all PVs/disks are virtual, there is no point in having more than one PV per VG. If a VG needs to be increased, we just increase the existing PV in it (from the Pure GUI). I'm not sure how Linux handles it, but it would be nice if it worked the same way: when a disk needs to be increased, we would just increase the VMDK in vSphere and then increase the disk/volume in Linux. If Linux doesn't work that way (i.e. if you have to add another VMDK in order to expand a disk), we might need to re-evaluate the setup.
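For anyone comparing, my understanding of the rough Linux LVM equivalents of the AIX commands I use below is as follows (output formats will differ, and the mount point in the df example is just one of ours used as an illustration):

pvs                                       # physical volumes and the VG each belongs to (roughly lspv)
vgs                                       # volume groups with total and free size (roughly lsvg)
lvs -o lv_name,vg_name,lv_size,lv_path    # logical volumes per VG (roughly lsvg -l <vg>)
df -h /db_pace_a                          # filesystem size and usage (roughly df -tg)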

Using oracle_test as an example, here is a list of all the PVs (first column) with the associated VG (third column):

[root@oracle_test:/]
# lspv
hdisk1          00ca2d9783db3af0                    rootvg          active
hdisk0          00ca2d974278d29b                    oravg           active
hdisk2          00ca2d97509a5ae8                    vg_db01_1d      active
hdisk3          00ca2d97509a813a                    vg_db01_1t      active
hdisk4          00ca2d97509a9634                    vg_db01_1s      active
hdisk5          00ca2d97509b1b9a                    vg_db03_1d      active
hdisk6          00ca2d97509b35da                    vg_db03_1t      active
hdisk7          00ca2d97509b4f95                    vg_db03_1s      active
hdisk8          00ca2d9752a464a6                    vg_rcatt        active
hdisk9          00ca2d9752e9cae7                    vg_pace_d       active
hdisk10         00ca2d97ec65933d                    vg_opcl_d       active
hdisk11         00ca2d97ac0a1b1f                    vg_opdp_d       active
hdisk12         00ca2d97ac0a5ffb                    vg_ophr_d       active
hdisk14         00ca2d975adf0920                    vg_ecm_d        active
hdisk15         00ca2d975adfd2c0                    vg_ecm_t        active
hdisk16         00ca2d9786fd9282                    vg_pam_t        active
hdisk17         00ca2d978700d414                    vg_pam_x        active
hdisk18         00ca2d9787028fb2                    vg_pamin_t      active
hdisk19         00ca2d97870485ec                    vg_pamfw_t      active
hdisk20         00ca2d9787073450                    vg_pampfi_t     active
hdisk21         00ca2d97870ada64                    vg_pam_d        active
hdisk13         00ca2d97c3868134                    vg_opdp_q       active
hdisk22         00ca2d97c386bd14                    vg_opdp_s       active
hdisk23         00ca2d9777cb9358                    vg_ophr_q       active
hdisk24         00ca2d9777ce7ee3                    vg_ophr_s       active
hdisk25         00ca2d97d43ce9ce                    vg_ecm2_t       active
hdisk26         00ca2d97c01b14d0                    vg_ecm2_d       active
hdisk27         00ca2d976f4511a5                    vg_aspd_t       active
hdisk28         00ca2d97c0ffb7f3                    vg_opt_s        active
hdisk30         00ca2d97c1551575                    vg_acsf_s       active
hdisk31         00ca2d97c154766a                    vg_acsf_t       active
hdisk32         00ca2d97984cbd23                    vg_calyp_d      active
hdisk33         00ca2d9737c45620                    vg_opcl_s       active
hdisk34         00ca2d9737c86f15                    vg_opcl_q       active
hdisk35         00ca2d976b9535c9                    vg_ecms1_d      active
hdisk36         00ca2d976b96030a                    vg_ecm_q        active
hdisk37         00ca2d976b965ce4                    vg_ecms_s       active
hdisk38         00ca2d9776e02b5d                    vg_calyp_t      active
hdisk39         00ca2d9776e0806f                    vg_calyp_q      active
hdisk40         00ca2d97422095de                    vg_pace_q       active
hdisk29         00ca2d97707e8947                    vg_ecm1_d       active
hdisk41         00ca2d97b09324ba                    vg_pace_t       active
hdisk42         00ca2d9789762d77                    vg_pace_a       active

We create one VG for the AIX file systems (“rootvg”) and one VG for the Oracle file systems (“oravg”). These contain multiple LVs (logical volumes), as shown in the output below, which also lists the mount point (i.e. file system) on each LV:

[root@oracle_test:/]
# lsvg -l rootvg
rootvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
hd5                 boot       1       1       1    closed/syncd  N/A
hd6                 paging     453     453     1    open/syncd    N/A
hd8                 jfs2log    1       1       1    open/syncd    N/A
hd4                 jfs2       49      49      1    open/syncd    /
hd2                 jfs2       105     105     1    open/syncd    /usr
hd9var              jfs2       400     400     1    open/syncd    /var
hd3                 jfs2       120     120     1    open/syncd    /tmp
hd1                 jfs2       128     128     1    open/syncd    /home
hd10opt             jfs2       96      96      1    open/syncd    /opt
hd11admin           jfs2       1       1       1    open/syncd    /admin
lg_dumplv           sysdump    8       8       1    closed/syncd  N/A
livedump            jfs2       2       2       1    open/syncd    /var/adm/ras/livedump
hd7                 sysdump    78      78      1    open/syncd    N/A
fslv21              jfs2       64      64      1    open/syncd    /nmontmp

[root@oracle_test:/]
# lsvg -l oravg
oravg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
fslv00              jfs2       128     128     1    open/syncd    /u01
fslv01              jfs2       128     128     1    open/syncd    /u02
fslv02              jfs2       16      16      1    open/syncd    /audfiles
fslv03              jfs2       128     128     1    open/syncd    /iofiles
fslv04              jfs2       128     128     1    open/syncd    /workfiles
fslv05              jfs2       128     128     1    open/syncd    /redofiles
fslv06              jfs2       16      16      1    open/syncd    /ctlfiles1
fslv07              jfs2       16      16      1    open/syncd    /ctlfiles2
fslv20              jfs2       128     128     1    open/syncd    /ohbkfiles
fslv09              jfs2       800     800     1    open/syncd    /expfiles
fslv10              jfs2       640     640     1    open/syncd    /arcfiles
fslv11              jfs2       1536    1536    1    open/syncd    /frafiles
fslv12              jfs2       768     768     1    open/syncd    /tmpfiles
fslv13              jfs2       96      96      1    open/syncd    /db_work

Each database has its own PV/VG, with just a single LV/file system. For example:

[root@oracle_test:/]
# lsvg -l vg_pace_a
vg_pace_a:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
fslv55              jfs2       2000    2000    1    open/syncd    /db_pace_a
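On the Linux side, I picture recreating that per-database layout with something like the following; the device name, the VG/LV names and the choice of XFS are assumptions on my part, not a tested procedure:

# One VG per virtual disk, one LV/filesystem per database (all names are placeholders)
pvcreate /dev/sdc
vgcreate vg_pace_a /dev/sdc
lvcreate -n lv_db_pace_a -l 100%FREE vg_pace_a
mkfs.xfs /dev/vg_pace_a/lv_db_pace_a
mkdir -p /db_pace_a
mount /dev/vg_pace_a/lv_db_pace_a /db_pace_a
# plus a matching /etc/fstab entry so it mounts at boot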

Here is the df output you requested, which lets you see the file system sizes in GB:

[root@oracle_test:/]
# df -tg
Filesystem    GB blocks      Used      Free %Used Mounted on
/dev/hd4           6.12      3.14      2.98   52% /
/dev/hd2          13.12      5.46      7.67   42% /usr
/dev/hd9var       50.00     43.65      6.35   88% /var
/dev/hd3          15.00      0.55     14.45    4% /tmp
/dev/hd1          16.00     12.85      3.15   81% /home
/dev/hd11admin      0.12      0.00      0.12    1% /admin
/proc                 -         -         -    - /proc
/dev/hd10opt      12.00      2.05      9.95   18% /opt
/dev/livedump      0.25      0.00      0.25    1% /var/adm/ras/livedump
/dev/fslv00      128.00     51.75     76.25   41% /u01
/dev/fslv01      128.00     48.71     79.29   39% /u02
/dev/fslv02       16.00      0.46     15.54    3% /audfiles
/dev/fslv03      128.00     10.11    117.89    8% /iofiles
/dev/fslv20      128.00     70.94     57.06   56% /ohbkfiles
/dev/fslv04      128.00     19.83    108.17   16% /workfiles
/dev/fslv05      128.00     66.31     61.69   52% /redofiles
/dev/fslv06       16.00      1.38     14.62    9% /ctlfiles1
/dev/fslv07       16.00      1.38     14.62    9% /ctlfiles2
/dev/fslv09      800.00     82.82    717.18   11% /expfiles
/dev/fslv10      640.00    147.88    492.12   24% /arcfiles
/dev/fslv11     1536.00   1161.20    374.80   76% /frafiles
/dev/fslv12      768.00    318.44    449.56   42% /tmpfiles
/dev/fslv13       96.00      0.39     95.61    1% /db_work
/dev/fslv14      512.00    191.28    320.72   38% /db_db01_1d
/dev/fslv15      512.00    119.07    392.93   24% /db_db01_1t
/dev/fslv16      512.00     22.66    489.34    5% /db_db01_1s
/dev/fslv17      512.00     10.66    501.34    3% /db_db03_1d
/dev/fslv18      512.00    190.23    321.77   38% /db_db03_1t
/dev/fslv19      512.00      9.35    502.65    2% /db_db03_1s
/dev/fslv08       16.00     10.76      5.24   68% /db_rcatt
/dev/fslv21        8.00      6.05      1.95   76% /nmontmp
/dev/fslv22     2048.00   1590.49    457.51   78% /db_pace_d
/dev/fslv23       32.00     16.40     15.60   52% /db_opcl_d
/dev/fslv24       32.00     24.09      7.91   76% /db_opdp_d
/dev/fslv25       64.00     30.47     33.53   48% /db_ophr_d
/dev/fslv26      125.00     51.39     73.61   42% /db_ecm_d
/dev/fslv27      256.00     61.98    194.02   25% /db_ecm_t
/dev/fslv28      512.00    268.29    243.71   53% /db_pam_t
/dev/fslv29      512.00    260.28    251.72   51% /db_pam_x
/dev/fslv30      512.00    272.54    239.46   54% /db_pamin_t
/dev/fslv31      512.00    278.04    233.96   55% /db_pamfw_t
/dev/fslv32      512.00    277.91    234.09   55% /db_pamfi_t
/dev/fslv33      512.00    267.98    244.02   53% /db_pam_d
/dev/fslv34       64.00     30.03     33.97   47% /db_opdp_q
/dev/fslv35       64.00     23.03     40.97   36% /db_opdp_s
/dev/fslv36       64.50     30.97     33.53   49% /db_ophr_q
/dev/fslv37       40.00     26.59     13.41   67% /db_ophr_s
/dev/fslv38      128.00     23.48    104.52   19% /db_ecm2_d
/dev/fslv41      384.00    142.93    241.07   38% /db_opt_s
/dev/fslv42       64.00     33.70     30.30   53% /db_acsf_t
/dev/fslv43       64.00     19.65     44.35   31% /db_acsf_s
/dev/fslv44      250.00    121.37    128.63   49% /db_ecm2_t
/dev/fslv45      256.00     43.31    212.69   17% /db_calyp_d
/dev/fslv46      256.00     91.43    164.57   36% /db_opcl_s
/dev/fslv47      512.00    369.78    142.22   73% /db_opcl_q
/dev/fslv48       64.00     12.78     51.22   20% /db_ecms1_d
/dev/fslv49      232.00     47.10    184.90   21% /db_ecm_q
/dev/fslv50       65.00     18.90     46.10   30% /db_ecm_s
/dev/fslv51      256.00     33.24    222.76   13% /db_calyp_t
/dev/fslv52      256.00     18.31    237.69    8% /db_calyp_q
/dev/fslv53     2048.00   1570.36    477.64   77% /db_pace_q
/dev/fslv39       16.00      8.08      7.92   51% /db_aspd_t
/dev/fslv40      128.00     16.98    111.02   14% /db_ecm1_d
/dev/fslv54     2048.00   1616.22    431.78   79% /db_pace_t
/dev/fslv55     2000.00   1543.68    456.32   78% /db_pace_a
vmgridoral1t:/opt/wsfg      46.97     17.50     29.47   38% /opt/wsfg
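After a resize on the Linux side, I assume verifying the result would just be a matter of checking the block device and the filesystem, along these lines (device and mount point are placeholders):

lsblk /dev/sdb       # confirm the disk and its partition show the new size
df -h /db_pace_a     # confirm the filesystem actually grew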
