
11g R2 RAC on VMWare - ocfs2 problem

user130038, Jul 11 2011 (edited Jul 12 2011)
Hi there

I am creating a two-node 11gR2 RAC using VMware. I have successfully installed Grid Infrastructure. The next step in the guide I am following is to configure OCFS2.

I installed the three required RPMs (ocfs2-tools-1.*, ocfs2-el5-* and ocfs2console-*) on both VMs as root. I ran "ocfs2console" and created the cluster configuration successfully. See the cluster.conf contents below:
[root@collabn1 Server]# more /etc/ocfs2/cluster.conf
node:
        ip_port = 7777
        ip_address = 172.16.100.51
        number = 0
        name = collabn1
        cluster = ocfs2
 
node:
        ip_port = 7777
        ip_address = 172.16.100.52
        number = 1
        name = collabn2
        cluster = ocfs2
 
cluster:
        node_count = 2
        name = ocfs2
I propagated the configuration to the 2nd node successfully using the console's menu option, and I can see the cluster.conf file with the same contents as above on the 2nd node as well.
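
To double-check the propagation, comparing the file checksum and the O2CB status on each node should be enough, e.g.:
[root@collabn1 Server]# md5sum /etc/ocfs2/cluster.conf   --> should match on both nodes
[root@collabn1 Server]# /etc/init.d/o2cb status          --> shows whether the cluster stack is loaded/online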

The next step is to format the device with the OCFS2 filesystem. When I try this, I get an error:
ocfs2console
   TASKS
        FORMAT
              Error: No unmounted partitions
[root@collabn1 Server]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       22G  5.4G   15G  27% / 
/dev/sda1              99M   12M   82M  13% /boot
tmpfs                 760M  154M  606M  21% /dev/shm
.host:/               454G  404G   50G  90% /mnt/hgfs
/dev/hdc              2.9G  2.9G     0 100% /media/Enterprise Linux dvd 20100405
[root@collabn1 Server]# fdisk -l
 
Disk /dev/sda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        3263    26105625   8e  Linux LVM
 
Disk /dev/sdb: 3489 MB, 3489660928 bytes
255 heads, 63 sectors/track, 424 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         424     3405748+  da  Non-FS data
 
Disk /dev/sdc: 3489 MB, 3489660928 bytes
255 heads, 63 sectors/track, 424 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
 
   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         424     3405748+  da  Non-FS data
[root@collabn1 Server]#
Could someone please help me figure out what the problem is? Please let me know if any further info is required to analyze the issue.

Regards

Comments

585179
Hi Saeedamer,


Once you have created the cluster.conf file, the next step is to configure O2CB.

as root user run

# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure

When prompted, set "Load O2CB driver on boot" to yes and the heartbeat dead threshold to 61.
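
For reference, those answers end up in /etc/sysconfig/o2cb, which should look roughly like this afterwards (a sketch; the exact variable names can vary between ocfs2-tools releases):

O2CB_ENABLED=true
O2CB_BOOTCLUSTER=ocfs2
O2CB_HEARTBEAT_THRESHOLD=61
O2CB_IDLE_TIMEOUT_MS=30000
O2CB_KEEPALIVE_DELAY_MS=2000
O2CB_RECONNECT_DELAY_MS=2000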

Once O2CB has been configured you can format the partition using the command below:

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocrvote /dev/sdb1 --> example


Then try mounting the OCFS2 volume:

# mount -t ocfs2 -o datavolume,nointr -L "ocrvote" /u02 --> example


Then modify /etc/fstab and add the line below:

LABEL=ocrvote /u02 ocfs2 _netdev,datavolume,nointr 0 0
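
Once mounted you can verify the volume, and see which cluster nodes have it mounted, with the ocfs2-tools utilities, e.g.:

# mounted.ocfs2 -d   --> lists detected ocfs2 devices with label and UUID
# mounted.ocfs2 -f   --> shows which cluster nodes currently have each volume mounted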


Hope this helps


Cheers
user130038
Hi Zheng

Here is the output from node#1:
[root@collabn1 Server]# /etc/init.d/o2cb offline ocfs2
[root@collabn1 Server]# /etc/init.d/o2cb unload
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
[root@collabn1 Server]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]: 
Cluster to start on boot (Enter "none" to clear) [ocfs2]: 
Specify heartbeat dead threshold (>=7) [31]: 61
Specify network idle timeout in ms (>=5000) [30000]: 
Specify network keepalive delay in ms (>=1000) [2000]: 
Specify network reconnect delay in ms (>=2000) [2000]: 
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
[root@collabn1 Server]# mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocrvote /dev/sdb1 
mkfs.ocfs2 1.4.3
Cluster stack: classic o2cb
/dev/sdb1 is apparently in use by the system; will not make a ocfs2 volume here!
[root@collabn1 Server]# 
As per the guide, I have already created the partitions following the steps below:
>
Step 6.Create partitions on all of the newly created disks with fdisk.
a) run fdisk /dev/sdb You should see the message "Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel"
b) type "n" to create a new partition.
c) type "p" for a primary partition.
d) type partition number 1.
e) press enter twice to accept the default first/last cylinders.
f) type "t" to set the partition type.
g) enter partition type da (Non-FS data).
h) type "w" to write the partition table to disk.

Repeat these steps for sdc
>

I am kind of lost here.

Regards
585179
saeedamer wrote:
/dev/sdb1 is apparently in use by the system; will not make a ocfs2 volume here!
[root@collabn1 Server]#

As per guide, I have already created partitions as per steps below:
>
Step 6.Create partitions on all of the newly created disks with fdisk.
a) run fdisk /dev/sdb You should see the message "Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel"
b) type "n" to create a new partition.
c) type "p" for a primary partition.
d) type partition number 1.
e) press enter twice to accept the default first/last cylinders.
f) type "t" to set the partition type.
g) enter partition type da (Non-FS data).
h) type "w" to write the partition table to disk.

Repeat these steps for sdc
>

I am kind of lost here.

Regards
Hi Saeedamer,

Try deleting the partition on /dev/sdb and creating it again, but this time don't set the partition type (skip steps f and g).
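
Before re-running mkfs it may also be worth checking what is holding the partition, for example (assuming nothing else such as LVM, ASMLib or multipath has claimed the disk):

# fuser -v /dev/sdb1     --> any process holding the device open?
# dmsetup ls             --> any device-mapper/multipath mapping sitting on top of it?
# partprobe /dev/sdb     --> ask the kernel to re-read the new partition table
# cat /proc/partitions   --> confirm the kernel actually sees sdb1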

Cheers
user130038
Hi Zheng

I deleted the partitions and re-created them as per your suggestion and rebooted the VMs, but the result is the same. Please see below:
[root@collabn1 ~]# fdisk -l

Disk /dev/sda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        3263    26105625   8e  Linux LVM

Disk /dev/sdb: 3489 MB, 3489660928 bytes
255 heads, 63 sectors/track, 424 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         424     3405748+  83  Linux

Disk /dev/sdc: 3489 MB, 3489660928 bytes
255 heads, 63 sectors/track, 424 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         424     3405748+  83  Linux
[root@collabn1 ~]# 
[root@collabn1 ~]# /etc/init.d/o2cb offline ocfs2
Stopping O2CB cluster ocfs2: OK
[root@collabn1 ~]# /etc/init.d/o2cb unload
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
[root@collabn1 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]: 
Cluster to start on boot (Enter "none" to clear) [ocfs2]: 
Specify heartbeat dead threshold (>=7) [61]: 
Specify network idle timeout in ms (>=5000) [30000]: 
Specify network keepalive delay in ms (>=1000) [2000]: 
Specify network reconnect delay in ms (>=2000) [2000]: 
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
[root@collabn1 ~]# mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocrvote /dev/sdb1 
mkfs.ocfs2 1.4.3
Cluster stack: classic o2cb
/dev/sdb1 is apparently in use by the system; will not make a ocfs2 volume here!
[root@collabn1 ~]# 
user130038
Hi Zheng

I tried another thing: I deleted the partition /dev/sdb1 and did not create a new one, rebooted the VM and tried the commands you listed, and it looks like I am moving forward. See the output below (hope this is positive):
[root@collabn1 ~]# /etc/init.d/o2cb offline ocfs2
Stopping O2CB cluster ocfs2: OK
[root@collabn1 ~]# /etc/init.d/o2cb unload
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
[root@collabn1 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]: 
Cluster to start on boot (Enter "none" to clear) [ocfs2]: 
Specify heartbeat dead threshold (>=7) [61]: 
Specify network idle timeout in ms (>=5000) [30000]: 
Specify network keepalive delay in ms (>=1000) [2000]: 
Specify network reconnect delay in ms (>=2000) [2000]: 
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
[root@collabn1 ~]# mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocrvote /dev/sdb1
mkfs.ocfs2 1.4.3
Cluster stack: classic o2cb
Filesystem label=ocrvote
Block size=4096 (bits=12)
Cluster size=32768 (bits=15)
Volume size=3487465472 (106429 clusters) (851432 blocks)
4 cluster groups (tail covers 9661 clusters, rest cover 32256 clusters)
Journal size=67108864
Initial number of node slots: 4
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 1 block(s)
Formatting Journals: done
Formatting slot map: done
Writing lost+found: done
mkfs.ocfs2 successful

[root@collabn1 ~]# 
585179
Hi Saeedamer,

Yes, that is a good step forward; now you can continue with the remaining steps / testing.


Cheers
user130038
Hi Zheng,

I am not sure, but should I repeat the "mount" steps on both nodes? The reason I am asking is that when I rebooted node#2 I saw some error messages. They were similar to what I see now when I run the "mount" commands on node#2 (see below):
[root@collabn2 ~]# mount -t ocfs2 -o datavolume,nointr -L "ocrvote" /u51
mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdb1 on /u51. Check 'dmesg' for more information on this error.
[root@collabn2 ~]# mount -t ocfs2 -o datavolume,nointr -L "u52-backup" /u52
mount.ocfs2: Transport endpoint is not connected while mounting /dev/sdc1 on /u52. Check 'dmesg' for more information on this error.
[root@collabn2 ~]# 
Both the firewall and SELinux are DISABLED on both VMs.
585179
Hi,

Yes, the mount command must be run on all nodes, and the mount should be put in /etc/fstab so that the next time the server is rebooted you don't need to run the mount command again.

So the volume mounts fine on node 1, but node 2 throws the error below?

Post the log from dmesg and search for the text "ocfs2", for example:
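
# dmesg | grep -i -E 'ocfs2|o2net|o2hb'   --> recent cluster / heartbeat messages

It is also worth confirming that node 2 can actually reach node 1 on the O2CB port from cluster.conf (7777), e.g. with telnet if it is installed:

# telnet 172.16.100.51 7777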


Cheers
user130038
Thank you Zheng - I just restarted both VMs and things look OK now. Thank you so much for your time and help. I am going ahead with the next steps now and will post again if there are any further issues.


I am a bit confused after completing the above step. The guide has the following chapters:
Chapter 4: Grid Install (ASM)
   a. Setup ASMLib
   b. Cluster Verification Utility
   c. Install Grid Infrastructure
   d. Increase CRS Fencing Timeouts
   e. Setup ASM
Chapter 5: Grid Install (CFS/NFS)
   a. Setup OCFS2
   b. Cluster Verification Utility
   c. 11gR2 Bug workaround
   d. Install Grid Infrastructure
   e. Increase CRS Fencing Timeouts
What I suspect is that I was not supposed to follow Chapter#5 at all. I should have jumped to Chapter#6 (RAC Install). Can I possibly email you the guide?

EDIT: OK, I restored the backup of the VMs, jumped directly to the Oracle software install, and then successfully created the RAC database.
Thank you so much Zheng - xie xie

Please advise!

Best regards
Amer

Edited by: saeedamer on Jul 12, 2011 4:05 PM

Edited by: saeedamer on Jul 12, 2011 4:06 PM