
Clone A Multi-Terabyte Oracle Database in 5 Minutes


By Wissem EL KHLIFI, Oracle ACE

  

In an environment with multiple Oracle databases holding large amounts of data (multi-terabyte), the storage snapshot feature is very useful when you want to preserve the state of an Oracle database, make a clone, create a standby database, or simply back up the environment. The feature saves the current state of your Oracle storage while the database is either online (in open mode) or offline (the Oracle instance is shut down during the snapshot phase). When the Oracle database runs in NOARCHIVELOG mode, only an offline snapshot can capture a consistent state of the storage. In ARCHIVELOG mode, both online and offline snapshots are possible, because the archived logs covering the period between the SCN of the snapshot and the target SCN can be applied afterwards (database point-in-time recovery, DBPITR).

The snapshot feature is another method to back up an Oracle database. Compared with hard disk imaging, each snapshot requires less storage space than a full partition image file, and restoration is significantly faster. Backing up Oracle databases with Oracle Recovery Manager together with the snapshot feature gives a much safer, better protected Oracle database environment.

In this article, we will show how to use the HP 3PAR Thin Provisioning software, Oracle ASM utilities and the Oracle database point-in-time recovery procedure to clone a multi-terabyte Oracle production database (7 terabytes in this case) within 5 minutes. We will use the HP 3PAR InForm OS Command Line Interface, the command-line user interface for managing and configuring HP 3PAR Storage Systems, together with the Oracle Automatic Storage Management (ASM) commands for managing the Oracle storage. (Note: thin provisioning is a virtualization mechanism that improves the utilization efficiency of deployed storage capacity.)

Snapshot Creation

The following table describes the source and target Oracle databases. We call the source database (livedb in this case) the database to be cloned, and the target database (clonedb) the database that results from the clone procedure.

Source database name             livedb
Source database size             7 terabytes
Source database version          11.2.0.3
Source DB host name              jupiter
Source OS version                Oracle Linux 6
Source Oracle storage manager    ASM
Source ASM disk group            LIVEDB_DG
Source storage solution          HP 3PAR

Target database name             clonedb
Target database size             7 terabytes
Target database version          11.2.0.3
Target DB host name              jupiter
Target OS version                Oracle Linux 6
Target Oracle storage manager    ASM
Target ASM disk group            CLONEDB_DG
Target storage solution          HP 3PAR

Let's check the size of the source livedb database. As you can see, livedb is about 7 terabytes in size.

jupiter(livedb):/home/oracle>sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Feb 28 05:50:44 2015

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SYS:livedb SQL> SELECT A.DATA_SIZE + B.TEMP_SIZE + C.REDO_SIZE + D.CONTROLFILE_SIZE "total_size in GB"
  FROM (SELECT SUM(BYTES) / 1024 / 1024 / 1024 DATA_SIZE FROM DBA_DATA_FILES) A,
       (SELECT NVL(SUM(BYTES), 0) / 1024 / 1024 / 1024 TEMP_SIZE
          FROM DBA_TEMP_FILES) B,
       (SELECT SUM(BYTES) / 1024 / 1024 / 1024 REDO_SIZE FROM SYS.V_$LOG) C,
       (SELECT SUM(BLOCK_SIZE * FILE_SIZE_BLKS) / 1024 / 1024 / 1024 CONTROLFILE_SIZE
          FROM V$CONTROLFILE) D;

total_size in GB
----------------
      7036.39496

SYS:livedb SQL>

The next step is to check the ASM disks used by the livedb database.

SYS:livedb SQL> show parameter db_create

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_create_file_dest                  string      +LIVEDB_DG
db_create_online_log_dest_1          string      +LIVEDB_DG
db_create_online_log_dest_2          string
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string

We can see that livedb uses the LIVEDB_DG diskgroup. We use the asmcmd command to list the ASM disks:

JUPITER(+ASM):/home/oracle>asmcmd lsdsk -G LIVEDB_DG
Path
/dev/oracleasm/disks/JUPITER_3PAR_LIVE_0
/dev/oracleasm/disks/JUPITER_3PAR_LIVE_1
/dev/oracleasm/disks/JUPITER_3PAR_LIVE_2
/dev/oracleasm/disks/JUPITER_3PAR_LIVE_3
JUPITER(+ASM):/home/oracle>

The overall size allocated for the 4 ASM disks is 8 terabytes.

SQL> select name, total_mb from v$asm_disk where header_status='MEMBER' and group_number=35;

NAME                             TOTAL_MB
------------------------------ ----------
LIVEDB_DG_0011                    2097152
LIVEDB_DG_0010                    2097152
LIVEDB_DG_0009                    2097152
LIVEDB_DG_0008                    2097152

SQL>

Now we use the HP 3PAR InForm OS Command Line Interface to list the virtual volume set used by the Oracle database. We run the showvvset command to show all autonomic virtual volume groups, or sets.

3PAR700 cli% showvvset

14 JUPITER_3PAR_LIVE     JUPITER_3PAR_LIVE.0
                         JUPITER_3PAR_LIVE.1
                         JUPITER_3PAR_LIVE.2
                         JUPITER_3PAR_LIVE.3

From the output above, we can see a virtual volume set named JUPITER_3PAR_LIVE containing four virtual volumes (JUPITER_3PAR_LIVE.0, JUPITER_3PAR_LIVE.1, JUPITER_3PAR_LIVE.2 and JUPITER_3PAR_LIVE.3), all used by the Oracle livedb database.

The Oracle database livedb is in NOARCHIVELOG mode, so we cleanly shut down the database before taking the snapshot.

SYS:livedb SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SYS:livedb SQL> shutdown immediate;

After the clean shutdown, we create the snapshots using the following command:

3PAR700 cli% createsv -exp 1825d TPAR_CLONEDB.@count@ set:JUPITER_3PAR_LIVE

This command creates a snapshot backup of the JUPITER_3PAR_LIVE volume set. The -exp option sets the virtual volume expiration time, which we set to 1825 days. We have now created a virtual copy, or snapshot, of the virtual volume set using the createsv command. That's it: with a downtime of only 5 minutes, we have a snapshot copy of livedb.
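Before moving on, it is worth confirming that the virtual copies exist. A minimal check with the InForm CLI showvv command, assuming it accepts a name pattern as on recent InForm OS releases:

3PAR700 cli% showvv TPAR_CLONEDB*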

We can now start the livedb database again using the startup command.
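A minimal transcript, using the same prompts as above:

jupiter(livedb):/home/oracle>sqlplus / as sysdba
SYS:livedb SQL> startup;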

Next, we create a virtual volume set for easier management of the new TPAR_CLONEDB* copies, using the createvvset 3PAR command.

3PAR700 cli% createvvset TPAR_CLONEDB TPAR_CLONEDB*

We display the newly created set:

3PAR700 cli% showvvset | grep TPAR_CLONEDB

75 TPAR_CLONEDB            TPAR_CLONEDB.0
                           TPAR_CLONEDB.1
                           TPAR_CLONEDB.2
                           TPAR_CLONEDB.3

Manipulating the Snapshot for the Cloned Database

First, export the virtual volumes to the JUPITER host using the command createvlun.

createvlun TPAR_CLONEDB.0 auto JUPITER
createvlun TPAR_CLONEDB.1 auto JUPITER
createvlun TPAR_CLONEDB.2 auto JUPITER
createvlun TPAR_CLONEDB.3 auto JUPITER

Next, perform a LUN scan on all HBAs on the Jupiter host and reload the multipath maps, as sketched below.
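The exact commands vary by distribution; on Oracle Linux 6 a typical sequence, run as root, is a sysfs-level SCSI rescan followed by a multipath reload (host numbers differ per system):

# rescan every SCSI host (HBA) for new LUNs
for scan in /sys/class/scsi_host/host*/scan; do echo "- - -" > $scan; done
# rebuild the multipath maps and list them
multipath -r
multipath -ll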

We can get the list of LUNs using the HP3PARInfo -i command; this shows the LUNs used by the CLONEDB snapshots:

350002ac000000000020004d40000632e  2T  mpathxe  dm-154  TPAR_CLONEDB.0  JUPITER_3PAR_LIVE_0
350002ac000000000020004d50000632e  2T  mpathxb  dm-142  TPAR_CLONEDB.1  JUPITER_3PAR_LIVE_1
350002ac000000000020004d60000632e  2T  mpathxc  dm-143  TPAR_CLONEDB.2  JUPITER_3PAR_LIVE_2
350002ac000000000020004d70000632e  2T  mpathxd  dm-153  TPAR_CLONEDB.3  JUPITER_3PAR_LIVE_3

At this stage we need to rename the ASM disks from JUPITER_3PAR_LIVE* to TPAR_CLONEDB*, using oracleasm renamedisk from the ASMLIB command line.

/usr/sbin/oracleasm renamedisk -f /dev/dm-154 TPAR_CLONEDB_0
/usr/sbin/oracleasm renamedisk -f /dev/dm-142 TPAR_CLONEDB_1
/usr/sbin/oracleasm renamedisk -f /dev/dm-143 TPAR_CLONEDB_2
/usr/sbin/oracleasm renamedisk -f /dev/dm-153 TPAR_CLONEDB_3
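To check that the renames were picked up, we can rescan and list the ASMLIB disks (standard oracleasm subcommands):

/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks | grep TPAR_CLONEDB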

Another important step is to modify the ASM disk headers to reflect the new ASM diskgroup that the clone is going to use. From the output below, we can see that the disk headers still refer to the LIVEDB_DG diskgroup. We need to change the name from LIVEDB_DG to CLONEDB_DG.

[root@jupiter LIVEDB]# strings -a /dev/dm-154 | head -4
ORCLDISKTPAR_LIVEDB_0
LIVE_DG_0008
LIVEDB_DG
LIVE_DG_0008
[root@jupiter LIVEDB]# strings -a /dev/dm-142 | head -4
ORCLDISKTPAR_LIVEDB_1
LIVE_DG_0009
LIVEDB_DG
LIVE_DG_0009
[root@jupiter LIVEDB]# strings -a /dev/dm-143 | head -4
ORCLDISKTPAR_LIVEDB_2
LIVE_DG_0010
LIVEDB_DG
LIVE_DG_0010
[root@jupiter LIVEDB]# strings -a /dev/dm-153 | head -4
ORCLDISKTPAR_LIVEDB_3
LIVE_DG_0011
LIVEDB_DG
LIVE_DG_0011
[root@jupiter LIVEDB]#

To modify the ASM disk headers we can use the kfed command, which can read and write ASM metadata in disk headers and ASM metadata files. Please note that kfed in write mode is a powerful tool and can corrupt disk metadata if misused. Refer to the My Oracle Support document "ASM tools used by Support: KFOD, KFED, AMDU (Doc ID 1485597.1)" for more details on kfed usage.
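For example, before writing anything, kfed in read mode shows the two header fields we are about to change (kfdhdb.grpname holds the diskgroup name, kfdhdb.dskname the disk name):

# read-only inspection of one snapshot disk header
kfed op=read dev=/dev/oracleasm/disks/TPAR_CLONEDB_0 | egrep 'grpname|dskname'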

First, we change the owner and group of the TPAR_CLONEDB* disks from root to oracle:

chown -R oracle:dba /dev/oracleasm/disks/TPAR_CLONEDB*

Then we create a script to do the kfed work: it loops over the TPAR_CLONEDB* ASM disks and replaces the occurrences of LIVEDB_DG with CLONEDB_DG.

[root@jupiter LIVEDB]# cat /usr/local/oracle/bin/LIVEDB/LIVEDB_kfed.sh
for file in /dev/oracleasm/disks/TPAR_CLONEDB*
do
  echo "Processing disk $file ..."
  search=LIVEDB_DG
  replace=CLONEDB_DG
  # length of the new diskgroup name, for the header's length= field
  newlength=`echo $replace | wc -m`
  let newlength=$newlength-1
  shortname=`echo $file | cut -f 5 -d /`

  # line 24 of the kfed dump holds kfdhdb.grpname; rewrite the name and its length
  kfed op=read dev=$file | sed -e '24,24s/ '$search' / '$replace' /' -e '24,24s/length=.*/length='$newlength'/' > /tmp/$shortname.kfed
  kfed op=write dev=$file text=/tmp/$shortname.kfed CHKSUM=YES
done

After running the script as the oracle user, we check the ASM disk headers again. You can see that CLONEDB_DG now shows up.

[root@jupiter LIVEDB]# strings -a /dev/dm-154 | head -4
ORCLDISKTPAR_LIVEDB_0
LIVE_DG_0008
CLONEDB_DG
LIVE_DG_0008
[root@jupiter LIVEDB]# strings -a /dev/dm-142 | head -4
ORCLDISKTPAR_LIVEDB_1
LIVE_DG_0009
CLONEDB_DG
LIVE_DG_0009
[root@jupiter LIVEDB]# strings -a /dev/dm-143 | head -4
ORCLDISKTPAR_LIVEDB_2
LIVE_DG_0010
CLONEDB_DG
LIVE_DG_0010
[root@jupiter LIVEDB]# strings -a /dev/dm-153 | head -4
ORCLDISKTPAR_LIVEDB_3
LIVE_DG_0011
CLONEDB_DG
LIVE_DG_0011
[root@jupiter LIVEDB]#

The next step is to mount the new diskgroup CLONEDB_DG:

ALTER DISKGROUP CLONEDB_DG MOUNT;
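A quick sanity check that the diskgroup is mounted with the expected capacity, using asmcmd from the ASM environment:

JUPITER(+ASM):/home/oracle>asmcmd lsdg CLONEDB_DG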

Now that we have the new diskgroup mounted, we need to create a new pfile and controlfile for the cloned database.

We log in to the livedb database and issue the following commands:

alter database backup controlfile to trace as '/tmp/cr_ctl_clonedb_wissem.sql';
create pfile='/tmp/pfile_clonedb_wissem.ora' from spfile;

The next step is to replace all occurrences of livedb and LIVEDB_DG in the /tmp/cr_ctl_clonedb_wissem.sql and /tmp/pfile_clonedb_wissem.ora files; a sketch follows.
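A minimal sketch of that substitution with sed, assuming only those two names need to change (keep backup copies of both files first):

cp /tmp/cr_ctl_clonedb_wissem.sql /tmp/cr_ctl_clonedb_wissem.sql.bak
cp /tmp/pfile_clonedb_wissem.ora /tmp/pfile_clonedb_wissem.ora.bak
# diskgroup name first, then the database name in both cases
sed -i 's/LIVEDB_DG/CLONEDB_DG/g; s/livedb/clonedb/g; s/LIVEDB/CLONEDB/g' \
    /tmp/cr_ctl_clonedb_wissem.sql /tmp/pfile_clonedb_wissem.ora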

The clonedb controlfile script will look like this; please note that we replaced the "REUSE DATABASE" clause from the original controlfile trace with "SET DATABASE".

CREATE CONTROLFILE SET DATABASE "CLONEDB" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 80
    MAXLOGMEMBERS 3
    MAXDATAFILES 800
    MAXINSTANCES 8
    MAXLOGHISTORY 47280
LOGFILE
  GROUP 13 '+CLONEDB_DG/livedb/onlinelog/group_13.860.849881219' SIZE 1G BLOCKSIZE 512,
  GROUP 14 '+CLONEDB_DG/livedb/onlinelog/group_14.758.849881221' SIZE 1G BLOCKSIZE 512,
  GROUP 15 '+CLONEDB_DG/livedb/onlinelog/group_15.753.849881223' SIZE 1G BLOCKSIZE 512,
  GROUP 16 '+CLONEDB_DG/livedb/onlinelog/group_16.816.849881225' SIZE 1G BLOCKSIZE 512
DATAFILE
  '+CLONEDB_DG/livedb/datafile/system.833.865924661',
  ...
  >>> content truncated, too many datafiles to show
  ...
  '+CLONEDB_DG/livedb/datafile/apps_ts_tx_data.831.872086137'
CHARACTER SET UTF8
;

Finally, we start the database in nomount state, run the controlfile script, open the database with resetlogs, and add temporary files to the temporary tablespaces. (Note: you may also invoke the DBNEWID utility to change the DBID of the newly cloned database; see the sketch after the commands below.)

startup nomount pfile='/tmp/pfile_clonedb_wissem.ora';
@/tmp/cr_ctl_clonedb_wissem.sql
alter database open resetlogs;
alter tablespace TEMP add tempfile size 10G;
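If you do change the DBID, the DBNEWID utility (nid) runs against a mounted database; a minimal sketch, assuming OS authentication:

SYS:clonedb SQL> shutdown immediate;
SYS:clonedb SQL> startup mount pfile='/tmp/pfile_clonedb_wissem.ora';

jupiter(clonedb):/home/oracle>nid TARGET=/

nid prompts for confirmation and shuts the database down when it finishes; mount the database again and open it with RESETLOGS to complete the change.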

Conclusion

We have seen how combining the storage snapshot feature with the Oracle ASM utilities (ASMLIB, kfed, asmcmd) speeds up and simplifies the cloning process for multi-terabyte Oracle databases.


About the Author

Wissem is a Senior DBA with over 12 years of experience, specializing in Oracle HA solutions and Big Data. He works for Schneider Electric / APC Global Operations. Wissem has also worked for several international enterprise leaders in industries including banking, telecommunications, Internet and energy. He is the first Oracle ACE in Spain and holds many certifications, including OCP DBA.
