
Database Software


Shareplex vs Dataguard for Disaster recovery

User_GKEJF Mar 5 2017 — edited Mar 6 2017

Dear Oracle experts, I am looking for help with a design decision covering a DR database, a DW database, and the DW database's own disaster recovery database.

At our company, we are debating the use of SharePlex vs Data Guard for disaster recovery. Below is the scenario:

The current data (5 TB) lives on a mainframe, and the project is to move it to Oracle and set up the DW and DR databases. So we will have:

- Database A: high OLTP, 5 TB initial size, expected to grow fast (8 TB to 12 TB within a few months), on a powerful Vblock blade with 200 GB RAM, Oracle 12c RAC, etc. Database A is the source for the other three databases.
- A_DW: the data warehouse database for A, for internal use, residing in the same data center.
- A_DR: the disaster recovery instance of A, in a different data center.
- A_DW_DR: the DR database for A_DW, in the same data center as A_DR.

The design was done in the past, and I am reviewing/modifying it because I feel relying on a replication tool for DR is going to cause issues in the future.

The replication from A -> A_DW is set to happen through SharePlex (uni-directional replication, no LOBs or complex data types, hence SharePlex was chosen over GoldenGate). At the same time, another SharePlex session will replicate data from A -> A_DR, and Data Guard is set up to keep A_DW and A_DW_DR in sync. Another requirement driving SharePlex between A -> A_DR is that the replicated A_DR database can be opened for occasional read access, to verify some things on the fly without connecting to A. Hence the architects in the past chose SharePlex (over GoldenGate or Active Data Guard, on a cost basis).

My experience with GoldenGate (and with replication tools in general) has always been troublesome: job abends, conflicts, lags, constant DBA escalations, etc. Hence I am proposing to use Data Guard between A -> A_DR and leave SharePlex as is between A -> A_DW, and to point users with the occasional "read access requirement" at A_DW (instead of keeping A_DR open), or alternatively to go with Active Data Guard between A -> A_DR altogether.

We can set up the DBs and do a POC, but it is hard to do a realistic simulation of the real-time traffic, and the previous design might work fine if the traffic on A is not high.

We have a DBA who has worked with SharePlex before, and he seems to be of the opinion that if SharePlex has lag, then Data Guard will most likely also have lag. But based on the design of how replication tools work, wouldn't the replication tool be slower, since it reads from the redo logs, whereas Data Guard transmits the redo to the destination and applies it there? Also, how about vendor support for SharePlex vs Data Guard? I just feel more comfortable with Data Guard because it is built into Oracle: one vendor, one support channel, fast switchover and failover features, block corruption recovery, etc.

Please advise on:

1. Advantages and disadvantages of using SharePlex vs Data Guard for DR.

2. Whether we should consider GoldenGate (vs SharePlex).

3. SharePlex performance in general, and lag issues on high-OLTP systems.

4. Whether you would anticipate issues with the original design, especially in terms of performance on A and the lag built up between A and the remaining three databases.

Sorry for the lengthy question, and thank you for reading it through. My original question was three or four times longer, but I tried my best to cut it down while still explaining the design and my questions about it.

Thank you very much.

Comments

mseberg

Hello;

Data Guard advantages

1. Cost. Data Guard is a zero-cost option.

2. Data Guard protects against log and block corruption.

3. Snapshot standby.

4. The recovery (data files etc) can be done using the standby as a source.

It's kind of like buying a replacement for RMAN. The Oracle product works great, so why would you pay more?

GoldenGate's main feature is replication, not DR. I'm not sure I would compare these.

Best Regards

mseberg
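On point 3 above: the snapshot standby feature addresses the "occasional read access" requirement directly, because the standby can be opened read-write temporarily and then flashed back. A minimal sketch, assuming a Data Guard Broker configuration where the standby's name is a_dr (the name is hypothetical):

```sql
-- Convert the physical standby to an updatable snapshot standby.
-- Redo is still received (so protection continues) but not applied;
-- the Broker creates a guaranteed restore point automatically.
DGMGRL> CONVERT DATABASE 'a_dr' TO SNAPSHOT STANDBY;

-- ... run ad-hoc verification queries or tests against a_dr ...

-- Flash back, discard local changes, and resume redo apply.
DGMGRL> CONVERT DATABASE 'a_dr' TO PHYSICAL STANDBY;
```

Note that a snapshot standby is open read-write but falls behind on apply while converted; for continuous read-only access on a current standby, Active Data Guard (a paid option) is the fit.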

user1756862

I agree with mseberg. SharePlex is a replication tool; Data Guard is a DR tool. They are meant to do different things, and I wouldn't compare them.

You can make one tool work like the other by putting in extra effort and by sacrificing some of the "goodies" that come natively with each tool, but "why" is the question you need to ask yourself. It's the "square peg in a round hole" idiom: would it work? Maybe... but with a lot of pain. Keep it simple and use Data Guard for disaster recovery. Experienced DBAs know to keep it simple.

Data Guard comes free with Oracle. If the idea is to spend more, get more CPU and memory, and spend on advanced Oracle features like partitioning, advanced compression, etc. Also, if your disaster recovery DB has to be in open mode, I would consider Active Data Guard rather than relying on SharePlex to do it.

BPeaslandDBA

We have a DBA that has done shareplex before and he seems to be of opinion that if shareplex has lag, then dataguard most likely will also have a lag.

I would mostly disagree with that statement as far as today's Data Guard is concerned. If you have standby redo logs in place, your data loss to the standby will be minimal, even for those in Max Performance mode. See this for more info:

I don't use SharePlex, but I doubt it can do better than Data Guard with SRLs properly configured. Also, Oracle 12c introduced the Far Sync option for Data Guard, which can provide a zero-data-loss solution without the negative impact of Max Protect mode.


Cheers,
Brian
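Standby redo logs are a one-time setup. A hedged sketch (the group numbers, size, and thread count are assumptions; the usual rule of thumb is one more SRL group than online redo log groups per thread, sized to match the online logs):

```sql
-- On the standby (and ideally on the primary too, so roles can reverse):
-- assuming 3 online redo log groups of 1 GB in thread 1, add 4 SRL groups.
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 11 SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 12 SIZE 1G;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 13 SIZE 1G;
```

With SRLs in place, redo is written to the standby as it is generated (real-time apply) instead of waiting for archived log shipment, which is what keeps the lag minimal even in Max Performance mode.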

MAZHAR.

Data Guard doesn't have lag in today's versions, so it can be a good option. For the replication part of the scenario mentioned, I would prefer GoldenGate; it is a very good tool for replicating data in real time between different databases and operating systems.

User_GKEJF

Given a scenario of just replicating data from a very high-OLTP DB to two other destinations (1. a data warehouse DB in the same data center, and 2. a disaster recovery DB in a data center 1,000 miles away), which would be faster: Data Guard or SharePlex?

Would there be a lot of lag with SharePlex?

In other words, which is faster: transporting the redo/archive logs and applying them at the destination (Data Guard), or extracting SQL from the redo logs at the source, transporting those SQL statements 1,000+ miles, and applying them on the destination database (SharePlex)?
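One way to answer the "which is faster" question empirically in a POC is to measure the lag Data Guard itself reports on the standby, under representative load. A sketch using the standard v$dataguard_stats view (what counts as acceptable lag is up to you):

```sql
-- On the standby: current transport lag (redo not yet received)
-- and apply lag (redo received but not yet applied).
SELECT name, value, time_computed
  FROM v$dataguard_stats
 WHERE name IN ('transport lag', 'apply lag');
```

SharePlex exposes its own queue/lag statistics through its tooling, so the same workload can be measured on both sides of the comparison.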

Oratig-Oracle

Hi,

1) Archive transfer depends on the network; if your network is good enough, archives transfer immediately.

2) Archive log apply depends on your I/O bandwidth.

For example: I had a customer with a 20-node RAC primary and a 20-node RAC standby, a complete data warehouse generating 6 TB of archive logs per day. We used a physical standby with the Active Data Guard option, which allowed reports to run on the standby.

Hemant K Chitale

Remember that setting up DR is not just about bandwidth but latency as well. You may have high bandwidth but high latency too, simply because the DR site may be a significant distance from the live site. I don't know about SharePlex, but my experience with Data Guard is that latency also matters. With Data Guard you have to consider whether you want Maximum Protection, Availability, or Performance; high latency can affect the performance of transactions when doing SYNC redo transport for the first two levels.

Hemant K Chitale
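The latency trade-off above shows up concretely in the redo transport mode on the primary's destination parameter. A minimal sketch, assuming a standby TNS service named a_dr (the service and DB_UNIQUE_NAME are hypothetical):

```sql
-- ASYNC (Max Performance): primary commits do not wait on the standby,
-- so WAN latency does not slow transactions; small data-loss window.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=a_dr ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=a_dr';

-- SYNC AFFIRM (required for Max Availability/Protection): every commit
-- waits for the standby's acknowledgement, so the network round trip is
-- added to commit time -- this is where high latency hurts.
-- ALTER SYSTEM SET log_archive_dest_2 =
--   'SERVICE=a_dr SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=a_dr';
```

For a DR site 1,000 miles away, ASYNC (or Far Sync, as mentioned below) is the usual way to avoid paying the round-trip cost on every commit.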

BPeaslandDBA

I'd agree with all of that, but if latency is an issue, then implement Far Sync.

Cheers,
Brian


Post Details

Locked on Apr 3 2017
Added on Mar 5 2017
8 comments
1,182 views