Data Guard FAQ – Interview Questions & Answers

Written by Choma

January 18, 2018

1. What are the types of Oracle Data Guard?

Oracle Data Guard standby databases are classified into two types, based on how they are created and on the method used to apply redo. They are as follows:

1. Physical standby (Redo Apply technology)

2. Logical standby (SQL Apply technology)

2. What are the advantages in using Oracle Data Guard?

The following are the main benefits of using the Oracle Data Guard feature in your environment:

1. High Availability.

2. Data Protection.

3. Offloading Backup operation to standby database.

4. Automatic Gap detection and Resolution in standby database.

5. Automatic Role Transition using Data Guard Broker.

 

2b. What are the uses of Oracle Data Guard?

a) Oracle Data Guard ensures high availability, data protection, and disaster recovery for enterprise data.

b) Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions.

c) With Data Guard, administrators can optionally improve production database performance by offloading resource-intensive backup and reporting operations to standby systems.

3. What are the different services available in Oracle Data Guard?

The following services are available in Oracle Data Guard:

1. Redo Transport Services.

2. Log Apply Services.

3. Role Transitions.

4. What are the different Protection modes available in Oracle Data Guard?

The following protection modes are available in Oracle Data Guard; you can use any one of them based on your application requirements:

1. Maximum Protection

2. Maximum Availability

3. Maximum Performance => This is the default protection mode. It provides the highest level of data protection that is possible without affecting the performance of the primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online redo log.

5. How do you check the protection mode of the primary database in your Oracle Data Guard setup?

You can check the protection mode of the primary database in your Oracle Data Guard setup with the following query:

SQL> select protection_mode from v$database;

PROTECTION_MODE
--------------------
MAXIMUM PERFORMANCE

6. How can the protection mode in Oracle Data Guard setup be changed?

You can change the protection mode on the primary database with the following statement, after setting the required attributes in the corresponding LOG_ARCHIVE_DEST_n parameter on the primary database for that standby database.

ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE [PROTECTION|PERFORMANCE|AVAILABILITY];

Example:

alter database set standby database to MAXIMIZE PROTECTION;
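Raising the mode to Maximum Availability or Maximum Protection also requires the remote destination to use synchronous redo transport. A minimal sketch, assuming the standby is reached through destination 2 and an example Oracle Net service name stdby_srvc:

-- On the primary: switch the remote destination to synchronous transport with disk acknowledgement
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc SYNC AFFIRM';

-- Then raise the protection mode (raising to Maximum Protection may additionally require
-- the primary to be restarted to the MOUNT state, depending on release)
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;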

7. What are the advantages of using Physical standby database in Oracle Data Guard?

Advantages of using Physical standby database in Oracle Data Guard are as follows.

■High Availability.

■Load balancing (Backup and Reporting).

■Data Protection.

■Disaster Recovery.

8. What is physical standby database in Oracle Data Guard?

 

A physical standby database is created as an exact, block-by-block copy of the primary database.

 

In a physical standby database, transactions that happen on the primary database are synchronized to the standby database using the Redo Apply method, which continuously applies the redo data received from the primary database to the standby database.

 

A physical standby database can offload backup and reporting activity from the primary database.

 

A physical standby database can be opened for read-only work, but redo apply does not happen during that time. From 11g onwards, however, using the Active Data Guard option (a separately licensed option), you can simultaneously open the physical standby database for read-only access and apply the redo received from the primary database.
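A rough sketch of the statements involved on the standby, first for plain read-only access and then for the Active Data Guard style of read-only with real-time apply (statement options may vary slightly by release):

-- Stop managed recovery and open the standby read-only (no redo apply while open, pre-11g behaviour)
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;

-- With the Active Data Guard option (11g): restart real-time apply while the database stays open read-only
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;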

9. What is a logical standby database in Oracle Data Guard?

 

A logical standby database is created in a similar way to a physical standby database, and later you can alter the structure of the logical standby database.

A logical standby database uses the SQL Apply method to stay synchronized with the primary database.

 

This SQL Apply technology converts the received redo into SQL statements and continuously applies those SQL statements to the logical standby database to keep it consistent with the primary database.

 

The main advantage of a logical standby database compared to a physical standby database is that you can use the logical standby database for reporting while SQL Apply is running, i.e. the logical standby database must be open during SQL Apply.

 

Even though a logical standby database is opened in read/write mode, the tables that are maintained by SQL Apply remain protected from user modifications.

And although the logical standby database has this advantage over a physical standby database, it has some restrictions on data types, types of DDL, types of DML, and types of tables.
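Before building (or while maintaining) a logical standby you would normally check the primary for objects that SQL Apply cannot maintain. The dictionary view used below exists for exactly this purpose; run it on the primary:

-- Tables that SQL Apply will skip because of unsupported data types or storage attributes
SQL> SELECT DISTINCT owner, table_name FROM dba_logstdby_unsupported ORDER BY owner, table_name;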

10. What are the advantages of Logical standby database in Oracle Data Guard?

■Better usage of resources

■Data Protection

■High Availability

■Disaster Recovery

11. What is the usage of DB_FILE_NAME_CONVERT parameter in Oracle Data Guard setup?

DB_FILE_NAME_CONVERT – This parameter is used when the directory structure for data files on the standby database differs from the data file locations on the primary database. It is set on the standby database so that the data file names received from the primary are converted to the standby's directory structure. The same parameter can also be used when duplicating a database, to generate files in a different location.
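A minimal sketch, assuming the primary keeps data files under /u01/oradata/prim/ and the standby under /u02/oradata/stby/ (both paths are examples). Both parameters are static, so they need SCOPE=SPFILE and a restart:

-- Set on the standby so file names received from the primary are converted to the standby's structure
SQL> ALTER SYSTEM SET DB_FILE_NAME_CONVERT='/u01/oradata/prim/','/u02/oradata/stby/' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET LOG_FILE_NAME_CONVERT='/u01/oradata/prim/','/u02/oradata/stby/' SCOPE=SPFILE;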

 

12. What are the services required on the primary and standby database?

The services required on the primary database are:

Log Writer Process (LGWR) – Collects redo information and updates the online redo logs. It can also create local archived redo logs and transmit online redo to standby databases.

Archiver Process (ARCn) – One or more archiver processes make copies of online redo logs either locally or remotely for standby databases.

Fetch Archive Log (FAL) Server – Services requests for archive redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each FAL request.

Log network server (LNS): LNS is used on the primary to initiate a connection with the standby database.

The services required on the standby database are:

Fetch Archive Log (FAL) Client – Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap sequence.

Remote File Server (RFS) – Receives archived and/or standby redo logs from the primary database.

Archiver (ARCn) Processes – Archives the standby redo logs applied by the managed recovery process (MRP).

Managed Recovery Process (MRP) – Applies archive redo log information to the standby database.

13. What is RTS (Redo Transport Services) in Data Guard?

It controls the automated transfer of redo data from the production database to one or more archival destinations. The redo transport services perform the following tasks:

a) Transmit redo data from the primary system to the standby systems in the configuration.

b) Manage the process of resolving any gaps in the archived redo log files due to a network failure.

c) Automatically detect missing or corrupted archived redo log files on a standby system and automatically retrieve replacement archived redo log files from the primary database or another standby database.

d) Control the automated transfer of redo data from a database destination to one or more destinations. Redo transport services also manage the process of resolving any gaps in the archived redo log files due to a network failure.

14. How to delay the application of logs to a physical standby?

A standby database automatically applies redo logs when they arrive from the primary database. But in some cases, we want to create a time lag between the archiving of a redo log at the primary site, and the application of the log at the standby site.

Modify the Log_Archive_Dest_n initialization parameter on the primary database to set a delay for the standby database.

Example: For 60min Delay:

ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc DELAY=60';

The DELAY attribute is expressed in minutes.

The archived redo logs are still automatically copied from the primary site to the standby site, but the logs are not immediately applied to the standby database. The logs are applied when the specified time interval expires.

15. How many standby databases we can create (in 10g/11g)?

Till Oracle 10g, 9 standby databases are supported.

From Oracle 11g R2, we can create 30 standby databases.

16. What are differences between physical, logical, snapshot standby and ADG (or) what are different types of standby databases?

Physical standby – in MOUNT state, the MRP process will apply the archives.

ADG – in READ ONLY state, MRP will apply the archives.

Logical standby – open in READ WRITE mode, the LSP process (SQL Apply) will run.

Snapshot standby databases – a physical standby database can be converted to a snapshot standby database, which will be in READ WRITE mode; you can do any kind of testing, then convert the snapshot standby database back to a physical standby database and start MRP, which will apply all pending archives.


18. What are the parameters we have to set on the primary/standby for Data Guard?

DB_UNIQUE_NAME

LOG_ARCHIVE_CONFIG

LOG_ARCHIVE_MAX_PROCESSES

DB_CREATE_FILE_DEST

DB_FILE_NAME_CONVERT

LOG_FILE_NAME_CONVERT

LOG_ARCHIVE_DEST_n

LOG_ARCHIVE_DEST_STATE_n

FAL_SERVER

FAL_CLIENT

STANDBY_FILE_MANAGEMENT
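The fragment below sketches how some of these could be set on the primary for a two-database configuration. The DB_UNIQUE_NAMEs prim and stdby and the net service name STDBY are placeholders, not values from this article:

-- Name the members of the configuration
SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(prim,stdby)';

-- Local archiving plus a remote destination that ships redo to the standby
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=prim';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=STDBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stdby';
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

-- Gap resolution and automatic data file management
SQL> ALTER SYSTEM SET FAL_SERVER='STDBY';
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;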

 

19. What is the use of FAL_SERVER & FAL_CLIENT? Is it mandatory to set these?

FAL_SERVER:

specifies the FAL (fetch archive log) server for a standby database. The value is an Oracle Net service name, which is assumed to be configured properly on the standby database system to point to the desired FAL server.

FAL_CLIENT:

specifies the FAL (fetch archive log) client name that is used by the FAL service, configured through the FAL_SERVER initialization parameter, to refer to the FAL client. The value is an Oracle Net service name, which is assumed to be configured properly on the FAL server system to point to the FAL client (standby database).
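On the standby these are typically set to point back to the primary; a short sketch using placeholder net service names PRIM (the primary) and STDBY (the standby itself):

-- On the standby: where to fetch missing archived logs from, and the name the FAL server should send them back to
SQL> ALTER SYSTEM SET FAL_SERVER='PRIM';
SQL> ALTER SYSTEM SET FAL_CLIENT='STDBY';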

20. How do you find out the backlog or log gap of the standby?

select round((sysdate - a.NEXT_TIME)*24*60) as "Backlog", m.SEQUENCE#-1 "Seq Applied", m.process, m.status
from v$archived_log a,
     (select process, SEQUENCE#, status from v$managed_standby where process like '%MRP%') m
where a.SEQUENCE# = (m.SEQUENCE#-1);

 

21. If you didn’t have access to the standby database and you wanted to find out what error has occurred in a data guard configuration, what view would you check in the primary database to check the error message?

You can check the v$dataguard_status view:

select message from v$dataguard_status;

 

22. How can u recover standby which is far behind from primary (or) without archive logs how can we make standby sync?

By using RMAN incremental backup.
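A rough sketch of the usual roll-forward procedure. The SCN 1234567 and the /tmp staging directory are placeholders, and the standby controlfile refresh that is often also needed is omitted for brevity:

-- 1. On the standby: find the SCN from which the incremental backup must start
SQL> SELECT MIN(CHECKPOINT_CHANGE#) FROM V$DATAFILE_HEADER;

-- 2. On the primary: take an incremental backup from that SCN
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/stby_roll_%U';

-- 3. Copy the pieces to the standby host, then catalog and apply them there
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
RMAN> CATALOG START WITH '/tmp/stby_roll';
RMAN> RECOVER DATABASE NOREDO;

-- 4. Restart managed recovery so normal redo apply resumes
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;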

 

23. What is a snapshot standby? (or) How can we give a physical standby to a user in READ WRITE mode, let them do updates, and then revert it back to a standby?

Till Oracle 10g: create a guaranteed restore point, open the standby read-write, let the user do updates, flash back to the restore point, and start MRP.

From Oracle 11g: convert the physical standby to a snapshot standby, let the user do updates, convert it back to a physical standby, and start MRP.
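The pre-11g approach sketched as commands, using an example restore point name before_test. Flashback logging and a flash recovery area are assumed, and a few intermediate steps from the full documented procedure (such as deferring redo shipping on the primary) are omitted:

-- On the standby: stop redo apply and mark the point to flash back to
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> CREATE RESTORE POINT before_test GUARANTEE FLASHBACK DATABASE;

-- Activate and open the standby read-write for the tests
SQL> ALTER DATABASE ACTIVATE STANDBY DATABASE;
SQL> STARTUP MOUNT FORCE;
SQL> ALTER DATABASE OPEN;

-- After testing: discard the changes and turn the database back into a physical standby
SQL> STARTUP MOUNT FORCE;
SQL> FLASHBACK DATABASE TO RESTORE POINT before_test;
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SQL> STARTUP MOUNT FORCE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;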

 

24. Why would I use Active Data Guard and not just add another node to my primary Oracle RAC cluster to enhance performance?

Oracle RAC offers many advantages for scalability and high availability that are well understood and embraced by thousands of Oracle customers. Active Data Guard is designed to address a different requirement where customers wish to physically isolate the overhead of processing ad-hoc queries and reports from their OLTP system by using a completely independent, synchronized replica of the production database. If the customer requirement can be addressed using read-only access to an up-to-date replica of the production database, then Active Data Guard is an ideal solution.

 

25. Why would I use Active Data Guard and not simply use SQL Apply (logical standby) that is included with Data Guard 11g?

If read-only access satisfies the requirement, Active Data Guard is a closer fit and therefore much easier to implement than any other approach. Active Data Guard supports all datatypes and is very simple to implement. An Active Data Guard replica can also easily support additional uses: offloading backups from the primary database, serving as an open read-write test system during off-peak hours (snapshot standby), and providing an exact copy of the production database for disaster recovery, fully utilizing standby servers, storage, and software while in the standby role.

 

26. Can Oracle's Data Guard be used on Standard Edition, and if so, how? How can you test that the standby database is in sync?

Yes and No.

The automated features of Data Guard are not available in the Standard Edition of Oracle. You can, however, still perform log shipping manually and write scripts to perform the steps. If you are on a Unix platform, you can write shell scripts that identify the logs and then use the scp or sftp command to ship them to the standby server. Then, on the standby server, identify which logs have not been applied, apply/recover them manually, and remove them once applied.

 

In Oracle Standard Edition, it is possible to run a standby database and update it *manually*. Roughly: put your production database in archivelog mode. Create a hot backup of the database and move it to the standby machine. Then create a standby controlfile on the production machine, and ship that file, along with all the archived redo log files, to the standby server. Once you have all these files assembled, place them in their proper locations, recover the standby database, and you're ready to roll. From this point on, you must manually ship and manually apply those archived redo logs to stay in sync with production.

To test your standby database, make a change to a table on the production server and commit the change. Then manually switch a logfile so those changes are archived. Manually ship the newest archived redo log file and manually apply it on the standby database. Then open your standby database in read-only mode and select from the changed table to verify those changes are available. Once you're done, shut down the standby and start it up again in standby mode.

 

27. What is the difference between Active Data Guard and the Logical Standby implementation of 10g Data Guard?

Active Data Guard is a physical standby that stays open read-only while redo apply continues, so we can use it for reporting and queries against current data without compromising data protection; redo still streams from production while it is open. A logical standby, by contrast, uses SQL Apply and is open read-write, which allows additional local objects but comes with data type and DDL restrictions.

If we need to do destructive things on a physical standby (drop tables, change data, whatever – run a test, perhaps with Real Application Testing), we can open it read-write as a snapshot standby and then convert it back.

 

28. With the availability of Active Data Guard, what role does SQL Apply (logical standby) continue to play?

Use SQL Apply for the following requirements:

(a) when you require read-write access to a synchronized standby database but do not modify primary data,

(b) when you wish to add local tables to the standby database that can also be updated, or

(c) when you wish to create additional indexes to optimize read performance. 

The ability to handle local writes makes SQL Apply better suited to packaged reporting applications that often require write access to local tables that exist only at the target database. SQL Apply also provides rolling upgrade capability for patchsets and major database releases.  This rolling upgrade functionality can also be used by physical standby databases beginning with Oracle 11g using Transient Logical Standby.

 

29. What is Data Guard?

Oracle Data Guard is a disaster recovery solution from Oracle Corporation that is used extensively in the industry for primary site failures and for failover and switchover scenarios.

 

30. If you have a low-bandwidth WAN network, what can you do to improve the Oracle 11g data guard configuration in a GAP detected situation?

Oracle 11g introduces the capability to compress redo log data as it transports over the network to the standby database. It can be enabled using the compression parameter. Compression becomes enabled only when a gap exists and the standby database needs to catch up to the primary database.

alter system set log_archive_dest_1='SERVICE=DBA11GDR COMPRESSION=ENABLE';

31. In an Oracle 11g Logical Standby Data Guard configuration, how can you tell the dbms_scheduler to only run jobs in primary database?

In Oracle 11g, logical standby now provides support for DBMS_SCHEDULER. It is capable of running jobs in both the primary and the logical standby database. You can use the DBMS_SCHEDULER.SET_ATTRIBUTE procedure to set the database_role attribute, so that a job runs only when the database is operating in that particular role.
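A short sketch using a placeholder job name MY_JOB; the job will then run only while the database is operating in the PRIMARY role:

BEGIN
  -- Restrict this scheduler job to the primary role; valid values include 'PRIMARY' and 'LOGICAL STANDBY'
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'MY_JOB',
    attribute => 'database_role',
    value     => 'PRIMARY');
END;
/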

 

32. How can you control when an archive log can be deleted in the standby database in Oracle 11g?

In Oracle 11g, you can control it by using the log_auto_delete initialization parameter. The log_auto_delete parameter must be coupled with the log_auto_del_retention_target parameter to specify the number of minutes an archivelog is maintained until it is purged. The default is 24 hours. For archivelog retention to be effective, the log_auto_delete parameter must be set to true.

 

33. In Oracle 11g, how can you speed up backups on the standby database?

In Oracle 11g, block change tracking is now supported in the standby database. 
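Enabling block change tracking on the standby lets RMAN incremental backups taken there read only the changed blocks instead of scanning whole data files; note that using it on a standby requires the Active Data Guard option. The file location below is just an example:

-- On the standby: enable block change tracking for faster incremental backups
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/oradata/stby/bct.chg';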

 


 

35. How do I collect stats from an Active Data Guard replica given that it is open read-only?

This is described in Metalink Note 454848.1 that details installation and usage of standby statspack.  In summary:

Statspack/AWR cannot be executed on a standby due to its read-only nature. Therefore tuning the performance of the apply process involves manually collecting statistics. 

 

In 11gR1, using the new Active Data Guard option, users can now use statspack from the primary database to collect data from a standby database that is opened read-only and performing recovery.  The standby statspack is installed in a separate schema on the Primary database, STDBYPERF which is then propagated to the standby. This new user does not have DBA privileges and has no access to local V$ tables.

 

1.1 Statspack Installation 

 

The perfstat schema and statspack related objects must be present on the primary and standby prior to installing standby statspack. 

If the perfstat user does not exist in the primary site then you will need to create the perfstat schema using the following:

sqlplus / as sysdba

SQL> @?/rdbms/admin/spcreate.sql

 
 

The standby statspack installation script (sbcreate.sql) creates the standby statspack schema to hold the standby snapshots. The script asks for:

  1. A password for stdbyperf user
  2. Default tablespace
  3. Temporary tablespace

The script creates the 'stdbyperf' user and assigns it the selected default and temporary tablespaces. After the user has been created, it calls sbaddins.sql to add the first standby instance to the configuration.

Example:

SQL> @?/rdbms/admin/sbcreate

Note
Due to Bug 11899453, ensure global_names is set to 'FALSE' on the standby database while you run this script. See Note 11899453.8 for more details.

 

1.2 Add an Instance to Statspack Configuration

 

Log in to the primary as the ‘stdbyperf’ user and run the script sbaddins.sql to add a standby instance to the configuration. The script asks for:

  1. The TNS alias of the standby database instance
  2. The password of the perfstat user on the standby site

The script then creates a private database link to the perfstat schema on the standby site and a separate gathering package for each standby instance. Example:

SQL> connect stdbyperf/your_password 

SQL> @?/rdbms/admin/sbaddins

Enter the TNS alias of the standby instance (inst2_alias in this example) when prompted.

1.3 Collect Performance Data from a Standby Instance

 

The script sbaddins.sql creates a separate PL/SQL package for each standby instance. Run the snap procedure of those packages to collect the performance data. The statspack_<instance_name>.snap procedure accesses the data dictionary and stats$ views on the standby database via a database link connected to the original perfstat user and stores the data in stats$ tables on the primary instance. For example, while the standby is opened read-only, log in to the primary database and create the snap:

SQL> connect stdbyperf/your_password

SQL> exec statspack_<db_unique_name>_<instance_name>.snap

1.4 Generate Standby Statistics Report

 

The script sbreport.sql generates the standby statistics report. The script asks for: database id, instance number, high and low snapshots id to create the report. Example:

SQL>@?/rdbms/admin/sbreport

1.5 Purge a Set of Snapshots

 

The script sbpurge.sql purges a set of snapshots. The script asks for database id, instance number, low and high snapshots ids. The script purges all snapshots between the low and high snapshot ids for the given instance. Example:

SQL>@?/rdbms/admin/sbpurge

1.6 Delete an Instance from the Configuration

The script sbdelins.sql deletes an instance from the configuration and deletes the associated PL/SQL package. The script asks for the instance name. The snapshots are not automatically purged when the instance is deleted. After deleting the instance, you are no longer able to generate reports for that instance. Example:

SQL> @?/rdbms/admin/sbdelins

1.7 Drop Statspack Schema

 

The script sbdrop.sql drops the stdbyperf user and tables. The script must be run when connected to SYS (or internal).

Note that if you plan to reinstall Statspack, you may have to drop all public synonyms for STATSPACK and STAT$%%; otherwise you may get an error when trying to run spcreate again.

Example:

SQL> connect / as sysdba

SQL> @?/rdbms/admin/sbdrop

2. New Statistics Collected

 

Two new sections are added to standby statspack report: (1) Recovery Progress Stats, and (2) Managed Standby Stats. 

 

3. List of New Scripts and Short Descriptions

 

All those Scripts below are located in $ORACLE_HOME/rdbms/admin

sbcreate.sql – Install standby statspack

sbcusr.sql – Called from sbcreate.sql to create the schema

sbctab.sql – Called from sbcreate.sql to create tables holding snapshots

sbaddins.sql – Called from sbcreate.sql to add a standby database instance to the configuration.

 

sbaddins.sql – Add a standby database instance to the configuration

sbcpkg.sql – Called from sbaddins.sql to create the instance specific statspack package

 

sblisins.sql – List instances in the standby statspack configuration

 

sbreport.sql – Create a standby statistics report

sbrepcon.sql – Called from sbreport.sql to get the report configuration

sbrepins.sql – Called from sbreport.sql to create the actual report

 

sbpurge.sql – Purge a set of snapshots identified by low and high snapshot ids

 

sbdelins.sql – Delete an instance from the standby statspack configuration

 

sbdrop.sql – Drop the stdbyperf user and tables of the standby statspack

sbdtab.sql – Called from sbdrop.sql to drop tables

sbdusr.sql – Called from sbdrop.sql to drop user, must run from an account that connects to internal (SYS)


 

Q 1

What is data guard in simple language?

 
 

A 1

Your primary database is running and you want to reduce downtime caused by unplanned outages. You create a replica of this primary database (termed the standby database).

You regularly ship redo generated on the primary database to the standby database and apply it there. That is our 'Data Guard' standby database: it is in a continuous state of recovery, validating and applying redo to remain in sync with the primary database.


Q 2

Your standby database was out of reach because of network issue. How will you synchronize it with primary database again?

 A 2

Data Guard automatically resynchronizes the standby following network or standby outages using redo data that has been archived at the primary. 


 
 

Q 3

What is Redo Transport Services (RTS)?

 A 3

This process takes care of the transmission of redo from a primary database to the standby database.

Below is how Redo Transport Services (RTS) works:

1) Log Network Server (LNS) reads redo information from the redo buffer in SGA of PRIMARY Database

2) Log Network Server (LNS) passes redo to Oracle Net Services for transmission to the STANDBY database

3) Remote File Server (RFS) records the redo information transmitted by the LNS at the STANDBY database

4) Remote File Server (RFS) writes it to a sequential file called a standby redo log file (SRL) at the STANDBY database

** For multi-standby configuration, the primary database has a separate LNS process for each standby database.

** Two redo transport methods are supported with the LNS process: synchronous (SYNC) or asynchronous (ASYNC).

 
 


 
 

Q 4

What is the difference between SYNC and ASYNC redo transport method?

 
 

A 4

Synchronous transport (SYNC)

Also known as a "zero data loss" redo transport method.

Below is how it works:

1) Log Network Server (LNS) reads redo information from the redo buffer in SGA of PRIMARY Database

2) Log Network Server (LNS) passes redo to Oracle Net Services for transmission to the STANDBY database

3) Remote File Server (RFS) records the redo information transmitted by the LNS at the STANDBY database

4) Remote File Server (RFS) writes it to a sequential file called a standby redo log file (SRL) at the STANDBY database

5) Remote File Server (RFS) transmits an acknowledgement back to the LNS process on the primary database

6) Log Network Server (LNS) notifies the LGWR that transmission is complete on the primary database.

7) Log Writer (LGWR) acknowledges the commit to the user.

 
 

Asynchronous transport (ASYNC)

Unlike SYNC, asynchronous transport (ASYNC) eliminates the requirement that the LGWR wait for acknowledgement from the LNS. This removes the performance impact on the primary database irrespective of the distance between the primary and standby locations.

So if the LNS is unable to keep pace and the log buffer is recycled before the redo can be transmitted to the standby, the LNS automatically transitions to reading and sending from the Online Redo logs. Once the LNS is caught up, it automatically transitions back to reading & sending directly from the log buffer.

Below is how it works:

1) Log Network Server (LNS) reads redo information from the redo buffer in SGA of PRIMARY Database

2) Log Network Server (LNS) passes redo to Oracle Net Services for transmission to the STANDBY database

3) Remote File Server (RFS) records the redo information transmitted by the LNS at the STANDBY database

4) Remote File Server (RFS) writes it to a sequential file called a standby redo log file (SRL) at the STANDBY database

Steps 5, 6, and 7 discussed above for SYNC are not applicable here.

The only drawback of ASYNC is the increased potential for data loss. If a failure destroys the primary database before the transport lag is reduced to zero, any committed transactions that were part of that lag will be lost. So it is highly advisable to have enough network bandwidth to handle peak redo generation rates when using the ASYNC method.
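The transport method is chosen per destination through the SYNC/ASYNC attributes of LOG_ARCHIVE_DEST_n. A sketch with the example service name stdby_srvc; NET_TIMEOUT bounds how long LGWR waits for the SYNC acknowledgement:

-- Zero data loss transport: LGWR waits up to 30 seconds for the standby's acknowledgement
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc SYNC AFFIRM NET_TIMEOUT=30';

-- Asynchronous transport: commits on the primary are not held up by the standby
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc ASYNC NOAFFIRM';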

 
 


 
 

Q 5

How can Synchronous transport (SYNC) impact the primary database performance?

 
 

A 5

SYNC guarantees protection for every transaction that the database acknowledges as having been committed but at the same time LGWR must wait for confirmation that data is protected at the standby before it can proceed with the next transaction. It can impact primary database performance and it depends on factors like

> the amount of redo information to be written

> available network bandwidth

> round-trip network latency (RTT)

> standby I/O performance writing to the SRL.

> distance between the primary and standby databases, as network RTT increases with distance.

 
 


 
 

Q 6

What is Data Guard’s Automatic Gap Resolution?

 A 6

Say your database is using the ASYNC transport method and the instance load is at its peak. The LNS is unable to keep pace, and the log buffer is recycled before the redo can be transmitted to the standby, so the LNS automatically transitions to reading and sending from the online redo logs. Once the LNS has caught up, it automatically transitions back to reading and sending directly from the log buffer.

Now in some cases there can be two or more log switches before the LNS has completed sending the redo from the online redo log files, and if in the meantime any of those required online redo log files were archived, that redo will be transmitted via Data Guard's gap resolution process, "Automatic Gap Resolution".

OR

In another case, your network or the standby database is down and your primary system is busy, so before the connection between the primary and standby is restored, a large log file gap will have formed.

Automatic Gap Resolution takes care of such scenarios with the following action plan:

1) The ARCH process on the primary database continuously pings the standby database during the outage to determine its status.

2) As soon as the standby is restored, the ARCH ping process queries the standby control file (via its RFS process) to determine the last complete log file that the standby received from the primary database.

3) Data Guard determines which log files are required to resynchronize the standby database and immediately begins transmitting them using additional ARCH processes.

4) The LNS process at the primary database will also attempt, and succeed, in making a connection to the standby database and will begin transmitting current redo. So first all the archived log files are applied, and then the current redo.

The Data Guard architecture enables gaps to be resolved quickly using multiple background ARCH processes.
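A gap that has been detected but not yet resolved can also be checked manually on the standby with a simple query (no special setup assumed):

-- On the standby: any row returned is a range of missing log sequence numbers
SQL> SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;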

 
 


 
 

Q 7

What is the difference between Physical standby and Logical standby database?

 A 7

The Data Guard apply process on the standby database can apply the redo directly, in which case it is called a physical standby, or it can apply SQL, in which case it is called a logical standby.

Physical Standby:

In this case standby database is an exact, block-by-block, physical replica of the primary database.

The change vectors received by the RFS process are applied directly to the standby database using media recovery, so here the apply process reads data blocks, assembles redo changes from the mappings, and then applies the redo changes to the data blocks directly.

A physical standby is the best choice for disaster recovery (DR) based upon its simplicity, transparency, high performance, and good data protection.

Logical Standby:

In this case the standby database uses the SQL Apply method to "mine" the redo by converting it to logical change records, and then building SQL transactions and applying the SQL to the standby database.

As this process of replaying the workload is more complex than the physical standby's process, it requires more memory, CPU, and I/O.

One good advantage here is that a logical standby database can be opened read-write while SQL Apply is active which means you can update (create/insert/delete etc) local tables and schemas in the logical standby database.

 
 


 
 

Q 8

How does Data Guard Apply process work if primary and secondary database involves Oracle RAC?

 A 8

If Primary database is RAC but standby is Non-RAC:

Each primary Oracle RAC instance ships its own thread of redo that is merged by the Data Guard apply process at the standby and applied in SCN order to the standby database.

If both Primary and standby databases are RAC:

If the standby is also an Oracle RAC database, only one instance (the apply instance) will merge and apply changes to the standby database. If the apply instance fails for any reason, the apply process will automatically fail over to a surviving instance in the Oracle RAC standby database when using the Data Guard broker.

 
 


 
 

Q 9

What is Active Data Guard Option (Oracle Database 11g Enterprise Edition)?

 A 9

For a physical standby database prior to 11g, the database had to be in the mount state while media recovery was active, which means you were not able to query the standby database during the media recovery stage, as there was no read-consistent view.

The Active Data Guard 11g feature solves the read consistency problem by using a "query" SCN. The media recovery process on the standby database advances the query SCN after all the changes in a transaction have been applied. The query SCN is exposed to users as the CURRENT_SCN column of the V$DATABASE view on the standby database. So read-only users will only be able to see data up to the query SCN, guaranteeing the same read consistency as the primary database.

This enables a physical standby database to be open as read-only while media recovery is active, making it useful for doing read-only workloads.

Also, if you need read-write access to the standby database, you can use SQL Apply method of dataguard.
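A quick check, run on the standby, to confirm that it really is operating as Active Data Guard, i.e. open read-only while the query SCN keeps advancing (in 11.2 the OPEN_MODE shows READ ONLY WITH APPLY):

-- On the standby: role, open mode, and the current (query) SCN
SQL> SELECT database_role, open_mode, current_scn FROM v$database;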

 
 

Q 10

What are the important database parameters related to Data Guard corruption prevention?

 A 10

On the primary database:

a) DB_ULTRA_SAFE

Values can be DATA_AND_INDEX or DATA_ONLY. Setting DB_ULTRA_SAFE at the primary will also automatically set DB_LOST_WRITE_PROTECT=TYPICAL on the primary database.

In Oracle Database 11g Release 2 (11.2), the primary database automatically attempts to repair the corrupted block in real time by fetching a good version of the same block from a physical standby database.

 
 

On the standby database:

a) DB_BLOCK_CHECKSUM=FULL

DB_BLOCK_CHECKSUM detects redo and data block corruptions, helping to detect corruption on the primary database and protect the standby database. This parameter requires minimal CPU resources.

 
 

b) DB_LOST_WRITE_PROTECT=TYPICAL

A lost write can occur when an I/O subsystem acknowledges the completion of a write, while in fact the write did not occur in persistent storage.

This will create a stale version of the data block. When the DB_LOST_WRITE_PROTECT initialization parameter is set, the database records buffer cache block reads in the redo log, and this information is used to detect lost writes.

You set DB_LOST_WRITE_PROTECT to TYPICAL in both primary and standby databases.
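A sketch of how these might be set; DB_ULTRA_SAFE is a static parameter, so it needs SCOPE=SPFILE and a restart, while the other two can be changed online:

-- On the primary
SQL> ALTER SYSTEM SET DB_ULTRA_SAFE=DATA_AND_INDEX SCOPE=SPFILE;

-- On the standby
SQL> ALTER SYSTEM SET DB_BLOCK_CHECKSUM=FULL;
SQL> ALTER SYSTEM SET DB_LOST_WRITE_PROTECT=TYPICAL;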

 
 

Q 11

What are different Data Guard protection modes?

 A 11

Data Guard protection modes implement rules that control how the configuration responds to failures, enabling you to achieve specific objectives for data protection, availability, and performance.

 
 

a) Maximum Performance

– emphasis is on primary database performance over data protection.

– requires ASYNC (the default method) redo transport so that the LGWR process never waits for acknowledgment from the standby database.

– network connection between primary and standby OR the availability of the standby database DO NOT IMPACT the primary database performance

 
 

b) Maximum Availability

– first emphasis is on availability and second priority is zero data loss protection.

– requires SYNC redo transport, so primary database performance may be impacted while waiting for acknowledgment from the standby (this does not mean an indefinite wait if the standby database fails; the maximum wait equals NET_TIMEOUT seconds).

 
 

c) Maximum Protection

– utmost priority is on data protection.

– also requires SYNC redo transport.

– unlike 'Maximum Availability', it does not consider the NET_TIMEOUT parameter, which means that if the primary does not receive acknowledgment from a SYNC standby database, it will stall the primary and eventually abort it, preventing any unprotected commits from occurring.

– highly recommended to use a minimum of two SYNC standby databases at different locations if using ‘Maximum Protection’ to have high availability of primary database.

 
 


 
 

Q 12

What is Switchover event?

 A 12

Switchover is useful for minimizing downtime during planned maintenance. It is a planned event in which Data Guard reverses the roles of the primary and a standby database.

The primary database runs unaffected while we are making the required changes on our standby database (e.g. patchset upgrades, full Oracle version upgrades, etc).

Once changes are complete, production is switched over to the standby site running at the new release.

This means regardless of how much time is required to perform planned maintenance, the only production database downtime is the time required to execute a switchover, which can be less than 60 seconds

The following operations happen when the switchover command is executed:

1. primary database is notified that a switchover is about to occur.

2. all users are disconnected from the primary.

3. a special redo record is generated that signals the End Of Redo (EOR).

4. primary database is converted into a standby database.

5. the final EOR record is applied to standby database, this guarantees that no data has been lost, and it converts the standby to the primary role.
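With SQL*Plus the switchover is driven from both sites; a compressed sketch for a physical standby configuration (the exact required states vary slightly by release):

-- On the primary: give up the primary role (this writes the EOR record)
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

-- On the old standby: take over the primary role and open it
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
SQL> ALTER DATABASE OPEN;

-- On the new standby: restart redo apply
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;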

 

 
 

Q 13

What is Failover event?

 A 13

The failover process is similar to the switchover event, except that the primary database never has the chance to write an EOR record, as this is an unplanned event.

Whether or not a failover results in data loss depends upon the Data Guard protection mode:

a) Maximum Protection >> No Data Loss

b) Maximum Availability >> No Data Loss (except when there was a previous failure (e.g. a network failure) that had INTERRUPTED REDO TRANSPORT and allowed the primary database to move ahead of standby)

c) Maximum Performance (ASYNC) >> may lose any committed transactions that were not transmitted to the standby database before the primary database failed.

 
 

Failover event can be of two types:

1) Manual

The administrator has complete control of primary-standby role transitions. This can lengthen the outage by the amount of time required for the administrator to be notified plus the time needed for manual execution of the command.

2) Automatic

It uses Data Guard’s Fast-Start Failover feature which automatically detects the failure, evaluates the status of the Data Guard configuration, and, if appropriate, executes the failover to a previously chosen standby database.
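With the Data Guard broker, a manual failover is a single command; stdby below is a placeholder for the standby's DB_UNIQUE_NAME:

DGMGRL> CONNECT sys@stdby
DGMGRL> FAILOVER TO 'stdby';

To let the broker's observer perform the failover automatically instead, Fast-Start Failover is enabled from DGMGRL as well:

DGMGRL> ENABLE FAST_START FAILOVER;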


Q 14

Which tools can be used for Data Guard Management?

 A 14

1) SQL*Plus – traditional method, can prove most tedious to use

2) Data Guard broker – automates and centralizes the creation, maintenance, and monitoring of a Data Guard configuration. It simplifies and automates many administrative tasks and has its own command line (DGMGRL) and syntax.

3) Enterprise Manager – requires that the Data Guard broker be enabled; it is a GUI to the Data Guard broker, replacing the DGMGRL command line and interfacing directly with the broker's monitor processes.
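For example, once the broker is enabled, DGMGRL can report the health of the whole configuration; 'stdby' is a placeholder database name:

DGMGRL> CONNECT sys@prim
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE VERBOSE 'stdby';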

 
 


Q 15

What is Data Guard 11g snapshot standby?

 
 

A 15

With 11g, you can thoroughly test your changes on a true replica of your production system and database using actual production workload.

Data Guard 11g physical standby can now be converted to a snapshot standby, independent of the primary database, that is open read-write and able to be used for preproduction testing. It uses Flashback Database and sets a guaranteed restore point (GRP) at the SCN before the standby was open read-write.

NOTE: Primary database redo continues to be shipped to a snapshot standby, and while not applied, it is archived for later use.

You can convert this snapshot standby back into a synchronized physical standby database when testing is complete. The Redo Apply process on the standby then applies all of the primary database redo that was archived while it was a snapshot standby, until it is caught up with the primary database.
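The conversion itself is only a couple of statements on the standby (a fast recovery area must be configured for the implicit guaranteed restore point); a sketch, noting that the exact restart steps can vary slightly by release:

-- Convert the physical standby into an updatable snapshot standby
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL> ALTER DATABASE OPEN;

-- When testing is finished: resume the physical standby role and restart redo apply
SQL> STARTUP MOUNT FORCE;
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SQL> STARTUP MOUNT FORCE;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;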

 
 


Q 16

What is the difference between Recovery Point Objective(RPO) and Recovery Time Objective (RTO)?

 
 

A 16

A) Recovery Point Objective(RPO)

RPO is concerned with data. It is the amount of data you are willing to lose when a failure occurs in your database system. Data loss is usually defined in terms of time, so possible values can be 5 seconds of data loss, 2 hours of data loss, etc.

Remember that each standby database has its own set of attributes and parameters. This means you can mix zero data loss standby databases with minimal data loss standby databases in the same Data Guard configuration.

If you have decided to implement a zero data loss strategy, then you should really focus on the network (bandwidth and latency), since zero data loss depends on synchronous redo transport.

B) Recovery Time Objective (RTO)

RTO is defined as how fast you can get back up and running (whereas RPO is concerned with data loss)

So with your RPO strategy you may have lost, say, only about 6 seconds of data, as you committed to your client; with RTO you need to work out how fast clients can connect back to the database system after the failure has occurred.

 
 


 
 

Q 17

What are Standby Redo Log (SRL) files?

 A 17

The SRL files are where the Remote File Server (RFS) process at your standby database writes the incoming redo so that it is persistent on disk for recovery. SRL files are important for better redo transport performance and data protection.

SRLs are a MUST in Maximum Availability or Maximum Protection mode and OPTIONAL (but recommended) in Maximum Performance mode.

If there are no Standby Redo Log (SRL) files, then at each log switch in the primary database, the RFS process on the standby database that is serving an asynchronous standby destination has to create an archive log of the right size. While the RFS is busy creating the archive log file, the LNS process at the primary database has to wait, getting further and further behind the LGWR (in the case of Maximum Performance mode). That is why it is recommended to have Standby Redo Log (SRL) files in Maximum Performance mode as well.

We generally configure them on the primary database as well, in preparation for a role transition between primary and standby.

Also, do not multiplex SRLs. Since Data Guard will immediately request a new copy of the archive log if an SRL file fails, there is no real need to have more than one copy of each.
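SRLs should be the same size as the online redo logs, and the usual rule of thumb is one more SRL group per thread than there are online redo log groups. A sketch assuming 50 MB online logs and example file names:

-- On the standby (and, in preparation for switchover, usually on the primary as well)
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u02/oradata/stby/srl04.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/u02/oradata/stby/srl05.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/u02/oradata/stby/srl06.log') SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/u02/oradata/stby/srl07.log') SIZE 50M;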

 
 

How does one create a standby database?

While your production database is running, take an (image copy) backup and restore it on duplicate hardware. Note that an export will not work!!!

On your standby database, issue the following commands:

 

ALTER DATABASE CREATE STANDBY CONTROLFILE AS 'file_name';

ALTER DATABASE MOUNT STANDBY DATABASE;

RECOVER STANDBY DATABASE;

On systems prior to Oracle 8i, write a job to copy archived redo log files from the primary database to the standby system, and apply the redo log files to the standby database (pipe it). Remember the database is recovering and will prompt you for the next log file to apply.

Oracle 8i onwards provides an "Automated Standby Database" feature which will send archived log files to the remote site via Net8 and apply them to the standby database.

When one needs to activate the standby database, stop the recovery process and activate it:

ALTER DATABASE ACTIVATE STANDBY DATABASE;

Assumptions

You have two servers (physical or VMs) with an operating system and Oracle installed on them. In this case I’ve used Oracle Linux 5.6 and Oracle Database 11.2.0.2.

The primary server has a running instance.

The standby server has a software only installation.

    
 

 
