
How Oracle Data Guard Works


<<Back to Oracle DataGuard Main Page

Understanding How Oracle Data Guard Works

Oracle Data Guard is designed to use Oracle redo logs/archived logs to keep a target standby database continuously synchronized with the primary database.
Two services facilitate the transfer of logs (redo or archived) from the primary to the standby and apply them on the standby:
Redo Transport Services: This service performs the following tasks in an Oracle Data Guard configuration:
  • Controls the automated transfer of redo data from the production database to one or more standby destinations.
  • Manages the process of resolving any gaps between the primary and standby databases.
  • Automatically detects missing or corrupted archived redo log files on a standby system and retrieves replacement archived redo log files from the primary database or another standby database.
Apply Services: Apply services automatically apply the redo/archived logs on the standby database to keep it consistent with the primary database.
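Both services can be observed from the standby side. As a quick check, the sketch below queries the V$MANAGED_STANDBY view on a mounted standby; the exact set of processes listed varies by version and configuration:

-- Run on the standby: RFS rows show redo transport being received,
-- MRP0 shows apply services at work
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE# FROM V$MANAGED_STANDBY;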
Let us look at the various ways Oracle can transfer the redo to the standby database.

Flow of Redo from Primary to Standby when no Standby Redo Logs (SRLs) are configured

  • Each transaction writes redo records into the Redo Log Buffer in the SGA.
  • The Log Writer process (LGWR) flushes these redo records from the redo log buffer to the online redo log files.
  • The Archiver process then archives these online redo logs (normally when the online log file is full and a log switch happens), creating the archived redo log files.
  • The archiver process is also responsible for shipping these archived logs to the standby database server using Oracle Net Services.
  • On the standby database server, the Remote File Server (RFS) process receives these archived logs.
  • The RFS process then sends the redo stream to the local Archiver process (ARCn).
  • The ARCn process (on the standby server) then writes the redo to the archived redo log location on the standby server.
  • Once an archived redo log has been completely shipped to the standby server, the Managed Recovery Process (MRP0) applies its redo to roll the standby database forward (see the sketch below).
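Without SRLs, redo is applied only from completed archived logs, so you can verify this flow end to end by checking which sequences have arrived and been applied on the standby. A minimal sketch (run on the standby, with managed recovery started as shown):

-- Start managed recovery in the background on the standby
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

-- Check which archived log sequences have been received and applied
SQL> SELECT SEQUENCE#, APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;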

What are Standby Redo Logs (SRLs)?

SRLs are the same as online redo logs, but they are used only when the database role is standby. If SRLs are configured, the RFS process writes the redo stream received from the primary into these SRLs, and the MRP process reads the redo directly from the SRLs and applies it to the standby database (MRP does not have to wait for the archived redo logs). When an SRL is full (normally when a log switch occurs), the archiver process archives the SRL and generates an archived log file.
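SRLs are created with ALTER DATABASE ADD STANDBY LOGFILE. A minimal sketch, assuming a single-thread database with 200 MB online redo logs and a hypothetical file path; a common rule of thumb is to size SRLs the same as the online redo logs and create one more SRL group per thread than there are online groups:

-- Add a standby redo log group (sized to match the online redo logs)
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 10 ('/u01/oradata/orcl/srl10.log') SIZE 200M;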

Flow of Redo from Primary to Standby when Standby Redo Logs (SRLs) are configured

  • Each transaction writes redo records into the Redo Log Buffer in the SGA.
  • The Log Writer process (LGWR) flushes these redo records from the redo log buffer to the online redo log files.
  • If the Data Guard configuration is in Maximum Protection mode, the redo transport is SYNC. The Network Server SYNC process (NSSn), also known as the Redo Transport process, is a slave process of LGWR that ships redo to the RFS process on the standby server. The RFS process on the standby server writes the redo stream directly to the SRLs.
  • If the Data Guard configuration is in Maximum Performance mode, the redo transport is ASYNC. The Network Server ASYNC process (NSAn), also known as the Redo Transport process, in this case reads the redo stream from the online redo logs and transports it to the RFS process on the standby server. The RFS process on the standby server writes the redo stream directly to the SRLs.
  • If the standby database is using real-time apply, the MRP process reads the redo stream directly from the SRLs and applies the redo to the standby database (see the sketch after this list).
  • If the standby database is not using real-time apply, the MRP process waits for the SRLs' contents to be archived, then reads the redo stream from the archived logs and applies it to the standby database.
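The transport mode is chosen per destination on the primary, and real-time apply is chosen when starting managed recovery on the standby. A minimal sketch of both sides; the service name standby_db is a hypothetical TNS alias, and the exact attributes vary by version and protection mode:

-- On the primary: ship redo asynchronously to the standby destination (Maximum Performance)
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=standby_db ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db';

-- On the standby: start managed recovery with real-time apply
-- (USING CURRENT LOGFILE is the default behavior in newer releases when SRLs exist)
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;

-- Verify that real-time apply is active
SQL> SELECT RECOVERY_MODE FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID=1;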
