
Adding Quorum Disks in Exadata



What is a Quorum Disk

A quorum disk serves the same purpose as a voting disk and holds the same information; the only difference is in the syntax used for its administration and management. You can identify quorum disks by querying the FAILGROUP_TYPE column of V$ASM_DISK.
Log in to the ASM instance as SYSASM
SQL> select a.GROUP_NUMBER, b.name group_name, a.DISK_NUMBER, a.PATH, a.name, a.TOTAL_MB, a.FREE_MB, a.failgroup_type from v$asm_disk a, v$asm_diskgroup b where a.group_number = b.group_number and a.group_number = 1;

GROUP_NUMBER GROUP_NAME DISK_NUMBER NAME           FAILGROUP_TYPE
------------ ---------- ----------- -------------- --------------
           1 DATAC3              19 QD_DATAC3_DB02 QUORUM
           1 DATAC3              18 QD_DATAC3_DB01 QUORUM

(the PATH, TOTAL_MB and FREE_MB columns are omitted from the output above for readability)

Exadata: Adding Quorum Disks to Database Nodes

If your Oracle Exadata rack has fewer than five storage servers and two database nodes, and you want to configure a HIGH redundancy disk group for the voting disks, you can use the Quorum Disk Manager utility, introduced in Oracle Exadata release 12.1.2.3.0, to create and maintain the additional quorum disks required.

Software Requirements for Quorum Disk Manager

  • Oracle Exadata software release 12.1.2.3.0 and above
  • Patch 23200778 for all Database homes
  • Oracle Grid Infrastructure 12.1.0.2.160119 with patches 22722476 and 22682752, or Oracle Grid Infrastructure 12.1.0.2.160419 and above
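The software prerequisites above can be checked from the shell before running the tool. A minimal sketch, assuming `imageinfo` is on the PATH as it normally is on Exadata database nodes; the `version_ge` helper itself is plain shell plus GNU `sort -V`:

```shell
# version_ge VER MIN -> succeeds if VER >= MIN in dotted-version order.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n 1)" = "$1" ]
}

# Assumed example: compare the installed Exadata release with the minimum.
current="$(imageinfo -version 2>/dev/null | awk '{print $1; exit}')"
if version_ge "${current:-0}" "12.1.2.3.0"; then
  echo "Exadata release $current meets the 12.1.2.3.0 minimum"
else
  echo "Exadata release ${current:-unknown} is below 12.1.2.3.0 - upgrade first"
fi
```

The same `version_ge` helper can be pointed at `opatch lsinventory` output to confirm the Grid Infrastructure patch level.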
Step1> Find the network interfaces to be used for communication with the iSCSI devices using the following command
[oracle@host]$ oifcfg getif | grep cluster_interconnect | awk '{print $1}'
ib0
ib1
Step2> Find the IP address of each interface using the following command. The IP addresses in this example are 192.168.10.45 through 192.168.10.48
ip addr show <interface_name>
On DB01
[oracle@host1]$ ip addr show ib0
192.168.10.45
[oracle@host1]$ ip addr show ib1
192.168.10.46
On DB02
[oracle@host2]$ ip addr show ib0
192.168.10.47
[oracle@host2]$ ip addr show ib1
192.168.10.48
(output trimmed to show only the IP addresses)
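Since `ip addr show` prints far more than the address itself, a small awk filter helps isolate the IPv4 address per interface. A sketch run here against a canned sample line; on a live node, pipe the real command through the same filter, e.g. `ip addr show ib0 | awk ...`:

```shell
# Canned sample of the relevant line from `ip addr show ib0`
# (the address and prefix are illustrative).
sample='    inet 192.168.10.45/22 brd 192.168.11.255 scope global ib0'

# Keep only the IPv4 address: match the "inet" line and strip the /prefix.
printf '%s\n' "$sample" | awk '/inet /{split($2, a, "/"); print a[1]}'
# prints 192.168.10.45
```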
On both nodes, DB01 and DB02, as the root user
Step3> Run the quorumdiskmgr command with the --create --config options to create quorum disk configurations on both db01 and db02
[root@DB01]#/opt/oracle.SupportTools/quorumdiskmgr --create --config --owner=oracle --group=dba --network-iface-list="ib0, ib1"
[Info] Successfully created iface exadata_ib0 with iface.net_ifacename ib0
[Info] Successfully created iface exadata_ib1 with iface.net_ifacename ib1
[Success] Successfully created quorum disk configurations

[root@DB02]#/opt/oracle.SupportTools/quorumdiskmgr --create --config --owner=oracle --group=dba --network-iface-list="ib0, ib1"
[Info] Successfully created iface exadata_ib0 with iface.net_ifacename ib0
[Info] Successfully created iface exadata_ib1 with iface.net_ifacename ib1
[Success] Successfully created quorum disk configurations
Step4> Run the quorumdiskmgr command with the --list --config options to verify that the configurations have been successfully created on both db01 and db02
[root@DB01]#/opt/oracle.SupportTools/quorumdiskmgr --list --config
Owner: oracle
Group: dba
ifaces: exadata_ib0 exadata_ib1
[root@DB02]#/opt/oracle.SupportTools/quorumdiskmgr --list --config
Owner: oracle
Group: dba
ifaces: exadata_ib0 exadata_ib1
Step5> Run the quorumdiskmgr command with the --create --target options to create a target on both db01 and db02 for Oracle ASM disk group DATAC3, and make the targets visible to both db01 and db02
[root@DB01]# /opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=DATAC3 --visible-to="192.168.10.45, 192.168.10.46, 192.168.10.47, 192.168.10.48"
[Success] Successfully created target iqn.2015-05.com.oracle:QD_DATAC3_DB01.
[root@DB02]#/opt/oracle.SupportTools/quorumdiskmgr --create --target --asm-disk-group=DATAC3 --visible-to="192.168.10.45, 192.168.10.46, 192.168.10.47, 192.168.10.48"
[Success] Successfully created target iqn.2015-05.com.oracle:QD_DATAC3_DB02.
Step6> Run the quorumdiskmgr command with the --list --target options to verify that the targets have been successfully created on both db01 and db02
[root@DB01]# /opt/oracle.SupportTools/quorumdiskmgr --list --target
Name: iqn.2015-05.com.oracle:QD_DATAC3_DB01
Host name: DB01
ASM disk group name: DATAC3
Size: 128 MB
Visible to: 192.168.10.45, 192.168.10.46, 192.168.10.47, 192.168.10.48
Discovered by:

[root@DB02]# /opt/oracle.SupportTools/quorumdiskmgr --list --target
Name: iqn.2015-05.com.oracle:QD_DATAC3_DB02
Host name: DB02
ASM disk group name: DATAC3
Size: 128 MB
Visible to: 192.168.10.45, 192.168.10.46, 192.168.10.47, 192.168.10.48
Discovered by:

Step7> Run the quorumdiskmgr command with the --create --device options to create devices on both db01 and db02 from targets on both db01 and db02.
[root@DB01]# /opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="192.168.10.45, 192.168.10.46, 192.168.10.47, 192.168.10.48"
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.45
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.46
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.47
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.48
[root@DB02]# /opt/oracle.SupportTools/quorumdiskmgr --create --device --target-ip-list="192.168.10.45, 192.168.10.46, 192.168.10.47, 192.168.10.48"
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.45
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.46
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.47
[Success] Successfully created all device(s) from target(s) on machine with IP address 192.168.10.48
Step8> Run the quorumdiskmgr command with the --list --device options to verify the devices have been successfully created on both db01 and db02 
[root@DB01~]# /opt/oracle.SupportTools/quorumdiskmgr --list --device
Device path: /dev/exadata_quorum/QD_DATAC3_DB01
Host name: DB01
ASM disk group name: DATAC3
Size: 128 MB
Device path: /dev/exadata_quorum/QD_DATAC3_DB02
Host name: DB02
ASM disk group name: DATAC3
Size: 128 MB
[root@DB02~]# /opt/oracle.SupportTools/quorumdiskmgr --list --device
Device path: /dev/exadata_quorum/QD_DATAC3_DB01
Host name: DB01
ASM disk group name: DATAC3
Size: 128 MB
Device path: /dev/exadata_quorum/QD_DATAC3_DB02
Host name: DB02
ASM disk group name: DATAC3
Size: 128 MB
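The two `--list --device` outputs above can also be sanity-checked mechanically: each node must see one quorum device per database node. A sketch against a canned copy of the listing; on a node, capture the real output with `listing=$(/opt/oracle.SupportTools/quorumdiskmgr --list --device)` instead:

```shell
# Canned copy of the relevant lines from the Step8 output.
listing='Device path: /dev/exadata_quorum/QD_DATAC3_DB01
Device path: /dev/exadata_quorum/QD_DATAC3_DB02'

# Both quorum devices must be visible on the node being checked.
for qd in QD_DATAC3_DB01 QD_DATAC3_DB02; do
  if printf '%s\n' "$listing" | grep -q "$qd"; then
    echo "$qd: visible"
  else
    echo "$qd: MISSING"
  fi
done
```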
Step9> Log in to the ASM instance and verify the disks
On DB01 or DB02 as the oracle user
SQL> show parameter asm_diskstring
NOTE: Adjust the asm_diskstring parameter if required so that the quorum disks under /dev/exadata_quorum/* are discovered
SQL> set linesize 200
SQL> col path format a50
SQL> select inst_id, label, path, mode_status, header_status from gv$asm_disk where path like '/dev/exadata_quorum/%';

   INST_ID LABEL            PATH                                MODE_STATUS HEADER_STATUS
---------- ---------------- ----------------------------------- ----------- -------------
         1 QD_DATAC3_DB02   /dev/exadata_quorum/QD_DATAC3_DB02  ONLINE      CANDIDATE
         1 QD_DATAC3_DB01   /dev/exadata_quorum/QD_DATAC3_DB01  ONLINE      CANDIDATE
         2 QD_DATAC3_DB02   /dev/exadata_quorum/QD_DATAC3_DB02  ONLINE      CANDIDATE
         2 QD_DATAC3_DB01   /dev/exadata_quorum/QD_DATAC3_DB01  ONLINE      CANDIDATE

Step10> Add the discovered candidate disks to the disk group as quorum disks, or create a new disk group, as applicable
SQL> alter diskgroup DATAC3 add quorum failgroup "DB01" disk '/dev/exadata_quorum/QD_DATAC3_DB01' quorum failgroup "DB02" disk '/dev/exadata_quorum/QD_DATAC3_DB02';
OR
SQL> create diskgroup DATAC1 high redundancy quorum failgroup db01 disk '/dev/exadata_quorum/QD_DATAC1_DB01' quorum failgroup db02 disk '/dev/exadata_quorum/QD_DATAC1_DB02' ...
Step11> Wait for the rebalancing operation to complete
SQL> select * from v$asm_operation;
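Rather than re-running that query by hand, the wait can be scripted as a polling loop. A sketch, assuming the grid environment (ORACLE_SID, ORACLE_HOME) is already set so that sqlplus can connect as SYSASM; `wait_for_rebalance` and `rebalance_ops` are hypothetical helper names, not part of any Oracle tool:

```shell
# Return the number of active ASM operations (0 once rebalance is done).
rebalance_ops() {
  sqlplus -s "/ as sysasm" <<'EOF' | tr -d '[:space:]'
set heading off feedback off pagesize 0
select count(*) from gv$asm_operation;
EOF
}

# Poll once a minute until no operation remains, then report.
wait_for_rebalance() {
  while [ "$(rebalance_ops)" -gt 0 ]; do
    echo "rebalance still running..."
    sleep 60
  done
  echo "rebalance complete"
}
# Invoke on a node with: wait_for_rebalance
```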
Step12> Check the voting disk status
[oracle@DB01]$ crsctl query css votedisk
##  STATE    File Universal Id                 File Name                            Disk group
--  -----    --------------------------------  -----------------------------------  ----------
...
 4. ONLINE   ff198b7b1ff74fbcbf8a7baedd669b22  (/dev/exadata_quorum/QD_DATAC3_DB02) [DATAC3]
 5. ONLINE   3990c6fc37f44f2bbff3e31c23785803  (/dev/exadata_quorum/QD_DATAC3_DB01) [DATAC3]
Located 5 voting disk(s).
(the first three voting disks, which reside in the storage server failure groups, are not shown)

If Applicable
Step13> Relocate the existing voting files from the normal redundancy disk group to the high redundancy disk group
$Grid_home/bin/crsctl replace votedisk +DATAC1
Step14> Verify that the voting disks have been successfully relocated to the high redundancy disk group and that five voting files exist
$crsctl query css votedisk
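This final check can be reduced to a count of ONLINE voting files, which should come to five once the quorum disks are in place. A sketch parsing a canned two-line excerpt of the Step12 output; on a node, pipe the real `crsctl query css votedisk` output through the same grep:

```shell
# Excerpt of `crsctl query css votedisk` output (quorum-disk rows only).
votedisk=' 4. ONLINE   ff198b7b1ff74fbcbf8a7baedd669b22 (/dev/exadata_quorum/QD_DATAC3_DB02) [DATAC3]
 5. ONLINE   3990c6fc37f44f2bbff3e31c23785803 (/dev/exadata_quorum/QD_DATAC3_DB01) [DATAC3]'

# Count ONLINE rows; a full run against the live cluster should report 5.
online=$(printf '%s\n' "$votedisk" | grep -c ' ONLINE ')
echo "ONLINE voting files in excerpt: $online"
# prints ONLINE voting files in excerpt: 2
```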

