How to Rename a Diskgroup in ASM


<<Back to Oracle ASM Main Page

How to Use the renamedg Utility to Rename an ASM Diskgroup

In this post I will rename the diskgroup DG_TEST to DG01 using the renamedg utility.
Step 1: Dismount the diskgroup on all nodes
Step 2: Validate the rename by running the renamedg command with the options verbose=true check=true
Step 3: Rename the diskgroup using the renamedg utility
Step 4: Mount the renamed diskgroup
Step 5: Remove the old diskgroup resource from CRS
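In condensed form, the whole procedure looks like this (every command is shown with its full output in the sections below):

Step 1 (on every node):  SQL> alter diskgroup DG_TEST dismount;
Step 2 (dry run):        $ renamedg dgname=DG_TEST newdgname=DG01 verbose=true check=true
Step 3 (rename):         $ renamedg dgname=DG_TEST newdgname=DG01 verbose=true
Step 4 (on every node):  SQL> alter diskgroup DG01 mount;
Step 5 (cleanup):        $ srvctl remove diskgroup -g DG_TEST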
Check the Current DG Configuration
$ asmcmd lsdg DG_TEST
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  1048576    102400   102346                0          102346              0             N  DG_TEST/
$ asmcmd lsdsk -k
Total_MB  Free_MB  OS_MB  Name     Failgroup  Site_Name  Site_GUID                         Site_Status  Failgroup_Type  Library                                      Label    Failgroup_Label  Site_Label  UDID  Product  Redund   Path
   51200    51174  51200  DISK011  DISK011               00000000000000000000000000000000               REGULAR         AFD Library - Generic , version 3 (KABI_V3)  DISK011                                              UNKNOWN  AFD:DISK011
   51200    51172  51200  DISK10   DISK10                00000000000000000000000000000000               REGULAR         AFD Library - Generic , version 3 (KABI_V3)  DISK10                                               UNKNOWN  AFD:DISK10
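The lsdsk -k listing above shows all discovered disks. To limit it to just the disks of the group being renamed, lsdsk accepts a diskgroup filter; a quick sketch (output trimmed to the default Path column):

$ asmcmd lsdsk -G DG_TEST
Path
AFD:DISK011
AFD:DISK10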
Dismount the Diskgroup from All Nodes
SQL> alter diskgroup DG_TEST dismount;
Diskgroup altered.
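renamedg refuses to run while the diskgroup is mounted anywhere, so in a RAC cluster make sure the dismount was done on every instance. A quick way to verify from any ASM instance (a sketch; the exact output depends on your instance count):

SQL> select inst_id, name, state from gv$asm_diskgroup where name='DG_TEST';

   INST_ID NAME       STATE
---------- ---------- -----------
         1 DG_TEST    DISMOUNTED
         2 DG_TEST    DISMOUNTED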
Validate the Diskgroup Rename without Actually Executing it
$ renamedg dgname=DG_TEST newdgname=DG01 verbose=true check=true
Parsing parameters..
Parameters in effect:
         Old DG name       : DG_TEST
         New DG name          : DG01
         Phases               :
                 Phase 1
                 Phase 2

         Discovery str        : (null)
         Check              : TRUE
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: dgname=DG_TEST newdgname=DG01 verbose=true check=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk ASM:AFD Library - Generic , version 3 (KABI_V3):AFD:DISK10 with disk number:0 and timestamp (33072750 1124915200)
Identified disk ASM:AFD Library - Generic , version 3 (KABI_V3):AFD:DISK011 with disk number:1 and timestamp (33072750 1124915200)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk ASM:AFD Library - Generic , version 3 (KABI_V3):AFD:DISK10 with disk number:0 and timestamp (33072750 1124915200)
Identified disk ASM:AFD Library - Generic , version 3 (KABI_V3):AFD:DISK011 with disk number:1 and timestamp (33072750 1124915200)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Checking disk number:1
Generating configuration file..
Completed phase 1
Executing phase 2
Looking for AFD:DISK10
Leaving the header unchanged
Looking for AFD:DISK011
Leaving the header unchanged
Completed phase 2
Rename the DiskGroup
$ renamedg dgname=DG_TEST newdgname=DG01 verbose=true
Parsing parameters..
Parameters in effect:
         Old DG name       : DG_TEST
         New DG name          : DG01
         Phases               :
                 Phase 1
                 Phase 2
         Discovery str        : (null)
         Clean              : TRUE
         Raw only           : TRUE
renamedg operation: dgname=DG_TEST newdgname=DG01 verbose=true
Executing phase 1
Discovering the group
Performing discovery with string:
Identified disk ASM:AFD Library - Generic , version 3 (KABI_V3):AFD:DISK10 with disk number:0 and timestamp (33072750 1124915200)
Identified disk ASM:AFD Library - Generic , version 3 (KABI_V3):AFD:DISK011 with disk number:1 and timestamp (33072750 1124915200)
Checking for hearbeat...
Re-discovering the group
Performing discovery with string:
Identified disk ASM:AFD Library - Generic , version 3 (KABI_V3):AFD:DISK10 with disk number:0 and timestamp (33072750 1124915200)
Identified disk ASM:AFD Library - Generic , version 3 (KABI_V3):AFD:DISK011 with disk number:1 and timestamp (33072750 1124915200)
Checking if the diskgroup is mounted or used by CSS
Checking disk number:0
Checking disk number:1
Generating configuration file..
Completed phase 1
Executing phase 2

Looking for AFD:DISK10
Modifying the header
Looking for AFD:DISK011
Modifying the header
Completed phase 2
Mount the Renamed DiskGroup
SQL> alter diskgroup DG01 mount;
Diskgroup altered.
$ asmcmd lsdg DG01 -g
Inst_ID  State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
      1  MOUNTED  EXTERN  N         512             512   4096  1048576    102400   102346                0          102346              0             N  DG01/
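Note that lsdg -g shows the group mounted on instance 1 only, because the mount was done with SQL on one node. To mount it on the remaining nodes in one step, srvctl can be used once the DG01 resource has been registered by the first mount (the node name here is illustrative):

$ srvctl start diskgroup -g DG01 -n node2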
Finally, Remove the Old DG Resource from CRS
The clusterware still has a resource registered for the old name DG_TEST, so remove it:
$ srvctl remove diskgroup -g DG_TEST
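To confirm the cleanup, check that the clusterware now knows only the new name (the output below is what I would expect, not captured from this demo system):

$ srvctl status diskgroup -g DG01
Disk Group DG01 is running on node1,node2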

==========================================================================
The following are the accepted arguments for the renamedg utility.

$ renamedg -help
Parsing parameters..
phase                           Phase to execute,
                                (phase=ONE|TWO|BOTH), default BOTH
dgname                          Diskgroup to be renamed
newdgname                       New name for the diskgroup
config                          intermediate config file
check                           just check-do not perform actual operation,
                                (check=TRUE/FALSE), default FALSE
confirm                         confirm before committing changes to disks,
                                (confirm=TRUE/FALSE), default FALSE
clean                           ignore errors,
                                (clean=TRUE/FALSE), default TRUE
asm_diskstring                  ASM Diskstring (asm_diskstring='discoverystring',
                                'discoverystring1' ...)
verbose                         verbose execution,
                                (verbose=TRUE|FALSE), default FALSE
keep_voting_files               Voting file attribute,
                                (keep_voting_files=TRUE|FALSE), default FALSE
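The phase and config arguments let you split the rename: phase one only discovers the disks and writes the intermediate config file, and phase two applies the header changes recorded in that file. A hedged sketch of the two-step form (the config file path is illustrative):

$ renamedg phase=one dgname=DG_TEST newdgname=DG01 config=/tmp/DG_TEST.conf verbose=true
$ renamedg phase=two dgname=DG_TEST newdgname=DG01 config=/tmp/DG_TEST.conf verbose=true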
