
How to Upgrade Oracle Clusterware from 11g to 19c RAC

<<Back to Oracle RAC Main Page

Step by Step: How to Upgrade Oracle Clusterware from 11g to 19c RAC


Step1> Apply the Latest Patch
Download and apply the latest available patch to the existing 11g homes. Follow the instructions in the post linked below.
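
To see what is currently applied before and after patching, OPatch can list the installed patches (a quick check; $GI_HOME here refers to the existing 11g grid home):

# List patches currently applied to the GI home
$GI_HOME/OPatch/opatch lspatches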


Step2> Ensure the Following Patches Are Applied

Patch prerequisites: MOS notes 2539751.1 / 2180188.1

From the GI Home:
$GI_HOME/OPatch/opatch lsinventory |grep -e 28553832 -e 17617807 -e 21255373

From the DB Home (if the database is also to be upgraded to 19c):
$DB_HOME/OPatch/opatch lsinventory |grep -e 17617807 -e 21255373 -e 28553832

For managing older databases with 19c ASM:
$GI_HOME/OPatch/opatch lsinventory |grep -e 23186035
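
To flag anything missing in one pass, a small loop like the sketch below can be used (it assumes $GI_HOME points at the grid home being checked):

# Report any required patch that is absent from the GI home inventory
for p in 28553832 17617807 21255373; do
  $GI_HOME/OPatch/opatch lsinventory | grep -q "$p" || echo "Patch $p is MISSING"
done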

Step3> Create Directories

As the root user, create the following directories and change the ownership and permissions accordingly:

# mkdir -p /ora_grid/product/19c/grid
# cd /ora_grid/product/
# chown -R oracle:oinstall 19c
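
If your site standards also call for explicit permissions, something like the following may be applied (the 775 mode is an assumption; adjust to your policy):

# chmod -R 775 19c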


Step4> Unzip the Clusterware Software

On the first node of the cluster, unzip the clusterware software into the Grid Home directory created in Step 3:

cd /stage/software/clusterware/linux_x86_64/LINUX_X86_64/grid_home
unzip -qd /ora_grid/product/19c/grid LINUX.X64_193000_grid_home.zip
cd /ora_grid/product/19c/grid
ls
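
It is also worth verifying the integrity of the downloaded zip by comparing its checksum with the value published on the Oracle software download page (no value is hard-coded here):

# Print the SHA-256 checksum of the downloaded software
sha256sum LINUX.X64_193000_grid_home.zip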

Step5> Install/Upgrade AHF on All Nodes

Download the latest version of AHF: Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAchk (Doc ID 2550798.1)

unzip -d /ora_app /stage/software/ahf/AHF-LINUX_v24.9.0.zip

cd /ora_app

As root:
# ./ahf_setup -local

Refer to the output log at the bottom of this post for a sample run.
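
To confirm the AHF/TFA services are up after the install or upgrade, tfactl can be queried (the path assumes the AHF location shown in the sample output below):

# Check TFA status after AHF setup
/ora_grid/oracle.ahf/tfa/bin/tfactl status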


Step6> Run orachk (Optional)
cd /opt/oracle.ahf/bin
./orachk -u -o pre
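
orachk writes its findings into a timestamped collection directory whose exact location varies by AHF version, so the glob below is only an illustration of how to locate the latest HTML report:

# Find the most recent orachk HTML report (path pattern is an assumption)
ls -t /opt/oracle.ahf/data/*/orachk/*/orachk*.html 2>/dev/null | head -1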

Step7> Verify Prerequisites

Clusterware software installation prerequisites:
/ora_grid/product/19c/grid/runcluvfy.sh stage -pre crsinst -n node1,node2,node3,node4

Clusterware upgrade prerequisites:
/ora_grid/product/19c/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /ora_grid/product/11.2.0.4/grid -dest_crshome /ora_grid/product/19c/grid -dest_version 19.0.0.0.0 -fixup -verbose
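
Because the -verbose output is long, capturing it for later review can help; the same precheck piped through tee is shown below (the log path is just an example):

# Run the upgrade precheck and keep a copy of the output
/ora_grid/product/19c/grid/runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /ora_grid/product/11.2.0.4/grid -dest_crshome /ora_grid/product/19c/grid -dest_version 19.0.0.0.0 -fixup -verbose | tee /tmp/cluvfy_upgrade_precheck.log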


Step8> Apply the Latest RU on the 19c Grid Home Installed in Step 4

/ora_grid/product/19c/grid/gridSetup.sh -silent -applyRU /stage/software/clusterware/linux_x86_64/19c/sap/GIRU19P_2308-70004550/35319490

NOTE: This example is based on an Oracle SAP RAC installation. The patch number/name for a standard Oracle RAC installation may differ, but the step remains the same.

Step9> Perform Dry Run Upgrade

The dry-run mode walks through the upgrade and validates readiness without changing the active Grid Infrastructure configuration:

/ora_grid/product/19c/grid/gridSetup.sh -dryRunForUpgrade

Step10> Stop All Databases

srvctl stop database -d <DB_NAME>
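
To stop every database registered with the clusterware in one pass, a loop over srvctl config database can be used (a sketch; run as the database software owner with the database home environment set):

# Stop each database known to the cluster
for db in $(srvctl config database); do
  srvctl stop database -d "$db"
done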

Step11> Backup GI and OH along with OCR

cd /ora_app/product/
tar -pcvf /stage/racsap/oracle_home_bkup_node1.tar 11.2.0.4
tar -pcvf /stage/racsap/oracle_home_bkup_node2.tar 11.2.0.4
....

cd /ora_grid/product/11.2.0.4

tar -pcvf /stage/racsap/gi_home_bkup_node1.tar grid
tar -pcvf /stage/racsap/gi_home_bkup_node2.tar grid
......
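
The tar commands above cover the homes; for the OCR itself, take a manual backup as root from the existing 11g grid home:

# As root: take a manual OCR backup and list available backups
/ora_grid/product/11.2.0.4/grid/bin/ocrconfig -manualbackup
/ora_grid/product/11.2.0.4/grid/bin/ocrconfig -showbackup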


Step12> Perform Upgrade

Unset the following environment variables before proceeding, then use the echo and env checks that follow to verify nothing Oracle-related remains set:

unset ORACLE_HOME
unset ORACLE_BASE
unset ORACLE_SID
unset ORA_CRS_HOME
unset ORA_NLS10
unset TNS_ADMIN
unset MANPATH
unset TVDPERL_BASE
unset TVDPERLLIB
unset LD_LIBRARY_PATH
unset PERL_HOME_DEFAULT
unset BE_ALIASES
unset ORA_MODULE
unset PERL_HOME
unset BESAVE_ALLBEDB_TNS_ADMIN


echo $ORACLE_HOME
echo $ORACLE_BASE
echo $ORACLE_SID
echo $ORA_CRS_HOME
echo $ORA_NLS10
echo $TNS_ADMIN
echo $PATH
env |grep ORA
env |grep TNS

export PATH=/ora_grid/product/11.2.0.4/grid/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/home/oracle/bin:/bin:/usr/bin/X11:/usr/games:/ora_grid/product/19c/grid/bin

/ora_grid/product/19c/grid/gridSetup.sh

Follow the on-screen instructions to proceed with the upgrade. When prompted, run rootupgrade.sh.
rootupgrade.sh must be executed on node1 first; wait for it to finish.
Then execute it on all remaining nodes except the last node. Once it has finished on all of those nodes, run rootupgrade.sh on the last node.
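
After rootupgrade.sh completes on the last node, the new active version and overall cluster health can be verified with standard crsctl commands:

# Confirm the cluster is now running at the 19c active version
/ora_grid/product/19c/grid/bin/crsctl query crs activeversion
/ora_grid/product/19c/grid/bin/crsctl query crs softwareversion
/ora_grid/product/19c/grid/bin/crsctl check cluster -all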


==========================================================================
Sample Output: Step 5 (AHF install/upgrade)
# ./ahf_setup -local
AHF Installer for Platform Linux Architecture x86_64
AHF Installation Log : /tmp/ahf_install_237000_27012_2023_08_07-12_45_33.log
Starting Autonomous Health Framework (AHF) Installation
AHF Version: 23.7.0 Build Date: 202307281326
AHF is already installed at /ora_grid/oracle.ahf
Installed AHF Version: 21.4.0 Build Date: 202112200745
Do you want to upgrade AHF [Y]|N : Y
Upgrading /ora_grid/oracle.ahf
Shutting down AHF Services
Upgrading AHF Services
Beginning Retype Index
TFA Home: /ora_grid/oracle.ahf/tfa
Moving existing indexes into temporary folder
Index file for index moved successfully
Index file for index_metadata moved successfully
Index file for complianceindex moved successfully
Moved indexes successfully
Starting AHF Services
Do you want AHF to store your My Oracle Support Credentials for Automatic Upload ? Y|[N] : N
.------------------------------------------------------------------.
| Host       | TFA Version | TFA Build ID         | Upgrade Status |
+------------+-------------+----------------------+----------------+
| NODE1 |  23.7.0.0.0 | 23700020230728132609 | UPGRADED       |
'------------+-------------+----------------------+----------------'
Setting up AHF CLI and SDK
AHF is successfully upgraded to latest version
Moving /tmp/ahf_install_237000_27012_2023_08_07-12_45_33.log to /ora_log/oracle.ahf/data/NODE1/diag/ahf/

==========================================================================
Sample Output: rootupgrade.sh execution log

Node 1
node1:/ora_grid/product/19c/grid/cfgtoollogs/opatchauto # /ora_grid/product/19c/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /ora_grid/product/19c/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /ora_grid/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /ora_app/crsdata/node1/crsconfig/crsupgrade_node1_2024-04-12_11-27-08AM.log
2024/04/12 11:27:17 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2024/04/12 11:27:17 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2024/04/12 11:27:18 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2024/04/12 11:27:18 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2024/04/12 11:27:18 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2024/04/12 11:27:55 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2024/04/12 11:29:32 CLSRSC-693: CRS entities validation completed successfully.
2024/04/12 11:29:35 CLSRSC-464: Starting retrieval of the cluster configuration data
Failure 19 at Cluster Synchronization Services context initialization
CRS-4693: Failed to back up the voting file for Cluster Synchronization Service.
CRS-4000: Command Backup failed, or completed with errors.
crsctl backup votedisk on OCR failed
2024/04/12 11:29:43 CLSRSC-515: Starting OCR manual backup.
2024/04/12 11:29:52 CLSRSC-516: OCR manual backup successful.
2024/04/12 11:29:59 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2024/04/12 11:29:59 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2024/04/12 11:29:59 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2024/04/12 11:30:04 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2024/04/12 11:30:04 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2024/04/12 11:30:05 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2024/04/12 11:30:13 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2024/04/12 11:30:13 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2024/04/12 11:30:38 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2024/04/12 11:30:38 CLSRSC-482: Running command: '/ora_grid/product/19c/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /ora_grid/product/11.2.0.4/grid -oldCRSVersion 11.2.0.4.0 -firstNode true -startRolling true '

ASM configuration upgraded in local node successfully.

2024/04/12 11:30:46 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2024/04/12 11:30:53 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2024/04/12 11:31:20 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2024/04/12 11:31:23 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2024/04/12 11:31:24 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2024/04/12 11:31:34 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2024/04/12 11:31:34 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2024/04/12 11:31:39 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2024/04/12 11:31:44 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2024/04/12 11:31:44 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2024/04/12 11:32:12 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2024/04/12 11:32:23 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2024/04/12 11:32:27 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2024/04/12 11:46:22 CLSRSC-343: Successfully started Oracle Clusterware stack
2024/04/12 11:46:26 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2024/04/12 11:46:28 CLSRSC-474: Initiating upgrade of resource types
2024/04/12 11:47:29 CLSRSC-475: Upgrade of resource types successfully initiated.
2024/04/12 11:47:35 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2024/04/12 11:47:42 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
node1:/ora_grid/product/19c/grid/cfgtoollogs/opatchauto #




Node 2

node2:~ # /ora_grid/product/19c/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /ora_grid/product/19c/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /ora_grid/product/19c/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /ora_app/crsdata/node2/crsconfig/crsupgrade_node2_2024-04-12_11-49-19AM.log
2024/04/12 11:49:23 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2024/04/12 11:49:23 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2024/04/12 11:49:23 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2024/04/12 11:49:23 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2024/04/12 11:49:28 CLSRSC-464: Starting retrieval of the cluster configuration data
2024/04/12 11:49:31 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2024/04/12 11:49:31 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2024/04/12 11:49:31 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2024/04/12 11:49:33 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2024/04/12 11:49:33 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.

ASM configuration upgraded in local node successfully.

2024/04/12 11:49:38 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2024/04/12 11:49:59 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2024/04/12 11:50:07 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2024/04/12 11:50:11 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2024/04/12 11:50:12 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2024/04/12 11:50:18 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2024/04/12 11:50:18 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2024/04/12 11:50:19 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2024/04/12 11:50:20 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2024/04/12 11:50:20 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2024/04/12 11:50:40 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2024/04/12 11:50:47 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2024/04/12 11:50:48 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2024/04/12 11:51:39 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 19 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2024/04/12 11:51:48 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
Start upgrade invoked..
Upgrading CRS managed objects
Upgrading 71 CRS resources
Completed upgrading CRS resources
Upgrading 72 CRS types
Completed upgrading CRS types
Upgrading 35 CRS server pools
Completed upgrading CRS server pools
Upgrading 2 old servers
Completed upgrading servers
CRS upgrade has completed.
2024/04/12 11:52:23 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2024/04/12 11:52:23 CLSRSC-482: Running command: '/ora_grid/product/19c/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Started to upgrade Oracle ACFS.
Oracle ACFS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 19.0.0.0.0.
2024/04/12 11:54:28 CLSRSC-479: Successfully set Oracle Clusterware active version
2024/04/12 11:54:33 CLSRSC-476: Finishing upgrade of resource types
2024/04/12 11:55:25 CLSRSC-477: Successfully completed upgrade of resource types
2024/04/12 11:55:42 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
Successfully updated XAG resources.
2024/04/12 11:55:58 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
node2:~ #
