
How to Cleanly Deinstall Oracle Grid Infrastructure for a Single Instance Environment


<<Back to Oracle ASM Main Page

How to Cleanly Uninstall Oracle Grid Infrastructure for a Standalone Server

Step1> Log in as the Oracle Grid Infrastructure owner (oragrid)
Step2> Change directory to $ORACLE_HOME/deinstall
$cd $ORACLE_HOME/deinstall
Step3> Execute ./deinstall and follow the instructions (see the command sketch below)
$./deinstall
Step4> Carefully answer the questions at each prompt
Step5> When prompted, run the generated command as the root user
Step6> Manually clean up any leftovers that remain
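
A minimal sketch of the whole sequence, run as the grid owner. The grid home path is taken from the log output below (/u01/oragrid/12.2.0.1/grid) and is only an example; substitute the path of your own installation.

export ORACLE_HOME=/u01/oragrid/12.2.0.1/grid   # grid infrastructure home (example path, assumed)
cd $ORACLE_HOME/deinstall                        # the deinstall tool ships inside the home
./deinstall                                      # interactive: answer the prompts carefully
# When the tool pauses and prints a roothas.sh command, run that command
# as root in a separate session, then press Enter here to continue.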

Deinstallation Logs
[oragrid@test1]$./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2018-08-31_10-52-00AM/logs/
############ ORACLE DECONFIG TOOL START ############

######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##

Checking for existence of the Oracle home location /u01/oragrid/12.2.0.1/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/oragrid/orabase
Checking for existence of central inventory location /u01/orainventory
Checking for existence of the Oracle Grid Infrastructure home /u01/oragrid/12.2.0.1/grid
## [END] Install check configuration ##
Traces log file: /tmp/deinstall2018-08-31_10-52-00AM/logs//crsdc_2018-08-31_10-53-16-AM.log
Network Configuration check config START
Network de-configuration trace file location: /tmp/deinstall2018-08-31_10-52-00AM/logs/netdc_check2018-08-31_10-53-17-AM.log
Specify all Oracle Restart enabled listeners that are to be de-configured. Enter .(dot) to deselect all. [LISTENER]:
Network Configuration check config END
Asm Check Configuration START
ASM de-configuration trace file location: /tmp/deinstall2018-08-31_10-52-00AM/logs/asmcadc_check2018-08-31_10-53-39-AM.log
Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/oragrid/12.2.0.1/grid.
ASM Diagnostic Destination : /u01/oragrid/orabase
ASM Diskgroups : +DG_TST_DATA,+DG_TST_FRA
ASM diskstring : AFD:*
Diskgroups will be dropped and all ASM filter driver labels will be cleared
De-configuring ASM will drop all the diskgroups and their contents at cleanup time. Also ASM filter driver labels will be cleared. This will affect all of the databases and ACFS that use this ASM instance(s).
 If you want to retain the existing diskgroups and associated ASM filter driver labels or if any of the information detected is incorrect, you can modify by entering 'y'. Do you  want to modify above information (y|n) [n]:
Database Check Configuration START
Database de-configuration trace file location: /tmp/deinstall2018-08-31_10-52-00AM/logs/databasedc_check2018-08-31_10-54-08-AM.log
Database Check Configuration END
######################### DECONFIG CHECK OPERATION END #########################

####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/oragrid/12.2.0.1/grid
Oracle Home selected for deinstall is: /u01/oragrid/12.2.0.1/grid
Inventory Location where the Oracle home registered is: /u01/orainventory
Following Oracle Restart enabled listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2018-08-31_10-52-00AM/logs/deinstall_deconfig2018-08-31_10-53-15-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2018-08-31_10-52-00AM/logs/deinstall_deconfig2018-08-31_10-53-15-AM.err'
######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2018-08-31_10-52-00AM/logs/databasedc_clean2018-08-31_10-54-13-AM.log
ASM de-configuration trace file location: /tmp/deinstall2018-08-31_10-52-00AM/logs/asmcadc_clean2018-08-31_10-54-13-AM.log
ASM Clean Configuration START

ASM Clean Configuration END
Network Configuration clean config START
Network de-configuration trace file location: /tmp/deinstall2018-08-31_10-52-00AM/logs/netdc_clean2018-08-31_11-06-42-AM.log
De-configuring Oracle Restart enabled listener(s): LISTENER
De-configuring listener: LISTENER
    Stopping listener: LISTENER
    Listener stopped successfully.
    Unregistering listener: LISTENER
    Listener unregistered successfully.
    Deleting listener: LISTENER
    Listener deleted successfully.
Listener de-configured successfully.
De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.
De-configuring backup files...
Backup files de-configured successfully.
The network configuration has been cleaned up successfully.
Network Configuration clean config END

---------------------------------------->
Run the following command as the root user or the administrator on node "test1".
/u01/oragrid/12.2.0.1/grid/crs/install/roothas.sh -force  -deconfig -paramfile "/tmp/deinstall2018-08-31_10-52-00AM/response/deinstall_OraGI12Home1.rsp"
Press Enter after you finish running the above commands
<----------------------------------------
######################### DECONFIG CLEAN OPERATION END #########################

####################### DECONFIG CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following Oracle Restart enabled listener(s) were de-configured successfully: LISTENER
The stopping and de-configuring of Oracle Restart failed. Fix the problem and rerun this tool to completely remove the Oracle Restart configuration and the software
The stopping and de-configuring of Oracle Restart failed. Fix the problem and rerun this tool to completely remove the Oracle Restart configuration and the software
Oracle Restart was already stopped and de-configured on node "test1"
Oracle Restart is stopped and de-configured successfully.
#######################################################################

############# ORACLE DECONFIG TOOL END #############
Using properties file /tmp/deinstall2018-08-31_10-52-00AM/response/deinstall_2018-08-31_10-53-15-AM.rsp
Location of logs /tmp/deinstall2018-08-31_10-52-00AM/logs/
############ ORACLE DEINSTALL TOOL START ############


####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/tmp/deinstall2018-08-31_10-52-00AM/logs/deinstall_deconfig2018-08-31_10-53-15-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2018-08-31_10-52-00AM/logs/deinstall_deconfig2018-08-31_10-53-15-AM.err'
######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to test1
Setting CRS_HOME to true
Setting oracle.installer.invPtrLoc to /tmp/deinstall2018-08-31_10-52-00AM/oraInst.loc
Setting oracle.installer.local to false
## [END] Preparing for Deinstall ##
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START
Detach Oracle home '/u01/oragrid/12.2.0.1/grid' from the central inventory on the local node : Done
Delete directory '/u01/oragrid/12.2.0.1/grid' on the local node : Done
Delete directory '/u01/orainventory' on the local node : Done
Oracle Universal Installer cleanup completed successfully
Oracle Universal Installer clean END

## [START] Oracle install clean ##

## [END] Oracle install clean ##

######################### DEINSTALL CLEAN OPERATION END #########################

####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/u01/oragrid/12.2.0.1/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/oragrid/12.2.0.1/grid' on the local node.
Failed to delete directory '/u01/orainventory' on the local node.
Failed to delete directory '/u01/oragrid/orabase' on the local node.
Oracle Universal Installer cleanup completed successfully.

Run 'rm -r /etc/oraInst.loc' as root on node(s) 'test1' at the end of the session.
Run 'rm -r /opt/ORCLfmap' as root on node(s) 'test1' at the end of the session.
Run 'rm -r /etc/oratab' as root on node(s) 'test1' at the end of the session.
Review the permissions and contents of '/u01/oragrid/orabase' on nodes(s) 'test1'.
If there are no Oracle home(s) associated with '/u01/oragrid/orabase', manually delete '/u01/oragrid/orabase' and its contents.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################

############# ORACLE DEINSTALL TOOL END #############

Root Script Execution Log
# /u01/oragrid/12.2.0.1/grid/crs/install/roothas.sh -force  -deconfig -paramfile "/tmp/deinstall2018-08-31_10-52-00AM/response/deinstall_OraGI12Home1.rsp"
Using configuration parameter file: /tmp/deinstall2018-08-31_10-52-00AM/response/deinstall_OraGI12Home1.rsp
The log of current session can be found at:
  /tmp/deinstall2018-08-31_10-52-00AM/logs/hadeconfig.log
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'test1'
CRS-2673: Attempting to stop 'ora.evmd' on 'test1'
CRS-2677: Stop of 'ora.evmd' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'test1'
CRS-2677: Stop of 'ora.cssd' on 'test1' succeeded
CRS-2673: Attempting to stop 'ora.driver.afd' on 'test1'
CRS-2677: Stop of 'ora.driver.afd' on 'test1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'test1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/08/31 11:09:08 CLSRSC-337: Successfully deconfigured Oracle Restart stack
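
Once roothas.sh reports CLSRSC-337, it can be worth confirming that the Oracle Restart stack is really gone before doing any manual cleanup. A quick sanity check using generic OS commands (run as root); the process check should return nothing:

ps -ef | grep -E 'ohasd|evmd|cssd|tnslsnr|asm_' | grep -v grep   # no Oracle Restart/ASM processes should remain
ls /etc/init.d/init.ohasd 2>/dev/null                            # typically removed by roothas.sh on Linux; absence is expected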

Leftover Cleanup Log
# rm -r /etc/oraInst.loc
rm: remove regular file ‘/etc/oraInst.loc’? y
[root@test1 ~]# rm -r /opt/ORCLfmap
rm: descend into directory ‘/opt/ORCLfmap’? y
rm: descend into directory ‘/opt/ORCLfmap/prot1_64’? y
rm: descend into directory ‘/opt/ORCLfmap/prot1_64/bin’? y
rm: remove regular file ‘/opt/ORCLfmap/prot1_64/bin/fmputl’? y
rm: remove regular file ‘/opt/ORCLfmap/prot1_64/bin/fmputlhp’? y
rm: remove directory ‘/opt/ORCLfmap/prot1_64/bin’? y
rm: descend into directory ‘/opt/ORCLfmap/prot1_64/etc’? y
rm: remove regular file ‘/opt/ORCLfmap/prot1_64/etc/filemap.ora’? y
rm: remove directory ‘/opt/ORCLfmap/prot1_64/etc’? y
rm: remove directory ‘/opt/ORCLfmap/prot1_64/log’? y
rm: remove directory ‘/opt/ORCLfmap/prot1_64’? y
rm: remove directory ‘/opt/ORCLfmap’? y
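
The deinstall summary above also reported that /u01/orainventory and /u01/oragrid/orabase could not be fully deleted, and asked for /etc/oratab to be removed. Assuming no other Oracle home on this server uses those locations, the remaining manual cleanup as root might look like the sketch below (paths taken from the log above; verify them first).

rm -f  /etc/oratab              # requested by the deinstall summary
rm -rf /u01/orainventory        # central inventory left behind (only if no other Oracle home registers here)
rm -rf /u01/oragrid/orabase     # Oracle Base left behind (only if no other Oracle home uses it)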

