
How to Dump a Block in Oracle


<<Back to DB Administration Main Page

Dumping Oracle Database Blocks

As we already know, data in an Oracle database resides in tablespaces (logical containers), and each tablespace has one or more datafiles (physical storage), so let's start there.
Step1> Create a tablespace
SQL> create tablespace TEST_AU_OS datafile '/u01/dbatest1/stage/database/db01.dbf' size 100M;
Tablespace created.
Step2> Find the Details of Your Tablespace and Datafile
SQL> select f.FILE#, f.NAME "File", t.NAME "Tablespace" from V$DATAFILE f, V$TABLESPACE t where t.NAME='TEST_AU_OS' and f.TS# = t.TS#;
     FILE# File                                                         Tablespace
---------- ------------------------------------------------------------ ---------------
        27 /u01/dbatest1/stage/database/db01.dbf                        TEST_AU_OS
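The same mapping can also be read from the DBA_DATA_FILES dictionary view, if you prefer it over the V$ views; an equivalent query, not part of the original session:
SQL> -- Lists the datafile(s) belonging to the TEST_AU_OS tablespace
SQL> select FILE_ID, FILE_NAME, TABLESPACE_NAME from DBA_DATA_FILES where TABLESPACE_NAME = 'TEST_AU_OS';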
Step3> Let's Create a Table and Populate Some Data
SQL> create table test (n number, name varchar2(16)) tablespace test_au_os;
Table created.
SQL> insert into test values (1,'TEST_FOR_FUN');
1 row created.
SQL> commit;
Commit complete.

At this point we know that my data is stored in a table called "TEST", which lives in the tablespace "TEST_AU_OS", and that the data physically resides in the datafile /u01/dbatest1/stage/database/db01.dbf, which belongs to the "TEST_AU_OS" tablespace.
A datafile is built from database blocks; in other words, space in a datafile is allocated in terms of data blocks (the standard DB block size is 8K). So let us find out the DB block size and the actual block number that holds my piece of data.
Step4> Find the DB Block Size and Block Number for a Particular Piece of Data
SQL> select ROWID,n,name from test where n=1;
ROWID                       N NAME
------------------ ---------- ----------------
AAAR4cAAbAAAACGAAA          1 TEST_FOR_FUN
SQL> select DBMS_ROWID.ROWID_BLOCK_NUMBER('AAAR4cAAbAAAACGAAA') "Block number" from DUAL;
Block number
------------
         134
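The file number can be derived from the same ROWID as well, instead of taking it from Step 2; an optional cross-check (note that DBMS_ROWID.ROWID_RELATIVE_FNO returns the relative file number, which matches the absolute FILE# here but can differ in databases with many datafiles):
SQL> select DBMS_ROWID.ROWID_RELATIVE_FNO('AAAR4cAAbAAAACGAAA') "File number" from DUAL;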
SQL> select BLOCK_SIZE from V$DATAFILE where FILE#=27;
BLOCK_SIZE
----------
      8192 <= Bytes
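As a quick cross-check, the default block size can also be read from the initialization parameter; this is a generic query and was not part of the original session:
SQL> -- Both commands report the default block size (8192 bytes here)
SQL> show parameter db_block_size
SQL> select VALUE from V$PARAMETER where NAME = 'db_block_size';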
At this point I know the location of my data: it is stored in block number 134, and the size of each block is 8K.
Once you have the block number, you can dump it using the ALTER SYSTEM DUMP DATAFILE command.
Step5> Dump the Individual Block
SQL> alter session set tracefile_identifier ='Block_dump';
Session altered.
SQL> ALTER SYSTEM DUMP DATAFILE 27 BLOCK 134;
System altered.
You can inspect the trace file created for the block dump to find the data stored in plain-text format.
$cat TESTA_ora_16058_Block_dump.trc |grep TEST_FOR_FUN
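If you are not sure where the trace file was written, the path can be looked up from the dumping session itself; a small helper query, assuming Oracle 11g or later where V$DIAG_INFO is available:
SQL> -- Shows the full path of the current session's trace file
SQL> select VALUE from V$DIAG_INFO where NAME = 'Default Trace File';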
Note:- You can dump an individual block or multiple blocks at a time.
To dump multiple blocks at once, use the command below (a worked example follows):
ALTER SYSTEM DUMP DATAFILE absolute_file_number BLOCK MIN minimum_block_number BLOCK MAX maximum_block_number;
absolute_file_number  <= datafile number
minimum_block_number  <= starting block number
maximum_block_number  <= ending block number
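For instance, using the file number from Step 2 and an illustrative range around the block found in Step 4:
SQL> -- Dumps blocks 130 through 140 of datafile 27 into the session's trace file
SQL> ALTER SYSTEM DUMP DATAFILE 27 BLOCK MIN 130 BLOCK MAX 140;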

Reading Data Stored in the Database Using the strings Command

Since my datafile is located on a filesystem mount point, if I know the disk and have root access, I can even read the data by running the strings command against that disk.
Let us see how
$df |grep /u01/dbatest1
Log in as the root user to read the data from the /dev/xvde disk
#strings /dev/xvde |grep TEST_FOR_FUN

Reading and Dumping Data Stored in the Database Using the dd Command

dd is another very interesting utility that we can use to read or dump data directly from the disk.
If I know my datafile name (see Step 2) together with the block number and block size (see Step 4) where my data is stored, I can read this block directly from the disk itself using the dd utility, even if the database is down. Let us see how
$dd if=/u01/dbatest1/stage/database/db01.dbf bs=8192 skip=142 count=1|strings
Where
bs    <= database block size
skip  <= block number + header blocks (i.e. 134 + 8)
count <= number of blocks to be read or dumped
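If you want to inspect the raw block contents rather than just the printable strings, the same dd command can be piped through a hex viewer instead; a minimal variation, assuming od is available on the host:
$ # Dump the same 8K block in hex/ASCII instead of extracting strings only
$ dd if=/u01/dbatest1/stage/database/db01.dbf bs=8192 skip=142 count=1 2>/dev/null | od -c | less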

How to Dump the Header Block

The procedure for dumping the header block is the same as for dumping a normal block. Find the header block
using the query below
select header_file, header_block from dba_segments where segment_name = 'TABLE_NAME';
SQL>  select header_file, header_block from dba_segments where segment_name = 'TEST';
HEADER_FILE HEADER_BLOCK
----------- ------------
         27          130
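With these values, the header block can be dumped exactly like the data block in Step 5; the file and block numbers below are simply the ones returned by the query above:
SQL> ALTER SYSTEM DUMP DATAFILE 27 BLOCK 130;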
