Database Redo Logs and Redo Log Buffer Pool In Detail
This post is a continuation of the Oracle Database Architecture series. In it I will explore redo logs: what they are, why they are so important, what purpose they fulfill, and how Oracle manages them. If you have already gone through my previous posts on Oracle database architecture, you are good to proceed; if you missed them, it is worth reviewing them first. The links are here:
Oracle Database Architecture Overview Explains the high-level architecture
Database Buffer Cache in Detail Explains how the buffer cache works and how it is implemented
Database Shared Pool in Detail Explains all you want to know about the shared pool
So let's get started without wasting any time.
What Are Redo Logs?
Oracle Database records all changes to the database in the form of redo. Redo entries first reside in a memory area in the SGA called the redo log buffer, whose size is controlled by the initialization parameter LOG_BUFFER. LGWR is the background process that writes the buffered redo from the redo log buffer to the online redo log files (files on disk) to preserve the changes in case of an instance crash.
What Are the Uses of Redo Logs?
Oracle uses redo logs for many things, for example:
- Recovery (instance and media)
- LogMiner
- Oracle Streams
Exciting features like Data Guard would not have been possible without redo logs.
Redo Log Buffer
The redo log buffer is a circular buffer in the SGA that holds the change vectors (redo entries) describing changes made to the database. When a request to change a piece of data in the database is triggered, Oracle performs the following actions (most probably in this sequence):
- The server process creates the change vector for the undo record in the PGA.
- The server process creates the change vector for the data block in the PGA.
- The server process then combines the change vectors to prepare the redo record to write into the redo log buffer.
- The process then searches for free space in the redo log buffer using the redo allocation latch.
- Once the session allocates space in the log buffer, it acquires the redo copy latch to write the redo entries into this free space.
- The undo entries are then written to the undo block.
- And finally the change to the actual data block happens in the database buffer cache.
- The LGWR process then writes the content of the log buffer to the online redo log files, using the redo writing latch, to preserve the changes.
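The sequence above can be sketched as a toy model (the data structures and function here are my own illustration, not Oracle code; the latching steps are elided):

```python
# Toy sketch of how one change travels from the PGA to the redo log
# buffer, the undo block, and the data block, in the order listed above.
def apply_change(pga, log_buffer, undo_block, data_block, change):
    # Steps 1-2: build the undo and data change vectors in the PGA.
    undo_cv = ("undo", change)
    data_cv = ("data", change)
    # Step 3: combine them into a single redo record.
    redo_record = (undo_cv, data_cv)
    pga.append(redo_record)
    # Steps 4-5: allocate space in the log buffer and copy the record in
    # (redo allocation and redo copy latches elided).
    log_buffer.append(redo_record)
    # Step 6: apply the undo entry to the undo block.
    undo_block.append(undo_cv)
    # Step 7: finally apply the change to the data block in the cache.
    data_block.append(change)
```

Note that the redo record reaches the log buffer before the data block itself is touched: the change is journaled first.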
When Does the LGWR Process Write the Changes From the Redo Log Buffer to the Online Logfile?
The LGWR process sequentially writes the changes from the redo log buffer to the online logfile when any of the following conditions is met:
- A user commits or rolls back a transaction
- An online redo log switch occurs
- Every 3 seconds
- The redo log buffer is one-third full or contains 1 MB of buffered data
- DBWR must write modified buffers to disk
- An ALTER SYSTEM CHECKPOINT command is issued
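Putting the trigger conditions together, a hedged sketch (the function and parameter names are mine, not Oracle's) might look like:

```python
ONE_MB = 1024 * 1024

def lgwr_should_write(committed, log_switch, secs_since_write,
                      used_bytes, buffer_size,
                      dbwr_needs_write, checkpoint_issued):
    # True when any of the documented trigger conditions holds.
    return (committed
            or log_switch
            or secs_since_write >= 3
            or used_bytes >= buffer_size / 3
            or used_bytes >= ONE_MB
            or dbwr_needs_write
            or checkpoint_issued)
```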
I assume that by now you have a fair idea about redo logs, the log buffer and LGWR, and are ready to make things more complex.
As you just read, "the redo log buffer is a circular buffer in the SGA". The picture below tries to visualize how Oracle manages pointers to mark the start of free space, the end of free space, the used space, and the part of the log buffer whose content LGWR is currently writing to the online logfile.
As you can see, there are three important markers: one identifying the start of free space, another identifying the end of free space, and a third that is a flag showing whether or not LGWR is busy writing. Once LGWR completes its write, it moves the end-of-free-space pointer to publish the newly freed space and continues with the more recently filled buffers. Meanwhile other sessions generate more redo, filling up the buffer and moving the start-of-free-space pointer forward until it reaches the end of the buffer and cycles back to the beginning.
Now let us examine it case by case.
How does LGWR know there is more than 1MB of redo while it is sleeping?
As we now know, LGWR writes every 3 seconds: it flushes the buffer to the online logfile, sleeps for 3 seconds, wakes up, performs the next write, sleeps again, and so on. The question is how LGWR knows that there is more than 1MB of redo while it is sleeping. The answer is simple: each session checks the total allocated space in the redo log buffer every time it allocates space; if the total used space (the space between the start of free space and the end of free space) exceeds 1MB, it posts a message to LGWR, which interrupts LGWR's sleep, and LGWR starts writing.
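The session-side check could be sketched like this (the names are hypothetical; the modulo models the wrap of the circular buffer):

```python
ONE_MB = 1024 * 1024

def after_allocation(start_free, end_free, buffer_size, post_lgwr):
    # Used space lies between the end-of-free and start-of-free
    # pointers; the modulo handles wrap-around in the circular buffer.
    used = (start_free - end_free) % buffer_size
    if used > ONE_MB:
        post_lgwr()  # interrupts LGWR's 3-second sleep
```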
What if LGWR was not sleeping but writing when the session requested a write?
The next question is: what if LGWR was not sleeping but writing at the moment a session identified that there is more than 1MB of redo? Does the session still post a message to LGWR? The answer is no. Remember the write flag which LGWR sets when it starts writing: any session that wants to signal LGWR checks this flag to see whether LGWR is already writing, and LGWR clears the flag once it finishes. LGWR takes the redo writing latch to set and clear the flag (which means two gets for every write), and each session has to take the latch to read the flag so that it doesn't read it while it is in flux. In both cases the gets on the latch are in willing-to-wait mode.
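The write-flag protocol can be modelled roughly as follows, using a Python lock to stand in for the redo writing latch (an analogy, not how latches are really implemented):

```python
import threading

class RedoWriting:
    # A lock models the 'redo writing' latch; the boolean is the flag.
    def __init__(self):
        self.latch = threading.Lock()
        self.writing = False

    def session_should_post(self):
        # Sessions take the latch just to read the flag consistently;
        # if LGWR is already writing there is no point posting it.
        with self.latch:
            return not self.writing

    def lgwr_start_write(self):
        with self.latch:               # latch get #1 for this write
            self.writing = True

    def lgwr_finish_write(self):
        with self.latch:               # latch get #2 for this write
            self.writing = False
```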
What happens to a session that requested LGWR to write while LGWR was busy?
When a session posts a message to LGWR to write the log, it goes onto the "log file sync" wait event and sets a sleep timer; when it wakes up, it checks whether its write request has been satisfied before continuing. Alternatively, if LGWR finishes writing while the session is still sleeping, it checks the sessions waiting on "log file sync" and notifies them to continue.
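The session's side of this wait can be sketched as a simple poll loop (an illustration of the behaviour described, not Oracle's actual code; in reality LGWR can also end the wait early by posting the session):

```python
def log_file_sync_wait(write_done, sleep):
    # Sleep, wake, and re-check whether LGWR has satisfied the
    # session's write request; returns how many sleeps were needed.
    waits = 0
    while not write_done():
        sleep()
        waits += 1
    return waits
```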
How does a session acquire space in the Redo Log buffer?
When a session wants to write a redo record into the log buffer, it first gets a redo copy latch (through the redo copy latch it identifies the buffer it will copy into). It then gets the redo allocation latch; after successfully acquiring it, the session moves the start-of-free-space pointer and drops the redo allocation latch. The session then copies the redo record into the log buffer (provided sufficient space is available) while still holding the redo copy latch. If the allocation has taken the used space over one-third of the log buffer or over 1MB, or if the redo record was a commit record, the session posts a signal to LGWR to write (but gets the redo writing latch first to check whether LGWR is already writing). If the redo record was a commit record, the session also increments redo synch writes and waits on the "log file sync" wait event.
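A rough model of this allocation protocol, with Python locks standing in for the latches (the class and its names are my own invention, and the write-flag check inside post_lgwr is elided):

```python
import threading

ONE_MB = 1024 * 1024

class LogBuffer:
    # Hypothetical stand-in for the public log buffer and its latches.
    def __init__(self, size):
        self.size = size
        self.start_free = 0            # allocation high-water mark
        self.data = bytearray()
        self.redo_copy = threading.Lock()
        self.redo_alloc = threading.Lock()
        self.lgwr_posted = False

    def post_lgwr(self):
        self.lgwr_posted = True        # (redo writing latch elided)

def write_redo(buf, record, is_commit):
    buf.redo_copy.acquire()            # pick a redo copy latch first
    with buf.redo_alloc:               # then the redo allocation latch
        buf.start_free += len(record)  # move start-of-free-space pointer
    buf.data.extend(record)            # copy under the redo copy latch
    buf.redo_copy.release()
    threshold = min(buf.size // 3, ONE_MB)  # one-third or 1MB,
    if is_commit or buf.start_free > threshold:  # whichever is smaller
        buf.post_lgwr()
```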
What if sufficient space is not available in the Redo log buffer?
If, after acquiring the redo copy and redo allocation latches, the session finds that there is not sufficient free space in the redo log buffer, it drops the redo allocation latch and acquires the redo writing latch to check whether LGWR is writing. If LGWR is not writing, the session posts it a signal to write; otherwise it drops the redo writing latch and waits on the "log buffer space" wait event. When LGWR finishes writing, it checks the sessions waiting on "log buffer space" and signals them to proceed.
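The decision the session makes at this point can be sketched as (the callback names are hypothetical):

```python
def handle_buffer_full(lgwr_is_writing, post_lgwr, wait_log_buffer_space):
    # Runs after the session has dropped the redo allocation latch;
    # the lgwr_is_writing() check is made under the redo writing latch.
    if lgwr_is_writing():
        wait_log_buffer_space()   # sleep on the 'log buffer space' event
    else:
        post_lgwr()               # ask LGWR to write and free space
```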
NOTE: As we can see, after finishing its write LGWR checks both the sessions waiting on the "log buffer space" wait event and the sessions waiting on "log file sync", and notifies them to continue.
How LGWR Writes Redo From the Log Buffer to Disk
LGWR first acquires the redo writing latch and sets the write flag, so that incoming sessions know LGWR is busy writing and do not post further messages. It then drops the redo writing latch and acquires the redo allocation latch to identify the highest point allocated in the log buffer at that moment. It moves the pointer up to the end of that block, drops the latch, and starts copying the log buffer to disk.
As LGWR completes its write, it clears the write flag (getting and releasing the redo writing latch again), moves the "end of free space" pointer up to the point it has just finished writing (getting and releasing the redo allocation latch), and runs through the lists of sessions waiting on the "log file sync" and "log buffer space" wait events, signaling them to continue.
If LGWR notices that there are still sessions waiting on "log file sync" that were not cleared by its write, it goes through the cycle again.
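One LGWR pass, as described above, might be modelled like this (the dict and its keys are my own stand-ins for SGA structures, and Python locks stand in for the latches):

```python
import threading

def lgwr_write_cycle(buf):
    with buf["redo_writing"]:
        buf["write_flag"] = True       # sessions now see LGWR as busy
    with buf["redo_alloc"]:
        target = buf["start_free"]     # highest point allocated so far
    buf["on_disk"] = target            # the copy of the buffer to disk
    with buf["redo_writing"]:
        buf["write_flag"] = False      # clear the flag
    with buf["redo_alloc"]:
        buf["end_free"] = target       # publish the newly freed space
    woken = list(buf["waiters"])       # wake 'log file sync' and
    buf["waiters"].clear()             # 'log buffer space' waiters
    return woken
```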
Multiple Public Redo Log Buffers and Private Strands
Now that you know how the redo copy, redo allocation and redo writing latches are used, imagine a very busy system where multiple sessions concurrently generate lots of redo and compete for the redo allocation/redo writing latches; this contention will kill system performance. To overcome this, Oracle implemented multiple log buffers starting from 10g (I never got a chance to try this out on 9i).
A hidden parameter _log_parallelism_dynamic (default value TRUE) controls whether multiple redo log buffers will be configured, and another hidden parameter _log_parallelism_max controls the maximum number of log buffer strands. The value of _log_parallelism_max is calculated automatically by Oracle based on the number of CPUs. In my observation I found the value 2 on servers with 4, 8 and 16 CPUs, and the value 4 on servers with 64 and 72 CPUs. I am not sure, but I guess Oracle derives the value of this parameter using the formula strands = CPUs/16.
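My guess above can be expressed as a tiny function; note this is pure speculation from my observations, not documented behaviour:

```python
def guessed_max_strands(cpu_count):
    # Speculative: 2 strands up through 16 CPUs (an apparent floor),
    # then roughly CPUs/16 beyond that.
    return max(2, cpu_count // 16)
```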
You can verify the concept of multiple log buffers using the query below:
SQL> select strand_size_kcrfa from x$kcrfstrand where ptr_kcrf_pvt_strand = hextoraw(0) and pnext_buf_kcrfa_cln != hextoraw(0);
STRAND_SIZE_KCRFA
-----------------
6660096
6660096
If you sum these up you get 13008K, which is the size of log_buffer in my case:
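A quick arithmetic check that the two strand sizes reported by x$kcrfstrand add up to the configured log_buffer:

```python
# Each public strand is 6660096 bytes; together they should equal
# the log_buffer setting (13008K here).
strands = [6660096, 6660096]
total_kb = sum(strands) // 1024
print(total_kb)  # 13008
```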
SQL> show parameter log_buffer
log_buffer big integer 13008K
Another way to verify the mechanism of multiple log buffers is by performing an SGA dump:
SQL> oradebug setmypid
Statement processed.
SQL> oradebug dumpvar sga kcrf_max_strands
uword kcrf_max_strands_ [0600283C8, 0600283CC) = 00000002
SQL> oradebug dumpvar sga kcrf_actv_strands
uword kcrf_actv_strands_ [0600283E0, 0600283E4) = 00000001
From the dump above you can see that the maximum number of public redo strands is 2 and only 1 is active. This is because my database is not very busy. After a little investigation I came across Metalink Note 372557.1, which states:
"The initial allocation for the number of strands depends on the number of CPU's and is started with 2 strands with one strand for active redo generation. For large scale enterprise systems the amount of redo generation is large and hence these strands are *made active* as and when the foregrounds encounter this redo contention (allocated latch related contention) when this concept of dynamic strands comes into play."
SQL> select count(*) from v$latch_children where name = 'redo allocation';
COUNT(*)
----------
2
1 row selected.
Private Strands or Private Redo
Recall the redo generation behavior discussed at the beginning of this post, where redo is prepared in the PGA and then copied into the log buffer using the redo copy, redo allocation and redo writing latches, as we have studied so far. If Oracle could somehow reduce this overhead and copy the redo directly from a private area to the logfile, it would optimize redo log buffer usage and enhance performance. To achieve this, Oracle introduced another enhancement in 10g, known as private redo or private strands, for which Oracle allocates space for redo generation directly in the SGA (shared pool).
The private strand SGA buffers are allocated at startup time. The first point to note is that the number of strands in use, both private and public, is dynamic, and Oracle tries to keep the number to a minimum.
The second point is that the basic handling of the public strands does not change: when a session wants to copy something into a public strand, the method is just the same as always.
The maximum number of private redo strands is derived as TRANSACTIONS/10 and is controlled by the hidden parameter _log_private_parallelism_mul. A session tries to find a free private strand; if none is available, the session uses one of the public strands. Each private strand has its own redo allocation latch.
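If the description above is right, the ceiling on private strands is derived simply from the TRANSACTIONS parameter (a sketch of my understanding, not an official formula):

```python
def max_private_strands(transactions):
    # TRANSACTIONS / 10, the divisor being _log_private_parallelism_mul
    # (10 by default), per the description above.
    return transactions // 10
```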
Using the query below you can find the size allocated to private strands:
SQL> select pool, name, bytes from v$sgastat where name like 'private strands';
Now, with private strands allocated in the shared pool, processes prepare redo in the shared pool itself instead of the PGA, and the LGWR process writes it directly to the online logfile, skipping the log buffer.
With the implementation of this feature, the first image provided in this blog changes slightly and looks somewhat like this.
Misc Information Related to Redo Log Buffer and LGWR Process
Related Events
log file switch completion
log file sync
log file sequential read
log file sync: SCN ordering
target log write size
log file parallel write
log switch/archive
log file single write
switch logfile command
LGWR wait for redo copy
Related Latches
latch: redo writing
latch: redo copy
latch: redo allocation
Related Hidden Parameters
Name Value Description
_log_checkpoint_recovery_check 0 # redo blocks to verify after checkpoint
_log_event_queues 0 number of the log writer event queues
_log_switch_timeout 0 Maximum number of seconds redos in the current log could span
_log_buffers_debug FALSE debug redo buffers (slows things down)
_log_buffers_corrupt FALSE corrupt redo buffers before write
_log_simultaneous_copies 144 number of simultaneous copies into redo buffer(# of copy latches)
_log_parallelism_max 4 Maximum number of log buffer strands
_log_parallelism_dynamic TRUE Enable dynamic strands
_log_private_parallelism_mul 10 Active sessions multiplier to deduce number of private strands
_log_private_mul 5 Private strand multiplier for log space preallocation
_log_read_buffer_size 8 buffer size for reading log files
_log_buffer_coalesce FALSE Coalescing log buffers for log writes
_log_file_sync_timeout 10 Log file sync timeout (centiseconds)
_log_write_info_size 4096 Size of log write info array
_log_max_optimize_threads 128 maximum number of threads to which log scan optimization is applied
_log_read_buffers 8 Number of log read buffers for media recovery
_log_committime_block_cleanout TRUE Log commit-time block cleanout
_log_space_errors TRUE should we report space errors to alert log
_log_segment_dump_parameter TRUE Dump KSP on Log Segmentation
_log_segment_dump_patch TRUE Dump Patchinfo on Log Segmentation
_redo_compatibility_check FALSE general and redo/undo compatibility sanity check
_redo_log_record_life 168 Life time in hours for redo log table records
_redo_log_debug_config 0 Various configuration flags for debugging redo logs
_redo_log_check_backup 10 time interval in minutes between wakeups to check backup of redo logs
_redo_read_from_memory TRUE Enable reading redo from in-memory log buffer