
Oracle Instance Tuning Using Performance Views

    Instance Tuning Steps

    These are the main steps in the Oracle performance method for instance tuning:

    1. Define the Problem
    2. Examine the Host System and Examine the Oracle Database Statistics
    3. Implement and Measure Change
    4. Determine whether the performance objective defined in step 1 has been met. If not, then repeat steps 2 and 3 until the performance goals are met.

    This section covers instance tuning using the Oracle Database dynamic performance views. However, Oracle recommends using Automatic Workload Repository (AWR) and Automatic Database Diagnostic Monitor (ADDM) for statistics gathering, monitoring, and tuning because of their extended feature set.

    Identifying I/O Problems Using V$ Views

    Check the Oracle wait event data in V$SYSTEM_EVENT to see whether the top wait events are I/O related. I/O-related events include db file sequential read, db file scattered read, db file single write, db file parallel write, and log file parallel write. These events correspond to I/Os performed against data files and log files. If any of these wait events show a high average wait time, then investigate the I/O contention.
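
    As a starting point, the following sketch ranks non-idle wait events by total time waited; EVENT, TOTAL_WAITS, TIME_WAITED_MICRO, and WAIT_CLASS are documented V$SYSTEM_EVENT columns in recent releases:

    SELECT EVENT, TOTAL_WAITS, TIME_WAITED_MICRO/1000000 SECONDS_WAITED
      FROM V$SYSTEM_EVENT
     WHERE WAIT_CLASS <> 'Idle'
     ORDER BY TIME_WAITED_MICRO DESC;

    I/O-related events near the top of this output are candidates for the investigation described above.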

    Cross-reference the host I/O system data with the I/O sections in the Automatic Workload Repository (AWR) report to identify hot data files and tablespaces. Also compare the I/O times reported by the operating system with the times reported by Oracle Database to see if they are consistent.

    An I/O problem can also manifest itself with non-I/O related wait events. For example, the difficulty in finding a free buffer in the buffer cache or high wait times for logs to be flushed to disk can also be symptoms of an I/O problem. Before investigating whether the I/O system should be reconfigured, determine if the load on the I/O system can be reduced.

    To reduce I/O load caused by Oracle Database, examine the I/O statistics collected for all I/O calls made by the database using the following views:

    • V$IOSTAT_CONSUMER_GROUP: This view captures I/O statistics for consumer groups. If Oracle Database Resource Manager is enabled, I/O statistics for all consumer groups that are part of the currently enabled resource plan are captured.
    • V$IOSTAT_FILE: This view captures I/O statistics of database files that are or have been accessed. The SMALL_SYNC_READ_LATENCY column displays the latency for single block synchronous reads (in milliseconds), which translates directly to the amount of time that clients need to wait before moving on to the next operation. This defines the responsiveness of the storage subsystem based on the current load. If there is a high latency for critical data files, you may want to consider relocating these files to improve their service time. To calculate latency statistics, TIMED_STATISTICS must be set to TRUE.
    • V$IOSTAT_FUNCTION: This view captures I/O statistics for database functions (such as the LGWR and DBWR). An I/O can be issued by various Oracle processes with different functionalities. The top database functions are classified in the V$IOSTAT_FUNCTION view. When there is a conflict of I/O functions, the I/O is placed in the bucket with the lower FUNCTION_ID. For example, if XDB issues an I/O from the buffer cache, the I/O is classified as an XDB I/O because XDB has a lower FUNCTION_ID value. Any unclassified function is placed in the Others bucket. You can display the FUNCTION_ID hierarchy by querying the V$IOSTAT_FUNCTION view:

      SELECT FUNCTION_ID, FUNCTION_NAME FROM V$IOSTAT_FUNCTION ORDER BY FUNCTION_ID;
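
    For example, the following sketch lists files by single block synchronous read latency, using the V$IOSTAT_FILE columns described above:

    SELECT FILE_NO, FILETYPE_NAME, SMALL_SYNC_READ_LATENCY
      FROM V$IOSTAT_FILE
     ORDER BY SMALL_SYNC_READ_LATENCY DESC;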

    These V$IOSTAT views contain I/O statistics for both single and multi block read and write operations. Single block operations are small I/Os that are less than or equal to 128 kilobytes. Multi block operations are large I/Os that are greater than 128 kilobytes. For each of these operations, the following statistics are collected:

    • Identifier
    • Total wait time (in milliseconds)
    • Number of waits executed (for consumer groups and functions)
    • Number of requests for each operation
    • Number of single and multi block bytes read
    • Number of single and multi block bytes written

    You should also look at SQL statements that perform many physical reads by querying the V$SQLAREA view, or by reviewing the “SQL ordered by Reads” section of the Automatic Workload Repository report. Examine these statements to see how they can be tuned to reduce the number of I/Os.
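
    One way to find such statements, sketched against documented V$SQLAREA columns, is to rank SQL by physical reads:

    SELECT SQL_ID, DISK_READS, EXECUTIONS,
           DISK_READS/GREATEST(EXECUTIONS, 1) READS_PER_EXEC
      FROM V$SQLAREA
     ORDER BY DISK_READS DESC;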

    Wait Events

    Wait events are statistics that are incremented by a server process or thread to indicate that it had to wait for an event to complete before being able to continue processing. Wait event data reveals various symptoms of problems that might be impacting performance, such as latch contention, buffer contention, and I/O contention. Remember that these are only symptoms of problems, not the actual causes.

    Wait events are grouped into classes. The wait event classes include: Administrative, Application, Cluster, Commit, Concurrency, Configuration, Idle, Network, Other, Scheduler, System I/O, and User I/O.

    A server process can wait for the following:

    • A resource to become available, such as a buffer or a latch.
    • An action to complete, such as an I/O.
    • More work to do, such as waiting for the client to provide the next SQL statement to execute. Events that identify that a server process is waiting for more work are known as idle events.

    Wait event statistics include the number of times an event was waited for and the time waited for the event to complete. If the initialization parameter TIMED_STATISTICS is set to true, then you can also see how long each resource was waited for.

    To minimize user response time, reduce the time spent by server processes waiting for event completion. Not all wait events have the same wait time. Therefore, it is more important to examine events with the most total time waited rather than wait events with a high number of occurrences. Usually, it is best to set the dynamic parameter TIMED_STATISTICS to true at least while monitoring performance.

    Dynamic Performance Views Containing Wait Event Statistics

    These dynamic performance views can be queried for wait event statistics:

    • V$ACTIVE_SESSION_HISTORY: This view displays active database session activity, sampled once every second.
    • V$SESS_TIME_MODEL and V$SYS_TIME_MODEL: These views contain time model statistics, including DB time, which is the total time spent in database calls.
    • V$SESSION_WAIT: This view displays information about the current or last wait for each session (such as wait ID, class, and time).
    • V$SESSION: This view displays information about each current session and contains the same wait statistics as the V$SESSION_WAIT view. If applicable, it also contains detailed information about the object that the session is currently waiting for (such as object number, block number, file number, and row number), the blocking session responsible for the current wait (such as the blocking session ID, status, and type), and the amount of time waited.
    • V$SESSION_EVENT: This view provides a summary of all the events the session has waited for since it started.
    • V$SESSION_WAIT_CLASS: This view provides the number of waits and the time spent in each class of wait events for each session.
    • V$SESSION_WAIT_HISTORY: This view displays information about the last ten wait events for each active session (such as event type and wait time).
    • V$SYSTEM_EVENT: This view provides a summary of all the event waits on the instance since it started.
    • V$EVENT_HISTOGRAM: This view displays a histogram of the number of waits, the maximum wait, and total wait time on an event basis.
    • V$FILE_HISTOGRAM: This view displays a histogram of times waited during single block reads for each file.
    • V$SYSTEM_WAIT_CLASS: This view provides the instance-wide time totals for the number of waits and the time spent in each class of wait events.
    • V$TEMP_HISTOGRAM: This view displays a histogram of times waited during single block reads for each temporary file.
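
    For a first pass, the following sketch totals waits by class using V$SYSTEM_WAIT_CLASS, to show which class dominates before drilling into individual events:

    SELECT WAIT_CLASS, TOTAL_WAITS, TIME_WAITED
      FROM V$SYSTEM_WAIT_CLASS
     WHERE WAIT_CLASS <> 'Idle'
     ORDER BY TIME_WAITED DESC;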

    Investigate wait events and related timing data when performing reactive performance tuning. The events with the most time listed against them are often strong indications of the performance bottleneck. For example, by looking at V$SYSTEM_EVENT, you might notice lots of buffer busy waits. It might be that many processes are inserting into the same block and must wait for each other before they can insert. The solution could be to use automatic segment space management or partitioning for the object in question.

    System Statistics

    System statistics are typically used in conjunction with wait event data to find further evidence of the cause of a performance problem.

    For example, if V$SYSTEM_EVENT indicates that the largest wait event (in terms of wait time) is the event buffer busy waits, then look at the specific buffer wait statistics available in the view V$WAITSTAT to see which block type has the highest wait count and the highest wait time.
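
    That V$WAITSTAT check can be sketched as follows (CLASS, COUNT, and TIME are the view's documented columns):

    SELECT CLASS, COUNT, TIME
      FROM V$WAITSTAT
     ORDER BY TIME DESC;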

    After the block type has been identified, also look at V$SESSION real-time while the problem is occurring or V$ACTIVE_SESSION_HISTORY and DBA_HIST_ACTIVE_SESS_HISTORY views after the problem has been experienced to identify the contended-for objects using the object number indicated. The combination of this data indicates the appropriate corrective action.

    Statistics are available in many V$ views. The following are some of the V$ views that contain system statistics:

    • V$ACTIVE_SESSION_HISTORY: This view displays active database session activity, sampled once every second.
    • V$SYSSTAT: This contains overall statistics for many different parts of Oracle Database, including rollback, logical and physical I/O, and parse data. Data from V$SYSSTAT is used to compute ratios, such as the buffer cache hit ratio.
    • V$FILESTAT: This contains detailed file I/O statistics for each file, including the number of I/Os for each file and the average read time.
    • V$ROLLSTAT: This contains detailed rollback and undo segment statistics for each segment.
    • V$ENQUEUE_STAT: This contains detailed enqueue statistics for each enqueue, including the number of times an enqueue was requested, the number of times an enqueue was waited for, and the wait time.
    • V$LATCH: This contains detailed latch usage statistics for each latch, including the number of times each latch was requested and the number of times the latch was waited for.

    Segment-Level Statistics

    You can gather segment-level statistics to help you spot performance problems associated with individual segments. Collecting and viewing segment-level statistics is a good way to effectively identify hot tables or indexes in an instance.

    After viewing wait events and system statistics to identify the performance problem, you can use segment-level statistics to find specific tables or indexes that are causing the problem. Consider, for example, that V$SYSTEM_EVENT indicates that buffer busy waits cause a fair amount of wait time. You can select from V$SEGMENT_STATISTICS the top segments that cause the buffer busy waits. Then you can focus your effort on eliminating the problem in those segments.
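
    That drill-down can be sketched as follows; 'buffer busy waits' is one of the STATISTIC_NAME values tracked by V$SEGMENT_STATISTICS:

    SELECT OWNER, OBJECT_NAME, OBJECT_TYPE, VALUE
      FROM V$SEGMENT_STATISTICS
     WHERE STATISTIC_NAME = 'buffer busy waits'
     ORDER BY VALUE DESC;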

    You can query segment-level statistics through the following dynamic performance views:

    • V$SEGSTAT_NAME: This view lists the segment statistics being collected and the properties of each statistic (for instance, if it is a sampled statistic).
    • V$SEGSTAT: This is a highly efficient, real-time monitoring view that shows the statistic value, statistic name, and other basic information.
    • V$SEGMENT_STATISTICS: This is a user-friendly view of statistic values. In addition to all the columns of V$SEGSTAT, it has information about such things as the segment owner and tablespace name. It makes the statistics easy to understand, but it is more costly to query.

    buffer busy waits

    This wait indicates that there are some buffers in the buffer cache that multiple processes are attempting to access concurrently. Query V$WAITSTAT for the wait statistics for each class of buffer. Common buffer classes that have buffer busy waits include data block, segment header, undo header, and undo block.

    Check the following V$SESSION_WAIT parameter columns:

    • P1: File ID
    • P2: Block ID
    • P3: Class ID


    To determine the possible causes, first query V$SESSION to identify the value of ROW_WAIT_OBJ# when the session waits for buffer busy waits. For example:

    SELECT row_wait_obj#
      FROM V$SESSION
     WHERE EVENT = 'buffer busy waits';

    To identify the object and object type contended for, query DBA_OBJECTS using the value for ROW_WAIT_OBJ# that is returned from V$SESSION. For example:

    SELECT owner, object_name, subobject_name, object_type
      FROM DBA_OBJECTS
     WHERE data_object_id = &row_wait_obj;


    The action required depends on the class of block contended for and the actual segment.

    Segment Header

    If the contention is on the segment header, then this is most likely free list contention.

    Automatic segment-space management in locally managed tablespaces eliminates the need to specify the PCTUSED, FREELISTS, and FREELIST GROUPS parameters. If possible, switch from manual space management to automatic segment-space management (ASSM).

    The following information is relevant if you are unable to use ASSM (for example, because the tablespace uses dictionary space management).

    A free list is a list of free data blocks that usually includes blocks existing in several different extents within the segment. Free lists are composed of blocks in which free space has not yet reached PCTFREE or used space has shrunk below PCTUSED. Specify the number of process free lists with the FREELISTS parameter. The default value of FREELISTS is one. The maximum value depends on the data block size.

    To find the current setting for free lists for that segment, run the following:

    SELECT SEGMENT_NAME, FREELISTS
      FROM DBA_SEGMENTS
     WHERE SEGMENT_NAME = segment name
       AND SEGMENT_TYPE = segment type;

    Set free lists, or increase the number of free lists. If adding more free lists does not alleviate the problem, then use free list groups (even in single instance this can make a difference). If using Oracle RAC, then ensure that each instance has its own free list group(s).
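
    For example, a hypothetical adjustment (my_hot_table is a placeholder name; the FREELISTS storage parameter applies only to segments under manual segment-space management):

    ALTER TABLE my_hot_table STORAGE (FREELISTS 4);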

    Finding the SQL Statement executed by Sessions Waiting for I/O

    Use the following query to determine, at a point in time, which sessions are waiting for I/O:

    SELECT SQL_ADDRESS, SQL_HASH_VALUE
      FROM V$SESSION
     WHERE EVENT LIKE 'db file%read';

    Finding the Object Requiring I/O

    To determine the possible causes, first query V$SESSION to identify the value of ROW_WAIT_OBJ# when the session waits for db file scattered read. For example:

    SELECT row_wait_obj#
      FROM V$SESSION
     WHERE EVENT = 'db file scattered read';

    To identify the object and object type contended for, query DBA_OBJECTS using the value for ROW_WAIT_OBJ# that is returned from V$SESSION. For example:

    SELECT owner, object_name, subobject_name, object_type
      FROM DBA_OBJECTS
     WHERE data_object_id = &row_wait_obj;

    Finding Locks and Lock Holders

    Query V$LOCK to find the sessions holding the lock. For every session waiting for the event enqueue, there is a row in V$LOCK with REQUEST <> 0. Use one of the following two queries to find the sessions holding the locks and waiting for the locks.

    If there are enqueue waits, you can see these using the following statement:

    SELECT * FROM V$LOCK WHERE request > 0;

    To show only holders and waiters for locks being waited on, use the following:

    SELECT DECODE(request,0,'Holder: ','Waiter: ') || 
              sid sess, id1, id2, lmode, request, type
       FROM V$LOCK
     WHERE (id1, id2, type) IN (SELECT id1, id2, type FROM V$LOCK WHERE request > 0)
       ORDER BY id1, request;


    The appropriate action depends on the type of enqueue.

    If the contended-for enqueue is the ST enqueue, then the problem is most likely to be dynamic space allocation. Oracle Database dynamically allocates an extent to a segment when there is no more free space available in the segment. This enqueue is only used for dictionary managed tablespaces.

    To solve contention on this resource:

    • Check to see whether the temporary (that is, sort) tablespace uses TEMPFILES. If not, then switch to using TEMPFILES.
    • Switch to using locally managed tablespaces if the tablespace that contains segments that are growing dynamically is dictionary managed.
    • If it is not possible to switch to locally managed tablespaces, then ST enqueue resource usage can be decreased by changing the next extent sizes of the growing objects to be large enough to avoid constant space allocation. To determine which segments are growing constantly, monitor the EXTENTS column of the DBA_SEGMENTS view for all SEGMENT_NAMEs.
    • Preallocate space in the segment, for example, by allocating extents using the ALTER TABLE ALLOCATE EXTENT SQL statement.
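
    The monitoring step above can be sketched as:

    SELECT SEGMENT_NAME, SEGMENT_TYPE, EXTENTS
      FROM DBA_SEGMENTS
     ORDER BY EXTENTS DESC;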

    Find Latches Currently Waited For

    SELECT EVENT, SUM(P3) SLEEPS, SUM(SECONDS_IN_WAIT) SECONDS_IN_WAIT
      FROM V$SESSION_WAIT
     WHERE EVENT LIKE 'latch%'
     GROUP BY EVENT;

    A problem with the previous query is that it captures only the waits occurring at that instant, so it is more useful for session tuning or instantaneous instance tuning than for long-duration instance tuning.

    The following query provides more information about long duration instance tuning, showing whether the latch waits are significant in the overall database time.

    SELECT EVENT, TIME_WAITED_MICRO,
           ROUND(TIME_WAITED_MICRO*100/S.DBTIME,1) PCT_DB_TIME
      FROM V$SYSTEM_EVENT,
           (SELECT VALUE DBTIME FROM V$SYS_TIME_MODEL WHERE STAT_NAME = 'DB time') S
     WHERE EVENT LIKE 'latch%'
     ORDER BY PCT_DB_TIME ASC;

    A more general query that is not specific to latch waits is the following:

    SELECT E.EVENT, N.WAIT_CLASS, E.TIME_WAITED_MICRO,
           ROUND(E.TIME_WAITED_MICRO*100/S.DBTIME,1) PCT_DB_TIME
      FROM V$SYSTEM_EVENT E, V$EVENT_NAME N,
           (SELECT VALUE DBTIME FROM V$SYS_TIME_MODEL WHERE STAT_NAME = 'DB time') S
     WHERE E.EVENT_ID = N.EVENT_ID
       AND N.WAIT_CLASS NOT IN ('Idle', 'System I/O')
     ORDER BY PCT_DB_TIME ASC;

    Shared Pool and Library Cache Latch Contention

    A main cause of shared pool or library cache latch contention is parsing. There are several techniques that you can use to identify unnecessary parsing and several types of unnecessary parsing:

    One method identifies similar SQL statements that could be shared if literals were replaced with bind variables. The idea is to either:

    • Manually inspect SQL statements that have only one execution to see whether they are similar:

      SELECT SQL_TEXT
        FROM V$SQLSTATS
       WHERE EXECUTIONS < 4
       ORDER BY SQL_TEXT;

    • Or, automate this process by grouping what may be similar statements. Estimate the number of bytes of a SQL statement that are likely the same, and group the SQL statements by this number of bytes. For example, the following query groups statements that differ only after the first 60 bytes:

      SELECT SUBSTR(SQL_TEXT, 1, 60), COUNT(*)
        FROM V$SQLSTATS
       WHERE EXECUTIONS < 4
       GROUP BY SUBSTR(SQL_TEXT, 1, 60)
       HAVING COUNT(*) > 1;

    • Or, report distinct SQL statements that have the same execution plan. The following query selects distinct SQL statements that share the same execution plan at least four times. These SQL statements are likely to be using literals instead of bind variables:

      SELECT SQL_TEXT
        FROM V$SQLSTATS
       WHERE PLAN_HASH_VALUE IN
             (SELECT PLAN_HASH_VALUE
                FROM V$SQLSTATS
               GROUP BY PLAN_HASH_VALUE
               HAVING COUNT(*) > 4)
       ORDER BY PLAN_HASH_VALUE;

    Check the V$SQLSTATS view. Enter the following query:

    SELECT SQL_TEXT, PARSE_CALLS, EXECUTIONS
      FROM V$SQLSTATS
     ORDER BY PARSE_CALLS;

    When the PARSE_CALLS value is close to the EXECUTIONS value for a given statement, you might be continually reparsing that statement. Tune the statements with the higher numbers of parse calls.

    Identify unnecessary parse calls by identifying the session in which they occur. It might be that particular batch programs or certain types of applications do most of the reparsing. To achieve this goal, run the following query:

    SELECT pa.SID, pa.VALUE "Hard Parses", ex.VALUE "Execute Count"
      FROM V$SESSTAT pa, V$SESSTAT ex
     WHERE pa.SID = ex.SID
       AND pa.STATISTIC# = (SELECT STATISTIC#
           FROM V$STATNAME WHERE NAME = 'parse count (hard)')
       AND ex.STATISTIC# = (SELECT STATISTIC#
           FROM V$STATNAME WHERE NAME = 'execute count')
       AND pa.VALUE > 0;

    The result is a list of all sessions and the amount of reparsing they do. For each session identifier (SID), go to V$SESSION to find the name of the program that causes the reparsing.


    Because this query counts all parse calls since instance startup, it is best to look for sessions with high rates of parsing rather than high absolute counts. For example, a connection that has been up for 50 days might show a high parse figure, but a second connection might have been up for only 10 minutes and be parsing at a much faster rate.

    The output is similar to the following:

       SID  Hard Parses  Execute Count
    ------  -----------  -------------
         7            1             20
         8            3          12690
         6           26            325
        11           84           1619

    The cache buffers lru chain latches protect the lists of buffers in the cache. When adding, moving, or removing a buffer from a list, a latch must be obtained.

    For symmetric multiprocessor (SMP) systems, Oracle Database automatically sets the number of LRU latches to a value equal to one half the number of CPUs on the system. For non-SMP systems, one LRU latch is sufficient.

    Contention for the LRU latch can impede performance on SMP computers with a large number of CPUs. LRU latch contention is detected by querying V$LATCH, V$SESSION_EVENT, and V$SYSTEM_EVENT. To avoid contention, consider tuning the application, bypassing the buffer cache for DSS jobs, or redesigning the application.

    The cache buffers chains latches are used to protect a buffer list in the buffer cache. These latches are used when searching for, adding, or removing a buffer from the buffer cache. Contention on this latch usually means that there is a block that is greatly contended for (known as a hot block).

    To identify the heavily accessed buffer chain, and hence the contended-for block, look at latch statistics for the cache buffers chains latches using the view V$LATCH_CHILDREN. If there is a specific cache buffers chains child latch that has many more GETS, MISSES, and SLEEPS when compared with the other child latches, then this is the contended-for child latch.

    This latch has a memory address, identified by the ADDR column. Use the value in the ADDR column joined with the X$BH table to identify the blocks protected by this latch. For example, given the address (V$LATCH_CHILDREN.ADDR) of a heavily contended latch, this queries the file and block numbers:

    SELECT OBJ data_object_id, FILE#, DBABLK, CLASS, STATE, TCH
      FROM X$BH
     WHERE HLADDR = 'address of latch'
     ORDER BY TCH;

    X$BH.TCH is a touch count for the buffer. A high value for X$BH.TCH indicates a hot block.

    Many blocks are protected by each latch. One of these buffers will probably be the hot block. Any block with a high TCH value is a potential hot block. Perform this query several times, and identify the block that consistently appears in the output. After you have identified the hot block, query DBA_EXTENTS using the file number and block number, to identify the segment.
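
    That DBA_EXTENTS lookup can be sketched as follows, where &file and &block stand for the FILE# and DBABLK values returned by the X$BH query:

    SELECT OWNER, SEGMENT_NAME, SEGMENT_TYPE
      FROM DBA_EXTENTS
     WHERE FILE_ID = &file
       AND &block BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;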

    After you have identified the hot block, you can identify the segment it belongs to with the following query:

    SELECT OBJECT_NAME, SUBOBJECT_NAME
      FROM DBA_OBJECTS
     WHERE DATA_OBJECT_ID = &obj;


    In the query, &obj is the value of the OBJ column in the previous query on X$BH.

    The row cache objects latches protect the data dictionary.

    Practical Values for FAST_START_MTTR_TARGET

    The maximum value for FAST_START_MTTR_TARGET is 3600 seconds (one hour). If you set the value to more than 3600, then Oracle Database rounds it to 3600.

    The following example shows how to set the value of FAST_START_MTTR_TARGET:

    ALTER SYSTEM SET FAST_START_MTTR_TARGET=30;


    In principle, the minimum value for FAST_START_MTTR_TARGET is one second. However, the fact that you can set FAST_START_MTTR_TARGET this low does not mean that this target can be achieved. There are practical limits to the minimum achievable MTTR target, due to such factors as database startup time.

    The MTTR target that your database can achieve given the current value of FAST_START_MTTR_TARGET is called the effective MTTR target. You can view your current effective MTTR by viewing the TARGET_MTTR column of the V$INSTANCE_RECOVERY view.

    The practical range of MTTR target values for your database is defined to be the range between the lowest achievable effective MTTR target for your database and the longest that startup and cache recovery will take in the worst-case scenario (that is, when the whole buffer cache is dirty).

    Reducing Checkpoint Frequency to Optimize Run-Time Performance

    To reduce the checkpoint frequency and optimize run-time performance, you can do the following:

    • Set the value of FAST_START_MTTR_TARGET to 3600. This enables Fast-Start checkpointing and the Fast-Start Fault Recovery feature, but minimizes its effect on run-time performance while avoiding the need for performance tuning of FAST_START_MTTR_TARGET.
    • Size your online redo log files according to the amount of redo your system generates. Try to switch logs at most every twenty minutes. Having your log files too small can increase checkpoint activity and reduce performance. Also note that all redo log files should be the same size.

    Determining Lower Bound for FAST_START_MTTR_TARGET: Scenario

    To determine the lower bound of the practical range, set FAST_START_MTTR_TARGET to 1, and start up your database. Then check the value of V$INSTANCE_RECOVERY.TARGET_MTTR, and use this value as a good lower bound for FAST_START_MTTR_TARGET. Database startup time, rather than cache recovery time, is usually the dominant factor in determining this limit.

    For example, set the FAST_START_MTTR_TARGET to 1:

    ALTER SYSTEM SET FAST_START_MTTR_TARGET=1;


    Then, execute the following query immediately after opening the database:

    SELECT TARGET_MTTR, ESTIMATED_MTTR
      FROM V$INSTANCE_RECOVERY;


    Oracle Database responds with the following:

    TARGET_MTTR ESTIMATED_MTTR
    ----------- --------------
             18             15

    The TARGET_MTTR value of 18 seconds is the minimum MTTR target that the system can achieve, that is, the lowest practical value for FAST_START_MTTR_TARGET. This minimum is calculated based on the average database startup time.

    The ESTIMATED_MTTR field contains the estimated mean time to recovery based on the current state of the running database. Because the database has just opened, the system contains few dirty buffers, so not much cache recovery would be required if the instance failed at this moment. That is why ESTIMATED_MTTR can, for the moment, be lower than the minimum possible TARGET_MTTR.

    ESTIMATED_MTTR can be affected in the short term by recent database activity. Assume that you query V$INSTANCE_RECOVERY immediately after a period of heavy update activity in the database. Oracle Database responds with the following:

    TARGET_MTTR ESTIMATED_MTTR
    ----------- --------------
             18             30

    Now the effective MTTR target is still 18 seconds, and the estimated MTTR (if a crash happened at that moment) is 30 seconds. This is an acceptable result. It means that some checkpoint writes might not have finished yet, so the buffer cache contains more dirty buffers than targeted.

    Now wait for sixty seconds and reissue the query to V$INSTANCE_RECOVERY. Oracle Database responds with the following:

    TARGET_MTTR ESTIMATED_MTTR
    ----------- --------------
             18             25

    The estimated MTTR at this time has dropped to 25 seconds, because some of the dirty buffers have been written out during this period.

    Determining Upper Bound for FAST_START_MTTR_TARGET

    To determine the upper bound of the practical range, set FAST_START_MTTR_TARGET to 3600, and operate your database under a typical workload for a while. Then check the value of V$INSTANCE_RECOVERY.TARGET_MTTR. This value is a good upper bound for FAST_START_MTTR_TARGET.

    Selecting Preliminary Value for FAST_START_MTTR_TARGET

    After you have determined the practical bounds for the FAST_START_MTTR_TARGET parameter, select a preliminary value for the parameter. Choose a higher value within the practical range if your concern is with database performance, and a lower value within the practical range if your priority is shorter recovery times. The narrower the practical range, of course, the easier the choice becomes.

    For example, if you discovered that the practical range was between 17 and 19 seconds, it would be quite simple to choose 19, because it makes relatively little difference in recovery time and at the same time minimizes the effect of checkpointing on system performance. However, if you found that the practical range was between 18 and 40 seconds, you might choose a compromise value of 30, and set the parameter accordingly:

    ALTER SYSTEM SET FAST_START_MTTR_TARGET=30;


    You might then go on to use the MTTR Advisor to determine an optimal value.

    Evaluate Different Target Values with MTTR Advisor

    After you have selected a preliminary value for FAST_START_MTTR_TARGET, you can use MTTR Advisor to evaluate the effect of different FAST_START_MTTR_TARGET settings on system performance, compared to your chosen setting.

    Enabling MTTR Advisor

    To enable MTTR Advisor, set the two initialization parameters STATISTICS_LEVEL and FAST_START_MTTR_TARGET.

    STATISTICS_LEVEL governs whether all advisors are enabled and is not specific to MTTR Advisor. Ensure that it is set to TYPICAL or ALL. Then, when FAST_START_MTTR_TARGET is set to a nonzero value, the MTTR Advisor is enabled.
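
    A sketch of the two settings (the target value 30 is an arbitrary example):

    ALTER SYSTEM SET STATISTICS_LEVEL = TYPICAL;
    ALTER SYSTEM SET FAST_START_MTTR_TARGET = 30;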

    Using MTTR Advisor

    After enabling MTTR Advisor, run a typical database workload for a while. When MTTR Advisor is ON, the database simulates checkpoint queue behavior under the current value of FAST_START_MTTR_TARGET, and up to four other different MTTR settings within the range of valid FAST_START_MTTR_TARGET values. (The database will in this case determine the valid range for FAST_START_MTTR_TARGET itself before testing different values in the range.)

    Viewing MTTR Advisor Results: V$MTTR_TARGET_ADVICE

    The dynamic performance view V$MTTR_TARGET_ADVICE lets you view statistics or advisories collected by MTTR Advisor.

    The database populates V$MTTR_TARGET_ADVICE with advice about the effects of each of the FAST_START_MTTR_TARGET settings for your database. For each possible value of FAST_START_MTTR_TARGET, the row contains details about how many cache writes would be performed under the workload tested for that value of FAST_START_MTTR_TARGET.

    Specifically, each row contains information about cache writes, total physical writes (including direct writes), and total I/O (including reads) for that value of FAST_START_MTTR_TARGET, expressed both as a total number of operations and a ratio compared to the operations under your chosen FAST_START_MTTR_TARGET value. For instance, a ratio of 1.2 indicates 20% more cache writes.

    Knowing the effect of different FAST_START_MTTR_TARGET settings on cache write activity and other I/O enables you to decide better which FAST_START_MTTR_TARGET value best fits your recovery and performance needs.

    If MTTR Advisor is currently on, then V$MTTR_TARGET_ADVICE shows the Advisor information collected. If MTTR Advisor is currently OFF, then the view shows information collected the last time MTTR Advisor was ON since database startup, if any. If the database has been restarted since the last time the MTTR Advisor was used, or if it has never been used, the view will not show any rows.
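
    A sketch of reading the advisory follows; the column names are those documented for V$MTTR_TARGET_ADVICE, so verify them against your release:

    SELECT MTTR_TARGET_FOR_ESTIMATE, ESTD_CACHE_WRITES, ESTD_CACHE_WRITE_FACTOR,
           ESTD_TOTAL_IOS, ESTD_TOTAL_IO_FACTOR
      FROM V$MTTR_TARGET_ADVICE;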

    Determine the Optimal Size for Redo Logs

    You can use the V$INSTANCE_RECOVERY view column OPTIMAL_LOGFILE_SIZE to determine the size of your online redo logs. This field shows the redo log file size in megabytes that is considered optimal based on the current setting of FAST_START_MTTR_TARGET. If this field consistently shows a value greater than the size of your smallest online log, then you should configure all your online logs to be at least this size.
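
    A sketch comparing that recommendation with the current log sizes (OPTIMAL_LOGFILE_SIZE is in megabytes, while V$LOG.BYTES is in bytes):

    SELECT (SELECT OPTIMAL_LOGFILE_SIZE FROM V$INSTANCE_RECOVERY) OPTIMAL_MB,
           GROUP#, BYTES/1024/1024 CURRENT_MB
      FROM V$LOG;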

    Note, however, that the redo log file size affects the MTTR. In some cases, you may be able to refine your choice of the optimal FAST_START_MTTR_TARGET value by re-running the MTTR Advisor with your suggested optimal log file size.

    Also See:

    Oracle ORAchk