Log Apply Services
This chapter describes how to manage a standby database. It includes the following topics:
- Introduction to Log Apply Services
- Process Architecture
- Managed Recovery Mode
- Controlling Managed Recovery Mode
- Archive Gap Management
- Datafile Management
- Read-Only Mode
- Read-Only Mode Considerations
- Monitoring Log Apply Services
4.1 Introduction to Log Apply Services
In a Data Guard environment, log apply services maintain the standby database in either a managed recovery mode or an open read-only mode. Log apply services automatically apply archived redo logs to maintain transactional synchronization with the primary database, and allow transactionally consistent read-only access to the data.
In a Data Guard environment, the log apply services component coordinates its activities with the log transport services component.
Using log apply services, you can manage the standby database in one of the following modes:
- Managed Recovery Mode: In this setup, log transport services archive logs to the standby site, and log apply services automatically apply these logs. If you want maximum protection against data loss or corruption, then maintain the standby database in managed recovery mode in a Data Guard environment.
- Read-Only Mode: Use read-only mode for supplemental reporting of data contained in the primary database. If you want to use the standby database for reporting purposes, then open it in read-only mode in a Data Guard environment. Log apply services cannot apply archived redo logs to the standby database while it is in this mode, but you can still execute queries on the database. The primary database continues to archive to the standby site as long as the standby instance is started.
You can easily change between managed recovery mode and read-only mode. In most implementations of a Data Guard environment, you may want to make this change at various times to either:
- Update a standby database used primarily for reporting
- Check that data is correctly applied to a database that is used primarily for disaster protection
The following sections in this chapter describe the procedures for initiating the various modes as well as for performing failover to a standby database.
4.2 Process Architecture
The physical standby component uses several processes to achieve the automation necessary for disaster recovery and high availability. On the standby database, log apply services use the following processes:
- Remote file server (RFS): The RFS process receives archived redo logs from the primary database.
- Archiver (ARCn): The ARCn process archives the standby redo logs to be applied by the managed recovery process (MRP).
- Managed recovery process (MRP): The MRP applies archived redo log information to the standby database.
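Once the standby instance is mounted and managed recovery is running, you can confirm which of these processes are active by querying the V$MANAGED_STANDBY fixed view, described further in Section 4.9.1. A minimal check might look like the following:
SQL> SELECT process, status FROM v$managed_standby;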
4.3 Managed Recovery Mode
Log transport services automate archiving to a standby database. Log apply services keep the standby database synchronized with the primary database by waiting for archived logs from the primary database and then automatically applying them to the standby database, as shown in Figure 4-1.
Figure 4-1 Automatic Updating of a Standby Database
This section contains the following topics:
- Starting the Standby Instance in Preparation for Recovery
- Initiating Log Apply Services
- Monitoring the Recovery Process
4.3.1 Starting the Standby Instance in Preparation for Recovery
After all necessary parameter and network files have been configured, you can start the standby instance. If the instance is not started and mounted, the standby database cannot receive archived redo logs that are automatically copied to the standby site from the primary database by log transport services.
To start the standby instance:
- Connect to the standby database instance. For example, enter:
SQL> CONNECT sys/change_on_install@standby1 AS SYSDBA
- Start the Oracle instance at the standby database without mounting the database. For example, enter:
SQL> STARTUP NOMOUNT pfile=initSTANDBY.ora;
- Mount the standby database:
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
4.3.2 Initiating Log Apply Services
Log apply services can run as a foreground session or as a background process, with control options such as CANCEL, FINISH, TIMEOUT, DELAY, DISCONNECT, and PARALLEL.
You can enable managed recovery using log apply services. The following is an example of a foreground session:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;
As log transport services archive redo logs to the standby site, log apply services can automatically apply them to the standby database.
Note:
After you execute the RECOVER statement, the prompt sits on the line following the RECOVER statement (unless DISCONNECT is specified); this is the expected foreground behavior.
If you want to start a detached server process and immediately return control to the user, add the DISCONNECT FROM SESSION option to the ALTER DATABASE statement. Note that this does not disconnect the current SQL session. For example:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
See Also:
Section 4.4 and Table 9-1
4.3.3 Monitoring the Recovery Process
To verify that you have correctly initiated log apply services, query the V$MANAGED_STANDBY fixed view on the standby database. This view monitors the progress of the managed recovery operation. For example:
SQL> SELECT process, status, thread#, sequence#, block#, blocks
  2> FROM v$managed_standby;

PROCESS STATUS       THREAD#    SEQUENCE#  BLOCK#     BLOCKS
------- ------------ ---------- ---------- ---------- ----------
MRP0    APPLYING_LOG          1        946         10       1001
If you did not start a detached server process, you need to execute this query from another SQL session.
To monitor activity on the standby database, query the V$ARCHIVE_DEST_STATUS fixed view (see Section 4.9.2 for an example).
4.4 Controlling Managed Recovery Mode
To control the managed recovery operation, use the TIMEOUT, CANCEL, NODELAY, DELAY, NEXT, PARALLEL, FINISH, EXPIRE, and DISCONNECT control options.
This section contains the following topics:
- CANCEL Control Option
- DELAY Control Option
- DISCONNECT Control Option
- EXPIRE Control Option
- FINISH Control Option
- NEXT Control Option
- NODELAY Control Option
- PARALLEL Control Option
- TIMEOUT Control Option
4.4.1 CANCEL Control Option
Cancel the managed recovery operation at any time by issuing the CANCEL option of the ALTER DATABASE RECOVER MANAGED STANDBY DATABASE statement.
Format
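For example, the following statements, constructed from the options described in this section, stop managed recovery at an archived log boundary and at an I/O boundary without waiting, respectively:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL IMMEDIATE NOWAIT;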
The CANCEL option directs log apply services to stop the managed recovery operation after completely processing the current archived redo log. The managed recovery operation is terminated on an archived log boundary.
The CANCEL IMMEDIATE option, however, directs log apply services to stop the managed recovery operation either before reading another block from the archived redo log or before opening the next archived redo log, whichever occurs first. The managed recovery operation is terminated on an I/O boundary or on an archived log boundary, whichever occurs first. Stopping the recovery on an I/O boundary leaves the database in an inconsistent state and, therefore, unable to be opened.
By default, the CANCEL option waits for the managed recovery operation to terminate before control is returned. If you specify the NOWAIT option, control is returned to the process that issued the CANCEL option without waiting for the managed recovery operation to terminate.
4.4.2 DELAY Control Option
Use the DELAY control option to specify an absolute apply delay interval to the managed recovery operation. The managed recovery operation waits the specified number of minutes before applying the archived redo logs. The apply delay interval begins once the archived redo logs have been selected for recovery.
The apply delay interval specified by the DELAY control option supersedes any apply delay interval specified for the standby database by the primary database's corresponding LOG_ARCHIVE_DEST_n initialization parameter. The DELAY control option pertains only to archived redo logs applied by this managed recovery operation.
You can use the DELAY control option when starting a managed recovery operation or to alter the mode of an active managed recovery operation. A DELAY control option value of zero (0) directs the managed recovery operation to revert to default behavior.
Format
In the following example, log apply services specify an absolute apply delay interval of 30 minutes to a foreground managed recovery operation:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DELAY 30;
4.4.3 DISCONNECT Control Option
Use the DISCONNECT control option to start managed recovery in background mode.
Format
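For example, the following statement, equivalent to the DISCONNECT FROM SESSION form shown in Section 4.3.2, starts managed recovery as a background process:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;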
Using this option creates a managed recovery process (MRP) to perform the recovery in the background while the foreground process that issued the RECOVER statement continues performing other tasks. The optional FROM SESSION keywords can be added for clarity but do not change the behavior of the DISCONNECT option.
4.4.4 EXPIRE Control Option
Use the EXPIRE control option to specify the number of minutes after which the managed recovery operation automatically terminates, relative to the current time. The managed recovery operation terminates at the end of the archived redo log currently being processed. This means that the value of the EXPIRE control option is the earliest point at which the managed recovery operation can terminate; actual termination could be significantly later than the specified value.
The managed recovery operation expiration is always relative to the time the statement was issued, not when the managed recovery operation was started. To cancel an existing managed recovery operation, use the CANCEL control option.
Specifying a new EXPIRE control option value overrides any previously specified value. Use the value 0 to cancel a previously specified expiration value.
When you use a detached managed recovery process, the MRP makes an alert log entry when it terminates. A foreground managed recovery process simply returns control to the SQL prompt.
Format
In the following example, the managed recovery operation automatically terminates four hours (240 minutes) from the time the statement is issued:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE EXPIRE 240;
The EXPIRE control option cannot be specified in conjunction with the CANCEL control option.
4.4.5 FINISH Control Option
Use the FINISH control option to complete managed recovery in preparation for a failover from the primary database to the standby database.
Format
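For example, a statement of the following form, constructed from the FINISH option described below, completes managed recovery; add the NOWAIT option to return control to the foreground process before the recovery completes:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;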
A managed recovery directed by the FINISH option will first apply all available archived redo logs and then recover any available standby redo logs (where a non-finish mode recovery only applies the archived redo logs). This extra step brings the standby database up-to-date with the last committed transaction on the primary database. You can use the FINISH option when starting a managed recovery or to alter the mode of an ongoing managed recovery. If you use the FINISH option to alter the mode of an ongoing managed recovery, use the NOWAIT option to allow control to be returned to the foreground process before the recovery completes.
4.4.6 NEXT Control Option
Use the NEXT control option to direct the managed recovery operation to apply a specified number of archived redo logs as soon as possible after the log transport services have archived them. Any apply delay interval specified by the DELAY attribute of the LOG_ARCHIVE_DEST_n initialization parameter set on the primary database is ignored by the managed recovery operation until the specified number of archived redo logs has been applied. Then, the managed recovery operation reverts to the default behavior.
You can use the NEXT control option when starting a managed recovery operation or to alter the mode of an active managed recovery operation.
Format
In the following example, log apply services direct a foreground managed recovery operation to apply the next 5 archived redo logs without delay:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NEXT 5;
4.4.7 NODELAY Control Option
Use the NODELAY control option to direct the managed recovery operation to apply the archived redo logs as soon as possible after log transport services have archived them. Any apply delay interval specified by the DELAY attribute of the LOG_ARCHIVE_DEST_n initialization parameters on the primary database is ignored by the managed recovery operation when the NODELAY control option is specified. By default, the managed recovery operation respects any apply delay interval specified for the standby database by the primary database's corresponding LOG_ARCHIVE_DEST_n initialization parameter.
Format
In the following example, log apply services direct a foreground managed recovery operation to apply archived redo logs without delay:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;
4.4.8 PARALLEL Control Option
When running in managed recovery mode, the log apply services apply the changes generated on the primary database by many concurrent processes. Therefore, applying the archived redo logs on the standby database can take longer than the time it took to initially generate the changes on the primary database. By default, the log apply services use a single process to apply all of the archived redo logs sequentially. When using the parallel recovery option, several processes are able to apply the archived redo logs simultaneously.
In general, using the parallel recovery option is most effective at reducing recovery time when several datafiles on several different disks are being recovered concurrently. The performance improvement from the parallel recovery option is also dependent upon whether the operating system supports asynchronous I/O. If asynchronous I/O is not supported, the parallel recovery option can dramatically reduce recovery time. If asynchronous I/O is supported, the recovery time may be only slightly reduced by using parallel recovery.
See Also:
Your operating system documentation to determine whether the system supports asynchronous I/O
In a typical parallel recovery situation, one process is responsible for reading and dispatching archived redo logs. This is the dedicated managed recovery server process (either the foreground SQL session or the background MRP0 process) that begins the recovery session. The managed recovery server process reading the archived redo logs enlists two or more recovery processes to apply the changes from the archived redo logs to the datafiles.
Figure 4-2 illustrates a typical parallel recovery session.
Figure 4-2 Parallel Recovery Session
Standby database recovery is a very disk-intensive activity (as opposed to a CPU-intensive activity). Therefore, the number of recovery processes needed is dependent entirely upon how many disk drives are involved in recovery. In most situations, one or two recovery processes per disk drive containing datafiles needing recovery are sufficient. In general, a minimum of eight recovery processes is needed before parallel recovery can show improvement over a serial recovery.
Format
In the following example, log apply services specify a parallel managed recovery operation utilizing 8 background child processes:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE PARALLEL 8;
4.4.9 TIMEOUT Control Option
Use the
TIMEOUT
control option to specify the number of minutes that log apply services wait for log transport services to complete the archival of the redo log required by the managed recovery operation. If the specified number of minutes passes without receiving the required archived redo log, the managed recovery operation is automatically terminated. By default, log apply services wait indefinitely for the next required archived redo log. The managed recovery operation is terminated only through use of the CANCEL
option, a CTRL+C
key combination, or an instance shutdown. Format
In the following example, log apply services initiate a foreground managed recovery operation with a timeout interval of 15 minutes:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE TIMEOUT 15;
The TIMEOUT control option cannot be used in conjunction with the DISCONNECT control option.
4.5 Archive Gap Management
An archive gap is a range of archived redo logs created whenever you are unable to apply the next archived redo log generated by the primary database to the standby database. For example, an archive gap occurs when the network goes down and automatic archival from the primary database to the standby database stops. When the network is up and running again, automatic archival of the archived redo logs from the primary database to the standby database resumes. However, if the standby database is specified as an optional archive destination, and a log switch occurred at the primary site, the standby database has an archive gap for the time the network was down. The gap is automatically detected and resolved when managed recovery mode is enabled.
Setting Initialization Parameters to Automatically Resolve Archive Gaps
In previous versions, to be able to place the standby database in managed recovery mode, you would first manually apply logs in the archive gap to the standby database. After you had performed this manual recovery, you could then issue the RECOVER MANAGED STANDBY DATABASE statement, at which point log apply services would apply subsequent logs to the standby database automatically.
In Oracle9i, you can set initialization parameters so that log apply services automatically identify and resolve archive gaps as they occur. For log apply services to automatically identify and resolve archive gaps, you must:
- Use the Oracle Net Manager to configure the listener on the standby site. Use the TCP/IP protocol and statically register the standby database service with the listener using the SID, so that the standby database can be managed by the Data Guard broker.
- Use the Oracle Net Manager to create a net service name that the standby database can use to connect to the FAL server. The net service name should resolve to a connect descriptor that uses the same protocol, host address, port, and SID that you specified when you configured the listener on the FAL server site. If you are unsure what values to use for these parameters, use the Oracle Net Manager to display the listener configuration on the FAL server site.
- Use the Oracle Net Manager to create a net service that the FAL server can use to connect to the standby database. The net service name should resolve to a connect descriptor that uses the same protocol, host address, port, and SID that you specified when you configured the listener on the standby database site. If you are unsure what values to use for these parameters, use the Oracle Net Manager to display the listener configuration on the standby database site.
- In the initialization parameter file of the standby database, assign the net service name that you created for the standby database to the FAL_CLIENT initialization parameter, and assign the net service name that you created for the FAL server to the FAL_SERVER initialization parameter.
Log apply services automatically detect, and the FAL server process running on the primary database resolves, any gaps that may exist when you enable managed recovery with the RECOVER MANAGED STANDBY DATABASE statement.
If the FAL process cannot resolve an archive gap, then you must resolve it manually. You can use the V$ARCHIVE_GAP fixed view to manually identify archive gaps on standby databases.
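For example, a query such as the following, which assumes the THREAD#, LOW_SEQUENCE#, and HIGH_SEQUENCE# columns of the view, shows whether a gap exists and which log sequence numbers are missing:
SQL> SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;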
See Also:
Section B.3 for a description of the manual steps, Section 6.10.2 for a scenario using the V$ARCHIVE_GAP view, and Oracle9i Net Services Administrator's Guide for information about Oracle Net
Define the FAL_CLIENT and FAL_SERVER initialization parameters in the standby database initialization parameter file.
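For example, assuming hypothetical net service names standby1_srvc (resolving to the standby database) and primary1_srvc (resolving to the FAL server on the primary database), the standby initialization parameter file might contain:
FAL_CLIENT=standby1_srvc
FAL_SERVER=primary1_srvc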
The FAL client is the component of the Oracle database server that:
- Runs on the standby database
- Detects an archive gap on the standby database
- Requests the transfer of archived redo logs from the FAL server
The FAL server is a background Oracle process that services the incoming requests from the FAL client. In most cases, the FAL server is located on a primary database. However, it can be located on another standby database.
4.6 Datafile Management
If the standby site uses the same directory naming structure as the primary site, then you do not have to rename the primary database files in the standby control file. If the primary and standby databases are located on the same site, however, or if the primary and standby sites use different directory naming structures, then you must rename the primary database files in the standby control file so that the archived redo logs can be applied to the standby datafiles.
You can set initialization parameters so that your standby database automatically converts datafile and archived redo log filenames based on data in the standby database control file. If you cannot rename all primary database files automatically using these parameters, then you must rename them manually.
Note:
If you do not set the LOCK_NAME_SPACE parameter differently when the standby and primary databases share a site, you will receive an ORA-01102 error.
The initialization parameters in Table 4-1 perform automatic filename conversions.
Table 4-1 Filename Conversion
- DB_FILE_NAME_CONVERT: converts primary database datafile names to standby database datafile names.
- LOG_FILE_NAME_CONVERT: converts primary database redo log filenames to standby database redo log filenames.
Use the DB_FILE_NAME_CONVERT parameter to convert the filenames of one or more sets of datafiles on the primary database to filenames on the standby database; use the LOG_FILE_NAME_CONVERT parameter to convert the filename of a new redo log on the primary database to a filename on the standby database.
When the standby database is updated, the DB_FILE_NAME_CONVERT parameter is used to convert the datafile name on the primary database to a datafile name on the standby database. The file must exist and be writable on the standby database or the recovery process will halt with an error.
The DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters must have two strings. The first string is a sequence of characters to be looked for in a primary database filename. If that sequence of characters is matched, it is replaced by the second string to construct the standby database filename.
Adding a datafile or log to the primary database necessitates adding a corresponding file to the standby database. Use the STANDBY_FILE_MANAGEMENT initialization parameter with the DB_FILE_NAME_CONVERT initialization parameter to automate the process of creating files with identical filenames on the standby database and primary database.
The DB_FILE_NAME_CONVERT initialization parameter allows multiple pairs of filenames to be specified. For example:
DB_FILE_NAME_CONVERT="/private1/prmy1/df1", "/private1/stby1/df1", \
                     "/private1/prmy1", "/private1/stby1"
STANDBY_FILE_MANAGEMENT=auto
This section contains the following topics:
- Setting the STANDBY_FILE_MANAGEMENT Initialization Parameter
- Restrictions on ALTER DATABASE Operations
4.6.1 Setting the STANDBY_FILE_MANAGEMENT Initialization Parameter
When you set the STANDBY_FILE_MANAGEMENT initialization parameter to auto, any file added to the primary database is automatically created on the standby site using the same filename that you specified on the primary site.
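For example, with STANDBY_FILE_MANAGEMENT=auto and the DB_FILE_NAME_CONVERT setting shown earlier, adding a datafile on the primary database (the tablespace name and filename here are hypothetical) causes the corresponding file to be created on the standby database when the associated redo is applied:
SQL> ALTER TABLESPACE users ADD DATAFILE '/private1/prmy1/users02.dbf' SIZE 100M;
In this case, the standby database creates the matching file /private1/stby1/users02.dbf without manual intervention.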
The STANDBY_FILE_MANAGEMENT initialization parameter works with the DB_FILE_NAME_CONVERT parameter to take care of the case where datafiles are spread across multiple directory paths on the primary database.
4.6.2 Restrictions on ALTER DATABASE Operations
You cannot rename the datafile on the standby site when the STANDBY_FILE_MANAGEMENT initialization parameter is set to auto. In addition, when you set the STANDBY_FILE_MANAGEMENT initialization parameter to auto, the following operations are no longer necessary:
- ALTER DATABASE RENAME
- ALTER DATABASE ADD/DROP LOGFILE
- ALTER DATABASE ADD/DROP LOGFILE MEMBER
- ALTER DATABASE CREATE DATAFILE AS
If you attempt to use any of these operations on the standby database, an error is returned. For example:
SQL> ALTER DATABASE RENAME FILE '/private1/stby1/t_db2.dbf' to 'dummy';
alter database rename file '/private1/stby1/t_db2.dbf' to 'dummy'
*
ERROR at line 1:
ORA-01511: error in renaming log/data files
ORA-01270: RENAME operation is not allowed if STANDBY_FILE_MANAGEMENT is auto
See Also:
Section 5.8.1 to learn how to add datafiles to a database
4.7 Read-Only Mode
Read-only mode allows users to open and query a standby database without the potential for online data modifications. This mode reduces system overhead on the primary database by using the standby database for reporting purposes. You can periodically open the standby database in read-only mode to:
- Run reports
- Perform ad hoc queries to ensure that log apply services are updating the standby database properly
Figure 4-3 shows a standby database in read-only mode.
Figure 4-3 Standby Database in Read-Only Mode
This section contains the following topics:
- Assessing Whether to Run in Read-Only Mode
- Placing the Database in Read-Only Mode
4.7.1 Assessing Whether to Run in Read-Only Mode
As you decide whether to run the standby database in read-only mode, consider the following:
- Having the standby database open in read-only mode available for queries makes it unavailable for managed recovery. The archived redo logs are received by the standby database site but are not applied. At some point, you need to put the standby database back in managed recovery mode and apply the archived redo logs to resynchronize the standby database with the primary database. This action limits the role of the standby database as a disaster recovery database.
- If you need the standby database both for protection against disaster and for reporting, then you can maintain multiple standby databases, some in read-only mode and some in managed recovery mode. However, you will eventually need to resynchronize any read-only standby database with the primary database. A database in managed recovery mode gives you immediate protection against disaster.
4.7.2 Placing the Database in Read-Only Mode
You can change from shutdown or managed recovery mode into read-only mode (and back again) using the following procedures.
To open a standby database in read-only mode when the database is shut down:
- Start the Oracle instance for the standby database without mounting it:
SQL> STARTUP NOMOUNT pfile=initSTANDBY.ora;
- Mount the standby database:
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
- Open the database in read-only mode:
SQL> ALTER DATABASE OPEN READ ONLY;
To open the standby database in read-only mode when in managed recovery mode:
- Cancel log apply services:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
- Open the database in read-only mode:
SQL> ALTER DATABASE OPEN READ ONLY;
To move the standby database from read-only mode back to managed recovery mode:
- Terminate all active user sessions on the standby database.
- Restart log apply services:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE;
4.8 Read-Only Mode Considerations
Before you put your standby database in read-only mode, consider the following topics:
- Receiving Archived Redo Logs While in Read-Only Mode
- Sorting While in Read-Only Mode
- Sorting Without Temporary Tablespaces
4.8.1 Receiving Archived Redo Logs While in Read-Only Mode
While the standby database is in read-only mode, the standby site can still receive archived redo logs from the primary site. However, these archived redo logs are not automatically applied to the standby database until the database is in managed recovery mode. Consequently, a read-only standby database is not synchronized with the primary database at the archive level. You should not fail over to the standby database unless all archived redo logs have been applied.
See Also:
Section 3.4.2 for examples of initialization parameter settings you need to define to automatically archive from the primary site to the standby site
4.8.2 Sorting While in Read-Only Mode
To perform queries on a read-only standby database, the Oracle database server must be able to perform on-disk sorting operations. You cannot allocate space for sorting operations in tablespaces that cause Oracle to write to the data dictionary.
4.8.2.1 Creating Temporary Tablespaces
Temporary tablespaces allow you to add tempfile entries in read-only mode for the purpose of making queries. You can then perform on-disk sorting operations in a read-only database without affecting dictionary files or generating redo entries.
Note the following requirements for creating temporary tablespaces:
- The tablespaces must be temporary, locally managed, and contain only tempfiles.
- User-level allocations and permissions to use the locally managed temporary tablespaces must be in place on the primary database. You cannot change these settings on the standby database.
You should also follow these guidelines:
- Minimize data validation time on the standby database so that you can change to managed or manual recovery mode when necessary.
- Minimize runtimes for reports.
- Implement desired optimizations on the primary database only.
See Also: Oracle9i Database Administrator's Guide for more information about using tempfiles and temporary tablespaces
To create a temporary tablespace for use on a read-only standby database:
- Open the standby database in read-only mode using the relevant procedure described in Section 4.7.2.
- Create a temporary tablespace. For example, enter:
SQL> CREATE TEMPORARY TABLESPACE tbs_1 TEMPFILE 'file_1.dbf'
  2> EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
See Also: Oracle9i SQL Reference for information about the CREATE TEMPORARY TABLESPACE syntax
4.8.3 Sorting Without Temporary Tablespaces
If you attempt to sort without temporary tablespaces by executing a SQL SELECT * FROM V$PARAMETER statement when the database is neither mounted nor open, you will get an error. For example:
SQL> SELECT * FROM v$parameter;
select * from v$parameter
*
ERROR at line 1:
ORA-01220: file based sort illegal before database is open
To look at the parameters when the database is not open in read/write mode:
- Set the SORT_AREA_SIZE parameter in your initialization parameter file. If you do this, you can execute the SELECT * FROM V$PARAMETER statement when the database is not mounted, is mounted, or is open. SORT_AREA_SIZE is a static parameter, which means it must be specified in your initialization parameter file prior to starting the instance (see the example following this list).
- If you do not set the SORT_AREA_SIZE parameter, you cannot select all of the columns from the V$PARAMETER view. If you need to select only a few columns, you can execute the SELECT statement in any database state.
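For example, an entry such as the following in the standby initialization parameter file (the value shown is illustrative only) allows the SELECT * FROM V$PARAMETER statement to run in any database state:
SORT_AREA_SIZE=65536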
If you can open the database in read/write mode, you can execute a SELECT * FROM V$PARAMETER statement even if the SORT_AREA_SIZE parameter is not specified in your initialization parameter file. However, this only applies to the primary database. On a standby database, opening the database in read-only mode and executing a SELECT * FROM V$PARAMETER statement fails as previously shown. For standby databases, you must set the SORT_AREA_SIZE parameter in your initialization parameter file if you wish to execute the SELECT * FROM V$PARAMETER statement.
4.9 Monitoring Log Apply Services
To determine the status of archived redo logs on the standby database, query the V$MANAGED_STANDBY, V$ARCHIVE_DEST_STATUS, V$ARCHIVED_LOG, and V$LOG_HISTORY fixed views. You can also monitor the standby database using the Data Guard Manager.
This section contains the following topics:
- Accessing the V$MANAGED_STANDBY Fixed View
- Accessing the V$ARCHIVE_DEST_STATUS Fixed View
- Accessing the V$ARCHIVED_LOG Fixed View
- Accessing the V$LOG_HISTORY Fixed View
- Setting Archive Tracing
4.9.1 Accessing the V$MANAGED_STANDBY Fixed View
Query the standby database to monitor log apply and log transport services activity at the standby site.
SQL> SELECT process, status, thread#, sequence#, block#, blocks
  2> FROM v$managed_standby;

PROCESS STATUS       THREAD#    SEQUENCE#  BLOCK#     BLOCKS
------- ------------ ---------- ---------- ---------- ----------
RFS     ATTACHED              1        947         72         72
MRP0    APPLYING_LOG          1        946         10         72
The previous query output shows an RFS process that has completed the archival of redo log file sequence number 947. The output also shows a managed recovery operation that is actively applying archived redo log sequence number 946. The recovery operation is currently recovering block number 10 of the 72-block archived redo log file.
4.9.2 Accessing the V$ARCHIVE_DEST_STATUS Fixed View
To quickly determine the level of synchronization for the standby database, issue the following query:
SQL> SELECT archived_thread#, archived_seq#, applied_thread#, applied_seq#
  2> FROM v$archive_dest_status;

ARCHIVED_THREAD# ARCHIVED_SEQ# APPLIED_THREAD# APPLIED_SEQ#
---------------- ------------- --------------- ------------
               1           947               1          945
The previous query output shows the standby database is two archived log files behind in applying the redo logs received from the primary database. This may indicate that a single recovery process is unable to keep up with the volume of archived redo logs being received. Using the PARALLEL option may be a solution.
4.9.3 Accessing the V$ARCHIVED_LOG Fixed View
The V$ARCHIVED_LOG fixed view on the standby database shows all the archived redo logs received from the primary database. This view is only useful after the standby site has started receiving logs, because before that time, the view is populated by old archived log records generated from the primary control file. For example, you can execute the following SQL*Plus script (sample output included):
SQL> SELECT registrar, creator, thread#, sequence#, first_change#,
  2> next_change# FROM v$archived_log;

REGISTRAR CREATOR THREAD#    SEQUENCE#  FIRST_CHANGE# NEXT_CHANGE#
--------- ------- ---------- ---------- ------------- ------------
RFS       ARCH             1        945         74651        74739
RFS       ARCH             1        946         74739        74772
RFS       ARCH             1        947         74772        74774
The previous query output shows three archived redo logs received from the primary database.
4.9.4 Accessing the V$LOG_HISTORY Fixed View
The V$LOG_HISTORY fixed view on the standby database shows all the archived redo logs that have been recovered. For example:
SQL> SELECT thread#, sequence#, first_change#, next_change#
  2> FROM v$log_history;

THREAD#    SEQUENCE#  FIRST_CHANGE# NEXT_CHANGE#
---------- ---------- ------------- ------------
         1        945         74651        74739
The previous query output shows that the most recently recovered archived redo log was sequence number 945.
4.9.5 Setting Archive Tracing
To see the progression of the archiving of redo logs to the standby site, set the LOG_ARCHIVE_TRACE parameter in the primary and standby initialization parameter files.
4.9.5.1 Determining the Location of the Trace Files
The trace files for a database are located in the directory specified by the USER_DUMP_DEST parameter in the initialization parameter file. Connect to the primary and standby instances using SQL*Plus and issue a SHOW statement to determine the location. For example:
SQL> SHOW PARAMETER user_dump_dest

NAME                                 TYPE    VALUE
------------------------------------ ------- ------------------------------
user_dump_dest                       string  ?/rdbms/log
4.9.5.2 Setting the Log Trace Parameter
The format for the archiving trace parameter is as follows, where trace_level is an integer:
LOG_ARCHIVE_TRACE=trace_level
To enable, disable, or modify the LOG_ARCHIVE_TRACE parameter in a primary database, do one of the following:
- Shut down the primary database, modify the initialization parameter file, and restart the database.
- Issue an ALTER SYSTEM SET LOG_ARCHIVE_TRACE=n statement while the database is open or mounted.
To enable, disable, or modify the LOG_ARCHIVE_TRACE parameter in a standby database in read-only or recovery mode, issue a SQL statement similar to the following:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=15;
In the previous example, specifying 15 sets trace levels 1, 2, 4, and 8 as described in Section 4.9.5.3.
Issue the ALTER SYSTEM statement from a different standby session so that it affects trace output generated by the remote file server (RFS) and ARCn processes when the next archived log is received from the primary database. For example, enter:
SQL> ALTER SYSTEM SET LOG_ARCHIVE_TRACE=32;
4.9.5.3 Choosing an Integer Value
The integer values for the
LOG_ARCHIVE_TRACE
parameter represent levels of tracing data. In general, the higher the level, the more detailed the information. The following integer levels are available:
You can combine tracing levels by setting the value of the LOG_ARCHIVE_TRACE parameter to the sum of the individual levels. For example, setting the parameter to 6 generates level 2 and level 4 trace output.
Following are examples of the ARC0 trace data generated on the primary site by the archival of redo log 387 to two different destinations: the service standby1 and the local directory /oracle/dbs.
Note:
The level numbers do not appear in the actual trace output; they are shown here for clarification only.
Level Corresponding entry content (sample)
----- --------------------------------
( 1) ARC0: Begin archiving log# 1 seq# 387 thrd# 1
( 4) ARC0: VALIDATE
( 4) ARC0: PREPARE
( 4) ARC0: INITIALIZE
( 4) ARC0: SPOOL
( 8) ARC0: Creating archive destination 2 : 'standby1'
(16) ARC0: Issuing standby Create archive destination at 'standby1'
( 8) ARC0: Creating archive destination 1 : '/oracle/dbs/d1arc1_387.dbf'
(16) ARC0: Archiving block 1 count 1 to : 'standby1'
(16) ARC0: Issuing standby Archive of block 1 count 1 to 'standby1'
(16) ARC0: Archiving block 1 count 1 to : '/oracle/dbs/d1arc1_387.dbf'
( 8) ARC0: Closing archive destination 2 : standby1
(16) ARC0: Issuing standby Close archive destination at 'standby1'
( 8) ARC0: Closing archive destination 1 : /oracle/dbs/d1arc1_387.dbf
( 4) ARC0: FINISH
( 2) ARC0: Archival success destination 2 : 'standby1'
( 2) ARC0: Archival success destination 1 : '/oracle/dbs/d1arc1_387.dbf'
( 4) ARC0: COMPLETE, all destinations archived
(16) ARC0: ArchivedLog entry added: /oracle/dbs/d1arc1_387.dbf
(16) ARC0: ArchivedLog entry added: standby1
( 4) ARC0: ARCHIVED
( 1) ARC0: Completed archiving log# 1 seq# 387 thrd# 1
(32) Propagating archive 0 destination version 0 to version 2
     Propagating archive 0 state version 0 to version 2
     Propagating archive 1 destination version 0 to version 2
     Propagating archive 1 state version 0 to version 2
     Propagating archive 2 destination version 0 to version 1
     Propagating archive 2 state version 0 to version 1
     Propagating archive 3 destination version 0 to version 1
     Propagating archive 3 state version 0 to version 1
     Propagating archive 4 destination version 0 to version 1
     Propagating archive 4 state version 0 to version 1
(64) ARCH: changing ARC0 KCRRNOARCH->KCRRSCHED
     ARCH: STARTING ARCH PROCESSES
     ARCH: changing ARC0 KCRRSCHED->KCRRSTART
     ARCH: invoking ARC0
     ARC0: changing ARC0 KCRRSTART->KCRRACTIVE
     ARCH: Initializing ARC0
     ARCH: ARC0 invoked
     ARCH: STARTING ARCH PROCESSES COMPLETE
     ARC0 started with pid=8
     ARC0: Archival started
Following is the trace data generated by the RFS process on the standby site as it receives archived log 387 in directory /stby and applies it to the standby database:

level trace output (sample)
----- ------------------
( 4) RFS: Startup received from ARCH pid 9272
( 4) RFS: Notifier
( 4) RFS: Attaching to standby instance
( 1) RFS: Begin archive log# 2 seq# 387 thrd# 1
(32) Propagating archive 5 destination version 0 to version 2
(32) Propagating archive 5 state version 0 to version 1
( 8) RFS: Creating archive destination file: /stby/parc1_387.dbf
(16) RFS: Archiving block 1 count 11
( 1) RFS: Completed archive log# 2 seq# 387 thrd# 1
( 8) RFS: Closing archive destination file: /stby/parc1_387.dbf
(16) RFS: ArchivedLog entry added: /stby/parc1_387.dbf
( 1) RFS: Archivelog seq# 387 thrd# 1 available 04/02/99 09:40:53
( 4) RFS: Detaching from standby instance
( 4) RFS: Shutdown received from ARCH pid 9272