Kamran Agayev's Oracle Blog

Oracle Certified Master

Archive for November 24th, 2009

Chapter 7 – Managing archived redo logs

Posted by Kamran Agayev A. on 24th November 2009


–          The LOG_ARCHIVE_MAX_PROCESSES parameter specifies the number of ARCn processes that the database initially invokes. The default is two processes. You can set this parameter to start up to ten processes at instance startup. It is a dynamic parameter and can be changed with the ALTER SYSTEM command. (Changing the archiving mode itself with ALTER DATABASE ARCHIVELOG requires the database to be mounted, but not open.)
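As a sketch, the parameter can be changed on the fly (the value 4 below is just an illustration):

```sql
-- Increase the number of archiver processes dynamically
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;
```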

–          If you want to archive only to a single destination, you specify that destination in the LOG_ARCHIVE_DEST parameter. If you want to multiplex the archived logs, you can choose whether to archive to up to ten locations (using the LOG_ARCHIVE_DEST_n parameters) or to archive to a primary and a secondary destination (LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST).

–          You specify the location for LOG_ARCHIVE_DEST_n using the keywords LOCATION (for a local file system) and SERVICE (remote archival through an Oracle Net service name).
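A minimal sketch of both forms (the directory path and the service name standby1 are assumptions, not values from the documentation):

```sql
-- Local destination on a file system
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/app/oracle/arch';

-- Remote destination through an Oracle Net service name
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'SERVICE=standby1';
```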



–          The optional parameter LOG_ARCHIVE_MIN_SUCCEED_DEST=n determines the minimum number of destinations to which the database must successfully archive a redo log group before it can reuse the online log files. The default value is 1. Valid values for n are 1 to 2 if you are duplexing, or 1 to 10 if you are multiplexing.

–          MANDATORY option specifies that the transmission of the redo data to the destination must succeed before the local online redo log file can be made available for reuse. The default is OPTIONAL.

–          The REOPEN option specifies the minimum number of seconds before the archiver process (ARCn) or LGWR should try again to access a previously failed destination. The default is 300 seconds.

–          The VERIFY option indicates whether an archiver (ARCn) process should scan and verify the correctness of the contents of the completed archived redo log file, either local or remote, after successfully completing the archival operation. By default, archived redo log files are not verified.

–          When you specify REOPEN for an OPTIONAL destination, the database can overwrite online logs if there is an error. If you specify REOPEN for a MANDATORY destination, the database stalls the production database when it cannot successfully archive.
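These attributes can be combined in a destination string; the paths and the 60-second retry interval below are purely illustrative:

```sql
-- Destination 1 is mandatory and retried every 60 seconds after a failure;
-- destination 2 is optional
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/arch MANDATORY REOPEN=60';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'LOCATION=/u02/arch OPTIONAL';

-- Require at least one successful archival before the online log can be reused
ALTER SYSTEM SET LOG_ARCHIVE_MIN_SUCCEED_DEST = 1;
```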

–          Use the LOG_ARCHIVE_TRACE parameter to specify a trace level.
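For example (the trace level used here is an illustration; check the documentation for the meaning of each level):

```sql
-- Enable archiver tracing at an example level
ALTER SYSTEM SET LOG_ARCHIVE_TRACE = 2;
```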

–          To get information about archived redo logs, use the following views: V$DATABASE, V$ARCHIVED_LOG, V$ARCHIVE_DEST, V$ARCHIVE_PROCESSES, V$BACKUP_REDOLOG, V$LOG and V$LOG_HISTORY.
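A couple of example queries against these views:

```sql
-- Recently generated archived logs
SELECT name, sequence#, first_change#
  FROM v$archived_log;

-- Status of each archive destination
SELECT dest_name, status, error
  FROM v$archive_dest;
```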







Posted in My abstracts from Documentation 10gR2 | No Comments »

Chapter 5,6 – Managing Control Files and Redo Log files

Posted by Kamran Agayev A. on 24th November 2009


– Every Oracle database has a control file, which is a small binary file that records the physical structure of the database. The control file includes:

–          The database name

–          Names and locations of associated datafiles and redo log files

–          The timestamp of the database creation

–          The current log sequence number

–          Checkpoint information

-The initial control files of an Oracle Database are created when you issue the CREATE DATABASE statement.

– The CREATE CONTROLFILE statement can potentially damage specified datafiles and redo log files. Omitting a filename can cause loss of the data in that file, or loss of access to the entire database.

If a datafile exists in the data dictionary but not in the new control file, the database creates a placeholder entry in the control file under the name MISSINGnnnn, where nnnn is the file number in decimal. MISSINGnnnn is flagged in the control file as being offline and requiring media recovery. If the actual datafile corresponding to MISSINGnnnn is read-only or offline normal, then you can make the datafile accessible by renaming MISSINGnnnn to the name of the actual file. If MISSINGnnnn corresponds to a datafile that was not read-only or offline normal, then you cannot use the rename operation to make the datafile accessible, because the datafile requires media recovery that is precluded by the results of RESETLOGS. In this case, you must drop the tablespace containing the datafile.
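A hedged sketch of both outcomes (the placeholder name, file paths and tablespace name are hypothetical):

```sql
-- Read-only / offline-normal case: point the placeholder at the real file
ALTER DATABASE RENAME FILE 'MISSING00004' TO '/u01/oradata/db/users01.dbf';

-- Otherwise, the datafile cannot be recovered after RESETLOGS:
-- drop the tablespace that contains it
DROP TABLESPACE users INCLUDING CONTENTS;
```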

Use the V$CONTROLFILE_RECORD_SECTION view to display information about control file record sections.
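For example:

```sql
-- Show how much of each control file section is in use
SELECT type, records_total, records_used
  FROM v$controlfile_record_section;
```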

 Managing the Redo Log

–          The most crucial structure for recovery operations is the redo log, which consists of two or more preallocated files that store all changes made to the database as they occur. Every instance of an Oracle Database has an associated redo log to protect the database in case of an instance failure.

–          The redo log for each database instance is also referred to as a redo thread.

–          Redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database.

–          Redo entries record data that you can use to reconstruct all changes made to the database, including the undo segments. Therefore, the redo log also protects rollback data. When you recover the database using redo data, the database reads the change vectors in the redo records and applies the changes to the relevant blocks.

–          Whenever a transaction is committed, LGWR writes the transaction redo records from the redo log buffer of the SGA to a redo log file, and assigns a system change number (SCN) to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed.

–          Redo records can also be written to a redo log file before the corresponding transaction is committed. If the redo log buffer fills, or another transaction commits, LGWR flushes all of the redo log entries in the redo log buffer to a redo log file, even though some redo records may not be committed. If necessary, the database can roll back these changes.

–          The database requires a minimum of two files to guarantee that one is always available for writing while the other is being archived.

–          The redo log file that LGWR is actively writing to is called the current redo log file. Redo log files that are still required for instance recovery are called active redo log files: the changes they protect have not yet been written from the database buffer cache to the datafiles, because a checkpoint has not yet completed. Redo log files that are no longer required for instance recovery are called inactive redo log files.

–          Log switch is the point at which the database stops writing to one redo log file and begins writing to another.

–          Oracle Database assigns each redo log file a new log sequence number every time a log switch occurs. During crash, instance or media recovery, the database applies redo log files in ascending order by using the log sequence numbers of the necessary archived and online redo log files.

–          Datafiles should be placed on different disks from redo log files to reduce contention in writing data blocks and redo records

–          All members of the same multiplexed redo log group must be the same size. Members of different groups can have different sizes. However, there’s no advantage in varying file size between groups.  

–          The minimum size permitted for a redo log file is 4MB

–          If messages in alert.log file indicate that LGWR frequently has to wait for a group because a checkpoint has not completed or a group has not been archived, add groups.

–          The MAXLOGFILES parameter used in the CREATE DATABASE statement determines the maximum number of groups of redo log files for each database.

–          The MAXLOGMEMBERS parameter used in the CREATE DATABASE statement determines the maximum number of members for each group.

–          The ARCHIVE_LAG_TARGET parameter can be set specifically to force logs to be switched and archived. If it is set to a very low value, it can force frequent log switches, which can have a negative impact on performance.
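For instance (the 1800-second target is an illustrative value, not a recommendation):

```sql
-- Force a log switch at least every 30 minutes
ALTER SYSTEM SET ARCHIVE_LAG_TARGET = 1800;
```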

–          To create log groups use one of the following commands:

  • ALTER DATABASE ADD LOGFILE ('/home/oracle/…./redo1c.log','/home/oracle2/redo2c.log') SIZE 10M;
  • ALTER DATABASE ADD LOGFILE GROUP 5 ('file1.log','file2.log') SIZE 10M;

–          To create a new member for a redo log group use one of these commands:

  • ALTER DATABASE ADD LOGFILE MEMBER '/file_name.log' TO ('/file1.log','file2.log');

–          An instance requires at least two groups of redo log files.

–          You can drop a redo log group only if it is inactive. If you need to drop the current group, first force a log switch and make sure the group has been archived before dropping it. Check the V$LOG view to see whether it has been archived or not. Drop the redo log group using the ALTER DATABASE DROP LOGFILE GROUP 4; command.
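The sequence above can be sketched as follows (group 4 is an example group number):

```sql
-- Force a log switch so the group becomes inactive
ALTER SYSTEM SWITCH LOGFILE;

-- Check the group's state; it is safe to drop only when it shows
-- STATUS = 'INACTIVE' and ARCHIVED = 'YES'
SELECT group#, status, archived
  FROM v$log;

ALTER DATABASE DROP LOGFILE GROUP 4;
```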

–          To drop a specific inactive redo log member, use the ALTER DATABASE DROP LOGFILE MEMBER '/home/….'; command.

–          When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure.

–          A redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot continue. In this situation the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database: ALTER DATABASE CLEAR LOGFILE GROUP 3;

–          If the corrupted redo log file has not been archived, use the UNARCHIVED keyword in the statement.
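For example (group 3 is illustrative; note that the cleared redo is lost to recovery, so a full backup afterwards is prudent):

```sql
-- Reinitialize a corrupted group that has not yet been archived
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;
```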

–          Use the V$LOG, V$LOGFILE and V$LOG_HISTORY views for more information.
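A couple of example queries against these views:

```sql
-- Current state of each redo log group
SELECT group#, sequence#, bytes, members, status
  FROM v$log;

-- Individual members of each group
SELECT group#, member, status
  FROM v$logfile;
```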

Posted in My abstracts from Documentation 10gR2 | No Comments »