Kamran Agayev's Oracle Blog

Oracle Certified Master

Archive for the 'My abstracts from Documentation 10gR2' Category

Chapter 10 – Managing the Undo Tablespace

Posted by Kamran Agayev A. on 1st December 2009

–          Every Oracle Database must have a method of maintaining information that is used to roll back, or undo, changes to the database. Such information consists of records of the actions of transactions, primarily before they are committed. These records are collectively referred to as undo

–          Undo records are used to:

  • Roll back transactions when a ROLLBACK statement is issued
  • Recover the database
  • Provide read consistency

When a ROLLBACK statement is issued, undo records are used to undo changes that were made to the database by the uncommitted transaction. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.

–          In automatic undo management mode, you create an undo tablespace, and the server automatically manages undo segments and space among the various active sessions

–          When the instance starts, the database automatically selects the first available undo tablespace. If no undo tablespace is available, then the instance starts without an undo tablespace and stores undo records in the SYSTEM tablespace.
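
A minimal init.ora sketch for enabling automatic undo management (the tablespace name is illustrative):

UNDO_MANAGEMENT = AUTO
UNDO_TABLESPACE = undotbs_01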

Undo Retention

–          After a transaction is committed, undo data is no longer needed for rollback or transaction recovery purposes. However, for consistent read purposes, long-running queries may require this old undo information for producing older images of data blocks. Furthermore, the success of several Oracle Flashback features can also depend upon the availability of older undo information. For these reasons, it is desirable to retain the old undo information for as long as possible.

–          When automatic undo management is enabled, there is always a current undo retention period, which is the minimum amount of time that Oracle Database attempts to retain old undo information before overwriting it. Old (committed) undo information that is older than the current undo retention period is said to be expired. Old undo information with an age that is less than the current undo retention period is said to be unexpired.

–          Oracle Database automatically tunes the undo retention period based on undo tablespace size and system activity. You can specify a minimum undo retention period (in seconds) by setting the UNDO_RETENTION parameter. The database makes its best effort to honor the specified minimum undo retention period, provided that the undo tablespace has space available for new transactions. When available space for new transactions becomes short, the database begins to overwrite expired undo. If the undo tablespace has no space for new transactions after all expired undo is overwritten, the database may begin overwriting unexpired undo information. If any of this overwritten undo information is required for consistent read in a current long-running query, the query could fail with the snapshot too old error message.
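
Since UNDO_RETENTION is dynamic, a minimum retention of, for example, 40 minutes can be requested as follows:

ALTER SYSTEM SET UNDO_RETENTION = 2400;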

–          To guarantee the success of long-running queries or Oracle Flashback operations, you can enable retention guarantee. If retention guarantee is enabled, the specified minimum undo retention is guaranteed: the database never overwrites unexpired undo data, even if it means that transactions fail due to lack of space in the undo tablespace. If retention guarantee is not enabled, the database can overwrite unexpired undo when space is low, thus lowering the undo retention for the system. This option is disabled by default.

–          You enable retention guarantee by specifying the RETENTION GUARANTEE clause for the undo tablespace when you create it with either the CREATE DATABASE or CREATE UNDO TABLESPACE statement.
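
A minimal sketch (the tablespace and file names are illustrative):

CREATE UNDO TABLESPACE undotbs_02
DATAFILE '/u01/oracle/undo0201.dbf' SIZE 500M
RETENTION GUARANTEE;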

Posted in My abstracts from Documentation 10gR2 | 8 Comments »

Chapter 9 – Managing Datafiles and Tempfiles

Posted by Kamran Agayev A. on 1st December 2009

–          Oracle Database assigns each datafile two associated file numbers, an absolute file number and a relative file number, that are used to uniquely identify it. The absolute file number can be found in the FILE# column of the V$DATAFILE or V$TEMPFILE view. The relative file number uniquely identifies a datafile within a tablespace. For small and medium size databases, relative file numbers usually have the same value as the absolute file number. However, when the number of datafiles in a database exceeds a threshold (typically 1023), the relative file number differs from the absolute file number.
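
Both numbers can be inspected with a query like the following (RFILE# is the relative file number column in V$DATAFILE):

SELECT FILE#, RFILE#, NAME FROM V$DATAFILE;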

–          When starting an instance, the DB_FILES parameter specifies the maximum number of database files that can be opened for this database. The default value is 200.

–          If several disk drives are available to store the database, consider placing potentially contending datafiles on separate disks. This way, when users query information, both disk drives can work simultaneously, retrieving data at the same time.

–          Datafiles should not be stored on the same disk drive that stores the database redo log files. If the datafiles and redo log files are stored on the same disk drive and that disk drive fails, the files cannot be used in your database recovery procedures.

–          To determine whether a datafile is auto-extensible, query the DBA_DATA_FILES view and examine the AUTOEXTENSIBLE column.
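
For example:

SELECT FILE_NAME, TABLESPACE_NAME, AUTOEXTENSIBLE FROM DBA_DATA_FILES;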

–          To take a datafile offline when the database is in NOARCHIVELOG mode, use the ALTER DATABASE statement with both the DATAFILE and OFFLINE FOR DROP clauses.
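
A minimal sketch (the datafile path is illustrative):

ALTER DATABASE DATAFILE '/u02/oracle/users03.dbf' OFFLINE FOR DROP;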

–          Clauses of the ALTER TABLESPACE statement allow you to change the online or offline status of all of the datafiles or tempfiles within a tablespace.

ALTER TABLESPACE tbs_test DATAFILE [ONLINE | OFFLINE]

To rename and relocate datafiles use the following steps:

·         Take the tablespace that contains the datafiles offline:

ALTER TABLESPACE users OFFLINE NORMAL;

·         Rename the datafiles using the operating system

·         Use the ALTER TABLESPACE statement with the RENAME DATAFILE clause to change the filenames within the database

ALTER TABLESPACE users RENAME DATAFILE 'file1' TO 'file2';

·         Back up the database

 

–          To drop a datafile, the database must be open and the file must be empty. If you must remove a datafile that is not empty and that cannot be made empty by dropping schema objects, you must drop the tablespace that contains the datafile. You cannot drop the first or only datafile in a tablespace, or datafiles in a read-only tablespace or the SYSTEM tablespace.

–          If you want to configure the database to use checksums to verify data blocks, set the initialization parameter DB_BLOCK_CHECKSUM to TRUE. This causes the DBWn process and the direct loader to calculate a checksum for each block and to store the checksum in the block header when writing the block to disk.

The checksum is verified when the block is read, but only if DB_BLOCK_CHECKSUM is TRUE and the last write of the block stored a checksum. If corruption is detected, the database returns message ORA-01578 and writes information about the corruption to the alert log
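
For example, assuming you change it dynamically rather than in the parameter file:

ALTER SYSTEM SET DB_BLOCK_CHECKSUM = TRUE;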

Posted in My abstracts from Documentation 10gR2 | 2 Comments »

Chapter 8 – Managing Tablespaces

Posted by Kamran Agayev A. on 25th November 2009

–          If the database has multiple tablespaces you can separate data of one application from the data of another to prevent multiple applications from being affected if a tablespace must be taken offline. You can store different datafiles of different tablespaces on different disk drives to reduce I/O contention, and back up individual tablespaces.

–          Create only enough tablespaces to fulfill your needs, and create these tablespaces with as few files as possible. If you need to increase the size of a tablespace, add one or two large datafiles, or create datafiles with autoextension enabled, rather than creating many small datafiles

–          SYSAUX tablespace, which acts as an auxiliary tablespace to the SYSTEM tablespace, is also always created when you create a database. It contains information about and the schemas used by various Oracle products and features, so that those products do not require their own tablespaces.

–          Locally managed tablespaces track all extent information in the tablespace itself by using bitmaps. This provides enhanced performance and reduces user reliance on the data dictionary, because the necessary information is stored in file headers and bitmap blocks.

–          Create a locally managed tablespace by specifying LOCAL in the EXTENT MANAGEMENT clause of the CREATE TABLESPACE statement. This is the default for new permanent tablespaces, but you must specify the EXTENT MANAGEMENT LOCAL clause if you want to specify either the AUTOALLOCATE clause or the UNIFORM clause. You can have the database manage extents for you automatically with the AUTOALLOCATE clause (the default) or you can specify that the tablespace is managed with uniform extents of a specific size (UNIFORM).

–          AUTOALLOCATE means that the extent sizes are managed by Oracle. Oracle chooses the next optimal size for the extents, starting with 64KB. As the segments grow and more extents are needed, Oracle starts allocating larger and larger sizes, ranging from 1MB to eventually 64MB extents.

–          The UNIFORM option tells the database to allocate and deallocate extents in the tablespace with the same unvarying size, which you can specify or let default to 1MB.

–          In a locally managed tablespace, there are two methods that Oracle Database can use to manage segment space: AUTOMATIC and MANUAL. Manual segment space management uses linked lists called "freelists" to manage free space in the segment, while automatic segment space management uses bitmaps. Automatic segment space management is the more efficient method, and is the default for all new permanent, locally managed tablespaces.
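
A sketch of creating such a tablespace explicitly (names and sizes are illustrative):

CREATE TABLESPACE assm_tbs DATAFILE '/u01/oracle/assm01.dbf' SIZE 50M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;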

–          To create a bigfile tablespace, use the CREATE BIGFILE TABLESPACE statement. Oracle automatically creates a locally managed tablespace with automatic segment space management.

–          If the default tablespace type was set to BIGFILE at database creation, but you want to create a traditional (smallfile) tablespace, then specify a CREATE SMALLFILE TABLESPACE statement to override the default tablespace type for the tablespace that you are creating.

–          Within a temporary tablespace, all sort operations for a given instance and tablespace share a single sort segment. The sort segment is created by the first statement that uses a temporary tablespace for sorting, after startup, and is released only at shutdown.

–          You can view the allocation and deallocation of space in a temporary tablespace sort segment using the V$SORT_SEGMENT view

–          When you create a temporary table, its rows are stored in your default temporary tablespace

–          Use CREATE BIGFILE TEMPORARY TABLESPACE statement to create a single tempfile tablespace

–          To rename a tempfile, you take the tempfile offline, use operating system commands to rename or relocate the tempfile, and then use the ALTER DATABASE RENAME FILE command to update the database controlfiles
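
A sketch of the whole sequence (file names are illustrative):

ALTER DATABASE TEMPFILE '/u02/oracle/temp01.dbf' OFFLINE;
-- rename or relocate the file with operating system commands, then:
ALTER DATABASE RENAME FILE '/u02/oracle/temp01.dbf' TO '/u02/oracle/temp02.dbf';
ALTER DATABASE TEMPFILE '/u02/oracle/temp02.dbf' ONLINE;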

–          Using a temporary tablespace group, rather than a single temporary tablespace, can alleviate problems caused when one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions.

–          You create a tablespace group implicitly when you include the TABLESPACE GROUP clause in the CREATE TEMPORARY TABLESPACE or ALTER TABLESPACE statement and the specified tablespace group does not currently exist.

ALTER TABLESPACE tmp_tbs TABLESPACE GROUP group2;

–          The view DBA_TABLESPACE_GROUPS lists tablespace groups and their member tablespaces

–          To assign a tablespace group as the default temporary tablespace, use the following statement:

ALTER DATABASE db_name DEFAULT TEMPORARY TABLESPACE group2;

–          You can create tablespaces with block sizes different from the standard database block size, which is specified by the DB_BLOCK_SIZE parameter.

–          Use the BLOCKSIZE clause of the CREATE TABLESPACE statement to create a tablespace with a block size different from the database standard block size. In order for the BLOCKSIZE clause to succeed, you must have already set the DB_CACHE_SIZE and at least one DB_nK_CACHE_SIZE parameter. Further, the integer you specify in the BLOCKSIZE clause must correspond with the setting of one DB_nK_CACHE_SIZE parameter.

CREATE TABLESPACE tbs DATAFILE '/datafile/' SIZE 10M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K BLOCKSIZE 8K;

–          You can’t take SYSTEM, UNDO and TEMPORARY tablespaces offline

–          When you take a tablespace offline using the IMMEDIATE clause, media recovery for the tablespace is required before the tablespace can be brought online.

–          If you must take a tablespace offline, use the NORMAL clause (the default) if possible. This setting guarantees that the tablespace will not require recovery to come back online, even if, after incomplete recovery, you reset the redo log sequence using an ALTER DATABASE OPEN RESETLOGS statement.

–          Specify TEMPORARY only when you cannot take the tablespace offline normally. In this case, only the files taken offline because of errors need to be recovered before the tablespace can be brought online. Specify IMMEDIATE only after trying both the normal and temporary settings.

–          You can drop items, such as tables or indexes, from a read-only tablespace, but you cannot create or alter objects in a read-only tablespace.

–          Before you can make a tablespace read-only, the tablespace must be online, and it cannot be an active undo tablespace or the SYSTEM tablespace.

–          When there are transactions waiting to commit, the ALTER TABLESPACE ... READ ONLY statement does not return immediately. It waits for all transactions started before you issued the ALTER TABLESPACE statement to either commit or roll back.

–          When substantial portions of a very large database are stored in read-only tablespaces that are located on slow-access devices or hierarchical storage, you should consider setting the READ_ONLY_OPEN_DELAYED parameter to TRUE. This speeds up certain operations, primarily opening the database, by causing datafiles in read-only tablespaces to be accessed for the first time only when an attempt is made to read data stored within them.

–          You cannot rename a tablespace if:

  • The tablespace is the SYSTEM or SYSAUX tablespace
  • Any datafile in the tablespace is offline, or the tablespace is offline

–          Use the CASCADE CONSTRAINTS clause to drop all referential integrity constraints from tables outside the tablespace that refer to primary and unique keys of tables inside the tablespace

–          Use INCLUDING CONTENTS AND DATAFILES to delete the physical datafiles from the OS. If an OS error prevents the deletion of a file, the DROP TABLESPACE statement still succeeds, but a message describing the error is written to the alert log.

–          If the SYSAUX tablespace becomes unavailable, core database functionality will remain operational.

–          You can monitor the occupants of the SYSAUX tablespace using the V$SYSAUX_OCCUPANTS view.

–          Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.

–          Starting with Oracle Database 10g, you can transport tablespaces across platforms.

–          You can query the V$TRANSPORTABLE_PLATFORM view to see the platforms that are supported, and to determine each platform's endian format (byte ordering).
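
For example:

SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM;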

–          If the source platform and the target platform are of different endianness, then an additional step must be done on either the source or target platform to convert the tablespace being transported to the target format.  If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

To transport a tablespace between databases, follow this procedure:

  • Check the endian format of both platforms by querying the V$TRANSPORTABLE_PLATFORM view
  • Pick a self-contained set of tablespaces
  • Generate a transportable tablespace set. It consists of datafiles for the set of tablespaces being transported and an export file containing structural information (metadata) for the set of tablespaces.
  • Transport the tablespace set
  • Import the tablespace set

 

–          There may be logical or physical dependencies between objects in the transportable set and those outside of the set. You can only transport a set of tablespaces that is self-contained. In this context "self-contained" means that there are no references from inside the set of tablespaces pointing outside of the tablespaces. To check whether the tablespace is self-contained, you can invoke the TRANSPORT_SET_CHECK procedure in the Oracle supplied package DBMS_TTS. After invoking this procedure, you can see all violations by selecting from the TRANSPORT_SET_VIOLATIONS view.
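
A minimal sketch, assuming the tablespaces sales_1 and sales_2 are the ones being transported:

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('sales_1,sales_2', TRUE);
SELECT * FROM TRANSPORT_SET_VIOLATIONS;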

–          To convert tablespaces from one endian format to another, log in to RMAN and run the following command:

RMAN> CONVERT TABLESPACE sales_1, sales_2 TO PLATFORM 'Microsoft Windows NT' FORMAT '/tmp/%U';

–          Then transport both the datafiles and the export file of the tablespaces to a place that is accessible to the target database.

–          Use RMAN to convert the necessary transported datafiles to the endian format of the destination host and deposit the results in /orahome/dbs, as shown in the following example:

RMAN> CONVERT DATAFILE
  '/h1/finance/tbs_21.f',
  '/h1/finance/tbs_22.f'
  TO PLATFORM = 'Solaris[tm] OE (32-bit)'
  FROM PLATFORM = 'HP Tru64 UNIX'
  DB_FILE_NAME_CONVERT = '/h1/finance/work/try/','/h1/finance/dbs/tru'
  PARALLELISM = 5;

Posted in My abstracts from Documentation 10gR2 | No Comments »

Chapter 7 – Managing archived redo logs

Posted by Kamran Agayev A. on 24th November 2009

–          The LOG_ARCHIVE_MAX_PROCESSES parameter specifies the number of ARCn processes that the database initially invokes. The default is two processes. You can specify up to ten processes to be started at instance startup. It is a dynamic parameter and can be changed with the ALTER SYSTEM command. The database must be mounted, but not open.
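
Since it is dynamic, for example:

ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;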

–          If you want to archive only to a single destination, you specify that destination in the LOG_ARCHIVE_DEST parameter. If you want to multiplex the archived logs, you can choose whether to archive to up to ten locations (using the LOG_ARCHIVE_DEST_n parameters) or to archive to a primary and a secondary destination (LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST).

–          You specify the location for LOG_ARCHIVE_DEST_n using the keywords LOCATION (for a local file system) and SERVICE (remote archival through Oracle Net service name)

LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/archive'

LOG_ARCHIVE_DEST_2 = 'SERVICE=standby'

–          The optional parameter LOG_ARCHIVE_MIN_SUCCEED_DEST=n determines the minimum number of destinations to which the database must successfully archive a redo log group before it can reuse online log files. The default value is 1. Valid values for n are 1 to 2 if you are using duplexing, or 1 to 10 if you are multiplexing.

–          MANDATORY option specifies that the transmission of the redo data to the destination must succeed before the local online redo log file can be made available for reuse. The default is OPTIONAL.

–          The REOPEN option specifies the minimum number of seconds before the archiver process (ARCn) or the LGWR should try again to access a previously failed destination. The default is 300 seconds.

–          The VERIFY option indicates whether an archiver (ARCn) process should scan and verify the correctness of the contents of the completed archived redo log file, either local or remote, after successfully completing the archival operation. By default, archived redo log files are not verified.

–          When you specify REOPEN for an OPTIONAL destination, the database can overwrite online logs if there is an error. If you specify REOPEN for a MANDATORY destination, the database stalls the production database when it cannot successfully archive.
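
These attributes are combined inside the destination string; a sketch (path and values illustrative):

LOG_ARCHIVE_DEST_1 = 'LOCATION=/disk1/archive MANDATORY REOPEN=120'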

–          Use the LOG_ARCHIVE_TRACE parameter to specify a trace level.

–          To get information about archived redo logs use the following views:

V$ARCHIVED_LOG

V$ARCHIVE_DEST

V$ARCHIVE_PROCESSES

V$BACKUP_REDOLOG

V$LOG

V$LOG_HISTORY
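
For example, to list archived logs with their sequence numbers and SCN ranges:

SELECT NAME, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$ARCHIVED_LOG;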

Posted in My abstracts from Documentation 10gR2 | No Comments »

Chapter 5,6 – Managing Control Files and Redo Log files

Posted by Kamran Agayev A. on 24th November 2009

– Every Oracle Database has a control file, which is a small binary file that records the physical structure of the database. The control file includes:

–          The database name

–          Names and locations of associated datafiles and redo log files

–          The timestamp of the database creation

–          The current log sequence number

–          Checkpoint information

– The initial control files of an Oracle Database are created when you issue the CREATE DATABASE statement.

– The CREATE CONTROLFILE statement can potentially damage specified datafiles and redo log files. Omitting a filename can cause loss of the data in that file, or loss of access to the entire database.

– If a datafile exists in the data dictionary but not in the new controlfile, the database creates a placeholder entry in the controlfile under the name MISSINGnnnn, where nnnn is the file number in decimal. MISSINGnnnn is flagged in the controlfile as being offline and requiring media recovery. If the actual datafile corresponding to MISSINGnnnn is read-only or offline normal, then you can make the datafile accessible by renaming MISSINGnnnn to the name of the actual file. If MISSINGnnnn corresponds to a datafile that was not read-only or offline normal, then you cannot use the rename operation to make the datafile accessible, because the datafile requires media recovery that is precluded by the results of RESETLOGS. In this case, you must drop the tablespace containing the datafile.

– Use the V$CONTROLFILE_RECORD_SECTION view to display information about controlfile record sections.

 Managing the Redo Log

–          The most crucial structure for recovery operations is the redo log, which consists of two or more preallocated files that store all changes made to the database as they occur. Every instance of an Oracle Database has an associated redo log to protect the database in case of an instance failure.

–          The redo log for each database instance is also referred to as a redo thread.

–          Redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database.

–          Redo entries record data that you can use to reconstruct all changes made to the database, including the undo segments. Therefore, the redo log also protects rollback data. When you recover the database using redo data, the database reads the change vectors in the redo records and applies the changes to the relevant blocks.

–          Whenever a transaction is committed, LGWR writes the transaction redo records from the redo log buffer of the SGA to a redo log file, and assigns a system change number (SCN) to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed.

–          Redo records can also be written to a redo log file before the corresponding transaction is committed. If the redo log buffer fills or another transaction commits, LGWR flushes all of the redo log entries in the redo log buffer to a redo log file, even though some redo records may not be committed. If necessary, the database can roll back these changes

–          The database requires a minimum of two files to guarantee that one is always available for writing while the other is being archived.

–          The redo log file that LGWR is actively writing to is called the current redo log file. Redo log files that are required for instance recovery are called active redo log files; their contents have not yet been written from the database buffer cache to the datafiles by a checkpoint. Redo log files that are no longer required for instance recovery are called inactive redo log files.

–          Log switch is the point at which the database stops writing to one redo log file and begins writing to another.

–          Oracle Database assigns each redo log file a new log sequence number every time a log switch occurs. During crash, instance, or media recovery, the database properly applies redo log files in ascending order by using the log sequence number of the necessary archived and redo log files.

–          Datafiles should be placed on different disks from redo log files to reduce contention in writing data blocks and redo records

–          All members of the same multiplexed redo log group must be the same size. Members of different groups can have different sizes. However, there’s no advantage in varying file size between groups.  

–          The minimum size permitted for a redo log file is 4MB

–          If messages in alert.log file indicate that LGWR frequently has to wait for a group because a checkpoint has not completed or a group has not been archived, add groups.

–          The MAXLOGFILES parameter used in the CREATE DATABASE statement determines the maximum number of groups of redo log files for each database.

–          The MAXLOGMEMBERS parameter used in the CREATE DATABASE statement determines the maximum number of members for each group.

–          The ARCHIVE_LAG_TARGET parameter can be set specifically to force logs to be switched and archived. If it is set to a very low value, there can be a negative impact on performance. This can force frequent log switches.

–          To create log groups use one of the following commands:

  • ALTER DATABASE ADD LOGFILE ('/home/oracle/…./redo1c.log','/home/oracle2/redo2c.log') SIZE 10M;
  • ALTER DATABASE ADD LOGFILE GROUP 5 ('file1.log','file2.log') SIZE 10M;

–          To create a new member for a redo log group use one of these commands:

  • ALTER DATABASE ADD LOGFILE MEMBER '/file_name.log' TO GROUP 5;
  • ALTER DATABASE ADD LOGFILE MEMBER '/file_name.log' TO ('/file1.log','/file2.log');

–          An instance requires at least two groups of redo log files

–          You can drop a redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur and make sure the group is archived before dropping it. Check the V$LOG view to see whether it has been archived. Drop the redo log group using the ALTER DATABASE DROP LOGFILE GROUP 4; command, as sketched below.
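
A sketch of that sequence (the group number is illustrative):

ALTER SYSTEM SWITCH LOGFILE;
SELECT GROUP#, STATUS, ARCHIVED FROM V$LOG;
ALTER DATABASE DROP LOGFILE GROUP 4;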

–          To drop specific inactive redo log members, use the ALTER DATABASE DROP LOGFILE MEMBER '/home/….'; command

–          When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure.

–          A redo log file might become corrupted while the database is open, and ultimately stop database activity because archiving cannot continue. In this situation, the ALTER DATABASE CLEAR LOGFILE statement can be used to reinitialize the file without shutting down the database: ALTER DATABASE CLEAR LOGFILE GROUP 3;

–          If the corrupted redo log file has not been archived, use the UNARCHIVED keyword in the statement
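
For example:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;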

–          Use V$LOG, V$LOGFILE and V$LOG_HISTORY views for more information

Posted in My abstracts from Documentation 10gR2 | No Comments »

Chapter 4 – Managing Oracle Database Processes

Posted by Kamran Agayev A. on 18th November 2009

–          Oracle database creates server processes to handle the requests of user processes connected to an instance.

–          Your database is always enabled to allow dedicated server processes, but you must specifically configure and enable shared server by setting one or more initialization parameters

–          Shared server can be configured for session multiplexing, which combines multiple sessions for transmission over a single network connection in order to conserve the operating system’s resources.

–          DBWn – the database writer process writes modified blocks from the database buffer cache to the datafiles. Oracle Database allows a maximum of 20 database writer processes. The DB_WRITER_PROCESSES initialization parameter specifies the number of DBWn processes.

–          LGWR – the log writer process writes redo log entries to disk. Redo log entries are generated in the redo log buffer of the SGA. LGWR writes the redo log entries sequentially into a redo log file. If the database has a multiplexed redo log, then LGWR writes the redo log entries to a group of redo log files.

–          CKPT – at specific times, all modified database buffers in the system global area are written to the datafiles by DBWn. This event is called a checkpoint. The checkpoint process is responsible for signaling DBWn at checkpoints and updating all the datafiles and control files of the database to indicate the most recent checkpoint.

–          SMON – the system monitor performs recovery when a failed instance starts up again. SMON also cleans up temporary segments that are no longer in use and recovers dead transactions skipped during system failure and instance recovery because of file-read or offline errors.

–          PMON – the process monitor performs process recovery when a user process fails. PMON is responsible for cleaning up the cache and freeing resources that the process was using.

–          RECO – the recoverer process is used to resolve distributed transactions that are pending because of a network or system failure in a distributed database.

The following views help you monitor the Oracle instance:

V$PROCESS – Contains information about the currently active processes

V$LOCKED_OBJECT – Lists all locks acquired by every transaction on the system

V$SESSION – lists session information for each current session

V$SESS_IO – contains I/O statistics for each user session

V$SESSION_WAIT – lists the resources or events for which active sessions are waiting

V$SYSSTAT – Contains session statistics

V$SQLAREA – Contains statistics about shared SQL area and contains one row for each SQL string

Posted in My abstracts from Documentation 10gR2 | 6 Comments »

Chapter 3 – Starting Up and Shutting Down

Posted by Kamran Agayev A. on 17th November 2009

–          If you know that media recovery is required, you can start an instance, mount a database to the instance, and have the recovery process automatically start by using the STARTUP command with the RECOVER clause: STARTUP OPEN RECOVER

–          When the SHUTDOWN IMMEDIATE command is used, any uncommitted transactions are rolled back. If long uncommitted transactions exist, this method of shutdown might not complete quickly, despite its name.

–          Occasionally you might want to put a database in a state that allows only DBA transactions. Such a state is referred to as a quiesced state. Without the ability to quiesce the database, you would need to shut down the database and reopen it in restricted mode. Use the ALTER SYSTEM QUIESCE RESTRICTED; command. Also, refer to Tom Kyte's answer on this issue at the following link:

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:902149038276

–          To restore the database to normal operation use ALTER SYSTEM UNQUIESCE;

Posted in My abstracts from Documentation 10gR2 | No Comments »

Chapter 2 – Creating an Oracle Database

Posted by Kamran Agayev A. on 17th November 2009

Before starting this chapter, I'd like to direct you to my step by step guide on "Create database manually".

–          If you do not assign passwords in the CREATE DATABASE command, the SYS and SYSTEM users are assigned the default passwords change_on_install and manager, respectively. Oracle strongly recommends that you specify these clauses, even though they are optional.

–          Specify the EXTENT MANAGEMENT LOCAL clause in the CREATE DATABASE statement to create a locally managed SYSTEM tablespace. If you do not specify the EXTENT MANAGEMENT LOCAL clause, by default the database creates a dictionary managed SYSTEM tablespace.

–          The SYSAUX tablespace serves as an auxiliary tablespace to the SYSTEM tablespace. Because it is the default tablespace for many Oracle Database features and products that previously required their own tablespaces, it reduces the number of tablespaces required by the database and that you must maintain.

–          You can’t drop or rename the SYSAUX tablespace

–          The SYSAUX tablespace needs to be at least 240MB at the time of database creation

–          The maximum number of datafiles in an Oracle Database is limited (usually to 64K files – 65,536 files)

–          The maximum number of blocks in a smallfile datafile is 4M (approximately 4 million) blocks. From the following link you can find a nice article written about it: http://www.dba-oracle.com/t_bigfile_tablespace_tips.htm

–          A bigfile tablespace can contain only one file, but that file can have up to 4G blocks

–          A smallfile tablespace can contain up to 1022 files with up to 4M blocks each

–          To dynamically change the default tablespace type after database creation, use the ALTER DATABASE SET DEFAULT BIGFILE TABLESPACE; command

–          It is recommended that you use a server parameter file

–          DB_NAME must be set to a text string of no more than eight characters. During database creation, the name provided for DB_NAME is recorded in the datafiles, redo log files, and control file of the database.

–          You can’t change the block size after database creation except by recreating the database.

–          The SGA_MAX_SIZE initialization parameter specifies the maximum size of the System Global Area for the lifetime of the instance. You can dynamically alter the initialization parameters affecting the size of the buffer caches, shared pool, large pool, Java pool and streams pool.

–          You enable the automatic shared memory management feature by setting the SGA_TARGET parameter to a non-zero value. In addition, you must set the value of all automatically sized SGA components to zero to enable full automatic tuning of these components. When SGA_TARGET is not set, the automatic shared memory management feature is not enabled.
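
A sketch of enabling it on a running instance (sizes are illustrative):

ALTER SYSTEM SET SGA_TARGET = 600M;
-- zero the auto-tuned components so they are fully automatically managed
ALTER SYSTEM SET SHARED_POOL_SIZE = 0;
ALTER SYSTEM SET DB_CACHE_SIZE = 0;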

–           When you increase the size of a manually sized component, extra memory is taken away from one or more automatically sized components. And when you decrease the size of a manually sized component, the memory that is released is given to the automatically sized components

–          The COMPATIBLE initialization parameter enables or disables the use of features in the database that affect the file format on disk. For example, if you create an Oracle Database 10g database but specify COMPATIBLE=9.2.0.2 in the initialization parameter file, then features that require 10.0 compatibility will generate an error if you try to use them. Such a database is said to be at the 9.2.0.2 compatibility level.

–          To limit the number of users created in a database, set the LICENSE_MAX_USERS initialization parameter in the database initialization parameter file

–          The COMMENT clause of the ALTER SYSTEM command lets you associate a text string with the parameter update. When you specify SCOPE as SPFILE or BOTH, the comment is written to the server parameter file.
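
For example (the parameter and comment text are illustrative):

ALTER SYSTEM SET OPEN_CURSORS = 400 COMMENT = 'Increased for batch jobs' SCOPE = BOTH;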

–          To view initialization parameters, use the SHOW PARAMETER command, the CREATE PFILE command, or the V$PARAMETER view (this view displays the values of parameters currently in effect)

–          The V$SPPARAMETER view displays the current contents of the server parameter file. The view returns FALSE values in the ISSPECIFIED column if a server parameter file is not being used by the instance

Posted in My abstracts from Documentation 10gR2 | No Comments »

Chapter 1 – Overview of Administering an Oracle Database

Posted by Kamran Agayev A. on 17th November 2009

–          Check your current release number as follows:

SQL> COL PRODUCT FORMAT A35
SQL> COL VERSION FORMAT A15
SQL> COL STATUS FORMAT A15
SQL> SELECT * FROM PRODUCT_COMPONENT_VERSION;

PRODUCT                             VERSION         STATUS
----------------------------------- --------------- ---------------
NLSRTL                              10.2.0.1.0      Production
Oracle Database 10g Enterprise Edit 10.2.0.1.0      Production
PL/SQL                              10.2.0.1.0      Production
TNS for Linux:                      10.2.0.1.0      Production

–          Create at least one additional administrative user and grant to that user an appropriate administrative role to use when performing daily administrative tasks. Do not use SYS and SYSTEM for these purposes.

–          When you create an Oracle Database, the users SYS and SYSTEM are automatically created and granted the DBA role

–          No one should create any tables in the schemas of users SYS and SYSTEM

–          A predefined DBA role is automatically created with every Oracle Database installation

–          When you connect with SYSDBA or SYSOPER privileges, you connect with a default schema, not with the schema that is generally associated with your username. For SYSDBA this schema is SYS, for SYSOPER the schema is PUBLIC

–           If you issue the ALTER USER statement to change the password for SYS after connecting to the database, both the password stored in the data dictionary and the password stored in the password file are updated. You can’t change the password for SYS if REMOTE_LOGIN_PASSWORDFILE is set to SHARED. An error message is issued if you attempt to do so

–          Use the V$PWFILE_USERS view to see the users who have been granted SYSDBA or SYSOPER system privileges for a database

Posted in My abstracts from Documentation 10gR2 | 2 Comments »