Queue Monitor Process (QMNn)

The QMON processes are optional background processes used by Oracle Streams Advanced Queuing (AQ), Streams, and a variety of other database products. They monitor and maintain all system- and user-owned AQ persistent and buffered objects. Like the job queue processes, these optional processes do not cause the instance to fail if they themselves fail. They provide the mechanism for message expiration, retry, and delay; maintain queue statistics; remove processed messages from the queue table; and maintain the dequeue IOT. They also handle all supported buffered message operations.


Archiver Process (ARCn)

The archiver process (ARCn) copies filled online redo log files to archived redo log files. This process works only if the database is running in ARCHIVELOG mode and automatic archiving is enabled. An archived redo log file is a copy of one of the identical filled members of an online redo log group, and it includes all the redo entries present in that member. For example, if you are multiplexing your online redo log and Group 1 contains the member files redo_log_01_01.log and redo_log_01_02.log, then the archiver process (ARCn) will archive one of these identical members.

An Oracle instance can have up to 10 ARCn processes (ARC0 to ARC9). The LGWR process starts a new ARCn process whenever the current number of ARCn processes is insufficient to handle the workload, and the alert log records each time LGWR starts a new one. You can specify multiple archiver processes with the initialization parameter LOG_ARCHIVE_MAX_PROCESSES, and the ALTER SYSTEM statement can change its value dynamically to increase or decrease the number of ARCn processes. You do not usually need to change this parameter from its default value of 1, because LGWR automatically starts more ARCn processes when the database workload requires them.
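As noted above, the number of archiver processes can be raised dynamically; a sketch of how a DBA might do this (the value 4 is only an illustration):

```sql
-- Allow up to four archiver processes (ARC0-ARC3); dynamic, no restart needed
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 4;

-- Confirm the current setting (SQL*Plus)
SHOW PARAMETER log_archive_max_processes;
```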


Recoverer Process (RECO)

The distributed transaction recovery process (RECO) finds pending distributed transactions and resolves them. Pending distributed transactions are two-phase commit transactions involving multiple databases. RECO manages two-phase commits to track and resolve in-doubt transactions. The database on which the transaction started is normally the coordinator; it asks the other databases involved in the two-phase commit whether they are ready to commit. The RECO process of a node automatically connects to the other databases involved in an in-doubt distributed transaction. When RECO reestablishes a connection between the involved database servers, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions.


Process Monitor (PMON)

The process monitor (PMON) recovers the resources held by failed processes. PMON monitors and restarts any failed dispatcher or server processes; it periodically checks their status and restarts any that have stopped running. PMON also registers information about the instance and dispatcher processes with the network listener; in RAC, PMON's role as service registration agent is particularly important. Like SMON, PMON checks regularly to see whether it is needed and can be called if another process detects the need for it.
The background process PMON cleans up after failed processes by:
• Rolling back the user's current transaction
• Releasing all currently held table or row locks
• Freeing other resources currently reserved by the user
• Restarting dead dispatchers


    System Monitor Process (SMON)

    The system monitor process (SMON) performs recovery, if necessary, at instance startup. If the Oracle instance fails, any information in the SGA that has not been written to disk is lost. For example, the failure of the operating system causes an instance failure. After the loss of the instance, the background process SMON automatically performs instance recovery when the database is reopened. Instance recovery consists of the following steps:

    • Rolling forward to recover data that has not been recorded in the data files but that has been recorded in the online redo log. This data has not been written to disk because of the loss of the SGA during instance failure. During this process, SMON reads the redo log files and applies the changes recorded in the redo log to the data blocks. Because all committed transactions have been written to the redo logs, this process completely recovers these transactions.
    • Opening the database so that users can log on. Any data that is not locked by unrecovered transactions is immediately available.
    • Rolling back uncommitted transactions. They are rolled back by SMON or by the individual server processes as they access locked data.

    SMON also performs some space maintenance functions:

    • It combines, or coalesces, adjacent areas of free space in the data files.
    • It deallocates temporary segments to return them as free space in the data files.

    Checkpoint Process (CKPT)

    The checkpoint process (CKPT) is responsible for updating the file headers in the database datafiles. A checkpoint occurs when the Oracle background process DBWn writes all the modified database buffers in the SGA, including both committed and uncommitted data, to the data files. Checkpoints are implemented for the following reasons:
    • Checkpoints ensure that data blocks in memory that change frequently are written to data files regularly. Because of the least recently used algorithm of DBWn, a data block that changes frequently might never qualify as the least recently used block and thus might never be written to disk if checkpoints did not occur.
    • Because all database changes up to the checkpoint have been recorded in the data files, redo log entries before the checkpoint no longer need to be applied to the data files if instance recovery is required. Therefore, checkpoints are useful because they can expedite instance recovery.
    At a checkpoint, the following information is written:
    • Checkpoint number into the data file headers
    • Checkpoint number, log sequence number, archived log names, and system change numbers into the control file.
    CKPT does not write data blocks to disk or redo blocks to the online redo logs. Some checkpoints can be logged to the alert log by setting LOG_CHECKPOINTS_TO_ALERT to TRUE. The control file and datafile headers are updated with the CHECKPOINT_CHANGE#.

    Every 3 seconds, CKPT calculates the checkpoint target RBA based on:

    • The most current RBA
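    The checkpoint logging mentioned above can be switched on dynamically; a sketch (the parameter and view are standard, the session is illustrative):

    ```sql
    -- Record checkpoint begin/end times in the alert log
    ALTER SYSTEM SET LOG_CHECKPOINTS_TO_ALERT = TRUE;

    -- The checkpoint SCN recorded in the control file can be observed with:
    SELECT checkpoint_change# FROM v$database;
    ```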


    Log Writer Process (LGWR)

    The Oracle database keeps a record of changes made to data. Every time a user performs a DML, DDL, or DCL operation, redo entries for it are also created. These redo entries contain the information needed to rebuild, or redo, the changes, and they are stored in the redo log buffer. The log writer process (LGWR) writes these redo entries to the redo log files. The redo log buffer works in a circular fashion, which means it overwrites old entries; before they are overwritten, LGWR writes the old entries to the redo log files. LGWR also writes redo entries after a certain amount of time to ensure that free space is available for new entries. The LGWR process writes:

    • When a transaction commits
    • When the redo log buffer cache is one-third full
    • When there is more than a megabyte of change records in the redo log buffer cache
    • Before DBWn writes modified blocks in the database buffer cache to the data files
    • Every 3 seconds.

     Because the redo is needed for recovery, LGWR confirms the commit only after the redo is written to disk. When DBWn writes modified data from the database buffer cache to disk, the corresponding redo entries must also be written to disk. The DBWn process checks for redo entries and signals LGWR if they have not yet been written.
    When a user commits a transaction, the transaction is assigned a system change number (SCN), which Oracle records along with the transaction's redo entries in the redo log. SCNs are recorded in the redo log so that recovery operations can be synchronized in Real Application Clusters and distributed databases.

     The LOG_BUFFER initialization parameter is used to set the size of the redo log buffer.
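    The redo log buffer size set this way can be checked from the instance itself; a small sketch (the displayed values differ per system):

    ```sql
    -- Size of the redo log buffer as configured (SQL*Plus)
    SHOW PARAMETER log_buffer;

    -- The same figure as actually allocated in the SGA
    SELECT name, bytes FROM v$sgastat WHERE name = 'log_buffer';
    ```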


    Database Writer Process (DBWn)

    The server process records changes to rollback and data blocks in the buffer cache. The database writer process (DBWn) writes the dirty buffers from the database buffer cache to the data files. Although one database writer process is enough for most systems, you can configure additional processes (DBW1 through DBW9 and DBWa through DBWj) to improve write performance if your system modifies data heavily. When a buffer in the database buffer cache is changed, it is marked as a dirty buffer: a buffer whose contents have been modified but not yet written to disk. The DBWn process writes dirty buffers to disk so that user processes are able to find free buffers that can be used to read new blocks into the cache. When the number of free buffers in the cache is low and user processes cannot find free blocks, DBWn writes dirty buffers out; in this way it manages the buffer cache so that user processes can always find free buffers.
    DBWn improves performance because server processes make changes only in the buffer cache, while DBWn manages the cache to keep free buffers available and writes dirty buffers to the data files. For example, blocks belonging to frequently accessed small tables or indexes are kept in the cache, so a cache hit occurs and there is no need to read those blocks from disk. The LRU algorithm keeps more frequently accessed blocks in the buffer cache, so that when a buffer is written to disk, it is unlikely to contain data that will be useful soon.
    The initialization parameter DB_WRITER_PROCESSES specifies the number of DBWn processes. The maximum number of DBWn processes is 20. The DBWn process writes dirty buffers to disk under the following conditions:
    • Incremental or normal checkpoint
    • The number of dirty buffers reaches a threshold value
    • A process scans a specified number of blocks when scanning for free buffers and cannot find any.
    • Timeout occurs.
    • A ping request in Real Application Clusters environment.
    • Taking a normal or temporary tablespace offline.
    • Placing a tablespace in read-only mode.
    • Dropping or truncating a table.
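    Configuring the additional writer processes described above is done through an initialization parameter; a sketch (DB_WRITER_PROCESSES is static, so the change takes effect only after a restart, and the value 4 is only an example):

    ```sql
    -- Request four database writer processes (DBW0-DBW3); applied at next startup
    ALTER SYSTEM SET DB_WRITER_PROCESSES = 4 SCOPE = SPFILE;

    -- Verify the current value (SQL*Plus)
    SHOW PARAMETER db_writer_processes;
    ```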


    Background Processes

    An Oracle instance is the combination of the background processes and memory structures. The relationship between the physical and memory structures is maintained and enforced by Oracle's background processes. Background processes perform functions on behalf of the invoking process. They consolidate functions that would otherwise be handled by multiple Oracle programs running for each user. The background processes perform input/output (I/O) and monitor other Oracle processes to provide increased parallelism for better performance and reliability. Many other processes exist that are used by other options within Oracle, such as Advanced Queuing, Real Application Clusters, Shared Server, Advanced Replication, and so on. The background processes in an Oracle instance can include the following:

    Server Process

    A server process is a program that directly interacts with the Oracle server. Once a user has established a connection, a server process is started to handle the user process's requests. A server process can be either a dedicated server process or a shared server process. In a dedicated server environment, the server process handles the requests of a single user process, and once the user process disconnects, the server process is terminated. In a shared server environment, the server process handles the requests of several user processes. The server process communicates with the Oracle server using the Oracle Program Interface. Server processes (or the server portion of combined user/server processes) created on behalf of each user's application can perform one or more of the following:

    • Parse and run SQL statements issued through the application
    • Read necessary data blocks from datafiles on disk into the shared database buffers of the SGA, if the blocks are not already present in the SGA
    • Return results in such a way that the application can process the information


    User Process

    When a user runs an Oracle tool, Oracle creates a user process to run the user's application. A database user who needs to request information from the database must first make a connection with the Oracle server. The connection is requested using a database interface tool, such as SQL*Plus, which begins the user process. The user process does not interact directly with the Oracle server; rather, it generates calls through the user program interface (UPI), which creates a session and starts a server process.

    Connections and Sessions

    The terms "connection" and "session" are closely related to the term "user process", but are very different in meaning.
    A connection is a communication pathway between a user process and an Oracle instance. A communication pathway is established using available interprocess communication mechanisms (on a computer that executes both the user process and Oracle) or network software (when different computers execute the database application and Oracle, and communicate via a network).
    A session is a specific connection of a user to an Oracle instance via a user process. For example, when a user starts SQL*Plus, the user must provide a valid username and password and then a session is established for that user. A session lasts from the time the user connects until the time the user disconnects or exits the database application.
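    A session established this way can be observed from another session; a sketch using the V$SESSION view (column list abbreviated):

    ```sql
    -- One row per current session; USERNAME is NULL for background processes
    SELECT sid, serial#, username, program
    FROM   v$session
    WHERE  username IS NOT NULL;
    ```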


    Introduction to processes

    All connected Oracle users must execute two modules of code to access an Oracle database instance:
    • Application or Oracle tool 
    • Oracle server code 

    These code modules are executed by processes. A process is a "thread of control", or a mechanism in an operating system that can execute a series of steps. (Some operating systems use the terms job or task.) A process normally has its own private memory area in which it runs.
    The process structure varies for different Oracle configurations, depending on the operating system and the choice of Oracle options.

    Streams Pool

    This is a new area in Oracle Database 10g that is used to provide buffer areas for the Streams components of Oracle. To configure the Streams pool explicitly, specify the size of the pool in bytes using the STREAMS_POOL_SIZE initialization parameter. If the size of the Streams pool is greater than zero, then any SGA memory used by Streams is allocated from the Streams pool. If the size of the Streams pool is zero or not specified, then the memory used by Streams is allocated from the shared pool and may use up to 10% of the shared pool.
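    Setting the pool explicitly, as described above, might look like this (64M is only an illustrative size):

    ```sql
    -- Give Streams its own 64 MB pool instead of drawing on the shared pool
    ALTER SYSTEM SET STREAMS_POOL_SIZE = 64M;

    SHOW PARAMETER streams_pool_size;
    ```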

    Large Pool & Java Pool

    The Oracle large pool is an optional memory component of the Oracle database SGA. This area is used to provide large memory allocations in many situations that arise during the operation of an Oracle database instance:
    • Session memory for the shared server and the Oracle XA interface when distributed transactions are involved
    • I/O server processes
    • Parallel query buffers
    • Oracle backup and restore operations using RMAN

     The large pool plays an important role in Oracle database tuning, since the memory for the components above would otherwise be allocated from the shared pool. Also, because of the large memory requirements of I/O and RMAN operations, the large pool is better able to satisfy them than the shared pool.

    Using a large pool allows the shared pool to primarily cache SQL and avoids the overhead caused by shrinking the SQL cache area.
    This area is used only if the shared server architecture, also called multi-threaded server (MTS), is used, or if parallel query is utilized. The large pool holds the user global areas when MTS is used and holds the parallel query execution message queues for parallel query.

    By issuing a summation select against the v$sgastat view, a DBA can quickly determine the size of the large pool area currently being used.
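    Such a summation select might look like the following (a sketch; the pool name must match v$sgastat exactly, and the reported bytes vary by instance):

    ```sql
    -- Total memory currently allocated in the large pool
    SELECT pool, SUM(bytes) AS total_bytes
    FROM   v$sgastat
    WHERE  pool = 'large pool'
    GROUP  BY pool;
    ```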


    The select above should be used when an "ORA-04031: unable to allocate 16084 bytes of shared memory ('large pool','unknown object','large pool hea','PX large pool')" error is received with a configured large pool; the number of bytes specified may differ.

    Data Dictionary Cache

    An important part of an Oracle database is its data dictionary, a read-only set of tables that provides administrative metadata about the database. A data dictionary contains information such as the following:
    • The definitions of every schema object in the database, including default values for columns and integrity constraint information
    • The amount of space allocated for and currently used by the schema objects
    • The names of Oracle Database users, the privileges and roles granted to users, and auditing information related to users

    The data dictionary is a central part of data management for every Oracle database. For example, the database performs the following actions:
    • Accesses the data dictionary to find information about users, schema objects, and storage structures
    • Modifies the data dictionary every time that a DDL statement is issued

    Contents of the Data Dictionary

    The data dictionary consists of the following:

    Base Tables

    The underlying tables that store information about the associated database. Only Oracle should write to and read these tables. Users rarely access them directly because they are normalized, and most of the data is stored in a cryptic format.

    Views

    The views that summarize and display the information stored in the base tables of the data dictionary. These views decode the base table data into useful information, such as user or table names, using joins and WHERE clauses to simplify the information. Most users are given access to the views rather than the base tables.


    Storage of the Data Dictionary

    The data dictionary base tables are the first objects created in any Oracle database. All data dictionary tables and views for a database are stored in the SYSTEM tablespace. Because the SYSTEM tablespace is always online when the database is open, the data dictionary is always available when the database is open.

    How Oracle Uses the Data Dictionary

    Data in the base tables of the data dictionary is necessary for Oracle to function. Therefore, only Oracle should write or change data dictionary information. Oracle provides scripts to modify the data dictionary tables when a database is upgraded or downgraded. During database operation, Oracle reads the data dictionary to ascertain that schema objects exist and that users have proper access to them. Oracle also updates the data dictionary continuously to reflect changes in database structures, auditing, grants, and data.
    For example, if user Kathy creates a table named parts, then new rows are added to the data dictionary that reflect the new table, columns, segment, extents, and the privileges that Kathy has on the table. This new information is then visible the next time the dictionary views are queried.
    The dictionary cache is a place in the shared pool that holds data dictionary information. Oracle requires the data dictionary frequently, so most parts of it are cached in the dictionary cache, and Oracle uses this cached information during query parsing. The dictionary cache is also called the row cache because the data inside it is maintained as rows instead of buffers.
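    The updates described above, such as Kathy's parts table appearing in the dictionary, surface through the dictionary views; a sketch (the KATHY schema and PARTS table are the hypothetical example from the text):

    ```sql
    -- Hypothetical: after user KATHY creates table PARTS,
    -- the new rows appear in the dictionary views
    SELECT table_name
    FROM   dba_tables
    WHERE  owner = 'KATHY';

    -- Column metadata recorded for the same table
    SELECT column_name, data_type
    FROM   dba_tab_columns
    WHERE  owner = 'KATHY' AND table_name = 'PARTS';
    ```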

    Library Cache

    The library cache stores information about the most recently used SQL and PL/SQL statements. The library cache is a very important part of the Oracle shared pool, which controls the execution of SQL statements and is divided into the data dictionary cache and the library cache. In a dedicated server configuration, the private SQL area is created in the PGA of the server process. Shared SQL areas are accessible to all users, so the library cache is contained in the shared pool within the SGA.

    Shared SQL Area

    The shared SQL area contains the parse tree and execution plan of SQL cursors and PL/SQL programs, so the executable form of SQL statements is available here and can be reused. When a query is submitted to the Oracle server for execution, Oracle checks whether the same query has been executed previously. If the parsed execution plan is found, the event is known as a library cache hit, or soft parsing. If the parsed form of the statement is not found in the shared pool, the new statement is parsed and its parsed version is stored in the shared SQL area. This is known as a hard parse.

    Oracle allocates memory from the shared pool when a new statement is submitted. If required, Oracle may deallocate memory from previous statements; as a result, deallocated statements require a hard parse when they are re-submitted. More resources are used to perform a hard parse, so it is very important to keep the shared pool large enough to avoid hard parsing.

    Because the library cache is kept inside the shared pool, use the SHARED_POOL_SIZE initialization parameter to increase the size of the shared pool. This indirectly increases the memory available for the shared SQL area.
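    Whether statements are being shared rather than hard-parsed can be inferred from the V$SQL view; a sketch (the filter text is only an example):

    ```sql
    -- EXECUTIONS much higher than LOADS suggests the cursor is being reused
    SELECT sql_text, parse_calls, executions, loads
    FROM   v$sql
    WHERE  sql_text LIKE 'SELECT%parts%';
    ```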

    Private SQL area

    Each session issuing a SQL statement has a private SQL area in its PGA. Each user that submits the same statement has a private SQL area pointing to the same shared SQL area. Thus, many private SQL areas in separate PGAs can be associated with the same shared SQL area.
    The database automatically determines when applications submit similar SQL statements. The database considers both SQL statements issued directly by users and applications and recursive SQL statements issued internally by other statements. The location of the private SQL area depends on the connection established for the session. If a session is connected through a shared server, then part of the private SQL area is kept in the SGA.

    Shared Pool

    The shared pool is used to store the most recently executed SQL statements and the most recently used data definitions. It caches various types of program data used in parsing, interpreting, and executing SQL statements, along with data dictionary information. The shared pool includes the following components:
    • Library cache
    • Data dictionary cache

    The total size of the shared pool is determined by the initialization parameter SHARED_POOL_SIZE. Increasing the value of this parameter increases the amount of memory reserved for the shared pool.
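    Resizing the pool through that parameter might look like this (the size shown is only an illustration):

    ```sql
    -- Reserve roughly 256 MB for the shared pool
    ALTER SYSTEM SET SHARED_POOL_SIZE = 256M;

    SHOW PARAMETER shared_pool_size;
    ```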


    Redo Log Buffer

    The Oracle database keeps a record of changes made to data. Every time a user performs a DML, DDL, or DCL operation, redo entries for it are also created. Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations, and they are used for database recovery if necessary. Oracle Database processes copy redo entries from the user memory space to the redo log buffer in the SGA, where they take up continuous, sequential space. The background process log writer (LGWR) writes the redo log buffer to the active online redo log group on disk.
    If the database is running in archiving mode, archived log files are created from the redo log files. These archived log files are used in various database recovery techniques.

    The LOG_BUFFER initialization parameter is used to set the size of the redo log buffer.


    Buffer Pools

    A buffer pool is a collection of buffers. The database buffer cache is divided into one or more buffer pools.

    You can manually configure separate buffer pools that either keep data in the buffer cache or make the buffers available for new data immediately after using the data blocks. You can then assign specific schema objects to the appropriate buffer pool to control how blocks age out of the cache.

    The possible buffer pools are as follows:

    Default pool

    This pool is the location where blocks are normally cached. Unless you manually configure separate pools, the default pool is the only buffer pool.

    Keep pool

    This pool is intended for blocks that were accessed frequently, but which aged out of the default pool because of lack of space. The goal of the keep buffer pool is to retain objects in memory, thus avoiding I/O operations.

    Recycle pool

    This pool is intended for blocks that are used infrequently. A recycle pool prevents these objects from consuming unnecessary space in the cache.

    You can create a tablespace with a block size that differs from the standard size. Each nondefault block size has its own pool, and Oracle Database manages the blocks in these pools in the same way as in the default pool. For example, a buffer cache using multiple pools might contain default, keep, and recycle pools for a standard 8 KB block size, plus separate pools for tablespaces that use the nonstandard block sizes of 2 KB, 4 KB, and 16 KB.
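    Assigning an object to one of these pools is done at the segment level; a sketch (the table name is hypothetical, and the KEEP pool must already be sized via DB_KEEP_CACHE_SIZE):

    ```sql
    -- Hypothetical lookup table directed toward the keep pool
    CREATE TABLE country_codes (
      code  VARCHAR2(2) PRIMARY KEY,
      name  VARCHAR2(60)
    ) STORAGE (BUFFER_POOL KEEP);

    -- An existing table can be redirected to another pool as well
    ALTER TABLE country_codes STORAGE (BUFFER_POOL RECYCLE);
    ```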

    Database Buffer Cache

    Database Buffer cache is one of the most important components of System Global Area (SGA). Database Buffer Cache is the place where data blocks are copied from datafiles to perform SQL operations. Buffer Cache is shared memory structure and it is concurrently accessed by all server processes. The buffers in the cache are organized in two lists: the write list and the least recently used (LRU) list.
    The write list holds dirty buffers, which contain data that has been modified but has not yet been written to disk.

    The LRU list holds pinned, clean, free, and dirty buffers that have not yet been moved to the write list. Free buffers do not contain any useful data and are available for use; pinned buffers are currently being accessed.
    When an Oracle process accesses a buffer, the process moves the buffer to the most recently used (MRU) end of the LRU list. As more buffers are continually moved to the MRU end, dirty buffers age toward the LRU end of the list. When an Oracle server process requires a specific data block, it first searches for it in the buffer cache. If it finds the required block, the block is accessed directly; this event is known as a cache hit. If the search in the buffer cache fails, the block is read from the datafile on disk; this event is called a cache miss. When the required block is not found in the cache, the process needs a free buffer into which to read the data from disk, and it starts searching for one from the least recently used end of the LRU list. While searching, if the process finds dirty buffers in the LRU list, it moves them to the write list. If the process cannot find a free buffer within a certain amount of time, it signals the DBWn process to write dirty buffers to disk.
    When the user process is performing a full table scan, it reads the blocks of the table into buffers and puts them on the LRU end (instead of the MRU end) of the LRU list. This is because a fully scanned table usually is needed only briefly, so the blocks should be moved out quickly to leave more frequently used blocks in the cache.

    Size of the Database Buffer Cache :- 

    Oracle allows different block sizes for different tablespaces. The standard block size is defined by the DB_BLOCK_SIZE initialization parameter, and the SYSTEM tablespace uses the standard block size. The DB_CACHE_SIZE parameter is used to define the size of the database buffer cache.
    You can also set the size of two additional buffer pools, KEEP and RECYCLE, by setting DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE. These three parameters are independent of one another.
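    Sizing the three caches described above might look like this (the sizes are purely illustrative):

    ```sql
    -- Default buffer cache, plus independently sized keep and recycle pools
    ALTER SYSTEM SET DB_CACHE_SIZE         = 512M;
    ALTER SYSTEM SET DB_KEEP_CACHE_SIZE    = 64M;
    ALTER SYSTEM SET DB_RECYCLE_CACHE_SIZE = 32M;
    ```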

