Oracle Database can set limits on how much virtual memory the database uses for the SGA. An instance can start with minimal memory and grow by expanding the memory allocated to SGA components, up to a maximum determined by the SGA_MAX_SIZE initialization parameter. If the value of SGA_MAX_SIZE in the initialization parameter file or server parameter file (SPFILE) is less than the sum of the memory allocated for all components, either explicitly in the parameter file or by default, at the time the instance is initialized, then the database ignores the setting for SGA_MAX_SIZE.
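As a minimal sketch, the statements below show how the SGA ceiling can be inspected and raised from SQL*Plus. The 2G value is purely illustrative, and because SGA_MAX_SIZE is a static parameter, the new value takes effect only after the instance is restarted.

```sql
-- Sketch (illustrative value): inspect and raise the SGA ceiling.
SHOW PARAMETER sga_max_size

-- SGA_MAX_SIZE is static, so it can only be changed in the SPFILE;
-- the new ceiling applies at the next instance startup.
ALTER SYSTEM SET SGA_MAX_SIZE = 2G SCOPE = SPFILE;

-- Compare the ceiling with the memory currently allocated to the SGA.
SELECT * FROM V$SGA;
```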
For optimal performance in most systems, the entire SGA should fit in real memory. If it does not, and virtual memory is used to store parts of it, then overall database performance can decrease dramatically, because the operating system pages portions of the SGA (writes them to and reads them from disk). The amount of memory dedicated to all shared areas in the SGA also affects performance.
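To judge whether the SGA fits in real memory, its current allocation can be compared against the physical memory on the host. The query below is a sketch that assumes access to the V$SGAINFO view (available in Oracle Database 10g and later).

```sql
-- Sketch: list SGA components and their current sizes in megabytes,
-- so the total can be compared with the host's physical memory.
SELECT name,
       ROUND(bytes / 1024 / 1024) AS size_mb,
       resizeable
FROM   V$SGAINFO
ORDER  BY bytes DESC;
```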
The size of the SGA is determined by several initialization parameters. The following parameters have the greatest effect on SGA size; a sample set of statements for these parameters appears after the table.
Parameter           Description
------------------  -----------------------------------------------------------------------
DB_CACHE_SIZE       The size of the cache of standard blocks.
LOG_BUFFER          The number of bytes allocated for the redo log buffer.
SHARED_POOL_SIZE    The size in bytes of the area devoted to shared SQL and PL/SQL statements.
LARGE_POOL_SIZE     The size of the large pool; the default is 0.
JAVA_POOL_SIZE      The size of the Java pool.
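The statements below are an illustrative sketch of how these parameters might be set; the sizes are placeholders rather than recommendations, and SCOPE = SPFILE defers the changes to the next instance startup (SGA_MAX_SIZE and LOG_BUFFER are static and can only be changed this way).

```sql
-- Illustrative sketch: placeholder sizes, not tuning recommendations.
ALTER SYSTEM SET SGA_MAX_SIZE     = 2G   SCOPE = SPFILE;
ALTER SYSTEM SET DB_CACHE_SIZE    = 1G   SCOPE = SPFILE;
ALTER SYSTEM SET SHARED_POOL_SIZE = 512M SCOPE = SPFILE;
ALTER SYSTEM SET LARGE_POOL_SIZE  = 64M  SCOPE = SPFILE;
ALTER SYSTEM SET JAVA_POOL_SIZE   = 64M  SCOPE = SPFILE;
ALTER SYSTEM SET LOG_BUFFER       = 16M  SCOPE = SPFILE;  -- static parameter
```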