Oracle and DB2, Comparison and Compatibility/Architecture/DB2

DB2 Storage Model

The physical files used to store information about the DB2 Universal Database (DB2) and the database data are created in a directory on the filesystem by the CREATE DATABASE command. The following descriptions are of actual structures stored on disk. They are used for storing user information and managing the operation of the database instance, and include such things as log files, control files, lock files and configuration files. How DB2 maps these physical structures onto the logical structures used by the DBMS (such as containers, pages and tablespaces) is covered later on.

These files can be stored on raw devices or in filesystem files and directories. In this hierarchical structure a subdirectory is created for each instance, so that databases created in different instances in the same directory do not share the same path.

Below each instance subdirectory another directory called NODE0000 is created that identifies partitions in a logically partitioned database. The first database created in this directory is named SQL00001 and this contains all the database objects associated with the first database. Subsequent databases that are created are incrementally named, SQL00002, SQL00003 etc.

The physical structure is:

 Directory identified in CREATE DATABASE
       |
       +→ Instance
               |
               +→ NODE0000
                       |
                       +→ SQL00001
                       |
                       +→ SQL00002
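
The incremental SQLnnnnn naming convention can be sketched in a few lines of Python (the helper name is ours, purely illustrative – DB2 assigns these directory names itself):

```python
def database_dir_name(n: int) -> str:
    """Build the SQLnnnnn directory name given to the n-th database
    created under a NODE0000 partition directory (5 digits, zero-padded)."""
    return f"SQL{n:05d}"

# The first two databases created under an instance:
assert database_dir_name(1) == "SQL00001"
assert database_dir_name(2) == "SQL00002"
```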

The database directories contain the following files and directories:

 - SQLDBCON – database configuration information
 - The SQLOGDIR directory – contains database log files
        |
        + --> SQLLOGCTL.LFH and SQLOGMIR.LFH – log control information files.
              These files contain information about the active logs and are used in database recovery
 - SQLINSLK – a lock file that ensures that each database is used by only one database instance
 - DB2RHIST.ASC – a housekeeping file that contains the following data:
       o backups
       o restores
       o table loads
       o table reorganizations
       o tablespace alterations
 - DB2RHIST.BAK – a backup of the DB2RHIST.ASC file
 - SQLSPCS.1 and SQLSPCS.2 – table space information, each file is duplicated as a backup
 - SQLBP.1 and SQLBP.2 – buffer pool information, each file is duplicated as a backup

DB2 Memory Model

DB2 manages memory in four distinct chunks. The memory can be thought of as being organized from the ‘top down’. There is memory allocated for the Database Instance – the Database Management System (DBMS) that handles all the individual databases and all the instance level tasks and activities. Below this there is memory allocated for each individual database, holding things like the database buffer pool and recovery information. For each set of applications associated with a database there is a memory allocation, and within the applications there are things DB2 calls agents – like little subroutines or utilities that do the work for the applications. So from the top down, we have:


• Database Instance Memory (one per DBMS Instance)

• Database Global Memory (one per database)

• Application Global Memory

• Agent Private Memory (per application)

Four Main Areas of DB2 Memory Model
 


DB2 Database Instance Memory

The memory allocated for each DBMS instance is Database Instance Memory. It is allocated once per DB2 instance and is used for instance level tasks such as monitoring, auditing and inter-node communication. It is allocated when the DBMS is started (db2start), and freed when it is stopped (db2stop).

The memory pools in Database Instance Memory are:


• The Monitor Heap

• The Audit Buffer and

• The FCM Buffer


The health monitor captures information about the database manager, database, tablespaces and tablespace containers. It evaluates the health of the instance based on data from database system monitors, the operating system, and DB2 itself. The monitor heap keeps a maximum of ten history records for each indicator, and health indicators can only be evaluated on a database when it is active.

The history is stored in:

 Directory specified in CREATE DATABASE
                             |                         
                             +→ Instance
                                   |
                                   +→ hmonCache directory.

The history is cleared when the health monitor is stopped. The health monitor also trims old history records when it reaches ten records.
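
The ten-record trimming behaviour is essentially a bounded history per indicator, which can be sketched like this (illustrative only; the class is ours, not a DB2 structure):

```python
from collections import deque

class IndicatorHistory:
    """Sketch of the per-indicator cap described above: at most ten
    history records are kept, and the oldest is trimmed as new ones
    arrive."""
    MAX_RECORDS = 10

    def __init__(self):
        self.records = deque(maxlen=self.MAX_RECORDS)

    def add(self, record):
        self.records.append(record)  # at the cap, the oldest record is dropped

h = IndicatorHistory()
for i in range(15):
    h.add(i)
assert len(h.records) == 10 and h.records[0] == 5  # records 0-4 were trimmed
```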

The Audit Facility in DB2 monitors access to data outside of established authentication credentials, authorities and privileges. This facility generates records that are kept in the Audit Buffer, and subsequently the audit log files on disk.

The audit log (db2audit.log) and the audit configuration file (db2audit.cfg) are located in:

  Directory specified in CREATE DATABASE
          |
          +→ Instance
                  |
                  +→ security directory.

DB2 uses the Fast Communication Manager (FCM) to control the information flow between database partitions and agents. The memory that FCM needs is allocated as FCM buffers for the database instance.

The amount of memory reserved for Database Instance Memory is specified in the instance_memory parameter. The default is AUTOMATIC and DB2 will calculate the amount of memory from the sizes of the monitor heap, audit buffer and FCM buffers. Additional memory is also allocated for the overflow buffer which satisfies memory requests whenever a heap exceeds its configuration limits.

In this way, it can be seen that DB2 handles Database Instance Memory efficiently by dynamically managing its constituent parts.
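
An AUTOMATIC calculation of this kind could be sketched as follows. This is a hypothetical illustration of the idea – the function, parameters and the 10% overflow fraction are our assumptions, not DB2's actual formula:

```python
def instance_memory_automatic(monitor_heap: int, audit_buffer: int,
                              fcm_buffers: int,
                              overflow_fraction: float = 0.1) -> int:
    """Hypothetical sketch: derive an AUTOMATIC instance_memory value as
    the sum of the instance-level pools plus an overflow buffer for
    requests that exceed a heap's configured limit."""
    base = monitor_heap + audit_buffer + fcm_buffers
    return base + int(base * overflow_fraction)

# e.g. pools of 100, 50 and 350 units plus a 10% overflow buffer:
assert instance_memory_automatic(100, 50, 350) == 550
```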

DB2 Database Global Memory

The memory allocated for each database is Database Global Memory. It is allocated once per database and is used for backup and restore, locking, and SQL execution. It is allocated when you start or initially connect to a database and is released when you stop or disconnect. When the database is started, DB2 will allocate its memory pools in full or partially, depending on need.

Within Database Global Memory there are a number of memory pools. The size of most of these memory pools is pre-determined by their configuration settings; however, the Package Cache and Catalog Cache are dynamic, since you can have any number of applications connected to the database. The Package Cache is used to cache the access plans for SQL queries. The Catalog Cache is the DB2 data dictionary. Additionally, the utility heap is dynamically allocated. It is used for utilities like backup, and initially only 16 KB of memory is allocated for it. Once a utility is started, the full amount of utility heap is allocated. The lock list is a buffer that holds information about the locks held on the database. It is a crucial component in maintaining database consistency under multi-user concurrent access.
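
The lazy allocation of the utility heap can be sketched as follows (class and method names are ours, illustrative only – DB2 does not expose such an API):

```python
class UtilityHeap:
    """Sketch of the lazy allocation described above: only 16 KB is
    reserved up front, and the full configured heap is allocated when a
    utility (e.g. backup) actually starts."""
    INITIAL_KB = 16

    def __init__(self, configured_kb: int):
        self.configured_kb = configured_kb
        self.allocated_kb = self.INITIAL_KB      # minimal up-front reservation

    def start_utility(self):
        self.allocated_kb = self.configured_kb   # grow to full size on demand

heap = UtilityHeap(configured_kb=5000)
assert heap.allocated_kb == 16      # before any utility runs
heap.start_utility()
assert heap.allocated_kb == 5000    # full heap once backup starts
```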

You can configure DB2 to calculate the amount of Database Global Memory from the size of the memory pools it contains. It will also allocate additional memory for an overflow buffer to handle spikes in memory requirements (i.e. if backup for some reason needs more memory than is configured).

In this way, it can be seen that DB2 handles Database Global Memory efficiently by dynamically managing its constituent parts.

Usually the largest component of Database Global Memory is the buffer pool(s). A database must have at least one buffer pool, which is where all the data and indexes are handled. A database may have more than one buffer pool depending on the workload and database page sizes. The size of the buffer pool is limited by the architecture of addressable memory. If the buffer pool allocation in the database configuration files SQLBP.1 and SQLBP.2 is too large to fit into the addressable memory space, the database will not start. To cater for this, four small additional buffer pools are allocated when a database is started. They are of page size 4K, 8K, 16K and 32K respectively, and these will be used to start DB2 (with an error) so that you can change the main buffer pool size(s). Additionally, the main buffer pool can be extended to overcome this problem. This extended storage acts as a cache for pages ejected from the main buffer pool. Instead of being written to disk, they are stored in extended storage, which keeps them in main memory and reduces disk I/O. There is a trade off, since main memory is finite and extended storage is allocated at the expense of other memory pools.
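
The eviction path described above – pages ejected from the main buffer pool going to extended storage (still RAM) rather than straight to disk – can be sketched as a toy two-level cache. This is purely illustrative; real DB2 page management is far more involved:

```python
from collections import OrderedDict

class BufferPool:
    """Toy two-level cache: main buffer pool backed by extended storage."""
    def __init__(self, main_pages: int, extended_pages: int):
        self.main = OrderedDict()
        self.extended = OrderedDict()
        self.main_pages = main_pages
        self.extended_pages = extended_pages

    def read(self, page: str):
        if page in self.main:
            self.main.move_to_end(page)       # recently used
        elif page in self.extended:
            self.extended.pop(page)           # promote: RAM hit, no disk I/O
            self._install(page)
        else:
            self._install(page)               # simulated read from disk

    def _install(self, page: str):
        if len(self.main) >= self.main_pages:
            victim, _ = self.main.popitem(last=False)  # eject LRU page
            if len(self.extended) >= self.extended_pages:
                self.extended.popitem(last=False)      # only now lost to disk
            self.extended[victim] = True               # kept in main memory
        self.main[page] = True

bp = BufferPool(main_pages=2, extended_pages=2)
for p in ("A", "B", "C"):
    bp.read(p)
assert "A" in bp.extended and "C" in bp.main   # A was ejected, not discarded
```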

The Sort Heap Threshold memory pool is used for sorting, and DB2 performs two kinds of sort – private and shared. Shared sorts use memory in the database shared memory pool, and private sorts take place in an Agent's Private Memory (covered later).

Sorts are performed if there is no index on retrieved rows or the optimizer decides that a sort is more efficient than an index scan. A configuration parameter, SHEAPTHRES, specifies a soft upper limit on the amount of memory that can be used by an instance for private sorts (Agent Private Memory). DB2 manages memory for these sorts by limiting the amount of memory available for additional sorts once this limit is reached.

If you use parallelism or concentrator, DB2 may choose a shared sort over a private sort. In this instance, the sort heap for the sort is allocated in Database Global Memory. The maximum amount of memory used for shared sorts is controlled by the SHEAPTHRES_SHR database parameter. This is a database-wide hard limit on the total amount of memory available for shared sorts. When this limit is hit, the requesting application will get an error and no more shared sort memory requests will be allowed until memory consumption falls below the SHEAPTHRES_SHR limit.
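
The contrast between the two limits can be sketched as follows. The numbers and the "halve the grant" throttling policy are our assumptions for illustration – DB2's actual algorithm differs:

```python
def request_sort_heap(requested: int, in_use: int, limit: int,
                      hard: bool) -> int:
    """Sketch of soft vs hard sort-memory limits: past a soft limit
    (SHEAPTHRES), further sorts still run but with a throttled grant;
    past a hard limit (SHEAPTHRES_SHR), the request is refused."""
    if in_use + requested <= limit:
        return requested                  # within the limit: full grant
    if hard:
        raise MemoryError("shared sort memory exhausted")
    return max(requested // 2, 1)         # soft limit: reduced grant

assert request_sort_heap(100, 0, 500, hard=False) == 100   # plenty of room
assert request_sort_heap(100, 450, 500, hard=False) == 50  # soft: throttled
try:
    request_sort_heap(100, 450, 500, hard=True)            # hard: error
    raised = False
except MemoryError:
    raised = True
assert raised
```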

DB2 Application Global Memory

Application Global Memory is allocated when you have:


• Partitioned databases

• Non-partitioned databases using intra-parallel processing

• The connection concentrator enabled


In environments where the above is not true (i.e. non-partitioned databases), use of Application Global Memory is minimal.


Partitioning.

Until now, we have been describing a monolithic architecture, where all the user data in the database is represented as a single unit – the data, indexes, configuration files and transaction logs are for the complete dataset. Partitioning is a technique where the whole dataset is broken up into discrete units, all of them operating under a single database instance, but each with their own separate collection of data, indexes, configuration files and transaction logs.

This allows organizations to spread large datasets across several different servers (or to create multiple partitions on one large server that act independently – so called ‘Logical Partitions’). There are multiple benefits to doing this – administration can be carried out on a single partition at a time without affecting the complete dataset (i.e. backup and restore can be done on a partition by partition basis). The separation of data means that the whole dataset is less vulnerable to an outage (instead of all the data being unavailable, only a single partition is unavailable). Partitioning can be done arbitrarily – by geographic region, by date ranges, divisions in an organization, you name it. This allows processing and I/O loads to be spread. The overall benefits are flexibility, security, scalability and speed. A nice feature of the DB2 Database Partitioning Feature (DPF) is that whether you use it or not, it is relatively transparent – per the DB2 SQL Reference

"a partitioned relational database is a relational database whose data is managed across multiple partitions [..]. This separation of data across partitions is transparent to users of most SQL statements."

This means that the decision to add partitioning to a database for scalability or performance reasons does not require re-architecting an existing design. Of course, this option is not free, but it’s nice when the decision to get bigger or faster boils down to a hardware and licensing cost benefit analysis.
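
The transparency quoted above rests on the idea that each row's distribution key maps deterministically to one partition, so applications need not know where data lives. A minimal sketch of hash distribution (CRC32 stands in for DB2's real partitioning map; the function is ours):

```python
import zlib

def partition_for(key: str, n_partitions: int) -> int:
    """Map a distribution key to one of n partitions by hashing –
    the idea behind DB2's Database Partitioning Feature (DPF)."""
    return zlib.crc32(key.encode()) % n_partitions

# Every key lands on exactly one partition, and always the same one:
rows = ["cust-001", "cust-002", "cust-003"]
assert all(0 <= partition_for(k, 4) < 4 for k in rows)
assert partition_for("cust-001", 4) == partition_for("cust-001", 4)
```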

Partitioning is covered later in Chapter 4. It is introduced here as it relates to Application Global Memory. One of the ways partitioning is made more efficient is that application logic can be local at the application level (in Agent Private Memory), or at the partition level in Application Global Memory – this way processing for a specific partition can be handled within that partition at the application level, but processing across the whole dataset can be abstracted up to the partition level. This has consequences for Application Global Memory, and one of these consequences is that there needs to be memory allocated to each application – the Application Control Heap, and memory allocated to all applications, the Application Group Shared Heap.

The Application Group Shared Heap is used by agents to share information across partitions. It is also used to store information about temporary tables, and executing SQL statements for agents. Essentially it is a memory pool for partitions.

Intra-Parallel processing on a non-partitioned database is where multiple agents and sub agents can be active at the same time for an application. These Agents use the Application Group Shared Heap for communication at the application level in this instance (as opposed to the partition level).

The Connection Concentrator is a way of pooling connections. Each connection to the database (from applications and agents) consumes a portion of memory (to maintain connection state etc.). On large multi-user databases with thousands of connections this can eat a lot of memory. However, each of these connections may not be performing useful work – they can simply be waiting, which is wasteful. The concentrator acts as a multiplexer: all connections come into the concentrator, which holds these logical connections and multiplexes the resource requests of each agent across a smaller number of actual database connections. To do this the concentrator needs to store information about the connections, and it uses the Application Group Shared Heap to do so.
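
The multiplexing idea can be sketched as a toy class: many cheap logical connections share a small pool of physical ones, and a physical connection is borrowed only while a request actually executes. Names are illustrative, not a DB2 API:

```python
class Concentrator:
    """Toy multiplexer: logical connections over few physical ones."""
    def __init__(self, physical_connections: int):
        self.free = list(range(physical_connections))
        self.logical = set()

    def connect(self, client):
        self.logical.add(client)       # cheap: state only, no backend held

    def execute(self, client, work):
        conn = self.free.pop()         # borrow a physical connection
        try:
            return work(conn)
        finally:
            self.free.append(conn)     # return it as soon as work is done

c = Concentrator(physical_connections=2)
for client in range(1000):             # a thousand mostly-idle clients
    c.connect(client)
result = c.execute(42, lambda conn: f"ran on physical conn {conn}")
assert len(c.logical) == 1000 and len(c.free) == 2
```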

DB2 Agent Private Memory

Agents perform work for the applications, they are used for such things as:


• Building Query Plans,

• Executing Query Plans,

• Handling Cursors,

• Sorting,

• Gathering Statistics


Each agent process needs memory to carry out its function, and this memory is allocated when the agent is assigned as the result of a connect request or a new SQL request. As can be seen from the diagram, the Agent Private Memory Pools are:


• Application Heap – Used for sharing information between agents working on behalf of the same request (minimal for non-partitioned databases that are not using intra-parallelism). It is also used to store descriptor information for temporary tables.

• Sort Heap – Used for sort operations, hash joins, and dynamic bitmaps.

• Statement Heap - used as a work space for the SQL or XQuery compiler, it is allocated and released for each SQL or XQuery statement.

• Statistics Heap – Used for storing statistics about the characteristics of a table, associated indexes, or statistical views, such as:

    o number of records
    o number of pages
    o average record length

The optimizer uses these statistics to determine access paths.

• Query Heap - Used to store the queries in the agent's private memory. The information for each query is

    o the input and output SQL descriptor area
    o the statement text
    o the SQL communication area
    o the package name, creator, section number, and consistency token

• Java Interpreter Heap - Used by the Java interpreter to service Java stored procedures and user defined functions.

• Agent Stack Size - The agent stack is the virtual memory for each agent and in Windows the initial stack size is set to prevent stack overflow errors.


To balance memory limits against speed, you can have agents keep their memory when they are not working. These memory allocations for idle agents consume memory, so you can set a limit on the number of idle agents allowed at any one time. The advantage of having idle agents retain their memory is that when new requests come in you don’t have the overhead of creating and destroying agents – an idle agent is simply called into service for the request.
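
The idle-agent trade-off can be sketched as a small pool: finished agents are parked (keeping their memory) up to a configured cap, and a new request reuses one rather than paying creation cost. Illustrative only:

```python
class AgentPool:
    """Toy pool of idle agents, capped at max_idle."""
    def __init__(self, max_idle: int):
        self.max_idle = max_idle
        self.idle = []
        self.created = 0

    def acquire(self) -> str:
        if self.idle:
            return self.idle.pop()      # reuse: no creation overhead
        self.created += 1
        return f"agent-{self.created}"  # create a brand-new agent

    def release(self, agent: str):
        if len(self.idle) < self.max_idle:
            self.idle.append(agent)     # park it, retaining its memory
        # else: the agent is destroyed and its memory freed

pool = AgentPool(max_idle=1)
a = pool.acquire()
pool.release(a)
b = pool.acquire()
assert a == b and pool.created == 1     # the idle agent was reused
```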

DB2 Process Model


Previously, we described how database data, including user data and the information needed by the DBMS to operate are laid out on the physical disk. Next we described the memory structures that DB2 uses when it works with this data in RAM. The structures in memory are not just for the database data, but are used by the database programs – user and DB2 programs that are implemented as processes and threads. The diagram above is a representative subset of the processes and threads common to all DB2 databases and provides an insight into how the database operates.

The DB2 System Controller - db2sysc - is the main DB2 thread and it is what is kicked off when you start the database. There is only one multi-threaded db2sysc per active partition, and it spawns all the other threads that we will be describing – some of these are explicitly named, e.g. db2dlock, while others are identified simply as EDUs (Engine Dispatchable Units). All the processes and threads are spawned (or dispatched) by db2sysc, and EDU is a way of naming them collectively.

The database is a collection of these server processes. When it is started it is doing such things as monitoring the health of the database, but it is basically just ticking over; it does no usable work until a client process connects to it. Client processes in this instance are your programs – the SQL that you use to create, manage and access the data in the database.

Client programs can run remotely or on the same machine as the database server. They connect to the database through a listener, and a coordinator agent (db2agent, started by db2sysc) is assigned to them. If the client is running locally on the same machine as the database instance, it will connect through db2ipccm; otherwise it will connect through a listener specific to the client's method of communication – for example, a client using TCP/IP will connect to the db2tcpm listener, while a client using SNA / APPC will use db2snacm.
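
The listener selection can be pictured as a simple dispatch table (the mapping mirrors the text above; the dictionary and function are ours, for illustration only):

```python
# Hypothetical sketch of listener selection by communication method.
LISTENERS = {
    "local": "db2ipccm",   # same-machine clients
    "tcpip": "db2tcpm",    # TCP/IP clients
    "appc":  "db2snacm",   # SNA / APPC clients
}

def listener_for(protocol: str) -> str:
    """Return the listener thread a client of this protocol connects to."""
    return LISTENERS[protocol]

assert listener_for("tcpip") == "db2tcpm"
assert listener_for("local") == "db2ipccm"
```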

Just as the memory architecture was broken down into Instance, Database, Application and Agent memory pools, the process architecture will be broken down into processes associated with the Instance, Database, Connection and Request levels.

There are some processes outside of the firewall, and not part of the data server, per se. These are the user processes already described (local and remote) and such processes as:

• db2fmp - The fenced mode process responsible for executing fenced stored procedures and user-defined functions outside the firewall. db2fmp is always a separate process but may be multithreaded depending on the types of routines it executes.

• db2vend – Executes vendor code for other processes and threads (*nix only)


Instance level Threads.

The system controller, db2sysc, mentioned earlier must exist – otherwise your database is not working. The other Instance level threads in the Process Model diagram are:


• The listener threads db2ipccm and db2tcpm are instance threads as well as connection threads, as are all the sub-agent threads (db2agent, db2agnta, db2agntp and db2agnsc)

• db2wdog - The DB2 watchdog is for *nix systems only, since *nix processes are only aware of their parent process. Whenever a new process gets started, another *nix only process, the Global Daemon Service - db2gds - notifies db2wdog. If any process gets an abend signal, this signal is passed to db2wdog and db2wdog notifies all the other instance processes.

• db2acd - The autonomic computing daemon runs the health monitor, maintenance utilities, and admin task scheduler.


Connection Threads.

All client connections get a coordinator agent - db2agent - associated with them, whether they communicate directly with the agent, or through an instance level listener thread (db2ipccm or db2tcpm for example). The agents handle all database actions for the application. If you are using intra-parallelism, db2agent distributes the database requests to subagents - db2agntp. Agents can be connected to the database, instance or client. The agent pool holds idle agents (described previously in the DB2 Memory Model).

DB2 Coordination agents and sub agents connecting with database instances, databases or client applications
 


Database Threads.

The following is a list of the threads at the database level identified in figure 10. This is by no means a comprehensive list, it serves to identify the major threads associated with database operations (like logging and deadlock detection) that have commonality across Oracle and DB2.


• db2pfchr, The buffer pool prefetcher fetches data from disk to RAM

• db2pclnr, The buffer pool page cleaner ejects pages from the buffer pool

• db2loggr, The logger handles log files for transaction processing and recovery

• db2loggw, The log writer writes log records to the log files.

• db2logts, The tablespace logger collects data about which logs are active when you modify a tablespace

• db2dlock, The deadlock detector identifies lock collisions. In partitioned databases a global deadlock process - db2glock - coordinates the information from each db2dlock process on each partition.