Process Architecture

This article describes the various processes in an Oracle database: client processes, server processes (dedicated and shared), background processes (such as PMON, LGWR, and DBW), plus parallel execution and resource management. Each of these processes has its own responsibilities, and together they keep the database running efficiently and reliably.

A process normally runs in its own private memory area. Most processes can periodically write to an associated trace file.

The process execution architecture depends on the operating system:

For example, on Windows an Oracle background process is a thread of execution within a process. On Linux and UNIX, an Oracle process is either an operating system process or a thread within an operating system process.

In releases earlier than Oracle Database 12c, Oracle processes did not run as threads on UNIX and Linux systems. Starting in Oracle Database 12c, the multithreaded Oracle Database model enables Oracle processes to execute as operating system threads in separate address spaces. When Oracle Database 12c is installed, the database runs in process mode. You must set the THREADED_EXECUTION initialization parameter to TRUE to run the database in threaded mode. In threaded mode, some background processes on UNIX and Linux run as processes (with each process containing one thread), whereas the remaining Oracle processes run as threads within processes.
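As a sketch, switching a 12c or later database from process mode to threaded mode could look like the following (THREADED_EXECUTION is a static parameter, so the change takes effect only after an instance restart; note also that OS authentication is unavailable in threaded mode, so SYSDBA connections need a password):

```sql
-- Enable the multithreaded model (static parameter; requires restart)
ALTER SYSTEM SET THREADED_EXECUTION = TRUE SCOPE = SPFILE;

-- After restart, verify the setting
SHOW PARAMETER threaded_execution;
```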

Processes are divided into client processes and Oracle processes (an Oracle process is a unit of execution that runs the Oracle database code). Oracle processes fall into three categories:

  1. A background process starts with the database instance and performs maintenance tasks such as performing instance recovery, cleaning up processes, writing redo buffers to disk, and so on.
  2. A server process performs work based on a client request.
  3. A slave process performs additional tasks for a background or server process.

Client Processes

The Oracle processes servicing the client process can read from and write to the SGA, whereas the client process cannot. 

A single connection can have multiple sessions established on it. The sessions are independent: a commit in one session does not affect transactions in other sessions.

Multiple sessions can exist concurrently for a single database user.

The following example shows multiple sessions sharing one connection: running SET AUTOTRACE ON creates an additional session that monitors the statistics of the original session.

SQL> SELECT SID, SERIAL#, PADDR FROM V$SESSION WHERE USERNAME = USER;

SID SERIAL# PADDR
--- ------- --------
 90      91 3BE2E41C

SQL> SET AUTOTRACE ON STATISTICS;

SQL> SELECT SID, SERIAL#, PADDR FROM V$SESSION WHERE USERNAME = USER;

SID SERIAL# PADDR
--- ------- --------
 88      93 3BE2E41C
 90      91 3BE2E41C

Server Processes

Server processes can perform one or more of the following tasks:

  1. Parse and run SQL statements issued through the application, including creating and executing the query plan
  2. Execute PL/SQL code
  3. Read data blocks from data files into the database buffer cache (the DBW background process has the task of writing modified blocks back to disk)
  4. Return results in such a way that the application can process the information

  1. Dedicated Server Processes

Each client process communicates directly with its server process. This server process is dedicated to its client process for the duration of the session. The server process stores process-specific information and the UGA in its PGA.

  2. Shared Server Processes

In shared server connections, client applications connect over a network to a dispatcher process, not a server process. For example, 20 client processes can connect to a single dispatcher process.

The dispatcher process receives requests from connected clients and puts them into a request queue in the large pool. The first available shared server process takes the request from the queue and processes it. Afterward, the shared server places the result into the dispatcher response queue. The dispatcher process monitors this queue and transmits the result to the client.

Like a dedicated server process, a shared server process has its own PGA. However, the UGA for a session is in the SGA so that any shared server can access session data.

Background Processes

An Oracle Database background process is defined as any process that is listed in V$PROCESS and has a non-null value in the PNAME column.

SELECT PNAME, SPID FROM V$PROCESS WHERE PNAME IS NOT NULL;

An instance can have many background processes, not all of which always exist in every database configuration.

  1. Mandatory Background Processes

These processes run by default in a read/write database instance started with a minimally configured initialization parameter file. A read-only database instance disables some of these processes.

  1. Process Monitor Process (PMON) Group

The PMON group includes PMON, Cleanup Main Process (CLMN), and Cleanup Helper Processes (CLnn). These processes are responsible for the monitoring and cleanup of other processes.

The PMON group oversees cleanup of the buffer cache and the release of resources used by a client process. For example, the PMON group is responsible for resetting the status of the active transaction table, releasing locks that are no longer required, and removing the process ID of terminated processes from the list of active processes.

Process Monitor Process (PMON)

PMON monitors the other background processes; for server and dispatcher processes it also performs recovery.

The process monitor (PMON) detects the termination of other background processes. If a server or dispatcher process terminates abnormally, then the PMON group is responsible for performing process recovery. Process termination can have multiple causes, including operating system kill commands or ALTER SYSTEM KILL SESSION statements.
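As an illustration of the second cause, a DBA can terminate a session manually and let the PMON group handle the cleanup. The SID and SERIAL# values below are illustrative:

```sql
-- Find the target session (HR is a hypothetical username)
SELECT sid, serial# FROM v$session WHERE username = 'HR';

-- Terminate it; the PMON group then recovers the session's resources
ALTER SYSTEM KILL SESSION '90,91';   -- format: '<sid>,<serial#>'
```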

Cleanup Main Process (CLMN)

PMON delegates cleanup work to the cleanup main process (CLMN). The task of detecting abnormal termination remains with PMON.

CLMN wakes up periodically to perform these checks.

CLMN periodically performs cleanup of terminated processes, terminated sessions, transactions, network connections, idle sessions, detached transactions, and detached network connections that have exceeded their idle timeout.

Cleanup Helper Processes (CLnn)

The CLnn processes do the actual cleanup work on behalf of CLMN.

CLMN delegates cleanup work to the CLnn helper processes. 

The CLnn processes assist in the cleanup of terminated processes and sessions. The number of helper processes is proportional to the amount of cleanup work to be done and the current efficiency of cleanup.

A cleanup process can become blocked, which prevents it from proceeding to clean up other processes. Also, if multiple processes require cleanup, then cleanup time can be significant. For these reasons, Oracle Database can use multiple helper processes in parallel to perform cleanup, thus alleviating slow performance.

The V$CLEANUP_PROCESS and V$DEAD_CLEANUP views contain metadata about CLMN cleanup. The V$CLEANUP_PROCESS view contains one row for every cleanup process. For example, if V$CLEANUP_PROCESS.STATE is BUSY, then the process is currently engaged in cleanup.
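A quick way to inspect this metadata is to query the two views directly; for example (a sketch, selecting all columns to avoid assuming a particular column set beyond STATE):

```sql
-- How many cleanup slaves are in each state (e.g. BUSY, IDLE)
SELECT state, COUNT(*) FROM v$cleanup_process GROUP BY state;

-- Dead processes and sessions still awaiting cleanup
SELECT * FROM v$dead_cleanup;
```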

Database Resource Quarantine

If a process or session terminates, then the PMON group releases the held resources to the database. In some cases, the PMON group can automatically quarantine corrupted, unrecoverable resources so that the database instance is not immediately forced to terminate.

The PMON group continues to perform as much cleanup as possible on the process or session that was holding the quarantined resource.

The V$QUARANTINE view contains metadata such as the type of resource, amount of memory consumed, Oracle error causing the quarantine, and so on.
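To review what has been quarantined on an instance, the view can be queried directly (a sketch; selecting all columns rather than assuming specific column names):

```sql
-- List quarantined resources and the errors that caused the quarantine
SELECT * FROM v$quarantine;
```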

  2. Process Manager (PMAN)

Process Manager (PMAN) oversees several background processes including shared servers, pooled servers, and job queue processes.

PMAN monitors, spawns, and stops the following types of processes:

Dispatcher and shared server processes

Connection broker and pooled server processes for database resident connection pools

Job queue processes

Restartable background processes

  3. Listener Registration Process (LREG)

The listener registration process (LREG) registers information about the database instance and dispatcher processes with the Oracle Net Listener.

LREG registers the instance dynamically at startup; if the listener is not running at that time, LREG keeps retrying periodically.

When an instance starts, LREG polls the listener to determine whether it is running. If the listener is running, then LREG passes it relevant parameters. If it is not running, then LREG periodically attempts to contact it.

In releases before Oracle Database 12c, PMON performed the listener registration.

  4. System Monitor Process (SMON)

The system monitor process (SMON) is in charge of a variety of system-level cleanup duties.

SMON checks regularly to see whether it is needed. Other processes can call SMON if they detect a need for it.

Duties assigned to SMON include:

  1. Performing instance recovery, if necessary, at instance startup. In an Oracle RAC database, the SMON process of one database instance can perform instance recovery for a failed instance.
  2. Recovering terminated transactions that were skipped during instance recovery because of file-read or tablespace offline errors. SMON recovers the transactions when the tablespace or file is brought back online.
  3. Cleaning up unused temporary segments. For example, Oracle Database allocates extents when creating an index. If the operation fails, then SMON cleans up the temporary space.
  4. Coalescing contiguous free extents within dictionary-managed tablespaces.

  5. Database Writer Process (DBW)

See 《RedoLog & Checkpoint & SCN》 for details.

DBW processes write modified buffers in the database buffer cache to disk.

The DBW process writes dirty buffers to disk under the following conditions:

  1. When a server process cannot find a clean reusable buffer after scanning a threshold number of buffers, it signals DBW to write. DBW writes dirty buffers to disk asynchronously if possible while performing other processing.
  2. DBW periodically writes buffers (by default every three seconds) to advance the checkpoint, which is the position in the redo thread from which instance recovery begins. The log position of the checkpoint is determined by the oldest dirty buffer in the buffer cache.
  3. A checkpoint, including an incremental checkpoint, is triggered.

The DB_WRITER_PROCESSES initialization parameter specifies the number of DBW processes started with the instance. The default is max(1, CPU_COUNT/8), and the maximum is 100.

You can configure additional processes—DBW1 through DBW9, DBWa through DBWz, and BW36 through BW99—to improve write performance if your system modifies data heavily.
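For example, a write-heavy system could be given more database writers like this (DB_WRITER_PROCESSES is a static parameter, so a restart is needed):

```sql
-- Check the current number of database writers
SHOW PARAMETER db_writer_processes;

-- Raise it to 4 (static parameter; takes effect after restart)
ALTER SYSTEM SET DB_WRITER_PROCESSES = 4 SCOPE = SPFILE;
```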

  6. Log Writer Process (LGWR)

In the following circumstances, LGWR writes all redo entries that have been copied into the buffer since the last time it wrote:

  1. A user commits a transaction.
  2. An online redo log switch occurs.
  3. Three seconds have passed since LGWR last wrote.
  4. The redo log buffer is one-third full or contains 1 MB of buffered data. 

Either condition (one-third of the log buffer unwritten, or 1 MB of unwritten redo) triggers the flush to disk.

  5. DBW must write modified buffers to disk.

Before DBW can write a dirty buffer, the database must write to disk the redo records associated with changes to the buffer (the write-ahead protocol). If DBW discovers that some redo records have not been written, it signals LGWR to write the records to disk, and waits for LGWR to complete before writing the data buffers to disk.

LGWR can write redo log entries to disk before a transaction commits.

When activity is high, LGWR can use group commits.

For example, a user commits, causing LGWR to write the transaction's redo entries to disk.

During this write, other users commit. LGWR cannot start another write to commit these transactions until its previous write completes. Upon completion, LGWR can write the list of redo entries of waiting transactions (not yet committed) in one operation. In this way, the database minimizes disk I/O and maximizes performance. If commit requests continue at a high rate, then every write by LGWR can contain multiple commit records.

If a log file is inaccessible, then LGWR continues writing to other files in the group and writes an error to the LGWR trace file and the alert log. If all files in a group are damaged, or if the group is unavailable because it has not been archived, then LGWR cannot continue to function.

  7. Checkpoint Process (CKPT)

See 《RedoLog & Checkpoint & SCN》 for details.

The checkpoint process (CKPT) updates the control file and data file headers with checkpoint information and signals DBW to write blocks to disk. Checkpoint information includes the checkpoint position, SCN, and location in online redo log to begin recovery.

CKPT does not write data blocks to data files or redo blocks to online redo log files.

  8. Manageability Monitor Processes (MMON and MMNL)

The manageability monitor process (MMON) performs many tasks related to the Automatic Workload Repository (AWR). MMON writes out the required statistics for AWR on a scheduled basis. For example, MMON writes when a metric violates its threshold value, takes snapshots, and captures statistics for recently modified SQL objects.

The actual MMON work is carried out by Mnnn slave processes (analogous to the Jnnn and Qnnn job queue processes).

The manageability monitor lite process (MMNL) writes statistics from the Active Session History (ASH) buffer in the SGA to disk. MMNL writes to disk when the ASH buffer is full.
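The two stages of ASH data (in-memory buffer versus data flushed to disk by MMNL) can be seen side by side; a sketch:

```sql
-- Recent samples still in the in-memory ASH buffer (last 5 minutes)
SELECT sample_time, session_id, sql_id
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 5/1440;

-- Samples already persisted to the AWR by MMNL
SELECT COUNT(*) FROM dba_hist_active_sess_history;
```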

    

  9. Recoverer Process (RECO)

In a distributed database, the recoverer process (RECO) automatically resolves failures in distributed transactions.

The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When RECO reestablishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions.

  2. Optional Background Processes

An optional background process is any background process not defined as mandatory.

Most optional background processes are specific to tasks or features.

  1. Archiver Processes (ARCn)

The LOG_ARCHIVE_MAX_PROCESSES parameter specifies the number of ARCn processes started at instance startup (not the maximum number in use); the default is 4, and the range is 1-30. If LGWR generates redo faster than the archivers can archive it, LGWR automatically starts additional ARCn processes. Because this is fully automatic, LOG_ARCHIVE_MAX_PROCESSES usually does not need to be changed, though you can also set it explicitly, for example to 8.

An archiver process (ARCn) copies online redo log files to offline storage after a redo log switch occurs. These processes can also collect transaction redo data and transmit it to standby database destinations. ARCn processes exist only when the database is in ARCHIVELOG mode and automatic archiving is enabled.
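A sketch of checking and adjusting the archiver configuration from SQL*Plus:

```sql
-- Current archiver setting and ARCHIVELOG status
SHOW PARAMETER log_archive_max_processes;
ARCHIVE LOG LIST;   -- SQL*Plus command: shows log mode and archive destination

-- Raise the number of archivers (dynamic parameter)
ALTER SYSTEM SET LOG_ARCHIVE_MAX_PROCESSES = 8;
```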

  2. Job Queue Processes (CJQ0 and Jnnn)

CJQ0 is the coordinator that schedules all jobs, while each Jnnn process executes a specific job assigned by CJQ0.

A queue process runs user jobs, often in batch mode. A job is a user-defined task scheduled to run one or more times.

Oracle Database manages job queue processes dynamically, thereby enabling job queue clients to use more job queue processes when required. The database releases resources used by the new processes when they are idle.

Dynamic job queue processes can run many jobs concurrently at a given interval. The sequence of events is as follows:

  1. The job coordinator process (CJQ0) is automatically started and stopped as needed by Oracle Scheduler. The coordinator process periodically selects jobs that need to be run from the system JOB$ table. New jobs selected are ordered by time.
  2. The coordinator process dynamically spawns job queue slave processes (Jnnn) to run the jobs.
  3. The job queue process runs one of the jobs that was selected by the CJQ0 process for execution. Each job queue process runs one job at a time to completion.
  4. After the process finishes execution of a single job, it polls for more jobs. If no jobs are scheduled for execution, then it enters a sleep state, from which it wakes up at periodic intervals and polls for more jobs. If the process does not find any new jobs, then it terminates after a preset interval.

JOB_QUEUE_PROCESSES specifies the maximum number of job slaves per instance that can be created for the execution of DBMS_JOB jobs and Oracle Scheduler (DBMS_SCHEDULER) jobs. The default is the maximum value, 4000.

However, clients should not assume that all job queue processes are available for job execution. The coordinator process is not started if the initialization parameter JOB_QUEUE_PROCESSES is set to 0.
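A sketch of inspecting and capping the job slaves:

```sql
-- Current limit on job queue slaves
SHOW PARAMETER job_queue_processes;

-- Lower the cap (dynamic parameter; 0 disables the CJQ0 coordinator entirely)
ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 100;
```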

  3. Flashback Data Archive Process (FBDA)

The flashback data archive process (FBDA) archives historical rows of tracked tables into Flashback Data Archives. When a transaction containing DML on a tracked table commits, this process stores the pre-image of the changed rows into the Flashback Data Archive. It also keeps metadata on the current rows.

FBDA automatically manages the Flashback Data Archive for space, organization, and retention. Additionally, the process keeps track of how long the archiving of tracked transactions has occurred.

  4. Space Management Coordinator Process (SMCO)

The SMCO process coordinates the execution of various space management related tasks.

Typical tasks include proactive space allocation and space reclamation. SMCO dynamically spawns slave processes (Wnnn) to implement the task.

Slave Processes

Slave processes are background processes that perform work on behalf of other processes.

  1. I/O Slave Processes


I/O slave processes (Innn) simulate asynchronous I/O for systems and devices that do not support it. In asynchronous I/O, there is no timing requirement for transmission, enabling other processes to start before the transmission has finished.

For example, assume that an application writes 1000 blocks to a disk on an operating system that does not support asynchronous I/O. Each write occurs sequentially and waits for a confirmation that the write was successful. With asynchronous I/O, the application can write the blocks in bulk and perform other work while waiting for a response from the operating system that all blocks were written.

To simulate asynchronous I/O, one process oversees several slave processes. The invoker process assigns work to each of the slave processes, who wait for each write to complete and report back to the invoker when done. In true asynchronous I/O the operating system waits for the I/O to complete and reports back to the process, while in simulated asynchronous I/O the slaves wait and report back to the invoker.

The database supports different types of I/O slaves, including the following:

  1. I/O slaves for Recovery Manager (RMAN). When using RMAN to back up or restore data, you can use I/O slaves for both disk and tape devices.
  2. Database writer slaves. If it is not practical to use multiple database writer processes, such as when the computer has one CPU, then the database can distribute I/O over multiple slave processes. DBW is the only process that scans the buffer cache LRU list for blocks to be written to disk. However, I/O slaves perform the I/O for these blocks.

Two related parameters:

BACKUP_TAPE_IO_SLAVES specifies whether I/O server processes (also called slaves) are used by Recovery Manager to back up, copy, or restore data to tape.

DBWR_IO_SLAVES specifies the number of I/O server processes used by the DBW0 process.

DBW0 always writes the dirty blocks from the buffer cache to disk. DBWR_IO_SLAVES defaults to 0; if it is set to a nonzero value, LGWR and ARCn also use their own I/O slave processes, with LGWR and ARCn each limited to at most four I/O slaves.

DBWR I/O slave processes are named I1nn, and LGWR I/O slave processes are named I2nn, where nn is a number.
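A sketch of configuring both parameters (DBWR_IO_SLAVES is static, so a restart is needed for it to take effect):

```sql
-- Emulate asynchronous writes with DBWR I/O slaves (static; requires restart)
ALTER SYSTEM SET DBWR_IO_SLAVES = 4 SCOPE = SPFILE;

-- Let RMAN use a tape I/O slave (dynamic, applies to new sessions)
ALTER SYSTEM SET BACKUP_TAPE_IO_SLAVES = TRUE DEFERRED;
```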

  2. Parallel Execution (PX) Server Processes

In parallel execution, multiple processes work together simultaneously to run a single SQL statement. By dividing the work among multiple processes, Oracle Database can run the statement more quickly.

Parallel execution reduces response time for data-intensive operations on large databases such as data warehouses. Symmetric multiprocessing (SMP) and clustered systems gain the largest performance benefits from parallel execution because statement processing can be split up among multiple CPUs. Parallel execution can also benefit certain types of OLTP and hybrid systems.

In Oracle RAC, parallel execution is normally confined to the instances running the service used by the connection, though certain parallel operations are not subject to this restriction.

In Oracle RAC systems, the service placement of a specific service controls parallel execution. Specifically, parallel processes run on the nodes on which the service is configured. By default, Oracle Database runs parallel processes only on an instance that offers the service used to connect to the database. This does not affect other parallel operations such as parallel recovery or the processing of GV$ queries.

Query Coordinator

The server process already executing the statement acts as the query coordinator; no new process is created for this role.

In parallel execution, the server process acts as the query coordinator (also called the parallel execution coordinator).

The query coordinator is responsible for the following:

  1. Parsing the query
  2. Allocating and controlling the parallel execution server processes
  3. Sending output to the user

The SQL query is split into parallel pieces when the execution plan is generated.

Given a query plan for a query, the coordinator breaks down each operator in a SQL query into parallel pieces, runs them in the order specified in the query, and integrates the partial results produced by the parallel execution servers executing the operators.

The number of parallel execution servers assigned to a single operation is the degree of parallelism for an operation. Multiple operations within the same SQL statement all have the same degree of parallelism.
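A degree of parallelism can be requested per statement with a hint, or set as a table default; a sketch (employees is the sample table used later in this section):

```sql
-- Request a DOP of 4 for this query only
SELECT /*+ PARALLEL(e, 4) */ *
FROM   employees e
ORDER  BY last_name;

-- Or make DOP 4 the default for the table
ALTER TABLE employees PARALLEL 4;
```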

Producers and Consumers

Parallel execution servers are divided into producers and consumers. The producers are responsible for processing their data and then distributing it to the consumers that need it.

The database can perform the distribution using a variety of techniques. Two common techniques are a broadcast and a hash. 

In a broadcast, each producer sends the rows to all consumers. In a hash, the database computes a hash function on a set of keys and makes each consumer responsible for a subset of hash values.

Figure 15-6 represents the interplay between producers and consumers in the parallel execution of the following statement: SELECT * FROM employees ORDER BY last_name;

The execution plan implements a full scan of the employees table. The scan is followed by a sort of the retrieved rows. All of the producer processes involved in the scan operation send rows to the appropriate consumer process performing the sort.

Granules

In parallel execution, a table is divided dynamically into load units. Each unit, called a granule, is the smallest unit of work when accessing data.

A block-based granule is a range of data blocks of the table read by a single parallel execution server (also called a PX server), which uses Pnnn as a name format. To obtain an even distribution of work among parallel server processes, the number of granules is always much higher than the requested DOP.


The database maps granules to parallel execution servers at execution time. When a parallel execution server finishes reading the rows corresponding to a granule, and when granules remain, it obtains another granule from the query coordinator. This operation continues until the table has been read. The execution servers send results back to the coordinator, which assembles the pieces into the desired full table scan.
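While such a statement runs, the PX servers and their coordinator can be observed through the dynamic performance views; a sketch:

```sql
-- Active PX server processes (named Pnnn) and their status
SELECT server_name, status FROM v$px_process;

-- Per-session parallel usage: requested vs. granted degree of parallelism
SELECT qcsid, degree, req_degree FROM v$px_session;
```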
