Operating Systems – Process Synchronization

Cooperating process: A process that can affect or be affected by other processes executing in the system; it can either 1) directly share a logical address space (e.g. threads) or 2) be allowed to share data only through files or messages.

Race condition: A situation where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place.

 

 

The critical-section problem

/* General structure of a typical process Pi */
do {
    [entry section]
    [critical section]
    [exit section]
    [remainder section]
} while (true);

Critical section – A segment of code that each process has, in which the process may be changing common variables, updating a table, writing a file, and so on.

Entry section – The section of code implementing the (necessary) request for permission to enter its critical section.

Exit section – The section of code that follows the critical section, in which the process gives up its permission to be in the critical section.

Remainder section – The remaining code.

Critical-section problem: To design a protocol that the processes in a system can use to cooperate so that no 2 processes are executing in their critical sections at the same time.

 

A solution to the critical-section problem must satisfy 3 requirements:

1) Mutual exclusion – If a process is executing in its critical section, then no other processes can be executing in their critical sections.

2) Progress – If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

3) Bounded waiting – There exists a bound/limit on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

 

2 general approaches used to handle critical sections in OSs:

1) Preemptive kernels: allow a process to be preempted while it is running in kernel mode. (More responsive and better suited to real-time programming, but must be carefully designed so that shared kernel data structures do not suffer race conditions.)

2) Nonpreemptive kernels: a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. (Essentially free from race conditions on kernel data structures.)

 

1. Peterson’s solution

Peterson’s solution: A classic software-based solution to the critical-section problem. There are no guarantees that it will work correctly on modern computer architectures.

Peterson’s solution is restricted to 2 processes, P0 and P1.

Pi – one of the processes; Pj – the other process (j = 1 - i);

turn – indicates whose turn it is to enter its critical section;

flag[2] – indicates if a process is ready to enter its critical section; if flag[i] = true, then Pi is ready.

turn and flag[2] are shared between the 2 processes.

/* The structure of process Pi */
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
    [critical section]
    flag[i] = false;
    [remainder section]
} while (true);
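
Below is a minimal runnable sketch of Peterson's solution for two POSIX threads. The file name, the shared counter, and the use of C11 sequentially consistent atomics for flag and turn are assumptions added here so the compiler and hardware cannot reorder the entry-section stores, which is exactly the guarantee plain variables lack on modern architectures.

/* peterson.c - a sketch; build e.g. with: gcc -std=c11 -pthread peterson.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_bool flag[2];               /* flag[i] = true: Pi is ready            */
static atomic_int  turn;                  /* whose turn it is to enter the CS       */
static int counter = 0;                   /* shared data protected by the protocol  */

static void *worker(void *arg) {
    int i = (int)(long)arg, j = 1 - i;
    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);     /* entry section */
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;                             /* busy wait */
        counter++;                        /* critical section */
        atomic_store(&flag[i], false);    /* exit section */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}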

 

2. Synchronization hardware (hardware-based solutions)

Locking: Protecting critical regions through the use of locks.

In a single-processor environment, the critical-section problem could be solved simply by preventing interrupts from occurring while a shared variable is being modified. (This is the approach taken by nonpreemptive kernels.)

In a multiprocessor environment, disabling interrupts is not as feasible, since the disable message must be passed to all the processors and entry into each critical section would be delayed. Many modern computer systems instead provide special hardware instructions (2 types – test_and_set() and compare_and_swap()) that can be executed atomically to solve the critical-section problem.

Atomically: to be executed as one uninterruptible unit.

1) test_and_set() instruction

boolean test_and_set(boolean *target) {
    boolean rv = *target;
    *target = true;
    return rv;
}
/* The structure of process Pi */
do {
    while (test_and_set(&lock))
        ; /* do nothing */
    [critical section]
    lock = false;
    [remainder section]
} while (true);

variable lock is initialized to false.
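
For comparison, the same spinlock can be sketched with C11's atomic_flag, whose atomic_flag_test_and_set() behaves like the test_and_set() instruction above (the function names enter_critical()/leave_critical() are illustrative, not part of the original algorithm):

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* starts clear, i.e. "false" */

void enter_critical(void) {
    while (atomic_flag_test_and_set(&lock))   /* returns old value, sets flag to true */
        ; /* do nothing (busy wait) */
}

void leave_critical(void) {
    atomic_flag_clear(&lock);                 /* lock = false */
}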

2) compare_and_swap() instruction

int compare_and_swap (int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}
/* The structure of process Pi */
do {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ; /* do nothing */
    [critical section]
    lock = 0;
    [remainder section]
} while (true);

variable lock is initialized to 0.
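
A corresponding sketch with C11's atomic_compare_exchange_strong(), the standard-library counterpart of compare_and_swap(), might look like this (again, the helper names are illustrative):

#include <stdatomic.h>

static atomic_int lock = 0;                       /* 0 = free, 1 = held */

void enter_critical(void) {
    int expected = 0;
    /* try to swap 0 -> 1; on failure 'expected' is overwritten with the
       current value, so it must be reset before the next attempt */
    while (!atomic_compare_exchange_strong(&lock, &expected, 1))
        expected = 0; /* busy wait */
}

void leave_critical(void) {
    atomic_store(&lock, 0);                       /* lock = 0 */
}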

Algorithms 1) and 2) above satisfy the mutual-exclusion requirement, but not the bounded-waiting requirement.

3) An algorithm using test_and_set() that satisfies all the critical-section requirements:

do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);
    waiting[i] = false;
    [critical section]
    j = (i+1)%n;
    while ((j != i) && !waiting[j])
        j = (j+1)%n;
    if (j == i)
        lock = false;
    else
        waiting[j] = false;
    [remainder section]
} while (true);

(waiting[n] and lock are initialized to false.)
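
A runnable sketch of this bounded-waiting algorithm, using atomic_flag_test_and_set() in place of the hardware test_and_set() and N worker threads sharing a counter (both of which are assumptions made for the demo), could look like the following:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define N 4

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* shared lock, initially clear */
static atomic_bool waiting[N];                /* initially false */
static int counter = 0;                       /* shared data */

static void *worker(void *arg) {
    int i = (int)(long)arg;
    for (int k = 0; k < 10000; k++) {
        /* entry section */
        atomic_store(&waiting[i], true);
        bool key = true;
        while (atomic_load(&waiting[i]) && key)
            key = atomic_flag_test_and_set(&lock);
        atomic_store(&waiting[i], false);

        counter++;                            /* critical section */

        /* exit section: pass the lock to the next waiting thread, if any */
        int j = (i + 1) % N;
        while (j != i && !atomic_load(&waiting[j]))
            j = (j + 1) % N;
        if (j == i)
            atomic_flag_clear(&lock);         /* no one waiting: release the lock */
        else
            atomic_store(&waiting[j], false); /* hand permission directly to Pj */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    for (long i = 0; i < N; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    printf("counter = %d (expected %d)\n", counter, N * 10000);
    return 0;
}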

Cons: these hardware-based solutions are complicated and generally inaccessible to application programmers.

 

3. Mutex Locks (the simplest software synchronization tool)

Mutex lock: a.k.a. spinlock (when acquire() busy-waits, as below); used to protect critical regions and thus prevent race conditions. A process must acquire the lock (call acquire()) before entering a critical section, and release the lock (call release()) when it exits the critical section.

Variable available indicates if the lock is available or not.

/* acquire() and release() must themselves be performed atomically */
acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}
release() {
    available = true;
}
/* The structure of process Pi */
do {
    [acquire lock]
    [critical section]
    [release lock]
    [remainder section]
} while (true);

Pros: no context switch is required when a process must wait on a lock. (Thus useful when locks are expected to be held for short times)

Cons: requires busy waiting.

busy waiting: While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire(). This wastes CPU cycles that some other process might be able to use productively, which is a real problem in a multiprogramming system where a single CPU is shared among many processes.
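
In practice an application programmer would normally use a library mutex rather than hand-rolling acquire()/release(). A minimal POSIX-threads sketch (the counter and the thread bodies are illustrative) is shown below; note that pthread_mutex_lock() typically blocks the calling thread rather than spinning, so it avoids the busy waiting described above.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int counter = 0;                   /* shared data */

static void *worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        pthread_mutex_lock(&m);           /* acquire lock */
        counter++;                        /* critical section */
        pthread_mutex_unlock(&m);         /* release lock */
        /* remainder section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d (expected 200000)\n", counter);
    return 0;
}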

 

 

4. Semaphores (a more robust tool that behaves similarly to a mutex lock, but also provides more sophisticated ways for processes to synchronize their activities)

 

Semaphore [S]: an integer variable that, apart from initialization, is accessed only through 2 standard atomic operations: wait() [P] and signal() [V].

 

/* P - proberen, “to test” */
wait (S) {
    while  (S <= 0)
        ;
    S--;
}
/* V – verhogen, “to increment” */
signal (S) {
    S++;
}

All modifications to the integer value of the semaphore in wait() and signal() must be executed indivisibly.

 

Semaphore Usage

Binary semaphore: ranges only between 0 and 1. (behaves like a mutex lock and can be used in its place on systems that do not provide mutex locks)

Counting semaphore: ranges over an unrestricted domain, can be used to – 

1) control access to a given resource consisting of a finite number of instances.

Initialize the semaphore to the number of available resources -> each process performs a wait() when wishing to use a resource & performs a signal() when releasing a resource.

2) solve synchronization problems.

Consider 2 concurrently running processes P1 (with statement S1) and P2 (with statement S2):

If S2 needs to be executed only after S1 has completed ->

Let P1 and P2 share a common semaphore synch (initialized to 0), then implement:

/* The structure of P1 */
S1;
signal(synch);
/* The structure of P2 */
wait(synch);
S2;
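
A POSIX-semaphore sketch of this ordering use is shown below; S1 and S2 are stood in for by printf() calls, and running P1 and P2 as threads of one process is an assumption made so the semaphore can live in shared memory without further setup.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t synch;                       /* counting semaphore, initial value 0 */

static void *p1(void *arg) {
    (void)arg;
    printf("S1\n");                       /* S1 */
    sem_post(&synch);                     /* signal(synch) */
    return NULL;
}

static void *p2(void *arg) {
    (void)arg;
    sem_wait(&synch);                     /* wait(synch): blocks until P1 signals */
    printf("S2\n");                       /* S2 runs only after S1 has completed */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);               /* shared between threads, value 0 */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}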

 

Semaphore implementation (modified to overcome busy waiting)

/* The structure of a semaphore */
typedef struct {
    int value;
    struct process *list;
} semaphore;

The block() and wakeup() operations are system calls provided by the OS.

block() places the calling process into a waiting queue associated with the semaphore, switches the state of the process to ‘waiting’, and then transfers control to the CPU scheduler.

wakeup() switches the state of a blocked process from ‘waiting’ to ‘ready’ and places the process in the ready queue.

When a process executes wait() and the resulting semaphore value is negative, it blocks itself rather than busy waiting.

 

wait (semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

 

A blocked process waiting on a semaphore should be restarted when some other process executes a signal() operation.

 

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
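
At user level, one way to approximate this blocking semaphore is with a pthread mutex standing in for the required atomicity and a condition variable standing in for block()/wakeup(). This is only a sketch of the idea, not the kernel implementation, and unlike the structure above its value never goes negative, because the while loop (needed to tolerate spurious wakeups) re-checks the value before decrementing.

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t mtx;    /* makes wait()/signal() atomic with respect to each other */
    pthread_cond_t  cond;   /* stands in for the list of blocked processes */
} semaphore;

void semaphore_wait(semaphore *s) {
    pthread_mutex_lock(&s->mtx);
    while (s->value <= 0)                       /* nothing available: block */
        pthread_cond_wait(&s->cond, &s->mtx);   /* block(): sleep until signaled */
    s->value--;
    pthread_mutex_unlock(&s->mtx);
}

void semaphore_signal(semaphore *s) {
    pthread_mutex_lock(&s->mtx);
    s->value++;
    pthread_cond_signal(&s->cond);              /* wakeup(P): make one waiter ready */
    pthread_mutex_unlock(&s->mtx);
}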

 

Semaphore operations must be executed atomically: no 2 processes can execute wait() and signal() on the same semaphore at the same time.

 

Deadlocks & starvation

A set of processes is said to be in a deadlocked state when every process in the set is waiting for an event (e.g. the execution of a signal()) that can be caused only by another process in the set.
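
A classic illustration (the standard textbook example; S and Q are two semaphores, each initialized to 1): if P0 and P1 interleave as shown, P0 holds S and waits for Q while P1 holds Q and waits for S, so each waits for a signal() that only the other, itself blocked, can execute.

/* P0 */             /* P1 */
wait(S);             wait(Q);
wait(Q);             wait(S);
  ...                  ...
signal(S);           signal(Q);
signal(Q);           signal(S);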

Indefinite blocking/starvation: A situation in which processes wait indefinitely within the semaphore; it may occur, for example, if processes are removed from the list in LIFO order.

 

Priority inversion

Priority inversion: (A scheduling problem that occurs only in systems with more than 2 priorities.) Given 3 processes L, M, H with priorities L < M < H, H requires resource R that is currently being accessed by L. Ordinarily, H would simply wait for L to finish using R. However, if M becomes runnable and preempts L, M indirectly lengthens how long H must wait for L to relinquish R.

Typical solution –> priority inheritance protocol: All processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question. When they are finished, their priorities revert to their original values.

(Thus M cannot preempt L’s execution; when L finishes using R, H runs next.)
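
As a concrete illustration, POSIX systems that define _POSIX_THREAD_PRIO_INHERIT let a mutex be created with the priority-inheritance protocol; the sketch below (error handling omitted, names illustrative) shows the attribute call involved.

#include <pthread.h>

pthread_mutex_t res_lock;                 /* protects resource R */

void init_resource_lock(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* a low-priority thread holding res_lock temporarily inherits the
       priority of the highest-priority thread blocked on the lock */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    pthread_mutex_init(&res_lock, &attr);
    pthread_mutexattr_destroy(&attr);
}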

 

Reposted from: https://www.cnblogs.com/RDaneelOlivaw/p/9753330.html
