CPT104 Operating Systems Notes (L5: Scheduling II)

Thread Scheduling

Contention Scope

The scope in which threads compete for the use of physical CPUs.
Two possible contention scopes:
Process Contention Scope (PCS), a.k.a. local contention scope
System Contention Scope (SCS), a.k.a. global contention scope
Process Contention Scope (unbound threads) -
competition for the CPU takes place among threads belonging to the same process. Available in the many-to-one model.
System Contention Scope (bound threads) -
competition for the CPU takes place among all threads in the system. Available in the one-to-one model.
In a many-to-many thread model, user threads can have either system or process contention scope.

Multiple-Processor Scheduling

Structure of Multi-Processor OS

Different inter-process communication and synchronization techniques are required
In multiprocessing systems, all processors share memory.
Three structures for a multi-processor OS:
Separate Kernel Configuration
Master–Slave Configuration (Asymmetric Configuration)
Symmetric Configuration

Separate Kernel Configuration

Each processor has its own I/O devices and file system, and there is very little interdependence among the processors.
A process started on a processor runs to completion on that processor only.
The disadvantage of this organization is that parallel execution is not possible. A single task cannot be divided into sub-tasks and distributed among several processors, so the advantage of computational speed-up is lost.

 

Master-Slave Configuration

One processor acts as the master; the other processors in the system are slaves.
The master processor runs the OS and user processes, while the slave processors run user processes only.
Process scheduling is performed by the master processor.
Parallel processing is possible as a task can be broken down into sub-tasks and assigned to various processors

 

Symmetric Configuration (SMP)

Any processor can access any device and can handle any interrupt generated on it.
Mutual exclusion must be enforced so that only one processor is allowed to execute a given part of the OS at one time.
To limit such contention, many parts of the OS are made independent of each other, such as the scheduler and the file-system code, so they can run concurrently on different processors.

 Approaches to Symmetric Configuration

Processor Affinity

Processor affinity is the ability to direct a specific task, or process, to a specified core.
The idea behind it: if a process is always directed to the same core, it is likely to run more efficiently because of cache re-use.
Soft affinity - the OS tries to keep a process running on the same processor but does not guarantee it.
Hard affinity - allows a process to specify a subset of processors on which it may run.
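On Linux, hard affinity is exposed to Python through `os.sched_setaffinity`; a minimal sketch (Linux-specific, and the choice of core 0 is arbitrary):

```python
import os

# Hard affinity on Linux: restrict the calling process (pid 0) to core 0.
allowed = os.sched_getaffinity(0)   # the CPUs this process may currently use
os.sched_setaffinity(0, {0})        # pin the process to core 0 only
print(os.sched_getaffinity(0))      # → {0}
os.sched_setaffinity(0, allowed)    # restore the original mask
```

The shell utility `taskset(1)` exposes the same system call from the command line.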

Load Balancing

When each processor has a separate ready queue, an imbalance can arise in the number of jobs in the queues.
Push migration = a system process periodically (e.g. every 200 ms) checks the ready queues and, if need be, moves (pushes) processes to different queues.
Pull migration = when a scheduler finds its own ready queue empty, it raids another processor's run queue and transfers a process onto its own queue so it has something to run (it pulls a waiting task from a busy processor).
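The two migration policies can be sketched with toy per-CPU ready queues (all names and thresholds here are made up for illustration):

```python
from collections import deque

# Toy per-CPU ready queues: CPU 0 is overloaded, CPU 1 is idle.
queues = {0: deque(["A", "B", "C", "D"]), 1: deque()}

def push_migration(queues):
    """Periodic balancer: move work from the longest queue to the shortest."""
    busiest = max(queues, key=lambda c: len(queues[c]))
    idlest = min(queues, key=lambda c: len(queues[c]))
    while len(queues[busiest]) - len(queues[idlest]) > 1:
        queues[idlest].append(queues[busiest].popleft())

def pull_migration(queues, cpu):
    """An idle CPU steals a waiting task from the busiest peer."""
    if not queues[cpu]:
        donor = max(queues, key=lambda c: len(queues[c]))
        if queues[donor]:
            queues[cpu].append(queues[donor].popleft())

push_migration(queues)
print(sorted(len(q) for q in queues.values()))  # → [2, 2]
```

Real schedulers (e.g. Linux CFS) combine both mechanisms, as the notes describe.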

Multicore Processors

A core executes one thread at a time.
Memory stall = a single-core processor spends time waiting for data to become available (the process slows or stops).
Multicore processor: multiple processor cores are put onto a single chip so that multiple kernel threads can run concurrently.

Hyperthreading

Intel's implementation of simultaneous multithreading (SMT): each physical processor is divided into two logical (virtual) processors that the operating system treats as if they were physical cores.

Hyperthreading allows multiple threads to run on each core of the CPU.

 Techniques for multithreading:

Coarse-grained multithreading - switches between threads only when one thread blocks (a long-latency event such as a memory stall occurs).
Fine-grained multithreading - instruction issue alternates among threads, typically following a round-robin policy.

Real-Time CPU Scheduling

Characteristics

A real-time system is one in which time plays an essential role

e.g. patient monitoring in a hospital intensive-care unit, aircraft autopilots, radar systems, robot control in automated factories, etc.
Hard real-time system – is one that must meet its deadline ; otherwise, it will cause unacceptable damage or a fatal error to the system
Soft real-time system – an associated deadline that is desirable but not necessary ; it still makes sense to schedule and complete the task even if it has passed its deadline.
Aperiodic tasks (random arrivals) have irregular arrival times and either soft or hard deadlines.
Periodic tasks ( repeated tasks ) , the requirement may be stated as “once per period T” or “exactly T units apart.”

Issues

The major challenge for an RTOS is to schedule the real-time tasks .
Two types of latencies may delay the processing (performance):

1. Interrupt latency

a.k.a. interrupt response time: the time elapsed between the last instruction executed in the interrupted task and the start of the interrupt handler.

2. Dispatch latency

the time it takes the dispatcher to stop one process and start another. The way to keep dispatch latency low is to provide a preemptive kernel.

 Real-Time CPU Scheduling

The RTOS schedules all tasks according to the deadline information and ensures that all deadlines are met.

Static scheduling

A schedule is prepared before execution of the application begins .

Priority-based scheduling

The priority assigned to the tasks depends on how quickly a task has to respond to the event.

Dynamic scheduling

There is complete knowledge of the task set, but new arrivals are not known. Therefore, the schedule changes over time.

Timing constraints

The timing constraints are in the form of period and deadline.
The period is the amount of time between iterations of a regularly repeated task. Such repeated tasks are called periodic tasks .
The deadline is a constraint of the maximum time limit within which the operation must be complete.

scheduling criteria

The timing constraints of the system must be met.
The cost of context switches, while preempting, must be reduced.
Scheduling in real-time systems may be performed preemptively, non-preemptively, statically, and dynamically.

Characteristics of processes

Processes are considered periodic (repeated tasks).

Once a periodic process has acquired the CPU, it has:
- fixed processing time t,
- deadline d by which it must be serviced by the CPU, and
- period p.
0 ≤ t ≤ d ≤ p
The rate of a periodic task is 1/p.
A process may have to announce its deadline requirements to the scheduler.

 

Rate Monotonic Scheduling (RMS)

It is a static priority-based preemptive scheduling algorithm.

The task with the shortest period always preempts the executing task:
the shortest period = the highest priority.
The CPU utilization of a process Pi is ti/pi, where ti is the execution time and pi is the period of the process.

The deadline for each process requires that it complete its CPU burst by the start of its next period.

e.g.
1) P1 has a higher priority than P2.
P1: p1 = 50, t1 = 20; the CPU utilization of P1 = 20/50 = 0.4
P2: p2 = 100, t2 = 35; the CPU utilization of P2 = 35/100 = 0.35
Total CPU utilization = 75%; all deadlines are met.

2) P2 has a higher priority than P1.

P1: p1 = 50, t1 = 20
P2: p2 = 100, t2 = 35
P2 runs first for 35 time units, so P1 cannot finish until time 55 and misses its deadline at time 50 (55 > 50).

3) A set of processes that cannot be scheduled using the RM algorithm

P1: p1 = 50, t1 = 25.
P2: p2 = 80, t2 = 35.
Process P1 has the higher priority (shorter period).
The total CPU utilization: (25/50) + (35/80) ≈ 0.94. Although utilization is below 1, P2 still misses its deadline at time 80: RMS only guarantees schedulability up to the bound N(2^(1/N) − 1), which for two processes is about 0.83.
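The feasibility claims in examples 1 and 3 can be checked with a small tick-based simulator (a sketch; the function name and unit-tick model are my own simplifications):

```python
def rms_schedule(tasks, horizon):
    """Tick-based rate-monotonic simulation.

    tasks: list of (period, burst); the deadline is assumed to equal the period.
    Returns True if no deadline is missed within `horizon` ticks
    (choose horizon = the hyperperiod, i.e. the lcm of the periods).
    """
    remaining = [burst for _, burst in tasks]
    for now in range(1, horizon + 1):
        # run the highest-priority ready task (shortest period) for one tick
        ready = [i for i, r in enumerate(remaining) if r > 0]
        if ready:
            i = min(ready, key=lambda i: tasks[i][0])
            remaining[i] -= 1
        # at each period boundary, check the deadline and release a new job
        for i, (p, burst) in enumerate(tasks):
            if now % p == 0:
                if remaining[i] > 0:
                    return False          # job missed its deadline
                remaining[i] = burst
    return True

print(rms_schedule([(50, 20), (100, 35)], 100))  # → True  (example 1)
print(rms_schedule([(50, 25), (80, 35)], 400))   # → False (example 3)
```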

 

Earliest-Deadline-First Scheduling

The scheduling criterion is based on the deadlines of the processes.
When a process becomes runnable, it must announce its deadline requirements to the system.
(The processes/tasks need not be periodic.)
Priorities are assigned dynamically according to deadline (deadlines are computed at run time, so priorities are not fixed):
the earlier the deadline = the higher the priority.
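The same tick-based idea can be sketched for EDF (assuming periodic tasks with the deadline at the end of each period; the tick model is a simplification):

```python
def edf_schedule(tasks, horizon):
    """Tick-based earliest-deadline-first simulation.

    tasks: list of (period, burst); deadline = end of the current period.
    Returns True if no deadline is missed within `horizon` ticks.
    """
    remaining = [burst for _, burst in tasks]
    deadline = [p for p, _ in tasks]      # absolute deadline of the current job
    for now in range(1, horizon + 1):
        # run the ready task whose absolute deadline is earliest
        ready = [i for i, r in enumerate(remaining) if r > 0]
        if ready:
            i = min(ready, key=lambda i: deadline[i])
            remaining[i] -= 1
        # at a deadline, check completion and release the next job
        for i, (p, burst) in enumerate(tasks):
            if now == deadline[i]:
                if remaining[i] > 0:
                    return False          # missed a deadline
                remaining[i] = burst
                deadline[i] += p

    return True

# The task set RMS could not handle (utilization ≈ 0.94) meets all
# deadlines under EDF, since EDF succeeds whenever utilization ≤ 1:
print(edf_schedule([(50, 25), (80, 35)], 400))  # → True
```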

Proportional Share Scheduling

T shares are allocated among all processes in the system.
An application receives N shares, where N < T.
This ensures that each application receives N/T of the total processor time.
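A minimal sketch with hypothetical share counts, including the admission-control check such schedulers typically apply (a new request is granted only while the total stays within T):

```python
# Proportional share with made-up numbers: T = 100 shares in total.
T = 100
shares = {"A": 50, "B": 15, "C": 20}      # N shares per application (N < T)

# Each application's guaranteed fraction of processor time is N/T.
fraction = {app: n / T for app, n in shares.items()}
print(fraction["A"])  # → 0.5  (application A gets half the CPU time)

def admit(request):
    """Admission control: grant a request only if enough shares remain."""
    return sum(shares.values()) + request <= T

print(admit(15))  # → True   (85 shares used, 15 free)
print(admit(30))  # → False  (would exceed T)
```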

Algorithm Evaluation

How do we select a CPU-scheduling algorithm for a particular system?

1. Deterministic Modeling

Takes a particular predetermined workload and defines the performance of each algorithm for that workload.
Which algorithm provides the minimum average waiting time?

Consider 5 processes arriving at time 0:
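Since the workload table is not reproduced here, the following sketch uses hypothetical burst times for five processes arriving at time 0 to compare two algorithms under deterministic modeling:

```python
# Hypothetical CPU-burst times (ms) for 5 processes, all arriving at time 0.
bursts = [10, 29, 3, 7, 12]

def avg_wait_fcfs(bursts):
    """FCFS: each process waits for the sum of the bursts before it."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed
        elapsed += b
    return wait / len(bursts)

def avg_wait_sjf(bursts):
    """Non-preemptive SJF: the same computation on the bursts sorted
    shortest-first."""
    return avg_wait_fcfs(sorted(bursts))

print(avg_wait_fcfs(bursts))  # → 28.0 ms
print(avg_wait_sjf(bursts))   # → 13.0 ms
```

For this fixed workload, deterministic modeling shows SJF gives less than half the average waiting time of FCFS, which is the kind of exact comparison the method is for.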

2.Queueing Models

If we define a queue for the CPU and a queue for each I/O device, we can test the various scheduling algorithms using queueing theory

Little's formula – processes leaving the queue must equal processes arriving; thus:

n = λ x W
n = average queue length
W = average waiting time in queue
λ = average arrival rate into queue

e.g. if on average 7 processes arrive per second, and there are normally 14 processes in the queue, then the average wait time per process = 2 seconds (n = 14, λ = 7)
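The example above, worked in code (given any two of the three quantities, Little's formula yields the third):

```python
# Little's formula: n = λ × W
lam = 7          # λ, average arrival rate (processes per second)
n = 14           # average number of processes in the queue
W = n / lam      # average waiting time per process
print(W)         # → 2.0 (seconds)

# Conversely, with W = 2 s and λ = 7/s the queue length must be 14:
print(lam * 2)   # → 14
```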

3. Simulations

Trace tapes: data collected from real processes on real machines, fed into the simulation.

 
