
ShanghaiTech CS130 | Lecture Note | Scheduling

Note: These notes organize the basic concepts of our school's CS130 course. Since the course is taught and examined in English, the notes are mainly in English. As a digest, only minimal explanation is provided, for concise and quick reference.


Contents

1 General Ideas

1.1 Scheduling Objectives

1.2 Program Behavior Issues

1.3 Basic Concepts (Dinosaur Book 5.1)

1.4 CPU Scheduler (Dinosaur Book 5.1.2)

1.5 Dispatcher (Dinosaur Book 5.1.4)

2 Scheduling Algorithms on a Uniprocessor

2.1 Scheduling Criteria (Dinosaur Book 5.2)

2.2 Optimization Criteria (Dinosaur Book 5.2)

2.3 Scheduling Algorithms (Dinosaur Book 5.3)

2.4 Shortest-Job-First Scheduling (SJF, Dinosaur Book 5.3.2)

2.5 Priority Scheduling (Dinosaur Book 5.3.4)

2.6 Round Robin (RR) Scheduling (Dinosaur Book 5.3.3)

2.7 Multilevel Queue Scheduling (Dinosaur Book 5.3.5, PPT 46)

3 General Scheduling Policy

3.1 Multiple-Processor Scheduling (Dinosaur Book 5.5)

3.2 Real-Time Scheduling (Dinosaur Book 5.6)

3.3 Algorithm Evaluation (Dinosaur Book 5.8)

4 Case Study: Linux Scheduler (Dinosaur Book 5.7.1)

5 Final word

Reference


1 General Ideas

1.1 Scheduling Objectives

  • Enforcement of fairness
  • Enforcement of priorities
  • Make best use of available system resources
  • Give preference to processes holding key resources
  • Give preference to processes exhibiting good behavior
  • Degrade gracefully under heavy loads

1.2 Program Behavior Issues

  • I/O-boundedness
    • an I/O-bound process spends most of its time waiting for I/O, with only brief periods of computation.
  • CPU-boundedness
    • a CPU-bound process spends the majority of its time simply using the CPU (doing calculation).
  • Urgency and priorities
  • Frequency of preemption
  • Process execution time
  • Time sharing

1.3 Basic Concepts (Dinosaur Book 5.1)

1. Maximum CPU utilization is obtained with multiprogramming

2.CPU-I/O Burst Cycle

  • Cycle: process execution consists of a cycle of CPU execution and I/O wait; processes alternate between these two states
  • CPU burst & I/O burst: the two kinds of periods, one during which the CPU works and one during which an I/O device works

(Figure: illustration of alternating CPU and I/O bursts)

Statistical properties of CPU bursts: long bursts are few, short bursts are many; an I/O-bound program has many short CPU bursts, while a CPU-bound program may have some long ones.

(Figure: distribution of CPU burst durations)

3. Levels of scheduling

1.4 CPU Scheduler (Dinosaur Book 5.1.2)

1. CPU scheduler: selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.

CPU scheduling decisions may take place when a process: (1) switches from running to waiting (e.g., waits for I/O in a child process) (2) switches from running to ready (3) switches from waiting to ready (4) terminates

2. Non-preemptive scheduling: the process keeps the CPU until case (1) or case (4) occurs.

3. Preemptive scheduling: the process can be interrupted (cases (2) and (3)); this requires coordinating access to shared data.

1.5 Dispatcher (Dinosaur Book 5.1.4)

1. dispatcher: the dispatcher module gives control of the CPU to the process selected by the short-term scheduler. This involves:

  • Switching context
  • Switching to user mode
  • Jumping to the proper location in the user program to restart that program

2. dispatch latency:

  • Time it takes to stop one process and start another
  • must be fast

2 Scheduling Algorithms on a Uniprocessor

2.1 Scheduling Criteria (Dinosaur Book 5.2)

1. CPU utilization: keep the CPU as busy as possible

2. Throughput: # of completed processes per time unit

3. Turnaround time: the time from submission of a process to its completion = waiting to get into memory + waiting in the ready queue + executing on the CPU + doing I/O

4. Waiting time: the time spent waiting in the ready queue

5. Response time: the time from when a request is submitted until the first response is produced

Insight (Dinosaur Book p. 205): in an interactive system, turnaround time is not a good metric, because a process may keep producing output as it runs; it is more appropriate to measure the time to the first output rather than waiting for the process to finish everything.

2.2 Optimization Criteria (Dinosaur Book 5.2)

1. Maximize CPU utilization and throughput; minimize turnaround time, waiting time, and response time

2. Throughput vs. response time: related but not identical, since minimizing response time requires more context switching, which hurts throughput

3. Fairness vs. response time: being less fair can yield better average response time

2.3 Scheduling Algorithms (Dinosaur Book 5.3)

1. First-Come, First-Served (FCFS) scheduling:

policy: the process that requests the CPU first is allocated the CPU first

implementation: FIFO queue

Pros: simple.

Cons: a short job can get stuck behind a big one.

Different arrival orders lead to different average waiting times; a Gantt chart visualizes this. The convoy effect: all the other processes wait for one big process to get off the CPU.
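The effect of arrival order can be checked with a few lines of Python, a minimal sketch assuming all processes arrive at time 0 (the burst lengths 24, 3, 3 follow the classic textbook example):

```python
def fcfs_waiting_times(bursts):
    # Under FCFS each process waits for the total burst time
    # of every process ahead of it in the queue.
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)
        elapsed += burst
    return waits

print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27] -> average 17
print(fcfs_waiting_times([3, 3, 24]))  # [0, 3, 6]   -> average 3
```

Putting the long job first makes the average waiting time more than five times worse, which is exactly the convoy effect.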

 

2.4 Shortest-Job-First Scheduling (SJF, Dinosaur Book 5.3.2)

policy: assign CPU to the process that has the smallest next CPU burst.

Limitation: the length of the next CPU burst cannot be known in advance, so exact SJF cannot be implemented for short-term CPU scheduling; instead we approximate it by predicting the next CPU burst, expecting it to be similar in length to the previous ones.

non-preemptive/preemptive: whether a newly arrived process can interrupt the running one; the preemptive version is often called shortest-remaining-time-first (SRTF) scheduling.

SRTF cons: ① starvation: with many small jobs, large jobs may never run (unfair). ② the length of future CPU bursts is unknown.

SRTF pros: minimal average waiting time
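The approximation mentioned above is usually done with an exponential average of measured burst lengths, tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n. A quick sketch (the initial guess tau_0 = 10 and alpha = 0.5 are illustrative values):

```python
def predict_next_burst(tau, t, alpha=0.5):
    # Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n,
    # where t is the most recent measured burst and tau the old prediction.
    return alpha * t + (1 - alpha) * tau

tau = 10.0              # initial guess tau_0
for t in [6, 4, 6, 4]:  # observed CPU bursts t_0..t_3
    tau = predict_next_burst(tau, t)
print(tau)              # 5.0: the prediction converges toward recent bursts
```

A larger alpha weights recent history more heavily; alpha = 0 ignores measurements entirely, while alpha = 1 uses only the last burst.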

2.5 Priority Scheduling (Dinosaur Book 5.3.4)

policy: a priority value (an integer) is associated with each process (based on: cost to the user, importance to the user, aging, %CPU time used in the last XX hours). The CPU is allocated to the process with the highest priority (preemptive or non-preemptive).

Problem: starvation; low-priority processes may never execute.
Solution: aging; as time progresses, increase the priority of waiting processes.

solution to starvation in SJN:

SJN (shortest job next, the same as shortest-job-first) is a priority scheme: the priority is the predicted next CPU burst time. The solution is likewise to include an aging factor (i.e., increase a process's priority as time passes).
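The aging idea can be sketched in a few lines (the boost interval of 10 time units and the "lower number = higher priority" convention are assumptions for illustration, not a fixed policy):

```python
def effective_priority(base, waited, boost_interval=10):
    # Aging: every boost_interval time units spent waiting raises the
    # process one priority level (lower number = higher priority),
    # clamped so priority never goes above the top level 0.
    return max(0, base - waited // boost_interval)

print(effective_priority(20, 0))   # 20: no wait, no boost
print(effective_priority(20, 50))  # 15: five boosts after waiting 50 units
print(effective_priority(3, 100))  # 0: clamped at the highest priority
```

However low a process starts, its effective priority eventually reaches the top, so it cannot starve forever.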

2.6 Round Robin (RR) Scheduling (Dinosaur Book 5.3.3)

policy: define a small unit of CPU time called a time quantum. After this time has elapsed, the running process is preempted and added to the end of the ready queue (new processes also join at the tail). A process may have a CPU burst of less than one time quantum; in that case it releases the CPU voluntarily.

property:

  • n processes, time quantum = q
    • Each process gets 1/n of the CPU time
    • At most q units at a time
    • No process waits more than (n-1)q time units

Time quantum too large -> degenerates into FCFS (infinite q); response time suffers.

Time quantum too small -> context-switch overhead dominates (throughput suffers).

Vocabulary: overhead, i.e., the extra cost of the mechanism itself.

typical numbers: q = 10-100ms, context switch = 0.1-1ms

pros: better for short jobs, fair

cons: context switching overhead for long jobs

comparison: even assuming zero-cost context switching, RR can have worse average response time (e.g., with equal-length jobs, every job finishes late); also, cache state must be shared among all jobs under RR, but can be devoted to each job in turn under FCFS.
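The properties above can be checked with a small simulation, a minimal sketch that assumes all processes arrive at t = 0 (bursts 24, 3, 3 with q = 4 follow the classic textbook example):

```python
from collections import deque

def rr_waiting_times(bursts, q):
    # Simulate Round Robin: run each ready process for at most q units,
    # re-queue it at the tail if it still has work left, and record
    # waiting time = finish time - burst time (arrival is at t = 0).
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    t = 0
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)
        else:
            finish[i] = t
    return [finish[i] - bursts[i] for i in range(len(bursts))]

print(rr_waiting_times([24, 3, 3], q=4))    # [6, 4, 7]
print(rr_waiting_times([24, 3, 3], q=100))  # [0, 24, 27]: huge q = FCFS
```

With q = 4 the short jobs wait far less than under FCFS, while an oversized quantum reproduces the FCFS schedule exactly, matching the "time quantum too large" remark above.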

2.7 Multilevel Queue Scheduling (Dinosaur Book 5.3.5, PPT 46)

Definition: ① the ready queue is partitioned into separate queues (e.g., foreground (interactive) and background (batch)). ② Each queue has its own scheduling algorithm. ③ Processes are assigned to one queue permanently (fixed priority, possibility of starvation). ④ Scheduling must also be done between the queues (e.g., serve all from foreground, then background).

Upgraded version: multilevel feedback queue (Dinosaur Book 5.3.6): ③ becomes variable priorities, and processes can move between queues (aging can be implemented this way).

Hyperparameters: ① number of queues. ② scheduling algorithm for each queue. ③ method used to determine when to upgrade/demote a process, and which queue a newly entering process is placed in.

(Figure: example of multilevel feedback queues)

goal: responsiveness, low overhead, freedom from starvation, support for high-/low-priority tasks, and fairness; in practice, no scheduler is perfect at all of them.
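A toy two-level feedback queue illustrates the hyperparameters above; the quanta (4 and 8), the two levels, and demotion on quantum expiry are all illustrative assumptions, not a real scheduler's parameters:

```python
from collections import deque

def mlfq_run(bursts, quanta=(4, 8)):
    # Two-level feedback queue: a process that uses up its quantum at
    # level 0 is demoted to level 1 (larger quantum); level 0 is always
    # drained before level 1 runs. Returns the (pid, level) schedule.
    queues = [deque((i, b) for i, b in enumerate(bursts)), deque()]
    trace = []
    while any(queues):
        level = 0 if queues[0] else 1
        pid, rem = queues[level].popleft()
        trace.append((pid, level))
        rem -= min(quanta[level], rem)
        if rem > 0:
            # demote on quantum expiry (stay at the bottom level)
            queues[min(level + 1, len(queues) - 1)].append((pid, rem))
    return trace

print(mlfq_run([10, 3]))  # [(0, 0), (1, 0), (0, 1)]
```

The long job (pid 0) burns its level-0 quantum and is demoted, letting the short interactive-style job (pid 1) finish first, which is exactly the responsiveness goal.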

3 General Scheduling Policy

3.1 Multiple-Processor Scheduling (Dinosaur Book 5.5)

1. have one ready queue accessed by each CPU

  • self-scheduled: each CPU dispatches a job from ready queue
  • master slave: one CPU schedules the other CPUs

2. Homogeneous (symmetric) processors within a multiprocessor: permits load sharing

3. asymmetric multiprocessing: ① only 1 CPU runs the kernel. ② Others run user programs. ③ Alleviates the need for data sharing

3.2 Real-Time Scheduling (Dinosaur Book 5.6)

1. Hard real-time computing: required to complete a critical task within a guaranteed amount of time.

2. Soft real-time computing: requires that critical processes receive priority over less critical ones.

3. Types of real-time schedulers:

  • Periodic schedulers: fixed arrival rate (5.6.2)
  • Demand-driven schedulers: variable arrival rate
  • Deadline schedulers: priority determined by deadline (EDF, 5.6.4)
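The deadline-scheduler dispatch rule reduces to picking the ready task with the nearest deadline; a one-line sketch (the (name, deadline) task tuples are hypothetical):

```python
def edf_pick(ready):
    # Earliest-Deadline-First: among the ready tasks, dispatch the one
    # whose absolute deadline is closest.
    return min(ready, key=lambda task: task[1])

# B has the nearest deadline, so it is scheduled first
print(edf_pick([("A", 50), ("B", 30), ("C", 75)]))  # ('B', 30)
```

Priorities under EDF are dynamic: as deadlines pass and new tasks arrive, re-evaluating this rule can pick a different task each time.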

4. Dispatch latency(5.6.1)

  • Problem: dispatch latency must be kept small, yet the OS may force a process to wait for a system call or I/O to complete.
  • Solution: make system calls preemptible (instead of blocking); determine safe criteria under which the kernel can be interrupted

5. Priority inversion and inheritance (Project 1):

  • problem: priority inversion:
    • a higher-priority process needs a kernel resource currently being used by a lower-priority process
    • the higher-priority process must wait
  • solution: priority inheritance
    • the low-priority process inherits the high priority until it has completed its use of the resource in question

3.3 Algorithm Evaluation (Dinosaur Book 5.8)

1. Deterministic modeling (5.8.1): one type of analytic evaluation. This method takes a particular predetermined workload and computes the performance of each algorithm on that workload.

Remark: this is just like the way we analyzed the algorithms above: fix a set of processes with given arrival times, CPU bursts, and so on, then analyze.

2. Queueing models and queueing theory (5.8.2):

Compared with deterministic modeling, queueing models specify the distributions of CPU and I/O bursts (i.e., model them with probability theory).

2.1 Little's formula: n = λ × W

  • n: average long-term queue length (# processes)
  • λ: average arrival rate (# processes / s)
  • W: average waiting time in the queue (s)

Remark: the formula comes from considering one process's window of length W. On average, a process spends time W between entering the queue and leaving it to run. During such a window, the n processes in the queue depart while λ × W new processes arrive; assuming arrivals balance departures (steady state) gives the formula.
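A quick numeric check of the formula (λ = 7 processes/s and W = 2 s are made-up numbers for illustration):

```python
def queue_length(arrival_rate, avg_wait):
    # Little's formula: n = lambda * W
    return arrival_rate * avg_wait

print(queue_length(7, 2))      # 14 processes in the queue on average
print(queue_length(7, 2) / 7)  # 2.0: solving back for W = n / lambda
```

The same identity is often used in reverse: measure n and λ on a running system and infer the average wait W without instrumenting individual processes.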

2.2 Other techniques:

  • Simulation (5.8.3): program a model of the computer system; software data structures represent the major components of the system.
  • Implementation (5.8.4): put the algorithm directly into real use, whereas simulation above only models it.

4 Case Study: Linux Scheduler (Dinosaur Book 5.7.1)

This part is too trivial; skipped even for the exam.

5 Final word

When do the details of the scheduling policy and fairness really matter?

  • When there aren't enough resources to go around

An interesting implication of the load curve (response time vs. utilization):

  • Most scheduling algorithms work fine in the linear portion of the load curve
  • Response time goes to infinity when utilization is 100%

Reference

1. Operating System Concepts, 10th Edition. Wiley.

2. ShanghaiTech CS130 Operating system, Topic 8, Scheduling. Shu-Yin.
