OS: CPU Scheduling

1. Types of Scheduling

  • Long-term scheduling: determines which programs are admitted to the system for processing.
  • Medium-term scheduling: part of the swapping function.
  • Short-term scheduling: also known as the dispatcher; executes most frequently. Invoked when an event occurs, e.g. clock interrupts, I/O interrupts, operating-system calls, signals.
  • I/O scheduling: the decision as to which process's pending I/O request shall be handled by an available I/O device.
    (Figure: relationship among the levels of CPU scheduling)

2. Basic Concepts

  • CPU–I/O Burst Cycle – process execution consists of a cycle of CPU execution and I/O wait.
  • CPU Scheduler: The scheduler selects among the processes in memory that are ready to execute, and allocates the CPU to one of them.
    CPU scheduling decisions may take place when a process:
    1. Switches from running to waiting state.
    2. Switches from running to ready state.
    3. Switches from waiting to ready.
    4. Terminates.
      Scheduling under 1 and 4 is nonpreemptive; all other scheduling is preemptive.
  • Dispatcher: the dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
    – switching context
    – switching to user mode
    – jumping to the proper location in the user program to restart that program
    Dispatch latency – the time it takes for the dispatcher to stop one process and start another running.

3. Scheduling Criteria

  • CPU utilization – keep the CPU as busy as possible
    – In a real system, it should range from about 40% to 90%
  • Throughput – the number of processes that complete their execution per time unit
  • Turnaround time – the amount of time to execute a particular process
    – The interval from the time of submission of a process to the time of its completion
  • waiting time – amount of time a process has been waiting in the ready queue
    – the sum of the periods spent waiting in the ready queue
  • response time – amount of time it takes from when a request was
    submitted until the first response is produced, not output (for time-sharing environment)

4. Scheduling Algorithms

First-Come, First-Served (FCFS) Scheduling

  1. Each process joins the FIFO ready queue. When the current process ceases to execute, the oldest process in the ready queue is selected.
  2. The FCFS scheduling algorithm is **nonpreemptive**.
  3. Convoy effect: short processes get stuck behind a long process, as in the sketch below.
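
A minimal sketch of how FCFS waiting and turnaround times fall out of the queue order, assuming every process arrives at time 0 (the helper name `fcfs` and the burst values are illustrative, not from the original notes):

```python
# FCFS sketch: given CPU burst lengths in arrival order, compute each
# process's waiting and turnaround time (all arrivals assumed at time 0).
def fcfs(bursts):
    clock, stats = 0, []
    for burst in bursts:
        waiting = clock                # time spent in the ready queue so far
        turnaround = waiting + burst   # submission-to-completion time
        stats.append((waiting, turnaround))
        clock += burst
    return stats

# A long job followed by two short ones shows the convoy effect:
print(fcfs([24, 3, 3]))   # [(0, 24), (24, 27), (27, 30)] -> average wait 17
```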

Shortest-Job-First (SJF) Scheduling

  1. Two schemes:
    nonpreemptive – once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
    preemptive – if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
  2. SJF is optimal – it gives the minimum average waiting time for a given set of processes; a sketch of the nonpreemptive variant follows this list.
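
A minimal sketch of the nonpreemptive variant, assuming burst lengths are known exactly (in practice they must be predicted, as discussed next); the helper name `sjf` and the process tuples are illustrative:

```python
# Nonpreemptive SJF sketch: repeatedly run, among the processes that have
# already arrived, the one with the shortest CPU burst.
def sjf(processes):                      # processes: list of (pid, arrival, burst)
    pending = sorted(processes, key=lambda p: p[1])
    clock, waits = 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                    # CPU idle until the next arrival
            clock = pending[0][1]
            continue
        job = min(ready, key=lambda p: p[2])
        pid, arrival, burst = job
        waits[pid] = clock - arrival
        clock += burst
        pending.remove(job)
    return waits

print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
# {'P1': 0, 'P3': 3, 'P2': 6, 'P4': 7} -> average waiting time 4
```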

Prediction: the difficulty with SJF is knowing the length of the next CPU burst; it is usually estimated as an exponential average of the measured lengths of previous bursts, τ(n+1) = α·t(n) + (1 − α)·τ(n), where t(n) is the length of the n-th burst, τ(n) is the previous prediction, and 0 ≤ α ≤ 1.
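
A small sketch of that exponential average; the helper `predict_bursts` is hypothetical, and α = 0.5 with initial guess τ0 = 10 are illustrative values:

```python
# Exponential averaging of CPU burst lengths: each new prediction blends
# the most recent measured burst with the previous prediction.
def predict_bursts(bursts, alpha=0.5, tau0=10):
    tau, predictions = tau0, [tau0]      # tau0: initial guess for the first burst
    for t in bursts:                     # t: measured length of the latest burst
        tau = alpha * t + (1 - alpha) * tau
        predictions.append(tau)
    return predictions

print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))
# [10, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```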

Priority Scheduling

  1. The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
  2. Equal-priority processes are scheduled in FCFS order.
  3. Priority can be defined either internally or externally.
  4. SJF is a priority scheduling where priority is the predicted next CPU burst time.
  5. Two schemes:
    – Preemptive: preempt the CPU if the priority of the newly arrived process is higher than
    the priority of the currently running process.
    – Nonpreemptive: simply put the new process at the head of the ready queue
  6. Problem: starvation (indefinite blocking) – low-priority processes may never execute.
    Solution: aging – as time progresses, increase the priority of waiting processes (see the sketch after this list).
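
A minimal sketch of nonpreemptive priority scheduling with aging, where a smaller number means higher priority; this is a toy model in which waiters are boosted each time another process is dispatched (rather than per clock tick), and the aging step and workload are made-up values:

```python
# Priority scheduling sketch: always dispatch the waiting process with the
# smallest priority number; each time one is dispatched, age (boost) the rest.
def priority_with_aging(processes, age_step=1):
    ready = [list(p) for p in processes]   # [pid, priority, burst], all ready at t=0
    order = []
    while ready:
        ready.sort(key=lambda p: p[1])     # smallest number = highest priority
        pid, prio, burst = ready.pop(0)
        order.append(pid)
        for p in ready:                    # aging: raise priority of the waiters
            p[1] = max(0, p[1] - age_step)
    return order

print(priority_with_aging([("P1", 3, 10), ("P2", 1, 1), ("P3", 4, 2), ("P4", 5, 1)]))
# ['P2', 'P1', 'P3', 'P4']
```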

Round Robin (RR) Scheduling

  1. Each process gets a small unit of CPU time (time quantum, or time slice), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue. Keep the ready queue as a circular FIFO queue.
  2. Set a timer to interrupt after 1 time quantum.
  3. Two cases for the CPU burst of the currently running process:
    – less than 1 time quantum: the process releases the CPU voluntarily when its burst ends.
    – longer than 1 time quantum: the timer will go off and cause an interrupt to the operating system. A context switch will be executed, and the process will be put at the tail of the ready queue.
  4. The RR scheduling algorithm is preemptive.
    If there are n processes in the ready queue and the time quantum is q, then each process gets 1/n of the CPU time in chunks of at most q time units. No process waits more than (n-1)q time units. Performance depends on the size of the time quantum (a sketch follows this list):
    – q very large (infinite): behaves like FCFS
    – q very small: processor sharing; q must be large with respect to the context-switch time, otherwise the overhead is too high
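
A minimal round-robin sketch with a circular FIFO ready queue; the helper `round_robin`, the quantum q = 4, and the workload are illustrative choices:

```python
# Round-robin sketch: run each process for at most one quantum, then move
# it to the tail of the ready queue if it still has work left.
from collections import deque

def round_robin(bursts, q=4):
    ready = deque(bursts)                 # bursts: list of (pid, burst)
    clock, finish = 0, {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(q, remaining)           # run one quantum or until the burst ends
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))   # preempted: back to the tail
        else:
            finish[pid] = clock                    # completion time
    return finish

print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)]))
# {'P2': 7, 'P3': 10, 'P1': 30}
```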

Multilevel Queue Scheduling

  • The ready queue is partitioned into separate queues, for example:
    – foreground (interactive) – RR
    – background (batch) – FCFS

  • Fixed-priority scheduling can be used among the queues

  • Or time slicing – each queue gets a certain amount of CPU time, which it schedules among its own processes

Multilevel Feedback Queue Scheduling

  • A process can move between the various queues.
  • To separate processes with different CPU-burst characteristics.
    – If a process uses too much CPU time, it will be moved to a lower-priority queue. This leaves I/O-bound and interactive processes in the higher-priority queues.
    – If a process waits too long in a lower-priority queue, it may be moved to a higher-priority queue. This form of aging prevents starvation (see the sketch after this list).
  • Multilevel-feedback-queue scheduler is defined by the following
    parameters:
    – the number of queues
    – the scheduling algorithms for each queue
    – the method used to determine when to upgrade a process to a higher-priority queue
    – the method used to determine when to demote a process to a lower-priority queue
    – the method used to determine which queue a process will enter when that process needs service
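
A small sketch of one such scheduler with three queues (quanta 8 and 16, then FCFS) and demotion when a process uses up its whole quantum; promotion/aging is left out for brevity, and the helper `mlfq` and all parameters are illustrative choices:

```python
# Multilevel feedback queue sketch: new work enters the top queue; a process
# that uses up its whole quantum is demoted to the next (lower-priority) queue.
from collections import deque

def mlfq(bursts, quanta=(8, 16, None)):             # None = run to completion (FCFS)
    queues = [deque() for _ in quanta]
    for job in bursts:                               # (pid, burst), all ready at t=0
        queues[0].append(job)
    clock, finish = 0, {}
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)   # highest nonempty queue
        pid, remaining = queues[level].popleft()
        quantum = quanta[level]
        run = remaining if quantum is None else min(quantum, remaining)
        clock += run
        if remaining > run:                          # used its full quantum: demote
            queues[min(level + 1, len(queues) - 1)].append((pid, remaining - run))
        else:
            finish[pid] = clock
    return finish

print(mlfq([("A", 30), ("B", 4), ("C", 20)]))        # {'B': 12, 'C': 48, 'A': 54}
```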

Highest Response-Ratio Next (HRRN) scheduling

  • Non-preemptive scheduling algorithm.
  • Response ratio:
    R = (W + T) / T = 1 + W / T
    – W: waiting time in the ready queue
    – T: CPU burst (service) time

The process with the highest response ratio is scheduled next, as in the sketch below.
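
A small sketch of the selection rule; the helper `hrrn_pick`, the time instant, and the ready-queue contents are illustrative:

```python
# HRRN sketch: among the waiting processes, pick the one with the highest
# response ratio R = (W + T) / T, where W is its waiting time so far and
# T is its (expected) CPU burst time.
def hrrn_pick(now, ready):                     # ready: list of (pid, arrival, burst)
    def ratio(p):
        pid, arrival, burst = p
        return ((now - arrival) + burst) / burst
    return max(ready, key=ratio)

# At time 20, a short job that has already waited beats a long fresh one:
print(hrrn_pick(20, [("P1", 0, 30), ("P2", 10, 5), ("P3", 18, 2)]))   # ('P2', 10, 5)
```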

5. Thread Scheduling

  1. Kernel-level threads are scheduled by the OS.
    – User-level threads are managed by a thread library; the kernel is unaware of them.
    – To run on a CPU, user-level threads must ultimately be mapped to an associated kernel-level thread, although this mapping may be indirect and may use a lightweight process (LWP).
  2. Process-contention scope (PCS): competition for the CPU is among threads belonging to the same process.
    Used by the m:1 and m:n models.
  3. System-contention scope (SCS): competition for the CPU is among all threads in the system.
    The 1:1 model uses only SCS.

6. Multiple-Processor Scheduling(*)

Asymmetric multiprocessing
– A master processor handles all scheduling decisions, I/O processing, and other system activities.
– Disadvantage: failure of the master brings down the whole system.

Symmetric multiprocessing (SMP)
– Peer architecture
• The operating system can execute on any processor
• Each processor is self-scheduling
• Problem: one processor could be idle while another is very busy

Processor Affinity
– A process has an affinity for the processor on which it is currently running
– Soft affinity
• The OS has a policy of attempting to keep a process running on the same processor, but doesn’t guarantee that it will do so.
– Hard affinity
• The OS provides system calls allowing a process to specify the subset of processors on which it may run (see the sketch below).
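
On Linux, for instance, this hard-affinity system call is exposed in Python as os.sched_setaffinity / os.sched_getaffinity (Linux-only; the CPU set {0, 1} below is just an example):

```python
# Hard affinity sketch (Linux): restrict the calling process to CPUs 0 and 1.
import os

pid = 0                                    # 0 means "the calling process"
print("allowed CPUs:", os.sched_getaffinity(pid))
os.sched_setaffinity(pid, {0, 1})          # may raise OSError if those CPUs don't exist
print("allowed CPUs:", os.sched_getaffinity(pid))
```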

Load balancing
– Push migration: a specific task periodically checks the load on each processor and, if it finds an imbalance, pushes processes from overloaded processors to idle or less-busy ones.
– Pull migration: an idle processor pulls a waiting task from a busy processor


7. Real-Time CPU Scheduling

Priority-based Scheduling
Rate-Monotonic Scheduling
Earliest-Deadline-First Scheduling (EDF)
Proportional Share Scheduling

8. Algorithm Evaluation

Defining the criteria to be used in selecting an algorithm.
– CPU utilization, response time, or throughput

Deterministic modeling

Deterministic modeling – takes a particular predetermined workload and defines the performance of each algorithm for that workload
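
For example, for five processes that all arrive at time 0 with burst times 10, 29, 3, 7, and 12, FCFS gives an average waiting time of (0 + 10 + 39 + 42 + 49) / 5 = 28, while nonpreemptive SJF gives (10 + 32 + 0 + 3 + 20) / 5 = 13, so SJF would be preferred here. The result is exact but holds only for the particular workload examined.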

Queuing models

Little's formula: n = λ × W
– n: average queue length (excluding the process being serviced)
– W: average waiting time in the queue
– λ: average arrival rate of new processes into the queue
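
For example, if processes arrive at an average rate of λ = 7 per second and each spends W = 2 seconds in the queue, then on average n = 7 × 2 = 14 processes are waiting.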

Simulations

Simulations give a more accurate evaluation:
– Programmed model of the computer system
– The clock is a variable
– Gather statistics indicating algorithm performance
– Data to drive the simulation can be gathered via random-number generators (following probability distributions) or via traces recorded from a real system

Implementation

Difficulties
– Cost: the expense is incurred not only in coding the algorithm and modifying the operating system to support it (along with the required data structures), but also in the reaction of the users to a constantly changing operating system.
– The environment will change: not only in the usual way, as new programs are written and the types of problems change, but also as a result of the performance of the scheduler itself.
