Operating Systems

Three pieces

  1. virtualization

  2. concurrency

  3. persistence

Why does the OS need to virtualize resources?

This is not the main question, as the answer should be obvious: it makes the system easier to use.

How to virtualize resources?

This is the crux of the problem. Thus, the focus is on how:

  • What mechanisms and policies are implemented by the OS to attain virtualization?

  • How does the OS do so efficiently?

  • What hardware support is needed?

What does a running program do?

It executes instructions.

Many millions (and these days, even billions) of times every second, the processor fetches an instruction from memory, decodes it (i.e., figures out which instruction this is), and executes it (i.e., it does the thing that it is supposed to do, like add two numbers together, access memory, check a condition, jump to a function, and so forth). After it is done with this instruction, the processor moves on to the next instruction, and so on, and so on, until the program finally completes.

This is the Von Neumann model of computing.
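
To make the cycle concrete, here is a toy sketch in C of the fetch-decode-execute loop for a made-up three-instruction machine (the opcodes and the single accumulator register are invented for this example, not any real ISA):

    #include <stdio.h>

    /* A made-up instruction set: add an immediate to the accumulator,
       print the accumulator, or halt. */
    enum opcode { ADD, PRINT, HALT };
    struct insn { enum opcode op; int arg; };

    int main(void) {
        /* The "program" sitting in memory: acc += 2; acc += 3; print; halt */
        struct insn memory[] = {
            {ADD, 2}, {ADD, 3}, {PRINT, 0}, {HALT, 0},
        };
        int acc = 0, pc = 0, running = 1;

        while (running) {
            struct insn i = memory[pc++];            /* fetch   */
            switch (i.op) {                          /* decode  */
            case ADD:   acc += i.arg;          break; /* execute */
            case PRINT: printf("acc = %d\n", acc); break;
            case HALT:  running = 0;           break;
            }
        }
        return 0;
    }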

What does the OS do?

It takes physical resources, such as a CPU, memory, or disk, and virtualizes them. It handles tough and tricky issues related to concurrency. And it stores files persistently, thus making them safe over the long term.

What is a process?

The process is one of the most fundamental abstractions that the OS provides to users. The definition of a process, informally, is quite simple: it is a running program.

The OS creates this illusion by virtualizing the CPU. By running one process, then stopping it and running another, and so forth, the OS can promote the illusion that many virtual CPUs exist when in fact there is only one physical CPU (or a few). This basic technique, known as time sharing of the CPU, allows users to run as many concurrent processes as they would like; the potential cost is performance, as each will run more slowly if the CPU(s) must be shared.
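
A minimal sketch of the illusion (modeled on the classic "spin and print" demo; the spin() helper and the one-second delay are just assumptions for the example): each copy of the program loops forever, printing its argument roughly once per second. Launching several copies at once, e.g. ./cpu A & ./cpu B & ./cpu C &, every one of them appears to make steady progress even on a single physical CPU.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Busy-wait for roughly the given number of seconds of CPU time. */
    static void spin(double seconds) {
        clock_t start = clock();
        while ((double)(clock() - start) / CLOCKS_PER_SEC < seconds)
            ;  /* burn CPU */
    }

    int main(int argc, char *argv[]) {
        if (argc != 2) {
            fprintf(stderr, "usage: cpu <string>\n");
            exit(1);
        }
        while (1) {            /* print the argument about once per second */
            spin(1.0);
            printf("%s\n", argv[1]);
        }
        return 0;
    }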

To implement virtualization of the CPU, and to implement it well, the OS will need both some low-level machinery (mechanisms) as well as some high-level intelligence (policies).

Mechanisms

We call the low-level machinery mechanisms; mechanisms are low-level methods or protocols that implement a needed piece of functionality. For example, we’ll learn later how to implement a context switch, which gives the OS the ability to stop running one program and start running another on a given CPU; this time-sharing mechanism is employed by all modern OSes.

Policies

On top of these mechanisms resides some of the intelligence in the OS, in the form of policies. Policies are algorithms for making some kind of decision within the OS. For example, given a number of possible programs to run on a CPU, which program should the OS run? A scheduling policy in the OS will make this decision, likely using historical information (e.g., which program has run more over the last minute?), workload knowledge (e.g., what types of programs are run), and performance metrics (e.g., is the system optimizing for interactive performance, or throughput?) to make its decision.

Process API

  • Create

  • Destroy

  • Wait

  • Status
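
On a UNIX system these operations map onto system calls such as fork() (create), exec() (load a new program into the process), wait()/waitpid() (wait for a child and collect its exit status), and exit()/kill() (destroy). A minimal sketch, assuming a POSIX environment; the choice of running ls -l in the child is arbitrary:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                  /* Create: a new child process   */
        if (pid < 0) {
            perror("fork");
            exit(1);
        } else if (pid == 0) {               /* child: replace itself with ls */
            char *args[] = {"ls", "-l", NULL};
            execvp(args[0], args);
            perror("execvp");                /* only reached if exec fails    */
            exit(1);
        } else {                             /* parent */
            int status;
            waitpid(pid, &status, 0);        /* Wait + Status                 */
            printf("child %d exited with status %d\n",
                   (int)pid, WEXITSTATUS(status));
        }
        return 0;
    }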

Process States

  • Running

  • Ready

  • Blocked

  • Done
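
A tiny sketch of the state machine these names imply (the enum and the transitions in the comments are illustrative, not a real kernel's):

    #include <stdio.h>

    /* Typical transitions:
         Ready   -> Running  (scheduled by the OS)
         Running -> Ready    (descheduled, e.g. by a timer interrupt)
         Running -> Blocked  (initiates I/O and must wait)
         Blocked -> Ready    (the I/O completes)
         Running -> Done     (the process exits)                      */
    enum proc_state { READY, RUNNING, BLOCKED, DONE };

    int main(void) {
        const char *names[] = {"Ready", "Running", "Blocked", "Done"};
        /* one typical lifetime: scheduled, does I/O, resumes, exits */
        enum proc_state life[] = {READY, RUNNING, BLOCKED, READY, RUNNING, DONE};
        int n = sizeof life / sizeof life[0];
        for (int i = 0; i < n; i++)
            printf("%s%s", names[life[i]], i + 1 < n ? " -> " : "\n");
        return 0;
    }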

Mechanism: Limited Direct Execution

  • Direct Execution Protocol (Without Limits)

  • Limited Direct Execution Protocol

  • Limited Direct Execution Protocol (Timer Interrupt)

Scheduling

Assumptions

Make the following assumptions about the processes, sometimes called jobs, that are running in the system:

  1. Each job runs for the same amount of time.

  2. All jobs arrive at the same time.

  3. Once started, each job runs to completion.

  4. All jobs only use the CPU (i.e., they perform no I/O).

  5. The run-time of each job is known.

Scheduling Metrics

  1. Turnaround time

    T(turnaround) = T(completion) − T(arrival)

    Because we have assumed that all jobs arrive at the same time, for now T(arrival) = 0 and hence T(turnaround) = T(completion). This fact will change as we relax the aforementioned assumptions.

  2. Response time

    T(response) = T(first_run) − T(arrival)
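
    For instance (a made-up job, just to exercise the formulas): a job that arrives at t = 0, is first scheduled at t = 5, and completes at t = 30 has T(response) = 5 − 0 = 5 and T(turnaround) = 30 − 0 = 30.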

Evolution

  • considering only turnaround time:

    FIFO –(relax assumption 1)–> SJF –(relax assumptions 2 and 3)–> STCF

First In, First Out (FIFO)

[Figure: FIFO schedule — A, B, and C (10 seconds each) run in arrival order]

A finished at 10, B at 20, and C at 30. Thus, the average turnaround time for the three jobs is simply (10+20+30)/3 = 20.

Why FIFO Is Not That Great?

[Figure: FIFO schedule — A (100 seconds) runs before B and C (10 seconds each)]

Now suppose A runs for 100 seconds while B and C still need only 10 each. Stuck waiting behind A, the jobs see a painful average turnaround time of 110 seconds: (100+110+120)/3 = 110.

Shortest Job First (SJF)

[Figure: SJF schedule — B and C run before the long job A]

Simply by running the shorter jobs B and C before A, SJF cuts the average turnaround time from 110 seconds to 50: (10+20+120)/3 = 50.

[Figure: SJF with late arrivals — B and C arrive at t = 10, after A has started]

Relax assumption 2 and let B and C arrive at t = 10, after the 100-second job A has already started: non-preemptive SJF must let A run to completion before B and C get the CPU, so the average turnaround time climbs back up to 103.33 seconds: (100+(110−10)+(120−10))/3 = 103.33.

Shortest Time-to-Completion First (STCF)

[Figure: STCF schedule — A is preempted at t = 10 so B and C can run first]

By relaxing assumption 3 and preempting A when B and C arrive, STCF gets back to a much-improved average turnaround time of 50 seconds: ((120−0)+(20−10)+(30−10))/3 = 50.
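
A small brute-force sketch that reproduces the 103.33-second SJF average and the 50-second STCF average from the examples above (the job set is the same: A needs 100 seconds and arrives at t = 0, B and C need 10 seconds each and arrive at t = 10; the one-second time step and the tie-breaking order are arbitrary choices for the sketch). The only difference between the two policies is when the scheduler is allowed to pick again: SJF only when the CPU is free, STCF at every step.

    #include <stdio.h>

    #define N 3

    struct job { const char *name; int arrival, remaining, completion; };

    static double simulate(int preemptive) {
        struct job jobs[N] = {
            {"A", 0, 100, -1}, {"B", 10, 10, -1}, {"C", 10, 10, -1},
        };
        int done = 0, cur = -1;
        for (int t = 0; done < N; t++) {
            /* pick a job: every second for STCF, only when idle/finished for SJF */
            if (preemptive || cur == -1 || jobs[cur].remaining == 0) {
                cur = -1;
                for (int i = 0; i < N; i++)
                    if (jobs[i].arrival <= t && jobs[i].remaining > 0 &&
                        (cur == -1 || jobs[i].remaining < jobs[cur].remaining))
                        cur = i;
            }
            if (cur == -1) continue;            /* no job ready: CPU idles   */
            if (--jobs[cur].remaining == 0) {   /* run the chosen job for 1s */
                jobs[cur].completion = t + 1;
                done++;
            }
        }
        double total = 0;
        for (int i = 0; i < N; i++) {
            printf("  %s: turnaround %d\n", jobs[i].name,
                   jobs[i].completion - jobs[i].arrival);
            total += jobs[i].completion - jobs[i].arrival;
        }
        return total / N;
    }

    int main(void) {
        printf("SJF:\n");
        printf("  average turnaround = %.2f\n", simulate(0));
        printf("STCF:\n");
        printf("  average turnaround = %.2f\n", simulate(1));
        return 0;
    }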

  • adding response time as a second metric:

Round Robin

[Figure: Round Robin — the CPU cycles among the jobs in short time slices]
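
Round Robin runs a job for a time slice (a scheduling quantum), then switches to the next job in the run queue, and repeats until all jobs are finished. It is excellent for response time but generally bad for turnaround: with three 5-second jobs arriving together and a 1-second slice, RR's average response time is (0+1+2)/3 = 1 second, whereas an SJF-style run-to-completion schedule gives (0+5+10)/3 = 5.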

Incorporating I/O

[Figures: a schedule that leaves the CPU idle while A waits on I/O, versus one that overlaps another job's CPU time with A's I/O]
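
The point of the two schedules: when a job issues an I/O request it blocks until the I/O completes, and a naive scheduler leaves the CPU idle during that time. If the scheduler instead treats each CPU burst between I/Os as an independent short job, it can run another process while the I/O is in flight, overlapping CPU and disk use and keeping both busy.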
