
CSE 120


Preface

CSE 120 - Principles of Computer Operating Systems - Zhou [SP21]

Lecture 3

Reading chapter 6

Two modes: User mode and kernel mode

  • user mode
    • code that runs in user mode is restricted in what it
      can do. For example, when running in user mode, a process can’t issue
      I/O requests; doing so would result in the processor raising an exception;
      the OS would then likely kill the process.
  • kernel mode
    • the mode in which the operating system (or kernel) runs. In this mode, running code can do what it likes, including privileged operations such as issuing I/O requests and executing all types of restricted instructions.

A program runs in user mode; if it needs a privileged operation, it makes a system call.

  • system call
    • a request to the OS for a privileged service
  • trap
    • jumps into the kernel and runs the code the request asks for
  • return-from-trap
    • returns from the request back to user mode

trap table

  • tells the hardware what code to run when certain exceptional events occur: for example, what code should run when a hard-disk interrupt takes place, when a keyboard interrupt occurs, or when a program makes a system call.

system call number: identifies which service the program is requesting.
Telling the hardware where the trap table is located is itself a privileged operation.
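To make the trap mechanism concrete, here is a minimal sketch (assuming Linux and glibc's syscall() wrapper, neither of which is mentioned in the notes) that issues the write system call by number instead of through the usual library wrapper:

#include <unistd.h>
#include <sys/syscall.h>   /* defines SYS_write */

int main(void) {
    const char msg[] = "hello from a raw system call\n";
    /* Trap into the kernel: the system call number (SYS_write) tells the
       kernel which service is requested; the kernel comes back to us with
       return-from-trap. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}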

two phases in the limited direct execution (LDE) protocol.

  • In the first (at boot time), the kernel initializes the trap table, and the
    CPU remembers its location for subsequent use.
  • In the second (when running a process), the kernel sets up a few things
    (e.g., allocating a node on the process list, allocating memory) before using a return-from-trap instruction to start the execution of the process;
    this switches the CPU to user mode and begins running the process.

Switching processes

While a program is running, the OS is not running on the CPU. At some point the program has to hand the CPU back to the OS.

  • Cooperative approach: the OS trusts that the program is not malicious and will give up the CPU (yield) after a while.
  • An illegal operation (e.g., divide by zero) triggers a trap, which hands control back to the OS.
  • If the program never returns control, the only option is to reboot.
  • Timer interrupt: interrupt every few milliseconds, giving the OS control to decide who runs next.
    The scheduler decides which process runs next.
    Context switch: to switch programs, save the current registers and restore the other program's registers.

concurrency

What if several traps occur at the same time?

  • disable interrupts while handling one

Lecture 3

Load and store are not privileged instructions: they operate on virtual addresses.

Difference between exceptions and interrupts

  • an exception is unexpected and triggered passively by the running program itself?

system call and interrupt

  • A system call is a method that allows a program to request services from the kernel, while an interrupt is an event that tells the CPU to perform a specific task immediately.

function call and system call

  • the difference is quite large?
  • a system call crosses the user/kernel boundary

int: interrupt
vendor = developer
the OS has no main()

going to the kernel = trapping into the kernel

highest interrupt: power loss

Handling Faults

Exceptions: faults such as divide by zero, an illegal opcode, or an address that doesn't exist (page fault).
The OS must save state so that the faulting process can be restarted.

  • Sometimes the process needs to resume: for example, on a page fault the OS brings the data back in. It's an exception, but not the program's fault.

    • Truly a fault, or the OS doing tricks?
  • The OS may kill the process if the fault is unrecoverable.

    • seg fault, core dump

What if the fault happens in the kernel?

  • the OS crashes: Blue Screen of Death

Before crashing, the program is not terminated immediately, so it can send crash data.

  • control transfers to the program's own handler, but the handler must be registered first.
  • after sending the report, the handler should quit; otherwise the user has to force-quit.

System calls (exceptions)

Only the OS has direct access to hardware,

  • so the program asks the OS to do things on its behalf.

Hardware provides a system call instruction that:

  • causes an exception (implemented through the interrupt/trap mechanism)

Lecture 4

reading
Time sharing: share the CPU
**Space sharing:** share resources
The program counter (PC), sometimes called the instruction pointer (IP), tells us which instruction of the program will execute next.

load

Programs reside on disk in some kind of executable format; loading puts them into memory.
Eager loading: load everything at once.
Lazy loading: load pieces only when they are used.
Then allocate the stack and the heap.
Initialize STDIN, STDOUT, STDERR.

3 execution states

  • Running
  • Ready
    • another process is executing on the CPU
  • Waiting: waiting for an event (e.g., I/O completion)

process

  • executing instance of a “program”
  • can be launched by other processes

The CPU registers are overwritten by the new process, so saving them is important.

What needs to be recorded for a process

  • Process state
  • Program counter
  • CPU registers
  • CPU scheduling information
  • Memory-management information
  • Accounting information
  • I/O status information
    Virtual memory: when there is not enough physical memory, data is stored somewhere else (on disk) and brought back on a page fault.

If we run two instances:

printf("myval %d, %p\n", var, &var);

myval 5, 0x804968c
myval 6, 0x804968c

The values differ but the printed address is the same, because it is a virtual address; the underlying physical addresses are different.

Addresses are not absolute.

process data structure


context switch

  • when a process stops, the CPU state (PC, SP, registers, etc.) must be stored
  • when the OS stops a process, it saves the current values of the registers into the process's PCB
  • when the OS is ready to start executing a new process, it loads the hardware registers from that process's PCB
  • the act of changing the CPU hardware state from one process to another is called a context switch
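A user-space analogy of this save/restore, using the POSIX ucontext API (getcontext/makecontext/swapcontext). This is only an illustration of the idea; a real kernel saves registers into the PCB in assembly:

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, task_ctx;
static char task_stack[64 * 1024];

static void task(void) {
    printf("running in the other context\n");
    /* returning goes to uc_link, i.e., back to main_ctx */
}

int main(void) {
    getcontext(&task_ctx);                  /* capture current register state     */
    task_ctx.uc_stack.ss_sp   = task_stack; /* give the new context its own stack */
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link = &main_ctx;           /* where to go when task() returns    */
    makecontext(&task_ctx, task, 0);

    printf("switching away from main\n");
    swapcontext(&main_ctx, &task_ctx);      /* save main's registers, load task's */
    printf("back in main after the switch\n");
    return 0;
}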

Process Queues

  • a collection of queues that represent the status of all processes
  • one queue for each status
    PCBs are data structures dynamically allocated in OS memory

process creation: exec()

exec() does not create a new process; it loads a new program into the current one.
When a process is created, its PCB is placed onto the ready queue.

fork()

creates a child process
The child is an (almost) exact copy of the parent;
only the return value of fork() differs (a runnable version follows below).

pid = fork();
if (pid == 0) { /* child code */ }
else          { /* parent code */ }

Note: this creates a child process, not a thread.
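A minimal runnable version of this return-value pattern (the printed pids and the output order will vary from run to run):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                              /* both processes continue from here */
    if (pid == 0) {
        printf("child:  pid=%d\n", (int)getpid());   /* fork() returned 0 in the child    */
    } else {
        printf("parent: child pid=%d\n", (int)pid);  /* fork() returned the child's pid   */
        wait(NULL);                                  /* reap the child                    */
    }
    return 0;
}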


Lecture 5

Communication between processes is costly.
States of a process

  • ready
  • running
  • waiting

Use case: web server

while (1) {
    int sock = accept();
    if ((child_pid = fork()) == 0)
        handle client request;  // may take a long time; don't wait,
                                // go back and accept the next connection immediately
    else
        close(sock);            // parent closes its copy of the socket
}

The only difference between parent and child is the return value of fork().

Use case: Unix Shell

while (1) {
    char *cmd = read_command();
    int child_pid = fork();
    if (child_pid == 0) {
        // manipulate STDIN/STDOUT/STDERR file descriptors for pipes and
        // redirection
        exec(cmd);               // never returns, unless it fails
        panic("exec failed");
    } else {
        waitpid(child_pid);      // suspend until the child process finishes
    }
}

The parent waits for the child to finish.
So why do we need the child at all?
If cmd has a bug, it only takes down the child process, not the shell.
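A stripped-down runnable version of one iteration of this loop (using execvp() and a hard-coded command in place of a real parser, purely for illustration):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char *argv[] = { "echo", "hello from the child", NULL };  /* stand-in for a parsed cmd */
    pid_t pid = fork();
    if (pid == 0) {
        execvp(argv[0], argv);          /* never returns unless it fails */
        perror("exec failed");
        exit(1);
    }
    waitpid(pid, NULL, 0);              /* the shell suspends until the child finishes */
    printf("child done, show the prompt again\n");
    return 0;
}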

The waiting state can only transition to the ready state (not directly to running).

Process includes

  • address space (all code and data)
  • OS resources
  • execution state (PC, SP)

Communication between processes is difficult, and processes are expensive:

  • Space: PCB, page tables
  • Time: creating data structures, forking and copying the address space

thread: a process with a separate execution state
Threads share

  • Process info: the parent process
  • Memory: global data, heap, page table
  • I/O and files: communication ports, directories, and file descriptors

Each thread has its own (private)

  • state (ready, running, blocked)
  • registers
  • program counter
  • execution stack

Thread vs Process

  • A thread defines a sequential execution stream within a process (PC, SP, registers)
  • A process defines the address space and general process attributes
  • A thread is bound to a single process
  • A process can have multiple threads
  • Threads become the unit of scheduling
  • Processes are now the containers in which threads execute

example: webserver

web_server() {
    while (1) {
        int sock = accept();
        thread_fork(handle_request, sock);
    }
}

handle_request(int sock) {
    process the request;
    close(sock);
}

If one thread goes down, the whole process goes down

  • because the threads share an address space

Usage: word processor

  • one thread for the keyboard
  • one for the display
  • one for the disk

If single-threaded, the keyboard won't respond while the display is being updated.

OS-managed threads are called kernel-level threads, kernel-managed threads, or lightweight processes

  • NT: threads
  • Solaris: lightweight processes
  • POSIX Threads (pthreads): PTHREAD_SCOPE_SYSTEM

Kernel threads are much cheaper than processes,
but there is still a lot of overhead

  • they still require system calls
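A minimal pthreads sketch (kernel-managed threads on most systems, so thread creation goes through a system call, which is exactly the overhead mentioned above); compile with -pthread:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("hello from thread %ld\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, (void *)1L);  /* create a kernel-managed thread */
    pthread_join(t, NULL);                         /* wait for it to finish          */
    return 0;
}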

User-level (user-managed) threads

  • managed by a run-time system (a library) instead of the OS
  • represented by only a PC, registers, a stack, and a small thread control block
  • creating a new thread, switching between threads, and synchronizing are all done via procedure calls
    • no kernel involved
  • about 100x faster than kernel-managed threads
  • pthreads: PTHREAD_SCOPE_PROCESS

From the OS's point of view, there is only one thread

  • the OS won't allocate more resources
  • if one user thread does blocking I/O, the OS blocks the whole process; it doesn't know there are other threads

Solution: use both kernel and user threads

ex: JVM

  • you create 64 user threads, and the JVM maps them onto 8 pthreads (OS-managed threads)

Multiple user-level threads over multiple kernel-level threads
Kernel threads

  • informed scheduling
  • slow
    User threads
  • fast to create, manipulate, synchronize
  • not integrated with the OS

All of them run in user mode, not kernel mode.

Thread implementation issues

  • interface
  • context switch
  • preemptive vs. non-preemptive
    • does a thread give up the CPU voluntarily or not
  • scheduling
  • synchronization

thread_yield()

  • the calling thread goes to the ready state voluntarily

lecture 6

Blocking vs. non-blocking system call

Which of the following are shared among threads of the same process?

  • global variables: yes
  • heap objects: yes, there is only one heap per process
  • local variables: no
  • stack pointers: no
  • parent process: yes
  • program counters: no
  • code: yes

Add the old thread to the ready queue before switching.
The PCB stores information about the process. … A PCB will have one or more TCBs linked to it. The TCB describes an execution context (e.g., stack pointer).

context swtich

The old register state is pushed onto the thread's stack, not stored in the TCB.

preemptive scheduling

Involuntary: the OS can always preempt.

  • can handle buggy code, more robust
  • but less efficient (more context switches)
  • used in multi-application systems

Non-preemptive scheduling: efficient, but not robust

  • voluntary; used for user-level threads
  • switches happen only on voluntary calls to thread_yield(), thread_stop(), or thread_exit()
  • if there is a bug or an infinite loop, that's bad!
  • okay in a single-application system

Blocking vs. non-blocking system calls

Blocking system call: you wait for it to return before continuing

  • usually I/O related: read(), fread(), getc(), write(); the call doesn't return until it completes
  • the process/thread is switched to the blocked state; when the I/O completes, it becomes ready again
  • simple
  • real-life example: attending a lecture
  • the OS blocks you

Using non-blocking system calls for I/O: just continue

  • asynchronous I/O
  • complicated
  • the call returns once the I/O is initiated, and the caller continues
  • once the I/O completes, an interrupt is delivered to the caller
  • real-life example: applying for a job
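A small sketch of the non-blocking style using O_NONBLOCK on a file descriptor (this uses the POSIX fcntl/read interface, which is not from the lecture; a full asynchronous setup with completion notification would need something like aio or signals):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = STDIN_FILENO;
    /* mark the descriptor as non-blocking */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    char buf[128];
    ssize_t n = read(fd, buf, sizeof buf);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        printf("no data yet, keep doing other work\n");   /* the call returned immediately */
    } else if (n >= 0) {
        printf("read %zd bytes\n", n);
    }
    return 0;
}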

Inter-process communication

A lock provides mutual exclusion.
Coordination – synchronization

thread1: foo() {
    x++;
}

thread2: bar() {
    x--;
}

The result can be 0, -1, or 1,
because the OS may interrupt a thread after it has loaded x but before the result is stored back to memory (x++ is really load, add, store).
This is safe with user-level threads and non-preemptive scheduling.

Synchronization

Threads communicate with each other through
shared resources.
When two concurrent threads access a shared resource without any synchronization:

  • this is known as a race condition or data race

Local variables are not shared

  • each thread has its own stack
  • never pass a pointer to a local variable to another thread T2

Global variables and static objects are shared

Dynamic and heap objects are shared

  • created by malloc

Atomic operations are reads and writes of words

  • reading or writing a whole word is guaranteed to be atomic; this is the basic atomic level
    • it can't be broken down further

We assume that a context switch can occur at any time.

mutual exclusion

Code that uses mutual exclusion to synchronize is called a critical section.

while (1) {
    enter critical section;
    access / modify global variables;
    exit critical section;
}

Mutual Exclusion

  • No other process must execute within the critical section while a process is in it.

Progress – a process that runs a long time outside its critical section must not block others

  • If no process is waiting in its critical section and several processes are trying to get into their critical section, then entry to the critical section cannot be postponed indefinitely.

Bounded Wait

  • A process requesting entry to a critical section should only have to wait for a bounded number of other processes to enter and leave the critical section.

No assumption

  • No assumption may be made about speeds or number of CPUs.


Atomic read/write
– Can it be done?
Locks
– Primitive, minimal semantics, used to build others
Semaphores
– Basic, easy to get the hang of, but hard to program with
Monitors
– High-level, requires language support, operations implicit
Messages
– Simple model of communication and synchronization based on atomic transfer of data across a channel
– Direct application to distributed systems
– Messages for synchronization are straightforward (once we see how the others work)

Deadlock

Lecture 7

Peterson’s Algorithm
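The slide's figure is not reproduced here; for reference, a standard two-thread Peterson's algorithm sketch looks like this (flag[] and turn are the usual shared variables; volatile keeps the sketch honest about shared access, and real code on modern hardware would also need memory barriers):

/* Peterson's algorithm for two threads; tid is 0 or 1 */
volatile int flag[2] = {0, 0};   /* flag[i]: thread i wants to enter         */
volatile int turn = 0;           /* whose turn it is to wait                 */

void enter_critical(int tid) {
    int other = 1 - tid;
    flag[tid] = 1;               /* announce intent                          */
    turn = other;                /* politely let the other thread go first   */
    while (flag[other] && turn == other)
        ;                        /* spin while the other wants in and it's their turn */
}

void leave_critical(int tid) {
    flag[tid] = 0;               /* no longer interested                     */
}

int main(void) {
    enter_critical(0);
    /* critical section */
    leave_critical(0);
    return 0;
}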

  • acquire(lock): call before entering the critical section
  • release(lock): call after leaving the critical section
  • acquire(lock) does not return until any previous holder releases the lock
    You define your own locks; hold a lock whenever you access shared data. You can have multiple locks.
    Locks can spin (a spinlock) or block (a mutex).

example of a (broken) spinlock

struct lock {
    int held = 0;
};
void acquire(lock) {
    while (lock->held);  // not good: a timer interrupt can switch us to
                         // another thread right here
    lock->held = 1;      // then both threads think they got the lock
}
void release(lock) {
    lock->held = 0;
}

Atomic instructions
Test-and-set

  • read the old value
  • set the value to true
  • return the old value

All of this is treated as one instruction.
It is very expensive: while it executes, no other CPU can access that memory (see the sketch below).
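A sketch of a correct spinlock built on a test-and-set style primitive, here using C11's atomic_flag (atomic_flag_test_and_set is the standard library's version of the instruction described above); acquire()/release() wrap the critical section exactly like the pseudocode above:

#include <stdatomic.h>

atomic_flag lock_held = ATOMIC_FLAG_INIT;

void acquire(void) {
    /* atomically: read the old value, set it to true, return the old value */
    while (atomic_flag_test_and_set(&lock_held))
        ;                           /* old value was true: someone holds it, so spin */
}

void release(void) {
    atomic_flag_clear(&lock_held);
}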

Spinlocks are wasteful (the waiting CPU just burns cycles)

solution

  • if you cannot get the lock, call thread_yield
  • or go to sleep and get woken up when the lock is available

Semaphores

An abstract data type that provides mutual exclusion for critical sections

  • Semaphores are integers that support two operations
    • P operation: decrement (called when entering the critical section)
    • V operation: increment (called when exiting the critical section)

Provided by the OS, so expensive.

Binary semaphore

can only be 1 or 0

  • 1 means open, 0 means blocked
    Mutex:
  • if a thread can't get it, it will be blocked (instead of spinning)

Counting semaphore

  • you define the max value
  • if the count is exceeded, block

struct Semaphore {
    int value;
    Queue q;
} S;

f() {
    P(S);
    // critical section
    V(S);
    return;
}

  • a mutex (binary semaphore) is slow when
    • there are multiple CPUs and the critical section is short

If the critical section is long, a blocking mutex is better.
Don't do I/O in a critical section.
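A minimal sketch using POSIX unnamed semaphores, where sem_wait/sem_post play the roles of P and V (link with -pthread on Linux):

#include <semaphore.h>
#include <stdio.h>

sem_t s;

int main(void) {
    sem_init(&s, 0, 1);   /* binary semaphore: initial value 1 means "open"    */

    sem_wait(&s);         /* P: decrement; block if the value would go below 0 */
    printf("inside the critical section\n");
    sem_post(&s);         /* V: increment; wake up a waiter if one is blocked  */

    sem_destroy(&s);
    return 0;
}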

Lecture 8

Readers/Writers Problem

  • multiple readers, but only one writer
  • int readcount
  • Semaphore mutex – controls access to readcount
  • Semaphore w_or_r – controls read or write access

read + write or write + write cannot happen at the same time

reader {
    P(mutex);
    readcount++;
    if (readcount == 1)
        P(w_or_r);       // first reader locks out writers
    V(mutex);
    read;
    P(mutex);
    readcount--;
    if (readcount == 0)
        V(w_or_r);       // last reader lets writers back in
    V(mutex);
}

– Problem: starvation
If readers keep coming, the writer never gets a chance.

Problem: Bounded Buffer

Used in producer/consumer

  • the producer inserts resources into the buffer set
  • the consumer removes resources from the buffer set
  • coordination is needed

Three semaphores

  • empty: count of empty buffers
  • full: count of full buffers
  • mutex: controls access to the buffer

The producer adds one item at a time; the consumer removes one at a time (a sketch follows below).
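Since the slide's figure is not reproduced, here is the standard semaphore-based sketch of the bounded buffer (the buffer size N and the int items are illustrative; empty starts at N, full at 0, mutex at 1):

#include <semaphore.h>

#define N 8                     /* buffer capacity (illustrative) */

int   buffer[N];
int   in = 0, out = 0;          /* next slot to fill / to drain   */
sem_t empty, full, mutex;       /* init: empty = N, full = 0, mutex = 1 */

void producer(int item) {
    sem_wait(&empty);           /* wait for an empty slot      */
    sem_wait(&mutex);           /* enter the critical section  */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&mutex);
    sem_post(&full);            /* signal: one more full slot  */
}

int consumer(void) {
    sem_wait(&full);            /* wait for a full slot        */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&mutex);
    sem_post(&empty);           /* signal: one more empty slot */
    return item;
}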

Monitors

A programming-language construct that controls access to shared data

  • only one thread can execute any monitor procedure at any time
  • don't do I/O inside a monitor

Condition variables = queues of waiting threads

three operations:

  • wait: release the monitor lock and go to sleep
  • signal: wake up one waiting thread
  • broadcast: wake up all waiting threads

Two kinds of monitors – they differ in signaling/scheduling semantics

Hoare monitors

  • when you wake up, the data (condition) is guaranteed to be there
  • easier to reason about the program
  • an "if" is sufficient

Mesa monitors

  • when you wake up, the condition is not guaranteed; you need to re-check it
  • need to use "while"
  • doesn't address the starvation problem
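A sketch of the Mesa-style "use while, not if" rule with pthread mutexes and condition variables (pthread_cond_wait has Mesa semantics, so the waiter re-checks the condition when it wakes up):

#include <pthread.h>

pthread_mutex_t m    = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
int ready = 0;                        /* the condition we wait on */

void wait_until_ready(void) {
    pthread_mutex_lock(&m);
    while (!ready)                    /* while, not if: the wakeup is only a hint */
        pthread_cond_wait(&cond, &m); /* atomically releases m and sleeps         */
    /* ... use the shared data here ... */
    pthread_mutex_unlock(&m);
}

void make_ready(void) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&cond);       /* wake up one waiter */
    pthread_mutex_unlock(&m);
}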

Lecture 9 - scheduling

Choosing which thread to run from the ready queue is called scheduling. Goals:

  • Fairness
  • Efficiency: no idle time
  • Throughput
  • Turnaround time
  • Waiting time
  • Response time
    • from request to response

First Come First Serve

  • whoever comes first gets the CPU first
  • non-preemptive

Problems

  • non-preemptive
  • average waiting time (AWT) is not optimal; small jobs should go first

Shortest Job First (SJF)

  • also non-preemptive
  • meaning you can't switch away from a long job once it starts

Preemptive SJF

  • if a long job is running and a short job arrives, switch to the short job

Priority scheduling

  • each job, based on its type, is assigned a priority
  • higher-priority jobs go first

Setting priorities

Static or dynamic,
based on

  • cost to the user
  • importance of the user
  • aging
  • percentage of the job's time already executed

Priority inversion

round robin


wait time = finish time − arrival time − duration. For example, a job arriving at t = 0 that needs 3 time units and finishes at t = 10 waited 10 − 0 − 3 = 7.

Time Quantum / Time Slice

same as round robin?
Time slice too large: poor response time
Time slice too small: too much context-switch overhead
Heuristic: 70% of jobs should block within one time slice

Combining algorithms (today's OSes)

  • multiple ready queues
  • jobs can switch between queues
  • each queue has its own algorithm/priority

Lecture 9

Deadlock

Resources

can be either

  1. serially reusable, like CPU and memory
  2. consumable: produced by a process

also be either

  1. preemptible
  2. non-preemptible

also

  1. shared among several processes
  2. dedicated

Necessary and sufficient conditions for deadlock

  • Mutual exclusion
  • Wait-for (hold-and-wait) condition
    • processes hold resources already allocated to them while waiting for more
  • No-preemption condition
    • you can't force a process to give up its resources
  • Circular-wait condition

detecting cycles

If there is a cycle, deadlock may exist.
If there is no cycle, there is no deadlock.
So a deadlock implies a cycle.

how to deal with deadlock (design choices)

  • prevention
  • avoidance
  • detection & recovery
  • do nothing and reboot (used by mobile systems)

Prevention

Break one of the deadlock conditions

  • Mutual exclusion
    • avoid assigning resources exclusively when possible
  • Hold and wait
    • force a process to request everything it needs at once
  • No preemption
    • if denied a further resource, give up the resources already held
  • Circular wait
    • all resources are numbered; request them in increasing order

Two-phase locking

Phase one:

  1. try to lock all needed records, one at a time
  2. if a needed record is already locked, release everything and start over

Phase two: perform the task and release the locks

Breaking circular wait

method 1: request only one resource at a time
method 2: requests have to be made in increasing order, so there is no going back
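A small sketch of method 2 (ordered acquisition) for two pthread mutexes, using the locks' addresses as the global ordering (the address trick is just an illustration, not something from the lecture):

#include <pthread.h>

/* Always acquire the lower-addressed mutex first, so every thread
   requests locks in the same global order and no cycle can form. */
void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if (a > b) { pthread_mutex_t *tmp = a; a = b; b = tmp; }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}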

detection and recovery

VMS uses an algorithm to detect deadlock; it is complex.

recovery: two methods

  1. abort a process
  2. preempt a resource

Java threads

  1. extend Thread, override run(), call thread.start()
  2. implement Runnable, override run(), call start() (not run())

public synchronized void increment() { c++; }

  • acquires the lock associated with the object

A static public synchronized method acquires the lock associated with the class.

synchronized blocks

public void addName() {
	synchronized(this) {
	...
	...
	}
	...
}

wait(), notify(), and notifyAll() use the lock built into the object itself.

Lecture 10

VM stuff, see 142

Lecture 11

TLB miss

  • hardware-handled miss
    • the OS maintains the page tables; the HW accesses them
  • software-loaded TLB
    • flexible, but slower

Each process has its own page table,
but the CPU has only one TLB, so on a context switch we need to invalidate all TLB entries

  • flushing the TLB is fast
    Threads of the same process share the same TLB (same address space), so switching threads doesn't require a TLB flush.
    The TLB is inside the CPU.

Lecture 13

Sharing

Soft link: File B -> File A -> inode (symbolic link, shortcut)
Hard link: File B -> A's inode

Deleting a file: when all pointers (links) to the inode are deleted, the file system deletes it.

Keep track of free blocks

bitmap, 1 means free.

Inodes and path search

Every directory and file has an inode.
How many disk accesses for /user/local/a.ppt?
8:

  1. inode of /
  2. content of /
  3. inode of /user
  4. content of user
  5. inode of /user/local
  6. content of /user/local
  7. inode of /user/local/a.ppt
  8. 1st block of /user/local/a.ppt

File Buffer Cache

Applications exhibit good locality.
Instead of reading from disk every time, keep data cached in memory.

Caching writes

Some applications assume their data makes it through the buffer cache and onto the disk.

  • writes are slow if every write goes straight to disk

Ways to solve this

  • write-behind: periodically flush the write queue to disk
  • battery-backed RAM: as with write-behind, but maintain the queue in NVRAM; expensive
  • log-structured file system: complicated

Read Ahead (prefetch)

Predict what will be read next;
the FS goes ahead and requests it from the disk.

Good for sequentially accessed files.
Read one extent at a time; an extent is a fixed number of bytes.

Distributed Systems

Coordinate many machines instead of threads, users, etc.
Performance: parallelism across multiple nodes
Reliability and fault tolerance:
if one machine goes down, the system handles it
Scalability by adding more nodes

Lecture 16

NFS (Network File System)

  • stateless
  • consistency problem:
    • client A's cache can't be seen by client B

Socket Programming

A socket is an interface between the application and the network.
Types

  • bidirectional, connected (TCP)
  • broadcast, not connected (UDP)

File

Inode

a directory is a special file
Finding a file is expensive; this is why we have file descriptors, and why open() is separate from read()/write().

Each inode contains 15 block pointers

  • the first 12 are direct block pointers (4 KB blocks)
  • then single, double, and triple indirect pointers
    To follow indirect pointers you load a block into memory, find the pointer, load again, find again, …
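As a rough worked example (assuming 4 KB blocks and 4-byte block pointers, i.e., 1024 pointers per block, which is the classic textbook setup rather than anything stated in the lecture): the 12 direct pointers cover 12 × 4 KB = 48 KB; a single indirect block adds 1024 × 4 KB = 4 MB; a double indirect adds 1024² × 4 KB = 4 GB; and a triple indirect adds 1024³ × 4 KB = 4 TB.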

File sharing

The OS implements **file locks**.

Access Control List (ACL)

For each object (directory/file), maintain a list of subjects (users) and their permitted actions.

Capability Lists

For each subject (user), maintain a list of objects (directories/files) and their permitted actions.

In-order vs. out-of-order execution

In order: if an instruction can't execute yet, you can't skip it; everything behind it has to wait with it.

Out of order: execute as many instructions as you can, as long as there are empty slots.

Internal vs. external fragmentation

Internal: memory in a partition not used by a process is not available to other processes. Happens with fixed partitions.
To initialize, you only need to tell the OS the base register; the space is fixed for every program.
External:

  • You need both a base register and a limit register; the limit register tells how much space you have. Trying to read a location beyond the limit causes an exception.

The memory allocated to a process must be contiguous, but the sections in the executable don't have to be?

Paging

paging solves external fragmentation
but at the cost of maintaining a page table

Hard and soft links

Every file has an entry in the inode table, i.e., a corresponding inode.
The inode is only deleted when nothing points to it anymore.
Hard link: a real link; two users can point to the same inode, which raises security issues.
Soft link: the link's inode stores a path.
