Process, Thread, Synchronization, Deadlock Review Note - Operating System

My notes: https://docs.google.com/document/d/19Lfkmvq_eU-KIHKhOrx4kYBOdL8sQpxWvtaod72Oaoc/edit?usp=sharing

 

Architectural support for OSes

Application: written by programmer, compiled by programmer, uses function calls

Libraries: written by elves, provided pre-compiled, defined in headers, input to the linker, invoked like functions, may be resolved when the program is loaded

Portable OS layer: system calls (read, open, …), all high-level code

Machine-dependent layer: bootstrap, system initialization, interrupt and exception, I/O device driver, memory management, Kernel/user mode switching, processor management

 

  1. Types of architecture support

    1. Manipulating privileged machine state

    2. Generating and handling events

    3. Events: interrupts, exceptions, system calls, etc

  2. Privileged instructions

    1. What are privileged instructions?

Every CPU restricts a subset of its instructions so that only the OS may use them

  1. Who gets to execute them? – Only the OS

  2. How does the CPU know whether they can be executed?

– When it runs the OS code. Kernel mode or user mode is indicated by a status bit in a protected control register. The CPU checks this mode bit whenever a protected instruction executes; attempts to execute one in user mode are detected and prevented.

Only the operating system can

Directly access I/O devices (disks, printers, etc.)

Manipulate memory management state

Manipulate protected control registers

Execute the halt instruction

      1. Difference between user and kernel mode

        1. User programs execute in user mode

        2. OS executes in kernel mode

    1. Why do they need to be privileged?

      1. OS must be able to protect programs from each other

      2. OS must protect itself from user programs

    2. What do they manipulate?

      1. Protected control registers

      2. Memory management

        1. OS must be able to protect programs from each other

        2. OS must protect itself from user programs

        3. Memory management hardware (MMU) provides memory protection mechanisms

        4. Manipulating MMU uses protected (privileged) operations

      3. I/O devices

  1. Events

An event is an “unnatural” change in control flow.

Events immediately stop current execution.

Change mode, context (machine state), or both.

  1. Events

    1. Synchronous: faults (exceptions), system calls

    2. Asynchronous: interrupts, software interrupts

  2. What are faults, and how are they handled?

  3. What are system calls, and how are they handled?

    1. How do I/O devices use interrupts?

  4. What is the difference between exceptions and interrupts?

    1. Exceptions are caused by executing instructions

      1. CPU requires software intervention to handle a fault or trap

    2. Interrupts are caused by an external event: signal asynchronous events

      1. Device finishes I/O, timer expires, etc.
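As a user-level analogue of an asynchronous interrupt, a Unix signal arrives independently of whatever the program is executing. A minimal sketch using SIGALRM as a stand-in for a timer interrupt (this is an analogy only, not real kernel interrupt handling; requires a Unix system):

```python
import signal
import time

fired = []

def handler(signum, frame):
    # Runs asynchronously, like an interrupt handler; normal flow resumes after.
    fired.append(signum)

signal.signal(signal.SIGALRM, handler)  # register the "interrupt handler"
signal.alarm(1)                         # ask the timer to "interrupt" us in 1 second

while not fired:                        # main control flow keeps running meanwhile
    time.sleep(0.05)

print("handled signal", fired[0])
```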

 

OS modules, interfaces, and structures



Processes

  1. Processes

    1. What is a process?

The OS abstraction for execution.

    1. What is the difference between a process and a program?

    2. What is contained in a process?

  1. Process data structures

    1. Process Control Blocks (PCBs)

      1. What information does it contain?

      2. How is it used in a context switch?

    2. State queues

      1. What are process states?

      2. What is the process state graph?

      3. When does a process change state?

      4. How does the OS use queues to keep track of processes?

  2. Process manipulation

    1. What does fork() on Unix do?

      1. What does it mean for it to “return twice”?

    2. What does exec() on Unix do?

      1. How is it different from fork?

    3. How are fork and exec used to implement shells?
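The fork/exec pattern behind a shell can be sketched in Python on a Unix system (`run_command` is my own name for illustration; a real shell adds parsing, pipes, and job control):

```python
import os
import sys

def run_command(argv):
    """Skeleton of a shell's loop body: fork, exec in the child, wait in the parent."""
    pid = os.fork()                 # "returns twice": 0 in the child, the child's pid in the parent
    if pid == 0:
        try:
            os.execvp(argv[0], argv)  # replace the child's program image; never returns on success
        finally:
            os._exit(127)             # reached only if exec failed
    _, status = os.waitpid(pid, 0)  # parent blocks until the child terminates
    return os.WEXITSTATUS(status)

print(run_command([sys.executable, "-c", "pass"]))  # child's exit status
```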

 

Threads

  1. Threads

    1. What is a thread?

      1. What is the difference between a thread and a process?

      2. How are they related?

    2. Why are threads useful?

    3. What is the difference between user-level and kernel-level threads?

      1. What are the advantages/disadvantages of one over another?

  2. Thread implementation

    1. How are threads managed by the run-time system?

      1. Thread control blocks, thread queues

      2. How is this different from process management?

    2. What operations do threads support?

      1. Fork, yield, sleep, etc.

      2. What does thread yield do?

    3. What is a context switch?

    4. What is the difference between non-preemptive scheduling and preemptive thread scheduling?

      1. Voluntary and involuntary context switches

 

Synchronization

  1. Synchronization

    1. Why do we need synchronization?

      1. Coordinate access to shared data structures

      2. Coordinate thread/process execution

    2. What can happen to shared data structures if synchronization is not used?

      1. Race condition

      2. Corruption

      3. Bank account example

    3. When are resources shared?

      1. Global variables, static objects

      2. Heap objects
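The bank-account example can be made concrete: the deposit below is a read-modify-write, and without the lock two threads can both read the same old balance, so one deposit is lost. A sketch with Python threads (the names are mine):

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(n):
    global balance
    for _ in range(n):
        with lock:             # critical section: read-modify-write as one atomic step
            tmp = balance      # read
            balance = tmp + 1  # modify, write — interleaving here loses updates

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # with the lock, always 40000
```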

  2. Mutual exclusion

    1. What is mutual exclusion?

    2. What is a critical section?

      1. What guarantees do critical sections provide?

      2. What are the requirements of critical sections?

        1. Mutual exclusion (safety)

        2. Progress (liveness)

        3. Bounded waiting (no starvation: liveness)

        4. Performance

    3. How does mutual exclusion relate to critical sections?

    4. What are the mechanisms for building critical sections?

      1. Locks, semaphores, monitors, condition variables

  3. Locks

    1. What does acquire do?

    2. What does release do?

    3. What does it mean for acquire/release to be atomic?

    4. How can locks be implemented?

      1. Spinlocks

      2. Disable/enable interrupts

      3. Blocking (Nachos)

    5. How does test-and-set/swap work?

      1. What kind of lock does it implement?

    6. What are the limitations of using spinlocks, interrupts?

      1. Inefficient, interrupts turned off too long
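A spinlock built on test-and-set can be sketched as follows. Python has no test-and-set instruction, so `Lock.acquire(blocking=False)` stands in for it here: it atomically returns True if the flag was clear (and sets it) or False if it was already set.

```python
import threading

class SpinLock:
    def __init__(self):
        self._flag = threading.Lock()   # stands in for the lock's memory word

    def acquire(self):
        # test-and-set loop: retry until the flag was previously clear
        while not self._flag.acquire(blocking=False):
            pass                        # busy-wait ("spin"): burns CPU, no sleep queue

    def release(self):
        self._flag.release()            # clear the flag

count = 0
sl = SpinLock()

def work():
    global count
    for _ in range(1000):
        sl.acquire()
        count += 1                      # critical section
        sl.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)
```

The busy-wait is exactly the inefficiency noted above: a spinning thread wastes its whole time slice if the holder is preempted.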

  4. Semaphores

    1. What is a semaphore?

      1. What does P/Decrement do?

      2. What does V/Increment do?

      3. How does a semaphore differ from a lock?

      4. What is the difference between a binary semaphore and a counting semaphore?

    2. When do threads block on semaphores?

    3. When are they woken up again?

    4. Using semaphores to solve synchronization problems

      1. Readers/writers problem

      2. Bounded buffers problem
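The classic bounded-buffer solution with two counting semaphores plus a mutex can be sketched like this (class and field names are my own):

```python
import threading
from collections import deque

class BoundedBuffer:
    def __init__(self, capacity):
        self.items = deque()
        self.mutex = threading.Lock()               # protects self.items
        self.empty = threading.Semaphore(capacity)  # counts free slots
        self.full = threading.Semaphore(0)          # counts filled slots

    def put(self, item):
        self.empty.acquire()        # P(empty): block if the buffer is full
        with self.mutex:
            self.items.append(item)
        self.full.release()         # V(full): one more item to consume

    def get(self):
        self.full.acquire()         # P(full): block if the buffer is empty
        with self.mutex:
            item = self.items.popleft()
        self.empty.release()        # V(empty): one more free slot
        return item

buf = BoundedBuffer(4)
results = []

def consumer():
    for _ in range(100):
        results.append(buf.get())

t = threading.Thread(target=consumer)
t.start()
for i in range(100):
    buf.put(i)                      # blocks whenever all 4 slots are full
t.join()

print(results == list(range(100)))
```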

  5. Monitors

    1. What is a monitor?

      1. Shared data

      2. Procedures

      3. Synchronization

    2. In what way does a monitor provide mutual exclusion?

      1. To what extent is it provided?

    3. How does a monitor differ from a semaphore?

    4. How does a monitor differ from a lock?

    5. What kind of support do monitors require?

      1. Language, run-time support

  6. Condition variables

    1. What is a condition variable used for?

      1. Coordinating the execution of threads

      2. Not mutual exclusion

    2. Operations

      1. What are the semantics of Wait?

      2. What are the semantics of Signal?

      3. What are the semantics of Broadcast?

    3. How are condition variables different from semaphores?

  7. Implementing monitors

    1. What does the implementation of a monitor look like?

      1. Shared data

      2. Procedures

      3. A lock for mutual exclusion to procedures (w/ a queue)

      4. Queues for the condition variables

    2. What is the difference between Hoare and Mesa monitors?

      1. Semantics of signal (whether the woken up waiter gets to run immediately or not)

      2. What are their tradeoffs?

      3. What does Java provide?

  8. Locks and condition vars

    1. In Nachos, we don't have monitors

    2. But we want to be able to use condition variables

    3. So we isolate condition variables and make them independent (not associated with a monitor)

    4. Instead, we have to associate them with a lock (mutex)

    5. Now, to use a condition variable …

      1. Threads must first acquire the lock (mutex)

      2. CV::Wait releases the lock before blocking, acquires it after waking up
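The protocol above, with a Mesa-style while-loop recheck, looks like this using Python's `Condition` (which pairs a mutex with a wait queue, much like Nachos' lock + condition variable):

```python
import threading

cond = threading.Condition()   # a mutex plus a queue of waiters
ready = False
log = []

def waiter():
    with cond:                 # must hold the lock before calling wait
        while not ready:       # Mesa semantics: recheck the condition after waking
            cond.wait()        # atomically releases the lock, blocks, reacquires on wakeup
        log.append("woke with lock held and condition true")

def signaler():
    global ready
    with cond:
        ready = True
        cond.notify()          # Signal: wake one waiter; it runs after we drop the lock

t = threading.Thread(target=waiter)
t.start()
threading.Thread(target=signaler).start()
t.join()

print(log[0])
```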

 

Scheduling

    1. Scheduling

      1. Components

        1. Scheduler (dispatcher)

      2. When does scheduling happen?

        1. Job changes state (e.g., waiting to running)

        2. Interrupt, exception

        3. Job creation, termination

    2. Scheduling goals

      1. Goals

        1. Maximize CPU utilization

        2. Maximize job throughput

        3. Minimize turnaround time

        4. Minimize waiting time

        5. Minimize response time

      2. What is the goal of a batch system?

      3. What is the goal of an interactive system?

    3. Starvation

      1. Starvation

        1. Indefinite denial of a resource (CPU, lock)

      2. Causes

        1. Side effect of scheduling

        2. Side effect of synchronization

      3. Operating systems try to prevent starvation

    4. Scheduling algorithms

      1. What are the properties, advantages, and disadvantages of the following scheduling algorithms?

        1. First Come First Serve (FCFS)/First In First Out (FIFO)

        2. Shortest Job First (SJF)/Shortest Remaining Time First

        3. Priority

        4. Round Robin

        5. Multilevel feedback queues

      2. What scheduling algorithm does Unix use? Why?

    5. Some quick clarification

      1. Round Robin schedule

        1. If the time slice is 10 ms, a thread can use the CPU for at most 10 ms at a time

        2. If a thread gives up the CPU because it has to wait for I/O, a condition variable, etc., the scheduler switches to another thread in the ready queue
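The time-slice behavior can be illustrated with a toy round-robin simulator (my own sketch; it ignores I/O blocking and context-switch cost):

```python
from collections import deque

def round_robin(jobs, quantum):
    """jobs maps name -> CPU burst; returns each job's completion time."""
    queue = deque(jobs.items())     # ready queue, in arrival order
    time, done = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run                 # job runs for at most one quantum
        if remaining > run:
            queue.append((name, remaining - run))  # used its slice: back of the queue
        else:
            done[name] = time       # finished within this slice
    return done

print(round_robin({"A": 3, "B": 5, "C": 2}, 2))
```

With a quantum of 2, the short job C finishes first (at time 6) even though it arrived last in the queue, which is why round robin gives good response time for short jobs.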

    6. Deadlocks

      1. Deadlock conditions

      2. Deadlock detection

      3. Deadlock prevention

      4. How do today’s systems handle deadlocks? Why?

 

Reposted from: https://www.cnblogs.com/yxcindy/p/10223562.html
