Dive into Computer Systems: The Operating System Manages the Hardware


1.7 The Operating System Manages the Hardware

Back to our hello example. When the shell loaded and ran the hello program, and when the hello program printed its message, neither program accessed the keyboard, display, disk, or main memory directly. Rather, they relied on the services provided by the operating system. We can think of the operating system as a layer of software interposed between the application program and the hardware, as shown in Figure 1.10. All attempts by an application program to manipulate the hardware must go through the operating system.

The operating system has two primary purposes: (1) to protect the hardware from misuse by runaway applications, and (2) to provide applications with simple and uniform mechanisms for manipulating complicated and often wildly different low-level hardware devices. The operating system achieves both goals via the fundamental abstractions shown in Figure 1.11: processes, virtual memory, and files. As this figure suggests, files are abstractions for I/O devices, virtual memory is an abstraction for both the main memory and disk I/O devices, and processes are abstractions for the processor, main memory, and I/O devices. We will discuss each in turn.

1.7.1 Processes

When a program such as hello runs on a modern system, the operating system provides the illusion that the program is the only one running on the system. The program appears to have exclusive use of both the processor, main memory, and I/O devices. The processor appears to execute the instructions in the program, one after the other, without interruption. And the code and data of the program appear to be the only objects in the system's memory. These illusions are provided by the notion of a process, one of the most important and successful ideas in computer science.

A process is the operating system's abstraction for a running program. Multiple processes can run concurrently on the same system, and each process appears to have exclusive use of the hardware. By concurrently, we mean that the instructions of one process are interleaved with the instructions of another process. In most systems, there are more processes to run than there are CPUs to run them.

Aside: Unix, Posix, and the Standard Unix Specification

The 1960s was an era of huge, complex operating systems, such as IBM's OS/360 and Honeywell's Multics systems. While OS/360 was one of the most successful software projects in history, Multics dragged on for years and never achieved wide-scale use. Bell Laboratories was an original partner in the Multics project but dropped out in 1969 because of concern over the complexity of the project and the lack of progress. In reaction to their unpleasant Multics experience, a group of Bell Labs researchers—Ken Thompson, Dennis Ritchie, Doug McIlroy, and Joe Ossanna—began work in 1969 on a simpler operating system for a Digital Equipment Corporation PDP-7 computer, written entirely in machine language. Many of the ideas in the new system, such as the hierarchical file system and the notion of a shell as a user-level process, were borrowed from Multics but implemented in a smaller, simpler package. In 1970, Brian Kernighan dubbed the new system "Unix" as a pun on the complexity of "Multics." The kernel was rewritten in C in 1973, and Unix was announced to the outside world in 1974 [93].

Because Bell Labs made the source code available to schools with generous terms, Unix developed a large following at universities. The most influential work was done at the University of California at Berkeley in the late 1970s and early 1980s, with Berkeley researchers adding virtual memory and the Internet protocols in a series of releases called Unix 4.xBSD (Berkeley Software Distribution). Concurrently, Bell Labs was releasing their own versions, which became known as System V Unix. Versions from other vendors, such as the Sun Microsystems Solaris system, were derived from these original BSD and System V versions.

Trouble arose in the mid-1980s as Unix vendors tried to differentiate themselves by adding new and often incompatible features. To combat this trend, IEEE (Institute of Electrical and Electronics Engineers) sponsored an effort to standardize Unix, later dubbed "Posix" by Richard Stallman. The result was a family of standards, known as the Posix standards, that cover such issues as the C language interface for Unix system calls, shell programs and utilities, threads, and network programming. More recently, a separate standardization effort, known as the "Standard Unix Specification," has joined forces with Posix to create a single, unified standard for Unix systems. As a result of these standardization efforts, the differences between Unix versions have largely disappeared.

Traditional systems could only execute one program at a time, while newer multi-core processors can execute several programs simultaneously. In either case, a single CPU can appear to execute multiple processes concurrently by having the processor switch among them. The operating system performs this interleaving with a mechanism known as context switching. To simplify the rest of this discussion, we consider only a uniprocessor system containing a single CPU. We will return to the discussion of multiprocessor systems in Section 1.9.2.

The operating system keeps track of all the state information that the process needs in order to run. This state, which is known as the context, includes information such as the current values of the PC, the register file, and the contents of main memory. At any point in time, a uniprocessor system can only execute the code for a single process. When the operating system decides to transfer control from the current process to some new process, it performs a context switch by saving the context of the current process, restoring the context of the new process, and then passing control to the new process. The new process picks up exactly where it left off. Figure 1.12 shows the basic idea for our example hello scenario.

[Figure 1.12: Process context switching.]

There are two concurrent processes in our example scenario: the shell process and the hello process. Initially, the shell process is running alone, waiting for input on the command line. When we ask it to run the hello program, the shell carries out our request by invoking a special function known as a system call that passes control to the operating system. The operating system saves the shell’s context, creates a new hello process and its context, and then passes control to the new hello process. After hello terminates, the operating system restores the context of the shell process and passes control back to it, where it waits for the next command-line input.

As Figure 1.12 indicates, the transition from one process to another is managed by the operating system kernel. The kernel is the portion of the operating system code that is always resident in memory. When an application program requires some action by the operating system, such as to read or write a file, it executes a special system call instruction, transferring control to the kernel. The kernel then performs the requested operation and returns back to the application program. Note that the kernel is not a separate process. Instead, it is a collection of code and data structures that the system uses to manage all the processes.

Implementing the process abstraction requires close cooperation between both the low-level hardware and the operating system software. We will explore how this works, and how applications can create and control their own processes, in Chapter 8.

1.7.2 Threads

Although we normally think of a process as having a single control flow, in modern systems a process can actually consist of multiple execution units, called threads, each running in the context of the process and sharing the same code and global data. Threads are an increasingly important programming model because of the requirement for concurrency in network servers, because it is easier to share data between multiple threads than between multiple processes, and because threads are typically more efficient than processes. Multi-threading is also one way to make programs run faster when multiple processors are available, as we will discuss in Section 1.9.2. You will learn the basic concepts of concurrency, including how to write threaded programs, in Chapter 12.

1.7.3 Virtual Memory

Virtual memory is an abstraction that provides each process with the illusion that it has exclusive use of the main memory. Each process has the same uniform view of memory, which is known as its virtual address space. The virtual address space for Linux processes is shown in Figure 1.13 . (Other Unix systems use a similar layout.) In Linux, the topmost region of the address space is reserved for code and data in the operating system that is common to all processes. The lower region of the address space holds the code and data defined by the user’s process. Note that addresses in the figure increase from the bottom to the top.

The virtual address space seen by each process consists of a number of well-defined areas, each with a specific purpose. You will learn more about these areas later in the book, but it will be helpful to look briefly at each, starting with the lowest addresses and working our way up:

  • Program code and data. Code begins at the same fixed address for all processes, followed by data locations that correspond to global C variables. The code and data areas are initialized directly from the contents of an executable object file; in our case, the hello executable. You will learn more about this part of the address space when we study linking and loading in Chapter 7.

  • Heap. The code and data areas are followed immediately by the run-time heap. Unlike the code and data areas, which are fixed in size once the process begins running, the heap expands and contracts dynamically at run time as a result of calls to C standard library routines such as malloc and free. We will study heaps in detail when we learn about managing virtual memory in Chapter 9.

  • Shared libraries. Near the middle of the address space is an area that holds the code and data for shared libraries such as the C standard library and the math library. The notion of a shared library is a powerful but somewhat difficult concept. You will learn how they work when we study dynamic linking in Chapter 7.

  • Stack. At the top of the user's virtual address space is the user stack that the compiler uses to implement function calls. Like the heap, the user stack expands and contracts dynamically during the execution of the program. In particular, each time we call a function, the stack grows. Each time we return from a function, it contracts. You will learn how the compiler uses the stack in Chapter 3.

  • Kernel virtual memory. The top region of the address space is reserved for the kernel. Application programs are not allowed to read or write the contents of this area or to directly call functions defined in the kernel code. Instead, they must invoke the kernel to perform these operations.

For virtual memory to work, a sophisticated interaction is required between the hardware and the operating system software, including a hardware translation of every address generated by the processor. The basic idea is to store the contents of a process's virtual memory on disk and then use the main memory as a cache for the disk. Chapter 9 explains how this works and why it is so important to the operation of modern systems.

1.7.4 Files

A file is a sequence of bytes, nothing more and nothing less. Every I/O device, including disks, keyboards, displays, and even networks, is modeled as a file. All input and output in the system is performed by reading and writing files, using a small set of system calls known as Unix I/O.

This simple and elegant notion of a file is nonetheless very powerful because it provides applications with a uniform view of all the varied I/O devices that might be contained in the system. For example, application programmers who manipulate the contents of a disk file are blissfully unaware of the specific disk technology. Further, the same program will run on different systems that use different disk technologies. You will learn about Unix I/O in Chapter 10.
