Why do we need to understand how compilation systems work?
- Optimizing program performance
- Understanding link-time errors
- Avoiding security holes
Buses: transfer fixed-size chunks of bytes known as words
I/O Devices: system’s connection to the external world
Main Memory: temporary storage device that holds both a program and the data it manipulates while the processor executes the program
Processor: CPU, engine that executes instructions stored in main memory
Cache Memories
Serve as temporary staging areas for information that the processor is likely to need in the near future
Storage at one level serves as a cache for the next level (e.g. L1 as a cache for L2)
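As an analogy only (real L1/L2 caches are hardware, and the names here are mine): the staging idea can be sketched with a small dict acting as a fast cache in front of a larger, slower store.

```python
# Toy sketch of one storage level caching the next: lookups are answered
# from the small fast level when possible; on a miss we fall back to the
# larger, slower level and copy the result up for likely reuse.
slow_store = {i: i * i for i in range(1000)}  # stands in for the slower level
cache = {}                                    # stands in for the faster level
hits = misses = 0

def lookup(key):
    global hits, misses
    if key in cache:          # fast path: already staged
        hits += 1
        return cache[key]
    misses += 1               # slow path: fetch and stage it
    value = slow_store[key]
    cache[key] = value
    return value

lookup(7)                     # first access misses and fills the cache
lookup(7)                     # second access hits
print(hits, misses)           # -> 1 1
```

The second access to the same key is served from the fast level, which is exactly the payoff the hierarchy is built around.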
Operating system
- Protect the hardware from misuse by runaway applications
- Provide applications with simple and uniform mechanisms for manipulating complicated and often wildly different low-level hardware devices
- Processes
~~ a running program
interleaving --> context switching
transitions are managed by the kernel, which is a collection of code and data structures that the system uses to manage all the processes
- Threads
~~ multiple execution units
(a process consists of multiple execution units --> threads)
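A minimal sketch of the idea that threads are execution units running inside one process and sharing its data (the `counter` variable and worker function are my own illustration, not from the text):

```python
import threading

counter = 0                    # shared by every thread in this process
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:             # serialize updates to the shared data
            counter += 1

# Four execution units inside one process, all touching the same memory.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                 # -> 40000
```

Because all four threads share the process's address space, they can update `counter` directly; separate processes would each get their own copy instead.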
- Virtual Memory
~~ provides each process with the illusion that it has exclusive use of the main memory (the illusion of "I'm not just a spare", i.e. each process thinks the memory is its alone)
- Files
~~ a sequence of bytes (all I/O devices, including disks, keyboards, displays, and networks, are modeled as files)
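The "a file is just a sequence of bytes" model can be made concrete with a regular file (a throwaway temp file here, my choice for the sketch):

```python
import tempfile

# Every I/O object is presented to the program as a sequence of bytes;
# writing and reading a regular file shows that interface directly.
with tempfile.TemporaryFile() as f:
    f.write(b"hello, world")   # append 12 raw bytes to the sequence
    f.seek(0)                  # rewind to the start of the byte sequence
    data = f.read()            # read the bytes back

print(data)        # -> b'hello, world'
print(len(data))   # -> 12
```

The same read/write byte interface is what makes devices as different as disks and network sockets look uniform to application code.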
Amdahl’s law
When we speed up one part of a system, the effect on overall performance depends both on how significant that part was and on how much it was sped up
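The standard form of the law: if a part that takes fraction α of the run time is sped up by a factor k, the overall speedup is S = 1 / ((1 − α) + α/k). A quick numeric check:

```python
def amdahl_speedup(alpha, k):
    """Overall speedup S = 1 / ((1 - alpha) + alpha / k), where alpha is the
    fraction of original run time taken by the improved part and k is the
    factor by which that part is sped up."""
    return 1.0 / ((1.0 - alpha) + alpha / k)

# Speeding up 60% of the system by 3x gives only ~1.67x overall:
print(round(amdahl_speedup(0.6, 3), 2))      # -> 1.67
# Even a near-infinite speedup of that part is capped at 1/(1 - alpha) = 2.5x:
print(round(amdahl_speedup(0.6, 1e12), 2))   # -> 2.5
```

The cap 1/(1 − α) is why "how significant this part was" matters: the untouched 40% alone bounds the whole system.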
Thread-Level Concurrency
Multiple programs execute at the same time
Hyperthreading (simultaneous multi-threading): allows a single CPU to execute multiple flows of control --> having multiple copies of some of the CPU hardware (e.g. program counters and register files), while having only single copies of other parts of the hardware.
Instruction-Level Parallelism
Processors can execute multiple instructions at a time
superscalar processors: processors that can sustain execution rates faster than one instruction per cycle
Single-Instruction, Multiple-Data (SIMD) Parallelism
Allows a single instruction to cause multiple operations to be performed in parallel
--> speeds up applications that process image, sound, and video data