Serial computation: a program consists of instructions and data; at run time it independently requests and occupies a memory space, and all computation is confined to that space.
Parallel computation: relatively independent processes are distributed across different nodes, each scheduled by its own operating system and holding its own CPU and memory resources (memory may be shared); processes exchange information through message passing.
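The message-passing style described above can be sketched with Python's standard multiprocessing module. This is only a single-machine stand-in for real inter-node communication (e.g. MPI on a cluster); the function and variable names are illustrative:

```python
# Message-passing sketch: two processes exchange data through a pipe rather
# than sharing memory directly. Assumption: processes on one machine stand
# in for separate cluster nodes.
from multiprocessing import Process, Pipe

def worker(conn):
    # Receive a task from the parent process, compute, send the result back.
    data = conn.recv()                    # blocking receive, like MPI_Recv
    conn.send(sum(x * x for x in data))   # send result back, like MPI_Send
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send([1, 2, 3, 4])        # hand the worker its share of data
    result = parent_conn.recv()           # 1 + 4 + 9 + 16 = 30
    p.join()
    print(result)
```

Note that the two processes share no variables: all information crosses the pipe explicitly, which is exactly the property that lets the same program structure scale to physically separate nodes.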
Classification of parallel computers:
Flynn's taxonomy
•SISD (Single Instruction stream, Single Data stream) systems
•SIMD (Single Instruction stream, Multiple Data streams) systems
•MISD (Multiple Instruction streams, Single Data stream) systems
•MIMD (Multiple Instruction streams, Multiple Data streams) systems
Five physical machine models (actual machine architectures):
— PVP (Parallel Vector Processor)
— SMP (Symmetric Multiprocessor)
— MPP (Massively Parallel Processor)
— COW (Cluster of Workstations)
— DSM (Distributed Shared Memory multiprocessor)
https://computing.llnl.gov/tutorials/parallel_comp/
Introduction to Parallel Computing
Blaise Barney, Lawrence Livermore National Laboratory
Table of Contents
- Overview
- Concepts and Terminology
- Parallel Computer Memory Architectures
- Parallel Programming Models
- Designing Parallel Programs
- Parallel Examples
- References and More Information
Flynn's Classical Taxonomy
- There are different ways to classify parallel computers. One of the more widely used classifications, in use since 1966, is called Flynn's Taxonomy.
- Flynn's taxonomy distinguishes multi-processor computer architectures according to how they can be classified along the two independent dimensions of Instruction and Data. Each of these dimensions can have only one of two possible states: Single or Multiple.
- The matrix below defines the 4 possible classifications according to Flynn:
SISD: Single Instruction, Single Data
SIMD: Single Instruction, Multiple Data
MISD: Multiple Instruction, Single Data
MIMD: Multiple Instruction, Multiple Data
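The SISD/SIMD distinction can be illustrated with a small sketch. Real SIMD hardware (e.g. SSE/AVX vector units) applies one instruction to several data elements in a single step; Python cannot express that directly, so the second form below only simulates the idea of one operation over a whole vector:

```python
# Conceptual contrast between SISD- and SIMD-style execution (simulation
# only: no real vector registers are involved here).
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

# SISD style: one instruction stream works on one data element at a time.
sisd_result = []
for i in range(len(a)):
    sisd_result.append(a[i] + b[i])

# SIMD style: the *same* add is applied across all lanes of the vector.
simd_result = [x + y for x, y in zip(a, b)]

print(sisd_result)  # [11, 22, 33, 44]
print(simd_result)  # [11, 22, 33, 44] -- same answer, different execution model
```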
What is Parallel Computing?
- Traditionally, software has been written for serial computation:
- To be run on a single computer having a single Central Processing Unit (CPU);
- A problem is broken into a discrete series of instructions.
- Instructions are executed one after another.
- Only one instruction may execute at any moment in time.
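The serial model above, where instructions execute strictly one after another on a single CPU, might look like the following (the sum-of-squares "problem" is an illustrative choice, not from the tutorial):

```python
# Serial computation: the problem (sum of squares of 0..7) is broken into a
# discrete series of instructions that execute one after another; only one
# instruction runs at any moment in time.
def solve_serial(n):
    total = 0
    for i in range(n):       # each iteration runs only after the previous one
        total += i * i
    return total

print(solve_serial(8))       # 0+1+4+9+16+25+36+49 = 140
```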
- In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
- To be run using multiple CPUs
- A problem is broken into discrete parts that can be solved concurrently
- Each part is further broken down to a series of instructions
- Instructions from each part execute simultaneously on different CPUs
- The compute resources can include:
- A single computer with multiple processors;
- An arbitrary number of computers connected by a network;
- A combination of both.
- The computational problem usually demonstrates characteristics such as the ability to be:
- Broken apart into discrete pieces of work that can be solved simultaneously;
- Executed as multiple program instructions at any moment in time;
- Solved in less time with multiple compute resources than with a single compute resource.