The Difference Between Concurrency and Parallelism

Logical Control Flow

When a program has been loaded into memory and is executing (i.e., as a process), the operating system gives it the illusion that it has the CPU to itself by letting it share the CPU with other processes in time slices (and, through virtual memory, the illusion that it has memory to itself). As the CPU executes a process's instructions, the sequence of executed instructions strung together (which can also be viewed as the successive values of the program counter, PC) forms a "logical control flow".

The concept of a logical control flow is not limited to processes; it also shows up in exception handlers, threads, and Java processes. Both "concurrency" and "parallelism" are notions defined in terms of logical control flows.


Concurrency

When two logical control flows execute in an interleaved fashion, we say they are "concurrent". More precisely, for logical control flows A and B: if B starts executing after A starts and before A finishes, then A and B are concurrent. For example, in the figure below:

[Figure: execution timeline of logical flows A, B, and C on a single processor]

Here A and B are concurrent, because B starts after A starts and before A finishes. B and C, however, are not concurrent, because C does not start before B finishes. By the same reasoning, A and C are concurrent.

Note that concurrency has nothing to do with the number of CPUs or the number of machines; as long as two logical flows satisfy the relationship above, we call them concurrent.
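
To make this concrete, here is a minimal sketch (my own illustration, not from the book or the original post) in Go. Restricting the runtime to a single execution context with runtime.GOMAXPROCS(1) forces the two logical flows to take turns, so they are concurrent but not parallel:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	// Pin the Go scheduler to a single execution context, so at most one
	// goroutine's instructions run at any instant: concurrency without parallelism.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	flow := func(name string) {
		defer wg.Done()
		for i := 0; i < 3; i++ {
			fmt.Printf("%s: step %d\n", name, i)
			runtime.Gosched() // yield, letting the scheduler switch to the other flow
		}
	}

	wg.Add(2)
	go flow("A")
	go flow("B")
	wg.Wait() // output typically interleaves A and B even though only one core is in use
}
```

The interleaved output is exactly the "A runs for a while, then B runs for a while" relationship described above, produced on a single core.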


Parallelism

If two logical control flows execute at the same time (within the same CPU time slice) on different CPUs (multiple cores) or on different machines, we say they are parallel. For example, in the figure below:

[Figure: logical flows A, B, C, and D running on two processor cores]

Here A and C, and likewise B and D, execute in parallel.

Note that parallelism requires multiple processing cores.
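
For contrast, a similar sketch (again my own illustration, assuming a machine with at least two cores): with multiple cores available to the Go runtime, the two flows can make progress at the same instant, i.e., in parallel.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// busyWork is a CPU-bound loop standing in for "a pile of instructions to execute".
func busyWork() {
	sum := 0
	for i := 0; i < 200_000_000; i++ {
		sum += i
	}
	_ = sum
}

func main() {
	// Let goroutines run on all available cores (this has been the default since Go 1.5).
	runtime.GOMAXPROCS(runtime.NumCPU())

	start := time.Now()
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); busyWork() }()
	go func() { defer wg.Done(); busyWork() }()
	wg.Wait()

	// On a machine with two or more cores, the elapsed time is close to that of a
	// single busyWork() call, because the two flows ran in parallel instead of taking turns.
	fmt.Println("elapsed:", time.Since(start))
}
```

If you set GOMAXPROCS back to 1, the same program still works, but the elapsed time roughly doubles: the two flows remain concurrent, yet they are no longer parallel.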




In addition, I came across an interesting set of comics online that illustrates the difference between concurrency and parallelism, which I'd like to share (images from https://code.google.com/archive/p/rspace/source/concur/source):

Suppose a gopher is burning books: the books represent the instructions to be executed, the incinerator represents the CPU, and the process of the gopher carting the books over one load at a time and burning them constitutes a logical control flow.

[Comic: a gopher carting books to an incinerator and burning them]

Now suppose two gophers, A and B (two logical control flows), are burning books, as in the figure below. Since there is only one incinerator, while gopher A is burning books, B has to wait (its context is saved); after A has burned for a while, it becomes B's turn (a context switch). Their book burning thus alternates, and we say they are concurrent. (With multiple incinerators, flows that satisfy this interleaving relationship are still said to be concurrent.)

[Comic: gophers A and B taking turns at a single incinerator]

When the two gophers burn books at the same time, we say they are parallel. In the example below, because there are two incinerators (multiple cores), the burning itself can happen simultaneously:

[Comic: gophers A and B each burning books at their own incinerator]




References:

  1. Computer Systems: A Programmer's Perspective (CSAPP), 3rd Edition
  2. Rob Pike, "Concurrency Is Not Parallelism"

Reposted from: https://www.cnblogs.com/liqiuhao/p/8082246.html
