I still can't get a concrete sense of the difference from your description. Here is a summary from Stack Overflow:
It's often said that concurrency is the decomposition of a complex problem into smaller components, and that if you cannot correctly divide something into smaller parts, it's hard to solve it using concurrency. But that view is incomplete: those smaller components may depend on each other sequentially to complete, so even if you divide a problem into small components, that does not by itself mean you achieve concurrency or parallelism.
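A minimal sketch of that point (the three step functions are illustrative, not from the original): even after the problem is decomposed into separate components, each step consumes the previous step's output, so no amount of decomposition lets them run at the same time.

```python
# Decomposition alone does not give parallelism: these are three
# separate components, but each depends on the previous one's output,
# so they can only execute one after another.
def load(x):
    return x + 1       # step 1

def transform(x):
    return x * 2       # step 2: needs step 1's result

def save(x):
    return x - 3       # step 3: needs step 2's result

result = save(transform(load(5)))  # forced sequential order
print(result)  # (5 + 1) * 2 - 3 = 9
```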
In all my classes on parallel and distributed algorithms (both in my BS and MS) we never talked about "the concurrency we obtained, and now let's see how to obtain parallelism". If you use the word concurrency to describe an algorithm, then you imply parallelism, and vice versa.
When you have the abstract form of an algorithm in mind, you then have to choose whether you will implement it with Message Passing, Shared Memory, or maybe a hybrid. You will also have to consider the type of memory access (NUMA, UMA, etc.) and the topology used (Hypercube, Torus, Ring, Mesh, Tree, etc.).
This seems a lot of work to someone who just wants something, maybe even simple, done in a parallel way (e.g. parallel for).
And it is a lot of work especially if you change the topology (so you can have all of its advantages).
So you write the parallel code (be it simple or complex) and the VM or compiler will choose what seems the best way to run it, possibly even running it sequentially (an example would be the Task Parallel Library for .NET). Note that I am talking about concurrency within a single program / algorithm, not between independent programs running on a system.
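The same idea can be sketched with Python's standard library instead of the TPL (the `square` helper is illustrative): the algorithm is expressed once as independent tasks, and the executor, not the programmer, decides how they actually run; swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` changes the execution strategy without changing the algorithm.

```python
# A minimal "parallel for" sketch: the algorithm (square each element)
# is written once; the executor decides how the tasks are scheduled.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

data = [1, 2, 3, 4]

# pool.map preserves input order regardless of completion order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, data))

print(results)  # [1, 4, 9, 16]
```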
From an implementation point of view, if you say “parallelism” you usually intend a program that runs on the local computer or a cluster (Shared Memory communication), and “distributed” when you run the program on a grid (Message Passing communication).
So this also means: concurrency is the abstract principle, and parallelism is the way it is implemented [Shared Memory, Message Passing, or a hybrid of both; type of memory access (NUMA, UMA, etc.)].
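The two implementation styles above can be contrasted in a toy sketch, simulated with threads inside one process (the worker names are illustrative): the shared-memory workers mutate a common structure under a lock, while the message-passing workers communicate only through a channel.

```python
# Shared memory vs. message passing, sketched with threads in one process.
import threading
import queue

# Shared-memory style: workers write into one shared list under a lock.
shared = []
lock = threading.Lock()

def shm_worker(v):
    with lock:
        shared.append(v * v)

# Message-passing style: workers never touch shared state; they
# send results over a channel (a thread-safe Queue) instead.
channel = queue.Queue()

def msg_worker(v):
    channel.put(v * v)

for worker in (shm_worker, msg_worker):
    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

received = sorted(channel.get() for _ in range(4))
print(sorted(shared), received)  # [0, 1, 4, 9] [0, 1, 4, 9]
```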
BTW: You can get concurrency on a single-core processor using preemptive time-shared threads. But what you cannot achieve on a single-core processor is parallelism.
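CPython happens to be a convenient demonstration of exactly this (the `worker` function is illustrative): its Global Interpreter Lock means only one thread executes Python bytecode at any instant, so threads give you concurrency (interleaved progress) but not CPU parallelism, just like the single-core case.

```python
# Concurrency without parallelism: two threads make interleaved
# progress, but at any instant only one is executing (here enforced
# by CPython's GIL, analogous to a single-core time-shared CPU).
import threading

log = []

def worker(name, steps):
    for i in range(steps):
        log.append((name, i))  # record this thread's progress

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()

# Both threads completed all their steps "during the same period".
print(sorted(set(name for name, _ in log)))  # ['A', 'B']
```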
Reference:
Concurrency is not Parallelism (it's better): http://concur.rspace.googlecode.com/hg/talk/concur.html