Concurrency vs Parallelism

February 20, 2024

Overview

Concurrency and Parallelism are similar but distinct concepts. The former is possible with a single core, while the latter is not. Furthermore, both can take place at the same time, but in different locations. For example, JavaScript is a language whose applications run on a single thread of execution. However, it has concurrency capabilities (e.g. through its async/await feature). When an asynchronous function is called in JavaScript to execute several I/O operations, that call and its continuations run concurrently on a single thread. Yet the I/O operations themselves run outside the JavaScript application, within one of the operating system's I/O management subsystems, which can handle them in parallel.
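The overlap described above can be sketched with async/await. In this sketch, `delay` and `fetchBoth` are illustrative names (not from any library); `delay` stands in for a real I/O call such as a file read or HTTP request:

```javascript
// A stand-in for an I/O operation: resolves with `value` after `ms` milliseconds.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

async function fetchBoth() {
  // Both operations are started before either is awaited, so their waiting
  // periods overlap on the single JavaScript thread: ~100 ms total, not ~200 ms.
  const a = delay(100, 'first');
  const b = delay(100, 'second');
  return [await a, await b];
}

fetchBoth().then((results) => console.log(results)); // prints [ 'first', 'second' ]
```

While both `delay` calls are pending, the JavaScript thread is free to do other work; the actual waiting happens in the runtime's I/O layer.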

Table of Contents

Comparison
Concurrency
Hyper-Threading
Parallelism
Multi-Threading
Conclusion

Comparison
^

Execution
  Concurrency: Tasks can run in overlapping time periods, but they cannot start or finish simultaneously.
  Parallelism: Tasks can start, run, or finish simultaneously.

Purpose
  Concurrency: Increase utilization of a single CPU core.
  Parallelism: Increase utilization of multiple CPU cores.

Use Case
  Concurrency: Scenarios where a core or application is idling, waiting for an I/O operation to complete or return.
  Parallelism: Scenarios where work requires CPU processing and can be broken up into smaller pieces.

Table template created by ChatGPT of OpenAI.

Concurrency
^

Concurrency results in higher utilization of a single thread (or other resource) by making use of time that would otherwise be spent idle. It occurs at all levels of all systems, from CPUs to high-level JavaScript applications running in a browser. Concurrency is a distinct area of study, pioneered by Edsger Dijkstra in the mid-1960s. The concept was introduced to modern programming languages through several iterations that continuously improved its ease of use:

  1. Callbacks
  2. Promises
  3. Async/Await

Today, practically all modern programming languages provide asynchronous constructs to facilitate concurrency.

Learn more about the evolution of asynchronous programming in JavaScript.

Learn more about Asynchronous vs Synchronous.
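As a sketch, here is the same asynchronous operation expressed in each of the three iterations listed above. The `readData*` functions are hypothetical stand-ins for any asynchronous API:

```javascript
// 1. Callbacks: the continuation is passed as an argument, using the
//    conventional (error, result) signature.
function readDataCb(callback) {
  setTimeout(() => callback(null, 'data'), 10);
}

// 2. Promises: the operation returns an object representing the future result,
//    which can be chained with .then() instead of nesting callbacks.
function readDataPromise() {
  return new Promise((resolve) => setTimeout(() => resolve('data'), 10));
}

// 3. Async/await: promise-based code written in a synchronous style.
async function readDataAwait() {
  const result = await readDataPromise();
  return result;
}
```

Each iteration reduces boilerplate and nesting: callbacks require passing continuations explicitly, promises flatten the chain, and async/await reads like synchronous code.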

Hyper-Threading
^

Hyper-Threading is a technology developed by Intel that allows a single physical CPU core to appear as two logical cores to the operating system. This may give the impression that parallelism is occurring between these virtual cores, but it's actually concurrency. The core capitalizes on cycles that would otherwise be spent idle (e.g. while waiting on memory accesses) to process the instructions of another task. This is concurrency.

Parallelism
^

Parallelism enables different tasks to start at the exact same time because there are separate workers able to complete them. These workers could be entirely different machines, as in a distributed processing cluster like those utilized by Apache Spark. They could also be different cores of the same CPU.

Multi-Threading
^

When an application runs multiple tasks in multiple threads, those threads may or may not run in parallel (on separate cores simultaneously). The operating system's scheduler decides how threads are allocated to CPU cores based on the system's load and scheduling policies. Whether the tasks ultimately run in parallel therefore depends on the CPU itself: how many cores it has and how loaded they are.

Conclusion
^

This post outlines the differences between systems that utilize concurrency and parallelism. But there are similarities: in both cases, the work of multiple tasks overlaps in time. This poses a challenge in testing, because the order in which tasks complete is not deterministic. In one run the first task may finish before the second; in another run the opposite may be true (even if both tasks require exactly the same amount of work). This nondeterminism arises because system resources (e.g. entire CPU cores and blocks of RAM) are rarely allocated in isolation to a single application. Testing frameworks are aware of this, and they provide tooling to assert results in an unordered manner.
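One way to sidestep completion-order nondeterminism is to assert on results in a fixed order regardless of when each task finishes. In this sketch, `task` is an illustrative stand-in for any concurrent operation with variable timing; `Promise.all` returns results in input order, not completion order:

```javascript
// A task that completes after a random delay, so completion order is
// nondeterministic across runs.
const task = (value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), Math.random() * 20));

// Promise.all preserves input order no matter which task finishes first,
// so assertions on `results` do not depend on timing.
Promise.all([task('a'), task('b')]).then((results) => {
  console.log(results); // prints [ 'a', 'b' ] on every run
});
```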