Dev Dump

🎯 Concurrency vs Parallelism: Complete Guide


Concurrency is the ability to make progress on several programs, or several parts of a program, during overlapping time periods. If a time-consuming task can be performed asynchronously or in parallel, this improves both the throughput and the interactivity of the program.

A modern computer has several CPUs, or several cores within one CPU. The ability to leverage these cores can be the key to a successful high-volume application.


📄 Program

🔹 Analogy: A recipe book 📖
🔹 Example: chrome.exe before launch

Characteristics:

  • Static set of instructions
  • Stored on disk
  • Not actively executing
  • No memory allocation

🧱 Process

🔹 Analogy: Baking a cake using a recipe 🎂
🔹 Example: Running chrome.exe → the Chrome browser launches

🧠 Key Points:

  • Independent execution
  • Has its own memory space
  • Managed by the OS
  • Can contain multiple threads

🧵 Thread

🔹 Analogy: Multiple bakers working on parts of the same cake 🍰
🔹 Example: The Chrome browser:

  • 1 thread for UI
  • 1 thread for network
  • 1 thread for user input

Characteristics:

  • Lightweight execution unit
  • Shares memory with other threads in the same process
  • Has its own call stack
  • Created and owned by a process, scheduled by the OS

  • A program is a set of instructions and associated data that resides on the disk and is loaded by the operating system to perform some task.
  • In order to run a program, the operating system’s kernel is first asked to create a new process, which is an environment in which a program executes.

A process is a program in execution.

| Process 🧱 | Thread 🧵 |
| --- | --- |
| Independent program | Part of a process |
| Own memory | Shares memory with others |
| Heavyweight | Lightweight |
| Crash doesn’t affect others | Crash may affect other threads |
| Harder communication | Easy communication |
| Example: PostgreSQL server | Example: Chrome tabs inside one process |

A process runs independently and isolated from other processes. It cannot directly access shared data in other processes. The resources of the process, e.g. memory and CPU time, are allocated to it by the operating system. A process can have multiple threads; when a process is created, it starts with a single thread called the main thread.

A thread is a so-called lightweight process. It has its own call stack but can access shared data of other threads in the same process. Every thread has its own memory cache: if a thread reads shared data, it stores this data in its own cache.

Thread Creation Methods:

  • Extending the Thread class.
  • Implementing the Runnable interface.
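
Both creation methods can be sketched as below; a minimal example (class and method names are illustrative), where each thread simply bumps a shared atomic counter so we can confirm it ran:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadCreation {

    // 1) Extending the Thread class and overriding run()
    static class Greeter extends Thread {
        private final AtomicInteger counter;
        Greeter(AtomicInteger counter) { this.counter = counter; }
        @Override public void run() { counter.incrementAndGet(); }
    }

    // Starts one thread of each kind, waits for both, and returns
    // how many of them actually ran.
    static int runBoth() {
        AtomicInteger ran = new AtomicInteger();

        Thread t1 = new Greeter(ran);
        // 2) Implementing the Runnable interface (here as a lambda)
        Thread t2 = new Thread(() -> ran.incrementAndGet());

        t1.start();   // start() schedules run() on a new thread;
        t2.start();   // calling run() directly would NOT create a thread
        try {
            t1.join(); // wait for both threads to finish
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return ran.get();
    }

    public static void main(String[] args) {
        System.out.println("threads completed: " + runBoth()); // prints 2
    }
}
```

Note that `start()` is what hands the thread to the scheduler; invoking `run()` directly would execute it synchronously on the calling thread.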

A thread can later re-read the shared data, but without proper synchronization it may see a stale cached value rather than the latest one.

A Java application runs by default in one process. Within a Java application you work with several threads to achieve parallel processing or asynchronous behavior.



🧩 Concurrency

Concurrency is about managing multiple tasks within the same time period. It doesn’t necessarily mean they are all running simultaneously. Think of it as juggling: you’re handling multiple balls (tasks), but you’re not holding them all in the air at the exact same instant. You switch between them rapidly.

Imagine you are a single chef preparing two dishes. You chop vegetables for one dish, then stir a pot for the other, and keep switching between the two tasks. You are not cooking both dishes at the same time, but you are making progress on both.

Key Characteristics:

  • Tasks appear to run together
  • Can work on a single core
  • Uses context switching
  • Focus: Managing tasks
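
The single-chef analogy above can be sketched with a single worker thread that interleaves the steps of two tasks: one worker, no parallel hardware, yet both "dishes" make progress (task names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleChef {

    // One worker thread alternates between the steps of two dishes:
    // concurrency on a single core, achieved by interleaving.
    static List<String> cook() {
        List<String> log = Collections.synchronizedList(new ArrayList<>());
        ExecutorService chef = Executors.newSingleThreadExecutor();
        for (int step = 1; step <= 3; step++) {
            final int s = step;
            chef.submit(() -> log.add("soup step " + s));   // dish 1
            chef.submit(() -> log.add("roast step " + s));  // dish 2
        }
        chef.shutdown();
        try {
            chef.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log;
    }

    public static void main(String[] args) {
        // A single-thread executor runs tasks one at a time in submission
        // order, so the log alternates: soup 1, roast 1, soup 2, roast 2, ...
        cook().forEach(System.out::println);
    }
}
```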

🚀 Parallelism

Parallelism refers to the simultaneous execution of multiple tasks. This requires multiple processing units (e.g., multi-core processors) so that tasks can truly run at the same time.

Parallelism is about doing multiple tasks at the same time. It is a subset of concurrency, but it specifically requires hardware support for simultaneous execution.

Imagine you are two chefs in a kitchen, each preparing a dish. Both dishes are being cooked at the same time, and progress is made simultaneously.

Key Characteristics:

  • Tasks truly run simultaneously
  • Requires multiple cores
  • Each core runs a task independently
  • Focus: Executing tasks
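
A minimal sketch of true parallelism in Java is a parallel stream, which splits the work across the available cores (the actual speedup depends on the machine):

```java
import java.util.stream.LongStream;

public class ParallelSum {

    // Sums 1..n; .parallel() lets the common fork-join pool spread
    // chunks of the range across all available CPU cores.
    static long parallelSum(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(1_000_000)); // prints 500000500000
    }
}
```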


⚔️ Concurrency vs. Parallelism Comparison

| Concurrency 🧩 | Parallelism 🚀 |
| --- | --- |
| Tasks appear to run together | Tasks truly run simultaneously |
| Can work on a single core | Requires multiple cores |
| Uses context switching | Each core runs a task independently |
| Focus: Managing tasks | Focus: Executing tasks |

Synchronous execution:

  • Refers to line-by-line execution of code: if a function is invoked, program execution waits until the function call completes
  • Blocking behavior - the program waits for each operation to complete
  • Simple to understand and debug
  • Can cause UI freezing in single-threaded applications

Asynchronous execution:

  • Asynchronous programming is a means of parallel programming in which a unit of work runs separately from the main application thread and notifies the calling thread of its completion, failure, or progress
  • Non-blocking behavior - the program continues execution while waiting
  • Better user experience - the UI remains responsive
  • More complex to implement and debug
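
One way to see the difference in Java is with CompletableFuture; a minimal sketch, where the sleep stands in for any slow operation (I/O, a network call, etc.):

```java
import java.util.concurrent.CompletableFuture;

public class SyncVsAsync {

    // Stand-in for any slow operation.
    static int slowComputation() {
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return 42;
    }

    public static void main(String[] args) {
        // Synchronous: the caller blocks here until the result is ready.
        int sync = slowComputation();
        System.out.println("sync result: " + sync);

        // Asynchronous: the work runs on another thread; the caller keeps
        // going and is notified via the thenAccept callback.
        CompletableFuture<Integer> async =
                CompletableFuture.supplyAsync(SyncVsAsync::slowComputation);
        System.out.println("main thread is free to do other work...");
        async.thenAccept(r -> System.out.println("async result: " + r));

        async.join(); // demo only: don't exit before the callback fires
    }
}
```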

A critical section is any piece of code that may be executed concurrently by more than one thread of the application and that accesses shared data or resources.

Examples of Critical Sections:

  • Shared variable access
  • Database operations
  • File I/O operations
  • Resource allocation
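
In Java, a critical section is commonly guarded with the synchronized keyword so only one thread can enter it at a time; a minimal sketch (the Account class and amounts are illustrative):

```java
public class Account {
    private long balance = 0;

    // Critical section: a read-modify-write on shared state.
    // synchronized lets only one thread at a time execute it.
    synchronized void deposit(long amount) {
        balance += amount;
    }

    synchronized long balance() {
        return balance;
    }

    // Four threads each deposit 1 ten thousand times; with the lock
    // in place the final balance is exactly 40,000.
    static long demo() {
        Account account = new Account();
        Thread[] tellers = new Thread[4];
        for (int i = 0; i < tellers.length; i++) {
            tellers[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) account.deposit(1);
            });
            tellers[i].start();
        }
        try {
            for (Thread t : tellers) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return account.balance();
    }

    public static void main(String[] args) {
        System.out.println("final balance: " + demo()); // prints 40000
    }
}
```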

Race conditions happen when threads run through critical sections without thread synchronization. The threads “race” through the critical section to write or read shared resources and depending on the order in which threads finish the “race”, the program output changes.


Common Race Condition Scenarios:

  • Incrementing a shared counter
  • Adding/removing from a shared collection
  • Updating shared configuration
  • Resource allocation
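
The shared-counter case is easy to reproduce. In this sketch, two threads each perform 100,000 increments on a plain counter and on an atomic one; lost updates mean the unsynchronized total usually falls short of 200,000:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterRace {

    // Runs two threads that each increment both counters 100,000 times.
    // Returns { racyTotal, safeTotal }.
    static int[] run() {
        int[] racy = {0};                         // plain, unsynchronized counter
        AtomicInteger safe = new AtomicInteger(); // atomic counter
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                racy[0]++;               // read-modify-write: NOT atomic
                safe.incrementAndGet();  // single atomic operation
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new int[] { racy[0], safe.get() };
    }

    public static void main(String[] args) {
        int[] totals = run();
        // safe is always 200000; racy is usually less due to lost updates
        System.out.println("racy = " + totals[0] + ", safe = " + totals[1]);
    }
}
```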

🔁 Context Switching

Context switching means pausing one thread and resuming another by saving and restoring their execution states.

🔹 Analogy: A chef switching between cooking soup and checking the roast 🍲🔥
🔹 Enables: Multitasking on a single core

🧠 Steps:

  1. Interrupt: OS decides to pause a thread
  2. Save state: Register values, program counter saved
  3. Load new state: Load data of the new thread
  4. Execute: Resume from last point

⚠️ Performance Consideration:

  • Adds overhead
  • Too much switching = ⛔ performance hit
  • Context switch time varies by OS (typically 1-30 microseconds)
  • Cache misses can occur after context switches

⏱️ Thread Scheduler

The thread scheduler is the OS component that decides which thread runs, and when.

📋 Uses algorithms like:

  • Round-robin: Each thread gets equal time slices
  • Priority-based scheduling: Higher priority threads run first
  • FIFO (First In, First Out): Threads run in order of arrival
  • Preemptive scheduling: OS can interrupt running threads
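
From Java, the scheduler itself is out of reach, but thread priorities let you nudge the priority-based policy; a small sketch (priorities are only hints, and the OS may ignore them entirely):

```java
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread background = new Thread(() -> System.out.println("low-priority work"));
        Thread urgent = new Thread(() -> System.out.println("high-priority work"));

        // Priorities range from MIN_PRIORITY (1) to MAX_PRIORITY (10);
        // they are hints to the scheduler, not guarantees of ordering.
        background.setPriority(Thread.MIN_PRIORITY);
        urgent.setPriority(Thread.MAX_PRIORITY);

        background.start();
        urgent.start();
        background.join();
        urgent.join();
    }
}
```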

🚨 Overhead:

  • Too many threads = more time spent switching than doing actual work
  • Scheduling decisions add computational overhead
  • Priority inversion can occur in complex systems

⚡ Multithreading

Multithreading allows multiple threads to run concurrently inside one process.

🔍 What it solves:

  • Blocking operations - UI remains responsive during I/O
  • Inefficient CPU use - better utilization of available cores
  • UI freezing - background tasks don’t block user interface

🧠 Benefits:

  • Better performance - parallel execution of tasks
  • Non-blocking behavior - responsive applications
  • 🧩 Shared memory = fast communication between threads
  • 🖥 Responsive UI - user interface remains interactive
  • 📡 Real-time apps (games, streaming)
  • 💥 Better CPU core utilization - leverages multiple cores

Common Challenges:

  • Race conditions - unpredictable behavior
  • Deadlocks - threads waiting for each other indefinitely
  • Memory consistency - visibility issues between threads
  • Debugging complexity - non-deterministic behavior

| Metric | Definition | Example |
| --- | --- | --- |
| Throughput | Number of tasks completed per unit time | Files processed per second |
| Latency | Time taken to complete a single task | Time to process one file |
| Concurrency | Number of tasks in progress simultaneously | Number of active threads |
| Parallelism | Number of tasks executing simultaneously | Number of CPU cores utilized |
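
As a worked example of the first two metrics (the numbers are illustrative): a worker that processes 500 files in 2 seconds has a throughput of 250 files/second, while its average latency is the time spent per file:

```java
public class Metrics {

    // Throughput: completed tasks per unit of time.
    static double throughput(long tasksCompleted, double elapsedSeconds) {
        return tasksCompleted / elapsedSeconds;
    }

    // Average latency: elapsed time per completed task.
    static double avgLatency(double elapsedSeconds, long tasksCompleted) {
        return elapsedSeconds / tasksCompleted;
    }

    public static void main(String[] args) {
        System.out.println(throughput(500, 2.0)); // 250.0 files/second
        System.out.println(avgLatency(2.0, 500)); // 0.004 seconds per file
    }
}
```

Note that high throughput does not imply low latency: a batch system may complete many files per second while each individual file waits a long time in the queue.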

Modern Implications:

  • Single-thread performance improvements are slowing
  • Multi-core processors are becoming the norm
  • Concurrent programming is increasingly important
  • Parallel algorithms are essential for performance