Inter-Process Communication (IPC): Methods and Mechanisms
Inter-process communication (IPC) refers to the set of mechanisms an operating system kernel provides to allow separate processes to exchange data, coordinate execution, and synchronize state. IPC sits at the intersection of process management, concurrency and synchronization, and kernel design, making it foundational to multi-process application architecture on every major operating system platform. The choice of IPC mechanism determines performance characteristics, security isolation boundaries, and failure behavior at the system level.
Definition and scope
IPC encompasses any OS-facilitated pathway through which two or more processes — each occupying a distinct address space — transfer information or coordinate activity. Because processes are isolated by the OS memory manager, each running in its own protected virtual address space, direct memory access across process boundaries is prohibited without kernel mediation. IPC mechanisms are the controlled, kernel-supervised exceptions to that isolation.
The scope of IPC includes both same-host (local) and cross-host (networked or distributed) communication pathways. The Open Group Base Specifications (POSIX.1-2017) standardize core IPC interfaces, including pipes, message queues, semaphores, and shared memory, across the System V (XSI) and POSIX IPC API families. On Linux, the ipc(7) man page documents the System V IPC interfaces, and IPC namespaces (introduced in Linux 2.6.19) provide per-namespace isolation of IPC objects. On Windows, IPC facilities are specified in the Windows API documentation maintained by Microsoft and include named pipes, mailslots, COM/DCOM, and Remote Procedure Call (RPC).
IPC is structurally distinct from intra-process thread communication, where shared address space makes direct memory access permissible under threading library rules. The kernel acts as arbiter in IPC — enforcing access controls, buffering data, and managing lifecycle for all IPC objects.
How it works
IPC mechanisms vary along two primary axes: data transfer model (copy-based vs. mapping-based) and synchronization semantics (blocking vs. non-blocking, ordered vs. unordered).
The six principal IPC mechanism classes recognized across POSIX and major OS implementations are:
- Pipes and FIFOs — A unidirectional byte stream between two processes. Anonymous pipes connect a parent process to a child process created by fork(). Named pipes (FIFOs) persist in the filesystem namespace, allowing unrelated processes to communicate. Data passes through a kernel buffer with FIFO ordering; writes block when the buffer is full, and reads block when it is empty.
- Message queues — A kernel-maintained queue of discrete messages. System V messages carry a type integer that readers can select on; POSIX messages carry a priority that determines dequeue order, enabling priority semantics. POSIX message queues (mq_open, mq_send, mq_receive) and the older System V message queues (msgget, msgsnd, msgrcv) are the two standard variants. POSIX message queues support mq_notify() for asynchronous notification.
- Shared memory — The highest-throughput IPC mechanism. The kernel maps the same physical memory pages into the virtual address spaces of two or more processes. Data is not copied through the kernel; processes read and write the shared region directly. POSIX shared memory uses shm_open() and mmap(); System V shared memory uses shmget() and shmat(). Shared memory requires a separate synchronization mechanism (such as a semaphore or mutex) to prevent race conditions.
- Semaphores — Counting or binary synchronization primitives used to coordinate access to shared resources. POSIX semaphores (sem_open, sem_wait, sem_post) come in named (cross-process) and unnamed variants; unnamed semaphores live in memory and can be shared across processes when placed in a shared memory region. System V semaphores (semget, semop) operate on semaphore sets. Semaphores carry no data payload; they signal permission or resource availability only.
- Sockets — Bidirectional communication endpoints supporting both local (Unix domain sockets, AF_UNIX) and network (TCP/IP, AF_INET/AF_INET6) communication. Unix domain sockets bypass the network stack entirely and transfer data through a kernel buffer with lower latency than TCP loopback. The BSD socket API, standardized in POSIX.1-2017, is the dominant cross-platform IPC interface for distributed scenarios.
- Signals — Asynchronous software interrupts delivered to a process by the kernel or by another process with sufficient privilege. Signals carry a signal number (e.g., SIGTERM, SIGUSR1) but no structured data payload. They are the only mechanism in this list that can asynchronously interrupt a process's normal flow of execution.
On macOS, the Mach layer of the XNU kernel provides Mach ports, on which Apple's XPC framework is built; both serve as first-class IPC facilities used extensively by system services. System calls underpin every IPC operation — each write(), msgsnd(), or sem_wait() transitions from user space to kernel space.
Common scenarios
Database and application server coordination — A web server process communicates with a database proxy over a Unix domain socket. This avoids TCP overhead while preserving the process isolation boundary. PostgreSQL's default local connection mode uses Unix domain sockets in /tmp or /var/run/postgresql.
Producer-consumer pipelines — A data ingestion process writes records to a POSIX message queue; one or more worker processes consume from the queue at their own rate. This decouples throughput rates and allows workers to crash and restart without losing queued data.
High-frequency shared state — A real-time sensor aggregator writes telemetry to shared memory at rates exceeding 100,000 updates per second; a display process reads the same region. Copying through pipes or message queues at that frequency would introduce unacceptable latency. POSIX semaphores gate access to prevent torn reads. Real-time operating systems frequently rely on this shared-memory-plus-semaphore pattern for deterministic latency guarantees.
Microservice-to-microservice coordination — Services in containerized environments communicate over TCP sockets or Unix domain sockets mounted through container volumes. Kubernetes inter-pod communication uses the CNI (Container Network Interface) plugin model, which maps to kernel-level socket infrastructure.
Security isolation enforcement — Operating system security architectures such as SELinux and AppArmor enforce IPC access controls through kernel mandatory access control (MAC) policies. On Linux, IPC namespaces (introduced in Linux 2.6.19) isolate System V IPC objects and POSIX message queues between namespace groups, forming part of the isolation layer underlying container runtimes.
Decision boundaries
Selecting an IPC mechanism requires matching mechanism properties to communication requirements. The following contrasts define the principal decision boundaries:
Shared memory vs. message passing — Shared memory achieves the lowest latency and highest throughput because data is not copied through kernel buffers. However, it requires explicit synchronization and is unsuitable when processes run on different physical hosts. Message passing (pipes, sockets, message queues) copies data through the kernel, adding latency but providing built-in ordering, flow control, and cross-host capability. For same-host communication with throughput requirements in the gigabytes-per-second range, shared memory is the usual choice in systems programming practice.
Pipes vs. sockets — Anonymous pipes are limited to parent-child process relationships. Named pipes (FIFOs) extend to unrelated processes but remain single-host. Unix domain sockets support the same single-host scope as FIFOs but additionally provide bidirectional communication, datagram semantics, and credential passing (SCM_CREDENTIALS). For any scenario requiring bidirectional local IPC between unrelated processes, sockets are the more capable mechanism.
Signals vs. structured IPC — Signals are appropriate only for event notification (process termination notice, reload triggers, user-defined event flags). Their one-integer payload and asynchronous delivery semantics make them unsuitable for data transfer. Any use case requiring structured data exchange must employ one of the data-carrying IPC mechanisms.
POSIX IPC vs. System V IPC — POSIX IPC (message queues, semaphores, shared memory) uses file-descriptor-like handles; on Linux, a POSIX message queue descriptor is in fact a file descriptor and can be monitored with poll()/select()/epoll(), although POSIX itself does not guarantee this. System V IPC uses integer key identifiers and does not integrate with file descriptor multiplexing. Both families remain in POSIX.1-2017 (System V IPC under the XSI option), but POSIX IPC is generally recommended for new development, with System V IPC retained chiefly for compatibility with existing code.
The broader operating system landscape — including distributed operating systems and cloud operating systems — extends these local IPC primitives to network-transparent RPC frameworks and message broker systems, but all networked IPC ultimately reduces to socket operations at the kernel boundary. A full map of where IPC fits within OS subsystem architecture is available through the Operating Systems Authority index.