Sending stdin into a container using nothing but kernel primitives

When a containerised process needs input via stdin, delivering that input from outside the container is usually handled with more machinery than the job requires. This article describes a lightweight, robust alternative: a host-created FIFO bind-mounted into the container, providing a near-zero-overhead, atomic, scriptable stdin channel with no extra daemons, no docker exec, and no PTY involvement.

The problem

Some processes use stdin as their primary control interface. When such a process runs inside a Docker container, the conventional options for sending input to it from the host are:

  1. docker exec, which spawns a new process inside the container for every command rather than writing to the existing process's stdin.
  2. docker attach, which requires the container to run with an attached stdin (and typically a TTY) and ties up the attaching terminal.
  3. An in-container terminal multiplexer wrapping the process, plus attach tooling for every interaction.
  4. A network listener or control daemon added to the image solely to relay commands.

None of these are satisfactory when the goal is simple, reliable, scriptable command delivery to a process that already reads from stdin.

The approach

A Linux FIFO (named pipe) created on the host can be mounted directly into a container as a bind mount. Inside the container, a minimal wrapper script opens the FIFO and redirects it to the target process’s stdin. From that point, any process on the host can write to the FIFO and the containerised process receives it on stdin, with kernel-guaranteed atomicity up to PIPE_BUF (typically 4096 bytes on Linux).

This method avoids the extra processes and abstraction layers that the conventional approaches introduce. The full mechanism requires only three standard components:

  1. A FIFO created on the host before the container starts.
  2. The FIFO mounted into the container as a bind volume.
  3. A wrapper script inside the container that opens the FIFO and execs the target process with stdin redirected from it.

This works because Docker on Linux runs containers as processes directly on the host kernel, isolated via namespaces and cgroups. A FIFO created on the host is therefore the same kernel object inside the container as outside. The bind mount simply makes it visible at a chosen path in the container’s filesystem namespace, with no copying, translation, or intermediary layers involved. Data moves directly through the kernel’s FIFO buffer, which makes this as efficient as it can practically be.

Implementation

1. Create the FIFO on the host and set access permissions

mkdir -p /var/run/myservice
mkfifo /var/run/myservice/command_pipe
chmod 0666 /var/run/myservice/command_pipe

The FIFO must exist before the container starts. The permissions shown are permissive for simplicity. You may tighten the permissions to suit your environment, but ensure the process inside the container can still open the FIFO.
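If world-readable-and-writable is too broad, a common pattern is to restrict the FIFO to its owner and a dedicated group. The sketch below uses a throwaway directory so it can be run anywhere; in production you would apply the same mode to /var/run/myservice/command_pipe, and "svcops" is a placeholder group name, not something this article's setup creates.

```shell
# Demonstration in a throwaway directory; substitute your real path.
PIPE_DIR="$(mktemp -d)"

# -m sets the mode explicitly rather than relying on umask:
# owner and group may read/write, everyone else gets nothing.
mkfifo -m 0660 "$PIPE_DIR/command_pipe"

# Optionally hand the FIFO to a dedicated group ("svcops" is a
# placeholder; this step needs root or ownership of the FIFO):
# chgrp svcops "$PIPE_DIR/command_pipe"

stat -c '%a' "$PIPE_DIR/command_pipe"   # → 660
```

Whatever mode you choose, the user the containerised process runs as must still be able to open the FIFO for reading.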

2. Mount the FIFO into the container

docker run -d \
  -v "/var/run/myservice/command_pipe:/service/command_pipe" \
  my-image

The container now sees the same FIFO file. Writes from the host are immediately available to the process inside the container. The kernel mediates the transfer entirely in memory.
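For setups managed with Compose, the equivalent bind mount looks roughly like this (the service and image names are placeholders matching the docker run example above):

```yaml
services:
  myservice:
    image: my-image
    volumes:
      # Bind-mounts the host FIFO; it must exist before `docker compose up`.
      - /var/run/myservice/command_pipe:/service/command_pipe
```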

3. Wrapper script inside the container

#!/bin/bash

CMD_FIFO="/service/command_pipe"

if [[ ! -p "$CMD_FIFO" ]]; then
    >&2 echo "ERROR: command FIFO $CMD_FIFO missing. Aborting."
    exit 1
fi

# Open the FIFO for read/write on fd 3.
# Opening read/write prevents the open() call from blocking
# while waiting for a writer on the other end.
exec 3<> "$CMD_FIFO"

# Exec the target process with stdin redirected from the FIFO.
# exec replaces this shell, so the target process becomes PID 1.
exec /usr/bin/myservice <&3

This script is the container’s entrypoint.

It is technically possible to open the FIFO directly on stdin with exec 0<> "$CMD_FIFO", but this modifies the shell’s stdin before the service process takes over. In a simple wrapper this may work, but any startup logic between that line and the final exec, e.g. sourcing files, running checks, or reading configuration, would have its stdin redirected to the FIFO. Using a custom fd as an intermediary keeps the shell’s stdin untouched until the handoff.
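The fd 3 handoff can be exercised outside a container with nothing but a shell and a temporary FIFO. In this sketch, head -n1 stands in for the service process; a real wrapper would end with exec /usr/bin/myservice <&3 instead:

```shell
fifo="$(mktemp -d)/pipe"
mkfifo "$fifo"

# Open read/write on fd 3; returns immediately, no writer required.
exec 3<> "$fifo"

# The shell's own stdin (fd 0) is untouched at this point, so any
# startup logic placed here would behave normally.

# A writer elsewhere delivers a command; because fd 3 also holds the
# write end open, this open() for writing does not block either.
echo "reload config" > "$fifo"

# Hand fd 3 to the "service" as its stdin.
head -n1 <&3   # → reload config
```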

4. Sending commands from the host

# Single command
echo "reload config" > /var/run/myservice/command_pipe

# Multiple commands from a file
cat commands.txt > /var/run/myservice/command_pipe

# From another script
printf "save\nquit\n" > /var/run/myservice/command_pipe

No special tooling required. Standard shell redirection and piping work as-is.
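One caveat when scripting writes: if nothing inside the container is holding the FIFO open, the host-side redirection blocks in open() until a reader appears. A hedged sketch of a guard using timeout (the two-second limit is arbitrary; 124 is timeout's exit status when it kills the command):

```shell
# Write one command to the FIFO, giving up after 2 seconds if no
# reader has the other end open.
send_cmd() {
    timeout 2 sh -c 'printf "%s\n" "$1" > "$2"' _ "$1" "$2"
}

# Usage: send_cmd "reload config" /var/run/myservice/command_pipe
```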

Key details

FIFO opening and file descriptors

Inside the container, the FIFO is opened read/write rather than read-only because a FIFO opened for reading blocks until a writer opens the other end, and vice versa. Opening the FIFO with <> (read/write) on a single file descriptor sidesteps this: the open call returns immediately because the same fd satisfies both ends. The process then holds the FIFO open continuously, meaning subsequent host writes never block waiting for a reader.
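The difference is easy to observe on any Linux host. Here timeout bounds the blocking open; it exits 124 when it has to kill the command:

```shell
fifo="$(mktemp -d)/pipe"
mkfifo "$fifo"

# A read-only open blocks until a writer appears, so this times out:
timeout 1 bash -c 'exec 3< "$1"' _ "$fifo"
echo $?   # → 124

# A read/write open returns immediately:
timeout 1 bash -c 'exec 3<> "$1"' _ "$fifo"
echo $?   # → 0
```

Note that opening a FIFO with O_RDWR has well-defined behaviour on Linux, but POSIX leaves it unspecified, so this trick is Linux-specific, which is consistent with the rest of this approach.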

File descriptors 0, 1, and 2 are reserved for stdin, stdout, and stderr respectively. Any number from 3 up to the process’s file descriptor limit is available for arbitrary use. fd 3 is chosen here by convention, but any unused descriptor would work.

Why exec instead of a subshell

Using exec to replace the wrapper shell with the target process means the target process inherits PID 1 inside the container. This is correct behaviour for Docker: PID 1 receives signals directly, including SIGTERM on docker stop, allowing clean shutdown. Without exec, the wrapper shell would remain PID 1, and since shells do not forward signals to their children by default, docker stop would stall until Docker's timeout expired and then SIGKILL everything.

Atomicity

The Linux kernel guarantees that writes to a FIFO up to PIPE_BUF bytes are atomic. Multiple host processes can write to the same FIFO concurrently without interleaving, as long as each write is within the buffer limit. For line-oriented command interfaces this is almost always satisfied.
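A writer can enforce the limit before sending. PIPE_BUF is queried with getconf; it is 4096 on typical Linux systems, and POSIX guarantees at least 512. This guard function is a sketch, not part of the original setup:

```shell
# Atomicity holds only for writes of at most PIPE_BUF bytes.
limit="$(getconf PIPE_BUF /)"

send_atomic() {
    local cmd="$1" pipe="$2"
    # +1 accounts for the trailing newline printf appends.
    if [ $(( ${#cmd} + 1 )) -gt "$limit" ]; then
        echo "refusing non-atomic write (> $limit bytes)" >&2
        return 1
    fi
    printf '%s\n' "$cmd" > "$pipe"
}
```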

One-way only

This mechanism delivers input to the process. Output is not returned through the FIFO. Use docker logs or a separate log aggregation mechanism to observe the process’s output.

Approach comparison

| Property                 | This approach                     | docker exec                   | In-container multiplexer                          |
|--------------------------|-----------------------------------|-------------------------------|---------------------------------------------------|
| Scriptable from host     | Yes, native shell                 | Yes, with overhead            | Requires attach tooling                           |
| Extra processes          | No                                | At least 2 per command        | Multiplexer server + attach client + exec processes |
| Service process is PID 1 | Yes                               | Should be                     | No                                                |
| Atomicity                | Kernel-guaranteed (per PIPE_BUF)  | Not guaranteed                | Depends on implementation                         |
| Overhead                 | Minimal                           | Per-command process spawn cost | Constant footprint plus per-interaction cost     |

Applicability

This approach works for most, if not all, containerised processes that read commands from stdin; input can originate from the host, from within the same container, or from a different container (including sidecars, via a shared mount or volume), or any combination of these.

It may not be suitable for processes that require a PTY (terminal emulation), though many processes that appear to require a PTY in interactive use will accept plain stdin input when running non-interactively.

It is not applicable on Windows or macOS. This approach relies on a shared POSIX kernel between host and container. On Windows and macOS, Docker Desktop runs Linux containers inside a lightweight Linux VM, isolating them from the host kernel. Windows native containers run directly on the Windows kernel, which does not expose a POSIX-compatible interface to containers.

Summary

A host FIFO mounted into a container, opened read/write to prevent blocking, and passed to a target process via exec and stdin redirection provides a clean, low-overhead, kernel-mediated command channel into any containerised stdin-reading process. No extra daemons, no network sockets, no docker exec overhead. Standard shell tools write to it from the host. The kernel handles atomicity and buffering. Together these properties make it a practical, low-complexity solution for scriptable stdin control in containerised environments, with no moving parts outside the kernel.

Provenance

This approach emerged while working around stdin control limitations in a proprietary server binary. The original implementation is available in the Bedfeather project.

Each component is a decades-old POSIX or Linux primitive. Their combination in this pattern appears to be rarely documented. This article exists to distill the method into a clear, referenceable form.

Licence

Article text is licensed under CC BY 4.0 — you may share and adapt it freely with attribution. Code samples are released under the MIT licence.
