This page explains how isol8 is assembled under the hood so you can reason about behavior, performance, and operational tradeoffs.

Core building blocks

| Layer | Key implementation | Responsibility |
|---|---|---|
| Public engine contract | Isol8Engine in src/types.ts | Common start/stop/execute/executeStream/putFile/getFile interface |
| Local engine | DockerIsol8 in src/engine/docker.ts | Orchestrator/facade; delegates to managers |
| Manager classes | NetworkManager, ExecutionManager, VolumeManager in src/engine/managers/ | Domain-specific operations |
| Remote engine | RemoteIsol8 in src/client/remote.ts | HTTP client for remote server execution |
| Runtime selection | RuntimeRegistry in src/runtime/adapter.ts + src/runtime/index.ts | Runtime adapter lookup and file-extension detection |
| HTTP server | createServer() in src/server/index.ts | /execute, /execute/stream, session and file endpoints |
| Concurrency control | Semaphore in src/engine/concurrency.ts | Caps parallel execution both engine-side and server-side |
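The shared engine contract can be sketched as below. This is an illustrative TypeScript sketch with an in-memory fake; the method names come from the table above, but the exact signatures live in src/types.ts and will differ in detail.

```typescript
// Illustrative sketch of the engine contract; real signatures are in
// src/types.ts. Field and type names here are assumptions.
interface ExecutionResult {
  stdout: string;
  stderr: string;
  exitCode: number;
}

interface Isol8EngineSketch {
  start(): Promise<void>;
  stop(): Promise<void>;
  execute(code: string): Promise<ExecutionResult>;
  executeStream(code: string): AsyncIterable<{ type: string; data: string }>;
  putFile(path: string, content: string): Promise<void>;
  getFile(path: string): Promise<string>;
}

// In-memory fake showing how one contract can cover both the local
// (DockerIsol8) and remote (RemoteIsol8) engines.
class FakeEngine implements Isol8EngineSketch {
  private files = new Map<string, string>();
  async start(): Promise<void> {}
  async stop(): Promise<void> {}
  async execute(code: string): Promise<ExecutionResult> {
    return { stdout: `ran:${code}`, stderr: "", exitCode: 0 };
  }
  async *executeStream(code: string) {
    // Real engines yield chunked stdout/stderr events here.
    yield { type: "stdout", data: `ran:${code}` };
  }
  async putFile(path: string, content: string): Promise<void> {
    this.files.set(path, content);
  }
  async getFile(path: string): Promise<string> {
    return this.files.get(path) ?? "";
  }
}
```

Because callers depend only on the interface, swapping a local engine for a remote one is transparent to application code.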

Diagram 1: Request path by interface

Manager classes

DockerIsol8 delegates to specialized manager classes to reduce coupling and improve testability:
  • NetworkManager (src/engine/managers/network-manager.ts): Handles network setup including proxy startup and iptables configuration for filtered network mode.
  • ExecutionManager (src/engine/managers/execution-manager.ts): Handles command building, package installation, output streaming/collection, secret masking, and environment variable construction.
  • VolumeManager (src/engine/managers/volume-manager.ts): Handles file I/O operations (read/write via exec), tar-based archive operations, and output file retrieval.
This separation allows each manager to be tested more independently and makes the codebase easier to maintain.
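The facade/manager split can be illustrated with a minimal sketch. Class and method names below are simplified placeholders, not the real manager APIs under src/engine/managers/.

```typescript
// Simplified facade/manager sketch; the real managers have richer,
// Docker-aware APIs. Names here are illustrative placeholders.
class NetworkManagerSketch {
  setup(mode: string): string {
    // The real NetworkManager starts the proxy and configures iptables.
    return `network:${mode}`;
  }
}

class ExecutionManagerSketch {
  run(code: string): string {
    // The real ExecutionManager builds commands, streams output,
    // and masks secrets.
    return `ran:${code}`;
  }
}

class DockerFacadeSketch {
  constructor(
    private net = new NetworkManagerSketch(),
    private exec = new ExecutionManagerSketch(),
  ) {}

  // The facade only coordinates; domain logic stays in the managers,
  // which can be unit-tested in isolation or replaced with mocks.
  execute(code: string): string[] {
    return [this.net.setup("filtered"), this.exec.run(code)];
  }
}
```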

Execution model

isol8 uses one engine abstraction with three execution paths:
  • Ephemeral (mode: "ephemeral"): container acquired from warm pool, executed, then released back for cleanup/reuse. For simple requests, the ephemeral path can execute runtime inline commands directly instead of writing a script file first.
  • Persistent (mode: "persistent"): one long-lived container per session/engine, state preserved across calls.
  • Streaming (executeStream): creates an ephemeral-style container for that stream execution and yields chunked events.
In server mode, persistent behavior is keyed by sessionId. If sessionId is absent, the request is treated as ephemeral.
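The server-mode rule above can be expressed as a small helper. Field names are illustrative; the real ExecutionRequest is defined in src/types.ts.

```typescript
// Illustrative encoding of the rule: persistent behavior is keyed by
// sessionId; without one, the request is treated as ephemeral.
type Mode = "ephemeral" | "persistent";

interface RequestSketch {
  code: string;
  sessionId?: string; // field name assumed from the description above
}

function effectiveMode(req: RequestSketch): Mode {
  return req.sessionId ? "persistent" : "ephemeral";
}
```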

Diagram 2: Ephemeral pool lifecycle

Pool and concurrency architecture

Container pool

ContainerPool maintains pre-started containers to reduce startup overhead:
  • poolStrategy: "fast" (default): dual-pool design (clean + dirty) with background cleanup.
  • In fast mode, acquiring from the clean pool triggers asynchronous replenishment to keep warm capacity available.
  • poolStrategy: "secure": cleanup runs synchronously in the acquire path before a container is handed out.
  • poolSize: a number or { clean, dirty }, depending on the strategy.

Semaphores

  • DockerIsol8 uses an internal semaphore with maxConcurrent.
  • The HTTP server also applies a global semaphore (config.maxConcurrent) before execution.
  • Effect: protects host resources and creates predictable queueing under load.
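A minimal async semaphore of the kind described above is sketched below. The actual implementation lives in src/engine/concurrency.ts and may differ.

```typescript
// Illustrative async counting semaphore: at most `max` holders at once;
// extra acquirers queue and are woken in FIFO order on release.
class SemaphoreSketch {
  private queue: Array<() => void> = [];
  private active = 0;

  constructor(private max: number) {}

  async acquire(): Promise<void> {
    if (this.active < this.max) {
      this.active++;
      return;
    }
    // Over capacity: park until a release hands us the slot.
    await new Promise<void>((resolve) => this.queue.push(resolve));
  }

  release(): void {
    const next = this.queue.shift();
    if (next) {
      next(); // transfer the slot directly to the next waiter
    } else {
      this.active--;
    }
  }
}
```

Stacking the engine-side and server-side semaphores means a burst of requests queues predictably instead of oversubscribing the Docker host.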

Runtime adapter architecture

Runtime behavior is delegated to adapters:
  • Each adapter provides an image, command generation, and a default file extension.
  • The registry loads the built-in adapters (Python, Node, Bun, Deno, Bash).
  • CLI file-based runs can auto-detect the runtime via registry rules.
This keeps engine orchestration generic while runtime specifics stay isolated in adapter modules.
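The adapter and registry shapes might look roughly like this. Types are illustrative; the real definitions are in src/runtime/adapter.ts and src/runtime/index.ts.

```typescript
// Illustrative adapter/registry sketch; field and method names are
// assumptions based on the responsibilities listed above.
interface RuntimeAdapterSketch {
  name: string;
  image: string;
  fileExtension: string;
  buildCommand(scriptPath: string): string[];
}

class RuntimeRegistrySketch {
  private adapters = new Map<string, RuntimeAdapterSketch>();

  register(a: RuntimeAdapterSketch): void {
    this.adapters.set(a.name, a);
  }

  get(name: string): RuntimeAdapterSketch | undefined {
    return this.adapters.get(name);
  }

  // CLI-style detection: map a file extension to a registered adapter.
  detectByExtension(path: string): RuntimeAdapterSketch | undefined {
    const ext = path.slice(path.lastIndexOf("."));
    for (const a of this.adapters.values()) {
      if (a.fileExtension === ext) return a;
    }
    return undefined;
  }
}
```

Adding a new runtime then means registering one adapter; the engine orchestration code never changes.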

Server session architecture

src/server/index.ts keeps a Map<sessionId, SessionState> for persistent sessions:
  • Creates or reuses a session engine when /execute is called with a sessionId.
  • Updates lastAccessedAt on every execute and file operation.
  • DELETE /session/:id explicitly tears down the session's containers.
  • Optional auto-prune removes stale inactive sessions when enabled by the config cleanup policy.
File endpoints (/file) require sessionId because file I/O is tied to an active persistent container.
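The session bookkeeping can be sketched as follows. SessionState is reduced here to the fields mentioned above; the real state in src/server/index.ts carries more.

```typescript
// Illustrative session map with touch-on-access and idle-based pruning.
interface SessionStateSketch {
  engine: unknown; // stands in for the per-session DockerIsol8 instance
  lastAccessedAt: number;
}

class SessionStoreSketch {
  private sessions = new Map<string, SessionStateSketch>();

  // Create-or-reuse on access, refreshing lastAccessedAt.
  touch(id: string, now: number): SessionStateSketch {
    let s = this.sessions.get(id);
    if (!s) {
      s = { engine: null, lastAccessedAt: now };
      this.sessions.set(id, s);
    }
    s.lastAccessedAt = now;
    return s;
  }

  // Auto-prune: drop sessions idle longer than maxIdleMs.
  prune(now: number, maxIdleMs: number): string[] {
    const removed: string[] = [];
    for (const [id, s] of this.sessions) {
      if (now - s.lastAccessedAt > maxIdleMs) {
        this.sessions.delete(id);
        removed.push(id);
      }
    }
    return removed;
  }

  has(id: string): boolean {
    return this.sessions.has(id);
  }
}
```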

Security and output pipeline placement

Inside the engine execution path:
  1. container/network/security options are applied (network, seccomp mode, readonly rootfs, limits)
  2. code runs as sandbox user with timeout enforcement
  3. stdout/stderr are captured and bounded (maxOutputSize)
  4. secret values are masked from text output
  5. optional network logs/audit metadata are attached when enabled
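Steps 3 and 4 above (bounded capture and secret masking) can be sketched as plain string transforms. Function names are illustrative, not the engine's API.

```typescript
// Step 3: cap captured output at maxOutputSize characters.
function boundOutput(text: string, maxOutputSize: number): string {
  return text.length > maxOutputSize
    ? text.slice(0, maxOutputSize) + "\n[output truncated]"
    : text;
}

// Step 4: replace literal secret values in text output with a mask.
function maskSecrets(text: string, secrets: string[]): string {
  let out = text;
  for (const s of secrets) {
    if (s) out = out.split(s).join("***");
  }
  return out;
}
```

Ordering matters: masking runs on the already-bounded text, so a secret can never leak through the truncation boundary untouched.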

Audit and observability hooks

When audit config is enabled:
  • AuditLogger records execution provenance
  • optional resource tracking (CPU/memory/network) is captured
  • optional network request logs are collected in filtered mode (logNetwork)
  • privacy switches control code/output inclusion

FAQ

Why have both a container pool and a semaphore?
The pool reduces container startup overhead; the semaphore caps total parallelism to protect host capacity. They solve different problems.

How does the remote engine differ from the local one?
RemoteIsol8 sends the same ExecutionRequest shape to the server over HTTP. Execution still happens in DockerIsol8 on the server side.

When should I use persistent mode?
Use it when state must persist across calls (files, installed packages, incremental workflows). Use ephemeral mode for independent runs and stronger isolation between tasks.

Troubleshooting

  • Session state not persisting: ensure you pass a stable sessionId for every related API call.
  • High queue latency: inspect maxConcurrent and host resource saturation before raising pool sizes.
  • Runtime mismatch errors: verify adapter registration and selected runtime/file extension combination.
  • Missing network logs: network logs require network: "filtered" and logNetwork: true.