The agent runtime runs the pi coding agent (`@mariozechner/pi-coding-agent`) inside an isol8 container. Instead of executing code, it executes a prompt — pi handles the LLM loop, tool calls (`read`, `write`, `edit`, `bash`), and file edits autonomously, entirely within the sandbox.
## Quick start

**CLI**

```bash
isol8 run -e "add unit tests for the auth module" \
  --runtime agent \
  --net filtered \
  --allow "api.anthropic.com" \
  --secret "ANTHROPIC_API_KEY=sk-ant-..." \
  --agent-flags "--model anthropic/claude-sonnet-4-5"
```
**Library (TypeScript)**

```typescript
import { DockerIsol8 } from "@isol8/core";

const engine = new DockerIsol8({
  network: "filtered",
  networkFilter: {
    whitelist: ["^api\\.anthropic\\.com$"],
    blacklist: [],
  },
});

await engine.start();

const result = await engine.execute({
  runtime: "agent",
  code: "add unit tests for the auth module",
  agentFlags: "--model anthropic/claude-sonnet-4-5",
  env: { ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY! },
  timeoutMs: 300_000,
});

console.log(result.stdout);
await engine.stop();
```
**HTTP API**

```bash
curl -X POST http://localhost:3000/execute \
  -H "Authorization: Bearer $ISOL8_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "request": {
      "runtime": "agent",
      "code": "add unit tests for the auth module",
      "agentFlags": "--model anthropic/claude-sonnet-4-5",
      "env": { "ANTHROPIC_API_KEY": "sk-ant-..." }
    },
    "options": {
      "network": "filtered",
      "networkFilter": {
        "whitelist": ["^api\\.anthropic\\.com$"],
        "blacklist": []
      },
      "timeoutMs": 300000
    }
  }'
```
## How it works

When `runtime: "agent"` is used, isol8 runs:

```bash
pi --no-session --append-system-prompt '<sandbox context>' [agentFlags] -p '<code>'
```
- `--no-session` — disables session persistence (ephemeral, non-interactive)
- `--append-system-prompt` — automatically injected by isol8 to inform pi of sandbox constraints
- `[agentFlags]` — extra pi flags you supply (model, thinking level, tool restrictions)
- `-p '<code>'` — your prompt, shell-quoted
pi then runs its own tool-call loop inside the container. It can read, write, and edit files under `/sandbox`, and run arbitrary bash commands — all within the sandbox’s resource and network limits.
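For illustration only, the shell-quoting step could be sketched as follows. The `shellQuote` helper is hypothetical, not isol8's actual implementation; it shows the standard POSIX single-quote escaping such a step needs:

```typescript
// Hypothetical sketch of POSIX single-quote escaping, as isol8 might
// apply it before embedding the prompt in `-p '<code>'`.
function shellQuote(prompt: string): string {
  // Close the quote, emit an escaped quote, reopen: ' becomes '\''
  return `'${prompt.replace(/'/g, `'\\''`)}'`;
}

console.log(shellQuote("add tests for auth's login flow"));
```

With this escaping, a prompt containing single quotes still reaches pi as a single argument.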
## Networking requirement

The agent runtime requires `network: "filtered"` with at least one whitelist entry. Passing `network: "none"` throws:

```text
Error: agent runtime requires network "filtered" with at least one whitelist entry.
```

The agent needs access to an LLM API endpoint (e.g. `"^api\\.anthropic\\.com$"`).
```typescript
// Correct
const engine = new DockerIsol8({
  network: "filtered",
  networkFilter: {
    whitelist: ["^api\\.anthropic\\.com$"],
    blacklist: ["^169\\.254\\."],
  },
});
```
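The check itself can be pictured with the following sketch. The `assertAgentNetworking` helper and the config shape are illustrative assumptions, not isol8's real internals:

```typescript
// Illustrative sketch of the validation the agent runtime performs.
interface NetworkConfig {
  network: "none" | "bridge" | "filtered";
  networkFilter?: { whitelist: string[]; blacklist: string[] };
}

function assertAgentNetworking(cfg: NetworkConfig): void {
  const whitelist = cfg.networkFilter?.whitelist ?? [];
  if (cfg.network !== "filtered" || whitelist.length === 0) {
    throw new Error(
      'agent runtime requires network "filtered" with at least one whitelist entry.'
    );
  }
}

// Passes: filtered network with a whitelisted LLM endpoint
assertAgentNetworking({
  network: "filtered",
  networkFilter: { whitelist: ["^api\\.anthropic\\.com$"], blacklist: [] },
});
```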
## Sandbox system prompt

Every pi invocation inside isol8 receives an automatically appended system prompt informing the agent that it is running in a sandbox with restricted network access and an ephemeral filesystem. This uses pi’s `--append-system-prompt` — it appends to pi’s default prompt without replacing it. You do not need to supply this yourself.
## The `code` field

For the agent runtime, `code` is always the prompt text — never a script. It is passed to pi via `-p '<prompt>'` after shell-quoting.

```typescript
await engine.execute({
  runtime: "agent",
  code: "Refactor authenticate() to use async/await and add JSDoc comments.",
});
```
## Agent flags (`agentFlags`)

Use `agentFlags` (library/API) or `--agent-flags` (CLI) to pass extra arguments to pi before the `-p` flag.

```typescript
await engine.execute({
  runtime: "agent",
  code: "Fix the failing tests",
  agentFlags: "--model anthropic/claude-sonnet-4-5 --thinking medium --no-extensions",
});
```
### Useful pi flags

| Flag | Description |
|---|---|
| `--model <provider/id>` | LLM to use — e.g. `anthropic/claude-sonnet-4-5`, `openai/gpt-4o`, `google/gemini-2.0-flash` |
| `--thinking <level>` | Thinking budget: `off`, `minimal`, `low`, `medium`, `high`, `xhigh` |
| `--tools <list>` | Built-in tools to enable. Default: `read,bash,edit,write`. Also: `grep`, `find`, `ls` |
| `--no-tools` | Disable all built-in tools |
| `--no-skills` | Disable auto-loading of skill files from the container |
| `--no-extensions` | Disable auto-loading of extensions from the container |
## Injecting files

Use `files` in `ExecutionRequest` (library/API) or `--files <dir>` (CLI) to inject local files into `/sandbox` before the agent runs.

```typescript
import { readFileSync } from "node:fs";

await engine.execute({
  runtime: "agent",
  code: "Review the code in /sandbox and suggest improvements to error handling",
  agentFlags: "--model anthropic/claude-sonnet-4-5 --tools read,bash",
  files: {
    "src/auth.ts": readFileSync("./src/auth.ts", "utf-8"),
    "src/utils.ts": readFileSync("./src/utils.ts", "utf-8"),
    // pi auto-loads AGENTS.md from cwd — use this for project rules
    "AGENTS.md": "# Rules\n- Follow existing code style\n- No new dependencies\n",
  },
});
```
```bash
# Inject an entire local directory into /sandbox
isol8 run -e "Review the code and suggest improvements" \
  --runtime agent \
  --files ./src \
  --net filtered \
  --allow "api.anthropic.com"
```
pi automatically loads `AGENTS.md` (and `CLAUDE.md`) from the working directory at startup. Injecting your project rules as `/sandbox/AGENTS.md` gives the agent project-specific context without touching the prompt.
## Persistent sessions

Use `mode: "persistent"` to run multiple steps in the same container — for example, cloning a repo with bash and then running the agent against it:

```typescript
const engine = new DockerIsol8({
  mode: "persistent",
  network: "filtered",
  networkFilter: {
    whitelist: ["^api\\.anthropic\\.com$", "^github\\.com$"],
    blacklist: [],
  },
  secrets: {
    GITHUB_TOKEN: process.env.GITHUB_TOKEN!,
    ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY!,
  },
  pidsLimit: 200,
  memoryLimit: "2g",
});

await engine.start();

// Step 1: deterministic setup (bash)
await engine.execute({
  runtime: "bash",
  code: `
    git clone https://$GITHUB_TOKEN@github.com/my-org/my-repo.git /sandbox/repo
    cd /sandbox/repo && git checkout -b agent/task origin/main
  `,
});

// Step 2: agentic implementation (agent)
await engine.execute({
  runtime: "agent",
  code: "Fix the type errors in src/parser.ts. The project uses TypeScript strict mode.",
  agentFlags: "--model anthropic/claude-sonnet-4-5 --thinking low",
  env: { ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY! },
  timeoutMs: 300_000,
});

// Step 3: deterministic verification (bash)
const testResult = await engine.execute({
  runtime: "bash",
  code: "cd /sandbox/repo && npx tsc --noEmit && npx jest --ci",
});

await engine.stop();
```
## Streaming agent output

pi produces output incrementally. Use `executeStream` to receive it in real time:

```typescript
for await (const event of engine.executeStream({
  runtime: "agent",
  code: "Refactor the auth module to remove deprecated API calls",
  agentFlags: "--model anthropic/claude-sonnet-4-5",
})) {
  if (event.type === "stdout") process.stdout.write(event.data);
  if (event.type === "stderr") process.stderr.write(event.data);
  if (event.type === "exit") console.log(`\nAgent exited: ${event.data}`);
}
```
## Default resource limits

The agent runtime applies higher defaults than other runtimes, since pi spawns subprocesses for tool calls:

| Option | Agent default | Other runtimes |
|---|---|---|
| `pidsLimit` | 200 | 64 |
| `sandboxSize` | 2g | 512m |
## Retrieving output files

Use `outputPaths` to include files written by the agent in the result:

```typescript
const result = await engine.execute({
  runtime: "agent",
  code: "Generate a test suite for the Parser class and write it to /sandbox/parser.test.ts",
  outputPaths: ["/sandbox/parser.test.ts"],
});

console.log(result.files?.["/sandbox/parser.test.ts"]);
```

Or retrieve files explicitly with `getFile()` after execution in a persistent session.
## LLM API key handling

Pass the API key via engine `secrets` (recommended — masked from output) or per-request `env`:

```typescript
// Via engine secrets — automatically masked in stdout/stderr
const engine = new DockerIsol8({
  secrets: { ANTHROPIC_API_KEY: process.env.ANTHROPIC_API_KEY! },
  // ...
});
```
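Conceptually, masking is a string substitution over captured output, roughly like this sketch. The `maskSecrets` helper is hypothetical; isol8's actual redaction may differ:

```typescript
// Hypothetical sketch: redact secret values from captured output.
function maskSecrets(output: string, secrets: Record<string, string>): string {
  let masked = output;
  for (const value of Object.values(secrets)) {
    // split/join replaces every occurrence without regex-escaping issues
    masked = masked.split(value).join("***");
  }
  return masked;
}

console.log(maskSecrets("key=sk-ant-123 done", { ANTHROPIC_API_KEY: "sk-ant-123" }));
// → key=*** done
```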
## Troubleshooting

- **`Error: agent runtime requires network "filtered"`** — switch to `network: "filtered"` and add an LLM API endpoint to the whitelist.
- **Agent exits non-zero** — check `result.stderr`. Common causes: a missing API key, an endpoint not in the whitelist, or a `timeoutMs` that is too short.
- **Agent can’t reach the LLM API** — verify the whitelist pattern matches the full hostname and is anchored (`^api\\.anthropic\\.com$`). A typo in the pattern blocks the endpoint, and a missing `^` or `$` can match more hosts than intended.
- **Files not in result** — add `outputPaths` or call `getFile()` after the run. In ephemeral mode, container state is discarded on exit.
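To see why anchoring matters, compare an anchored and an unanchored pattern against a lookalike host (the hostname below is made up for illustration):

```typescript
const anchored = /^api\.anthropic\.com$/;
const unanchored = /api\.anthropic\.com/;

const lookalike = "api.anthropic.com.evil.example";

console.log(anchored.test(lookalike));   // false: rejected by the filter
console.log(unanchored.test(lookalike)); // true: would slip through
```

The anchored pattern matches only the exact hostname, while the unanchored one matches any host containing it as a substring.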
## Related pages