Use this guide when your LLM needs a reliable code-execution tool and you want strong isolation by default.
*Diagram: agent execution loop*
## Recommended baseline
For agent workloads, start with:

- `mode: "ephemeral"` for stateless runs
- `network: "none"` unless network access is explicitly required
- explicit `timeoutMs`, `memoryLimit`, and an output cap
- secrets passed via the engine's `secrets` option (not echoed raw in code or output)
```ts
import { DockerIsol8 } from "@isol8/core";

const engine = new DockerIsol8({
  mode: "ephemeral",
  network: "none",
  timeoutMs: 15000,
  memoryLimit: "512m",
  maxOutputSize: 1_048_576,
});

await engine.start();
```
Wrap execution so your orchestrator always receives normalized fields.
```ts
import type { Runtime } from "@isol8/core";

async function executeAgentCode(code: string, runtime: Runtime) {
  const result = await engine.execute({
    code,
    runtime,
  });
  return {
    stdout: result.stdout,
    stderr: result.stderr,
    exitCode: result.exitCode,
    durationMs: result.durationMs,
    truncated: result.truncated,
  };
}
```
Keep the tool contract small and stable (stdout, stderr, exitCode, durationMs) so your LLM prompt doesn’t drift.
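One way to keep that contract from drifting is to pin it in the tool definition the model sees. A minimal sketch, assuming an OpenAI-style function-calling schema; the `run_code` name and field descriptions are illustrative, not part of the isol8 API:

```ts
// Hypothetical tool definition pinning the contract. Only `code` is an
// input; the result fields mirror what executeAgentCode returns.
const runCodeTool = {
  name: "run_code",
  description:
    "Execute Python in an isolated sandbox. Returns stdout, stderr, exitCode, durationMs.",
  parameters: {
    type: "object",
    properties: {
      code: { type: "string", description: "Python source to execute" },
    },
    required: ["code"],
  },
};
```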
The pattern is the same regardless of model provider:
- model requests a tool call
- agent executes code with isol8
- tool result is fed back
- model decides whether another iteration is needed
```ts
async function runToolCall(code: string) {
  const result = await engine.execute({
    code,
    runtime: "python",
    timeoutMs: 12000,
  });
  return JSON.stringify({
    stdout: result.stdout,
    stderr: result.stderr,
    exitCode: result.exitCode,
  });
}
```
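The loop itself can be sketched provider-agnostically. This is a sketch, not a definitive implementation: `callModel` stands in for your model client, and the `ModelTurn` shape is an assumption about how you decode its responses; the tool executor can be `runToolCall` from above.

```ts
// Sketch of the provider-agnostic agent loop: ask the model for a turn,
// run any requested code, feed the result back, repeat until it is done.
type ModelTurn =
  | { type: "tool_call"; code: string }
  | { type: "final"; text: string };

async function agentLoop(
  callModel: (toolResults: string[]) => Promise<ModelTurn>,
  runTool: (code: string) => Promise<string>,
  maxIterations = 5
): Promise<string> {
  const toolResults: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const turn = await callModel(toolResults);
    if (turn.type === "final") return turn.text; // model decided to stop
    const result = await runTool(turn.code);     // execute code with isol8
    toolResults.push(result);                    // feed the result back
  }
  throw new Error("agent exceeded iteration budget");
}
```

Capping iterations keeps a confused model from looping forever against the sandbox.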
## Stream output for long tasks
For long-running tool calls, stream events to your UI so users see progress.
```ts
async function runWithStreaming(code: string) {
  for await (const event of engine.executeStream({ code, runtime: "python" })) {
    if (event.type === "stdout") process.stdout.write(event.data);
    if (event.type === "stderr") process.stderr.write(event.data);
    if (event.type === "exit") console.log(`\nexit=${event.data}`);
  }
}
```
## Stateful agent workflows
When one step should reuse files/state from prior steps, use persistent execution in a long-lived process.
```ts
const sessionEngine = new DockerIsol8({ mode: "persistent", timeoutMs: 20000 });
await sessionEngine.start();

await sessionEngine.execute({
  runtime: "python",
  code: `
import json
json.dump([{"x": 1}, {"x": 2}], open("/sandbox/data.json", "w"))
print("prepared")
`,
});

const result = await sessionEngine.execute({
  runtime: "python",
  code: `
import json
d = json.load(open("/sandbox/data.json"))
print(sum(row["x"] for row in d))
`,
});

console.log(result.stdout); // 3
await sessionEngine.stop();
```
Persistent containers are runtime-bound: do not switch from Python to Node within the same persistent container.
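If your orchestrator multiplexes many sessions, a small guard can catch accidental runtime switches before they reach a persistent container. `assertRuntime` and `sessionRuntimes` below are illustrative helpers, not part of the isol8 API:

```ts
// Illustrative guard: remember the runtime each persistent session started
// with, and refuse requests that try to switch it mid-session.
const sessionRuntimes = new Map<string, string>();

function assertRuntime(sessionId: string, runtime: string): void {
  const existing = sessionRuntimes.get(sessionId);
  if (existing === undefined) {
    sessionRuntimes.set(sessionId, runtime); // first call pins the runtime
  } else if (existing !== runtime) {
    throw new Error(
      `session ${sessionId} is bound to ${existing}; cannot switch to ${runtime}`
    );
  }
}
```

Call it before every `execute` on a persistent session so the error surfaces in your orchestrator rather than as a confusing failure inside the container.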
## Remote multi-agent deployment
Use the same tool contract with remote execution when you need centralized policy and shared infrastructure.
**CLI server**

```bash
isol8 serve --port 3000 --key "$ISOL8_API_KEY"
```

**Library client**

```ts
import { RemoteIsol8 } from "@isol8/core";

const remote = new RemoteIsol8(
  {
    host: "http://localhost:3000",
    apiKey: process.env.ISOL8_API_KEY!,
    sessionId: "agent-session-123",
  },
  {
    network: "none",
    timeoutMs: 15000,
  }
);

await remote.start();
const result = await remote.execute({
  runtime: "python",
  code: "print('remote agent run')",
});
await remote.stop();
```

**API request**

```bash
curl -X POST http://localhost:3000/execute \
  -H "Authorization: Bearer $ISOL8_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "sessionId": "agent-session-123",
    "request": { "code": "print(2**8)", "runtime": "python" },
    "options": { "timeoutMs": 15000, "network": "none" }
  }'
```
If the agent must call external APIs:

- move from `network: "none"` to `network: "filtered"`
- set strict allow/deny rules
- pass credentials through `secrets`
```ts
const netEngine = new DockerIsol8({
  mode: "ephemeral",
  network: "filtered",
  networkFilter: {
    whitelist: ["^api\\.openai\\.com$"],
    blacklist: ["^169\\.254\\."],
  },
  secrets: {
    API_KEY: process.env.OPENAI_API_KEY!,
  },
});
```
Secret masking applies to stdout/stderr text, not arbitrary files written by executed code.
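If executed code may write secret values into files you later read back, you can run your own redaction pass over that text. `redactSecrets` below is a sketch of that idea, not a built-in:

```ts
// Sketch: mask known secret values in arbitrary text (e.g. file contents
// read back from the sandbox) before returning it to the model or logs.
function redactSecrets(text: string, secrets: Record<string, string>): string {
  let out = text;
  for (const value of Object.values(secrets)) {
    if (value.length === 0) continue;       // never split on an empty string
    out = out.split(value).join("[REDACTED]");
  }
  return out;
}
```

Splitting on the literal value (rather than building a regex from it) avoids having to escape regex metacharacters that may appear in keys.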
## Patterns for reliable agent behavior
- keep execution snippets short and focused
- prefer deterministic tool outputs (JSON when possible)
- gate package installs; pre-bake stable dependencies for production
- enforce hard timeouts per tool run
- return both `stderr` and `exitCode` to the model, not just `stdout`