Maxions is a demonstration of what you can build on top of isol8: a self-hosted platform for running one-shot coding agents as a queue of jobs, inspired by Stripe’s Minions, where over 1,300 PRs merge autonomously every week. Submit a plain-English task and a target repo; Maxions clones, implements, verifies, commits, and opens a PR with no human in the loop. Maxionalisa is the GitHub App running this platform. It autonomously raised PR #111 and PR #113 on this very repo: each one cloned, implemented, committed, and opened with no human-written code. isol8 is what makes this possible without depending on a vendor.
The current Maxions setup targets the isol8 repo out of the box, since it exists to demonstrate isol8’s capabilities in a real production context. Adapting it to any other repository is straightforward: change TARGET_REPO in your .env and install the GitHub App on the new repo.

Why not just use a cloud coding agent?

Cloud agents (GitHub Copilot Workspace, Devin, etc.) are convenient but opaque. Maxions is self-hosted — you own the pipeline, the environment, and the secrets.
| | Cloud agents | Maxions |
|---|---|---|
| Execution environment | Vendor-managed VMs | Your Docker host — full control |
| Network access | Opaque | Explicit: none / filtered / host |
| Secrets | Sent to vendor infrastructure | Stay in your environment, masked in all output |
| Cost model | Per-seat or per-task pricing | You pay only for the LLM tokens |
| Concurrency | Limited by your plan | Bounded by your hardware (p-queue, configurable) |
| Auditability | Vendor logs | Full stdout/stderr streamed to your own dashboard |
| Pipeline customization | Prompt only | Every step is your code — change anything |

Pipeline

Each job runs inside a single persistent DockerIsol8 container. All five stages share the same container filesystem — so the repo cloned during setup is still there for implement, verify, fix, and ship.

Current architecture vs. the ideal

How it works today

The orchestrator sequences discrete steps — each is a separate execute() call. The agent sees one step at a time: implement, then fix (if needed), then ship. The orchestrator drives control flow; the agent drives code changes.
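That control flow can be sketched as follows. This is a minimal illustration over a generic engine interface; the `Engine` shape, `runJob`, and the stage prompts are hypothetical, not the actual @isol8/core API.

```typescript
// Minimal sketch of the orchestrator's control flow. The Engine interface
// is hypothetical: it stands in for a persistent DockerIsol8 container
// whose filesystem survives across execute() calls.
interface Engine {
  execute(prompt: string): Promise<{ exitCode: number; output: string }>;
}

async function runJob(engine: Engine, task: string): Promise<string[]> {
  const completed: string[] = [];

  // Each stage is a separate execute() call into the SAME container,
  // so files created in one stage are visible to the next.
  const stages: Array<[string, string]> = [
    ["setup", "Clone the target repo and create a working branch."],
    ["implement", `Implement this task: ${task}`],
    ["verify", "Run the test suite and linters; report failures."],
  ];

  for (const [name, prompt] of stages) {
    const result = await engine.execute(prompt);
    completed.push(name);
    if (name === "verify" && result.exitCode !== 0) {
      // The fix stage only runs when verification fails.
      await engine.execute(`Fix these failures:\n${result.output}`);
      completed.push("fix");
    }
  }

  await engine.execute("Commit the changes and open a PR.");
  completed.push("ship");
  return completed;
}
```

The orchestrator owns the branching (whether to run fix, when to ship); the agent only ever sees the prompt for the current step.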

The ideal: master-agent architecture

The current approach works, but splitting context across multiple prompts leaves gaps. The ideal evolution is a master agent that runs outside the isol8 container: it gathers all relevant context first — reads the repo structure, fetches the GitHub issue body, pulls related PRs, loads the style guide and AGENTS.md — and synthesises everything into a single, self-sufficient prompt. It then hands off to the isol8 agent box in one shot. The isol8 agent has everything it needs without back-and-forth. The orchestrator’s job shrinks to: spin up container → inject context-rich prompt → wait for PR URL. See the implement step patterns in the one-shot guide for how to structure that context gathering today.
The key insight from Stripe’s Minions research is that agent reliability correlates with prompt completeness, not with the number of retries. A master agent that front-loads context consistently outperforms one that iterates with thin prompts.
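Front-loading context amounts to assembling one self-sufficient prompt before the first call into the sandbox. A minimal sketch of that synthesis step; the `JobContext` fields and section layout are assumptions for illustration, not Maxions’ actual prompt format:

```typescript
// Hypothetical shape of the context a master agent would gather up front.
interface JobContext {
  task: string;          // plain-English task from the user
  repoTree: string;      // e.g. output of `git ls-files`
  issueBody?: string;    // fetched GitHub issue, if the task references one
  styleGuide?: string;   // contents of AGENTS.md or a style guide, if present
}

// Synthesize everything into one self-sufficient prompt so the sandboxed
// agent needs no back-and-forth with the orchestrator.
function buildPrompt(ctx: JobContext): string {
  const sections = [
    `# Task\n${ctx.task}`,
    `# Repository layout\n${ctx.repoTree}`,
  ];
  if (ctx.issueBody) sections.push(`# Related issue\n${ctx.issueBody}`);
  if (ctx.styleGuide) sections.push(`# Conventions\n${ctx.styleGuide}`);
  sections.push("# Deliverable\nImplement the task, make the tests pass, then stop.");
  return sections.join("\n\n");
}
```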

How Maxions uses isol8

The two-token split

Each job receives two separate GitHub tokens:
| Variable | Token type | Used by | Why |
|---|---|---|---|
| GITHUB_TOKEN | GitHub App installation token — short-lived, repo-scoped | git clone, git push, gh CLI | Minted fresh per job via @octokit/auth-app; expires in 1 hour |
| COPILOT_GITHUB_TOKEN | Personal Access Token with Copilot access | pi (checks this before GITHUB_TOKEN) | GitHub App tokens are server-to-server tokens — the Copilot LLM API rejects them. A PAT is required. |
GitHub App installation tokens are not valid for the Copilot LLM API. If pi picks up GITHUB_TOKEN instead of COPILOT_GITHUB_TOKEN, all agent calls will fail with a 401. pi checks COPILOT_GITHUB_TOKEN first — ensure it is set in your environment.
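The precedence described above can be sketched as a small resolver; `resolveCopilotToken` and its warning text are illustrative, not pi’s actual implementation:

```typescript
// Pick the token to use for the Copilot LLM API.
// COPILOT_GITHUB_TOKEN (a PAT) must win: GitHub App installation tokens
// are server-to-server tokens and the Copilot API rejects them with a 401.
function resolveCopilotToken(env: Record<string, string | undefined>): string {
  const token = env.COPILOT_GITHUB_TOKEN ?? env.GITHUB_TOKEN;
  if (!token) {
    throw new Error("No GitHub token available for the Copilot API");
  }
  if (!env.COPILOT_GITHUB_TOKEN) {
    // Falling back to the App installation token will fail at call time.
    console.warn("COPILOT_GITHUB_TOKEN unset; agent calls will 401");
  }
  return token;
}
```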

Stack

| Layer | Technology |
|---|---|
| Monorepo | Turborepo + Bun |
| API server | Hono (Bun) |
| Web dashboard | Next.js 15 + shadcn/ui + Tailwind CSS |
| Database | SQLite + Drizzle ORM |
| Agent sandbox | @isol8/core (DockerIsol8 persistent mode) |
| Coding agent | pi (@mariozechner/pi-coding-agent) via runtime: "agent" |
| LLM | GitHub Copilot (github-copilot/gpt-5-mini) |
| GitHub auth | GitHub App — installation tokens via @octokit/auth-app |
| Queue | p-queue (configurable concurrency) |
| Linting | Ultracite (Biome-based) |

Project structure

apps/
  api/          Hono API — job queue, SSE live streaming, REST routes
  web/          Next.js dashboard — job list, detail view, live log terminal
packages/
  orchestrator/ The blueprint: all pipeline steps, DockerIsol8 engine wiring
  db/           Drizzle schema, client, SQLite migrations
  ui/           Shared React components — StatusBadge, LogTerminal, StepTimeline

Prerequisites

  • Bun 1.2+
  • Docker running locally with access to /var/run/docker.sock
  • The isol8:agent image built from @isol8/core:
    docker build --target agent -t isol8:agent node_modules/@isol8/core/docker/
    
  • A GitHub App installed on the target repo with Contents (read/write) and Pull Requests (read/write) permissions
  • A GitHub PAT with Copilot access for the pi agent

Setup

1. Install dependencies:

   bun install

2. Configure environment:

   cp .env.example .env

   Fill in the values — see the environment variables table below.

3. Run database migrations:

   bun run db:migrate

4. Start development servers:

   bun run dev

Environment variables

| Variable | Description |
|---|---|
| GITHUB_APP_ID | GitHub App ID |
| GITHUB_APP_PRIVATE_KEY | App private key in PEM format — use literal \n between lines |
| GITHUB_APP_INSTALLATION_ID | Installation ID for the target repo |
| COPILOT_GITHUB_TOKEN | Personal Access Token with Copilot access — used by pi for the LLM API |
| DATABASE_URL | SQLite path, e.g. file:./maxions.db |
| API_PORT | Port for the Hono API server (default: 3000) |
| WEB_PORT | Port for the Next.js dashboard (default: 3002) |
| TARGET_REPO | Default target repository (owner/repo) — overridable per-task via the API |
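A missing variable otherwise surfaces as a confusing auth or database error deep inside a job, so validating at startup is worthwhile. A minimal fail-fast sketch; `loadConfig` and its error wording are illustrative, not part of Maxions:

```typescript
// Fail fast at startup if a required variable is missing, instead of
// letting a job die mid-pipeline with an opaque auth or DB error.
const REQUIRED = [
  "GITHUB_APP_ID",
  "GITHUB_APP_PRIVATE_KEY",
  "GITHUB_APP_INSTALLATION_ID",
  "COPILOT_GITHUB_TOKEN",
  "DATABASE_URL",
] as const;

function loadConfig(env: Record<string, string | undefined>) {
  const missing = REQUIRED.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return {
    targetRepo: env.TARGET_REPO ?? "",        // overridable per-task via the API
    apiPort: Number(env.API_PORT ?? 3000),    // defaults match the table above
    webPort: Number(env.WEB_PORT ?? 3002),
  };
}
```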

Docker Compose

docker compose up --build
The API container mounts /var/run/docker.sock to spawn sandbox containers as siblings on the host Docker daemon (Docker-outside-of-Docker). The isol8:agent image must be built on the host before starting the compose stack — the API container does not build it automatically.

Implementation notes

Non-obvious things discovered while building this:
Bun’s for await blocks the event loop when iterating Docker TCP streams, preventing data events from firing. The stream consumer in packages/orchestrator/src/blueprint.ts uses .then() + setImmediate chaining instead — this keeps the Bun event loop free between iterations so Docker TCP events can interleave.
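The workaround can be sketched as a recursive .then() + setImmediate pump. This is a simplified illustration of the pattern, not the actual blueprint.ts code:

```typescript
// Consume an async iterator without `for await`, yielding to the event
// loop between chunks via setImmediate so other I/O events (e.g. Docker
// TCP data) can interleave with the reads.
function pump<T>(
  iter: AsyncIterator<T>,
  onChunk: (chunk: T) => void,
): Promise<void> {
  return new Promise((resolve, reject) => {
    const step = () => {
      iter.next().then((result) => {
        if (result.done) {
          resolve();
          return;
        }
        onChunk(result.value);
        // Defer the next read so the event loop runs between chunks.
        setImmediate(step);
      }, reject);
    };
    step();
  });
}
```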
Calling git commit without -m opens an interactive editor, which hangs forever in a non-interactive Docker container. The commit step instructs the agent to write the message to /tmp/commit-msg.txt and run git commit -F /tmp/commit-msg.txt — no editor involved.
gh pr create --body "..." is broken by backticks or $(...) in the PR body — the shell interprets them as command substitution. The PR body is written to /tmp/pr-body.md and passed via --body-file instead.
Bun’s default idle timeout kills long-lived connections before the SSE heartbeat runs. The Bun server export requires idleTimeout: 0 to keep SSE connections alive for the duration of a job.
When the API sits behind nginx, SSE responses are buffered by default — the client sees nothing until the connection closes. Add X-Accel-Buffering: no to SSE responses to disable nginx response buffering.
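Taken together, these two notes say: disable idle timeouts on the Bun side and disable buffering on the nginx side. A minimal sketch of an SSE response carrying both fixes; the route and handler are illustrative, though the `export default { fetch }` shape follows Bun’s server convention:

```typescript
// Headers every SSE response needs: the event-stream content type, no
// caching, and X-Accel-Buffering so nginx flushes each event instead of
// buffering the stream until the connection closes.
function sseHeaders(): Record<string, string> {
  return {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
    "X-Accel-Buffering": "no", // nginx: do not buffer this response
  };
}

// Bun server shape: idleTimeout: 0 keeps long-lived SSE connections
// open for the duration of a job instead of Bun's default idle timeout.
const server = {
  port: 3000,
  idleTimeout: 0,
  fetch(_req: Request): Response {
    const stream = new ReadableStream<Uint8Array>({
      start(controller) {
        controller.enqueue(new TextEncoder().encode("data: hello\n\n"));
        controller.close();
      },
    });
    return new Response(stream, { headers: sseHeaders() });
  },
};

export default server;
```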
With set -e active, git checkout -b branch-name exits non-zero if the branch already exists (e.g. on a retry). The setup script uses git checkout -b ${branch} || git checkout ${branch} — the fallback is load-bearing; removing it breaks retries silently.

Further reading