sundayswift.com

The Unsupervised

Agent Pipeline

Autonomous issue-to-PR pipelines with Rust, MCP, and Claude Code

Hesham Salman

The Gap

Agents are capable. The bottleneck is initiation.

  • Claudio parallelizes agents across worktrees
  • But someone still reads the issue
  • Someone still writes the prompt
  • Someone still kicks it off and opens the PR

For well-scoped tasks, that someone
doesn't need to be a human.

The Loop

What Symposium does on every tick

Poll Tracker
Dispatch
Implement
Review
Draft PR
Monitor

Reviewer requests changes → agent addresses feedback → loop continues
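The tick progression above can be sketched as a minimal state machine. Type and variant names are illustrative, not Symposium's actual code; the one interesting edge is Monitor looping back to Implement when a reviewer requests changes.

```rust
// Illustrative sketch of the per-issue stage progression; names mirror
// the slide, not Symposium's real types.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Stage {
    PollTracker,
    Dispatch,
    Implement,
    Review,
    DraftPr,
    Monitor,
}

impl Stage {
    fn next(self) -> Stage {
        match self {
            Stage::PollTracker => Stage::Dispatch,
            Stage::Dispatch => Stage::Implement,
            Stage::Implement => Stage::Review,
            Stage::Review => Stage::DraftPr,
            Stage::DraftPr => Stage::Monitor,
            // Reviewer requested changes: back to implementing.
            Stage::Monitor => Stage::Implement,
        }
    }
}

fn main() {
    let mut stage = Stage::PollTracker;
    for _ in 0..6 {
        println!("{stage:?}");
        stage = stage.next();
    }
}
```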

Architecture

A single Tokio event loop

CLI (clap)
Orchestrator (tokio event loop + mpsc channels)
Config Layer (WORKFLOW.md watcher + hot-reload)
Notion / Sentry Tracker (MCP client → MCP server)
Agent Runner (claude CLI + streaming JSON parser)
HTTP Server (axum: dashboard + REST API)
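The shape of that event loop, sketched with `std::sync::mpsc` standing in for tokio's async channels. The `Event` variants are assumptions; the point is the single-consumer shape: producers clone the sender, and all state changes happen in one loop.

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative event shape; Symposium's actual message types differ.
#[derive(Debug)]
enum Event {
    IssueReady(String),
    AgentFinished(String),
    ConfigReloaded,
}

// All state transitions happen here, on the single consumer.
fn handle(event: &Event) -> String {
    match event {
        Event::IssueReady(id) => format!("dispatch agent for {id}"),
        Event::AgentFinished(id) => format!("open draft PR for {id}"),
        Event::ConfigReloaded => "apply new WORKFLOW.md".to_string(),
    }
}

fn main() {
    let (tx, rx) = mpsc::channel::<Event>();

    // Producers (tracker poller, agent runner, config watcher) each
    // hold a clone of the sender.
    let poller = tx.clone();
    thread::spawn(move || {
        poller.send(Event::IssueReady("bug-42".into())).unwrap();
    });
    drop(tx); // channel closes once every sender is gone

    for event in rx {
        println!("{}", handle(&event));
    }
}
```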

MCP as Universal Adapter

One protocol, two consumers

Symposium spawns @notionhq/notion-mcp-server
as a child process and speaks JSON-RPC 2.0 over stdio

The server lives for one polling cycle, then gets dropped.
Process isolation; no connection management.

The agent uses the same MCP server
to read comments, update status, fetch context.

One protocol. One server. Two consumers.
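What one message on that stdio pipe looks like, hand-built for illustration. Real clients use a JSON library; `tools/list` is a standard MCP method.

```rust
// Minimal sketch of JSON-RPC 2.0 framing as spoken over the child
// process's stdin. Hand-built string for illustration only.
fn jsonrpc_request(id: u64, method: &str, params_json: &str) -> String {
    format!(
        "{{\"jsonrpc\":\"2.0\",\"id\":{id},\"method\":\"{method}\",\"params\":{params_json}}}"
    )
}

fn main() {
    // Ask the MCP server which tools it exposes.
    println!("{}", jsonrpc_request(1, "tools/list", "{}"));
}
```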

Workspace Isolation

Each issue gets its own directory via hooks

hooks:
  after_create: |
    git -C ~/Developer/my-org/my-repo worktree add \
      {{ workspace }} -b symposium/bug-{{ issue.safe_identifier }}

Hooks are Liquid templates with access to issue properties

Workspace setup is fully user-controlled.
Symposium doesn't know or care that it's a git worktree.
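The substitution Liquid performs on that hook, reduced to its core idea. Real Liquid also supports filters, conditionals, and loops; this sketch only shows `{{ name }}` replacement with illustrative values.

```rust
// Toy stand-in for Liquid's variable substitution: replace each
// "{{ name }}" placeholder with the matching issue property.
fn render(template: &str, vars: &[(&str, &str)]) -> String {
    let mut out = template.to_string();
    for (name, value) in vars {
        out = out.replace(&format!("{{{{ {name} }}}}"), value);
    }
    out
}

fn main() {
    let hook = "git worktree add {{ workspace }} -b symposium/bug-{{ issue.safe_identifier }}";
    // Example values; real ones come from the tracker issue.
    let cmd = render(hook, &[
        ("workspace", "/tmp/ws-123"),
        ("issue.safe_identifier", "123"),
    ]);
    println!("{cmd}");
}
```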

Fail-Open Design

A failing component should never halt the system

Component fails

  • Preflight agent crashes
  • Config file has a typo
  • Agent attempt fails
  • Notion API is flaky

System response

  • Main agent proceeds anyway
  • Previous config retained
  • Exponential backoff + retry
  • Re-fetch on next tick

Early versions didn't do this.
A single flaky API response would stall the entire pipeline.
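A sketch of the backoff schedule for flaky tracker calls. The base delay and cap are assumptions; the talk only names the strategy.

```rust
use std::time::Duration;

// Exponential backoff with a cap: 1s, 2s, 4s, 8s, ... up to 60s.
// Base and cap are illustrative values, not Symposium's actual config.
fn backoff(attempt: u32) -> Duration {
    let base_secs: u64 = 1;
    let cap_secs: u64 = 60;
    let secs = base_secs.saturating_mul(2u64.saturating_pow(attempt));
    Duration::from_secs(secs.min(cap_secs))
}

fn main() {
    for attempt in 0..6 {
        println!("attempt {attempt}: wait {:?}", backoff(attempt));
    }
}
```

Fail-open means the caller treats exhausted retries as "try again next tick", never as a reason to stop the loop.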

Filesystem as IPC

The agent writes files. The orchestrator reads them.

  • PREFLIGHT_SKIP → skip this issue, agent explains why
  • PR_TITLE → title for the draft PR
  • PR_BODY.md → description, reasoning, issue link
  • workspace dir → the workspace itself is the contract

No shared memory. No sockets. No database.
Simple to debug, resilient to agent crashes.
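The orchestrator's side of that contract might look like this. The filenames come from the slide; the `Outcome` type and its variants are illustrative.

```rust
use std::fs;
use std::path::Path;

// What the orchestrator concludes after the agent process exits,
// based purely on which files exist in the workspace.
#[derive(Debug, PartialEq)]
enum Outcome {
    Skipped(String),                          // PREFLIGHT_SKIP present
    ReadyForPr { title: String, body: String }, // PR_TITLE + PR_BODY.md
    Incomplete,                               // agent crashed or wrote nothing
}

fn read_outcome(workspace: &Path) -> Outcome {
    if let Ok(reason) = fs::read_to_string(workspace.join("PREFLIGHT_SKIP")) {
        return Outcome::Skipped(reason.trim().to_string());
    }
    match (
        fs::read_to_string(workspace.join("PR_TITLE")),
        fs::read_to_string(workspace.join("PR_BODY.md")),
    ) {
        (Ok(title), Ok(body)) => Outcome::ReadyForPr {
            title: title.trim().to_string(),
            body,
        },
        _ => Outcome::Incomplete,
    }
}

fn main() {
    let ws = std::env::temp_dir().join("symposium-demo-ws");
    fs::create_dir_all(&ws).unwrap();
    fs::write(ws.join("PR_TITLE"), "Fix parser crash\n").unwrap();
    fs::write(ws.join("PR_BODY.md"), "Reasoning and issue link here.\n").unwrap();
    println!("{:?}", read_outcome(&ws));
}
```

An agent crash leaves a partial workspace, which reads as `Incomplete`; nothing downstream has to roll back.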

PR Review Monitoring

Where the real time savings come from

Each tick checks open PRs for reviewer feedback

  • Groups reviews by author, keeps only the latest
  • CHANGES_REQUESTED → spin up agent to address
  • Reviewer later approves → no re-trigger

Without this, Symposium is a fancy auto-PR-opener.
With this, it's an autonomous contributor.
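The grouping rule can be sketched as follows. Field names are illustrative; the key behavior is that a reviewer's later approval supersedes their earlier change request, so the agent is not re-triggered.

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
enum ReviewState {
    Approved,
    ChangesRequested,
    Commented,
}

#[derive(Debug, Clone)]
struct Review {
    author: String,
    submitted_at: u64, // unix timestamp; illustrative field
    state: ReviewState,
}

// Group reviews by author, keep only each author's latest, and
// trigger a follow-up agent only if any latest review still
// requests changes.
fn needs_followup(reviews: &[Review]) -> bool {
    let mut latest: HashMap<&str, &Review> = HashMap::new();
    for r in reviews {
        let entry = latest.entry(r.author.as_str()).or_insert(r);
        if r.submitted_at >= entry.submitted_at {
            *entry = r;
        }
    }
    latest
        .values()
        .any(|r| r.state == ReviewState::ChangesRequested)
}

fn main() {
    let reviews = vec![
        Review { author: "alice".into(), submitted_at: 1, state: ReviewState::ChangesRequested },
        Review { author: "alice".into(), submitted_at: 2, state: ReviewState::Approved },
    ];
    println!("needs follow-up: {}", needs_followup(&reviews));
}
```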

What Actually Breaks

Learned from running against a real codebase

Agent failures

  • Fixates on the wrong part of an issue
  • Passes tests but misses the point
  • PRs too large to review

System failures

  • Slashes in issue IDs broke routing
  • Capacity guard blocked PR reviews
  • Restart wiped all tracked PR state

Retries catch transient failures.
Judgment failures need human eyes.
That's why everything opens as a draft.

What's Next: Epics

From individual issues to multi-task projects

Real projects have dependencies.
Data model before API. API before UI.

  • Reads dependency graphs from Mermaid diagrams or Notion relations
  • Dispatches tasks when dependencies are merged
  • Single dependency → stacked PR
  • Multiple dependencies → bases off main
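The dispatch and base-branch rules above, sketched in a few lines. The `symposium/` branch naming is an assumption carried over from the worktree hook, and the graph representation is illustrative.

```rust
use std::collections::{HashMap, HashSet};

// A task is ready to dispatch once every dependency has merged
// and it has not merged itself.
fn ready_tasks<'a>(
    deps: &'a HashMap<&'a str, Vec<&'a str>>,
    merged: &HashSet<&str>,
) -> Vec<&'a str> {
    let mut out: Vec<&str> = deps
        .iter()
        .filter(|(task, ds)| {
            !merged.contains(*task) && ds.iter().all(|d| merged.contains(d))
        })
        .map(|(task, _)| *task)
        .collect();
    out.sort();
    out
}

// Single dependency: stack the PR on that dependency's branch.
// Multiple dependencies: base off main.
fn base_branch(deps: &[&str]) -> String {
    match deps {
        [only] => format!("symposium/{only}"), // assumed naming scheme
        _ => "main".to_string(),
    }
}

fn main() {
    let mut deps: HashMap<&str, Vec<&str>> = HashMap::new();
    deps.insert("data-model", vec![]);
    deps.insert("api", vec!["data-model"]);
    deps.insert("ui", vec!["api"]);

    let merged: HashSet<&str> = ["data-model"].into_iter().collect();
    println!("ready: {:?}", ready_tasks(&deps, &merged));
    println!("api base: {}", base_branch(&["data-model"]));
}
```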

Takeaways

Fail-open, not fail-closed.
Autonomous systems can't require everything to go right. Design every component to degrade gracefully.
The filesystem is good enough.
Files are the simplest IPC between an orchestrator and an agent. Simple to debug, resilient to crashes.
Close the loop.
The real value isn't auto-opening PRs. It's handling the review cycle that normally takes days.