Autonomous issue-to-PR pipelines with Rust, MCP, and Claude Code
Hesham Salman
Agents are capable. The bottleneck is initiation: someone has to notice the issue, scope it, and kick off the work. For well-scoped tasks, that someone doesn't need to be a human.
What Symposium does on every tick
Reviewer requests changes → agent addresses feedback → loop continues
A single Tokio event loop
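The per-tick behavior above can be sketched roughly as follows. This is a minimal std-only sketch with hypothetical names (`Action`, `tick`); the real system drives the loop from a single Tokio event loop, e.g. off a `tokio::time::interval`.

```rust
// Hypothetical shape of one tick: look at unclaimed issues and at open
// PRs with reviewer feedback, and decide what work to dispatch.
#[derive(Debug, PartialEq)]
enum Action {
    StartAgent(String),  // a new issue was picked up
    ResumeAgent(String), // a reviewer requested changes on an open PR
}

fn tick(issues: &[&str], prs_with_feedback: &[&str]) -> Vec<Action> {
    let mut actions = Vec::new();
    for issue in issues {
        actions.push(Action::StartAgent(issue.to_string()));
    }
    for pr in prs_with_feedback {
        actions.push(Action::ResumeAgent(pr.to_string()));
    }
    actions
}

fn main() {
    // In the real system this runs on a timer inside the Tokio loop.
    let actions = tick(&["bug-42"], &["pr-7"]);
    println!("{actions:?}");
}
```

Keeping the tick a pure "observe state, emit actions" function makes each cycle easy to test in isolation.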
One protocol, two consumers
Symposium spawns @notionhq/notion-mcp-server as a child process and speaks JSON-RPC 2.0 to it over stdio. The server lives for one polling cycle, then gets dropped.
Process isolation. No connection management.
The agent uses the same MCP server to read comments, update status, and fetch context.
One protocol. One server. Two consumers.
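A sketch of the spawn-per-cycle pattern, under stated assumptions: the server is launched via `npx` and the request is built by hand rather than with a JSON library. `run_one_cycle` and the bare `params` are illustrative, not Symposium's actual code.

```rust
use std::io::Write;
use std::process::{Command, Stdio};

// Build a JSON-RPC 2.0 request by hand; a real client would use serde_json.
fn jsonrpc_request(id: u64, method: &str) -> String {
    format!(r#"{{"jsonrpc":"2.0","id":{id},"method":"{method}","params":{{}}}}"#)
}

// Spawn the MCP server for one polling cycle and talk to it over stdio.
// When the cycle ends, the child is killed and dropped: process
// isolation, and no long-lived connection to manage.
fn run_one_cycle() -> std::io::Result<()> {
    let mut child = Command::new("npx")
        .args(["-y", "@notionhq/notion-mcp-server"])
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()?;
    let stdin = child.stdin.as_mut().expect("piped stdin");
    writeln!(stdin, "{}", jsonrpc_request(1, "initialize"))?;
    // ... read responses from child.stdout and do the cycle's work ...
    child.kill()?; // cycle over; the server goes away with it
    Ok(())
}

fn main() {
    println!("{}", jsonrpc_request(1, "initialize"));
}
```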
Each issue gets its own directory via hooks
```yaml
hooks:
  after_create: |
    git -C ~/Developer/my-org/my-repo worktree add \
      {{ workspace }} -b symposium/bug-{{ issue.safe_identifier }}
```
Hooks are Liquid templates with access to issue properties.
Workspace setup is fully user-controlled.
Symposium doesn't know or care that it's a git worktree.
A failing component should never halt the system
Early versions didn't do this.
A single flaky API response would stall the entire pipeline.
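The fix is to contain every failure at the component boundary. A minimal sketch of the pattern, with hypothetical names (`process_issue`, `run_all`): each unit of work returns a `Result`, and the loop logs errors instead of propagating them.

```rust
// One unit of work; "flaky" stands in for a transient API failure.
fn process_issue(id: &str) -> Result<(), String> {
    if id == "flaky" {
        return Err("transient API error".into());
    }
    Ok(())
}

// Returns how many issues succeeded. A single failure is logged
// and skipped; it never stalls the rest of the pipeline.
fn run_all(ids: &[&str]) -> usize {
    let mut ok = 0;
    for id in ids {
        match process_issue(id) {
            Ok(()) => ok += 1,
            Err(e) => eprintln!("issue {id} failed, continuing: {e}"),
        }
    }
    ok
}

fn main() {
    println!("{}", run_all(&["a", "flaky", "b"])); // prints 2
}
```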
The agent writes files. The orchestrator reads them.
No shared memory. No sockets. No database.
Simple to debug, resilient to agent crashes.
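A sketch of that file-based handoff, assuming a status file per workspace (the file name `status.txt` and both function names are hypothetical): the agent writes, the orchestrator reads on its next tick, and a missing file just means nothing has been reported yet.

```rust
use std::fs;
use std::path::Path;

// Agent side: report status by writing a file in the workspace.
fn agent_write_status(dir: &Path, status: &str) -> std::io::Result<()> {
    fs::write(dir.join("status.txt"), status)
}

// Orchestrator side: read it back on the next tick. If the agent
// crashed before writing, we simply get None and try again later.
fn orchestrator_read_status(dir: &Path) -> Option<String> {
    fs::read_to_string(dir.join("status.txt")).ok()
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("symposium-demo");
    fs::create_dir_all(&dir)?;
    agent_write_status(&dir, "pr_opened")?;
    println!("{:?}", orchestrator_read_status(&dir));
    Ok(())
}
```

Because the filesystem is the only shared state, a crashed agent leaves behind exactly what it had written so far, and nothing else to clean up.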
Where the real time savings come from
Each tick checks open PRs for reviewer feedback.
Without this, Symposium is a fancy auto-PR-opener.
With this, it's an autonomous contributor.
Learned from running against a real codebase
Retries catch transient failures.
Judgment failures need human eyes.
That's why everything opens as a draft.
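The retry half of that split can be sketched as a small helper (the `retry` signature here is illustrative, not Symposium's API): transient failures get absorbed inside the loop, while a persistent failure surfaces as an `Err` for a human to look at.

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry a fallible operation up to `attempts` times with a fixed delay.
// Only the last error is kept; callers decide what to do when it sticks.
fn retry<T, E>(
    attempts: u32,
    delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_err = None;
    for _ in 0..attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                sleep(delay);
            }
        }
    }
    Err(last_err.expect("attempts must be > 0"))
}

fn main() {
    // A flaky operation that succeeds on the third call.
    let mut calls = 0;
    let result = retry(3, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("flaky") } else { Ok("done") }
    });
    println!("{result:?} after {calls} calls");
}
```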
From individual issues to multi-task projects
Real projects have dependencies.
Data model before API. API before UI.
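Ordering tasks by their dependencies is a topological sort. A minimal sketch over the example above (task names are illustrative; a real scheduler would also dispatch independent tasks in parallel):

```rust
use std::collections::HashMap;

// Repeatedly pull out tasks whose dependencies are all done.
// Panics on a dependency cycle, since no task would ever become ready.
fn topo_order(deps: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    let mut done: Vec<String> = Vec::new();
    let mut remaining: Vec<&str> = deps.keys().copied().collect();
    remaining.sort(); // deterministic order among ready tasks
    while !remaining.is_empty() {
        let before = remaining.len();
        remaining.retain(|task| {
            let ready = deps[task].iter().all(|d| done.contains(&d.to_string()));
            if ready {
                done.push(task.to_string());
            }
            !ready
        });
        assert!(remaining.len() < before, "dependency cycle");
    }
    done
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("data-model", vec![]);
    deps.insert("api", vec!["data-model"]);
    deps.insert("ui", vec!["api"]);
    println!("{:?}", topo_order(&deps)); // ["data-model", "api", "ui"]
}
```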