Teaching AI to See SwiftUI Previews
The Problem: AI is Blind
AI coding assistants are remarkably good at writing SwiftUI code. They understand the view hierarchy, know when to reach for a LazyVStack over a VStack, and can produce complex layouts from a text description. But they have a fundamental limitation: they can’t see what they’re building.
Think about how you develop UI. You write some code, glance at the preview canvas, tweak a padding value, check again. It’s a tight feedback loop between code and visuals. AI assistants don’t have that loop. They write code, hand it to you, and hope for the best. When something looks off, you describe the problem in words, they try to interpret your description, and the cycle repeats. It’s slow and lossy.
I wanted to close that gap. What if the AI could just… look at the preview?
The Approach
Claude-XcodePreviews is a CLI toolset that programmatically builds and captures SwiftUI previews as screenshots. It integrates with Claude Code as a plugin, so the AI can invoke /preview MyView.swift, get back an image, and actually see the UI it just wrote.
The concept is simple. The implementation taught me a few things.
What I Learned
You Can’t Just Build the App
My first instinct was straightforward: build the app, launch it, take a screenshot. This is a non-starter for a few reasons. Full app builds are slow — often minutes for a large project. They also require the entire dependency graph to resolve, which means dealing with signing, entitlements, backend configurations, and everything else that has nothing to do with the one view you want to preview.
The key insight was to build as little as possible. Instead of building the full app, we create a minimal PreviewHost target — a tiny app whose sole purpose is to display a single SwiftUI view. This target only depends on the modules that the preview file actually imports. For a cached build, this takes about 3-4 seconds. That’s fast enough to feel interactive.
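As a sketch, the build step for such a minimal target might look like the following. The scheme name, destination, and derived-data path here are illustrative assumptions, not the tool's actual interface:

```shell
# Hypothetical sketch: build only the injected PreviewHost scheme, not the
# full app. Scheme name and paths are illustrative.
build_preview_host() {
  xcodebuild \
    -project "$1" \
    -scheme PreviewHost \
    -destination 'generic/platform=iOS Simulator' \
    -derivedDataPath .preview-build \
    build
}
```

Keeping the derived-data path local to the tool is what makes the second and subsequent builds hit the cache.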
Extracting #Preview Blocks is Tricky
SwiftUI’s #Preview macro is convenient for developers but awkward to work with programmatically. We need to extract the preview body from a Swift source file and inject it into our minimal host app. This means parsing Swift code to find the #Preview { ... } block and correctly handling nested braces.
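A minimal sketch of that extraction, assuming the opening brace sits on the `#Preview` line and ignoring the harder cases (braces inside string literals, multiple preview blocks per file):

```shell
# Hypothetical sketch: pull out the body of the first #Preview { ... } block
# by counting braces character by character.
extract_preview_body() {
  awk '
    /#Preview/ && !found { found = 1 }   # start scanning at the macro
    found {
      n = length($0)
      for (i = 1; i <= n; i++) {
        c = substr($0, i, 1)
        if (c == "{") {
          depth++
          if (depth == 1) continue       # skip the block opening brace itself
        } else if (c == "}") {
          depth--
          if (depth == 0) { printf "%s", buf; exit }  # block closed: emit body
        }
        if (depth >= 1) buf = buf c      # capture everything inside the block
      }
      if (depth >= 1) buf = buf "\n"
    }
  ' "$1"
}

# Demo on a throwaway file: prints the view code between the braces.
sample=$(mktemp)
cat > "$sample" <<'EOF'
import SwiftUI

#Preview {
    VStack {
        Text("Hello")
    }
}
EOF
extract_preview_body "$sample"
rm -f "$sample"
```

The nested `VStack { ... }` is why a simple "find the next `}`" approach fails; the depth counter is the whole trick.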
The extracted content gets wrapped into a standalone SwiftUI App that renders the preview in a window. The result is a self-contained binary that shows exactly one thing: your preview.
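The wrapping step can be sketched as a template splice. The struct name and layout below are my own illustration, not the tool's actual generated source:

```shell
# Hypothetical sketch: splice an extracted preview body into a minimal
# SwiftUI host app. The PreviewHostApp name is illustrative.
generate_host_source() {
  cat <<EOF
import SwiftUI

@main
struct PreviewHostApp: App {
    var body: some Scene {
        WindowGroup {
            $1
        }
    }
}
EOF
}

# Demo: wrap a one-line preview body.
generate_host_source 'Text("Hello")'
```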
Keeping the Project Clean
Injecting a temporary target into someone’s Xcode project is inherently messy. The xcodeproj Ruby gem handles the project manipulation — adding the PreviewHost target, configuring its dependencies, setting up build settings. But you need to be meticulous about cleanup.
The tool injects the target, builds it, captures the screenshot, and then removes every trace of the temporary target from the project file. If anything fails mid-process, cleanup still runs. The user’s project file should never be left in a modified state. This was non-negotiable — nobody wants a tool that litters their .xcodeproj with phantom targets.
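The "cleanup always runs" guarantee maps naturally onto a shell EXIT trap. This is a sketch of the pattern, with illustrative names, not the real script: snapshot the project file, do the risky work, and restore the snapshot on any exit path:

```shell
# Hypothetical sketch of the cleanup guarantee. Runs the given command
# against a project file, then restores the pristine file no matter how
# the command ended (success, failure, or signal).
with_clean_project() (
  project="$1"
  restore() { [ -f "$project.backup" ] && mv -f "$project.backup" "$project"; }
  trap restore EXIT                # subshell EXIT trap: fires on every path
  cp "$project" "$project.backup"  # snapshot before any mutation
  shift
  "$@"                             # stand-in for inject + build + capture
)
```

Because the project should *never* stay modified, restoring unconditionally (rather than only on failure) is the right semantic here: the temporary target is removed even after a successful capture.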
Simulator Management is Its Own Problem
You’d think launching an app in a simulator and taking a screenshot would be the easy part. It’s not. You need to find an available simulator (or create one), boot it, wait for it to be ready, install the app, launch it, wait for the UI to render, capture the screen, and then optionally shut it down.
Each of these steps can fail in its own creative way. The simulator might be in a stale state. The boot might hang. The app might crash on launch because of a missing resource bundle. I ended up writing a dedicated sim-manager.sh script just to handle simulator lifecycle — because getting this right is the difference between a tool that works reliably and one that works “sometimes.”
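The happy path of that lifecycle looks roughly like the sequence below. This is an illustration of the steps only; device name, wait times, and error handling are assumptions, and the real sim-manager.sh layers retries, stale-state recovery, and device reuse on top:

```shell
# Illustrative sequence only -- not the actual sim-manager.sh.
capture_from_simulator() {
  udid=$(xcrun simctl create "PreviewSim" "iPhone 15")  # or reuse a device
  xcrun simctl bootstatus "$udid" -b    # boot if needed, block until ready
  xcrun simctl install "$udid" "$1"     # $1: path to the built PreviewHost.app
  xcrun simctl launch "$udid" "$2"      # $2: the host app's bundle identifier
  sleep 2                               # crude wait for the first rendered frame
  xcrun simctl io "$udid" screenshot "$3"  # $3: output PNG path
  xcrun simctl shutdown "$udid"
}
```

Nearly every line here is a distinct failure mode, which is exactly why the lifecycle deserved its own script.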
Resource Bundles Need Special Attention
One subtle issue: views that reference colors, images, or other assets from resource bundles will crash at runtime if those bundles aren’t included in the build. The tool automatically detects and includes asset bundles from the target’s dependencies. Without this, any view using a design system or theme would render as a crash log instead of a screenshot.
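The detection part can be as simple as locating asset catalogs under the source tree so they can be attached to the host target's resources. A sketch, with illustrative exclusion paths:

```shell
# Hypothetical sketch: find asset catalogs under a source root so they can
# be added to the PreviewHost target as resources. Excluded paths are
# illustrative.
find_asset_catalogs() {
  find "$1" -type d -name '*.xcassets' \
    -not -path '*/.build/*' -not -path '*/DerivedData/*'
}
```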
What Changes When AI Can See
The workflow shift is meaningful. Instead of describing what’s wrong with a layout (“the spacing between the title and subtitle is too large, and the button doesn’t extend to full width”), you tell Claude to preview the view, and it can see the problem directly. It can iterate on its own — adjust values, re-preview, verify the fix — without round-tripping through your verbal description of the visual output.
It also changes how you prompt. Instead of painstakingly describing a design, you can say “make this look like the screenshot” and attach a reference image. Claude can build the view, preview it, compare with the reference, and iterate. The feedback loop that makes UI development productive for humans now works for AI too.
Why Not Just Use Xcode’s MCP?
With Xcode 26.3, Apple shipped a built-in MCP server (xcrun mcpbridge) that exposes 20 tools to AI agents — including RenderPreview, which can capture SwiftUI previews and return images. If you’ve seen the announcement, you might reasonably wonder: why bother with Claude-XcodePreviews when Apple provides this natively?
For a single agent working on a single branch, RenderPreview is genuinely simpler. No plugin install, no Ruby gem, no simulator management. It renders through Xcode’s own preview canvas and just works. If that’s your workflow, use it.
But the Xcode MCP is fundamentally single-threaded. It’s bound to one running Xcode instance, with one project open, on one branch. The moment you want to parallelize — and if you’ve read my post on Claudio, you know I think parallelization is the future of AI-assisted development — the model breaks down.
Consider a typical Claudio session: three or four Claude Code instances, each working in its own git worktree on its own branch, building different features simultaneously. Each agent might be iterating on a different SwiftUI view and needs visual feedback. The Xcode MCP can’t serve them — there’s one Xcode, one project state, one preview canvas. You can’t have four agents all calling RenderPreview against four different branches at once.
Claude-XcodePreviews doesn’t have this constraint. It works from the terminal, builds a minimal PreviewHost target with xcodebuild, and captures from the simulator independently. No Xcode GUI required. Each agent in each worktree can run its own preview capture in isolation. The tool operates on whatever files are in its working directory — it doesn’t care whether that directory is the main checkout or a worktree three levels deep in .claudio/worktrees/.
There are smaller differences too. The Xcode MCP bridge triggers an “Allow access to Xcode?” dialog for every new agent PID, which gets disruptive during longer sessions. Claude-XcodePreviews is just shell scripts calling standard developer tools — no permission dialogs.
The tradeoff is real: Claude-XcodePreviews has more moving parts (simulator management, project injection, Ruby dependency). For a single-agent, Xcode-open workflow, the MCP is the simpler path. But the moment you scale to parallel agents and worktrees, a terminal-native tool that operates independently per-worktree isn’t just convenient — it’s the only option that works.
Using It
Claude-XcodePreviews is available as a Claude Code plugin. Install it and the /preview slash command becomes available:
```shell
# In Claude Code
/preview path/to/MyView.swift
```
It auto-detects whether you’re working in an Xcode project or an SPM package and picks the right strategy. For Xcode projects, it uses dynamic target injection. For SPM packages, it creates a temporary Xcode project that pulls in the local package as a dependency.
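The detection itself likely reduces to checking what sits in the working directory. A sketch under that assumption (the real tool's checks may be more involved):

```shell
# Hypothetical sketch of project-kind auto-detection.
detect_project_kind() {
  if ls "$1"/*.xcodeproj >/dev/null 2>&1; then
    echo "xcodeproj"   # use dynamic target injection
  elif [ -f "$1/Package.swift" ]; then
    echo "spm"         # generate a temporary project wrapping the local package
  else
    echo "unknown"
  fi
}
```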
The tool is open source on GitHub. If you’re working with SwiftUI and Claude Code, give it a try. And if you run into edge cases — I’m sure there are plenty — contributions are welcome.