Introducing
ListKit
A fast, pure-Swift diffable data source for UICollectionView
github.com/Iron-Ham/ListKit
OPEN WITH: "Every iOS app has lists. Feeds, inboxes, settings screens, catalogs — lists everywhere. Apple gave us UICollectionViewDiffableDataSource to make them easier to build. The model is great: describe your data as an immutable snapshot, hand it off, and let the framework figure out the animations. But the implementation? The implementation is slow. This talk is about why, and what we can do about it."
PAUSE briefly, then advance to the next slide.
TARGET TIME: ~30 seconds on this slide.
The Problem
Apple's diffable data source was a big step forward. But the implementation is slow.
1.2 ms
Apple — build 10k items
0.002 ms
ListKit — build 10k items
752x
faster
SAY: "Let's start with one number. How long does it take to build a snapshot with 10,000 items?"
[CLICK → fragment 1: Apple's 1.2ms appears]
SAY: "With Apple's NSDiffableDataSourceSnapshot: 1.2 milliseconds. That sounds fine, right? It's just a millisecond. But watch this."
[CLICK → fragment 2: ListKit's 0.002ms appears]
SAY: "With ListKit: 0.002 milliseconds. Two microseconds. For the same 10,000 items."
PAUSE — let the audience read both numbers.
[CLICK → fragment 3: 752x appears]
SAY: "That's 752 times faster. And this is just snapshot construction — before we even get to the diff. Before any animation happens. Just building the data structure that describes your list."
SAY: "So how did we get here? Let me show you the full picture."
TARGET TIME: ~1.5 minutes total by end of this slide.
The Full Picture
vs Apple's NSDiffableDataSourceSnapshot
Operation | ListKit | Apple | Speedup
Build 10k items | 0.002 ms | 1.223 ms | 752x
Build 50k items | 0.006 ms | 6.010 ms | 1,045x
Build 100 sections × 100 | 0.060 ms | 3.983 ms | 66x
Query itemIdentifiers 100× | 0.051 ms | 46.364 ms | 908x
Reload 5k items | 0.099 ms | 1.547 ms | 15.7x
Release config • median of 15 • 5 warmup iterations • Apple Silicon
SAY: "Here's the full benchmark table — ListKit vs Apple's NSDiffableDataSourceSnapshot across five operations."
WALK THROUGH EACH ROW:
- "Build 10k items — that's the 752x we just saw."
- "Scale it to 50k and the gap gets worse: over 1,000x. Apple's cost grows linearly; ListKit stays nearly flat."
- "Build 100 sections with 100 items each — 66x. Sectioned snapshots are slower for both, but Apple pays significantly more."
- "Now look at this row: query itemIdentifiers 100 times. Apple takes 46 milliseconds. ListKit takes 51 microseconds. That's 908x faster."
EMPHASIZE: "This one matters a lot in practice. If your compositional layout's size provider calls itemIdentifiers during sizing — and many do — you're paying 46ms on every layout pass. That's almost three dropped frames just from querying your own data."
- "Reload 5k items — 15.7x. This is the smallest gap, and it's still an order of magnitude."
SAY: "All benchmarks: release config, median of 15 runs, 5 warmup iterations, Apple Silicon. These are stable, reproducible numbers."
TARGET TIME: ~2.5 minutes total.
What Is ListKit?
Two Swift modules. One package.
Lists
SwiftUI Wrappers (SimpleListView, GroupedListView, OutlineListView)
Pre-Built Configs (SimpleList, GroupedList, OutlineList)
Data Sources + Builder DSL (ListDataSource, SnapshotBuilder)
depends on
▼
ListKit
Snapshot (DiffableDataSourceSnapshot)
Diff Engine (HeckelDiff, SectionedDiff, DataSource)
SAY: "ListKit is actually two Swift modules shipped as one Swift package."
POINT TO BOTTOM (ListKit module):
SAY: "At the bottom is ListKit — the engine. It contains the snapshot type, DiffableDataSourceSnapshot, the Heckel diff algorithm, SectionedDiff for two-level diffing, and a drop-in replacement for Apple's UICollectionViewDiffableDataSource. If you want full control, you use ListKit directly. It's API-compatible with Apple's types — same method signatures, same behavior, just faster."
POINT TO TOP (Lists module):
SAY: "On top is Lists — the convenience layer. This is where the SwiftUI wrappers live: SimpleListView, GroupedListView, OutlineListView. It also has the pre-built configurations for UIKit, the result builder DSL for declarative snapshot construction, and the CellViewModel protocol that handles cell registration and dequeuing automatically."
POINT TO ARROW:
SAY: "Lists depends on ListKit, but not the other way around. You can adopt ListKit alone for the performance wins without buying into the convenience abstractions. Or use both together. Zero dependencies beyond UIKit."
TARGET TIME: ~3 minutes total.
Why Apple's Snapshot Is Slow
Apple's NSDiffableDataSourceSnapshot
Objective-C class — heap allocated
Every item boxed to AnyHashable
Eagerly rebuilds internal hash maps on every mutation
Queries reconstruct results from map each time
ListKit's DiffableDataSourceSnapshot
Swift struct — stack / inline allocated
Generic <T: Hashable> — no boxing
Lazy reverse map — only built when needed
Queries return raw arrays directly
SAY: "So why is Apple's snapshot slow? Let's break it down side by side."
POINT TO LEFT (Apple) BOX — walk through each line:
- "It's an Objective-C class. Heap allocated. Every snapshot creation is a malloc."
- "Every item you add gets boxed into AnyHashable. That means bridging overhead, reference counting, and a loss of type information."
- "Here's the big one: it eagerly rebuilds its internal hash maps on every mutation. Every time you call appendItems or appendSections, it reconstructs the reverse index. If you're building a snapshot with 20 sections, that's 20 rebuilds before you even apply."
- "And queries like itemIdentifiers reconstruct their results from the map every time. No caching."
[CLICK → fragment 1: ListKit box appears]
POINT TO RIGHT (ListKit) BOX — walk through each line:
- "ListKit's snapshot is a Swift struct. Stack allocated or inline — no heap allocation for the snapshot itself."
- "It's generic over your Hashable type. No boxing, no AnyHashable. The compiler can inline hash and equality checks."
- "The reverse map — the item-to-section lookup — is lazy. It's only built when you actually need it, which is only for mutation methods like deleteItems or insertItems(before:)."
- "And queries return the raw backing arrays directly. itemIdentifiers(inSection:) is a single array subscript. O(1), no reconstruction."
SAY: "The key insight: the typical snapshot lifecycle is build, diff, apply. None of those steps need the reverse map. So we just... don't build it."
TARGET TIME: ~4 minutes total.
Parallel Array Storage
The data structure that makes everything fast
// The four fields of ListKit's snapshot
var sectionIdentifiers = [SectionID]()
var sectionItemArrays = [[ItemID]]() // parallel to sections
var sectionIndex = [SectionID: Int]() // section → position
var _itemToSection: [ItemID: SectionID]? // ← LAZY
sectionIdentifiers[i] and sectionItemArrays[i] always refer to the same section.
itemIdentifiers(inSection:) returns the raw array. No reconstruction.
The lazy map is the key.
The common path — append sections, append items, apply — never builds the reverse map.
It's only constructed on demand by mutation methods like deleteItems or insertItems(before:).
SAY: "Let's look at the actual data structure. ListKit's snapshot has exactly four fields."
POINT TO CODE:
- "sectionIdentifiers — a flat array of your section IDs."
- "sectionItemArrays — a parallel array of item arrays, one per section. Index 0 of sectionIdentifiers and index 0 of sectionItemArrays always refer to the same section."
- "sectionIndex — a dictionary mapping section IDs to their position. This is the only dictionary on the read path."
- "And _itemToSection — the reverse map from items back to their containing section. Notice the optional — it starts as nil."
[CLICK → fragment 1: parallel array explanation appears]
SAY: "The parallel array design is what makes reads fast. When UICollectionView asks 'what items are in section 3?', the answer is sectionItemArrays[3]. A single array subscript. No hashing, no reconstruction. The raw array is returned directly."
SAY: "And itemIdentifiers(inSection:) just returns that array. Apple's version reconstructs it from a hash map every time — that's why it's 908x slower."
[CLICK → fragment 2: lazy map takeaway appears]
SAY: "But the real win is the lazy reverse map. The common path — append sections, append items, apply to the collection view — never touches _itemToSection. It stays nil the entire time. You only pay for it if you call mutation methods like deleteItems or insertItems(before:), which need to look up which section an item belongs to."
SAY: "This is where the 752x comes from. It's not algorithmic cleverness — it's data structure design. Apple eagerly maintains both directions. We only build the reverse when you ask for it."
TARGET TIME: ~5 minutes total.
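SPEAKER REFERENCE (not a slide): the lazy reverse map can be sketched in a few lines of Swift. This is an illustrative miniature with hypothetical names (`MiniSnapshot`, `section(containing:)`), not ListKit's actual source; it just demonstrates the "stays nil on the hot path, rebuilt on demand" behavior described above.

```swift
// Illustrative sketch of the lazy reverse map. Not ListKit's source.
struct MiniSnapshot<Section: Hashable, Item: Hashable> {
    private(set) var sectionIdentifiers: [Section] = []
    private(set) var sectionItemArrays: [[Item]] = []   // parallel to sectionIdentifiers
    private var sectionIndex: [Section: Int] = [:]      // section → array position
    private var _itemToSection: [Item: Section]?        // stays nil on the hot path

    mutating func appendSection(_ section: Section) {
        sectionIndex[section] = sectionIdentifiers.count
        sectionIdentifiers.append(section)
        sectionItemArrays.append([])
        _itemToSection = nil                            // invalidate; rebuilt on demand
    }

    mutating func appendItems(_ items: [Item], toSection section: Section) {
        guard let i = sectionIndex[section] else { return }
        sectionItemArrays[i].append(contentsOf: items)
        _itemToSection = nil
    }

    // Read path: a single array subscript, no dictionary reconstruction.
    func itemIdentifiers(inSection section: Section) -> [Item] {
        guard let i = sectionIndex[section] else { return [] }
        return sectionItemArrays[i]
    }

    // Only mutations that need item → section lookup ever pay for this.
    mutating func section(containing item: Item) -> Section? {
        if _itemToSection == nil {
            var map: [Item: Section] = [:]
            for (i, section) in sectionIdentifiers.enumerated() {
                for it in sectionItemArrays[i] { map[it] = section }
            }
            _itemToSection = map
        }
        return _itemToSection?[item]
    }
}

var snap = MiniSnapshot<String, Int>()
snap.appendSection("a")
snap.appendItems([1, 2, 3], toSection: "a")
print(snap.itemIdentifiers(inSection: "a"))   // [1, 2, 3]
print(snap.section(containing: 2) ?? "none")  // a
```

The build-diff-apply lifecycle only ever touches the first three fields, which is the "we just don't build it" point from the slide.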
The Heckel Diff
Paul Heckel, 1978 — six passes, O(n) average
1. Scan new: build symbol table with occurrence counts
2. Scan old: same table, record old-side counts
3. Match uniques: unique in both → definite match (the key insight)
4. Expand forward: extend matches to adjacent equal elements
5. Expand backward: same, in reverse
6. Collect: unmatched old → delete, unmatched new → insert, changed order → move
+ LIS move minimization.
After matching, a Longest Increasing Subsequence identifies items already in correct relative order.
Only items outside the LIS need explicit move operations.
SAY: "Now let's talk about the diff algorithm. ListKit uses Paul Heckel's diff from 1978. It's the same algorithm that IGListKit uses. It runs in O(n) average time — six linear passes over the old and new arrays."
[CLICK → fragment 1: passes 1-2 appear]
SAY: "Passes 1 and 2 scan the new and old arrays, building a shared symbol table. For each element, the table tracks how many times it appears in each array and where."
[CLICK → fragment 2: pass 3 appears]
SAY: "Pass 3 is the key insight of the whole algorithm. If an element appears exactly once in the old array and exactly once in the new array, it must be the same element. There's no ambiguity. These unique elements become anchor points — definite matches that we build the rest of the diff around."
[CLICK → fragment 3: passes 4-5 appear]
SAY: "Passes 4 and 5 expand outward from those anchors. If element i in the old array matched element j in the new array, check if i+1 matches j+1. Then do the same backward: does i-1 match j-1? This propagates matches to adjacent elements even if they aren't unique themselves. Think of it like growing blocks of matched content outward from known anchor points."
[CLICK → fragment 4: pass 6 appears]
SAY: "Pass 6 collects the results. Anything unmatched in the old array is a delete. Anything unmatched in the new array is an insert. Matched elements that changed position are moves."
[CLICK → fragment 5: LIS takeaway appears]
SAY: "One more optimization on top: after matching, we run a Longest Increasing Subsequence to identify items that are already in correct relative order. Only items outside the LIS need explicit move operations. This means fewer performBatchUpdates calls and smoother animations — the collection view does less work."
TARGET TIME: ~6.5 minutes total.
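SPEAKER REFERENCE (not a slide): a compressed sketch of passes 1-3 and 6. The expansion passes 4-5 and the real offset-aware move bookkeeping are elided, so this is cruder than ListKit's diff; `heckelSketch` and its types are hypothetical names.

```swift
// Compressed Heckel sketch: symbol table (passes 1-2), unique matching
// (pass 3), collection (pass 6). Expansion passes elided for brevity.
struct SymbolEntry {
    var oldCount = 0
    var newCount = 0
    var oldIndex = -1
}

struct DiffResult {
    var deletes: [Int] = []                // indices into the old array
    var inserts: [Int] = []                // indices into the new array
    var moves: [(from: Int, to: Int)] = []
}

func heckelSketch<T: Hashable>(old: [T], new: [T]) -> DiffResult {
    var table: [T: SymbolEntry] = [:]

    // Pass 1: scan new, count occurrences.
    for element in new { table[element, default: SymbolEntry()].newCount += 1 }

    // Pass 2: scan old, count occurrences and remember positions.
    for (i, element) in old.enumerated() {
        table[element, default: SymbolEntry()].oldCount += 1
        table[element]!.oldIndex = i
    }

    // Pass 3: an element unique on both sides must be the same element.
    var newToOld = [Int?](repeating: nil, count: new.count)
    var oldToNew = [Int?](repeating: nil, count: old.count)
    for (j, element) in new.enumerated() {
        if let e = table[element], e.oldCount == 1, e.newCount == 1 {
            newToOld[j] = e.oldIndex
            oldToNew[e.oldIndex] = j
        }
    }

    // Pass 6: unmatched old → delete, unmatched new → insert,
    // matched but displaced → move (naive position check).
    var result = DiffResult()
    for (i, j) in oldToNew.enumerated() where j == nil { result.deletes.append(i) }
    for (j, maybeI) in newToOld.enumerated() {
        if let i = maybeI {
            if i != j { result.moves.append((from: i, to: j)) }
        } else {
            result.inserts.append(j)
        }
    }
    return result
}

let diff = heckelSketch(old: ["a", "b", "c"], new: ["c", "b", "d"])
print(diff.deletes, diff.inserts)   // [0] [2]
```

Each pass is a single linear scan over one array plus O(1) hash-table work, which is where the O(n) average bound comes from.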
SectionedDiff
The layer that makes real-world usage fast
1. Diff section arrays with Heckel
2. For each surviving section: skip if items unchanged
3. Diff items per section with Heckel
4. Reconcile cross-section moves
The skip in step 2 is why the no-change case is so dramatic:
9.5 ms
IGListKit — no change, 10k
0.09 ms
ListKit — no change, 10k
SAY: "The Heckel diff works great for flat arrays. But real apps have sections. SectionedDiff is the layer that makes sectioned lists fast in practice."
SAY: "Step 1: diff the section identifiers themselves using Heckel. Which sections were added, removed, or reordered?"
[CLICK → fragment 1: step 2 appears]
SAY: "Step 2 — and this is the critical optimization — for each section that survived the section-level diff, check: did the items actually change? If the item array is identical, skip it entirely. No diff needed."
[CLICK → fragment 2: step 3 appears]
SAY: "Step 3: for the sections where items did change, run Heckel on each one individually. Small, focused diffs."
[CLICK → fragment 3: step 4 appears]
SAY: "Step 4: reconcile cross-section moves. If an item was deleted from section A and an identical item was inserted into section B, convert that delete+insert pair into a move. This produces correct slide animations instead of fade-out/fade-in. This is something IGListKit can't do — it diffs a flat array and can't distinguish a move from a delete+insert."
[CLICK → fragment 4: no-change comparison appears]
SAY: "The skip in step 2 is why the no-change case is so dramatic. Think about what happens in a reactive architecture — TCA, for example — where state changes trigger full snapshot rebuilds. Most of the time, the list data hasn't actually changed. IGListKit still walks the entire 10,000-item flat array: 9.5 milliseconds. ListKit checks each section's array identity — are these the same arrays? — and returns in 0.09 milliseconds. 106x faster. The most common case is basically free."
TARGET TIME: ~7.5 minutes total.
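SPEAKER REFERENCE (not a slide): the step-2 skip reduces to a per-section equality guard. The talk notes ListKit checks array identity first; this sketch falls back to plain element equality for simplicity, and the names (`SectionData`, `sectionsNeedingItemDiff`) are hypothetical.

```swift
// Sketch of the step-2 skip: surviving sections whose item arrays are
// identical never reach the Heckel pass. Illustrative names only.
struct SectionData<Section: Hashable, Item: Hashable> {
    let id: Section
    let items: [Item]
}

// Returns the ids of surviving sections that actually need an item-level diff.
func sectionsNeedingItemDiff<S: Hashable, I: Hashable>(
    old: [SectionData<S, I>],
    new: [SectionData<S, I>]
) -> [S] {
    let oldItemsById = Dictionary(uniqueKeysWithValues: old.map { ($0.id, $0.items) })
    var needsDiff: [S] = []
    for section in new {
        // Sections absent from old are plain inserts; no item diff needed.
        guard let oldItems = oldItemsById[section.id] else { continue }
        // Step 2: identical item arrays → skip the Heckel pass entirely.
        if oldItems != section.items {
            needsDiff.append(section.id)
        }
    }
    return needsDiff
}

let oldSections = [SectionData(id: "a", items: [1, 2]), SectionData(id: "b", items: [3])]
let newSections = [SectionData(id: "a", items: [1, 2]), SectionData(id: "b", items: [3, 4])]
print(sectionsNeedingItemDiff(old: oldSections, new: newSections))   // ["b"]
```

In the reactive no-change case every section falls into the skip branch, which is why that benchmark row collapses to near zero.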
vs IGListKit
Same algorithm. Different language, different architecture.
Operation | IGListKit | ListKit | Speedup
Diff 10k (50% overlap) | 10.8 ms | 3.9 ms | 2.8x
Diff 50k (50% overlap) | 55.4 ms | 19.6 ms | 2.8x
Diff no-change 10k | 9.5 ms | 0.09 ms | 106x
Diff shuffle 10k | 9.8 ms | 3.2 ms | 3.1x
Language: ObjC++ with message dispatch vs Swift generics with inlined hash/equality
Architecture: Flat array diff vs per-section diff with skip optimization
Moves: LIS minimization → fewer performBatchUpdates calls
SAY: "Now let's compare directly against IGListKit — Meta's open source diffing library. Same Heckel algorithm under the hood. Different language, different architecture."
WALK THROUGH TABLE:
- "Diff 10k items with 50% overlap — IGListKit: 10.8ms, ListKit: 3.9ms. 2.8x faster."
- "Scale to 50k — same 2.8x ratio. Both scale linearly, but ListKit has a lower constant factor."
- "No-change 10k — here's our 106x again. IGListKit still walks the full flat array. ListKit's per-section skip makes this nearly free."
- "Shuffle 10k — worst case for any diff. ListKit: 3.1x faster. Even when everything moves, the constant factor advantage holds."
[CLICK → fragment: Language line appears]
SAY: "Where does the constant factor come from? First, language. IGListKit is Objective-C++. Every element comparison is an Objective-C message send through isEqual: and hash. ListKit is pure Swift with Hashable conformance — the compiler inlines the hash and equality functions directly. No message dispatch."
[CLICK → fragment: Architecture line appears]
SAY: "Second, architecture. IGListKit diffs a single flat array. If you have sections, you flatten everything and manage section boundaries yourself through IGListSectionController. ListKit diffs sections first, then items per section. Unchanged sections are skipped entirely."
[CLICK → fragment: Moves line appears]
SAY: "Third, move minimization. The LIS optimization means fewer explicit move operations, which translates to fewer performBatchUpdates calls and smoother animations."
TARGET TIME: ~8.5 minutes total.
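SPEAKER REFERENCE (not a slide): the LIS move minimization from the third bullet can be sketched as follows. Given each matched item's old index listed in new order, items on a longest increasing subsequence are already in correct relative order and need no explicit move. Illustrative only; the O(n²) LIS is used for clarity where production code would use the O(n log n) variant.

```swift
// Sketch: decide which matched items need explicit move operations.
// oldIndices[j] is the old position of the item now at new position j.
func indicesNeedingMoves(_ oldIndices: [Int]) -> [Int] {
    let n = oldIndices.count
    guard n > 0 else { return [] }
    var length = [Int](repeating: 1, count: n)  // LIS length ending at j
    var prev = [Int](repeating: -1, count: n)   // backpointer for recovery
    for j in 1..<n {
        for k in 0..<j where oldIndices[k] < oldIndices[j] && length[k] + 1 > length[j] {
            length[j] = length[k] + 1
            prev[j] = k
        }
    }
    // Walk back from the longest chain to collect the LIS members.
    var end = length.indices.max(by: { length[$0] < length[$1] })!
    var onLIS = Set<Int>()
    while end != -1 {
        onLIS.insert(end)
        end = prev[end]
    }
    // Everything off the LIS gets an explicit move.
    return (0..<n).filter { !onLIS.contains($0) }
}

// Three items where only the first is out of relative order:
print(indicesNeedingMoves([2, 0, 1]))   // [0]
```

One move instead of three is exactly the "fewer performBatchUpdates calls" point: the collection view animates only the items that actually left their relative order.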
The API
Speed means nothing if it's painful to use
Define a view model:
struct Contact: CellViewModel, Identifiable {
    typealias Cell = UICollectionViewListCell
    let id: UUID
    let name: String

    func configure(_ cell: UICollectionViewListCell) {
        var content = cell.defaultContentConfiguration()
        content.text = name
        cell.contentConfiguration = content
    }
}
Use the DSL:
await dataSource.apply {
    SnapshotSection("favorites") {
        for contact in favorites {
            contact
        }
    }
    SnapshotSection("all") {
        for contact in allContacts {
            contact
        }
    }
}
SAY: "All of that performance means nothing if the API is painful. So let's talk about what it's actually like to use."
[CLICK → fragment 1: CellViewModel code appears]
SAY: "The core abstraction is CellViewModel. It's a small protocol: Hashable, Sendable, and one method — configure. The associated type Cell tells the framework which UICollectionViewCell subclass to use. That's it."
SAY: "Notice: no register call. No dequeue call. No reuse identifier strings. The framework handles all of that automatically from the associated type. You define your view model, and cell registration just works."
SAY: "The Identifiable conformance here is doing more than you'd expect. When a CellViewModel is also Identifiable, Lists automatically synthesizes Hashable and Equatable based on the id only. This means the diff uses identity to match elements — which is fast — and content changes like updating a name trigger a reconfigureItems call rather than a delete-and-insert. Better animations, less work."
[CLICK → fragment 2: DSL code appears]
SAY: "And here's how you apply data. The result builder DSL lets you describe your snapshot declaratively. SnapshotSection defines a section, and you list the items inside it. This supports if/else, for loops, optionals, array passthrough — the full set of Swift result builder features."
SAY: "Under the hood, this builds the same DiffableDataSourceSnapshot you'd build imperatively. It's syntax sugar, not a different data model. The apply is async and runs the diff against the current state automatically."
TARGET TIME: ~9 minutes total (combined with next slide).
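SPEAKER REFERENCE (not a slide): this is what the identity-from-id synthesis amounts to. Per the talk, Lists generates it automatically when a CellViewModel is also Identifiable; it is written out by hand here to show the effect, and `Contact` is a standalone stand-in, not the library's code.

```swift
// Identity-only Equatable/Hashable, based on `id` alone. Lists reportedly
// synthesizes this when a CellViewModel is also Identifiable; written out
// by hand here to show the effect.
struct Contact: Identifiable, Hashable {
    let id: Int
    var name: String   // content: not part of identity

    static func == (lhs: Contact, rhs: Contact) -> Bool { lhs.id == rhs.id }
    func hash(into hasher: inout Hasher) { hasher.combine(id) }
}

let stored  = Contact(id: 1, name: "Ada")
let updated = Contact(id: 1, name: "Ada Lovelace")
// Same identity → the diff matches them instead of emitting delete + insert;
// a separate content check can then trigger reconfigureItems.
print(stored == updated)   // true
```

Because hashing touches only `id`, the O(n) diff never pays for comparing the rest of the model's fields.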
Pre-Built Configs & SwiftUI
UIKit — one object, done:
let list = SimpleList<Contact>(
    appearance: .insetGrouped
)
view.addSubview(list.collectionView)
await list.setItems(contacts)
SwiftUI — modifier chain:
SimpleListView(
    items: contacts,
    appearance: .insetGrouped
)
.onSelect { contact in
    navigate(to: contact)
}
.onDelete { contact in
    remove(contact)
}
.onRefresh {
    await reload()
}
Also: GroupedList for multi-section with headers/footers
• OutlineList for hierarchical expand/collapse
• MixedListDataSource for heterogeneous cell types
SAY: "For common patterns, you don't even need the DSL. The Lists module provides three pre-built configurations that handle everything: layout, data source, and delegate management."
POINT TO LEFT (UIKit) CODE:
SAY: "In UIKit: create a SimpleList, add its collection view to your view hierarchy, call setItems. Three lines. The SimpleList owns the collection view, the layout, and the data source internally. You just hand it data."
[CLICK → fragment 1: SwiftUI code appears]
POINT TO RIGHT (SwiftUI) CODE:
SAY: "In SwiftUI: SimpleListView with a modifier chain. onSelect, onDelete, onRefresh — these are closure-based APIs that receive your typed model, not an IndexPath. You never have to manually resolve an IndexPath to an item."
SAY: "The SwiftUI wrappers are UIViewRepresentable under the hood. They're driven by @State — when your items array changes, the view automatically diffs and applies with animations."
[CLICK → fragment 2: other configurations appear]
SAY: "Beyond SimpleList, there's GroupedList for multi-section layouts with headers and footers, OutlineList for hierarchical expand/collapse like a sidebar, and MixedListDataSource for heterogeneous cell types — where you have different kinds of cells in the same list."
SAY: "This is the high-level Lists module. All the diffing speed we just talked about, none of the setup."
TARGET TIME: ~9 minutes total.
Two Kinds of Equality
A subtle but important design decision
Identity (Hashable)
Did this item move, get inserted, or deleted?
Runs inside the O(n) diff — must be fast
Free via Identifiable
Content (ContentEquatable)
Same identity, but did the visible data change?
Runs only on matched pairs — after the diff
Triggers reconfigureItems automatically
Kept orthogonal by design. Neither is required. Both are opt-in.
Identifiable gives you identity-from-id for free.
ContentEquatable adds content-diff for free.
SAY: "There's one more design decision worth understanding: ListKit separates identity from content. These are two different kinds of equality, and conflating them is a common source of bugs and performance problems."
POINT TO LEFT (Identity) BOX:
SAY: "Identity equality answers: is this the same item? Did it move, get inserted, or get deleted? This runs inside the O(n) diff — every item is compared — so it must be fast. If your CellViewModel conforms to Identifiable, you get this for free: Hashable and Equatable are synthesized from the id property only."
[CLICK → fragment: Content box appears]
POINT TO RIGHT (Content) BOX:
SAY: "Content equality answers a different question: same item, but did the visible data change? Did the user's name update? Did a read status toggle? This only runs on matched pairs — items that survived the diff — so it's a much smaller set. When content changes, ListKit automatically calls reconfigureItems, which updates the cell in place. No delete-and-insert, no cell recreation — just update the content configuration. Cheaper and produces better animations."
[CLICK → fragment: orthogonal note appears]
SAY: "These two concerns are kept orthogonal by design. Neither is required. Both are opt-in. Identifiable gives you fast identity diffing. ContentEquatable adds content-change detection on top. You can use one, both, or neither."
SAY: "The trade-off is subtle: two items with the same id but different content are considered equal by the diff. The diff won't detect content-only changes. You need ContentEquatable for that. This is intentional and documented, but it's worth understanding before you adopt the pattern."
TARGET TIME: ~9.5 minutes total.
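SPEAKER REFERENCE (not a slide): the two checks in sequence. `ContentEquatable` is named on the slide, but this particular signature and the `idsToReconfigure` helper are assumptions made for illustration.

```swift
// Sketch of identity vs content equality in sequence. The protocol name
// comes from the talk; the signature and helper below are assumptions.
protocol ContentEquatable {
    func isContentEqual(to other: Self) -> Bool
}

struct Row: Hashable, ContentEquatable {
    let id: Int
    var isRead: Bool

    // Identity: what the O(n) diff hashes and compares.
    static func == (lhs: Row, rhs: Row) -> Bool { lhs.id == rhs.id }
    func hash(into hasher: inout Hasher) { hasher.combine(id) }

    // Content: checked only on pairs the diff already matched.
    func isContentEqual(to other: Row) -> Bool { isRead == other.isRead }
}

// After the diff, matched pairs whose content changed get reconfigured.
func idsToReconfigure(_ pairs: [(old: Row, new: Row)]) -> [Int] {
    pairs.filter { !$0.old.isContentEqual(to: $0.new) }.map { $0.new.id }
}

let pairs = [
    (old: Row(id: 1, isRead: false), new: Row(id: 1, isRead: true)),   // changed
    (old: Row(id: 2, isRead: true),  new: Row(id: 2, isRead: true)),   // unchanged
]
print(idsToReconfigure(pairs))   // [1]
```

The content pass runs over matched pairs only, a far smaller set than the full diff, which is why keeping the two kinds of equality separate is cheap.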
Takeaways
Data structures > algorithms.
ListKit and IGListKit use the same Heckel algorithm. The 2.8x gap is Swift value types vs ObjC objects,
parallel arrays vs dictionaries, lazy maps vs eager indexing.
The 752x snapshot gap is entirely data structure design.
The fastest code is code that doesn't run.
Per-section skip. Lazy reverse map. Structural-changes-only fast path.
The biggest wins come from identifying work that can be skipped entirely.
Ergonomics don't have to cost performance.
Lists adds ViewModels, result builders, SwiftUI wrappers, and type erasure.
AnyItem precomputes its hash at wrap time and short-circuits cross-type equality
with a single pointer compare. You can have both.
SAY: "Let me leave you with three takeaways."
[CLICK → fragment 1: first takeaway appears]
SAY: "First: data structures matter more than algorithms. ListKit and IGListKit use the exact same Heckel diff algorithm. The 2.8x constant-factor gap comes from Swift value types vs Objective-C objects, ContiguousArray vs NSArray, parallel arrays vs dictionaries. And the 752x snapshot gap? That's entirely data structure design — lazy maps vs eager indexing. Choosing the right data structure gave us three orders of magnitude."
[CLICK → fragment 2: second takeaway appears]
SAY: "Second: the fastest code is code that doesn't run. Per-section skip means unchanged sections cost nothing. The lazy reverse map means the hot path never builds a dictionary. Structural-changes-only fast paths mean common operations are nearly free. The biggest performance wins in this library came from identifying work that could be skipped entirely, not from making work faster."
[CLICK → fragment 3: third takeaway appears]
SAY: "Third: ergonomics don't have to cost performance. The Lists module adds CellViewModels, result builder DSLs, SwiftUI wrappers, and type erasure with AnyItem. These are real abstractions with real convenience. But AnyItem precomputes its hash at wrap time and short-circuits cross-type equality with a single pointer comparison. The result builder produces the same snapshot as imperative code. You can have a nice API and fast execution. You don't have to choose."
PAUSE.
SAY: "Thank you."
TARGET TIME: ~10 minutes total.