Local-First Software Is Having a Moment — CRDTs, Automerge, and the End of "Save" Buttons

The local-first software movement has gone from a fringe idea discussed in academic circles to one of the most exciting paradigm shifts in application development in 2025-2026. If you haven’t read the Ink & Switch “Local-First Software” paper from 2019, it’s worth revisiting — not because the ideas are new, but because the ecosystem has finally caught up to the vision.

CRDTs Have Grown Up

The core technology enabling local-first is CRDTs — Conflict-free Replicated Data Types. These data structures allow multiple users (or devices) to edit the same data independently and merge changes without conflicts. What used to be a PhD-level topic is now shipping in production libraries:

  • Automerge 2.0 rewrote the internals in Rust for dramatic performance improvements. Documents that used to choke at 10K operations now handle millions. The JavaScript bindings are seamless via WASM.
  • Yjs continues to be the go-to for real-time collaborative editing. Its ecosystem of editor bindings (ProseMirror, CodeMirror, Monaco) makes it straightforward to add multiplayer to existing editors.
  • cr-sqlite is the sleeper hit — it brings CRDT semantics directly into SQLite, meaning you can use familiar SQL for your local database and get automatic conflict-free sync. This is huge for mobile and desktop apps.
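To make the "merge without conflicts" idea concrete, here's one of the simplest CRDTs, a grow-only counter, hand-rolled in plain JavaScript. This is a sketch for intuition only, not how any of the libraries above are implemented internally:

```javascript
// Grow-only counter (G-Counter): each replica increments only its own slot,
// and merge takes the per-replica maximum. That makes merging commutative,
// associative, and idempotent -- the convergence guarantee CRDTs provide.
function createCounter(replicaId) {
  return { replicaId, counts: {} };
}

function increment(counter) {
  const { replicaId, counts } = counter;
  counts[replicaId] = (counts[replicaId] || 0) + 1;
}

function merge(a, b) {
  const counts = { ...a.counts };
  for (const [id, n] of Object.entries(b.counts)) {
    counts[id] = Math.max(counts[id] || 0, n);
  }
  return { replicaId: a.replicaId, counts };
}

function value(counter) {
  return Object.values(counter.counts).reduce((sum, n) => sum + n, 0);
}

// Two replicas edit independently, then merge -- in either order.
const alice = createCounter("alice");
const bob = createCounter("bob");
increment(alice);
increment(alice);
increment(bob);
const mergedCounter = merge(alice, bob);
// value(mergedCounter) === 3, and merge(bob, alice) gives the same result
```

Production libraries apply the same algebraic properties to far richer structures (text, lists, maps), which is where the real difficulty lives.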

Real Apps Are Shipping Local-First

This isn’t theoretical anymore. Major products have embraced local-first principles:

  • Linear built their entire app around an offline-first sync engine. Open Linear on an airplane and it just works — create issues, update statuses, write comments. Everything syncs when you’re back online. The UX difference is night and day.
  • Figma’s multiplayer architecture uses CRDT-inspired techniques for real-time collaboration. Their CTO has spoken extensively about how operational transforms and CRDTs influenced their design.
  • Notion’s offline overhaul in late 2025 was a massive engineering effort, but the result is that Notion now feels genuinely fast. No more loading spinners when you open a page. The data is already there.

Why Developers Love It

Having built a local-first side project with Automerge over the past few months, I can tell you the developer experience is surprisingly good — but the mental model shift is real. Here’s what changes:

  1. No loading spinners. The UI renders instantly from local state. There’s no useEffect fetching data from an API on mount. The data is just… there.
  2. No REST APIs for CRUD. You stop thinking about HTTP requests entirely. Your app writes to a local document, and a sync layer handles replication in the background.
  3. Offline by default. You don’t need to build an “offline mode.” The app IS offline mode. Network is a nice-to-have enhancement, not a requirement.
  4. Users own their data. The data lives on the user’s device first. This is philosophically appealing and increasingly important in a privacy-conscious world.

The biggest mental shift is thinking in terms of document state and sync rather than request/response cycles. Instead of POST /api/todos, you write doc.change(d => d.todos.push(newTodo)) and the library handles the rest.
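The shape of that programming model can be sketched in a few lines of plain JavaScript. This is modeled loosely on Automerge's change() API but is purely illustrative: the real library also records fine-grained operation history for merging, which this toy store omits.

```javascript
// A toy local "document store" with an Automerge-style change() API.
// Illustration only -- a real CRDT library tracks per-operation history
// so that concurrent changes from other replicas can be merged in.
function createStore(initial) {
  let doc = initial;
  const subscribers = []; // e.g. a UI re-render, a background sync layer
  return {
    get: () => doc,
    change(mutator) {
      const next = structuredClone(doc); // copy-on-write snapshot
      mutator(next);
      doc = next;
      subscribers.forEach((fn) => fn(doc));
    },
    subscribe: (fn) => subscribers.push(fn),
  };
}

// Instead of POST /api/todos, you mutate local state and let sync follow.
const store = createStore({ todos: [] });
store.subscribe((doc) => {
  /* hand the new doc to the sync layer here */
});
store.change((d) => d.todos.push({ id: 1, title: "Ship it", done: false }));
// store.get().todos now contains the new item, with no network round trip
```

The UI reads synchronously from the store, and replication becomes someone else's problem (the sync layer's), which is exactly the inversion the paragraph above describes.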

The Hard Parts

It’s not all sunshine, though. Here are the challenges I’ve hit:

  • Conflict resolution UX: CRDTs guarantee eventual consistency, but “consistent” doesn’t always mean “what the user expected.” If two people edit the same paragraph, the merged result can be semantically nonsensical. You need thoughtful UX to surface and resolve these situations.
  • Initial sync for large datasets: If a new device needs to sync a large document history, it can take a while. Automerge’s binary format helps, but this remains a real challenge.
  • Auth and permissions: This is the big gap. Most local-first libraries have no concept of access control. Who can read this document? Who can write to it? In server-first architectures, this is trivial — the server enforces it. In local-first, you need cryptographic approaches, and the tooling is immature.
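The conflict-resolution point is easy to demonstrate with a last-writer-wins (LWW) register, one of the simplest merge strategies: both replicas converge, but one user's edit silently disappears. A minimal sketch:

```javascript
// Last-writer-wins register: merge keeps the value with the higher
// timestamp (replica id breaks ties deterministically). Both sides
// converge to the same state -- but the "losing" edit vanishes, which
// may not be what either user expected.
function lwwSet(value, timestamp, replicaId) {
  return { value, timestamp, replicaId };
}

function lwwMerge(a, b) {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.replicaId > b.replicaId ? a : b; // deterministic tie-break
}

// Concurrent edits to the same field:
const fromAlice = lwwSet("Budget: $10k", 1001, "alice");
const fromBob = lwwSet("Budget: $12k", 1002, "bob");

const mergedField = lwwMerge(fromAlice, fromBob);
// mergedField.value === "Budget: $12k" -- Alice's edit is gone without a
// trace. "Eventually consistent", yes; "what the user expected", not
// necessarily. This is why the UX layer needs edit history or conflict
// surfacing on top of the data layer.
```

Richer CRDTs (text, lists) avoid outright data loss by interleaving edits instead, but interleaved text brings its own flavor of "consistent but nonsensical" merges.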

The Elephant in the Room

Here’s the uncomfortable question: most business models depend on server-side data control. SaaS companies monetize by hosting your data and providing access to it. If users own their data locally, what’s the business model? Subscription for the sync service? One-time purchase?

The companies succeeding with local-first tech (Figma, Linear) are still fundamentally cloud services. They use CRDT-like techniques for performance and UX, but the server remains the source of truth.

So I’ll pose the question to the group: is local-first a genuine paradigm shift that will reshape how we build apps, or is it a niche approach best suited for specific categories like developer tools, note-taking, and creative software?

I’d love to hear from folks who’ve tried building (or shipping) local-first in production. What was your experience?

We evaluated local-first for our internal tools at my company and the productivity gains were real — engineers absolutely loved the snappy UX. No more staring at loading skeletons while a waterfall of API calls resolved. Everything felt instant, and the offline capability meant our field engineers could work from client sites with spotty WiFi without losing their work.

But we hit a wall with access control.

In a server-first world, permissions are straightforward — the server is the gatekeeper. A user requests data, the server checks their role, and returns only what they’re authorized to see. Simple, well-understood, battle-tested.

In local-first, you need either:

  1. Cryptographic access control — encrypting documents so only authorized users can decrypt them. This is conceptually sound but adds enormous complexity. Key management, key rotation, revoking access to data that’s already been synced to a device… it’s a rabbit hole.
  2. Trust the client — which is obviously a non-starter for anything with sensitive data.

We ended up with a hybrid approach: local-first for the editing experience, server-authoritative for permissions and audit logs. The sync layer talks to a server that validates every change before propagating it to other clients. If a user’s permissions change, the server stops syncing new data to them (though data already on their device is a harder problem).
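The hybrid pattern can be sketched in a few lines. The permission model, change shape, and function names below are all hypothetical, invented for illustration; the point is only that the server sits between local edits and fan-out:

```javascript
// Hybrid model sketch: clients edit locally and optimistically, but the
// sync server validates every incoming change against server-side
// permissions before propagating it, and records an audit trail.
const permissions = new Map([
  ["alice", { "doc-1": "write" }],
  ["bob", { "doc-1": "read" }],
]);

function canWrite(userId, docId) {
  const grants = permissions.get(userId) || {};
  return grants[docId] === "write";
}

function handleIncomingChange(change, broadcast, auditLog) {
  if (!canWrite(change.userId, change.docId)) {
    auditLog.push({ ...change, outcome: "rejected" });
    return false; // the client keeps its local edit, but it never propagates
  }
  auditLog.push({ ...change, outcome: "accepted" });
  broadcast(change); // fan out to the other online clients
  return true;
}

const auditLog = [];
const delivered = [];
handleIncomingChange(
  { userId: "alice", docId: "doc-1", ops: ["..."] },
  (c) => delivered.push(c),
  auditLog
);
handleIncomingChange(
  { userId: "bob", docId: "doc-1", ops: ["..."] },
  (c) => delivered.push(c),
  auditLog
);
// Only alice's change is delivered; both attempts land in the audit log.
```

Note what this buys you: a single, queryable, server-side audit log — exactly the artifact the CRDT-only architecture lacks. What it doesn't solve is the data already sitting on a revoked user's device.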

I think this hybrid model is where most teams will land in practice. Pure local-first works beautifully for:

  • Personal tools (note-taking, task management)
  • Small team collaboration (< 20 people)
  • Apps where data sensitivity is low

But enterprise requirements — compliance, audit trails, data residency, GDPR right-to-deletion — still need a server in the loop. When a regulator asks “who accessed this record and when,” you need a server-side audit log, not a CRDT merge history spread across 50 devices.

The Ink & Switch folks acknowledge this gap, and projects like DXOS are working on decentralized access control, but we’re years away from it being production-ready for regulated industries. For now, hybrid is the pragmatic choice.

The business model question is the real blocker, and I think it deserves more attention than it gets in these discussions.

Every SaaS company I’ve worked at — from early-stage startups to Series C — monetizes through server-side features: real-time collaboration (that the server mediates), analytics dashboards (computed from centralized data), integrations with other services (via server-to-server APIs), and admin controls (enforced server-side). Remove the server from the equation and you’ve removed the monetization surface.

If data lives on the client, what exactly are you selling?

  • Sync service? That’s a race to the bottom. iCloud, Dropbox, and Google Drive already provide generic file sync. Competing on “we sync your CRDT documents” is a thin value proposition.
  • One-time license fee? Investors hate this. No recurring revenue means lower valuations, harder fundraising, and a business model that requires constant new customer acquisition.
  • Premium features? Maybe, but if the core data layer is local, power users will find ways to build those features themselves or use open-source alternatives.

The companies that have made local-first work commercially — Figma, Linear, Notion — are still fundamentally cloud services. They use CRDT-like techniques for the UX (instant responsiveness, multiplayer editing), but the server is absolutely the source of truth. Your Figma files live on Figma’s servers. Your Linear issues are in Linear’s database. The “local” part is a performance optimization, not an architectural philosophy.

I think “local-first” as a UX philosophy is extremely valuable — build the UI as if the data is local, sync in the background, never show a loading spinner for cached data. Every product team should adopt this mindset.

But “local-first” as an architectural philosophy — where the client is the source of truth and the server is optional — is a much harder sell in the current market. Investors want recurring cloud revenue, not one-time license fees for offline-capable apps. Until someone demonstrates a breakout local-first business that isn’t secretly a cloud service, this will remain a developer enthusiasm story rather than a business strategy.

That said, I’d love to be proven wrong. Is anyone here building a commercially viable local-first product? What’s your monetization strategy?

The hiring and training angle is one that rarely comes up in these “should we go local-first” conversations, but from an engineering management perspective, it’s a significant factor.

Finding engineers who understand CRDTs well enough to debug production issues is genuinely hard.

When we built a prototype with Automerge for an internal collaboration tool, everything went smoothly during development. The API is clean, the docs are decent, and the “hello world” experience is great. But then we hit our first sync conflict bug in testing — two users edited adjacent list items and the merged result had duplicated entries.

It took a senior engineer two full days to understand the CRDT merge semantics well enough to identify the root cause. The issue wasn’t in Automerge itself — it was in how we were structuring our document schema. We were using a plain array where we should have been using Automerge’s List type with stable element IDs. But understanding why that mattered required diving into the academic papers on RGA (Replicated Growable Array) CRDTs.
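A simplified reproduction of that class of bug, outside any real CRDT library: merging plain arrays positionally duplicates everything both replicas already shared, while keying items by stable IDs deduplicates correctly. (This sketch deliberately ignores ordering, which real list CRDTs like RGA also have to preserve.)

```javascript
// Why stable element IDs matter for list merges.
function naiveMerge(a, b) {
  // "Union" by concatenation: every item both sides had before the
  // split now appears twice.
  return [...a, ...b];
}

function idMerge(a, b) {
  // Key items by a stable ID so shared history collapses to one copy.
  const byId = new Map();
  for (const item of [...a, ...b]) byId.set(item.id, item);
  return [...byId.values()];
}

const base = [{ id: "x", text: "buy milk" }];
const replicaA = [...base, { id: "y", text: "walk dog" }];
const replicaB = [...base, { id: "z", text: "pay rent" }];

// naiveMerge yields 4 items: "buy milk" is duplicated.
// idMerge yields 3 items: each entry exactly once.
```

Nothing here is hard once you see it, but recognizing that your schema has this shape — before it bites in production — is the expertise gap being described.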

Compare that to a traditional server-side bug: “the API returned a 500, here’s the stack trace, the database query timed out.” Any junior engineer can debug that with basic tooling. The observability story for CRDTs is still in its infancy.

Here’s what I think needs to happen before local-first is ready for mainstream engineering teams:

  1. Better debugging tools. We need the equivalent of “Rails server logs” for CRDT sync. Show me what operations are pending, what conflicts were auto-resolved, what the merge history looks like. Automerge has some introspection APIs but nothing close to what you get with traditional database tooling.

  2. Clearer mental models. The gap between “I understand what a CRDT is conceptually” and “I can design a document schema that avoids pathological merge behavior” is enormous. We need more content like Martin Kleppmann’s talks, but targeted at application developers rather than distributed systems researchers.

  3. Production runbooks. What do you do when a client has a corrupted local state? How do you handle schema migrations in a CRDT document? What’s the recovery process when sync gets stuck? These operational questions don’t have well-documented answers yet.

  4. Training pathways. We ended up running a two-week internal “CRDT bootcamp” for the team, which was effective but expensive. If this technology goes mainstream, we need courses, certifications, or at least comprehensive tutorials that go beyond the basics.
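Even a very small amount of sync introspection goes a long way. Here's a sketch of the kind of log meant in point 1 — the shape is hypothetical, not an API from Automerge or Yjs:

```javascript
// A bare-bones sync event log: record every local op as pending, mark it
// synced / auto-merged / conflicted as the sync layer reports back, and
// keep the history queryable for debugging.
function createSyncLog() {
  const events = [];
  return {
    record(opId, detail) {
      events.push({ opId, status: "pending", detail, at: Date.now() });
    },
    resolve(opId, status) {
      // status: "synced" | "auto-merged" | "conflict"
      for (const e of events) if (e.opId === opId) e.status = status;
    },
    pending: () => events.filter((e) => e.status === "pending"),
    history: () => [...events],
  };
}

const log = createSyncLog();
log.record("op-1", "todos.push");
log.record("op-2", "todos[0].done = true");
log.resolve("op-1", "synced");
// log.pending() now contains only op-2 -- "what hasn't synced yet?" is
// answerable in one call, which is more than most stacks give you today.
```

It's crude, but when sync stalls at 2 a.m., "which ops are stuck and what were they?" is the first question on-call will ask.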

I’m bullish on local-first long-term — the UX benefits are undeniable and the technology is maturing fast. But the human side of the equation — hiring, training, debugging, on-call support — needs to catch up before I’d recommend it for a team without existing distributed systems expertise. The documentation reflects this: the Automerge docs are good but sparse on production scenarios, and the Yjs docs cover common use cases well but are thin on edge cases.

For now, my recommendation to engineering leaders: experiment with local-first in internal tools or non-critical features, build team expertise gradually, and wait for the tooling to mature before committing to it for core product functionality.