Holepunch's P2P Stack: Building 'Unstoppable' Apps Without Infrastructure

Holepunch, backed by significant funding from Tether, is building something that challenges fundamental assumptions about how we build internet applications. Their Pear Runtime enables fully peer-to-peer applications with zero centralized infrastructure — no servers, no cloud providers, no single points of failure. CEO Mathias Buus claims that “P2P apps will be essential in transitioning from Web 2.” Having spent time digging into the stack, I think it deserves a serious technical analysis.

The Pear Runtime Architecture

At its core, Pear Runtime is a desktop and mobile runtime for P2P applications. Think of it as Electron but for decentralized apps. Instead of bundling a web app with a Chromium instance that talks to servers, Pear bundles a web app with networking primitives that talk directly to other peers.

Hole-Punching: The Core Innovation

The “hole” in Holepunch refers to NAT hole-punching — a technique for establishing direct connections between two devices that are both behind NAT (Network Address Translation) routers. Here’s the simplified version of how it works:

  1. Both peers register with a lightweight coordination service (not a relay — just a matchmaker)
  2. The coordination service tells each peer the other’s external IP and port mapping
  3. Both peers simultaneously send packets to each other’s external addresses
  4. The NAT routers, seeing outgoing packets to those addresses, create temporary port mappings
  5. When the return packets arrive, the NAT routers forward them through the newly created holes
  6. A direct P2P connection is established, with data flowing peer-to-peer without any intermediary
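To make steps 3 through 5 concrete, here is a toy model in plain JavaScript (no real networking; all names are illustrative, not Holepunch APIs) of the key NAT behavior: an inbound packet is forwarded only if a prior outbound packet to that remote address created a mapping.

```javascript
// Toy NAT: tracks outbound destinations and only admits inbound
// traffic from addresses it has already sent to (steps 3-5 above).
function makeNat () {
  const mappings = new Set()
  return {
    sendOut (remote) { mappings.add(remote) },        // outgoing packet opens a "hole"
    allowIn (remote) { return mappings.has(remote) }  // return packet passes only through a hole
  }
}

const natA = makeNat()
const natB = makeNat()

// Before either side sends, inbound packets are simply dropped.
console.log(natA.allowIn('B')) // false

// Step 3: both peers send simultaneously, punching holes (step 4).
natA.sendOut('B')
natB.sendOut('A')

// Step 5: the return packets now pass through both NATs.
console.log(natA.allowIn('B') && natB.allowIn('A')) // true
```

The simultaneity in step 3 matters: each side's outgoing packet must open its own mapping before the other side's packet arrives, otherwise the inbound packet is dropped as unsolicited.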

This works for the majority of NAT configurations (UDP hole-punching succeeds roughly 80-85% of the time across typical consumer networks). For the remaining cases where symmetric NATs block hole-punching, Holepunch uses TURN-style relay fallbacks — but the architecture is designed to minimize reliance on these.

DHT for Peer Discovery

Holepunch uses a Distributed Hash Table (based on Kademlia) for peer discovery. When you want to find another peer:

  • Your identifier (public key) maps to a location in the DHT’s key space
  • Nearby nodes in the DHT store your connection information
  • Other peers query the DHT to find your current network address
  • The DHT itself is maintained by participating peers — no central directory

This is similar to how BitTorrent’s mainline DHT works, but optimized for real-time application use cases rather than file sharing.
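The "nearby" notion above comes from Kademlia's XOR distance metric. A sketch, with IDs shortened to 4 bytes for readability (real DHT keys are 32 bytes):

```javascript
// XOR distance: the metric Kademlia uses to decide which nodes are
// "nearby" a key in the ID space. Smaller XOR result = closer.
function xorDistance (a, b) {
  const out = Buffer.alloc(a.length)
  for (let i = 0; i < a.length; i++) out[i] = a[i] ^ b[i]
  return out
}

// A node stores records whose keys are closest to its own ID.
const nodeId = Buffer.from('00000000', 'hex')
const keyNear = Buffer.from('00000001', 'hex')
const keyFar = Buffer.from('f0000000', 'hex')

// keyNear is closer to nodeId than keyFar, so this node would be
// responsible for storing keyNear's record, not keyFar's.
console.log(xorDistance(nodeId, keyNear).compare(xorDistance(nodeId, keyFar)) < 0) // true
```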

Hypercore for Data Replication

The data layer uses Hypercore, an append-only log structure that enables efficient data replication between peers. Key properties:

  • Merkle tree verification: Every entry in the log is cryptographically verified, so you can trust data from any peer
  • Sparse replication: You don’t need the full log — request only the entries you need
  • Multi-writer support through Autobase: Multiple peers can write to shared data structures with automatic conflict resolution
  • Efficient sync: Only new entries are transferred when peers reconnect

Built on top of Hypercore, Hyperbee provides a B-tree key-value store, and Hyperdrive provides a POSIX-like filesystem — both fully P2P and replicated.

The Developer Experience

This is where Holepunch gets interesting from a practical standpoint. Their stack includes over 1,500 public npm modules providing networking, data transfer, and collaboration primitives. The developer experience is deliberately designed to feel familiar:

const Hyperswarm = require('hyperswarm')
const Hypercore = require('hypercore')

async function main () {
  // Create a P2P data feed backed by local storage
  const core = new Hypercore('./my-data')
  await core.ready()

  // Join a swarm to find peers interested in this feed
  const swarm = new Hyperswarm()
  swarm.join(core.discoveryKey)

  // Replicate the feed over each new peer connection
  swarm.on('connection', (socket) => {
    core.replicate(socket)
  })
}

main()

That’s remarkably simple for establishing a P2P data replication channel. The npm-native approach means developers can use familiar tooling (npm install, require/import, standard Node.js patterns) to build decentralized applications.

Pear Runtime itself provides:

  • A desktop application shell (similar to Electron) for P2P apps
  • Built-in key management for peer identity
  • Automatic hole-punching and connection management
  • A development workflow with hot reloading

Real Applications on the Stack

Keet

Keet is Holepunch’s flagship messaging application — think WhatsApp or Signal but fully P2P. Messages, calls, and file transfers happen directly between peers with end-to-end encryption. There’s no server storing your messages, no metadata collection, no service to subpoena. If both peers go offline, the conversation simply pauses until they reconnect.

Keet supports:

  • Text messaging with E2E encryption
  • Voice and video calls (P2P WebRTC)
  • File sharing (arbitrary size, streamed P2P)
  • Group chats with multi-writer Hypercore
  • Room-based collaboration

Pear Credit

A payment system built on the P2P stack, though details are limited. The Tether backing suggests this is part of a larger financial infrastructure play — enabling P2P payments without centralized payment processors.

The Technical Challenges

NAT Traversal Reliability

This is the elephant in the room. NAT traversal success rates vary dramatically:

  • Full Cone NAT: ~95% success rate (most consumer routers)
  • Address-Restricted Cone: ~85% success rate
  • Port-Restricted Cone: ~75% success rate
  • Symmetric NAT: ~30% success rate (common in corporate/mobile networks)

When you average across real-world network distributions, you get roughly 60-95% success depending on the user population. For enterprise or mobile-heavy use cases, that reliability gap is significant.
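As a back-of-envelope check on that range, here is the weighted average for one hypothetical population mix. The NAT-type shares below are illustrative assumptions, not measured data; only the per-type success rates come from the table above.

```javascript
// Expected hole-punching success for a hypothetical user population.
// Success rates per NAT type are from the table above; the population
// shares are invented for illustration.
const natTypes = [
  { name: 'full cone', success: 0.95, share: 0.40 },
  { name: 'address-restricted cone', success: 0.85, share: 0.25 },
  { name: 'port-restricted cone', success: 0.75, share: 0.20 },
  { name: 'symmetric', success: 0.30, share: 0.15 }
]

const expected = natTypes.reduce((sum, t) => sum + t.success * t.share, 0)
console.log(expected) // ~0.79 for this mix; shift share toward symmetric NAT and it drops fast
```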

Peer Availability

In a client-server model, the server is always on. In P2P, when a peer goes offline, their unique data becomes unavailable unless replicated. Solutions include:

  • Super peers: Always-on nodes that maintain replicas (but this starts to look like servers)
  • Erasure coding: Distribute data fragments across many peers so any subset can reconstruct the full data
  • Incentivized replication: Pay peers to store your data (Filecoin model)

Holepunch acknowledges this challenge but their current answer — “replicate across enough peers” — works for social apps but not for applications requiring guaranteed availability.

Data Consistency

Distributed systems are hard. CAP theorem doesn’t care whether your nodes are servers or peers. Autobase provides eventual consistency for multi-writer scenarios, but:

  • Conflict resolution is application-specific
  • Ordering guarantees are weaker than centralized databases
  • Complex queries across distributed data are challenging
  • There’s no equivalent of a database transaction spanning multiple peers
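The first bullet can be made concrete with a last-writer-wins merge, one common application-specific rule; the timestamp field here is an illustrative stand-in for Autobase's actual (more sophisticated) ordering:

```javascript
// Last-writer-wins merge of two peers' key-value states: for each
// key, keep whichever write carries the higher logical timestamp.
// Ties and causality are exactly where this simple rule breaks down.
function mergeLWW (a, b) {
  const merged = new Map(a)
  for (const [key, val] of b) {
    const cur = merged.get(key)
    if (!cur || val.ts > cur.ts) merged.set(key, val)
  }
  return merged
}

const peerA = new Map([['title', { value: 'Draft v1', ts: 1 }]])
const peerB = new Map([['title', { value: 'Draft v2', ts: 2 }]])

// Both merge orders converge on the same value.
console.log(mergeLWW(peerA, peerB).get('title').value) // Draft v2
```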

Is P2P Ready for Mainstream?

Holepunch’s technical execution is impressive. The developer experience is genuinely good — better than any previous P2P framework I’ve used. The npm-native approach lowers the barrier to entry significantly.

But “unstoppable” is a strong claim. P2P applications are unstoppable in the sense that there’s no server to shut down, but they’re also unstoppable in the sense that there’s no server to ensure they work reliably. The tradeoff between decentralization and reliability remains the fundamental challenge.

The Tether backing adds both credibility (real funding) and questions (why does a stablecoin issuer want “unstoppable” infrastructure?). The charitable interpretation is that Tether wants censorship-resistant payment rails. The skeptical interpretation is that “unstoppable” infrastructure serves interests that benefit from being outside regulatory reach.

What I find most compelling is the modular approach. You don’t have to go fully P2P. You can use Hypercore for data replication while maintaining some centralized coordination. You can use hole-punching for media streams while keeping metadata on servers. The stack is composable enough to support hybrid architectures.

For developers interested in P2P, Holepunch’s stack is currently the most practical option available. Whether the mainstream is ready for P2P — or whether P2P is ready for the mainstream — is still an open question.

Good technical overview, Jackson. As someone who spends their days keeping infrastructure running at scale, I need to push back on some of the optimism here. P2P sounds elegant in theory but the operational reality is far messier.

NAT Traversal Is Worse Than the Numbers Suggest

You cited 80-85% success rates for UDP hole-punching across “typical consumer networks.” That number is misleading for several reasons:

Corporate networks are the hard case. Enterprise firewalls, carrier-grade NAT (CGNAT) on mobile networks, and VPN tunnels all create symmetric NAT scenarios where hole-punching success drops to 30% or below. If your application targets professionals (as many collaboration tools do), you’re hitting the worst-case NAT types disproportionately.

Success rates degrade over time. NAT mappings have timeouts. A hole-punched connection that works at 2pm might fail at 2:15pm if there’s a lull in traffic and the NAT mapping expires. You need keepalive packets, retry logic, and graceful fallback — none of which are as simple as they sound when you’re also managing application state.

Mobile networks are particularly brutal. Carrier-grade NAT is increasingly common as IPv4 addresses become scarcer. I’ve seen scenarios where a mobile user’s NAT type changes mid-session because they moved between cell towers. Your carefully punched hole just collapsed.

In my experience running infrastructure at Google Cloud, any system requiring direct peer connectivity needs a relay fallback path that handles 15-40% of connections. At that point, you’re maintaining both a P2P stack AND relay infrastructure. The operational complexity doesn’t decrease — it doubles.

The Availability Problem Is Fundamental

You mentioned “replicate across enough peers” as Holepunch’s answer to availability. Let me explain why this is insufficient for production systems.

Our team requires 99.9% availability for any system we deploy. That means no more than 8.76 hours of downtime per year. In a P2P system:

  • If a peer has 95% uptime (which is generous for a consumer device), you need data replicated across at least 3 peers to achieve 99.9% availability (1 - 0.05^3 = 99.9875%)
  • But those peers need to be independently available — if they’re all in the same timezone and go to sleep at the same time, your theoretical availability is meaningless
  • For global availability, you need geographic distribution of replicas, which in a P2P system means hoping your users are globally distributed
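The replica arithmetic above generalizes to a one-line formula, assuming (optimistically) independent peer uptimes:

```javascript
// Probability that at least one of n replicas is online, given
// per-peer uptime p and assuming peer uptimes are independent --
// the assumption the timezone bullet above shows is often false.
function availability (p, n) {
  return 1 - Math.pow(1 - p, n)
}

console.log(availability(0.95, 1)) // a single 95%-uptime peer
console.log(availability(0.95, 3)) // three replicas: ~0.999875
```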

Compare this to spinning up three EC2 instances across different availability zones. Deterministic, measurable, guaranteed. With P2P, you’re probabilistically hoping for availability.

The Debugging Nightmare

Here’s something the P2P advocates never talk about: debugging distributed peer networks is extraordinarily difficult. When a user reports “the app is slow” in a client-server architecture, I can check server logs, run diagnostics, trace the request path. In a P2P network:

  • Which peer is the bottleneck? The sender? The receiver? An intermediate relay?
  • Is the slowness caused by NAT traversal retries, Hypercore replication lag, or application logic?
  • How do you reproduce an issue that depends on specific NAT configurations across two different ISPs?
  • Where are the logs? On the user’s device — which you may not have access to

I’ve built production monitoring for distributed systems. P2P makes every debugging session a detective novel.

Where P2P Actually Works

I’m not saying P2P is useless. It’s genuinely good for:

  1. Messaging where eventual delivery is acceptable (Keet’s use case)
  2. File sharing where redundancy is built into the content (BitTorrent model)
  3. Local network collaboration where NAT isn’t a factor
  4. Censorship-resistant communication where reliability tradeoffs are accepted

But for anything requiring SLA-grade availability, consistent performance, and operational observability? You end up building centralized infrastructure to compensate for P2P’s gaps. And at that point, you’ve added complexity without removing servers.

The honest architecture for most applications is hybrid: P2P for what it’s good at (direct data transfer, E2E encryption), centralized for what requires reliability (coordination, availability guarantees, monitoring). Holepunch’s stack supports this, but their marketing doesn’t emphasize it.

Jackson, appreciate the thorough technical breakdown. Let me add the security perspective, because P2P networks introduce threat vectors that most developers don’t think about until it’s too late.

Every Peer Is an Attack Surface

In a client-server model, you have a defined perimeter. You harden your servers, monitor ingress/egress, and control the attack surface. In a P2P network, every participating peer is both a client and a server. This means:

Eclipse attacks: A malicious actor can surround a target peer with compromised nodes in the DHT, controlling all their peer discovery results. The target thinks they’re connected to the legitimate network but they’re actually in a controlled bubble. This is a well-documented attack against Kademlia-based DHTs and Holepunch’s documentation doesn’t describe specific mitigations.

Sybil attacks: Creating thousands of fake identities in a P2P network is cheap. Without proof-of-work or proof-of-stake, there’s no cost to spinning up fake peers that can disrupt DHT routing, pollute discovery results, or perform targeted denial-of-service.

Data poisoning: Even with Merkle tree verification on Hypercore, application-level data poisoning is possible. A malicious peer can write valid but misleading data to shared Hypercore feeds. The cryptographic verification ensures the data hasn’t been tampered with in transit — it doesn’t ensure the data was truthful when created.

The Metadata Problem

End-to-end encryption protects message content, but P2P networks leak metadata in ways that centralized services can actually mitigate:

  • Connection patterns: Who connects to whom is visible to any peer in the swarm. Network observers can map social graphs from connection metadata alone.
  • DHT queries: When you look up a peer’s public key in the DHT, multiple intermediary nodes learn that you’re trying to reach that specific peer. This is equivalent to DNS query leakage but distributed across potentially untrusted nodes.
  • Timing analysis: Direct peer connections make timing correlation attacks easier. If I know when you send a message (from network traffic analysis) and when a specific peer receives data, I can correlate the two even without reading the encrypted content.
  • IP address exposure: In a P2P connection, both peers necessarily know each other’s IP addresses. There’s no CDN or proxy to hide behind. For users with privacy concerns (journalists, activists, whistleblowers), this is a significant exposure.

Signal solves this with sealed sender and relay infrastructure. Tor solves it with onion routing. Pure P2P has no equivalent privacy layer for connection metadata.

The Tether Question

I have to address the elephant in the room: why is Tether funding “unstoppable” infrastructure?

Tether (USDT) is the world’s largest stablecoin with over $100 billion in circulation. They’ve faced persistent questions about:

  • Reserve backing and audit transparency
  • Regulatory compliance across jurisdictions
  • Use in money laundering and sanctions evasion (per multiple DOJ investigations)

Now they’re funding infrastructure explicitly designed to be “unstoppable” — resistant to shutdown by any authority. And they’re building Pear Credit, a payment system on this infrastructure.

I’m not making accusations, but the security analyst in me has to flag the risk model here. “Unstoppable” financial infrastructure is a feature if you’re a dissident in an authoritarian regime. It’s a risk if you’re a regulator trying to prevent financial crimes. And it’s a red flag if you’re an investor evaluating the motivations behind the project.

The security community has a principle: evaluate the threat model, not just the technology. Holepunch’s technology is solid. But the question of who benefits from “unstoppable” infrastructure and why they’re willing to fund it heavily — that deserves scrutiny.

Practical Security Recommendations

For developers considering Holepunch’s stack:

  1. Don’t assume E2E encryption solves privacy. Metadata leakage in P2P is significant. Design your application with metadata minimization in mind.
  2. Implement peer reputation systems. Not all peers are trustworthy. Track peer behavior and deprioritize unreliable or potentially malicious nodes.
  3. Use application-level access control. Hypercore’s cryptographic verification ensures data integrity, not authorization. Build your own access control layer.
  4. Plan for DHT poisoning. Your peer discovery mechanism is a target. Implement multiple discovery paths and validate results.
  5. Consider hybrid architectures where sensitive coordination happens through trusted infrastructure, with P2P handling data transfer.
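Recommendation 2 can start as small as a score table. A minimal sketch; the class name, neutral default for unknown peers, and sort-by-score policy are all illustrative choices, not a Holepunch API:

```javascript
// Minimal peer-reputation tracker: record per-peer outcomes and
// prefer peers with the best observed success ratio.
class PeerReputation {
  constructor () { this.scores = new Map() }

  // Record one interaction outcome (ok = behaved correctly).
  record (peerId, ok) {
    const s = this.scores.get(peerId) || { good: 0, bad: 0 }
    ok ? s.good++ : s.bad++
    this.scores.set(peerId, s)
  }

  // Success ratio; unknown peers start neutral at 0.5.
  score (peerId) {
    const s = this.scores.get(peerId)
    if (!s) return 0.5
    return s.good / (s.good + s.bad)
  }

  // Order candidate peers best-first, e.g. for replication targets.
  preferred (peerIds) {
    return [...peerIds].sort((a, b) => this.score(b) - this.score(a))
  }
}

const rep = new PeerReputation()
rep.record('peer-a', true)
rep.record('peer-a', true)
rep.record('peer-b', false)
console.log(rep.preferred(['peer-b', 'peer-a'])) // peer-a ranks first
```

A production version would also decay old observations and distinguish failure types (unreachable vs. actively malicious), but even this much lets the DHT-poisoning mitigations in recommendation 4 deprioritize suspect discovery results.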

The technology is interesting. The security implications need more attention than they’re getting.

Great discussion, everyone. Let me share our experience actually evaluating Pear Runtime for a production use case, because theory and practice diverged in interesting ways.

Our Evaluation: Internal Collaboration Tool

Six months ago, my team evaluated Pear Runtime for an internal document collaboration tool — think Google Docs but without Google. The appeal was obvious: real-time collaboration without sending proprietary documents through third-party servers. For a financial services company handling sensitive data, that’s a compelling pitch.

What Worked Well

The developer experience genuinely impressed us. Our team of four engineers had a working prototype in two weeks. The npm-based workflow felt natural — no blockchain concepts to learn, no token economics to understand, just standard JavaScript with P2P networking primitives. Hypercore’s append-only log was a natural fit for document revision history.

Local network performance was excellent. When peers were on the same corporate network (no NAT traversal needed), the collaboration experience was snappy. Real-time cursor tracking, instant updates, conflict-free editing with Autobase. It felt like a mature product.

Data sovereignty was genuine. No document data ever left our network. For our compliance team, this was a dream scenario. No vendor DPA negotiations, no data residency concerns, no third-party subpoena risk.

Where It Fell Apart

Enterprise requirements killed it. Specifically:

  1. Audit logging: Financial services regulations require comprehensive audit trails. Who accessed what document, when, what changes were made, and from where. In a P2P system, audit logs live on individual peers. There’s no centralized audit trail. We’d need to build a separate logging infrastructure that collects events from every peer — which is basically building a server.

  2. Admin controls: Our compliance team needs the ability to revoke access to documents, enforce retention policies, and place legal holds. In a P2P system with replicated data, revoking access means trusting that every peer deletes their copy. That’s not enforceable. Once data is replicated to a peer, you’ve lost control of it.

  3. User management: We need SSO integration (Okta), role-based access control, and centralized user provisioning. Pear Runtime’s identity model is based on cryptographic key pairs. Bridging that to enterprise identity infrastructure required significant custom work that we estimated at 3-4 months of engineering time.

  4. Compliance reporting: Regulators want to see data flow diagrams showing where sensitive data lives. “It’s distributed across employee devices” is not an answer that satisfies a SOC 2 auditor.

The NAT Problem in Practice

Alex from infrastructure nailed this. Our employees work from:

  • Corporate offices (behind enterprise firewalls with strict NAT)
  • Home networks (variable NAT types)
  • Coffee shops and co-working spaces (often restrictive)
  • Mobile hotspots (carrier-grade NAT)

We measured a 62% direct P2P connection success rate across our employee network distribution. For 38% of connections, we needed relay fallback. At that point, we were maintaining relay infrastructure anyway — and the operational complexity of maintaining both P2P and relay paths was higher than just running a centralized collaboration server.

Where I Think P2P Fits

After this evaluation, I came away with a nuanced view. P2P isn’t wrong — it’s wrong for most enterprise use cases as a complete solution. Where it makes sense:

Data transfer layer: Using Hypercore for efficient data replication between known, trusted nodes (like our offices) is genuinely valuable. It’s faster and more efficient than routing through a central server when peers are on the same network.

Offline-first applications: For field workers who need to collaborate without reliable internet, P2P sync when devices are co-located is excellent. Our team in remote branches could benefit from this.

Hybrid architectures: A centralized coordination layer (handling auth, audit, compliance) with P2P data transfer underneath. You get the privacy and efficiency benefits of P2P without sacrificing enterprise requirements.

Specific verticals: Healthcare (HIPAA data that shouldn’t leave a facility), legal (privileged documents), defense (classified networks). These sectors have extreme data sovereignty requirements where P2P’s “data never leaves your control” property is genuinely valuable.

The Bottom Line

Holepunch built impressive technology. The developer experience is the best I’ve seen in the P2P space. But enterprise adoption requires more than good technology — it requires compliance, governance, and operational maturity that the P2P paradigm inherently struggles with.

My recommendation to my peers in engineering leadership: evaluate P2P for specific use cases within your architecture, not as a wholesale replacement for client-server. The hybrid approach is where the real value lies.