If your PR review process looks anything like ours did six months ago, here’s the honest picture: a developer opens a pull request with frontend and backend changes, a reviewer glances at the diff, maybe leaves a comment about a variable name, types “LGTM,” and approves. Nobody actually runs the code. Nobody tests the user-facing behavior. Visual bugs, broken interactions, and integration issues sail through review and land in staging — or worse, production.
The root problem is friction. To properly review a full-stack PR, the reviewer has to stash their current work, pull the branch, install dependencies, seed the database, run both the API server and frontend, and then navigate to the right page to test the change. That’s 10-15 minutes of setup for a 5-minute review. So rational people skip it and just read the code.
How We Built Ephemeral Preview Environments
We built a system where every pull request automatically gets a full-stack deployment with a unique URL: pr-123.preview.company.dev. The reviewer clicks the link, sees the running application with the PR’s changes, and can actually interact with it. No local setup required.
Here’s the architecture:
Namespace-per-PR on Kubernetes. Each PR gets its own Kubernetes namespace (preview-pr-123) containing the full application stack. This gives us resource isolation and clean teardown — deleting the namespace removes everything.
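As a sketch of what this looks like (names and labels here are illustrative, not our exact manifests), the namespace is just a labeled Kubernetes object, which is what makes teardown a single delete:

```yaml
# Hypothetical namespace manifest for PR #123; label keys are illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: preview-pr-123
  labels:
    app.kubernetes.io/managed-by: preview-system
    preview.company.dev/pr-number: "123"
```

Because every resource for the PR lives inside this namespace, `kubectl delete namespace preview-pr-123` tears down the whole stack in one operation.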
GitOps with ArgoCD. When a PR is opened, a GitHub Action generates Helm values with the PR number templated in (image tags, ingress hostnames, namespace name) and commits them to our GitOps repo. ArgoCD detects the change and deploys the full stack. This means our preview deployments go through the exact same deployment pipeline as staging and production.
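The generated values file committed to the GitOps repo might look like this sketch (the keys mirror a hypothetical chart, not our real schema):

```yaml
# preview-pr-123.yaml — illustrative Helm values generated by CI for PR #123.
namespace: preview-pr-123
image:
  tag: pr-123-abc1234          # image built by CI from the PR's head commit
ingress:
  host: pr-123.preview.company.dev
```

An alternative worth knowing about: ArgoCD's ApplicationSet controller ships a pull-request generator that can create an Application per open PR directly, without CI committing values files at all.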
Database seeding. Each preview environment gets its own PostgreSQL instance seeded with a synthetic test dataset. We built a seeding tool that generates realistic but fake data covering our main user flows — accounts, transactions, settings, the works. Seed time is about 45 seconds for our schema.
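Our seeding tool is internal, but the core idea, deterministic fake-data generation so every preview environment starts from identical state, can be sketched in a few lines (the `Account` schema and field names here are hypothetical):

```python
import random
from dataclasses import dataclass

@dataclass
class Account:
    id: int
    email: str
    balance_cents: int

def seed_accounts(n: int, seed: int = 42) -> list[Account]:
    """Generate n fake accounts. The fixed seed makes the output
    deterministic, so two preview environments seeded with the same
    version of this tool contain identical data."""
    rng = random.Random(seed)
    return [
        Account(
            id=i,
            email=f"user{i}@example.test",
            balance_cents=rng.randint(0, 1_000_000),
        )
        for i in range(1, n + 1)
    ]

accounts = seed_accounts(100)
```

Determinism is the useful property here: a reviewer can say "look at user17@example.test's transactions" and the author sees the same data in their own environment.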
Automatic TLS and DNS. cert-manager handles Let’s Encrypt certificates for the preview URLs via DNS-01 challenges (required for wildcard certs). ExternalDNS maintains a wildcard DNS record pointing *.preview.company.dev at our ingress controller. Zero manual configuration per PR.
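One common shape for this, a single wildcard certificate shared by the ingress controller rather than one certificate per PR (which also avoids Let’s Encrypt rate limits), looks roughly like this (issuer name and namespace are illustrative):

```yaml
# Hypothetical cert-manager Certificate covering every preview hostname.
# The referenced ClusterIssuer must be configured for DNS-01 with
# credentials for the DNS provider hosting company.dev.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: preview-wildcard
  namespace: ingress
spec:
  secretName: preview-wildcard-tls
  issuerRef:
    name: letsencrypt-dns01      # illustrative issuer name
    kind: ClusterIssuer
  dnsNames:
    - "*.preview.company.dev"
```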
Scale-to-zero. This was critical for cost management. After 30 minutes of inactivity (no HTTP requests), KEDA scales the deployments to zero replicas. The first request after scaling triggers a cold start of about 15 seconds — not instant, but acceptable for review purposes. When the PR is closed or merged, a GitHub Action deletes the entire namespace.
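Scale-to-zero on HTTP inactivity typically requires the KEDA HTTP add-on, since core KEDA scalers don’t see request traffic. A sketch of the per-PR resource (field names vary across add-on versions; service and port values are illustrative):

```yaml
# Hypothetical HTTPScaledObject for the frontend in PR #123's namespace.
apiVersion: http.keda.sh/v1alpha1
kind: HTTPScaledObject
metadata:
  name: frontend
  namespace: preview-pr-123
spec:
  hosts:
    - pr-123.preview.company.dev
  scaleTargetRef:
    name: frontend
    kind: Deployment
    apiVersion: apps/v1
    service: frontend
    port: 80
  replicas:
    min: 0                # scale fully to zero when idle
    max: 2
  scaledownPeriod: 1800   # seconds of inactivity before scaling down
```

The add-on’s interceptor holds the first request while the deployment scales back up, which is where the ~15-second cold start comes from.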
The Numbers
- Average spin-up time: 3 minutes from PR open to live URL
- Monthly cost: ~$200 total across roughly 50 active preview environments (thanks to aggressive scale-to-zero)
- Infrastructure: Single EKS cluster with spot instances for preview workloads
Impact on Review Quality
The change in reviewer behavior was immediate and dramatic. Reviewers now actually click around the application, test user flows, and catch issues that are invisible in a code diff. Comments shifted from “this variable name could be better” to “the loading spinner on the settings page persists even after data loads” and “the error state is cut off on mobile viewports.”
PR approval time decreased by roughly 30% because reviewers feel more confident approving when they’ve actually seen the change running. Counterintuitively, the number of review comments increased — but they were higher quality, catching real bugs instead of bikeshedding code style.
Challenges We’re Still Solving
Database schema complexity. Our seeding tool needs to be updated every time we run a schema migration. We’re exploring seeding from a recent anonymized production snapshot instead, but that raises its own concerns around anonymization rigor and snapshot size and freshness.
Secrets management. Preview environments need API keys for third-party services (Stripe test mode, SendGrid, etc.). We use Sealed Secrets with a separate set of credentials from production, but managing the rotation across ephemeral namespaces is painful.
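One wrinkle worth spelling out: Sealed Secrets are namespace-scoped by default, and preview namespace names (preview-pr-N) aren’t known at sealing time. The cluster-wide scope annotation is one way around that; a sketch (secret name and key are illustrative, the ciphertext is a placeholder):

```yaml
# Hypothetical SealedSecret usable in any namespace. Cluster-wide scope
# trades some security for not having to re-seal per ephemeral namespace.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: stripe-test-keys
  annotations:
    sealedsecrets.bitnami.com/cluster-wide: "true"
spec:
  encryptedData:
    STRIPE_API_KEY: AgB3...    # kubeseal output, elided
```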
Config drift. Occasionally something works perfectly in preview but breaks in staging due to environment configuration differences. We’ve reduced this by sharing Helm value templates between preview and staging, but it’s not fully eliminated.
Alternatives Considered
If you’re running a static site or simple frontend, just use Vercel or Netlify — they do preview deployments out of the box and it’s not worth building custom infrastructure. Railway and Render are adding preview environment features for full-stack apps, and they’re getting better rapidly.
But for complex microservice architectures with multiple databases and service dependencies, we found that no managed platform could spin up our full topology without significant customization. The custom K8s approach gave us the flexibility we needed.
The question for the community: Is the investment in custom preview infrastructure worth it for your team, or are managed platforms sufficient? Where’s the complexity threshold where build-your-own becomes the right call?