Next.js is the most popular React framework for production websites, powering everything from startup landing pages to enterprise e-commerce platforms. But if you're not A/B testing, you're leaving conversions on the table. This guide walks you through every approach to A/B testing in Next.js — from edge middleware to React hooks — with production-ready code you can ship today.
1. Why A/B Test in Next.js?
Next.js teams ship fast — but shipping without measurement is just guessing. A/B testing creates a data-driven feedback loop that tells you exactly which changes move your metrics. Consider: a 10% relative lift on a pricing page converting at 3% means 0.3 percentage points more customers. For a SaaS doing $50k MRR, that's $1,500/month from a single test.
Next.js is exceptionally well-suited for experimentation: the combination of SSR, edge middleware, and React's component model means you can implement A/B tests at every layer of the stack and pick the right approach for each experiment. Over a year of monthly testing, the compound effect on conversion rates is dramatic.
2. Server-Side vs Client-Side Splitting
The most important architectural decision is where you split traffic. Each approach has real trade-offs that affect performance, SEO, and reliability.
Server-Side (Recommended)
Split in middleware or getServerSideProps. User sees the correct variant immediately.
- Zero flicker — variant decided before HTML ships
- SEO-safe — crawlers see one consistent version
- Compatible with caching via cookie-based keys
- Slightly more complex initial setup
Client-Side
Split in the browser after the page loads. Simpler to set up, but the variant swap is visible to the user.
- Easy to implement with any host
- Works with ISR/SSG out of the box
- Causes layout shift and visible flicker
- May hurt Core Web Vitals (CLS)
For most Next.js apps, server-side splitting via middleware is the best approach. It runs at the edge, adds ~1ms of latency, and the user never sees the split happening.
Client-side splitting still makes sense for interactive UI experiments — testing a different modal, tooltip, or in-app onboarding flow where the component mounts after user interaction so flicker isn't visible.
A third option: Server Component splitting in the App Router. Read the variant from a cookie in a Server Component and conditionally render — no client JS required. Powerful for above-the-fold content experiments.
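Whatever cookie the variant is read from should be validated before rendering, since visitors can edit cookies and send arbitrary values. A minimal sketch of that guard as a pure function, assuming the cookie holds one of the two variant names used elsewhere in this guide (`resolveVariant` is an illustrative helper, not a Next.js API; a Server Component would call it with the value from `cookies()`):

```typescript
// Guard an untrusted cookie value down to a known variant.
// Anything unrecognized (missing, stale, or tampered) falls back to control.
const VARIANTS = ['control', 'variant-a'] as const
type Variant = (typeof VARIANTS)[number]

function resolveVariant(raw: string | undefined): Variant {
  return (VARIANTS as readonly string[]).includes(raw ?? '')
    ? (raw as Variant)
    : 'control'
}
```

Falling back to control (rather than reassigning) keeps a bad cookie from polluting the experiment with surprise exposures.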
3. The Middleware Approach
Next.js middleware runs at the edge before the request reaches your page. This is the gold standard — the split happens before any HTML is sent. The pattern: read a cookie for an existing assignment, assign randomly if absent, then rewrite to the variant page.
// middleware.ts
import { NextRequest, NextResponse } from 'next/server'
const COOKIE = 'exp-hero-test'
const VARIANTS = ['control', 'variant-a'] as const
export function middleware(req: NextRequest) {
if (req.nextUrl.pathname !== '/') return
const existing = req.cookies.get(COOKIE)?.value
const variant = VARIANTS.includes(existing as any)
? existing!
: Math.random() < 0.5 ? 'control' : 'variant-a'
const url = req.nextUrl.clone()
url.pathname = variant === 'variant-a'
? '/home-variant-a' : '/home'
const res = NextResponse.rewrite(url)
if (!existing) {
res.cookies.set(COOKIE, variant, {
maxAge: 60 * 60 * 24 * 30, path: '/',
})
}
return res
}
export const config = { matcher: ['/'] }

The URL stays as / regardless of variant. The rewrite is invisible — no flicker, no redirect, no layout shift. For the App Router, pass the variant via request headers to Server Components:
// middleware.ts — header-based (App Router)
// Note: headers() in a Server Component reads *request* headers,
// so the middleware must override the forwarded request headers —
// setting a response header alone would not be visible there.
const requestHeaders = new Headers(req.headers)
requestHeaders.set('x-experiment-variant', variant)
const res = NextResponse.next({ request: { headers: requestHeaders } })
res.cookies.set(COOKIE, variant, { maxAge: 2592000, path: '/' })
return res
// app/page.tsx — read in Server Component
import { headers } from 'next/headers'
export default function Home() {
const variant = headers().get('x-experiment-variant')
return variant === 'variant-a'
? <HeroNew /> : <HeroOriginal />
}

Watch your cache headers
If you use ISR or CDN caching, ensure the cache key includes the experiment cookie. Otherwise all users get the same cached variant. Vercel handles this automatically with middleware rewrites.
4. React Hooks Pattern
For client-side experiments on interactive components, a custom React hook is the cleanest pattern. Ideal for modals, forms, and UI that appears after user interaction where flicker isn't a concern.
// hooks/useExperiment.ts
import { useState, useEffect } from 'react'
type Variant = 'control' | 'variant-a'
export function useExperiment(id: string): Variant {
const [variant, setVariant] = useState<Variant>('control')
useEffect(() => {
const key = `exp-${id}`
const stored = localStorage.getItem(key)
if (stored === 'control' || stored === 'variant-a') {
setVariant(stored); return
}
const assigned: Variant =
Math.random() < 0.5 ? 'control' : 'variant-a'
localStorage.setItem(key, assigned)
setVariant(assigned)
}, [id])
return variant
}
// Usage
export function PricingHero() {
const variant = useExperiment('pricing-hero-v2')
return variant === 'variant-a'
? <PricingHeroNew />
: <PricingHeroOriginal />
}

For multi-variant weighted splits, extend the hook to accept a config:
const variant = useExperiment('checkout-flow', {
variants: [
{ id: 'control', weight: 50 },
{ id: 'one-page', weight: 25 },
{ id: 'accordion', weight: 25 },
]
})

The hook handles assignment and persistence. Don't forget exposure tracking — you need to know which users saw the variant, not just which were assigned. This distinction matters for intent-to-treat analysis.
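One way such a weighted split could be implemented is a cumulative-weight walk: draw one random number, then subtract each variant's weight until the draw falls inside a variant's band. A minimal sketch (the `WeightedVariant` shape mirrors the config above; `pickVariant` is an illustrative helper, not part of any library):

```typescript
// Weighted random assignment via cumulative weights.
interface WeightedVariant {
  id: string
  weight: number // relative weight; weights need not sum to 100
}

function pickVariant(
  variants: WeightedVariant[],
  roll: number = Math.random(), // injectable for deterministic tests
): string {
  const total = variants.reduce((sum, v) => sum + v.weight, 0)
  let threshold = roll * total
  for (const v of variants) {
    threshold -= v.weight
    if (threshold < 0) return v.id
  }
  // Guard against floating-point edge cases when roll is ~1.0
  return variants[variants.length - 1].id
}
```

Accepting the random draw as a parameter keeps the function pure, which makes the split logic trivially unit-testable.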
5. Implementation with ExperimentHQ
Building A/B testing infrastructure from scratch is educational but time-consuming. ExperimentHQ handles assignment, tracking, and statistical analysis with a lightweight script — so you focus on experiments, not plumbing.
Step 1: Add the snippet to your layout
// app/layout.tsx
import Script from 'next/script'
export default function RootLayout({ children }) {
return (
<html>
<head>
<Script
src="https://cdn.experimenthq.io/ehq.js"
data-key="YOUR_PROJECT_KEY"
strategy="beforeInteractive"
/>
</head>
<body>{children}</body>
</html>
)
}

Step 2: Create an experiment in the dashboard. For code-split tests, use the JS API:
const variant = window.ehq?.getVariant('hero-redesign')
if (variant === 'new-hero') {
document.getElementById('hero').classList.add('hero-v2')
}
// Track conversion
document.getElementById('signup-btn')
.addEventListener('click', () => {
window.ehq?.trackGoal('signup')
})

ExperimentHQ handles consistent assignment across sessions, automatic exposure tracking, and real-time statistical significance. You know exactly when a test has enough data — no spreadsheets required.
6. Feature Flags in Next.js
Feature flags and A/B tests are closely related. A flag controls on/off; an A/B test is a flag with measurement. In Next.js, flags can operate at every level of the stack: in middleware for routing decisions, in server code for rendering decisions, and in client components for UI toggles.
A common power pattern: use flags for gradual rollouts, then layer A/B testing on top:
const flags = await getFeatureFlags(userId)
if (flags['new-checkout'].enabled) {
const variant = flags['new-checkout'].variant
return variant === 'streamlined'
? <StreamlinedCheckout />
: <NewCheckout />
}
return <LegacyCheckout />

ExperimentHQ combines feature flags and A/B testing in one platform — roll out to 10%, then 50%, then run a measured experiment. No tool switching. Read more in our complete feature flags guide.
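For the gradual-rollout half of that pattern, one common technique (a sketch, not ExperimentHQ's implementation) is deterministic bucketing: hash the user ID into a stable 0–99 bucket, so widening the rollout from 10% to 50% only adds users and never reshuffles the ones already enabled. This sketch uses the FNV-1a hash; the function names are illustrative:

```typescript
// Deterministic percentage rollout: the same user ID always maps to
// the same bucket, so a widening rollout is strictly additive.
function bucketOf(userId: string): number {
  let h = 2166136261 // FNV-1a 32-bit offset basis
  for (let i = 0; i < userId.length; i++) {
    h ^= userId.charCodeAt(i)
    h = Math.imul(h, 16777619) // FNV-1a 32-bit prime
  }
  return (h >>> 0) % 100 // stable bucket in [0, 99]
}

function isEnabled(userId: string, rolloutPercent: number): boolean {
  return bucketOf(userId) < rolloutPercent
}
```

In practice you would also salt the hash with the flag name, so a user's bucket in one rollout doesn't correlate with their bucket in another.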
7. Statistical Significance
Running a test is easy. Knowing when the result is real is the hard part. Statistical significance tells you how unlikely it is that a difference as large as the one you observed would appear by pure chance if the variants actually performed the same.
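To make that concrete, here is the math behind one standard approach, the two-proportion z-test for comparing conversion rates. This is an illustration of the calculation, not a substitute for a proper stats engine (it ignores peeking, multiple comparisons, and sample-size planning); the helper names are illustrative:

```typescript
// Error function via the Abramowitz & Stegun 7.1.26 approximation
// (max absolute error ~1.5e-7), since JS has no built-in erf.
function erf(x: number): number {
  const sign = x < 0 ? -1 : 1
  const t = 1 / (1 + 0.3275911 * Math.abs(x))
  const poly =
    ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) * t
  return sign * (1 - poly * Math.exp(-x * x))
}

// Two-sided p-value for the difference between two conversion rates:
// how often would random noise alone produce a gap at least this large?
function twoProportionPValue(
  convA: number, visitorsA: number,
  convB: number, visitorsB: number,
): number {
  const pA = convA / visitorsA
  const pB = convB / visitorsB
  const pooled = (convA + convB) / (visitorsA + visitorsB)
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB))
  const z = Math.abs(pA - pB) / se
  const normalCdf = (v: number) => 0.5 * (1 + erf(v / Math.SQRT2))
  return 2 * (1 - normalCdf(z))
}
```

With 1,000 visitors per arm, 3% vs 5% conversion yields a p-value around 0.02 (significant at the usual 0.05 threshold), while 3.0% vs 3.1% is indistinguishable from noise.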
ExperimentHQ auto-calculates significance and tells you when a test has enough data. Read our practical guide on statistical significance.
8. Performance Considerations
A/B testing tools are notorious for slowing down websites. Many inject 100KB+ of JavaScript, block rendering, and tank Core Web Vitals. The way to avoid that in Next.js: prefer server-side splits where possible, keep any client-side snippet small and asynchronous, and never block first paint on an experiment decision.
ExperimentHQ's snippet is under 5KB gzipped and loads asynchronously. For server-side experiments via middleware, there's zero client-side overhead — the experiment is invisible to the browser.
The key principle: your testing tool should never be the reason a test loses. If the control is faster because the variant loads a heavy script, you're measuring tool overhead, not user preference.
Start Testing Today
The best experimentation programs start with a single test. Pick your highest-traffic page, form a hypothesis, and run your first experiment this week. The data will guide everything after that.