A/B testing is how you validate that a feature actually moves the needle instead of guessing. PostHog's Experiments give you a clean way to split traffic between variants, collect metrics, and see statistical significance without having to wire up your own experiment infrastructure.
Create an Experiment in PostHog
Start by defining your test in the PostHog UI. You'll specify your variants, who participates, and what success looks like.
Go to Experiments and create a new one
In the PostHog app, navigate to Experiments in the left sidebar. Click New experiment. You'll define the experiment name (e.g., checkout-button-color), description, and which feature flag will back this experiment.
// After creating the experiment in the PostHog UI,
// you'll reference it by its feature flag key
const experimentVariant = posthog.getFeatureFlag('checkout-button-color');
console.log(experimentVariant); // 'control' or 'test'

Set up your variants
Define what each variant does. Create a control (your current experience) and at least one test variant. For a button color test, the control keeps the existing blue, and test changes it to green. You can add more variants, but each one splits your traffic further.
Set participant inclusion rules
Specify who gets included in the experiment. You can include all users, specific cohorts, or filter by properties. In the Participants section, set the percentage of traffic exposed (e.g., 50% means half your users see variants). You can also set a release condition to gradually roll out.
Implement Variants in Your Code
Once the experiment is live, check which variant each user gets and render accordingly. PostHog handles the random assignment automatically.
Call `getFeatureFlag()` to get the user's assignment
Inside your component, use the PostHog SDK to retrieve the variant. This is a synchronous call once PostHog has initialized. It returns undefined if flags haven't loaded yet, and a falsy value (rather than a variant key) if the user isn't enrolled in the experiment.
import posthog from 'posthog-js';

export function CheckoutButton() {
  const variant = posthog.getFeatureFlag('checkout-button-color');
  if (variant === 'test') {
    return <button className="green-button">Complete Purchase</button>;
  }
  return <button className="blue-button">Complete Purchase</button>;
}

Handle the timing of flag initialization
Sometimes a component renders before PostHog has finished loading flags. Use the `onFeatureFlags` callback to run code once flags load, or show a loading state until then. This prevents rendering one variant and then switching mid-page.
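One way to handle that timing is to wrap the callback in a promise and read the flag only after it resolves. `variantWhenReady` is a hypothetical helper, and `mockClient` below stands in for the real posthog-js instance:

```javascript
// Hypothetical helper: resolve with the variant only after flags load.
// `posthogClient` stands in for the posthog-js instance.
function variantWhenReady(posthogClient, flagKey) {
  return new Promise((resolve) => {
    // posthog-js fires this callback once feature flags are available
    posthogClient.onFeatureFlags(() => {
      resolve(posthogClient.getFeatureFlag(flagKey));
    });
  });
}

// Demo with a mock client that "loads" flags asynchronously:
const mockClient = {
  onFeatureFlags(cb) { setTimeout(cb, 0); },
  getFeatureFlag() { return 'test'; },
};
variantWhenReady(mockClient, 'checkout-button-color')
  .then((variant) => console.log(variant)); // 'test'
```

In a React component you would await this (or use the equivalent hook from posthog-js/react) and render a neutral loading state until the promise settles.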
Track events that measure your hypothesis
Capture events that indicate success: button clicks, form submissions, purchases. PostHog automatically attributes these to the variant. In Metrics, select which events to measure and PostHog calculates conversion rate by variant.
// Track engagement and success events
function handleCheckoutClick() {
  posthog.capture('checkout_button_clicked', {
    variant: posthog.getFeatureFlag('checkout-button-color'),
    button_color: '#22c55e'
  });
  proceedToCheckout();
}

function handlePurchaseSuccess() {
  posthog.capture('checkout_completed', {
    amount: totalPrice,
    variant: posthog.getFeatureFlag('checkout-button-color')
  });
}

Choose your success metrics in the UI
Back in Experiments, navigate to the Metrics section and select which events count as wins. For the checkout button, select checkout_button_clicked and checkout_completed. PostHog calculates conversion rates and statistical significance for each variant automatically.
// Example events for a complete checkout flow
posthog.capture('checkout_page_viewed');
posthog.capture('checkout_button_clicked');
posthog.capture('payment_form_opened');
posthog.capture('checkout_completed');
posthog.capture('checkout_error');

If PostHog hasn't loaded flags when you call `getFeatureFlag()`, it returns undefined. Always set a default (usually the control variant) or wait for the `onFeatureFlags` callback.

Monitor Results and Declare a Winner
Review live results in the Experiments dashboard
In Experiments, watch the table update with variant counts, conversion rates, and confidence intervals. PostHog calculates statistical significance automatically and shows the sample size needed for each metric. Don't stop early—let it run until you hit the confidence threshold.
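For intuition about what "significance" means here, the classic frequentist version is a two-proportion z-test over conversion counts. PostHog's experiment engine uses its own (Bayesian) methodology, so treat this as a rough illustration only:

```javascript
// Two-proportion z-test over conversion counts (rough illustration;
// PostHog's experiment engine uses its own Bayesian methodology).
function zTest(convA, totalA, convB, totalB) {
  const pA = convA / totalA;
  const pB = convB / totalB;
  const pPool = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / totalA + 1 / totalB));
  return (pB - pA) / se; // |z| > 1.96 is roughly 95% confidence
}

// 1000 users per variant, 100 vs 150 conversions:
const z = zTest(100, 1000, 150, 1000);
console.log(z.toFixed(2), Math.abs(z) > 1.96); // 3.38 true
```

Notice how a small lift (say 100 vs 101 conversions per 1000) yields |z| far below 1.96, which is exactly why stopping early on a tiny observed lift is a mistake.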
// You can also query results via the API
// (use your own project ID and experiment ID):
const response = await fetch(
  'https://app.posthog.com/api/projects/:project_id/experiments/:experiment_id/',
  { headers: { Authorization: `Bearer ${POSTHOG_API_KEY}` } }
);
const data = await response.json();
console.log(data.results); // Variant results and metrics

Stop the experiment and deploy the winner
When your experiment reaches significance (typically 95% confidence), click Stop experiment. Update your code to always use the winning variant and remove the conditional logic. Delete the feature flag once you've deployed.
// After the green button wins, simplify to the winner:
export function CheckoutButton() {
  return (
    <button className="green-button">
      Complete Purchase
    </button>
  );
}

Common Pitfalls
- Starting with too many variants. Five variants means splitting traffic five ways, and each arm needs enough volume to reach significance. Start with two: control and one test.
- Not waiting for PostHog to load. Call `getFeatureFlag()` before flags initialize and you get undefined. Use `onFeatureFlags()` or set a sensible default.
- Inconsistent event tracking between variants. If test captures `checkout_clicked` but control doesn't, you can't compare. Track the same events everywhere.
- Stopping experiments early because you see a 3% lift. Real significance takes time. Let PostHog reach 95%+ confidence before declaring a winner, or you'll ship changes that don't actually work.
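The "not waiting for PostHog to load" pitfall can be neutralized with a small defensive wrapper. `variantOrControl` is a hypothetical helper, and the inline objects stand in for the posthog-js client:

```javascript
// Hypothetical defensive wrapper: never act on a falsy flag value.
// `posthogClient` stands in for the posthog-js instance.
function variantOrControl(posthogClient, flagKey) {
  const variant = posthogClient.getFeatureFlag(flagKey);
  // undefined -> flags not loaded yet; false -> user not enrolled
  return typeof variant === 'string' ? variant : 'control';
}

// With mock clients:
console.log(variantOrControl({ getFeatureFlag: () => undefined }, 'checkout-button-color')); // 'control'
console.log(variantOrControl({ getFeatureFlag: () => 'test' }, 'checkout-button-color'));    // 'test'
```

Falling back to control means the worst case is that a not-yet-bucketed user briefly sees the existing experience, which is safer than flashing the test variant at everyone.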
Wrapping Up
You now have a framework to test hypotheses rigorously instead of guessing. Create the experiment, check the variant in code, track the outcomes, and let PostHog calculate whether the change works. Running experiments regularly is the only way to build products that users actually want. If you want to track experiments across your whole stack and correlate them with product changes, Product Analyst can help.