How to Track A/B Tests in PostHog

A/B tests only matter if you measure them. PostHog gives you multiple ways to track which variant users see and how they behave, whether you're testing UI changes, copy, or entire features. The key is capturing the assignment event and connecting it to downstream actions.

Capture Experiment Assignments as Events

The foundation of A/B test tracking is recording when a user enters an experiment and which variant they get. You can do this by firing a custom event each time a user sees a variant.

Step 1: Fire an event when a variant is assigned

Whenever your code decides to show a user a variant, capture that decision as a custom event. Include the experiment name, variant, and any other relevant context. PostHog will automatically attach the user ID and timestamp.

javascript
posthog.capture('experiment_viewed', {
  'experiment_name': 'checkout_button_color',
  'variant': 'blue_button',
  'experiment_id': 'exp_checkout_001'
});
Fire this event as soon as the variant is assigned

Step 2: Use consistent naming across variants

Keep experiment names and variant values identical across your codebase and analytics. PostHog property values are case-sensitive, so blue_button and Blue_Button count as two different variants.

javascript
const VARIANTS = {
  CONTROL: 'control',
  BLUE_BUTTON: 'blue_button',
  GREEN_BUTTON: 'green_button'
};

posthog.capture('experiment_viewed', {
  'experiment_name': 'checkout_button_color',
  'variant': VARIANTS.BLUE_BUTTON
});
Define variants as constants to prevent typos
Tip: Fire the event as early as possible, right when you assign the variant. If you wait until after user interaction, you might miss users who never engage.
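The steps above can be combined into one helper that assigns a variant deterministically and reports it in the same breath, so no user is assigned without being tracked. This is a minimal sketch: the helper name, the hashing scheme, and the `capture` callback (standing in for posthog.capture) are illustrative, not part of the PostHog SDK.

```javascript
// Hypothetical helper: deterministically pick a variant by hashing the
// user's distinct ID, then report the assignment immediately.
// `capture` stands in for posthog.capture.
function assignVariant(distinctId, experimentName, variants, capture) {
  // Simple djb2-style string hash so the same user always lands
  // in the same variant
  let hash = 5381;
  for (const ch of distinctId) {
    hash = ((hash * 33) + ch.charCodeAt(0)) >>> 0;
  }
  const variant = variants[hash % variants.length];

  // Fire the assignment event right away, before any user interaction
  capture('experiment_viewed', {
    experiment_name: experimentName,
    variant: variant
  });

  return variant;
}
```

Because the assignment is a pure function of the distinct ID, repeat visits see the same variant without any stored state, and the event fires at the moment of assignment rather than after an interaction.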

Track Variant Actions and Conversions

After capturing the assignment event, track the actions users take while in each variant. This links conversions back to the experiment.

Step 1: Capture conversion events with variant context

When users complete your key action (purchase, signup, etc.), include the current variant in that event. This links the conversion back to the experiment without needing separate queries.

javascript
const variant = localStorage.getItem('checkout_button_variant');

posthog.capture('purchase_completed', {
  'revenue': 99.99,
  'product_id': 'prod_abc123',
  'experiment_name': 'checkout_button_color',
  'variant': variant
});
Include experiment context in conversion events

Step 2: Use PostHog Experiments for automated tracking

PostHog's Experiments feature automatically tracks variant assignments via feature flags. Define your test in PostHog, retrieve the variant with getFeatureFlag, and PostHog handles the analytics automatically.

javascript
// Wait for flags to load; getFeatureFlag returns undefined until then,
// which would silently fall through to the control styling
posthog.onFeatureFlags(() => {
  const variant = posthog.getFeatureFlag('button_color_test');

  if (variant === 'blue_button') {
    renderButton({ backgroundColor: '#0066cc' });
  } else if (variant === 'green_button') {
    renderButton({ backgroundColor: '#00b359' });
  } else {
    renderButton({ backgroundColor: '#666' });
  }
});
PostHog automatically tracks assignments when using feature flags
Watch out: If you're using PostHog Experiments, don't fire a separate experiment_viewed event. PostHog captures the assignment automatically via the feature flag. Double-tracking skews your results.

Analyze Results and Statistical Significance

Once you've collected enough data, PostHog shows you which variant won. Use the Experiments dashboard or create custom funnels to measure impact.

Step 1: View results in PostHog Experiments

Open Experiments in PostHog, select your test, and check the Results tab. PostHog displays conversion rates, sample sizes, and confidence intervals for each variant. A green badge means the result is statistically significant.

javascript
// Fetch experiment results via the PostHog API
// (experiments are scoped to a project; authenticate with a personal API key)
const response = await fetch(
  `https://app.posthog.com/api/projects/${PROJECT_ID}/experiments/`,
  {
    headers: { Authorization: `Bearer ${API_TOKEN}` }
  }
);

const { results } = await response.json();
const test = results.find(e => e.name === 'button_color_test');

console.log('Feature flag:', test.feature_flag_key);
console.log('Status:', test.status);
console.log('Results:', test.results);
Retrieve experiment metadata and results via PostHog API

Step 2: Create a funnel to isolate experiment impact

Create a Funnel in Insights that filters by experiment_name or variant. This isolates conversion rates for each variant, accounting for users who drop off at different steps.

sql
-- PostHog SQL (HogQL) query to compare variant performance
SELECT 
  properties.variant,
  COUNT(DISTINCT distinct_id) as total_users,
  SUM(CASE WHEN event = 'purchase_completed' THEN 1 ELSE 0 END) as conversions,
  ROUND(100.0 * SUM(CASE WHEN event = 'purchase_completed' THEN 1 ELSE 0 END) / 
        COUNT(DISTINCT distinct_id), 2) as conversion_rate_pct
FROM events
WHERE properties.experiment_name = 'checkout_button_color'
  AND event IN ('experiment_viewed', 'purchase_completed')
GROUP BY properties.variant
ORDER BY conversion_rate_pct DESC;
SQL query to calculate conversion rates by variant
Tip: PostHog Experiments needs a minimum sample size to calculate significance. Let data collect for at least 3-5 days before drawing conclusions. Check the confidence level—aim for 95% or higher.
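For intuition about what a significance check is doing, here is a rough sketch of the classical two-proportion z-test. PostHog's own methodology may differ (it uses Bayesian statistics for some experiment types), so treat this as a back-of-the-envelope sanity check, not a reimplementation of PostHog's math.

```javascript
// Two-proportion z-test: are conversion rates pA and pB different
// beyond what sampling noise would explain?
function conversionZTest(convA, totalA, convB, totalB) {
  const pA = convA / totalA;
  const pB = convB / totalB;

  // Pooled rate under the null hypothesis (no real difference)
  const pooled = (convA + convB) / (totalA + totalB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));

  const z = (pB - pA) / se;
  // |z| >= 1.96 corresponds to roughly 95% confidence, two-tailed
  return { z: z, significant: Math.abs(z) >= 1.96 };
}
```

With 100/1000 conversions in control versus 150/1000 in the variant, the test comes back significant; with 5/50 versus 7/50 it does not, which is exactly why small samples and early stopping are risky.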

Common Pitfalls

  • Firing the experiment event too late or conditionally—some users never trigger it, and you only track a subset of your audience.
  • Using inconsistent variant names (e.g., 'control' vs 'CONTROL')—PostHog treats them as separate variants, splitting your data.
  • Stopping the test too early—with small sample sizes, PostHog can't calculate statistical significance. Early stopping leads to false positives.
  • Double-tracking with both feature flags and custom events—PostHog Experiments already tracks assignments automatically. Extra events create duplicate data and inflated counts.
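The first two pitfalls can be guarded against with a small wrapper that normalizes variant names and fires the assignment event at most once per user and experiment. This is a sketch: `captureOnce` and the in-memory Set are illustrative (in a browser you would persist the key in localStorage instead), and `capture` stands in for posthog.capture.

```javascript
// Tracks which user/experiment pairs have already been reported
const seen = new Set();

// Fire the assignment event once per user and experiment;
// returns true if the event was captured, false if it was a duplicate
function captureOnce(distinctId, experimentName, variant, capture) {
  const key = `${distinctId}:${experimentName}`;
  if (seen.has(key)) return false; // already tracked, skip the duplicate

  seen.add(key);
  capture('experiment_viewed', {
    experiment_name: experimentName,
    // Normalize case so 'control' and 'CONTROL' don't split your data
    variant: variant.toLowerCase()
  });
  return true;
}
```

If you are using PostHog Experiments with feature flags, skip this entirely; the automatic tracking already handles deduplication.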

Wrapping Up

Tracking A/B tests in PostHog boils down to firing an event when users see a variant and capturing conversions with that variant context. Use PostHog's built-in Experiments feature for automatic tracking and statistical significance, or manually capture events if you're running tests elsewhere. Either way, consistency in naming and early event capture are critical. If you want to track A/B tests automatically across multiple tools and platforms, Product Analyst can help.

Track these metrics automatically

Product Analyst connects to your stack and surfaces the insights that matter.

Try Product Analyst — Free