6 min read

How to Set Up A/B Testing in Amplitude

Running A/B tests without solid tracking is just guessing. Amplitude lets you run structured experiments, but you need to properly instrument your variants—assigning users to groups and logging the right events—to get reliable results that actually tell you what works.

Create Your Experiment Structure

The foundation of any A/B test is clear tracking. Start by deciding your variants and logging when users encounter them.

Define Your Variants

Decide on your control and variant groups (commonly 'control', 'variant_a', 'variant_b'). Make sure your application logic can determine which variant a user sees. Assignment can happen server-side or client-side depending on where your test runs, and the source can be a feature flag service, random assignment, or an API response.
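If you don't use a flag service, a deterministic hash of the user ID is a common way to assign variants. The sketch below is illustrative (the helper name and hash choice are mine, not Amplitude APIs): the same user always lands in the same bucket, with no stored state.

```javascript
// Deterministic bucketing sketch: hash "experimentId:userId" so a user's
// variant is stable across sessions without persisting the assignment.
const VARIANTS = ['control', 'variant_a', 'variant_b'];

function assignVariant(userId, experimentId) {
  // Simple djb2-style string hash; any stable hash works here.
  const input = `${experimentId}:${userId}`;
  let hash = 5381;
  for (let i = 0; i < input.length; i++) {
    hash = ((hash * 33) ^ input.charCodeAt(i)) >>> 0;
  }
  return VARIANTS[hash % VARIANTS.length];
}

const userVariant = assignVariant('user_42', 'checkout_button_v2');
```

Including the experiment ID in the hash input keeps buckets independent across experiments, so being in variant_a of one test doesn't correlate with variant_a of another.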

Log Variant Assignment When User Enters Test

As soon as you know which variant a user is in, log an event in Amplitude. Include the experiment ID and variant name. This event is your anchor—everything else gets joined to it in analysis. Log this before any other experiment-related events.

javascript
const amplitude = require('@amplitude/analytics-browser');

// Log exposure to the experiment
amplitude.track('experiment_assignment', {
  experiment_id: 'checkout_button_v2',
  variant: userVariant,
  timestamp: Date.now()
});

// Note: logEvent() belongs to the legacy amplitude-js SDK, not SDK v2.
// The legacy equivalent of the track() call above is:
amplitude.getInstance().logEvent('experiment_assigned', {
  experiment_id: 'checkout_button_v2',
  variant: userVariant
});
Track when a user enters your experiment. The experiment_id and variant must be consistent across all related events.
Watch out: Log the experiment_assignment event once per session, not on every page load. Use a flag or check to ensure you're not duplicating the entry event.
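One simple way to enforce a once-per-session exposure event is a sessionStorage flag. The helper below is a sketch, not an Amplitude API; it takes the storage object and the track function as parameters so it's easy to test, but in the browser you'd pass `window.sessionStorage` and `amplitude.track`.

```javascript
// Dedup guard: log the exposure event at most once per session per experiment.
// `storage` is any sessionStorage-like object; `track` is the SDK's track().
function logExposureOnce(storage, track, experimentId, variant) {
  const key = `amp_exposed_${experimentId}`;
  if (storage.getItem(key)) return false; // already logged this session
  storage.setItem(key, '1');
  track('experiment_assignment', {
    experiment_id: experimentId,
    variant: variant
  });
  return true;
}

// In the browser:
// logExposureOnce(window.sessionStorage, amplitude.track, 'checkout_button_v2', userVariant);
```

sessionStorage clears when the tab closes, which matches the "once per session" rule; use localStorage instead if you want "once per user per device".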

Track Actions Within the Experiment

Once users are assigned to variants, capture the actions that matter—clicks, purchases, sign-ups, whatever your success metric is.

Log Events with Experiment Context

Every event related to your test should include the experiment ID and variant. This lets you slice results by variant in Amplitude's analysis tools. Include both the core event (like 'purchase') and the experiment context properties so you can segment results properly.

javascript
// When user completes the action you're testing
amplitude.track('purchase', {
  experiment_id: 'checkout_button_v2',
  variant: userVariant,
  revenue: 49.99,
  product_id: 'sku_123'
});

// For interaction tracking
amplitude.track('button_clicked', {
  experiment_id: 'checkout_button_v2',
  variant: userVariant,
  button_text: 'Complete Purchase Now',
  page: 'checkout'
});
Include experiment context in every related event so you can segment and compare variants.
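To guarantee the context never gets dropped at a call site, you can centralize it. The wrapper below is a convenience sketch of my own, not part of the Amplitude SDK:

```javascript
// Build a tracker that stamps every event with the experiment context,
// so individual call sites can't forget experiment_id or variant.
function makeExperimentTracker(track, experimentId, variant) {
  return (eventName, props = {}) =>
    track(eventName, { ...props, experiment_id: experimentId, variant: variant });
}

// With the real SDK:
// const trackExp = makeExperimentTracker(amplitude.track, 'checkout_button_v2', userVariant);
// trackExp('purchase', { revenue: 49.99, product_id: 'sku_123' });
// trackExp('button_clicked', { button_text: 'Complete Purchase Now' });
```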

Set User Properties for Segmentation

Mark users with their experiment variant as user properties. In the Browser SDK v2 this is done with the Identify API (setUserProperties() is the older, legacy amplitude-js method). User properties let you segment any event across your entire product by who's in which test, which is useful for detecting unexpected side effects or interactions between experiments.

javascript
const identifyEvent = new amplitude.Identify();
identifyEvent.set('active_experiments', ['checkout_button_v2']);
identifyEvent.set('checkout_button_v2_variant', userVariant);
amplitude.identify(identifyEvent);

// Later, any event logged for this user automatically includes these properties
amplitude.track('page_view', {
  page: 'checkout'
  // ^ This event will inherit the user properties above
});
User properties persist across sessions, so all future events from this user include their variant assignment automatically.
Tip: Keep variant names consistent everywhere. If you use 'control' in one place and 'control_group' in another, Amplitude treats them as separate groups and your results get fragmented.
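One way to enforce that consistency is to define each experiment's identifiers once as frozen constants and import them everywhere they're logged. This is an illustrative convention, not an Amplitude requirement:

```javascript
// Single source of truth for the experiment's names. Object.freeze prevents
// accidental mutation, and call sites can't misspell 'control' as 'Control'.
const CHECKOUT_BUTTON_V2 = Object.freeze({
  id: 'checkout_button_v2',
  variants: Object.freeze({
    CONTROL: 'control',
    VARIANT_A: 'variant_a'
  })
});

// Every call site references the constants:
// amplitude.track('purchase', {
//   experiment_id: CHECKOUT_BUTTON_V2.id,
//   variant: CHECKOUT_BUTTON_V2.variants.CONTROL
// });
```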

Analyze Results in Amplitude

Create a Segmentation Report

Create a new Event Segmentation chart. Choose your metric (e.g., 'purchase' events), then group by the variant event property. Amplitude breaks down your key metrics by control vs. variant_a, showing you conversion rates, counts, and other stats side by side.

javascript
// Raw events can also be pulled for custom analysis via Amplitude's Export API.
// It's a GET request with Basic auth (project API key and secret key); the
// response is a zip archive of newline-delimited JSON event files.
const response = await fetch(
  'https://amplitude.com/api/2/export?start=20240101T00&end=20240131T23',
  {
    headers: {
      'Authorization': 'Basic ' + btoa('API_KEY:SECRET_KEY')
    }
  }
);

// After unzipping the archive and parsing each JSON line into an `events`
// array, filter down to your experiment:
const variantResults = events.filter(e =>
  e.event_properties?.experiment_id === 'checkout_button_v2'
);
Export raw event data via the API for custom analysis or external tooling; note that the Export API returns data in hourly batches.
Watch out: Run your test long enough to reach statistical significance. Amplitude shows you sample size and confidence intervals—aim for 95% confidence and at least a few hundred conversions per variant before deciding on a winner.
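If you want a sanity check outside Amplitude, the standard two-proportion z-test is easy to compute yourself. The helper below is a back-of-the-envelope sketch, not an Amplitude feature; it takes conversion counts and exposure counts per variant.

```javascript
// Two-proportion z-test: |z| > 1.96 corresponds to 95% confidence that the
// conversion rates of the two variants differ.
function twoProportionZ(convA, nA, convB, nB) {
  const pA = convA / nA;
  const pB = convB / nB;
  const pPooled = (convA + convB) / (nA + nB);
  const se = Math.sqrt(pPooled * (1 - pPooled) * (1 / nA + 1 / nB));
  return (pA - pB) / se;
}

// Example: 480/5000 conversions for control vs. 530/5000 for variant_a.
const z = twoProportionZ(480, 5000, 530, 5000);
const significant = Math.abs(z) > 1.96; // |z| is about 1.66 here: not yet significant
```

With these numbers the 9.6% vs. 10.6% difference looks promising but hasn't cleared the 95% bar, which is exactly the "stopped too early" trap described above.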

Common Pitfalls

  • Logging events with inconsistent variant names ('control', 'Control', 'group_a')—Amplitude treats these as separate variants, fragmenting your analysis across similar groups
  • Forgetting to include experiment_id and variant in every related event—makes it impossible to properly segment results and compare variants side-by-side
  • Logging the experiment_assignment event multiple times per user—inflates your exposure count and skews conversion rates and statistical calculations
  • Stopping your test too early before reaching statistical significance—random noise can look like real differences in small samples and lead to wrong decisions

Wrapping Up

You now have a structured A/B test set up in Amplitude with proper variant assignment and event tracking. Run it long enough to reach statistical significance, then use the results to guide your product decisions. If you want to track this automatically across tools, Product Analyst can help.

Track these metrics automatically

Product Analyst connects to your stack and surfaces the insights that matter.

Try Product Analyst — Free