A/B testing only works if you can precisely track which users see which variation and measure their behavior against it. Amplitude makes this straightforward: store experiment assignments as user properties, attach them to all events, and segment your metrics by variant to see which one performs better.
Store Experiment Assignments as User Properties
The foundation is capturing which variant each user is in. Do this immediately after your test framework assigns the user, before they interact with the experiment.
Call identify() with the experiment variant
Use amplitude.identify() to save the experiment metadata. Include the experiment ID, variant name, and timestamp. From that point forward, all events from that user automatically include this context.
// After your test framework assigns the user to a variant
const userId = 'user_abc123';
const variant = 'variant_a'; // e.g., 'control', 'variant_a', 'variant_b'
const experimentId = 'checkout_flow_q1';
amplitude.setUserId(userId);
amplitude.identify(
  new amplitude.Identify()
    .set('experiment_id', experimentId)
    .set('experiment_variant', variant)
    .set('experiment_assigned_at', Date.now())
);
Track Exposure and Conversion Events
Log when users enter the experiment and what they do inside it. This gives you the data to compare behavior across variants.
Track an exposure event at entry
Fire an experiment_exposed event when the variation loads. Include the variant and experiment ID. This creates a clean cohort of users who actually saw the experiment.
// When the variation component mounts or page loads
amplitude.track('experiment_exposed', {
  experiment_id: 'checkout_flow_q1',
  experiment_variant: 'variant_a',
  timestamp: Date.now()
});
Track key actions with variant context
Log every user action in the experiment: button clicks, form submissions, purchases, etc. Include the variant in each event so Amplitude knows which group performed each action.
// User clicks the primary CTA
amplitude.track('button_clicked', {
  button_name: 'checkout_submit',
  experiment_variant: 'variant_a',
  experiment_id: 'checkout_flow_q1'
});
// User completes purchase
amplitude.track('purchase_completed', {
  revenue: 149.99,
  items_count: 3,
  experiment_variant: 'variant_a',
  experiment_id: 'checkout_flow_q1',
  time_to_conversion_ms: 32000
});
Include experiment_variant in event properties too. User properties can update, but event properties are immutable and represent the exact variant active when the action occurred.
Segment Results by Variant
With data flowing in, use Amplitude's segmentation tools to compare metrics across your variants.
Compare conversion rates in the Events chart
Go to Analytics > Events, select your conversion event (e.g., 'purchase_completed'), then segment by the experiment_variant property. Amplitude splits the counts and conversion rates side-by-side for each variant.
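To make the comparison concrete, here is a sketch of the aggregation a variant-segmented chart performs, assuming raw events shaped roughly like Amplitude's export format (event_type plus event_properties). The sample data and the conversionRateByVariant function are illustrative, not part of the Amplitude SDK.

```javascript
// Illustrative raw events; in practice these come from your export or warehouse.
const events = [
  { user_id: 'u1', event_type: 'experiment_exposed',  event_properties: { experiment_variant: 'control' } },
  { user_id: 'u1', event_type: 'purchase_completed', event_properties: { experiment_variant: 'control' } },
  { user_id: 'u2', event_type: 'experiment_exposed',  event_properties: { experiment_variant: 'variant_a' } },
];

// Group unique users by variant, then divide converters by exposed users.
function conversionRateByVariant(events) {
  const stats = {};
  for (const e of events) {
    const v = e.event_properties.experiment_variant;
    stats[v] = stats[v] || { exposed: new Set(), converted: new Set() };
    if (e.event_type === 'experiment_exposed') stats[v].exposed.add(e.user_id);
    if (e.event_type === 'purchase_completed') stats[v].converted.add(e.user_id);
  }
  return Object.fromEntries(
    Object.entries(stats).map(([v, s]) => [v, s.converted.size / s.exposed.size])
  );
}
```

Counting distinct users (rather than raw event counts) mirrors how the chart avoids over-weighting users who convert multiple times.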
Build a variant-segmented funnel
Create a Funnel that flows from experiment_exposed → button_clicked → purchase_completed. Segment by experiment_variant to see if one variant has higher drop-off at any step.
// Amplitude builds this query automatically in the UI.
// Under the hood, it's segmenting events like:
// SELECT
// event_properties['experiment_variant'] as variant,
// COUNT(DISTINCT user_id) as users,
// SUM(CASE WHEN event_type = 'purchase_completed' THEN 1 ELSE 0 END) / COUNT(DISTINCT user_id) as conversion_rate
// FROM events
// WHERE event_properties['experiment_id'] = 'checkout_flow_q1'
// GROUP BY variant
// ORDER BY conversion_rate DESC;
Common Pitfalls
- Calling identify() after the user has already triggered events—those early events won't have experiment context. Always set properties before the variation renders.
- Storing only user properties and not including experiment data in event properties—user property syncs can lag or fail, corrupting segment accuracy. Always pass the variant in the event properties.
- Assigning variants purely client-side with random()—page refreshes or cache can reassign users mid-test, splitting them across groups. Use server-side bucketing or a feature flag service.
- Not tracking an explicit exposure event—you won't have a clean entry cohort, making it hard to calculate real conversion rates. Always fire an exposure event when the variant loads.
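To illustrate the bucketing pitfall, deterministic assignment can be sketched in a few lines: hash the user ID together with the experiment ID so the same user always lands in the same variant, across refreshes and devices. This is an illustrative stand-in for a proper feature-flag service; it uses FNV-1a purely for simplicity, and any stable hash works.

```javascript
// Deterministic bucketing sketch: same (userId, experimentId) pair
// always maps to the same variant, unlike a per-pageload random().
function assignVariant(userId, experimentId, variants) {
  const key = `${experimentId}:${userId}`;
  // FNV-1a 32-bit hash of the key.
  let h = 0x811c9dc5;
  for (let i = 0; i < key.length; i++) {
    h ^= key.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return variants[h % variants.length];
}

// Usage: the assignment is stable, so it's safe to call on every pageload.
const variant = assignVariant('user_abc123', 'checkout_flow_q1',
  ['control', 'variant_a', 'variant_b']);
```

Including the experiment ID in the hash key also ensures a user's bucket in one experiment doesn't correlate with their bucket in another.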
Wrapping Up
You're now capturing experiment assignments, tracking user actions by variant, and analyzing results in Amplitude. Segment your conversion rates, funnels, and user flows by variant to identify which one wins. If you want to coordinate A/B testing across multiple tools and automate experiment tracking, Product Analyst can help.