
How to Set Up Alerts for Event Volume in PostHog

Event volume anomalies—sudden spikes or drops—are usually a sign of trouble. A deployment broke something. A feature shipped and tanked adoption. Or your tracking code stopped firing. PostHog's alerts let you catch these in real time instead of discovering them in a weekly review.

Create an Alert in the PostHog UI

The fastest way to start is through PostHog's built-in alert feature.

Build a Trends insight for your event

Go to Insights and create a new Trends chart. Select the event you want to monitor (e.g., $pageview, purchase, or any custom event). Keep the default Total count metric to track raw event volume.

```javascript
// Make sure your events are flowing into PostHog
posthog.capture('purchase', {
  product_id: 'prod_123',
  amount: 29.99,
  currency: 'USD'
});
```
Verify your events are being captured consistently

Set up the alert threshold

Click Add alert on your insight. Choose Absolute value to trigger on a fixed count (e.g., alert if volume drops below 100 events/hour), or Relative change to alert on percentage swings (e.g., alert if volume drops 50% from baseline). Set your lookback period—typically 7 days so PostHog can compare recent volume against a baseline.

```javascript
// Example alert configuration structure
// (Set via PostHog UI or API)
const alertConfig = {
  metric: 'count',
  comparison: 'less_than',
  threshold: 100,
  threshold_type: 'absolute',
  lookback_days: 7,
  evaluation_window: '1h'
};
```
Alert fires when volume drops below 100 events/hour

Choose your notification channels

Select Slack, Email, or Webhook. If using Slack, authorize PostHog to your workspace and pick a channel. For webhooks, provide your endpoint and PostHog will POST alert payloads when thresholds trigger.

```javascript
// Example: Receive PostHog alerts via webhook
const express = require('express');
const app = express();

app.post('/posthog-alert-webhook', express.json(), (req, res) => {
  const { alert_name, status, value, insight_id } = req.body;
  console.log(`Alert "${alert_name}" triggered. Current value: ${value}`);

  // Route to PagerDuty, Datadog, or your incident system
  // (notifyOncall is a placeholder for your own escalation helper)
  if (value < 50) {
    notifyOncall(alert_name, value);
  }

  res.status(200).json({ received: true });
});

app.listen(3000);
```
Webhook handler to integrate PostHog alerts into your incident workflow
Tip: Use Absolute value for baseline thresholds (a minimum viable event volume). Use Relative change for events that fluctuate with user activity.
Watch out: Thresholds tighter than your normal day-to-day variance (often around 20%) will generate daily false positives.

Create Alerts Programmatically via API

For teams managing dozens of alerts or building alert automation, the PostHog API gives you full control.

Get your API key and project ID

In PostHog, go to your personal settings and create a Personal API key (starts with phx_); project API keys (phc_) only authorize event capture, not the private API. You'll also need your project ID from the URL bar or Settings > General.

```javascript
// Personal API keys (prefix phx_) are required for PostHog's private API
const axios = require('axios');

const posthog = axios.create({
  baseURL: 'https://app.posthog.com/api',
  headers: {
    'Authorization': `Bearer phx_YOUR_PERSONAL_API_KEY`,
    'Content-Type': 'application/json'
  }
});

const PROJECT_ID = 'YOUR_PROJECT_ID';

module.exports = { posthog, PROJECT_ID };
```
Set up the PostHog API client with authentication

Create an alert on an insight via API

POST to /api/projects/{project_id}/alerts/ with your insight ID, threshold settings, and notification channels. The comparison field accepts less_than, greater_than, relative_decrease, or relative_increase.

```javascript
const createAlert = async () => {
  try {
    const response = await posthog.post(`/projects/${PROJECT_ID}/alerts/`, {
      name: 'Critical: Purchase volume drop',
      insight: 'insight_abc123',  // Your Trends insight ID
      comparison: 'less_than',
      threshold: 50,
      threshold_type: 'absolute',
      evaluation_window: '1h',
      lookback_days: 7,
      notify_slack: true,
      slack_channel: '#alerts',
      notify_email: true,
      email_to: '[email protected]'
    });

    console.log('Alert created:', response.data.id);
    return response.data;
  } catch (error) {
    console.error('Failed to create alert:', error.response?.data);
  }
};

createAlert();
```
Create an alert that triggers when event volume drops below 50/hour

List and manage alerts programmatically

Fetch all alerts for your project, check their status, and update thresholds as your baseline stabilizes. You can also disable an alert by setting is_active: false.

```javascript
// List all alerts in your project
const listAlerts = async () => {
  const response = await posthog.get(`/projects/${PROJECT_ID}/alerts/`);
  response.data.results.forEach(alert => {
    console.log(`${alert.name}: threshold=${alert.threshold}, active=${alert.is_active}`);
  });
};

// Update an alert threshold
const updateAlertThreshold = async (alertId, newThreshold) => {
  const response = await posthog.patch(
    `/projects/${PROJECT_ID}/alerts/${alertId}/`,
    { threshold: newThreshold }
  );
  console.log('Alert updated:', response.data);
};

listAlerts();
updateAlertThreshold('alert_xyz', 75);
```
Manage alerts: list, inspect, and update thresholds
Tip: Start with loose thresholds and tighten after 1–2 weeks of baseline data.
Watch out: Always verify your insight is saved before creating an alert—unsaved insights won't work.

Tune and Monitor Alert Performance

Alerts only help if the thresholds are calibrated to your actual traffic patterns.

Review baseline volume and adjust thresholds

After a week, check your insight's trend. If you typically see 1000 events/hour ±200, set your drop alert at 500–600. If you see 50% variance between morning and evening, use relative change alerts instead of absolute thresholds.

```javascript
// Fetch insight data to calculate appropriate thresholds
const getInsightStats = async (insightId) => {
  const response = await posthog.get(
    `/projects/${PROJECT_ID}/insights/${insightId}/?refresh=true`
  );

  const results = response.data.result;
  const values = results[0].data;  // Array of event counts

  const avg = values.reduce((a, b) => a + b, 0) / values.length;
  const stdDev = Math.sqrt(
    values.reduce((sum, v) => sum + Math.pow(v - avg, 2), 0) / values.length
  );

  console.log(`Average: ${avg}, StdDev: ${stdDev}`);
  console.log(`Recommended alert threshold: ${Math.round(avg - 2 * stdDev)}`);
};

getInsightStats('insight_abc123');
```
Calculate statistical thresholds from your actual baseline

Monitor alert firing patterns and adjust

Go to Insights > Alerts to see recent alert activity. If the same alert fires multiple times daily, your threshold is too sensitive. If it never fires despite real issues, it's too loose. Adjust every 1-2 weeks as patterns stabilize.

```javascript
// Check alert firing history
const getAlertActivity = async (alertId) => {
  const response = await posthog.get(
    `/projects/${PROJECT_ID}/alerts/${alertId}/`
  );

  const alert = response.data;
  console.log(`Alert: ${alert.name}`);
  console.log(`Last triggered: ${alert.last_triggered_at}`);
  console.log(`Is active: ${alert.is_active}`);
  console.log(`Current threshold: ${alert.threshold}`);
};

getAlertActivity('alert_xyz');
```
Check alert metadata and firing history
Tip: Create two alerts for critical events—a strict one that pages oncall only on severe drops (e.g., 50% below baseline) and a looser one that posts to team Slack on milder drops (e.g., 25% below baseline). This prevents pager fatigue while keeping team visibility.
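The two-tier pattern can be sketched with the same illustrative config shape used earlier. The field names here are assumptions carried over from the examples above, not PostHog's documented API schema, and the exact percentages are a judgment call:

```javascript
// Hypothetical helper: derive a strict oncall alert and a looser team alert
// from one hourly baseline. Field names mirror the illustrative payloads
// above; they are assumptions, not PostHog's documented schema.
const buildTieredAlerts = (insightId, hourlyBaseline) => ([
  {
    name: 'Oncall: severe volume drop',
    insight: insightId,
    comparison: 'less_than',
    threshold: Math.round(hourlyBaseline * 0.5),   // fires on a severe drop (50% below baseline)
    threshold_type: 'absolute',
    notify_email: true,
    email_to: '[email protected]'
  },
  {
    name: 'Team: mild volume drop',
    insight: insightId,
    comparison: 'less_than',
    threshold: Math.round(hourlyBaseline * 0.75),  // fires on a milder drop (25% below baseline)
    threshold_type: 'absolute',
    notify_slack: true,
    slack_channel: '#alerts'
  }
]);

const [oncall, team] = buildTieredAlerts('insight_abc123', 1000);
console.log(oncall.threshold, team.threshold); // 500 750
```

With a 1000 events/hour baseline, Slack hears about anything below 750 while oncall is only paged below 500.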

Common Pitfalls

  • Setting thresholds too tight in the first week—wait 7–14 days of baseline data before assuming your threshold is correct.
  • Using absolute thresholds for high-variance events—purchase volume at 2 AM is legitimately lower than 2 PM. Use relative change alerts or time-zone-aware windows.
  • Forgetting to enable notifications—your alert fires silently in PostHog and nobody ever sees it.
  • Not disabling stale alerts—old alerts clutter your dashboard and desensitize you to new problems. Clean them up quarterly.
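The absolute-vs-relative distinction above can be made concrete with a small sketch (plain JavaScript, no PostHog API involved) of how each threshold type judges the same overnight dip. This mirrors the semantics described in this article, not PostHog's internal evaluation logic:

```javascript
// Illustrative only: how absolute and relative thresholds evaluate a data
// point against a baseline.
const wouldFire = (config, current, baseline) => {
  if (config.threshold_type === 'absolute') {
    // Fires whenever the raw count crosses the fixed line
    return current < config.threshold;
  }
  // Relative: fires only when the drop from baseline exceeds the percentage
  const dropPct = ((baseline - current) / baseline) * 100;
  return dropPct >= config.threshold;
};

// 2 AM traffic: 120 events/hour; the same hour yesterday saw 130
const absolute = { threshold_type: 'absolute', threshold: 200 };
const relative = { threshold_type: 'relative', threshold: 50 };  // 50% drop

console.log(wouldFire(absolute, 120, 130));  // true: nightly false positive
console.log(wouldFire(relative, 120, 130));  // false: only an ~8% drop
console.log(wouldFire(relative, 40, 130));   // true: a real ~69% drop
```

The absolute alert pages every night; the relative alert, compared against the same hour yesterday, stays quiet until volume genuinely collapses.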

Wrapping Up

You're now catching event volume anomalies in real time. Start with one critical event, stabilize the threshold over a week, then layer on additional alerts as confidence grows. If you want to correlate volume drops with deployments, feature flags, or customer behavior across your entire analytics stack, Product Analyst can help automate that.

Track these metrics automatically

Product Analyst connects to your stack and surfaces the insights that matter.

Try Product Analyst — Free