Enhanced Alertmanager configuration with hierarchical routing, multiple receivers, and best practices for production use.
Set these environment variables before starting Alertmanager:
# Slack Configuration
export SLACK_WEBHOOK_URL="https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
# Email Configuration (SMTP)
export SMTP_HOST="smtp.gmail.com"
export SMTP_PORT="587"
export SMTP_USER="your-email@gmail.com"
export SMTP_PASSWORD=""  # set your SMTP/app password; keep it out of version control
export ALERT_EMAIL_FROM="alerts@yourdomain.com"
export ALERT_EMAIL_TO="team@yourdomain.com"
export ALERT_EMAIL_CRITICAL="oncall@yourdomain.com"
export ALERT_EMAIL_WARNING="devops@yourdomain.com"
export ALERT_EMAIL_ERRORS="backend-team@yourdomain.com"
export ONCALL_EMAIL="oncall@yourdomain.com"
export BACKEND_TEAM_EMAIL="backend@yourdomain.com"
# Webhook Configuration (optional)
export WEBHOOK_URL="https://your-webhook-url.com/alerts"
export WEBHOOK_BEARER_TOKEN=""  # optional bearer token for the webhook receiver
# PagerDuty Configuration (optional)
export PAGERDUTY_SERVICE_KEY=""  # your PagerDuty integration key
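With Docker Compose, the same variables can instead live in a .env file next to docker-compose.observability.yml; Compose reads it automatically when substituting ${VAR} references in the compose file. A minimal sketch (keep this file out of version control, since it holds secrets):

# .env
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
ALERT_EMAIL_TO=team@yourdomain.com
SMTP_PASSWORD=your-app-password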
Update docker-compose.observability.yml to include environment variables:
alertmanager:
  image: prom/alertmanager:latest
  container_name: alertmanager
  environment:
    - SLACK_WEBHOOK_URL=${SLACK_WEBHOOK_URL}
    - ALERT_EMAIL_TO=${ALERT_EMAIL_TO}
    # ... other variables
  volumes:
    - ./monitoring/alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
  command:
    - --config.file=/etc/alertmanager/alertmanager.yml
  ports:
    - "9093:9093"
How alerts are routed (match condition → receiver(s) → Slack channel(s)):

- severity: critical → critical-alerts, plus critical-oncall for BackendDown → #alerts-critical, #oncall-critical
- severity: warning → warning-alerts → #alerts-warning
- severity: info → info-alerts → #alerts-info (no resolved notifications)
- service: github-ai-search-backend → backend-team receiver
- alertname: HighErrorRate → error-rate-alerts receiver

The configuration also includes inhibition rules to reduce alert noise.
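For example, an inhibition rule that mutes warning-level alerts while a matching critical alert is firing might look like this (an illustrative sketch, not necessarily the exact rules shipped in this config):

inhibit_rules:
  - source_matchers:
      - severity = critical          # the alert that does the muting
    target_matchers:
      - severity = warning           # the alerts that get muted
    equal: ['alertname', 'service']  # only inhibit when these labels match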
What each receiver sends (receiver → destinations):

- default → #alerts-general, default email
- severity: critical alerts → #alerts-critical, critical email, webhook; on-call escalation goes to #oncall-critical and the on-call email
- severity: warning alerts → #alerts-warning, warning email
- severity: info alerts → #alerts-info (no resolved notifications)
- service: github-ai-search-backend alerts → #backend-alerts, backend team email
- alertname: HighErrorRate alerts → #alerts-errors, error monitoring email

Add new routes in the routes section:
routes:
  - match:
      service: your-service
    receiver: your-service-receiver
    continue: true
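With continue: true, an alert that matches this route is still evaluated against the routes that follow, so it can fan out to more than one receiver; omit it (the default is false) to stop at the first match. Recent Alertmanager releases also prefer the matchers list syntax over match, though match still works.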
Add new receivers in the receivers section:
receivers:
  - name: your-receiver
    slack_configs:
      - api_url: '${SLACK_WEBHOOK_URL}'
        channel: '#your-channel'
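slack_configs accepts more options than the channel alone. A sketch with two commonly useful fields (the title template is illustrative, and as noted above ${SLACK_WEBHOOK_URL} must be substituted before Alertmanager reads the file):

receivers:
  - name: your-receiver
    slack_configs:
      - api_url: '${SLACK_WEBHOOK_URL}'
        channel: '#your-channel'
        send_resolved: true                     # also notify when the alert clears
        title: '{{ .CommonLabels.alertname }}'  # Go template over the notification data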
Create custom templates in /etc/alertmanager/templates/ and reference them in the templates section.
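A minimal sketch of a custom template plus the stanza that loads it (the define name slack.custom.title is arbitrary, chosen here for illustration):

# /etc/alertmanager/templates/slack.tmpl
{{ define "slack.custom.title" }}[{{ .Status | toUpper }}] {{ .CommonLabels.alertname }}{{ end }}

# alertmanager.yml
templates:
  - '/etc/alertmanager/templates/*.tmpl'

A receiver can then use it, e.g. title: '{{ template "slack.custom.title" . }}' in a slack_config.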
docker-compose -f docker-compose.observability.yml up -d alertmanager
Once the container is up, the Alertmanager UI is available at http://localhost:9093.
# Send a test alert directly to the Alertmanager API
# (use /api/v2/alerts; the v1 API was removed in Alertmanager 0.27)
curl -X POST http://localhost:9093/api/v2/alerts \
  -H "Content-Type: application/json" \
  -d '[{
    "labels": {
      "alertname": "TestAlert",
      "severity": "critical",
      "service": "github-ai-search-backend"
    },
    "annotations": {
      "summary": "Test alert",
      "description": "This is a test alert"
    }
  }]'
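To confirm the alert was accepted, query it back with amtool (inside the container, amtool needs to be told where the API lives):

# List currently firing alerts
docker exec alertmanager amtool alert query --alertmanager.url=http://localhost:9093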
# Check configuration syntax
docker exec alertmanager amtool check-config /etc/alertmanager/alertmanager.yml
# Print the routing tree
docker exec alertmanager amtool config routes show --config.file=/etc/alertmanager/alertmanager.yml
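amtool can also report which receiver a given label set would be routed to; the labels below match the test alert above:

# Test routing for a specific label set
docker exec alertmanager amtool config routes test --config.file=/etc/alertmanager/alertmanager.yml severity=critical service=github-ai-search-backend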
If notifications are not arriving:

# Check the Alertmanager logs for delivery errors
docker logs alertmanager

# Validate the configuration syntax
docker exec alertmanager amtool check-config /etc/alertmanager/alertmanager.yml

# Confirm Prometheus is connected to Alertmanager
# In the Prometheus UI: Status -> Targets -> Alertmanagers

If you receive too many notifications, tune the grouping options on the route:
- group_by batches alerts that share the listed labels into a single notification.
- group_wait delays the first notification for a new group so related alerts arrive together.
- group_interval sets the minimum time between updates for an existing group.
- repeat_interval sets how often a still-firing alert is re-sent.
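These options live on the top-level route and can be overridden per child route. A sketch with common starting values (the numbers are illustrative, not recommendations from this config):

route:
  group_by: ['alertname', 'service']  # one notification per alertname+service combination
  group_wait: 30s                     # buffer the first notification of a new group
  group_interval: 5m                  # minimum gap between updates for a group
  repeat_interval: 4h                 # re-send still-firing alerts this often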