A self-hosted webhook ingester for Resend that stores email, contact, and domain events in your database. Built with Next.js for easy deployment to Vercel or your preferred hosting platform. Learn more about storing webhook data.
Or use Docker: `docker pull ghcr.io/resend/resend-webhooks-ingester`
Contents:

- Features
- Supported Databases
- Supported Event Types
- Quick Start
- Database Setup
- Running Locally
- Development & Testing
- Deployment
- Configuring Resend Webhooks
- API Reference
- Data Retention
- Troubleshooting
## Features

- Receives and verifies Resend webhooks using Svix signatures
- Stores all webhook events in your database (append-only log)
- Supports all Resend event types: emails, contacts, and domains
- Idempotent event storage (duplicate webhooks are safely ignored)
- Type-safe with full TypeScript support
- Multiple database connectors available
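For reference, the signature verification in the first feature above boils down to a few lines with the `svix` npm package. A minimal sketch; the repository's actual implementation lives in `src/lib/verify-webhook.ts` and may differ in detail:

```ts
import { Webhook } from "svix";

// Verify the raw request body against the svix-* headers.
// Throws if the signature is invalid, so callers can map that to a 401.
export function verifyWebhook(rawBody: string, headers: Headers): unknown {
  const wh = new Webhook(process.env.RESEND_WEBHOOK_SECRET!);
  return wh.verify(rawBody, {
    "svix-id": headers.get("svix-id") ?? "",
    "svix-timestamp": headers.get("svix-timestamp") ?? "",
    "svix-signature": headers.get("svix-signature") ?? "",
  });
}
```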
## Supported Databases

| Connector | Endpoint | Best For |
|---|---|---|
| Supabase | `/supabase` | Quick setup with managed Postgres |
| PostgreSQL | `/postgresql` | Self-hosted or managed Postgres (Neon, Railway, Render) |
| Neon | `/neon` | Serverless environments (Vercel, Netlify, Cloudflare) |
| MySQL | `/mysql` | Self-hosted or managed MySQL |
| PlanetScale Postgres | `/postgresql` | Serverless Postgres |
| PlanetScale MySQL | `/planetscale` | Serverless MySQL |
| MongoDB | `/mongodb` | Document database (Atlas, self-hosted) |
| Snowflake | `/snowflake` | Data warehousing and analytics |
| BigQuery | `/bigquery` | Google Cloud analytics |
| ClickHouse | `/clickhouse` | High-performance analytics |
## Supported Event Types

### Email Events

| Event | Description |
|---|---|
| `email.sent` | Email accepted by Resend, delivery attempted |
| `email.delivered` | Email successfully delivered to recipient |
| `email.delivery_delayed` | Temporary delivery issue |
| `email.bounced` | Email permanently rejected |
| `email.complained` | Recipient marked email as spam |
| `email.opened` | Recipient opened the email |
| `email.clicked` | Recipient clicked a link in the email |
| `email.failed` | Email failed to send |
| `email.scheduled` | Email scheduled for future delivery |
| `email.suppressed` | Email suppressed by Resend |
| `email.received` | Inbound email received |
### Contact Events

| Event | Description |
|---|---|
| `contact.created` | Contact added to an audience |
| `contact.updated` | Contact information updated |
| `contact.deleted` | Contact removed from an audience |
### Domain Events

| Event | Description |
|---|---|
| `domain.created` | Domain added to Resend |
| `domain.updated` | Domain configuration updated |
| `domain.deleted` | Domain removed from Resend |
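All three event families share the same envelope: a `type` discriminator, a creation timestamp, and an event-specific `data` object. The full definitions live in `src/types/webhook.ts`; an illustrative, simplified shape:

```ts
// Simplified envelope shared by all Resend webhook payloads.
// The full, per-event types live in src/types/webhook.ts.
type ResendEventType =
  | `email.${"sent" | "delivered" | "delivery_delayed" | "bounced" | "complained" | "opened" | "clicked" | "failed" | "scheduled" | "suppressed" | "received"}`
  | `contact.${"created" | "updated" | "deleted"}`
  | `domain.${"created" | "updated" | "deleted"}`;

interface ResendWebhookEvent {
  type: ResendEventType;          // discriminator, e.g. "email.delivered"
  created_at: string;             // ISO 8601 timestamp of the event
  data: Record<string, unknown>;  // event-specific payload
}
```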
## Quick Start

Clone the repository and install dependencies:

```bash
git clone https://github.com/resend/resend-webhooks-ingester.git
cd resend-webhooks-ingester
pnpm install
```

Copy the example environment file:

```bash
cp .env.example .env.local
```

Edit `.env.local` with your Resend webhook secret and database credentials (see Database Setup).
Run the appropriate schema file for your database from the `schemas/` directory.
Deploy to Vercel (or your preferred platform) and configure your webhook endpoint in the Resend Dashboard.
## Database Setup

### Supabase

Environment Variables:

```
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
```

Schema: Run `schemas/supabase.sql` in the Supabase SQL Editor.

Endpoint: `POST /supabase`
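A sketch of how a connector can write events with `@supabase/supabase-js`; column names here are assumptions, the shipped `schemas/supabase.sql` is authoritative. The service role key bypasses RLS, which is why it must stay server-side:

```ts
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!, // bypasses RLS; server-side only
);

// ignoreDuplicates makes redelivered webhooks (same svix_id) no-ops,
// matching the append-only, idempotent storage model.
const { error } = await supabase.from("resend_wh_emails").upsert(
  { svix_id: "msg_123", event_type: "email.delivered", payload: {} },
  { onConflict: "svix_id", ignoreDuplicates: true },
);
if (error) throw error;
```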
### PostgreSQL

Works with any PostgreSQL database: self-hosted, Neon, Railway, Render, etc.

Environment Variables:

```
POSTGRESQL_URL=postgresql://user:password@host:5432/database
```

Schema: Run `schemas/postgresql.sql` in your database.

Endpoint: `POST /postgresql`
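The idempotent storage noted under Features maps naturally onto a unique index plus `ON CONFLICT DO NOTHING`. A sketch with the `pg` driver (column names assumed):

```ts
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.POSTGRESQL_URL });

// Duplicate deliveries of the same webhook share a svix-id, so a unique
// index on that column makes re-inserts silent no-ops.
export async function storeEvent(svixId: string, type: string, payload: object) {
  await pool.query(
    `INSERT INTO resend_wh_emails (svix_id, event_type, payload)
     VALUES ($1, $2, $3)
     ON CONFLICT (svix_id) DO NOTHING`,
    [svixId, type, JSON.stringify(payload)],
  );
}
```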
### Neon

Recommended for serverless environments (Vercel, Netlify, Cloudflare). For long-running servers, use the PostgreSQL connector instead.

Environment Variables:

```
NEON_DATABASE_URL=postgresql://user:password@ep-xyz.us-east-1.aws.neon.tech/database?sslmode=require
```

Schema: Run `schemas/postgresql.sql` in your Neon database.

Endpoint: `POST /neon`
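The separate endpoint exists because `@neondatabase/serverless` issues each query over HTTP rather than holding a TCP connection, which suits short-lived serverless invocations. A sketch, reusing the assumed columns from the PostgreSQL example:

```ts
import { neon } from "@neondatabase/serverless";

// Each query is a stateless HTTP request: no pooled TCP connection
// to leak between serverless invocations.
const sql = neon(process.env.NEON_DATABASE_URL!);

export async function storeEvent(svixId: string, type: string, payload: object) {
  await sql`
    INSERT INTO resend_wh_emails (svix_id, event_type, payload)
    VALUES (${svixId}, ${type}, ${JSON.stringify(payload)})
    ON CONFLICT (svix_id) DO NOTHING`;
}
```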
### MySQL

Environment Variables:

```
MYSQL_URL=mysql://user:password@host:3306/database
```

Schema: Run `schemas/mysql.sql` in your database.

Endpoint: `POST /mysql`
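MySQL has no `ON CONFLICT` clause; `INSERT IGNORE` against a unique key gives the same idempotent behavior. A sketch with `mysql2` (column names assumed):

```ts
import mysql from "mysql2/promise";

// createConnection accepts the same mysql:// URI shown above.
const conn = await mysql.createConnection(process.env.MYSQL_URL!);

// INSERT IGNORE skips rows that would violate the unique svix_id key,
// so redelivered webhooks are dropped instead of raising an error.
export async function storeEvent(svixId: string, type: string, payload: object) {
  await conn.execute(
    `INSERT IGNORE INTO resend_wh_emails (svix_id, event_type, payload)
     VALUES (?, ?, ?)`,
    [svixId, type, JSON.stringify(payload)],
  );
}
```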
### PlanetScale Postgres

Environment Variables:

```
POSTGRESQL_URL=postgresql://username:password@host:5432/postgres?sslmode=verify-full
```

Get your connection string from the PlanetScale dashboard under Connect > Create role.

Schema: Run `schemas/postgresql.sql` in your PlanetScale database.

Endpoint: `POST /postgresql`
### PlanetScale MySQL

Environment Variables:

```
PLANETSCALE_URL=mysql://username:password@host/database?ssl={"rejectUnauthorized":true}
```

Get your connection string from the PlanetScale dashboard under Connect > Create password.

Schema: Run `schemas/mysql.sql` in your PlanetScale database.

Endpoint: `POST /planetscale`
### MongoDB

Works with MongoDB Atlas, self-hosted MongoDB, or any MongoDB-compatible database.

Environment Variables:

```
MONGODB_URI=mongodb+srv://username:password@cluster.mongodb.net/?retryWrites=true&w=majority
MONGODB_DATABASE=resend_webhooks
```

Get your connection string from your MongoDB Atlas dashboard, or construct it for your self-hosted instance.

Schema: Run `schemas/mongodb.js` using mongosh:

```bash
mongosh "your-connection-string" schemas/mongodb.js
```

Or execute the commands manually in MongoDB Compass or Atlas.

Endpoint: `POST /mongodb`
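For a document store, idempotency can be an upsert keyed on the svix message ID: `$setOnInsert` writes the document only the first time that ID is seen. A sketch with the Node.js driver (field names assumed):

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI!);
const db = client.db(process.env.MONGODB_DATABASE);

// A redelivered webhook matches the filter on svix_id and changes
// nothing; only the first delivery inserts a document.
export async function storeEvent(svixId: string, type: string, payload: object) {
  await db.collection("resend_wh_emails").updateOne(
    { svix_id: svixId },
    { $setOnInsert: { event_type: type, payload, event_created_at: new Date() } },
    { upsert: true },
  );
}
```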
### Snowflake

Environment Variables:

```
SNOWFLAKE_ACCOUNT=your-account-identifier
SNOWFLAKE_USERNAME=your-username
SNOWFLAKE_PASSWORD=your-password
SNOWFLAKE_DATABASE=your-database
SNOWFLAKE_SCHEMA=your-schema
SNOWFLAKE_WAREHOUSE=your-warehouse
```

Schema: Run `schemas/snowflake.sql` in a Snowflake worksheet.

Endpoint: `POST /snowflake`
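A sketch with the `snowflake-sdk` driver, which is callback-based; column names and types are assumptions, the shipped `schemas/snowflake.sql` is authoritative:

```ts
import snowflake from "snowflake-sdk";

const connection = snowflake.createConnection({
  account: process.env.SNOWFLAKE_ACCOUNT!,
  username: process.env.SNOWFLAKE_USERNAME!,
  password: process.env.SNOWFLAKE_PASSWORD!,
  database: process.env.SNOWFLAKE_DATABASE!,
  schema: process.env.SNOWFLAKE_SCHEMA!,
  warehouse: process.env.SNOWFLAKE_WAREHOUSE!,
});

// Connect once at startup; wrap the callback API in promises.
await new Promise<void>((resolve, reject) =>
  connection.connect((err) => (err ? reject(err) : resolve())),
);

export function storeEvent(svixId: string, type: string, payload: object) {
  return new Promise<void>((resolve, reject) => {
    connection.execute({
      sqlText: `INSERT INTO resend_wh_emails (svix_id, event_type, payload)
                VALUES (?, ?, ?)`,
      binds: [svixId, type, JSON.stringify(payload)],
      complete: (err) => (err ? reject(err) : resolve()),
    });
  });
}
```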
### BigQuery

Environment Variables:

```
BIGQUERY_PROJECT_ID=your-project-id
BIGQUERY_DATASET_ID=your-dataset-id
# Optional: Service account credentials as JSON string
BIGQUERY_CREDENTIALS={"type":"service_account","project_id":"..."}
```

If running on Google Cloud (Cloud Run, GKE), you can omit `BIGQUERY_CREDENTIALS` and use Application Default Credentials.

Schema: Run `schemas/bigquery.sql` in the BigQuery console (replace `YOUR_DATASET` with your dataset ID).

Endpoint: `POST /bigquery`
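A sketch of a streaming insert with `@google-cloud/bigquery`, falling back to Application Default Credentials when `BIGQUERY_CREDENTIALS` is unset (field names assumed):

```ts
import { BigQuery } from "@google-cloud/bigquery";

// With no explicit credentials, the client uses Application Default
// Credentials (e.g. the service account attached to Cloud Run or GKE).
const bigquery = new BigQuery({
  projectId: process.env.BIGQUERY_PROJECT_ID,
  credentials: process.env.BIGQUERY_CREDENTIALS
    ? JSON.parse(process.env.BIGQUERY_CREDENTIALS)
    : undefined,
});

export async function storeEvent(svixId: string, type: string, payload: object) {
  await bigquery
    .dataset(process.env.BIGQUERY_DATASET_ID!)
    .table("resend_wh_emails")
    .insert([{ svix_id: svixId, event_type: type, payload: JSON.stringify(payload) }]);
}
```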
### ClickHouse

Environment Variables:

```
CLICKHOUSE_URL=https://your-instance.clickhouse.cloud:8443
CLICKHOUSE_USERNAME=default
CLICKHOUSE_PASSWORD=your-password
CLICKHOUSE_DATABASE=default
```

Schema: Run `schemas/clickhouse.sql` in your ClickHouse client.

Endpoint: `POST /clickhouse`
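A sketch of an insert with `@clickhouse/client` using the `JSONEachRow` format (field names assumed):

```ts
import { createClient } from "@clickhouse/client";

const clickhouse = createClient({
  url: process.env.CLICKHOUSE_URL,
  username: process.env.CLICKHOUSE_USERNAME,
  password: process.env.CLICKHOUSE_PASSWORD,
  database: process.env.CLICKHOUSE_DATABASE,
});

// JSONEachRow streams one JSON object per row into the table.
export async function storeEvent(svixId: string, type: string, payload: object) {
  await clickhouse.insert({
    table: "resend_wh_emails",
    values: [{ svix_id: svixId, event_type: type, payload: JSON.stringify(payload) }],
    format: "JSONEachRow",
  });
}
```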
## Running Locally

Start the development server:

```bash
pnpm dev
```

The webhook endpoints will be available at `http://localhost:3000/{connector}`.

For local testing, expose your server using ngrok:

```bash
ngrok http 3000
```

Use the ngrok URL (e.g., `https://abc123.ngrok.io/supabase`) as your webhook endpoint in Resend.
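You can also exercise an endpoint without real traffic by signing a synthetic event yourself, the same approach the test helpers in `tests/helpers/svix.ts` take. A sketch using the `svix` package's `sign()` method (the payload fields are illustrative):

```ts
import { Webhook } from "svix";

const secret = process.env.RESEND_WEBHOOK_SECRET!;
const body = JSON.stringify({
  type: "email.delivered",
  created_at: new Date().toISOString(),
  data: { email_id: "test-123" }, // illustrative payload
});

// Sign the body exactly as Resend/Svix would.
const msgId = `msg_${Date.now()}`;
const timestamp = new Date();
const signature = new Webhook(secret).sign(msgId, timestamp, body);

const res = await fetch("http://localhost:3000/postgresql", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    "svix-id": msgId,
    "svix-timestamp": Math.floor(timestamp.getTime() / 1000).toString(),
    "svix-signature": signature,
  },
  body,
});
console.log(res.status); // expect 200
```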
## Development & Testing

The project includes integration tests for MongoDB, PostgreSQL, MySQL, and ClickHouse that run with Docker.
Cloud-only connectors (require real accounts, not run in CI):

- Supabase - requires real Supabase project credentials
- Neon - requires a Neon database and connection string
- PlanetScale - requires a real PlanetScale account (uses the same schema as MySQL)
- Snowflake - requires a real Snowflake account
- BigQuery - requires a real GCP project
To run the integration tests:

1. Start databases with Docker Compose:

   ```bash
   docker compose up -d
   ```

2. Apply schemas to the test databases:

   ```bash
   pnpm db:setup
   ```

3. Start the dev server with the test environment:

   ```bash
   pnpm dev:test
   ```

4. Run the tests (in another terminal):

   ```bash
   pnpm test
   ```

Or run tests for a specific connector:

```bash
pnpm test:mongodb
pnpm test:supabase
pnpm test:postgresql
pnpm test:neon
pnpm test:mysql
pnpm test:clickhouse
```

Tests use `.env.test` for configuration. The `dev:test` script loads this file automatically via `dotenv-cli`.
To test cloud connectors like Supabase:

1. Add your credentials to `.env.test`:

   ```
   SUPABASE_URL=https://your-project.supabase.co
   SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
   SUPABASE_DB_URL=postgresql://postgres:password@db.your-project.supabase.co:5432/postgres
   ```

2. Run the schema setup (from the Supabase SQL Editor or via the CLI):

   ```bash
   pnpm db:setup --supabase
   ```

3. Run the tests:

   ```bash
   pnpm test:supabase
   ```

## Deployment

### Docker

Pull the image from GitHub Container Registry:
```bash
docker pull ghcr.io/resend/resend-webhooks-ingester:latest
```

Run with environment variables:

```bash
docker run -p 3000:3000 \
  -e RESEND_WEBHOOK_SECRET=whsec_your_secret \
  -e MONGODB_URI=mongodb://host:27017 \
  -e MONGODB_DATABASE=resend_webhooks \
  ghcr.io/resend/resend-webhooks-ingester:latest
```

Or build locally:

```bash
docker build -t resend-webhooks-ingester .
docker run -p 3000:3000 -e ... resend-webhooks-ingester
```

### Vercel

Use the deploy button above, or:
1. Push your code to GitHub
2. Import the repository in Vercel
3. Add environment variables:
   - `RESEND_WEBHOOK_SECRET` (required)
   - database-specific variables for your chosen connector
4. Deploy

Your webhook endpoint: `https://your-project.vercel.app/{connector}`
### Other Platforms

This is a standard Next.js application:

- Netlify: Use the Next.js runtime
- Railway: Deploy directly from GitHub or use the deploy button above
- Render: Use the deploy button above or connect your repo
- Fly.io: Use the Dockerfile
- Google Cloud Run: Build and deploy the container
- Self-hosted: Use Docker or `pnpm build && pnpm start`
## Configuring Resend Webhooks

1. Go to your Resend Dashboard
2. Click Add Webhook
3. Enter your webhook endpoint URL (e.g., `https://your-domain.com/supabase`)
4. Select the events you want to receive
5. Click Create
6. Copy the Signing Secret and add it as `RESEND_WEBHOOK_SECRET`
## Project Structure

```
src/
├── app/
│   ├── page.tsx              # Empty root page
│   ├── supabase/route.ts     # Supabase connector
│   ├── postgresql/route.ts   # PostgreSQL connector
│   ├── neon/route.ts         # Neon serverless connector
│   ├── mysql/route.ts        # MySQL connector
│   ├── planetscale/route.ts  # PlanetScale connector
│   ├── mongodb/route.ts      # MongoDB connector
│   ├── snowflake/route.ts    # Snowflake connector
│   ├── bigquery/route.ts     # BigQuery connector
│   └── clickhouse/route.ts   # ClickHouse connector
├── lib/
│   ├── verify-webhook.ts     # Svix signature verification
│   └── webhook-handler.ts    # Shared webhook handling logic
├── types/
│   └── webhook.ts            # TypeScript types for webhook payloads
└── env.d.ts                  # Environment variable types

schemas/
├── supabase.sql              # Supabase/PostgreSQL schema
├── postgresql.sql            # PostgreSQL schema
├── mysql.sql                 # MySQL/PlanetScale schema
├── mongodb.js                # MongoDB schema and indexes
├── snowflake.sql             # Snowflake schema
├── bigquery.sql              # BigQuery schema
└── clickhouse.sql            # ClickHouse schema

tests/
├── setup.ts                  # Test configuration
├── helpers/
│   ├── svix.ts               # Webhook signature generation
│   ├── fixtures.ts           # Sample event payloads
│   ├── db-clients.ts         # DB clients for assertions
│   └── test-factory.ts       # Shared test cases
└── integration/
    ├── mongodb.test.ts
    ├── supabase.test.ts
    ├── postgresql.test.ts
    ├── mysql.test.ts
    └── clickhouse.test.ts
```
## API Reference

All connectors share the same API:

### POST /{connector}

Receives and stores Resend webhook events.

Required Headers:

- `svix-id`: Webhook message ID
- `svix-timestamp`: Unix timestamp
- `svix-signature`: HMAC signature
Responses:

| Status | Description |
|---|---|
| `200` | Webhook processed successfully |
| `400` | Missing headers or unknown event type |
| `401` | Invalid webhook signature |
| `500` | Server error (triggers Resend retry) |
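These statuses come from the shared handler in `src/lib/webhook-handler.ts`. A condensed sketch of that flow; the names and signatures here are assumptions, not the file's actual exports:

```ts
import { NextRequest, NextResponse } from "next/server";
import { verifyWebhook } from "@/lib/verify-webhook"; // assumed export

// Shared flow each connector route delegates to; `store` is the
// connector-specific, idempotent insert.
export async function handleWebhook(
  req: NextRequest,
  store: (event: { type: string }) => Promise<void>,
): Promise<NextResponse> {
  const rawBody = await req.text(); // raw body is required for verification

  let event: { type: string };
  try {
    event = verifyWebhook(rawBody, req.headers) as { type: string };
  } catch {
    return NextResponse.json({ error: "invalid signature" }, { status: 401 });
  }

  if (!/^(email|contact|domain)\./.test(event.type)) {
    return NextResponse.json({ error: "unknown event type" }, { status: 400 });
  }

  try {
    await store(event);
    return NextResponse.json({ ok: true }, { status: 200 });
  } catch {
    // A 500 makes Resend retry delivery later.
    return NextResponse.json({ error: "storage failed" }, { status: 500 });
  }
}
```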
## Security

- Always verify webhook signatures - the ingester rejects requests with invalid signatures
- Use environment variables - never commit secrets to your repository
- Use service role keys carefully - keys that bypass RLS should only be used server-side
- HTTPS only - always use HTTPS in production
## Data Retention

By default, webhook events are stored indefinitely. This gives you complete historical data for analytics and auditing.
If you need to limit data retention, you can set up scheduled jobs to delete old events. Below are example queries to delete events older than a specified number of days.
### PostgreSQL

```sql
-- Delete email events older than 90 days
DELETE FROM resend_wh_emails
WHERE event_created_at < NOW() - INTERVAL '90 days';

-- Delete contact events older than 90 days
DELETE FROM resend_wh_contacts
WHERE event_created_at < NOW() - INTERVAL '90 days';

-- Delete domain events older than 90 days
DELETE FROM resend_wh_domains
WHERE event_created_at < NOW() - INTERVAL '90 days';
```

For Supabase, you can use pg_cron to schedule these queries.
### MySQL

```sql
-- Delete email events older than 90 days
DELETE FROM resend_wh_emails
WHERE event_created_at < DATE_SUB(NOW(), INTERVAL 90 DAY);

-- Delete contact events older than 90 days
DELETE FROM resend_wh_contacts
WHERE event_created_at < DATE_SUB(NOW(), INTERVAL 90 DAY);

-- Delete domain events older than 90 days
DELETE FROM resend_wh_domains
WHERE event_created_at < DATE_SUB(NOW(), INTERVAL 90 DAY);
```

### BigQuery

```sql
-- Delete email events older than 90 days
DELETE FROM `your_project.your_dataset.resend_wh_emails`
WHERE event_created_at < TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY);
```

You can also set up partition expiration on your tables.
### Snowflake

```sql
-- Delete email events older than 90 days
DELETE FROM resend_wh_emails
WHERE event_created_at < DATEADD(day, -90, CURRENT_TIMESTAMP());
```

You can use Snowflake Tasks to schedule cleanup.
### ClickHouse

```sql
-- Delete email events older than 90 days
ALTER TABLE resend_wh_emails DELETE
WHERE event_created_at < now() - INTERVAL 90 DAY;
```

Alternatively, use TTL expressions in your table definition for automatic cleanup.
### MongoDB

```js
// Delete email events older than 90 days
db.resend_wh_emails.deleteMany({
  event_created_at: { $lt: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) }
});

// Delete contact events older than 90 days
db.resend_wh_contacts.deleteMany({
  event_created_at: { $lt: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) }
});

// Delete domain events older than 90 days
db.resend_wh_domains.deleteMany({
  event_created_at: { $lt: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) }
});
```

You can also use MongoDB Atlas scheduled triggers, or create a TTL index for automatic expiration, as sketched below.
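For the TTL route, one index per collection is enough; MongoDB's TTL monitor then expires documents on its own (it runs roughly once a minute). A sketch with the Node.js driver, using a 90-day window:

```ts
import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI!);
const db = client.db(process.env.MONGODB_DATABASE);

// Documents are deleted once event_created_at is more than
// expireAfterSeconds in the past.
for (const name of ["resend_wh_emails", "resend_wh_contacts", "resend_wh_domains"]) {
  await db.collection(name).createIndex(
    { event_created_at: 1 },
    { expireAfterSeconds: 90 * 24 * 60 * 60 },
  );
}
await client.close();
```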
See `queries_examples.md` for useful analytics queries, including:
- Email status counts by day
- Bounce rates
- Open rates
- Click-through rates
- Contact growth tracking
- Most clicked links
## Troubleshooting

### Webhooks not arriving

- Verify your endpoint URL is correct in Resend
- Check that your server is publicly accessible
- Ensure `RESEND_WEBHOOK_SECRET` matches the signing secret in Resend
### Signature verification failing

- Make sure you're using the raw request body for verification
- Check that `RESEND_WEBHOOK_SECRET` is set correctly
- Verify the webhook secret hasn't been rotated in Resend
### Database errors

- Verify your database credentials are correct
- Check that the schema has been applied
- Review server logs for specific error messages
### Snowflake connection issues

- Verify your account identifier format (e.g., `xy12345.us-east-1`)
- Ensure the warehouse is running and accessible
- Check that the user has INSERT permissions on the tables
### BigQuery errors

- Verify the service account has the BigQuery Data Editor role
- Ensure the dataset and tables exist
- Check that the project ID is correct
## License

MIT