Resend Webhooks Ingester

A self-hosted webhook ingester for Resend that stores email, contact, and domain events in your database. Built with Next.js for easy deployment to Vercel or your preferred hosting platform. Learn more about storing webhook data in the Resend documentation.

Deploy

Deploy with Vercel · Deploy on Railway · Deploy to Render

Or use Docker: docker pull ghcr.io/resend/resend-webhooks-ingester

Table of Contents

  • Features
  • Supported Databases
  • Supported Event Types
  • Quick Start
  • Database Setup
  • Running Locally
  • Development & Testing
  • Deployment
  • Configuring Resend Webhooks
  • Project Structure
  • API Reference
  • Security Considerations
  • Data Retention
  • Example Queries
  • Troubleshooting
  • License

Features

  • Receives and verifies Resend webhooks using Svix signatures
  • Stores all webhook events in your database (append-only log)
  • Supports all Resend event types: emails, contacts, and domains
  • Idempotent event storage (duplicate webhooks are safely ignored)
  • Type-safe with full TypeScript support
  • Multiple database connectors available

Supported Databases

| Connector | Endpoint | Best For |
|---|---|---|
| Supabase | /supabase | Quick setup with managed Postgres |
| PostgreSQL | /postgresql | Self-hosted or managed Postgres (Neon, Railway, Render) |
| Neon | /neon | Serverless environments (Vercel, Netlify, Cloudflare) |
| MySQL | /mysql | Self-hosted or managed MySQL |
| PlanetScale Postgres | /postgresql | Serverless Postgres |
| PlanetScale MySQL | /planetscale | Serverless MySQL |
| MongoDB | /mongodb | Document database (Atlas, self-hosted) |
| Snowflake | /snowflake | Data warehousing and analytics |
| BigQuery | /bigquery | Google Cloud analytics |
| ClickHouse | /clickhouse | High-performance analytics |

Supported Event Types

Email Events

| Event | Description |
|---|---|
| email.sent | Email accepted by Resend, delivery attempted |
| email.delivered | Email successfully delivered to the recipient |
| email.delivery_delayed | Temporary delivery issue |
| email.bounced | Email permanently rejected |
| email.complained | Recipient marked the email as spam |
| email.opened | Recipient opened the email |
| email.clicked | Recipient clicked a link in the email |
| email.failed | Email failed to send |
| email.scheduled | Email scheduled for future delivery |
| email.suppressed | Email suppressed by Resend |
| email.received | Inbound email received |

Contact Events

| Event | Description |
|---|---|
| contact.created | Contact added to an audience |
| contact.updated | Contact information updated |
| contact.deleted | Contact removed from an audience |

Domain Events

| Event | Description |
|---|---|
| domain.created | Domain added to Resend |
| domain.updated | Domain configuration updated |
| domain.deleted | Domain removed from Resend |
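
For reference, here is the full set of event names as a TypeScript union, together with a simplified event envelope. This is an illustrative sketch only; the repo's src/types/webhook.ts defines the precise per-event payload types.

// All event names from the tables above.
type ResendEventType =
  | "email.sent" | "email.delivered" | "email.delivery_delayed"
  | "email.bounced" | "email.complained" | "email.opened"
  | "email.clicked" | "email.failed" | "email.scheduled"
  | "email.suppressed" | "email.received"
  | "contact.created" | "contact.updated" | "contact.deleted"
  | "domain.created" | "domain.updated" | "domain.deleted";

// Simplified envelope shared by all webhook events (assumption:
// the actual types narrow `data` per event).
interface ResendWebhookEvent {
  type: ResendEventType;
  created_at: string; // ISO 8601 timestamp
  data: Record<string, unknown>;
}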

Quick Start

1. Clone the repository

git clone https://github.com/resend/resend-webhooks-ingester.git
cd resend-webhooks-ingester

2. Install dependencies

pnpm install

3. Set up environment variables

cp .env.example .env.local

Edit .env.local with your Resend webhook secret and database credentials (see Database Setup).

4. Create database tables

Run the appropriate schema file for your database from the schemas/ directory.

5. Deploy and configure webhook

Deploy to Vercel (or your preferred platform) and configure your webhook endpoint in the Resend Dashboard.

Database Setup

Supabase

Environment Variables:

SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

Schema: Run schemas/supabase.sql in the Supabase SQL Editor.

Endpoint: POST /supabase


PostgreSQL

Works with any PostgreSQL database: self-hosted, Neon, Railway, Render, etc.

Environment Variables:

POSTGRESQL_URL=postgresql://user:password@host:5432/database

Schema: Run schemas/postgresql.sql in your database.

Endpoint: POST /postgresql


Neon

Recommended for serverless environments (Vercel, Netlify, Cloudflare). For long-running servers, use the PostgreSQL connector instead.

Environment Variables:

NEON_DATABASE_URL=postgresql://user:password@ep-xyz.us-east-1.aws.neon.tech/database?sslmode=require

Schema: Run schemas/postgresql.sql in your Neon database.

Endpoint: POST /neon


MySQL

Environment Variables:

MYSQL_URL=mysql://user:password@host:3306/database

Schema: Run schemas/mysql.sql in your database.

Endpoint: POST /mysql


PlanetScale Postgres

Environment Variables:

POSTGRESQL_URL=postgresql://username:password@host:5432/postgres?sslmode=verify-full

Get your connection string from the PlanetScale dashboard under Connect > Create role.

Schema: Run schemas/postgresql.sql in your PlanetScale database.

Endpoint: POST /postgresql


PlanetScale MySQL

Environment Variables:

PLANETSCALE_URL=mysql://username:password@host/database?ssl={"rejectUnauthorized":true}

Get your connection string from the PlanetScale dashboard under Connect > Create password.

Schema: Run schemas/mysql.sql in your PlanetScale database.

Endpoint: POST /planetscale


MongoDB

Works with MongoDB Atlas, self-hosted MongoDB, or any MongoDB-compatible database.

Environment Variables:

MONGODB_URI=mongodb+srv://username:password@cluster.mongodb.net/?retryWrites=true&w=majority
MONGODB_DATABASE=resend_webhooks

Get your connection string from your MongoDB Atlas dashboard or construct it for your self-hosted instance.

Schema: Run schemas/mongodb.js using mongosh:

mongosh "your-connection-string" schemas/mongodb.js

Or execute the commands manually in MongoDB Compass or Atlas.

Endpoint: POST /mongodb


Snowflake

Environment Variables:

SNOWFLAKE_ACCOUNT=your-account-identifier
SNOWFLAKE_USERNAME=your-username
SNOWFLAKE_PASSWORD=your-password
SNOWFLAKE_DATABASE=your-database
SNOWFLAKE_SCHEMA=your-schema
SNOWFLAKE_WAREHOUSE=your-warehouse

Schema: Run schemas/snowflake.sql in a Snowflake worksheet.

Endpoint: POST /snowflake


BigQuery

Environment Variables:

BIGQUERY_PROJECT_ID=your-project-id
BIGQUERY_DATASET_ID=your-dataset-id
# Optional: Service account credentials as JSON string
BIGQUERY_CREDENTIALS={"type":"service_account","project_id":"..."}

If running on Google Cloud (Cloud Run, GKE), you can omit BIGQUERY_CREDENTIALS and use Application Default Credentials.

Schema: Run schemas/bigquery.sql in the BigQuery console (replace YOUR_DATASET with your dataset ID).

Endpoint: POST /bigquery


ClickHouse

Environment Variables:

CLICKHOUSE_URL=https://your-instance.clickhouse.cloud:8443
CLICKHOUSE_USERNAME=default
CLICKHOUSE_PASSWORD=your-password
CLICKHOUSE_DATABASE=default

Schema: Run schemas/clickhouse.sql in your ClickHouse client.

Endpoint: POST /clickhouse


Running Locally

Start the development server:

pnpm dev

The webhook endpoints will be available at http://localhost:3000/{connector}.

For local testing, expose your server using ngrok:

ngrok http 3000

Use the ngrok URL (e.g., https://abc123.ngrok.io/supabase) as your webhook endpoint in Resend.

Development & Testing

Running Tests Locally

The project includes integration tests for MongoDB, PostgreSQL, MySQL, and ClickHouse that run with Docker.

Cloud-only connectors (these require real accounts and are not run in CI):

  • Supabase - Requires real Supabase project credentials
  • Neon - Requires a Neon database and connection string
  • PlanetScale - Requires a real PlanetScale account (uses the same schema as MySQL)
  • Snowflake - Requires a real Snowflake account
  • BigQuery - Requires a real GCP project

1. Start databases with Docker Compose:

docker compose up -d

2. Apply schemas to test databases:

pnpm db:setup

3. Start the dev server with test environment:

pnpm dev:test

4. Run tests (in another terminal):

pnpm test

Or run tests for a specific connector:

pnpm test:mongodb
pnpm test:supabase
pnpm test:postgresql
pnpm test:neon
pnpm test:mysql
pnpm test:clickhouse

Test Environment

Tests use .env.test for configuration. The dev:test script loads this file automatically via dotenv-cli.

Testing Cloud Connectors (Supabase, PlanetScale, etc.)

To test cloud connectors like Supabase:

  1. Add your credentials to .env.test:

     SUPABASE_URL=https://your-project.supabase.co
     SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
     SUPABASE_DB_URL=postgresql://postgres:password@db.your-project.supabase.co:5432/postgres

  2. Run the schema setup (from the Supabase SQL Editor or via the CLI):

     pnpm db:setup --supabase

  3. Run the tests:

     pnpm test:supabase

Deployment

Docker

Pull the image from GitHub Container Registry:

docker pull ghcr.io/resend/resend-webhooks-ingester:latest

Run with environment variables:

docker run -p 3000:3000 \
  -e RESEND_WEBHOOK_SECRET=whsec_your_secret \
  -e MONGODB_URI=mongodb://host:27017 \
  -e MONGODB_DATABASE=resend_webhooks \
  ghcr.io/resend/resend-webhooks-ingester:latest

Or build locally:

docker build -t resend-webhooks-ingester .
docker run -p 3000:3000 -e ... resend-webhooks-ingester

Vercel

Use the deploy button above, or:

  1. Push your code to GitHub
  2. Import the repository in Vercel
  3. Add environment variables:
    • RESEND_WEBHOOK_SECRET (required)
    • Database-specific variables for your chosen connector
  4. Deploy

Your webhook endpoint: https://your-project.vercel.app/{connector}

Other Platforms

This is a standard Next.js application:

  • Netlify: Use the Next.js runtime
  • Railway: Deploy directly from GitHub or use the deploy button above
  • Render: Use the deploy button above or connect your repo
  • Fly.io: Use the Dockerfile
  • Google Cloud Run: Build and deploy container
  • Self-hosted: Use Docker or pnpm build && pnpm start

Configuring Resend Webhooks

  1. Go to your Resend Dashboard
  2. Click Add Webhook
  3. Enter your webhook endpoint URL (e.g., https://your-domain.com/supabase)
  4. Select the events you want to receive
  5. Click Create
  6. Copy the Signing Secret and add it as RESEND_WEBHOOK_SECRET

Project Structure

src/
├── app/
│   ├── page.tsx              # Empty root page
│   ├── supabase/route.ts     # Supabase connector
│   ├── postgresql/route.ts   # PostgreSQL connector
│   ├── neon/route.ts         # Neon serverless connector
│   ├── mysql/route.ts        # MySQL connector
│   ├── planetscale/route.ts  # PlanetScale connector
│   ├── mongodb/route.ts      # MongoDB connector
│   ├── snowflake/route.ts    # Snowflake connector
│   ├── bigquery/route.ts     # BigQuery connector
│   └── clickhouse/route.ts   # ClickHouse connector
├── lib/
│   ├── verify-webhook.ts     # Svix signature verification
│   └── webhook-handler.ts    # Shared webhook handling logic
├── types/
│   └── webhook.ts            # TypeScript types for webhook payloads
└── env.d.ts                  # Environment variable types

schemas/
├── supabase.sql              # Supabase/PostgreSQL schema
├── postgresql.sql            # PostgreSQL schema
├── mysql.sql                 # MySQL/PlanetScale schema
├── mongodb.js                # MongoDB schema and indexes
├── snowflake.sql             # Snowflake schema
├── bigquery.sql              # BigQuery schema
└── clickhouse.sql            # ClickHouse schema

tests/
├── setup.ts                  # Test configuration
├── helpers/
│   ├── svix.ts               # Webhook signature generation
│   ├── fixtures.ts           # Sample event payloads
│   ├── db-clients.ts         # DB clients for assertions
│   └── test-factory.ts       # Shared test cases
└── integration/
    ├── mongodb.test.ts
    ├── supabase.test.ts
    ├── postgresql.test.ts
    ├── mysql.test.ts
    └── clickhouse.test.ts
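
The env.d.ts file gives the environment variables compile-time types. A minimal sketch of the pattern, with variable names taken from the setup sections above (the actual file covers every connector):

// Augment ProcessEnv so process.env access is type-checked.
// Sketch only; the real env.d.ts declares all connector variables.
declare namespace NodeJS {
  interface ProcessEnv {
    RESEND_WEBHOOK_SECRET: string;
    POSTGRESQL_URL?: string;
    MONGODB_URI?: string;
    MONGODB_DATABASE?: string;
  }
}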

API Reference

All connectors share the same API:

POST /{connector}

Receives and stores Resend webhook events.

Required Headers:

  • svix-id: Webhook message ID
  • svix-timestamp: Unix timestamp
  • svix-signature: HMAC signature

Responses:

| Status | Description |
|---|---|
| 200 | Webhook processed successfully |
| 400 | Missing headers or unknown event type |
| 401 | Invalid webhook signature |
| 500 | Server error (triggers a Resend retry) |
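
To exercise an endpoint without routing traffic through Resend, you can sign a test payload yourself. A hedged sketch using the svix package (the same approach as tests/helpers/svix.ts); the secret, endpoint, and payload below are placeholders:

import { Webhook } from "svix";

// Placeholder secret and payload; substitute your own values.
const secret = "whsec_your_secret";
const payload = JSON.stringify({
  type: "email.sent",
  created_at: new Date().toISOString(),
  data: { email_id: "test-id" },
});

const msgId = "msg_test_1";
const timestamp = new Date();
const signature = new Webhook(secret).sign(msgId, timestamp, payload);

const res = await fetch("http://localhost:3000/postgresql", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    "svix-id": msgId,
    "svix-timestamp": Math.floor(timestamp.getTime() / 1000).toString(),
    "svix-signature": signature,
  },
  body: payload,
});
console.log(res.status); // expect 200, or 401 if the secret does not match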

Security Considerations

  • Always verify webhook signatures - The ingester rejects requests with invalid signatures (see the sketch after this list)
  • Use environment variables - Never commit secrets to your repository
  • Use service role keys carefully - Keys that bypass RLS should only be used server-side
  • HTTPS only - Always use HTTPS in production
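
As a concrete illustration of the first two points, here is a minimal sketch of raw-body signature verification in a Next.js route handler. This is illustrative only; the repo's actual implementation lives in src/lib/verify-webhook.ts and the connector routes.

import { Webhook } from "svix";

export async function POST(req: Request) {
  // Verify against the raw body; re-serializing parsed JSON can change
  // the bytes and break the signature check.
  const payload = await req.text();

  const wh = new Webhook(process.env.RESEND_WEBHOOK_SECRET!);
  let event: unknown;
  try {
    event = wh.verify(payload, {
      "svix-id": req.headers.get("svix-id") ?? "",
      "svix-timestamp": req.headers.get("svix-timestamp") ?? "",
      "svix-signature": req.headers.get("svix-signature") ?? "",
    });
  } catch {
    return new Response("invalid signature", { status: 401 });
  }

  // Store `event`, keying on the svix-id header (unique per message) to
  // keep inserts idempotent; return 500 on failure so Resend retries.
  return Response.json({ received: true });
}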

Data Retention

By default, webhook events are stored indefinitely. This gives you complete historical data for analytics and auditing.

If you need to limit data retention, you can set up scheduled jobs to delete old events. Below are example queries to delete events older than a specified number of days.
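
On Vercel, one option is a cron-invoked route that runs the retention queries on a schedule. A sketch for the PostgreSQL connector, assuming a hypothetical /api/cleanup route registered in vercel.json ({"crons": [{"path": "/api/cleanup", "schedule": "0 3 * * *"}]}):

import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.POSTGRESQL_URL });

export async function GET() {
  // Same deletes as the PostgreSQL examples below, run daily by the cron.
  for (const table of ["resend_wh_emails", "resend_wh_contacts", "resend_wh_domains"]) {
    await pool.query(
      `DELETE FROM ${table} WHERE event_created_at < NOW() - INTERVAL '90 days'`
    );
  }
  return new Response("ok");
}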

PostgreSQL / Supabase

-- Delete email events older than 90 days
DELETE FROM resend_wh_emails
WHERE event_created_at < NOW() - INTERVAL '90 days';

-- Delete contact events older than 90 days
DELETE FROM resend_wh_contacts
WHERE event_created_at < NOW() - INTERVAL '90 days';

-- Delete domain events older than 90 days
DELETE FROM resend_wh_domains
WHERE event_created_at < NOW() - INTERVAL '90 days';

For Supabase, you can use pg_cron to schedule these queries.

MySQL / PlanetScale

-- Delete email events older than 90 days
DELETE FROM resend_wh_emails
WHERE event_created_at < DATE_SUB(NOW(), INTERVAL 90 DAY);

-- Delete contact events older than 90 days
DELETE FROM resend_wh_contacts
WHERE event_created_at < DATE_SUB(NOW(), INTERVAL 90 DAY);

-- Delete domain events older than 90 days
DELETE FROM resend_wh_domains
WHERE event_created_at < DATE_SUB(NOW(), INTERVAL 90 DAY);

BigQuery

-- Delete email events older than 90 days
DELETE FROM `your_project.your_dataset.resend_wh_emails`
WHERE event_created_at < TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 90 DAY);

You can also set up partition expiration on your tables.

Snowflake

-- Delete email events older than 90 days
DELETE FROM resend_wh_emails
WHERE event_created_at < DATEADD(day, -90, CURRENT_TIMESTAMP());

You can use Snowflake Tasks to schedule cleanup.

ClickHouse

-- Delete email events older than 90 days
ALTER TABLE resend_wh_emails DELETE
WHERE event_created_at < now() - INTERVAL 90 DAY;

Alternatively, use TTL expressions in your table definition for automatic cleanup.

MongoDB

// Delete email events older than 90 days
db.resend_wh_emails.deleteMany({
  event_created_at: { $lt: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) }
});

// Delete contact events older than 90 days
db.resend_wh_contacts.deleteMany({
  event_created_at: { $lt: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) }
});

// Delete domain events older than 90 days
db.resend_wh_domains.deleteMany({
  event_created_at: { $lt: new Date(Date.now() - 90 * 24 * 60 * 60 * 1000) }
});

You can also use MongoDB Atlas scheduled triggers or create a TTL index for automatic expiration.
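
For the TTL-index route, here is a sketch using the Node.js mongodb driver. Once the index exists, MongoDB expires documents automatically when event_created_at is older than expireAfterSeconds (repeat for the contact and domain collections):

import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI!);
const db = client.db(process.env.MONGODB_DATABASE);

// Expire email events 90 days after their event_created_at value.
await db.collection("resend_wh_emails").createIndex(
  { event_created_at: 1 },
  { expireAfterSeconds: 90 * 24 * 60 * 60 }
);

await client.close();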

Example Queries

See queries_examples.md for useful analytics queries including:

  • Email status counts by day
  • Bounce rates
  • Open rates
  • Click-through rates
  • Contact growth tracking
  • Most clicked links

Troubleshooting

Webhooks not being received

  • Verify your endpoint URL is correct in Resend
  • Check that your server is publicly accessible
  • Ensure RESEND_WEBHOOK_SECRET matches the signing secret in Resend

Signature verification failing

  • Make sure you're using the raw request body for verification
  • Check that RESEND_WEBHOOK_SECRET is set correctly
  • Verify the webhook secret hasn't been rotated in Resend

Database insertion errors

  • Verify your database credentials are correct
  • Check that the schema has been applied
  • Review server logs for specific error messages

Snowflake connection issues

  • Verify your account identifier format (e.g., xy12345.us-east-1)
  • Ensure the warehouse is running and accessible
  • Check that the user has INSERT permissions on the tables

BigQuery errors

  • Verify the service account has BigQuery Data Editor role
  • Ensure the dataset and tables exist
  • Check that the project ID is correct

License

MIT
