Open-source, multi-tenant integration control plane for outbound webhooks, inbound runtime proxying, scheduling, event-source adapters, observability, RBAC, and AI-assisted operations.
If you're building a B2B SaaS platform—especially a growing, multi-tenant one—managing how your system talks to your customers' external systems is a massive pain point.
I faced this exact problem while building integrations for a Healthcare Information Management System (HIMS) provider. I was spending weeks building custom webhook delivery systems, inbound API gateways, rate limiters, and dead-letter queues just so our internal products could integrate reliably with external hospital tools and third-party vendors.
Enterprise integration platforms like MuleSoft or Kong cost a fortune and are incredibly heavy to run. On the other hand, hacking together custom scripts per organization was completely unmaintainable as we scaled.
So, I built precisely what growing multi-tenant companies actually need: a fast, organization-scoped Integration Control Plane.
This system acts as a unified traffic controller. Instead of spending 3-6 months building a reliable webhook and proxy infrastructure from scratch, you can deploy this gateway alongside your MongoDB/MySQL database and instantly have enterprise-grade, multi-tenant integration management. What used to take our team weeks of development per integration now takes minutes.
While there are other webhook services (like Svix) or API Gateways (like Kong), this project occupies a highly valuable middle ground:
- The "All-in-One" Approach: It handles Outbound Webhooks (pushing events), Inbound Proxying (receiving requests), and Scheduled Jobs (batch data fetching) in a single platform. Most competitors only do one.
- AI-Assisted Operations: This is the killer feature. Instead of forcing developers to manually write complex transformation scripts, the gateway uses built-in AI (OpenAI, Anthropic, GLM, Moonshot) to generate JS/JMESPath transformations, analyze vendor API docs, suggest data mappings, and diagnose log failures instantly.
- Deep Multi-Tenancy: Everything is strictly scoped by `orgId` (organization) and supports organizational hierarchies, meaning it is built for B2B scale from day one.
- Built-in Observability: It doesn't just route traffic; it stores execution traces, manages the Dead Letter Queue (DLQ) for auto-retries, and provides a built-in Alert Center.
- Manages integration configurations per organization (`orgId`-scoped).
- Delivers outbound events with retries and dead-letter handling.
- Exposes inbound integration endpoints with auth, transformation, and optional response streaming.
- Supports scheduled integrations and scheduled jobs (cron/interval).
- Tracks logs, execution traces, audit trails, alert center, and daily reporting.
- Includes AI features for transformation generation, documentation analysis, mapping suggestions, diagnostics, and assistant chat.
- Outbound integrations (`/api/v1/outbound-integrations`)
  - CRUD, duplicate, bulk update/delete, test delivery, cURL generation.
  - Script and simple transformation modes.
  - Delivery mode support: `IMMEDIATE`, `DELAYED`, `RECURRING`.
  - Signing secret rotation/removal endpoints.
- Inbound/runtime integrations (`/api/v1/integrations`)
  - Runtime trigger endpoints (`GET/POST/PUT /api/v1/integrations/:type`).
  - Public runtime trigger endpoints (`GET/POST/PUT /api/v1/public/integrations/:type`) with per-integration inbound auth.
  - Inbound auth checks (`NONE`, `API_KEY`, `BEARER`, `BASIC`).
  - Request policy controls:
    - source IP allowlist (`allowedIpCidrs`)
    - browser origin allowlist (`allowedBrowserOrigins`)
    - per-integration rate limiting
  - Outbound auth support in delivery path (`NONE`, `API_KEY`, `BASIC`, `BEARER`, `OAUTH1`, `OAUTH2`, `CUSTOM`, `CUSTOM_HEADERS`).
  - Optional streamed upstream response forwarding.
  - Generic inbound email routing via sender profiles (`ROUTED_EMAIL`) with request-body `from` resolution and default sender fallback.
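A caller hitting a public runtime trigger with `API_KEY` inbound auth might shape its request like this. The `x-api-key` header name and the payload fields are illustrative assumptions — use whatever the integration's inbound auth is actually configured with:

```javascript
// Sketch of a request to a public runtime trigger endpoint.
// Header name and payload fields are assumptions for illustration.
const BASE = 'http://localhost:3545/api/v1/public/integrations';

function buildTriggerRequest(type, apiKey, payload) {
  return {
    url: `${BASE}/${encodeURIComponent(type)}`,
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-api-key': apiKey, // assumed header for API_KEY inbound auth
    },
    body: JSON.stringify(payload),
  };
}

const req = buildTriggerRequest('lab-results', 'org-scoped-key', { patientId: 42 });
console.log(req.url);
// http://localhost:3545/api/v1/public/integrations/lab-results
```

On the server side, this request would then pass through the IP allowlist, origin allowlist, and rate-limit checks listed above before the integration runs.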
- Scheduled integrations (`/api/v1/scheduled-integrations`) for delayed/recurring webhook execution.
- Scheduled jobs (`/api/v1/scheduled-jobs`)
  - Cron and interval scheduling.
  - Manual execute endpoint.
  - Job logs and per-job log detail endpoints.
  - Data source types currently handled: `SQL`, `MONGODB`, `API`.
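A scheduled job pairs a schedule with a data source. A hypothetical payload for `POST /api/v1/scheduled-jobs` — the field names below are invented for illustration, not the gateway's actual schema:

```javascript
// Illustrative scheduled-job definition. Field names are assumptions;
// consult the API docs for the real request schema.
const job = {
  name: 'nightly-census-sync',
  schedule: { type: 'CRON', expression: '0 2 * * *' }, // daily at 02:00
  // one of the documented data source types: SQL, MONGODB, API
  dataSource: { type: 'API', url: 'https://vendor.example.com/census' },
};

console.log(job.schedule.expression);
```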
- Per-org event source configuration (`/api/v1/event-sources`).
- Supported types in code: `mysql`, `kafka`, `http_push`.
- Important:
  - MySQL is optional and should only be configured for organizations that use MySQL as their event source.
  - Global `eventSource.type` can be left empty.
  - The `http_push` adapter is present but currently marked Phase 2 (registered, not a full polling loop yet).
- MySQL safety limits are enforced server-side to prevent overload:
  - Shared pool: `connectionLimit` 1..20, `queueLimit` 0..200
  - Dedicated pool: `connectionLimit` 1..5, `queueLimit` 0..50
  - Source tuning: `pollIntervalMs` 1000..300000, `batchSize` 1..100, `dbTimeoutMs` 1000..120000
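The server-side clamping can be pictured as follows. This is a sketch built from the documented ranges, not the actual implementation:

```javascript
// Clamp MySQL pool settings to the documented safety ranges.
// Ranges come from the limits listed above; the function itself is
// illustrative, not the gateway's real code.
const LIMITS = {
  shared:    { connectionLimit: [1, 20], queueLimit: [0, 200] },
  dedicated: { connectionLimit: [1, 5],  queueLimit: [0, 50] },
};

function clamp(value, [min, max]) {
  return Math.min(max, Math.max(min, value));
}

function sanitizePool(kind, cfg) {
  const l = LIMITS[kind];
  return {
    connectionLimit: clamp(cfg.connectionLimit, l.connectionLimit),
    queueLimit: clamp(cfg.queueLimit, l.queueLimit),
  };
}

console.log(sanitizePool('dedicated', { connectionLimit: 50, queueLimit: 10 }));
// { connectionLimit: 5, queueLimit: 10 }
```

An oversized `connectionLimit` is silently reduced rather than rejected, so one misconfigured organization cannot exhaust the database's connection budget.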
- Delivery logs, execution logs, system logs, event audit, DLQ endpoints.
- System logs support rotated application logs, rotated access logs, and raw process log tail (`/api/v1/system-logs/process-tail`).
- System status supports org-scoped and global admin modes with worker, adapter, process lifecycle, and sender profile visibility.
- Alert center and analytics/dashboard routes.
- Audit logs and user activity tracking.
- Daily report configuration/test/status endpoints.
- Health endpoint: `GET /health`.
- Role/feature permission system with org-scoped access control.
- Built-in roles include `SUPER_ADMIN`, `ADMIN`, `ORG_ADMIN`, `INTEGRATION_EDITOR`, `VIEWER`, `ORG_USER`, `API_KEY`.
- Org context is passed via JWT claims or the `orgId` query param.
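The org-context rule above can be sketched as: the JWT claim wins, with the `orgId` query param as a fallback. This is an illustration of the precedence only, not the actual middleware:

```javascript
// Resolve the effective orgId for a request: prefer the JWT claim,
// fall back to the ?orgId= query parameter (sketch, not real middleware).
function resolveOrgId(jwtClaims, query) {
  return (jwtClaims && jwtClaims.orgId) || query.orgId || null;
}

console.log(resolveOrgId({ orgId: 'org_a' }, { orgId: 'org_b' })); // org_a
console.log(resolveOrgId({}, { orgId: 'org_b' }));                 // org_b
```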
- AI assistant and AI config routes: `/api/v1/ai`, `/api/v1/ai-config`
- Supported providers in code:
- OpenAI
- Anthropic Claude
- Kimi (Moonshot)
- Z.ai (GLM)
- AI operations implemented:
  - `GET /ai/status`, `GET /ai/usage`
  - `POST /ai/generate-transformation`
  - `POST /ai/analyze-documentation`
  - `POST /ai/suggest-mappings`
  - `POST /ai/generate-test-payload`
  - `POST /ai/generate-scheduling-script`
  - `POST /ai/analyze-error`
  - `POST /ai/diagnose-log-fix`
  - `POST /ai/apply-log-fix`
  - `POST /ai/chat`
  - `POST /ai/explain-transformation`
  - AI interactions/log stats endpoints
- AI config operations:
- Get/save org config
- Test provider connection
- Delete API key
- List providers/models
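The transformations these endpoints generate are ordinary JS (or JMESPath). A hand-written equivalent of what `POST /ai/generate-transformation` might produce — all field names below are invented for illustration:

```javascript
// Example of the kind of JS transformation the AI endpoints generate:
// reshape a vendor event into what a downstream system expects.
// Field names are invented for illustration.
function transform(event) {
  return {
    patient_id: event.patient?.id,
    admitted_at: event.admission?.timestamp,
    ward: event.admission?.ward ?? 'UNASSIGNED',
  };
}

const out = transform({
  patient: { id: 7 },
  admission: { timestamp: '2024-01-01T00:00:00Z' },
});
console.log(out.ward); // UNASSIGNED
```

Instead of writing this by hand for every vendor, you describe the source and target shapes and let the gateway generate (and later explain or fix) the mapping.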
```
backend/                 # Express API, workers, adapters, data layer
frontend/                # React + TypeScript admin console
docs/                    # Guides and architecture docs
docker-compose.yml       # Production-ish local stack (Mongo + backend + frontend)
docker-compose.dev.yml   # Dev stack with hot reload
```
- Docker + Docker Compose (for containerized run), or:
- Node.js 16+ for local development (modern LTS still recommended)
- MongoDB 6+ (required)
- Optional, only if your use case needs them:
- MySQL (org-specific event source or scheduled SQL source)
- Kafka (org-specific event source)
- Clone repository.

```sh
git clone https://github.com/varaprasadreddy9676/integration-control-plane.git
cd integration-control-plane
```

- (Optional) Create a root `.env` to override defaults.
Example:

```sh
cat > .env <<'EOF'
API_KEY=change_me_dev_key
JWT_SECRET=change_me_dev_secret
MONGODB_URI=mongodb://mongodb:27017/integration_gateway
MONGODB_DATABASE=integration_gateway
FRONTEND_URL=http://localhost
EOF
```

- Start services.
```sh
docker compose up -d --build
```

- Verify containers and liveness.

```sh
docker compose ps
curl http://localhost:3545/
curl http://localhost:3545/health
```

Notes:

- `GET /` is a liveness check endpoint for container health.
- `GET /health` reports system/dependency health and can return non-200 when dependencies degrade.
- Create first admin user.

```sh
docker compose exec backend node scripts/create-user.js \
  --email admin@example.com \
  --password 'ChangeMe123!' \
  --role ADMIN
```

- Access app.
  - Frontend: `http://localhost/integration-gateway/` (root `http://localhost` also works in the Docker nginx setup)
  - Backend API base: `http://localhost:3545/api/v1`
```sh
cd backend
npm install
```

For local (non-Docker), configure either env vars or `backend/config.json`.

Minimum required:

```sh
MONGODB_URI=mongodb://localhost:27017/integration_gateway
API_KEY=<your api key>
JWT_SECRET=<your jwt secret>
```
Backend env loading order is explicit:

- repository root `.env`
- optional `backend/.env` override

So the backend behaves the same whether you start it from the repo root or from `backend/`.
If you prefer file config:

```sh
cp config.example.json config.json
```

Start:

```sh
npm run dev
```

`npm run dev` uses nodemon with source/config watching only. Generated files under `backend/logs/` are ignored so runtime lifecycle markers and rotated logs do not trigger restart loops.
```sh
cd frontend
npm install
cp .env.example .env
npm run dev
```

Default local values in `frontend/.env`:

- `VITE_API_BASE_URL=http://localhost:3545/api/v1`
- `VITE_API_KEY=<same value as backend security.apiKey>` (used by the frontend API client)
- MongoDB is mandatory. App startup fails without MongoDB.
- MySQL is optional.
- Do not hardcode it unless required.
- Prefer per-org event source config via API/UI.
- Kafka is optional and only needed for orgs configured with Kafka source.
- Global event source default (`eventSource.type`) is optional.
  - Leave empty for fully dynamic per-org source setup.
The frontend has 21 feature modules:
| Module | Description |
|---|---|
| `dashboard` | KPI cards, delivery trends, latency, errors, inbound, scheduled, outbound tabs |
| `integrations` | Outbound integration management, detail view, editor, HMAC signing |
| `inbound-integrations` | Inbound integration configuration and detail pages |
| `system-status` | Runtime health, workers, adapters, lifecycle, and sender profile visibility |
| `scheduled` | Scheduled integration (DELAYED/RECURRING) management |
| `scheduled-jobs` | CRON/interval batch job config with visual cron builder |
| `logs` | Delivery log viewer with advanced filtering and cURL export |
| `events` | Event management, event detail, bulk import |
| `dlq` | Dead letter queue management, bulk retry |
| `lookups` | Lookup table CRUD, import/export via XLSX, statistics |
| `settings` | Event sources, sender profiles, admin request-policy/rate-limit tools |
| `templates` | Reusable integration template library |
| `versions` | Integration version history with diff view |
| `bulk` | Bulk operations (import, export, batch updates) |
| `alert-center` | Alert management, categories, statistics |
| `ai` | AI assistant chat interface with AI-powered suggestions |
| `ai-settings` | Per-org AI provider configuration (OpenAI, Claude, GLM, Kimi) |
| `flowBuilder` | Visual drag-and-drop workflow builder (ReactFlow) |
| `settings` | Org settings, event source settings, MySQL pool settings |
| `admin` | Super admin: org/user management, storage stats, magic link portal embed |
| `system-logs` | System-level logging and debugging |
| `help` | In-app help and lookup guide |
| `landing` | Public landing page |
Mounted under `/api/v1`:

- `/auth`, `/users`, `/roles`, `/admin`, `/tenant`
- `/outbound-integrations`, `/inbound-integrations`, `/integrations`
- `/scheduled-integrations`, `/scheduled-jobs`
- `/events`, `/event-sources`, `/lookups`, `/templates`, `/field-schemas`, `/bulk`, `/versions`, `/import-export`
- `/logs`, `/execution-logs`, `/system-logs`, `/alert-center`, `/dashboard`, `/analytics`, `/dlq`
- `/ai`, `/ai-config`
- `/config`, `/daily-reports`, `/audit`, `/client-errors`
- Do not commit real secrets, `.env` files, private keys, or production data exports.
- Keep `backend/config.example.json` as template values only.
- Configure AI provider keys per organization in AI settings/API (encrypted at rest in Mongo).
See the full comparison with Svix, Convoy, Hookdeck, and building it yourself.
| | ICPlane | Svix | Convoy | Hookdeck |
|---|---|---|---|---|
| Outbound + Inbound + Scheduled | All three | Outbound + Ingest | Both | Both (core is SaaS-only) |
| Self-hosted | Yes (AGPL v3) | Yes (MIT, reduced) | Yes (Elastic, not OSI) | Outpost only |
| Multi-tenant RBAC | 7 roles, org-scoped | Basic | Basic | Basic |
| AI-assisted transforms | 4 providers | Single button | No | No |
| Visual field mapping | Yes | No | No | No |
| DLQ auto-retry | Yes | Manual only | Manual only | Manual only |
| Paid tier starts at | Free | $490/mo | $99/mo | $39/mo |
- docs/README.md
- docs/guides/GETTING-STARTED.md
- docs/guides/DOCKER.md
- docs/guides/DEPLOYMENT.md
- docs/guides/RBAC-GUIDE.md
- docs/architecture/ARCHITECTURE.md
- docs/architecture/SCHEDULED_JOBS.md
- docs/comparison.md
See CONTRIBUTING.md.
If you discover a vulnerability, follow SECURITY.md and avoid posting sensitive details in public issues.
- Website: icplane.com
- Email: founder@icplane.com
- Documentation: docs/README.md
- Issues: GitHub Issues
- Discussions: GitHub Discussions
GNU Affero General Public License v3.0. See LICENSE.