## Problem Statement

### Agents as First-Class OpenPipeline Data Sources with Custom Ingest Endpoints
dtctl is excellent for configuring observability (pipelines, dashboards, SLOs, workflows), but agents—as autonomous systems that emit telemetry—currently have no native way to send data to their custom-configured OpenPipeline endpoints via the same CLI.
This creates friction in closed-loop agentic systems:

- Operators configure observability infrastructure (`dtctl create settings ...`) ✓
- Agents query telemetry for context (`dtctl query "fetch bizevents | filter ..."`) ✓
- Agents execute workflows for self-remediation (`dtctl exec workflow ...`) ✓
- Agents emit telemetry to their custom endpoint ❌ (requires a different tool)
**Current Flow (Fragmented)**

```
Agent has telemetry
  ↓
SDK or raw API call (curl, Python client)
  ↓
Dynatrace built-in endpoint
  ↓
Generic processing (shared with all other events)
```
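The raw-API path today looks roughly like the following. This is a minimal sketch using only the Python standard library; the tenant URL and token are placeholders, and the request is only constructed, not sent:

```python
import json
import urllib.request

# Placeholder values -- substitute a real tenant URL and API token.
TENANT = "https://abc12345.live.dynatrace.com"
TOKEN = "dt0c01.EXAMPLE"

def build_ingest_request(event: dict) -> urllib.request.Request:
    """Hand-craft the POST an agent must issue today, without dtctl."""
    body = json.dumps(event).encode("utf-8")
    return urllib.request.Request(
        url=f"{TENANT}/platform/ingest/v1/events",  # generic built-in endpoint
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Api-Token {TOKEN}",
        },
    )

req = build_ingest_request({"event.type": "agent.activity", "agent.id": "demo"})
```

Every agent re-implements this boilerplate (auth header, serialization, endpoint URL), which is exactly the duplication a `dtctl send` subcommand would absorb.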
**Desired Flow (Unified dtctl + Custom Endpoints)**

```
Agent has telemetry
  ↓
dtctl send event --endpoint agent-activity   (same tool)
  ↓
Custom-configured OpenPipeline endpoint
  ↓
Operator-configured processing rules → Dynatrace
```
## Architecture: OpenPipeline Custom Endpoints

### The OpenPipeline Capability
OpenPipeline supports two categories of ingest endpoints:
**Built-in Endpoints (Generic, No Config)**

```
POST /platform/ingest/v1/events    (generic events)
POST /api/v2/bizevents/ingest      (business events)
POST /api/v2/logs/ingest           (logs)
POST /api/v2/metrics/ingest        (metrics)
```
**Custom Configured Endpoints (Operator-Defined)**

```
POST /platform/ingest/custom/events/<endpoint-name>
POST /platform/ingest/custom/events.sdlc/<endpoint-name>
POST /platform/ingest/custom/security.events/<endpoint-name>
```
### Why Custom Endpoints for Agents
Instead of all agents emitting to generic endpoints, each agent type gets:
- Its own ingest endpoint (named, discoverable)
- Its own processing pipeline (custom rules per agent type)
- Built-in source tracking (events tagged by endpoint)
- Operator control (enrichment, routing, filtering per endpoint)
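The mapping from an operator-chosen endpoint name to its ingest path is mechanical; a sketch of that mapping, with illustrative name validation (the rules below are an assumption, not Dynatrace's actual naming constraints):

```python
import re

CUSTOM_EVENTS_BASE = "/platform/ingest/custom/events"

def custom_endpoint_url(name: str) -> str:
    """Map an operator-defined endpoint name to its ingest path.

    The name pattern here is illustrative only: lowercase
    alphanumerics plus '.' and '-', starting with an alphanumeric.
    """
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*", name):
        raise ValueError(f"invalid endpoint name: {name!r}")
    return f"{CUSTOM_EVENTS_BASE}/{name}"
```

Because the endpoint name appears in the URL, every ingested event is attributable to its source endpoint for free, which is what makes the source tracking above "built-in".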
### Configuration Model

**Step 1: Operator creates a custom ingest endpoint**

```sh
dtctl create settings \
  --schema builtin:openpipeline.custom-events \
  --name agent-activity \
  --description "Ingest endpoint for agent activity events"

# Result: POST /platform/ingest/custom/events/agent-activity becomes available
```
**Step 2: Operator configures the processing pipeline for that endpoint**

```sh
dtctl create settings \
  --schema builtin:openpipeline.bizevents.pipelines \
  --scope custom.events.agent-activity \
  -f agent-pipeline-config.yaml

# Pipeline defines:
# - Enrichment rules (add computed fields)
# - Routing rules (send to specific storage per agent type)
# - Filtering rules (include/exclude records)
# - Transformation rules (reshape events)
```
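A hypothetical `agent-pipeline-config.yaml` could cover all four rule types like this. The field names and matcher syntax below are a sketch, not the actual OpenPipeline settings schema:

```yaml
# Illustrative pipeline config for the agent-activity endpoint.
# Keys and matcher syntax are assumptions, not the real schema.
pipeline:
  enrichment:
    - add_field:
        key: telemetry.source
        value: agent-activity
  routing:
    - match: 'agent.type == "problem-resolver"'
      target: bizevents.agents.resolvers
  filtering:
    - exclude: 'operation == "heartbeat"'
  transformation:
    - parse_json: payload
```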
**Step 3: Agent emits to its configured endpoint**

```sh
dtctl send event -f activity.json --endpoint agent-activity

# Targets: POST /platform/ingest/custom/events/agent-activity
# OpenPipeline processes via configured rules
# Result: enriched, routed, transformed events in Dynatrace
```
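An `activity.json` payload for the command above might look like this; all fields besides `event.type` are illustrative:

```json
{
  "event.type": "agent.activity",
  "agent.id": "problem-resolver",
  "operation": "investigate",
  "timestamp": "2025-01-01T12:00:00Z"
}
```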
### Complete Data Flow

**Operator Configuration Phase:**

```
dtctl create settings --schema openpipeline.custom-events --name agent-activity
dtctl create settings --schema openpipeline.bizevents.pipelines --scope custom.events.agent-activity
```

**Agent Runtime Phase:**

```
Agent Logic → dtctl send event --endpoint agent-activity → /platform/ingest/custom/events/agent-activity

OpenPipeline Processing:
├─ Enrichment (add fields, compute values)
├─ Routing (route to a specific BizEvents stream per agent type)
├─ Filtering (include/exclude, sample if needed)
└─ Transformation (reshape, extract fields)
  ↓
Result in Dynatrace:
Events queryable by agent.id, endpoint, computed fields, etc.
```
### Custom Endpoints vs. Built-in Endpoints

| Aspect | Built-in | Custom Configured |
|---|---|---|
| Endpoint | Generic, shared | Dedicated per agent/use case |
| Configuration | None (hard-coded) | Full OpenPipeline customization |
| Source tracking | No built-in separation | Automatic via endpoint name |
| Processing rules | Same for all events | Unique rules per endpoint |
| Operator control | Limited | Full (enrichment, routing, filtering) |
| Agent identity | Mixed in generic stream | Preserved, queryable |
| Scaling | All agents compete | Clean separation |
## dtctl send event — Implementation

### Proposed Commands

```sh
# Operator lists available custom endpoints
dtctl get settings --schema builtin:openpipeline.custom-events

# Agent emits to a specific endpoint
dtctl send event -f activity.json --endpoint agent-activity

# Or with inline attributes
dtctl send event --type agent.activity \
  --set agent.id=problem-resolver \
  --set operation=investigate \
  --endpoint agent-activity

# Batch emit
dtctl send events -f activities.jsonl --batch --endpoint agent-activity

# Dry-run validation (schema check without sending)
dtctl send event -f activity.json --endpoint agent-activity --dry-run

# Auto-discover endpoint from config (optional)
dtctl send event -f activity.json
# → Looks up the default agent endpoint from the dtctl profile
```
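One way the `--type`/`--set` flags could assemble an event body, sketched here in Python rather than as the actual dtctl implementation:

```python
def build_event(event_type: str, sets: list[str]) -> dict:
    """Assemble an event payload from --type and repeated --set key=value flags."""
    event = {"event.type": event_type}
    for pair in sets:
        # partition keeps any '=' inside the value intact
        key, sep, value = pair.partition("=")
        if not sep or not key:
            raise ValueError(f"--set expects key=value, got: {pair!r}")
        event[key] = value
    return event
```

The same payload builder would serve the single-event, batch, and dry-run paths; only the final step (POST vs. validate-only) differs.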
### Token Scopes

For agents to emit to custom endpoints:

- `openpipeline.events.custom` — required to send to `/platform/ingest/custom/events/<name>`

For operators to configure:

- `openpipeline.settings.create` — configure custom endpoints
- `openpipeline.settings.manage` — manage pipeline rules
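A pre-flight scope check on the agent side could be as simple as the following sketch (the scope names are the ones proposed above; how dtctl actually surfaces token scopes is an open implementation detail):

```python
# Scope an agent token needs before it can emit to a custom endpoint,
# per this proposal.
REQUIRED_EMIT_SCOPES = {"openpipeline.events.custom"}

def missing_scopes(token_scopes: set[str]) -> set[str]:
    """Return the emit scopes the token still lacks (empty set = good to go)."""
    return REQUIRED_EMIT_SCOPES - token_scopes
```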
### Use Cases Enabled
- ✅ Agents emitting to operator-configured custom endpoints
- ✅ Different agent types with different processing pipelines
- ✅ Custom enrichment per agent type (add computed fields)
- ✅ Custom routing per agent type (route to different storage)
- ✅ Full audit trail (events tracked by endpoint + agent.id)
- ✅ Closed-loop workflows (emit → query → decide → act, all via dtctl)
- ✅ Fully autonomous agents (no SDK, pure CLI, fully observable)
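The closed-loop case above can be sketched around any command runner. In this sketch `run` is injected so the dtctl invocations (all hypothetical subcommands from this proposal) can be stubbed in tests or swapped for a real shell:

```python
from typing import Callable

def closed_loop(run: Callable[[str], str], endpoint: str) -> str:
    """Emit -> query -> decide -> act, all through one CLI.

    `run` executes a shell command and returns its stdout; the dtctl
    subcommands below are the ones proposed in this document, and the
    "error" substring check stands in for real decision logic.
    """
    # emit: report current agent activity to the custom endpoint
    run(f"dtctl send event -f activity.json --endpoint {endpoint}")
    # query: ground the next decision in telemetry
    result = run('dtctl query "fetch bizevents | filter agent.id == \\"problem-resolver\\""')
    # decide + act: self-remediate if the query surfaced a problem
    if "error" in result:
        run("dtctl exec workflow remediate")
        return "remediated"
    return "healthy"
```

Because every step is a CLI invocation, the whole loop is observable and auditable through the same pipeline the agent emits to.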
## Real-World Reference: Agentic Enterprise Operating Model
For detailed context: https://github.com/wlfghdr/agentic-enterprise
The agentic enterprise model uses dtctl for the complete observability lifecycle:

- Operator configuration: `dtctl create settings` (custom endpoints + pipelines)
- Agent querying: `dtctl query` (grounding context for decisions)
- Agent execution: `dtctl exec workflow` (self-remediation)
- Agent emission: `dtctl send event --endpoint <name>` (complete the loop)
Result: Fully autonomous agents with first-class observability, all via unified dtctl CLI.
## Acceptance Criteria

- `dtctl send event -f <file> --endpoint <name>` sends to the custom endpoint
- All three custom endpoint families are supported: `POST /platform/ingest/custom/events/<name>`, `POST /platform/ingest/custom/events.sdlc/<name>`, `POST /platform/ingest/custom/security.events/<name>`
- `dtctl send event --type <type> --set <key=value> --endpoint <name>` works
- `dtctl send events --batch --endpoint <name>` handles JSONL batch ingest
- `dtctl send event --dry-run --endpoint <name>` validates without sending
- `dtctl get settings --schema openpipeline.custom-events` lists available custom endpoints
- The required token scope is enforced (`openpipeline.events.custom`)

## Priority

High — Enables fully autonomous agents with operator-controlled observability pipelines, completing the dtctl-native observability lifecycle for agentic systems.