
Feature Request: dtctl send event — Unified CLI for Agent Observability Lifecycle #44

@WulfAI

Description

Problem Statement

Agents as First-Class OpenPipeline Data Sources with Custom Ingest Endpoints

dtctl is excellent for configuring observability (pipelines, dashboards, SLOs, workflows), but agents—as autonomous systems that emit telemetry—currently have no native way to send data to their custom-configured OpenPipeline endpoints via the same CLI.

This creates friction in closed-loop agentic systems, where the lifecycle looks like:

  1. Operators configure observability infrastructure (dtctl create settings ...) ✓
  2. Agents query telemetry for context (dtctl query "fetch bizevents | filter ...") ✓
  3. Agents execute workflows for self-remediation (dtctl exec workflow ...) ✓
  4. Agents emit telemetry to their custom endpoint ... ❌ (requires a different tool)

Current Flow (Fragmented)

Agent has telemetry
  ↓
SDK or raw API call (curl, Python client)
  ↓
Dynatrace built-in endpoint
  ↓
Generic processing (shared with all other events)

Desired Flow (Unified dtctl + Custom Endpoints)

Agent has telemetry
  ↓
dtctl send event --endpoint agent-activity (same tool)
  ↓
Custom-configured OpenPipeline endpoint
  ↓
Operator-configured processing rules → Dynatrace

Architecture: OpenPipeline Custom Endpoints

The OpenPipeline Capability

OpenPipeline supports three categories of ingest endpoints:

Built-in Endpoints (Generic, No Config)

POST /platform/ingest/v1/events                    (generic events)
POST /api/v2/bizevents/ingest                     (business events)
POST /api/v2/logs/ingest                          (logs)
POST /api/v2/metrics/ingest                       (metrics)

Custom Configured Endpoints (Operator-Defined)

POST /platform/ingest/custom/events/<endpoint-name>
POST /platform/ingest/custom/events.sdlc/<endpoint-name>
POST /platform/ingest/custom/security.events/<endpoint-name>
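
The path scheme above is mechanical: category prefix plus endpoint name. A minimal sketch of that construction (the endpoint names used below are illustrative):

```python
# Sketch: building a custom OpenPipeline ingest path from a category
# and an operator-chosen endpoint name.
CUSTOM_PREFIXES = {
    "events": "/platform/ingest/custom/events",
    "events.sdlc": "/platform/ingest/custom/events.sdlc",
    "security.events": "/platform/ingest/custom/security.events",
}

def custom_ingest_path(category: str, endpoint_name: str) -> str:
    """Return the ingest path for a named custom endpoint."""
    return f"{CUSTOM_PREFIXES[category]}/{endpoint_name}"

print(custom_ingest_path("events", "agent-activity"))
# → /platform/ingest/custom/events/agent-activity
```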

Why Custom Endpoints for Agents

Instead of all agents emitting to generic endpoints, each agent type gets:

  • Its own ingest endpoint (named, discoverable)
  • Its own processing pipeline (custom rules per agent type)
  • Built-in source tracking (events tagged by endpoint)
  • Operator control (enrichment, routing, filtering per endpoint)

Configuration Model

Step 1: Operator creates custom ingest endpoint

dtctl create settings \
  --schema builtin:openpipeline.custom-events \
  --name agent-activity \
  --description "Ingest endpoint for agent activity events"

# Result: POST /platform/ingest/custom/events/agent-activity becomes available
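
The settings object behind this command could look roughly like the following. This is a hypothetical value for illustration, not the actual `builtin:openpipeline.custom-events` schema:

```json
{
  "schemaId": "builtin:openpipeline.custom-events",
  "value": {
    "name": "agent-activity",
    "description": "Ingest endpoint for agent activity events",
    "path": "/platform/ingest/custom/events/agent-activity",
    "enabled": true
  }
}
```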

Step 2: Operator configures processing pipeline for that endpoint

dtctl create settings \
  --schema builtin:openpipeline.bizevents.pipelines \
  --scope custom.events.agent-activity \
  -f agent-pipeline-config.yaml

# Pipeline defines:
#   - Enrichment rules (add computed fields)
#   - Routing rules (send to specific storage per agent type)
#   - Filtering rules (include/exclude records)
#   - Transformation rules (reshape events)
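
A sketch of what `agent-pipeline-config.yaml` could contain, covering the four rule types above. All keys and values here are illustrative assumptions, not a documented schema:

```yaml
# Hypothetical pipeline config for the agent-activity endpoint.
pipeline:
  enrichment:
    - add_field: agent.environment
      value: production
  routing:
    - match: 'agent.id == "problem-resolver"'
      target: bizevents.agent_resolver
  filtering:
    - exclude: 'operation == "heartbeat"'
  transformation:
    - extract_field: operation.duration_ms
```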

Step 3: Agent emits to its configured endpoint

dtctl send event -f activity.json --endpoint agent-activity

# Targets: POST /platform/ingest/custom/events/agent-activity
# OpenPipeline processes via configured rules
# Result: enriched, routed, transformed events in Dynatrace
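
Under the hood, `dtctl send event` would amount to an authenticated POST of the event JSON to the named endpoint. A minimal sketch that only assembles the request (the tenant URL and the `activity.json` fields are illustrative assumptions):

```python
import json

# Illustrative contents of activity.json; these fields are assumptions,
# not a documented event schema.
activity = {
    "event.type": "agent.activity",
    "agent.id": "problem-resolver",
    "operation": "investigate",
}

def build_send_request(tenant, endpoint, event, token):
    """Assemble the POST that `dtctl send event --endpoint <name>` would issue."""
    return {
        "url": f"{tenant}/platform/ingest/custom/events/{endpoint}",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(event),
    }

req = build_send_request(
    "https://abc123.apps.dynatrace.com", "agent-activity", activity, "<token>"
)
print(req["url"])
```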

Complete Data Flow

Operator Configuration Phase:
  dtctl create settings --schema builtin:openpipeline.custom-events --name agent-activity
  dtctl create settings --schema builtin:openpipeline.bizevents.pipelines --scope custom.events.agent-activity

Agent Runtime Phase:
  Agent Logic → dtctl send event --endpoint agent-activity → /platform/ingest/custom/events/agent-activity

OpenPipeline Processing:
  ├─ Enrichment (add fields, compute values)
  ├─ Routing (route to specific BizEvents stream per agent type)
  ├─ Filtering (include/exclude, sample if needed)
  ├─ Transformation (reshape, extract fields)
  ↓
Result in Dynatrace:
  Events queryable by agent.id, endpoint, computed fields, etc.
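
The processing stages above can be modeled as functions applied in order. A toy sketch, where the concrete enrichment, routing, and filtering rules are illustrative assumptions, not shipped OpenPipeline behavior:

```python
# Toy model of the OpenPipeline stages applied to an incoming event.
def enrich(event):
    # Enrichment: add a computed field.
    return {**event, "agent.team": "sre"}

def route(event):
    # Routing: pick a target stream per agent type.
    stream = ("bizevents.agent_resolver"
              if event.get("agent.id") == "problem-resolver"
              else "bizevents.agents")
    return {**event, "stream": stream}

def keep(event):
    # Filtering: drop heartbeat noise.
    return event.get("operation") != "heartbeat"

def process(event):
    event = route(enrich(event))
    return event if keep(event) else None

result = process({"agent.id": "problem-resolver", "operation": "investigate"})
```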

Custom Endpoints vs. Built-in Endpoints

| Aspect | Built-in | Custom Configured |
| --- | --- | --- |
| Endpoint | Generic, shared | Dedicated per agent/use case |
| Configuration | None (hard-coded) | Full OpenPipeline customization |
| Source tracking | No built-in separation | Automatic via endpoint name |
| Processing rules | Same for all events | Unique rules per endpoint |
| Operator control | Limited | Full (enrichment, routing, filtering) |
| Agent identity | Mixed into generic stream | Preserved and queryable |
| Scaling | All agents compete | Clean separation |

dtctl send event — Implementation

Proposed Commands

# Operator lists available custom endpoints
dtctl get settings --schema builtin:openpipeline.custom-events

# Agent emits to specific endpoint
dtctl send event -f activity.json --endpoint agent-activity

# Or with inline attributes
dtctl send event --type agent.activity \
  --set agent.id=problem-resolver \
  --set operation=investigate \
  --endpoint agent-activity

# Batch emit
dtctl send events -f activities.jsonl --batch --endpoint agent-activity

# Dry-run validation (schema check without sending)
dtctl send event -f activity.json --endpoint agent-activity --dry-run

# Auto-discover endpoint from config (optional)
dtctl send event -f activity.json
# → Looks up default agent endpoint from dtctl profile
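
The inline-attribute and batch variants suggest straightforward client-side handling. A sketch of parsing repeated `--set key=value` flags and chunking a JSONL stream (the batch size and field names are assumptions):

```python
def parse_set_flags(pairs):
    """Turn repeated --set key=value flags into an event dict."""
    event = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        event[key] = value
    return event

def batch_jsonl(lines, size=100):
    """Group non-empty JSONL lines into fixed-size batches for bulk ingest."""
    batch = []
    for line in lines:
        if line.strip():
            batch.append(line)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

event = parse_set_flags(["agent.id=problem-resolver", "operation=investigate"])
```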

Token Scopes

For agents to emit to custom endpoints:

  • openpipeline.events.custom — Required to send to /platform/ingest/custom/events/<name>

For operators to configure:

  • openpipeline.settings.create — Configure custom endpoints
  • openpipeline.settings.manage — Manage pipeline rules

Use Cases Enabled

  • ✅ Agents emitting to operator-configured custom endpoints
  • ✅ Different agent types with different processing pipelines
  • ✅ Custom enrichment per agent type (add computed fields)
  • ✅ Custom routing per agent type (route to different storage)
  • ✅ Full audit trail (events tracked by endpoint + agent.id)
  • ✅ Closed-loop workflows (emit → query → decide → act, all via dtctl)
  • ✅ Fully autonomous agents (no SDK, pure CLI, fully observable)

Real-World Reference: Agentic Enterprise Operating Model

For detailed context: https://github.com/wlfghdr/agentic-enterprise

The agentic enterprise model uses dtctl for the complete observability lifecycle:

  1. Operator configuration: dtctl create settings (custom endpoints + pipelines)
  2. Agent querying: dtctl query (grounding context for decisions)
  3. Agent execution: dtctl exec workflow (self-remediation)
  4. Agent emission: dtctl send event --endpoint <name> (complete the loop)

Result: Fully autonomous agents with first-class observability, all via unified dtctl CLI.
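
The four phases above could be driven from a single loop that shells out to the same CLI. A sketch that only assembles the argv lists (the query string and workflow name are illustrative):

```python
# Argv lists for the four dtctl phases of the closed loop; nothing is
# executed here, the commands mirror the lifecycle steps above.
def lifecycle_commands(endpoint="agent-activity", workflow="remediate-cpu"):
    query = f'fetch bizevents | filter endpoint == "{endpoint}"'
    return [
        ["dtctl", "create", "settings",
         "--schema", "builtin:openpipeline.custom-events", "--name", endpoint],
        ["dtctl", "query", query],
        ["dtctl", "exec", "workflow", workflow],
        ["dtctl", "send", "event", "-f", "activity.json", "--endpoint", endpoint],
    ]

cmds = lifecycle_commands()
```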


Acceptance Criteria

  • Support custom endpoints: POST /platform/ingest/custom/events/<name>
  • Support custom SDLC endpoints: POST /platform/ingest/custom/events.sdlc/<name>
  • Support custom security endpoints: POST /platform/ingest/custom/security.events/<name>
  • dtctl send event -f <file> --endpoint <name> sends to custom endpoint
  • dtctl send event --type <type> --set <key=value> --endpoint <name> works
  • dtctl send events --batch --endpoint <name> handles JSONL batch ingest
  • dtctl send event --dry-run --endpoint <name> validates without sending
  • Optional auto-discovery: dtctl get settings --schema builtin:openpipeline.custom-events lists available endpoints
  • Token scope documentation (openpipeline.events.custom)
  • Help text explains: operator config → custom endpoint → pipeline processing
  • Documentation includes operator + agent workflow examples
  • Agents can use custom endpoints without SDK dependency

Priority

High — Enables fully autonomous agents with operator-controlled observability pipelines, completing the dtctl-native observability lifecycle for agentic systems.

Metadata

Labels: enhancement (New feature or request)