39 changes: 38 additions & 1 deletion config.md
@@ -1,7 +1,7 @@
# Honeycomb Refinery Configuration Documentation

This is the documentation for the configuration file for Honeycomb's Refinery.
It was automatically generated on 2024-03-08 at 20:13:38 UTC.
It was automatically generated on 2024-03-11 at 06:48:05 UTC.

## The Config file

@@ -27,6 +27,7 @@ The remainder of this document describes the sections within the file and the fi
## Table of Contents
- [General Configuration](#general-configuration)
- [Network Configuration](#network-configuration)
- [Kafka Receiver Configuration](#kafka-receiver-configuration)
- [Access Key Configuration](#access-key-configuration)
- [Refinery Telemetry](#refinery-telemetry)
- [Traces](#traces)
@@ -148,6 +149,42 @@ This setting is the destination to which Refinery sends all events that it decid
- Environment variable: `REFINERY_HONEYCOMB_API`
- Command line switch: `--honeycomb-api`

## Kafka Receiver Configuration

`KafkaReceiver` contains configuration options for the Kafka receiver.

### `BootstrapAddr`

BootstrapAddr is the IP and port on which to connect to a Kafka bootstrap server.

This is how we determine the Kafka broker we are using.

- Not eligible for live reload.
- Type: `hostport`
- Environment variable: `REFINERY_KAFKA_BOOTSTRAP_ADDRESS`
- Command line switch: `--kafka-bootstrap-address`

### `Topic`

Topic is the Kafka topic to consume.

This is how we determine the Kafka topic we are using.

- Not eligible for live reload.
- Type: `string`
- Environment variable: `REFINERY_KAFKA_TOPIC`
- Command line switch: `--kafka-topic`

### `ConsumerGroupName`

ConsumerGroupName is the name of the Kafka consumer group to join.

This is how we determine the Kafka consumer group we are using.

- Not eligible for live reload.
- Type: `string`
- Environment variable: `REFINERY_KAFKA_CONSUMER_GROUP_NAME`
- Command line switch: `--kafka-consumer-group-name`

## Access Key Configuration

`AccessKeys` contains access keys -- API keys that the proxy will treat specially, and other flags that control how the proxy handles API keys.
49 changes: 26 additions & 23 deletions config/cmdenv.go
@@ -26,29 +26,32 @@ import (
// that this system uses reflection to establish the relationship between the
// config struct and the command line options.
type CmdEnv struct {
ConfigLocation string `short:"c" long:"config" env:"REFINERY_CONFIG" default:"/etc/refinery/refinery.yaml" description:"config file or URL to load"`
RulesLocation string `short:"r" long:"rules_config" env:"REFINERY_RULES_CONFIG" default:"/etc/refinery/rules.yaml" description:"config file or URL to load"`
HTTPListenAddr string `long:"http-listen-address" env:"REFINERY_HTTP_LISTEN_ADDRESS" description:"HTTP listen address for incoming event traffic"`
PeerListenAddr string `long:"peer-listen-address" env:"REFINERY_PEER_LISTEN_ADDRESS" description:"Peer listen address for communication between Refinery instances"`
GRPCListenAddr string `long:"grpc-listen-address" env:"REFINERY_GRPC_LISTEN_ADDRESS" description:"gRPC listen address for OTLP traffic"`
RedisHost string `long:"redis-host" env:"REFINERY_REDIS_HOST" description:"Redis host address"`
RedisUsername string `long:"redis-username" env:"REFINERY_REDIS_USERNAME" description:"Redis username"`
RedisPassword string `long:"redis-password" env:"REFINERY_REDIS_PASSWORD" description:"Redis password"`
RedisAuthCode string `long:"redis-auth-code" env:"REFINERY_REDIS_AUTH_CODE" description:"Redis AUTH code"`
HoneycombAPI string `long:"honeycomb-api" env:"REFINERY_HONEYCOMB_API" description:"Honeycomb API URL"`
HoneycombAPIKey string `long:"honeycomb-api-key" env:"REFINERY_HONEYCOMB_API_KEY" description:"Honeycomb API key (for logger and metrics)"`
HoneycombLoggerAPIKey string `long:"logger-api-key" env:"REFINERY_HONEYCOMB_LOGGER_API_KEY" description:"Honeycomb logger API key"`
LegacyMetricsAPIKey string `long:"legacy-metrics-api-key" env:"REFINERY_HONEYCOMB_METRICS_API_KEY" description:"API key for legacy Honeycomb metrics"`
OTelMetricsAPIKey string `long:"otel-metrics-api-key" env:"REFINERY_OTEL_METRICS_API_KEY" description:"API key for OTel metrics if being sent to Honeycomb"`
QueryAuthToken string `long:"query-auth-token" env:"REFINERY_QUERY_AUTH_TOKEN" description:"Token for debug/management queries"`
AvailableMemory MemorySize `long:"available-memory" env:"REFINERY_AVAILABLE_MEMORY" description:"The maximum memory available for Refinery to use (ex: 4GiB)."`
Debug bool `short:"d" long:"debug" description:"Runs debug service (on the first open port between localhost:6060 and :6069 by default)"`
Version bool `short:"v" long:"version" description:"Print version number and exit"`
InterfaceNames bool `long:"interface-names" description:"Print system's network interface names and exit."`
Validate bool `short:"V" long:"validate" description:"Validate the configuration files, writing results to stdout, and exit with 0 if valid, 1 if invalid."`
NoValidate bool `long:"no-validate" description:"Do not attempt to validate the configuration files. Makes --validate meaningless."`
WriteConfig string `long:"write-config" description:"After applying defaults, environment variables, and command line values, write the loaded configuration to the specified file as YAML and exit."`
WriteRules string `long:"write-rules" description:"After applying defaults, write the loaded rules to the specified file as YAML and exit."`
ConfigLocation string `short:"c" long:"config" env:"REFINERY_CONFIG" default:"/etc/refinery/refinery.yaml" description:"config file or URL to load"`
RulesLocation string `short:"r" long:"rules_config" env:"REFINERY_RULES_CONFIG" default:"/etc/refinery/rules.yaml" description:"config file or URL to load"`
HTTPListenAddr string `long:"http-listen-address" env:"REFINERY_HTTP_LISTEN_ADDRESS" description:"HTTP listen address for incoming event traffic"`
PeerListenAddr string `long:"peer-listen-address" env:"REFINERY_PEER_LISTEN_ADDRESS" description:"Peer listen address for communication between Refinery instances"`
GRPCListenAddr string `long:"grpc-listen-address" env:"REFINERY_GRPC_LISTEN_ADDRESS" description:"gRPC listen address for OTLP traffic"`
KafkaBootstrapAddr string `long:"kafka-bootstrap-address" env:"REFINERY_KAFKA_BOOTSTRAP_ADDRESS" description:"Kafka bootstrap address"`
KafkaTopic string `long:"kafka-topic" env:"REFINERY_KAFKA_TOPIC" description:"Kafka topic to consume"`
KafkaConsumerGroupName string `long:"kafka-consumer-group-name" env:"REFINERY_KAFKA_CONSUMER_GROUP_NAME" description:"Kafka consumer group to join"`
RedisHost string `long:"redis-host" env:"REFINERY_REDIS_HOST" description:"Redis host address"`
RedisUsername string `long:"redis-username" env:"REFINERY_REDIS_USERNAME" description:"Redis username"`
RedisPassword string `long:"redis-password" env:"REFINERY_REDIS_PASSWORD" description:"Redis password"`
RedisAuthCode string `long:"redis-auth-code" env:"REFINERY_REDIS_AUTH_CODE" description:"Redis AUTH code"`
HoneycombAPI string `long:"honeycomb-api" env:"REFINERY_HONEYCOMB_API" description:"Honeycomb API URL"`
HoneycombAPIKey string `long:"honeycomb-api-key" env:"REFINERY_HONEYCOMB_API_KEY" description:"Honeycomb API key (for logger and metrics)"`
HoneycombLoggerAPIKey string `long:"logger-api-key" env:"REFINERY_HONEYCOMB_LOGGER_API_KEY" description:"Honeycomb logger API key"`
LegacyMetricsAPIKey string `long:"legacy-metrics-api-key" env:"REFINERY_HONEYCOMB_METRICS_API_KEY" description:"API key for legacy Honeycomb metrics"`
OTelMetricsAPIKey string `long:"otel-metrics-api-key" env:"REFINERY_OTEL_METRICS_API_KEY" description:"API key for OTel metrics if being sent to Honeycomb"`
QueryAuthToken string `long:"query-auth-token" env:"REFINERY_QUERY_AUTH_TOKEN" description:"Token for debug/management queries"`
AvailableMemory MemorySize `long:"available-memory" env:"REFINERY_AVAILABLE_MEMORY" description:"The maximum memory available for Refinery to use (ex: 4GiB)."`
Debug bool `short:"d" long:"debug" description:"Runs debug service (on the first open port between localhost:6060 and :6069 by default)"`
Version bool `short:"v" long:"version" description:"Print version number and exit"`
InterfaceNames bool `long:"interface-names" description:"Print system's network interface names and exit."`
Validate bool `short:"V" long:"validate" description:"Validate the configuration files, writing results to stdout, and exit with 0 if valid, 1 if invalid."`
NoValidate bool `long:"no-validate" description:"Do not attempt to validate the configuration files. Makes --validate meaningless."`
WriteConfig string `long:"write-config" description:"After applying defaults, environment variables, and command line values, write the loaded configuration to the specified file as YAML and exit."`
WriteRules string `long:"write-rules" description:"After applying defaults, write the loaded rules to the specified file as YAML and exit."`
}

func NewCmdEnvOptions(args []string) (*CmdEnv, error) {
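
The tag syntax on these fields follows the `short`/`long`/`env`/`default`/`description` convention used by go-flags-style parsers. The sketch below is not part of this change; it is a minimal, self-contained illustration (assuming `github.com/jessevdk/go-flags`, with an illustrative struct and values) of how tags like the new Kafka ones bind command-line flags and environment variables to fields.

```go
// Illustrative only -- NOT part of this PR. Shows how go-flags style struct
// tags (like the new Kafka fields above) bind flags and env vars to fields.
package main

import (
	"fmt"
	"log"

	flags "github.com/jessevdk/go-flags"
)

type kafkaOpts struct {
	BootstrapAddr     string `long:"kafka-bootstrap-address" env:"REFINERY_KAFKA_BOOTSTRAP_ADDRESS" description:"Kafka bootstrap address"`
	Topic             string `long:"kafka-topic" env:"REFINERY_KAFKA_TOPIC" description:"Kafka topic to consume"`
	ConsumerGroupName string `long:"kafka-consumer-group-name" env:"REFINERY_KAFKA_CONSUMER_GROUP_NAME" description:"Kafka consumer group to join"`
}

func main() {
	var opts kafkaOpts
	// Flags left unset fall back to the env vars named in the tags, then to zero values.
	if _, err := flags.ParseArgs(&opts, []string{
		"--kafka-bootstrap-address", "localhost:9092",
		"--kafka-topic", "refinery-events",
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", opts)
}
```

Dropping a flag and exporting the matching `REFINERY_KAFKA_*` variable instead populates the same field, which is the behavior the `cmdenv` tags in `config/file_config.go` rely on.
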
10 changes: 10 additions & 0 deletions config/config.go
@@ -34,6 +34,16 @@ type Config interface {
// GetHTTPIdleTimeout returns the idle timeout for refinery's HTTP server
GetHTTPIdleTimeout() time.Duration

// GetKafkaBootstrapAddr returns the address and port on which to connect
// to become a Kafka consumer.
GetKafkaBootstrapAddr() (string, error)

// GetKafkaTopic returns the topic to consume.
GetKafkaTopic() (string, error)

// GetKafkaConsumerGroupName returns the Kafka consumer group to join.
GetKafkaConsumerGroupName() (string, error)

// GetCompressPeerCommunication will be true if refinery should compress
// data before forwarding it to a peer.
GetCompressPeerCommunication() bool
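
To make the intent of these getters concrete, here is a minimal sketch of a consumer loop that reads them. This PR adds only the configuration surface, not the receiver itself, so the function below is illustrative: it assumes the `github.com/segmentio/kafka-go` client and an already-constructed `config.Config`, neither of which is mandated by this change.

```go
// Illustrative sketch only; not part of this PR.
package kafkareceiver

import (
	"context"
	"log"

	kafka "github.com/segmentio/kafka-go"

	"github.com/honeycombio/refinery/config"
)

func run(ctx context.Context, cfg config.Config) error {
	addr, err := cfg.GetKafkaBootstrapAddr()
	if err != nil {
		return err // malformed host:port
	}
	if addr == "" {
		return nil // Kafka receiver not configured; nothing to do
	}
	topic, _ := cfg.GetKafkaTopic()
	group, _ := cfg.GetKafkaConsumerGroupName()

	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{addr},
		GroupID: group,
		Topic:   topic,
	})
	defer r.Close()

	for {
		msg, err := r.ReadMessage(ctx)
		if err != nil {
			return err // context cancelled or broker failure
		}
		log.Printf("received %d bytes from topic %q", len(msg.Value), msg.Topic)
	}
}
```

Treating an empty bootstrap address as "not configured" keeps the receiver optional, matching the empty defaults documented above.
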
33 changes: 33 additions & 0 deletions config/file_config.go
@@ -49,6 +49,7 @@ type fileConfig struct {
type configContents struct {
General GeneralConfig `yaml:"General"`
Network NetworkConfig `yaml:"Network"`
KafkaReceiver KafkaReceiverConfig `yaml:"KafkaReceiver"`
AccessKeys AccessKeyConfig `yaml:"AccessKeys"`
Telemetry RefineryTelemetryConfig `yaml:"RefineryTelemetry"`
Traces TracesConfig `yaml:"Traces"`
@@ -84,6 +85,12 @@ type NetworkConfig struct {
HTTPIdleTimeout Duration `yaml:"HTTPIdleTimeout"`
}

type KafkaReceiverConfig struct {
BootstrapAddr string `yaml:"BootstrapAddr" cmdenv:"KafkaBootstrapAddr"`
Topic string `yaml:"Topic" cmdenv:"KafkaTopic"`
ConsumerGroupName string `yaml:"ConsumerGroupName" cmdenv:"KafkaConsumerGroupName"`
}

type AccessKeyConfig struct {
ReceiveKeys []string `yaml:"ReceiveKeys" default:"[]"`
AcceptOnlyListedKeys bool `yaml:"AcceptOnlyListedKeys"`
@@ -505,6 +512,32 @@ func (f *fileConfig) GetHTTPIdleTimeout() time.Duration {
return time.Duration(f.mainConfig.Network.HTTPIdleTimeout)
}

func (f *fileConfig) GetKafkaBootstrapAddr() (string, error) {
f.mux.RLock()
defer f.mux.RUnlock()

addr := f.mainConfig.KafkaReceiver.BootstrapAddr
_, _, err := net.SplitHostPort(addr)
if addr != "" && err != nil {
return "", err
}
return f.mainConfig.KafkaReceiver.BootstrapAddr, nil
}

func (f *fileConfig) GetKafkaTopic() (string, error) {
f.mux.RLock()
defer f.mux.RUnlock()

return f.mainConfig.KafkaReceiver.Topic, nil
}

func (f *fileConfig) GetKafkaConsumerGroupName() (string, error) {
f.mux.RLock()
defer f.mux.RUnlock()

return f.mainConfig.KafkaReceiver.ConsumerGroupName, nil
}

func (f *fileConfig) GetCompressPeerCommunication() bool {
f.mux.RLock()
defer f.mux.RUnlock()
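
The `SplitHostPort` guard above accepts an empty `BootstrapAddr` (meaning the receiver is simply not configured) but rejects any non-empty value that does not parse as `host:port`. A standalone illustration of that behavior, with made-up addresses, not part of this PR:

```go
// Illustration only: mirrors the validation in GetKafkaBootstrapAddr.
package main

import (
	"fmt"
	"net"
)

func main() {
	for _, addr := range []string{"", "localhost:9092", "kafka.internal:29092", "localhost"} {
		_, _, err := net.SplitHostPort(addr)
		// Empty string is accepted; anything else must be a valid host:port.
		accepted := addr == "" || err == nil
		fmt.Printf("%q accepted=%v err=%v\n", addr, accepted, err)
	}
}
```
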
37 changes: 37 additions & 0 deletions config/metadata/configMeta.yaml
@@ -140,6 +140,43 @@ groups:
This setting is the destination to which Refinery sends all events
that it decides to keep.

- name: KafkaReceiver
title: "Kafka Receiver Configuration"
description: contains configuration options for the Kafka receiver.
fields:
- name: BootstrapAddr
type: hostport
valuetype: nondefault
reload: false
default: ""
envvar: REFINERY_KAFKA_BOOTSTRAP_ADDRESS
commandLine: kafka-bootstrap-address
summary: is the IP and port on which to connect to a Kafka bootstrap server.
description: >
This is how we determine the Kafka broker we are using.

- name: Topic
type: string
valuetype: nondefault
reload: false
default: ""
envvar: REFINERY_KAFKA_TOPIC
commandLine: kafka-topic
summary: is the Kafka topic to consume.
description: >
This is how we determine the Kafka topic we are using.

- name: ConsumerGroupName
type: string
valuetype: nondefault
reload: false
default: ""
envvar: REFINERY_KAFKA_CONSUMER_GROUP_NAME
commandLine: kafka-consumer-group-name
summary: is the name of the Kafka consumer group to join.
description: >
This is how we determine the Kafka consumer group we are using.

- name: AccessKeys
title: "Access Key Configuration"
description: >
25 changes: 25 additions & 0 deletions config/mock.go
@@ -91,6 +91,10 @@ type MockConfig struct {
ParentIdFieldNames []string
CfgMetadata []ConfigMetadata

KafkaBootstrapAddr string
KafkaTopic string
KafkaConsumerGroupName string

Mux sync.RWMutex
}

@@ -163,6 +167,27 @@ func (m *MockConfig) GetHTTPIdleTimeout() time.Duration {
return m.GetHTTPIdleTimeoutVal
}

func (m *MockConfig) GetKafkaBootstrapAddr() (string, error) {
m.Mux.RLock()
defer m.Mux.RUnlock()

return m.KafkaBootstrapAddr, nil
}

func (m *MockConfig) GetKafkaTopic() (string, error) {
m.Mux.RLock()
defer m.Mux.RUnlock()

return m.KafkaTopic, nil
}

func (m *MockConfig) GetKafkaConsumerGroupName() (string, error) {
m.Mux.RLock()
defer m.Mux.RUnlock()

return m.KafkaConsumerGroupName, nil
}

func (m *MockConfig) GetCompressPeerCommunication() bool {
m.Mux.RLock()
defer m.Mux.RUnlock()
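
With these fields on `MockConfig`, tests for a future Kafka receiver can inject canned settings without touching a real config file. A hypothetical test sketch, not part of this change, assuming the module path `github.com/honeycombio/refinery` and arbitrary setting values:

```go
// Hypothetical test using the MockConfig fields added above.
package config_test

import (
	"testing"

	"github.com/honeycombio/refinery/config"
)

func TestMockKafkaSettings(t *testing.T) {
	cfg := &config.MockConfig{
		KafkaBootstrapAddr:     "localhost:9092",
		KafkaTopic:             "refinery-events",
		KafkaConsumerGroupName: "refinery",
	}

	if addr, err := cfg.GetKafkaBootstrapAddr(); err != nil || addr != "localhost:9092" {
		t.Fatalf("bootstrap addr = %q, err = %v", addr, err)
	}
	if topic, _ := cfg.GetKafkaTopic(); topic != "refinery-events" {
		t.Fatalf("topic = %q", topic)
	}
	if group, _ := cfg.GetKafkaConsumerGroupName(); group != "refinery" {
		t.Fatalf("consumer group = %q", group)
	}
}
```
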
31 changes: 30 additions & 1 deletion config_complete.yaml
@@ -2,7 +2,7 @@
## Honeycomb Refinery Configuration ##
######################################
#
# created on 2024-02-23 at 22:23:27 UTC from ../../config.yaml using a template generated on 2024-02-23 at 22:23:25 UTC
# created on 2024-03-11 at 06:48:04 UTC from ../../config.yaml using a template generated on 2024-03-11 at 06:48:03 UTC

# This file contains a configuration for the Honeycomb Refinery. It is in YAML
# format, organized into named groups, each of which contains a set of
@@ -125,6 +125,35 @@ Network:
## Eligible for live reload.
# HoneycombAPI: "https://api.honeycomb.io"

##################################
## Kafka Receiver Configuration ##
##################################
KafkaReceiver:
## KafkaReceiver contains configuration options for the Kafka receiver.
####
## BootstrapAddr is the IP and port on which to connect to a Kafka
## bootstrap server.
##
## This is how we determine the Kafka broker we are using.
##
## Should be an ip:port like "localhost:9092".
## Not eligible for live reload.
# BootstrapAddr: ""

## Topic is the Kafka topic to consume.
##
## This is how we determine the Kafka topic we are using.
##
## Not eligible for live reload.
# Topic: ""

## ConsumerGroupName is the name of the Kafka consumer group to join.
##
## This is how we determine the Kafka consumer group we are using.
##
## Not eligible for live reload.
# ConsumerGroupName: ""

##############################
## Access Key Configuration ##
##############################
37 changes: 37 additions & 0 deletions refinery_config.md
@@ -125,6 +125,43 @@ This setting is the destination to which Refinery sends all events that it decid
- Environment variable: `REFINERY_HONEYCOMB_API`
- Command line switch: `--honeycomb-api`

## Kafka Receiver Configuration

`KafkaReceiver` contains configuration options for the Kafka receiver.

### `BootstrapAddr`

`BootstrapAddr` is the IP and port on which to connect to a Kafka bootstrap server.

This is how we determine the Kafka broker we are using.

- Not eligible for live reload.
- Type: `hostport`
- Environment variable: `REFINERY_KAFKA_BOOTSTRAP_ADDRESS`
- Command line switch: `--kafka-bootstrap-address`

### `Topic`

`Topic` is the Kafka topic to consume.

This is how we determine the Kafka topic we are using.

- Not eligible for live reload.
- Type: `string`
- Environment variable: `REFINERY_KAFKA_TOPIC`
- Command line switch: `--kafka-topic`

### `ConsumerGroupName`

`ConsumerGroupName` is the name of the Kafka consumer group to join.

This is how we determine the Kafka consumer group we are using.

- Not eligible for live reload.
- Type: `string`
- Environment variable: `REFINERY_KAFKA_CONSUMER_GROUP_NAME`
- Command line switch: `--kafka-consumer-group-name`

## Access Key Configuration

`AccessKeys` contains access keys -- API keys that the proxy will treat specially, and other flags that control how the proxy handles API keys.
2 changes: 1 addition & 1 deletion rules.md
@@ -1,7 +1,7 @@
# Honeycomb Refinery Rules Documentation

This is the documentation for the rules configuration for Honeycomb's Refinery.
It was automatically generated on 2024-02-23 at 22:23:29 UTC.
It was automatically generated on 2024-03-11 at 06:48:05 UTC.

## The Rules file

10 changes: 9 additions & 1 deletion tools/convert/configDataNames.txt
@@ -1,5 +1,5 @@
# Names of groups and fields in the new config file format.
# Automatically generated on 2024-02-23 at 22:23:26 UTC.
# Automatically generated on 2024-03-11 at 06:48:03 UTC.

General:
- ConfigurationVersion
@@ -21,6 +21,14 @@ Network:
- HoneycombAPI


KafkaReceiver:
- BootstrapAddr

- Topic

- ConsumerGroupName


AccessKeys:
- ReceiveKeys (originally APIKeys)
