<div align="center" style="margin-bottom:20px">
  <img src=".assets/banner.png" alt="kafka" />
  <h1><i>making Kafka less Kafka-esque</i></h1>
  <div align="center">
    <a href="https://github.com/blugnu/kafka/actions/workflows/release.yml">
      <img alt="build-status" src="https://github.com/blugnu/kafka/actions/workflows/pipeline.yml/badge.svg?branch=master&style=flat-square"/>
    </a>
    <a href="https://goreportcard.com/report/github.com/blugnu/kafka">
      <img alt="go report" src="https://goreportcard.com/badge/github.com/blugnu/kafka"/>
    </a>
    <a>
      <img alt="go version >= 1.14" src="https://img.shields.io/github/go-mod/go-version/blugnu/kafka?style=flat-square"/>
    </a>
    <a href="https://github.com/blugnu/kafka/blob/master/LICENSE">
      <img alt="MIT License" src="https://img.shields.io/github/license/blugnu/kafka?color=%234275f5&style=flat-square"/>
    </a>
    <a href="https://coveralls.io/github/blugnu/kafka?branch=master">
      <img alt="coverage" src="https://img.shields.io/coveralls/github/blugnu/kafka?style=flat-square"/>
    </a>
    <a href="https://pkg.go.dev/github.com/blugnu/kafka">
      <img alt="docs" src="https://pkg.go.dev/badge/github.com/blugnu/kafka"/>
    </a>
  </div>
</div>

<br>

# blugnu/kafka

## Features

- [x] **Discoverable Configuration**: Provides option functions for configuring Kafka
  clients, keeping general, consumer-specific and producer-specific configuration
  separate

- [x] **Reduced Boilerplate**: Provides a complete Kafka consumer implementation from
  a single function call, handling all the boilerplate for you, including offset
  commits, signal handling and graceful shutdown

- [x] **Producer Retries**: Provides a producer implementation that retries sending
  messages to Kafka in the event of a timeout (both the timeout and `MaxRetries`
  are configurable)

- [x] **Mock Producer**: Provides a mock producer implementation that can be used to
  test that an application produces the messages expected

## Installation

```bash
go get github.com/blugnu/kafka
```

## Usage

- Establish a base configuration (e.g. bootstrap servers)
- Configure a consumer and/or producer
- Start the consumer

## Example

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/blugnu/kafka"
)

func HandleEvent(ctx context.Context, msg *kafka.Message) error {
	fmt.Printf("received message: %s\n", string(msg.Value))
	return nil
}

func main() {
	// establish a context that is cancelled on SIGINT/SIGTERM
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
	defer stop()

	// initialise a base configuration
	cfg := kafka.NewConfig(
		kafka.BootstrapServers("localhost:9092"),
	)

	// configure a consumer
	consumer, err := kafka.NewConsumer(cfg,
		kafka.ConsumerGroupID("my-group"),
		kafka.TopicHandler("event", kafka.HandlerFunc(HandleEvent)),
	)
	if err != nil {
		log.Fatal("error creating consumer:", err)
	}

	// start the consumer
	if err := consumer.Start(ctx); err != nil {
		log.Fatal(err)
	}

	// wait for the consumer to stop
	if err := consumer.Wait(); err != nil {
		log.Fatal(err)
	}
}
```
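
The **Mock Producer** feature listed above can be used to verify that code under test
produces the messages expected. As an illustration only (the names `NewMockProducer`,
`Produce`, `Produced` and the `Topic` field below are hypothetical, not necessarily
the package's actual API; consult the package documentation), such a test might be
sketched as:

```go
// Hypothetical sketch of a test using a mock producer; the mock-producer
// API shown here is illustrative only.
func TestProducesEvent(t *testing.T) {
	mock := kafka.NewMockProducer() // hypothetical constructor

	// code under test produces a message via the mock
	if err := mock.Produce(context.Background(), &kafka.Message{
		Topic: "event", // hypothetical field
		Value: []byte("hello"),
	}); err != nil {
		t.Fatal(err)
	}

	// assert that the expected message was produced (hypothetical accessor)
	if got := mock.Produced(); len(got) != 1 || string(got[0].Value) != "hello" {
		t.Errorf("unexpected messages produced: %v", got)
	}
}
```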

# Logging

To avoid importing a specific logging library or imposing a log format on applications,
logs are written via internal log hooks, which are no-ops by default.

To enable logging, call the `kafka.EnableLogs` function, providing functions to write
log entries at whichever levels you require.

For example, the following initialises a `blugnu/ulog` context logger and enables
`INFO` level Kafka logs to be written to that logger; logs at all other levels remain
no-ops:

```go
func logger(ctx context.Context) (context.Context, ulog.Logger, func()) {
	log, cfn, err := ulog.NewLogger(
		ulog.WithLevel(ulog.DebugLevel),
	)
	if err != nil {
		// the logger is not usable here, so panic rather than log
		panic(fmt.Errorf("error initialising logger: %v", err))
	}

	kafka.EnableLogs(&kafka.Loggers{
		Info: func(ctx context.Context, msg string, fields map[string]interface{}) {
			log := ulog.FromContext(ctx)
			log.Info(msg, ulog.Fields(fields))
		},
	})

	return ulog.ContextWithLogger(ctx, log), log, cfn
}
```

Logging functions are provided for each of the following levels:

- `Debug`
- `Info`
- `Error`
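
As a sketch, assuming the `Debug` and `Error` fields of `kafka.Loggers` share the
signature of the `Info` hook shown above, all three levels could be routed to the
standard `log` package:

```go
// Sketch only: routes all three log levels to the standard library logger.
// The Debug and Error field names and signatures are assumed to mirror the
// Info hook; verify them against the package documentation.
kafka.EnableLogs(&kafka.Loggers{
	Debug: func(ctx context.Context, msg string, fields map[string]interface{}) {
		log.Printf("DEBUG %s %v", msg, fields)
	},
	Info: func(ctx context.Context, msg string, fields map[string]interface{}) {
		log.Printf("INFO  %s %v", msg, fields)
	},
	Error: func(ctx context.Context, msg string, fields map[string]interface{}) {
		log.Printf("ERROR %s %v", msg, fields)
	},
})
```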
## Default Logging

A `nil` argument may be passed to `EnableLogs` to enable default logging, which writes
logs using the standard `log` package.

Default logging is also used if a zero-value `&Loggers{}` is passed to `EnableLogs`,
i.e. with all functions set to `nil`.
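
For example, either of the following enables default logging:

```go
kafka.EnableLogs(nil)              // default logging via the standard log package
kafka.EnableLogs(&kafka.Loggers{}) // equivalent: all hook functions are nil
```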

> **Note**: Default logging is not recommended for production use.