Releases: questdb/kafka-questdb-connector

v0.21

11 Feb 16:58

Release v0.21

What's New

New SMT: StructArrayExplode

A new Single Message Transform that explodes arrays of structs into separate 1D double[] columns — one per struct field. Unlike OrderBookToArray which produces a single 2D double[][] column, StructArrayExplode gives each field its own column.

Example transformation:

Input:  vols: [{strike: 150.0, ivol: 0.25}, {strike: 160.0, ivol: 0.22}]
Output: strikes: [150.0, 160.0]
        ivols:   [0.25, 0.22]

Configuration:

transforms=explode
transforms.explode.type=io.questdb.kafka.StructArrayExplode$Value
transforms.explode.mappings=vols:strikes,ivols:strike,ivol

The mapping format uses positional pairing: target columns and struct fields are matched by position, following the pattern sourceField:targetCol1,targetCol2:structField1,structField2. Use semicolons to map multiple source arrays:

transforms.explode.mappings=bids:bid_prices,bid_amounts:price,amount;asks:ask_prices,ask_amounts:price,amount

This is useful for Protobuf repeated fields and similar data where each struct field needs its own array column in QuestDB. The SMT handles edge cases the same way as OrderBookToArray: missing source fields are skipped, empty arrays are omitted, string-encoded numbers are parsed automatically, and null values inside entries produce clear error messages.
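For illustration, here is roughly how the multi-array mapping above behaves; the field names match the configuration and the values are hypothetical:

Input:  bids: [{price: 10.0, amount: 100.0}, {price: 9.5, amount: 50.0}]
        asks: [{price: 10.5, amount: 80.0}, {price: 11.0, amount: 40.0}]
Output: bid_prices:  [10.0, 9.5]
        bid_amounts: [100.0, 50.0]
        ask_prices:  [10.5, 11.0]
        ask_amounts: [80.0, 40.0]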

Full Changelog

v0.20...v0.21

v0.20

09 Feb 16:21

Release v0.20

What's New

OrderBookToArray SMT: String-Encoded Numeric Values

The OrderBookToArray SMT now transparently handles string-encoded numeric values in addition to native numeric types. This is common in financial data feeds where prices and quantities arrive as JSON strings (e.g., "price": "45120.50" instead of "price": 45120.50).

No configuration changes are needed. The SMT automatically detects the value type:

  • Native numbers (int, long, double, etc.) are cast directly with zero overhead
  • String values are parsed to double on the fly
  • Non-numeric strings produce a clear error message

Example — works with both formats:

{"price": 45120.50, "amount": 2.45}
{"price": "45120.50", "amount": "2.45"}

Both produce the same output: [[45120.5], [2.45]]

What's Changed

  • Testcontainers dependency updated to 1.21.4 for Docker Engine 29.x compatibility

Full Changelog

v0.19...v0.20

v0.19

03 Feb 13:46

Release v0.19

⚠️ Breaking Change: Java 17 Required

This release requires JDK 17 or later. The connector no longer runs on JDK 8 or JDK 11. If you're upgrading from a previous version, ensure your Kafka Connect workers are running on JDK 17+.

What's New

Composed Timestamps from Multiple Fields

You can now build a designated timestamp by concatenating multiple fields. This is useful when your data has separate date and time columns that need to be combined.

Example: KDB-style dates with separate date and time fields:

timestamp.field.name=date,time
timestamp.string.format=yyyyMMddHHmmssSSS

This concatenates the date and time field values and parses them using the specified format.
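For example, with the configuration above, a record with the following (hypothetical) field values:

date: 20240115
time: 093000123

is concatenated to 20240115093000123 and parsed with yyyyMMddHHmmssSSS, yielding the designated timestamp 2024-01-15T09:30:00.123.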

New SMT: OrderBookToArray

A new Single Message Transform that pivots arrays of structs into columnar arrays, designed for order book data and similar use cases where QuestDB's array columns are a better fit.

Example transformation:

Input:  buy_entries: [{price: 10, size: 100}, {price: 5, size: 200}]
Output: bids: [[10.0, 5.0], [100.0, 200.0]]

Configuration:

transforms=orderbook
transforms.orderbook.type=io.questdb.kafka.OrderBookToArray$Value
transforms.orderbook.mappings=buy_entries:bids:price,size;sell_entries:asks:price,size

The SMT handles edge cases gracefully: missing fields are skipped, empty arrays are omitted, and null values inside entries produce clear error messages.
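As a sketch, the mapping above pivots both sides of a (hypothetical) order book message like this:

Input:  buy_entries:  [{price: 10, size: 100}, {price: 5, size: 200}]
        sell_entries: [{price: 11, size: 150}, {price: 12, size: 300}]
Output: bids: [[10.0, 5.0], [100.0, 200.0]]
        asks: [[11.0, 12.0], [150.0, 300.0]]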

What's Changed

  • Updated QuestDB client dependency to 9.3.2
  • Modernized demo projects to current software versions
  • Updated integration tests to Confluent Platform 7.8.0, Debezium 2.7.3, and PostgreSQL 16

Full Changelog

v0.18...v0.19

v0.18

09 Jan 14:29

This release adds support for environment variable expansion in client.conf.string, making it easy to inject secrets from Kubernetes Secrets or other external sources without storing sensitive values directly in your connector configuration.

What's New

Environment Variable Expansion in Configuration

You can now use ${VAR} syntax in client.conf.string to reference environment variables:

client.conf.string=http::addr=localhost;token=${QDB_TOKEN};

Features:

  • ${VAR} syntax expands environment variables at connector startup
  • $$ escapes to a literal $ for values containing dollar signs
  • Clear error messages for undefined variables and malformed syntax
  • Only ${VAR} syntax is supported; $VAR without braces stays literal

This aligns with Kubernetes secret management best practices, allowing you to source tokens and passwords from K8s Secrets.
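For example, assuming a secret value that itself contains a dollar sign, escape it with $$:

client.conf.string=http::addr=localhost;token=abc$$def;

The connector expands this to the literal token value abc$def.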

What's Changed

  • feat: support ${VAR} environment variable expansion in client.conf.string by @jerrinot in #35

Full Changelog: v0.17...v0.18

v0.17

29 Oct 16:34

This release adds a new config option, dlq.send.batch.on.error, that gives you finer control over Dead Letter Queue (DLQ) behavior when a parsing error occurs. When enabled, the connector sends the whole failed batch straight to the DLQ instead of retrying each record individually against the database. This keeps the connector from falling behind and building up message lag, which is especially useful on high-latency networks where per-record retries are slow. The option defaults to false, preserving the existing behavior where the connector tries to save as many good records as possible.
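A minimal sketch of enabling the flag alongside the standard Kafka Connect error-handling settings (the DLQ topic name below is just an example):

errors.tolerance=all
errors.deadletterqueue.topic.name=questdb-dlq
dlq.send.batch.on.error=true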

What's Changed

  • feat: a config flag to send a whole failed batch to DLQ by @jerrinot in #34

Full Changelog: v0.16...v0.17

v0.16

05 Aug 08:00

What's New

Array Support (Preview)

This release introduces support for multi-dimensional arrays, enabling you to stream array data from Kafka to QuestDB.
Important: Array support is currently in preview mode and subject to change in future releases!

Supported Array Types

  • 1D, 2D, and 3D numeric arrays
  • Both schema-based and schema-free messages
  • Nested arrays within complex structures

Example

{
  "symbol": "AAPL",
  "ohlc_5min": [[150.1, 151.2, 149.8, 150.9], [150.9, 152.1, 150.3, 151.5]],
  "volume": [12500, 13200]
}

Current Limitations

  • Not supported: String arrays, mixed-type arrays, null elements, empty arrays
  • Future: Support for up to 32 dimensions and empty arrays.

Array Type Constraints

QuestDB Server Limitation: QuestDB currently supports arrays of floating-point numbers only.

Schema-less Records

  • Arrays of any numeric type are automatically converted to arrays of doubles
  • No errors will occur as all numbers are sent as floating-point

Schema-based Records

  • The schema must specify floating-point types for arrays
  • Arrays with fixed-point (integer) schemas will throw an error
  • Make sure your schemas use FLOAT32 or FLOAT64 for array elements
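As a sketch, a schema-based JSON message (JsonConverter with schemas enabled) whose array elements are declared as double could look like this; the field names are illustrative:

{
  "schema": {
    "type": "struct",
    "fields": [
      {"field": "symbol", "type": "string"},
      {"field": "volume", "type": "array", "items": {"type": "double"}}
    ]
  },
  "payload": {
    "symbol": "AAPL",
    "volume": [12500.0, 13200.0]
  }
}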

Multidimensional Array Requirements

QuestDB Server Limitation: Multidimensional arrays must have uniform dimensions (non-jagged).

Examples:

✅ Supported - uniform 2D array (2x3)
{
  "matrix": [
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 6.0]
  ]
}

❌ Not supported - jagged array (different row lengths)
{
  "matrix": [
    [1.0, 2.0],
    [3.0, 4.0, 5.0]
  ]
}

✅ Supported - uniform 3D array (2x2x2)
{
  "tensor": [
    [[1.0, 2.0], [3.0, 4.0]],
    [[5.0, 6.0], [7.0, 8.0]]
  ]
}

Messages containing jagged arrays will be rejected by the connector.
Read more about arrays in QuestDB: https://github.com/questdb/kafka-questdb-connector/releases

Upgrading from v0.15

This release is backward compatible with v0.15. Simply update your connector version to start using array support.

Full Changelog: v0.15...v0.16

v0.15

28 Jul 09:48

What's New

  • Table name templating now supports the ${partition} variable. This enables partitioning strategies where records from different Kafka partitions can be routed to separate QuestDB tables. Example: table=trades_${partition} #32

What's Fixed

  • The connector skips tombstone records. eec4dfa

Full Changelog: v0.14...v0.15

v0.14

27 Nov 10:36

This release improves error handling consistency by expanding Dead Letter Queue (DLQ) functionality. Previously, messages were only sent to the configured DLQ when the Kafka Connect framework threw errors (e.g., during deserialization failures). Now, the DLQ captures all error scenarios, regardless of their origin point in the system.

What's Fixed

  • Invalid entries are sent to a Dead Letter Queue (when configured)

Breaking change

This release upgrades the internal QuestDB ILP client to version 8.2, which introduces a new dependency on Linux systems. The updated client uses native code that requires GNU glibc 2.28 or higher on Linux distributions. This requirement may impact compatibility with older Linux systems. If this limitation affects your deployment, please open an issue to discuss alternatives.

Full Changelog: v0.13...v0.14

v0.13

18 Jun 14:07

🚀 What’s New?

This release introduces templating for the target table name 🎯 and includes no other changes.

🔧 Features

Templating enables dynamic generation of the QuestDB target table name based on the message key and the originating topic.
For example: table=${topic}_${key} or table=from_kafka_${topic}

Supported placeholders: ${key} and ${topic}. Placeholders are case-sensitive, and an unsupported placeholder throws an error on connector startup. When a message does not have a key, ${key} resolves to the string null.
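For instance, with table=${topic}_${key}, a message on topic trades with key BTC-USD (hypothetical values) is written to the table trades_BTC-USD, while a keyless message on the same topic goes to trades_null.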

🔗 More Info

For more details, see the original PR: #24.

Full Changelog: v0.12...v0.13

v0.12

24 May 13:12

This release improves the flushing behavior. Flushing is now managed by the connector, rather than depending on the embedded ILP client. With more contextual awareness, the connector can make better decisions, leading to reduced latency and higher throughput. The flushing parameters can still be configured via client.conf.string, as with any other client settings.
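For example, the auto-flush thresholds can be set in the client configuration string; the values below are illustrative, assuming the standard QuestDB client settings auto_flush_rows and auto_flush_interval:

client.conf.string=http::addr=localhost:9000;auto_flush_rows=10000;auto_flush_interval=1000;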

What's Changed

  • Use auto flush configuration from client config string by @jerrinot in #22

What's New

  • Log connector version and git revision by @jerrinot in #21

What's Fixed

  • Fix NPE when Kafka Connect requests a commit right after an error by @jerrinot in #23

Full Changelog: v0.11...v0.12