
Commit ba22a7b (1 parent: c1f4796)

Add ConfluentKafkaContainer (#9139)

Authored by eddumelendez and kiview

* Deprecate `org.testcontainers.containers.KafkaContainer`
* Add `org.testcontainers.kafka.ConfluentKafkaContainer`, which works with `confluentinc/cp-kafka` images of version `7.4.0` or later.
* Add `KafkaHelper`, containing the common env vars, command, and wait strategy shared with `org.testcontainers.kafka.KafkaContainer`.

Co-authored-by: Kevin Wittek <[email protected]>

6 files changed: +360 −9 lines

docs/modules/kafka.md

Lines changed: 29 additions & 9 deletions

@@ -4,18 +4,25 @@ Testcontainers can be used to automatically instantiate and manage [Apache Kafka
 
 Currently, two different Kafka images are supported:
 
-* `org.testcontainers.containers.KafkaContainer` supports
+* `org.testcontainers.kafka.ConfluentKafkaContainer` supports
 [confluentinc/cp-kafka](https://hub.docker.com/r/confluentinc/cp-kafka/)
 * `org.testcontainers.kafka.KafkaContainer` supports [apache/kafka](https://hub.docker.com/r/apache/kafka/) and [apache/kafka-native](https://hub.docker.com/r/apache/kafka-native/)
 
+!!! note
+    `org.testcontainers.containers.KafkaContainer` is deprecated.
+    Please use `org.testcontainers.kafka.ConfluentKafkaContainer` or `org.testcontainers.kafka.KafkaContainer` instead, depending on the used image.
+
 ## Benefits
 
 * Running a single node Kafka installation with just one line of code
 * No need to manage external Zookeeper installation, required by Kafka. But see [below](#zookeeper)
 
 ## Example
 
+### Using org.testcontainers.containers.KafkaContainer
+
 Create a `KafkaContainer` to use it in your tests:
+
 <!--codeinclude-->
 [Creating a KafkaContainer](../../modules/kafka/src/test/java/org/testcontainers/containers/KafkaContainerTest.java) inside_block:constructorWithVersion
 <!--/codeinclude-->
@@ -28,33 +35,46 @@ Now your tests or any other process running on your machine can get access to ru
 [Bootstrap Servers](../../modules/kafka/src/test/java/org/testcontainers/containers/KafkaContainerTest.java) inside_block:getBootstrapServers
 <!--/codeinclude-->
 
-## Options
+### Using org.testcontainers.kafka.ConfluentKafkaContainer
+
+!!! note
+    Compatible with `confluentinc/cp-kafka` images version `7.4.0` and later.
+
+Create a `ConfluentKafkaContainer` to use it in your tests:
 
-!!! note
-    The options below are only available for `org.testcontainers.containers.KafkaContainer`
+<!--codeinclude-->
+[Creating a ConfluentKafkaContainer](../../modules/kafka/src/test/java/org/testcontainers/kafka/ConfluentKafkaContainerTest.java) inside_block:constructorWithVersion
+<!--/codeinclude-->
+
+## Options
 
 ### <a name="zookeeper"></a> Using external Zookeeper
 
+!!! note
+    Only available for `org.testcontainers.containers.KafkaContainer`
+
 If for some reason you want to use an externally running Zookeeper, then just pass its location during construction:
 <!--codeinclude-->
 [External Zookeeper](../../modules/kafka/src/test/java/org/testcontainers/containers/KafkaContainerTest.java) inside_block:withExternalZookeeper
 <!--/codeinclude-->
 
 ### Using Kraft mode
 
-KRaft mode was declared production ready in 3.3.1 (confluentinc/cp-kafka:7.3.x)"
+!!! note
+    Only available for `org.testcontainers.containers.KafkaContainer`
+
+KRaft mode was declared production ready in 3.3.1 (confluentinc/cp-kafka:7.3.x)
 
 <!--codeinclude-->
 [Kraft mode](../../modules/kafka/src/test/java/org/testcontainers/containers/KafkaContainerTest.java) inside_block:withKraftMode
 <!--/codeinclude-->
 
-See the [versions interoperability matrix](https://docs.confluent.io/platform/current/installation/versions-interoperability.html) for more details.
+See the [versions interoperability matrix](https://docs.confluent.io/platform/current/installation/versions-interoperability.html) for more details.
 
-## Register listeners
+### Register listeners
 
 There are scenarios where additional listeners are needed because the consumer/producer can be in another
-container in the same network or a different process where the port to connect differs from the default
-exposed port `9093`. E.g [Toxiproxy](../../modules/toxiproxy/).
+container in the same network or a different process where the port to connect differs from the default exposed port. E.g [Toxiproxy](../../modules/toxiproxy/).
 
 <!--codeinclude-->
 [Register additional listener](../../modules/kafka/src/test/java/org/testcontainers/containers/KafkaContainerTest.java) inside_block:registerListener
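As a rough illustration of the env-var rewrite behind listener registration, the sketch below mimics what the module does when an extra listener is registered. The class and method names here are hypothetical and the real logic lives in the module's internal `KafkaHelper` (which also dedupes entries via a `Set`); this is a simplified sketch, not the actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: each registered listener gets a synthetic protocol
// name ("TC-0", "TC-1", ...) and is appended to the broker's KAFKA_LISTENERS
// value; a matching "TC-<i>:PLAINTEXT" entry would be appended to
// KAFKA_LISTENER_SECURITY_PROTOCOL_MAP in the same way.
public class ListenerEnvSketch {

    static String addListeners(String kafkaListeners, List<String> extra) {
        List<String> result = new ArrayList<>();
        result.add(kafkaListeners);
        for (int i = 0; i < extra.size(); i++) {
            result.add(String.format("TC-%d://%s", i, extra.get(i)));
        }
        return String.join(",", result);
    }

    public static void main(String[] args) {
        String listeners = "PLAINTEXT://0.0.0.0:9092,BROKER://0.0.0.0:9093,CONTROLLER://0.0.0.0:9094";
        // An extra listener for a consumer in the same Docker network, e.g. Toxiproxy
        System.out.println(addListeners(listeners, List.of("toxiproxy:8666")));
    }
}
```

The host part of each extra listener ("toxiproxy" above) is additionally registered as a network alias of the Kafka container, so peers on the same network can resolve it.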

modules/kafka/src/main/java/org/testcontainers/containers/KafkaContainer.java

Lines changed: 4 additions & 0 deletions

@@ -26,7 +26,11 @@
  * <li>Kafka: 9093</li>
  * <li>Zookeeper: 2181</li>
  * </ul>
+ *
+ * @deprecated use {@link org.testcontainers.kafka.ConfluentKafkaContainer} or
+ * {@link org.testcontainers.kafka.KafkaContainer} instead
  */
+@Deprecated
 public class KafkaContainer extends GenericContainer<KafkaContainer> {
 
     private static final DockerImageName DEFAULT_IMAGE_NAME = DockerImageName.parse("confluentinc/cp-kafka");
New file: org/testcontainers/kafka/ConfluentKafkaContainer.java (142 additions)

package org.testcontainers.kafka;

import com.github.dockerjava.api.command.InspectContainerResponse;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.images.builder.Transferable;
import org.testcontainers.utility.DockerImageName;

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Supplier;

/**
 * Testcontainers implementation for Confluent Kafka.
 * <p>
 * Supported image: {@code confluentinc/cp-kafka}
 * <p>
 * Exposed ports: 9092
 */
public class ConfluentKafkaContainer extends GenericContainer<ConfluentKafkaContainer> {

    private static final DockerImageName DEFAULT_IMAGE_NAME = DockerImageName.parse("confluentinc/cp-kafka");

    private final Set<String> listeners = new HashSet<>();

    private final Set<Supplier<String>> advertisedListeners = new HashSet<>();

    public ConfluentKafkaContainer(String imageName) {
        this(DockerImageName.parse(imageName));
    }

    public ConfluentKafkaContainer(DockerImageName dockerImageName) {
        super(dockerImageName);
        dockerImageName.assertCompatibleWith(DEFAULT_IMAGE_NAME);

        withExposedPorts(KafkaHelper.KAFKA_PORT);
        withEnv(KafkaHelper.envVars());

        withCommand(KafkaHelper.COMMAND);
        waitingFor(KafkaHelper.WAIT_STRATEGY);
    }

    @Override
    protected void configure() {
        KafkaHelper.resolveListeners(this, this.listeners);

        String firstNetworkAlias = getNetworkAliases().stream().findFirst().orElse(null);
        String networkAlias = getNetwork() != null ? firstNetworkAlias : "localhost";
        String controllerQuorumVoters = String.format("%s@%s:9094", getEnvMap().get("KAFKA_NODE_ID"), networkAlias);
        withEnv("KAFKA_CONTROLLER_QUORUM_VOTERS", controllerQuorumVoters);
    }

    @Override
    protected void containerIsStarting(InspectContainerResponse containerInfo) {
        String brokerAdvertisedListener = String.format(
            "BROKER://%s:%s",
            containerInfo.getConfig().getHostName(),
            "9093"
        );
        List<String> advertisedListeners = new ArrayList<>();
        advertisedListeners.add("PLAINTEXT://" + getBootstrapServers());
        advertisedListeners.add(brokerAdvertisedListener);

        advertisedListeners.addAll(KafkaHelper.resolveAdvertisedListeners(this.advertisedListeners));
        String kafkaAdvertisedListeners = String.join(",", advertisedListeners);

        String command = "#!/bin/bash\n";
        // exporting KAFKA_ADVERTISED_LISTENERS with the container hostname
        command += String.format("export KAFKA_ADVERTISED_LISTENERS=%s\n", kafkaAdvertisedListeners);

        command += "/etc/confluent/docker/run \n";
        copyFileToContainer(Transferable.of(command, 0777), KafkaHelper.STARTER_SCRIPT);
    }

    /**
     * Add a listener in the format {@code host:port}.
     * Host will be included as a network alias.
     * <p>
     * Use it to register additional connections to the Kafka broker within the same container network.
     * <p>
     * The listener will be added to the list of default listeners.
     * <p>
     * Default listeners:
     * <ul>
     *     <li>0.0.0.0:9092</li>
     *     <li>0.0.0.0:9093</li>
     *     <li>0.0.0.0:9094</li>
     * </ul>
     * <p>
     * The listener will be added to the list of default advertised listeners.
     * <p>
     * Default advertised listeners:
     * <ul>
     *     <li>{@code container.getConfig().getHostName():9092}</li>
     *     <li>{@code container.getHost():container.getMappedPort(9093)}</li>
     * </ul>
     * @param listener a listener in the format {@code host:port}
     * @return this {@link ConfluentKafkaContainer} instance
     */
    public ConfluentKafkaContainer withListener(String listener) {
        this.listeners.add(listener);
        this.advertisedListeners.add(() -> listener);
        return this;
    }

    /**
     * Add a listener in the format {@code host:port} and a {@link Supplier} for the advertised listener.
     * Host from the listener will be included as a network alias.
     * <p>
     * Use it to register additional connections to the Kafka broker from outside the container network.
     * <p>
     * The listener will be added to the list of default listeners.
     * <p>
     * Default listeners:
     * <ul>
     *     <li>0.0.0.0:9092</li>
     *     <li>0.0.0.0:9093</li>
     *     <li>0.0.0.0:9094</li>
     * </ul>
     * <p>
     * The {@link Supplier} will be added to the list of default advertised listeners.
     * <p>
     * Default advertised listeners:
     * <ul>
     *     <li>{@code container.getConfig().getHostName():9092}</li>
     *     <li>{@code container.getHost():container.getMappedPort(9093)}</li>
     * </ul>
     * @param listener a listener in the format {@code host:port}
     * @param advertisedListener a supplier that will provide the advertised listener
     * @return this {@link ConfluentKafkaContainer} instance
     */
    public ConfluentKafkaContainer withListener(String listener, Supplier<String> advertisedListener) {
        this.listeners.add(listener);
        this.advertisedListeners.add(advertisedListener);
        return this;
    }

    public String getBootstrapServers() {
        return String.format("%s:%s", getHost(), getMappedPort(KafkaHelper.KAFKA_PORT));
    }
}
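The advertised-listener assembly in `containerIsStarting` above can be distilled into a dependency-free sketch. The class and method names below are hypothetical; only the string-building logic mirrors the container code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of how KAFKA_ADVERTISED_LISTENERS is assembled:
// the host-mapped bootstrap address first (PLAINTEXT), then the internal
// BROKER listener on the container hostname, then any user-registered
// advertised listeners (already carrying their TC-<i> protocol prefixes).
public class AdvertisedListenersSketch {

    static String assemble(String bootstrapServers, String containerHostName, List<String> extra) {
        List<String> advertised = new ArrayList<>();
        advertised.add("PLAINTEXT://" + bootstrapServers);
        advertised.add(String.format("BROKER://%s:%s", containerHostName, "9093"));
        advertised.addAll(extra);
        return String.join(",", advertised);
    }

    public static void main(String[] args) {
        // "localhost:32768" stands in for getBootstrapServers() (host + mapped port);
        // "f1a2b3c4" stands in for the Docker container hostname.
        System.out.println(assemble("localhost:32768", "f1a2b3c4", List.of()));
    }
}
```

The resulting value is exported in the generated starter script rather than set via `withEnv`, because the mapped host port is only known once the container has started.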
New file: org/testcontainers/kafka/KafkaHelper.java (104 additions)

package org.testcontainers.kafka;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.containers.wait.strategy.WaitStrategy;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Supplier;
import java.util.stream.Collectors;

class KafkaHelper {

    private static final String DEFAULT_INTERNAL_TOPIC_RF = "1";

    private static final String DEFAULT_CLUSTER_ID = "4L6g3nShT-eMCtK--X86sw";

    private static final String PROTOCOL_PREFIX = "TC";

    static final int KAFKA_PORT = 9092;

    static final String STARTER_SCRIPT = "/tmp/testcontainers_start.sh";

    static final String[] COMMAND = {
        "sh",
        "-c",
        "while [ ! -f " + STARTER_SCRIPT + " ]; do sleep 0.1; done; " + STARTER_SCRIPT,
    };

    static final WaitStrategy WAIT_STRATEGY = Wait.forLogMessage(".*Transitioning from RECOVERY to RUNNING.*", 1);

    static Map<String, String> envVars() {
        Map<String, String> envVars = new HashMap<>();
        envVars.put("CLUSTER_ID", DEFAULT_CLUSTER_ID);

        envVars.put(
            "KAFKA_LISTENERS",
            "PLAINTEXT://0.0.0.0:" + KAFKA_PORT + ",BROKER://0.0.0.0:9093,CONTROLLER://0.0.0.0:9094"
        );
        envVars.put(
            "KAFKA_LISTENER_SECURITY_PROTOCOL_MAP",
            "BROKER:PLAINTEXT,PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT"
        );
        envVars.put("KAFKA_INTER_BROKER_LISTENER_NAME", "BROKER");
        envVars.put("KAFKA_PROCESS_ROLES", "broker,controller");
        envVars.put("KAFKA_CONTROLLER_LISTENER_NAMES", "CONTROLLER");

        envVars.put("KAFKA_NODE_ID", "1");
        envVars.put("KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR", DEFAULT_INTERNAL_TOPIC_RF);
        envVars.put("KAFKA_OFFSETS_TOPIC_NUM_PARTITIONS", DEFAULT_INTERNAL_TOPIC_RF);
        envVars.put("KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR", DEFAULT_INTERNAL_TOPIC_RF);
        envVars.put("KAFKA_TRANSACTION_STATE_LOG_MIN_ISR", DEFAULT_INTERNAL_TOPIC_RF);
        envVars.put("KAFKA_LOG_FLUSH_INTERVAL_MESSAGES", Long.MAX_VALUE + "");
        envVars.put("KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS", "0");
        return envVars;
    }

    static void resolveListeners(GenericContainer<?> kafkaContainer, Set<String> listenersSuppliers) {
        Set<String> listeners = Arrays
            .stream(kafkaContainer.getEnvMap().get("KAFKA_LISTENERS").split(","))
            .collect(Collectors.toSet());
        Set<String> listenerSecurityProtocolMap = Arrays
            .stream(kafkaContainer.getEnvMap().get("KAFKA_LISTENER_SECURITY_PROTOCOL_MAP").split(","))
            .collect(Collectors.toSet());

        List<String> listenersToTransform = new ArrayList<>(listenersSuppliers);
        for (int i = 0; i < listenersToTransform.size(); i++) {
            String protocol = String.format("%s-%d", PROTOCOL_PREFIX, i);
            String listener = listenersToTransform.get(i);
            String listenerHost = listener.split(":")[0];
            String listenerPort = listener.split(":")[1];
            String listenerProtocol = String.format("%s://%s:%s", protocol, listenerHost, listenerPort);
            String protocolMap = String.format("%s:PLAINTEXT", protocol);
            listeners.add(listenerProtocol);
            listenerSecurityProtocolMap.add(protocolMap);

            String host = listener.split(":")[0];
            kafkaContainer.withNetworkAliases(host);
        }

        String kafkaListeners = String.join(",", listeners);
        String kafkaListenerSecurityProtocolMap = String.join(",", listenerSecurityProtocolMap);

        kafkaContainer.getEnvMap().put("KAFKA_LISTENERS", kafkaListeners);
        kafkaContainer.getEnvMap().put("KAFKA_LISTENER_SECURITY_PROTOCOL_MAP", kafkaListenerSecurityProtocolMap);
    }

    static List<String> resolveAdvertisedListeners(Set<Supplier<String>> listenerSuppliers) {
        List<String> advertisedListeners = new ArrayList<>();
        List<Supplier<String>> listenersToTransform = new ArrayList<>(listenerSuppliers);
        for (int i = 0; i < listenersToTransform.size(); i++) {
            Supplier<String> listenerSupplier = listenersToTransform.get(i);
            String protocol = String.format("%s-%d", PROTOCOL_PREFIX, i);
            String listener = listenerSupplier.get();
            String listenerProtocol = String.format("%s://%s", protocol, listener);
            advertisedListeners.add(listenerProtocol);
        }
        return advertisedListeners;
    }
}
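`resolveAdvertisedListeners` defers resolution of each listener until start time via `Supplier`s, which matters when the advertised address depends on a port only mapped once a dependent container (e.g. a proxy) is running. A standalone approximation of the transformation (the class name is hypothetical, and the real method takes a `Set`, so its ordering is not guaranteed):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical standalone version of the supplier-based transformation:
// each Supplier is resolved lazily and the result is prefixed with the
// same synthetic "TC-<i>" protocol used in KAFKA_LISTENERS.
public class AdvertisedSupplierSketch {

    static List<String> resolve(List<Supplier<String>> suppliers) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < suppliers.size(); i++) {
            out.add(String.format("TC-%d://%s", i, suppliers.get(i).get()));
        }
        return out;
    }

    public static void main(String[] args) {
        Supplier<String> lazy = () -> "toxiproxy:" + 8666; // resolved only at call time
        System.out.println(resolve(List.of(lazy)));
    }
}
```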
New file: org/testcontainers/KCatContainer.java (16 additions)

package org.testcontainers;

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.images.builder.Transferable;

public class KCatContainer extends GenericContainer<KCatContainer> {

    public KCatContainer() {
        super("confluentinc/cp-kcat:7.4.1");
        withCreateContainerCmdModifier(cmd -> {
            cmd.withEntrypoint("sh");
        });
        withCopyToContainer(Transferable.of("Message produced by kcat"), "/data/msgs.txt");
        withCommand("-c", "tail -f /dev/null");
    }
}

0 commit comments