Changes from 3 commits
Original file line number Diff line number Diff line change
@@ -6,10 +6,10 @@
// deploying/assembly-deploy-tasks.adoc

[id='kafka-bridge-{context}']
= Deploying Kafka Bridge
= Deploying HTTP Bridge

[role="_abstract"]
Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster.
HTTP Bridge provides an API for integrating HTTP-based clients with a Kafka cluster.

//Procedure to deploy a Kafka Bridge cluster
include::../../modules/deploying/proc-deploy-kafka-bridge.adoc[leveloffset=+1]
@@ -82,11 +82,11 @@ metrics
<8> Alerting rules examples for use with Prometheus Alertmanager (deployed with Prometheus).
<9> Installation resource file for the Prometheus image.
<10> Grafana dashboards for components using the Strimzi Metrics Reporter.
<11> `KafkaBridge` resource for deploying Kafka Bridge with Strimzi Metrics Reporter.
<11> `KafkaBridge` resource for deploying HTTP Bridge with Strimzi Metrics Reporter.
<12> `KafkaConnect` resource for deploying Kafka Connect with Strimzi Metrics Reporter.
<13> `Kafka` resource for deploying Kafka with Strimzi Metrics Reporter.
<14> `KafkaMirrorMaker2` resource for deploying MirrorMaker 2 with Strimzi Metrics Reporter.
<15> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka Bridge.
<15> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for HTTP Bridge.
<16> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka Connect.
<17> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Cruise Control.
<18> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka.
@@ -16,7 +16,7 @@ Brokers and clients communicate with the authorization server, as necessary, to
For a deployment of Strimzi, OAuth 2.0 integration provides the following support:

* Server-side OAuth 2.0 authentication for Kafka brokers
* Client-side OAuth 2.0 authentication for Kafka MirrorMaker, Kafka Connect, and the Kafka Bridge
* Client-side OAuth 2.0 authentication for Kafka MirrorMaker, Kafka Connect, and the HTTP Bridge

include::../../modules/oauth/con-oauth-authentication-broker.adoc[leveloffset=+1]
include::../../modules/oauth/con-oauth-authentication-client.adoc[leveloffset=+1]
@@ -19,7 +19,7 @@ Strimzi provides built-in support for tracing for the following Kafka components:

* MirrorMaker to trace messages from a source cluster to a target cluster
* Kafka Connect to trace messages consumed and produced by Kafka Connect
* Kafka Bridge to trace messages between Kafka and HTTP client applications
* HTTP Bridge to trace messages between Kafka and HTTP client applications

Tracing is not supported for Kafka brokers.

20 changes: 10 additions & 10 deletions documentation/modules/configuring/con-config-kafka-bridge.adoc
@@ -5,16 +5,16 @@
// assembly-config.adoc

[id='con-config-kafka-bridge-{context}']
= Configuring the Kafka Bridge
= Configuring the HTTP Bridge

[role="_abstract"]
Update the `spec` properties of the `KafkaBridge` custom resource to configure your Kafka Bridge deployment.
Update the `spec` properties of the `KafkaBridge` custom resource to configure your HTTP Bridge deployment.

In order to prevent issues arising when client consumer requests are processed by different Kafka Bridge instances, address-based routing must be employed to ensure that requests are routed to the right Kafka Bridge instance.
Additionally, each independent Kafka Bridge instance must have a replica.
A Kafka Bridge instance has its own state which is not shared with another instances.
In order to prevent issues arising when client consumer requests are processed by different HTTP Bridge instances, address-based routing must be employed to ensure that requests are routed to the right HTTP Bridge instance.
Additionally, each independent HTTP Bridge instance must have a replica.
An HTTP Bridge instance has its own state, which is not shared with other instances.

For a deeper understanding of the Kafka Bridge and its cluster configuration options, refer to the link:{BookURLBridge}[Using the Kafka Bridge^] guide and the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^].
For a deeper understanding of the HTTP Bridge and its cluster configuration options, refer to the link:{BookURLBridge}[Using the HTTP Bridge^] guide and the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^].

.Example `KafkaBridge` custom resource configuration
[source,yaml,subs="+quotes,attributes"]
@@ -125,13 +125,13 @@ spec:
<4> CORS access specifying selected resources and access methods. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster.
<5> Requests for reservation of supported resources, currently `cpu` and `memory`, and limits to specify the maximum resources that can be consumed.
<6> TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets.
<7> Authentication for the Kafka Bridge cluster, specified as `tls`, `scram-sha-256`, `scram-sha-512`, `plain`, or `oauth`.
By default, the Kafka Bridge connects to Kafka brokers without authentication.
<7> Authentication for the HTTP Bridge cluster, specified as `tls`, `scram-sha-256`, `scram-sha-512`, `plain`, or `oauth`.
By default, the HTTP Bridge connects to Kafka brokers without authentication.
For details on configuring authentication, see the link:{BookURLConfiguring}#type-KafkaBridgeSpec-schema-reference[`KafkaBridgeSpec` schema properties^].
<8> Consumer configuration options.
<9> Producer configuration options.
<10> Kafka Bridge loggers and log levels added directly (`inline`) or indirectly (`external`) through a `ConfigMap`. Custom Log4j configuration must be placed under the `log4j2.properties` key in the `ConfigMap`. You can set log levels to `INFO`, `ERROR`, `WARN`, `TRACE`, `DEBUG`, `FATAL` or `OFF`.
<11> JVM configuration options to optimize performance for the Virtual Machine (VM) running the Kafka Bridge.
<10> HTTP Bridge loggers and log levels added directly (`inline`) or indirectly (`external`) through a `ConfigMap`. Custom Log4j configuration must be placed under the `log4j2.properties` key in the `ConfigMap`. You can set log levels to `INFO`, `ERROR`, `WARN`, `TRACE`, `DEBUG`, `FATAL` or `OFF`.
<11> JVM configuration options to optimize performance for the Virtual Machine (VM) running the HTTP Bridge.
<12> Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
<13> Optional: Container image configuration, which is recommended only in special situations.
<14> Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
@@ -58,16 +58,16 @@ fetch.max.bytes: 10000000
max.partition.fetch.bytes: 10485760
----

It's also possible to configure the producers and consumers used by other Kafka components like Kafka Bridge, Kafka Connect, and MirrorMaker 2 to handle larger messages more effectively.
It's also possible to configure the producers and consumers used by other Kafka components like HTTP Bridge, Kafka Connect, and MirrorMaker 2 to handle larger messages more effectively.

Kafka Bridge:: Configure the Kafka Bridge using specific producer and consumer configuration properties:
HTTP Bridge:: Configure the HTTP Bridge using specific producer and consumer configuration properties:
+
--
* `producer.config` for producers
* `consumer.config` for consumers
--
+
.Example Kafka Bridge configuration
.Example HTTP Bridge configuration
[source,yaml,subs="+attributes"]
----
apiVersion: {KafkaBridgeApiVersion}
@@ -5,12 +5,12 @@
// assembly-deploy-kafka-bridge.adoc

[id='ref-list-of-kafka-bridge-resources-{context}']
= List of Kafka Bridge cluster resources
= List of HTTP Bridge cluster resources

[role="_abstract"]
The following resources are created by the Cluster Operator in the Kubernetes cluster:

<bridge_cluster_name>-bridge:: Deployment which is in charge to create the Kafka Bridge worker node pods.
<bridge_cluster_name>-bridge-service:: Service which exposes the REST interface of the Kafka Bridge cluster.
<bridge_cluster_name>-bridge-config:: ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka broker pods.
<bridge_cluster_name>-bridge:: Pod Disruption Budget configured for the Kafka Bridge worker nodes.
<bridge_cluster_name>-bridge:: Deployment in charge of creating the HTTP Bridge worker node pods.
<bridge_cluster_name>-bridge-service:: Service which exposes the HTTP Bridge REST interface.
<bridge_cluster_name>-bridge-config:: ConfigMap which contains the HTTP Bridge ancillary configuration and is mounted as a volume by the HTTP Bridge pods.
<bridge_cluster_name>-bridge:: Pod Disruption Budget configured for the HTTP Bridge worker nodes.
@@ -6,13 +6,13 @@

[id='con-accessing-kafka-bridge-from-outside-{context}']

= Accessing the Kafka Bridge outside of Kubernetes
= Accessing the HTTP Bridge outside of Kubernetes

[role="_abstract"]
After deployment, the Kafka Bridge can only be accessed by applications running in the same Kubernetes cluster.
After deployment, the HTTP Bridge can only be accessed by applications running in the same Kubernetes cluster.
These applications use the `_<kafka_bridge_name>_-bridge-service` service to access the API.

If you want to make the Kafka Bridge accessible to applications running outside of the Kubernetes cluster, you can expose it manually by creating one of the following features:
If you want to make the HTTP Bridge accessible to applications running outside of the Kubernetes cluster, you can expose it manually by creating one of the following features:

* `LoadBalancer` or `NodePort` type services

@@ -30,4 +30,4 @@ If you decide to create Services, use the labels from the `selector` of the `_<k
strimzi.io/kind: KafkaBridge
#...
----
<1> Name of the Kafka Bridge custom resource in your Kubernetes cluster.
<1> Name of the HTTP Bridge custom resource in your Kubernetes cluster.
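As an illustrative sketch (the service name and port values here are assumptions, not objects created by Strimzi), a `NodePort` service built on those selector labels might look like this:

[source,yaml]
----
# Hypothetical NodePort Service for a KafkaBridge named "my-bridge".
# The selector labels must match those set by the Cluster Operator.
apiVersion: v1
kind: Service
metadata:
  name: my-bridge-bridge-external
spec:
  type: NodePort
  selector:
    strimzi.io/cluster: my-bridge
    strimzi.io/kind: KafkaBridge
  ports:
    - port: 8080
      targetPort: 8080
----

Clients outside the cluster can then reach the bridge on the allocated node port of any cluster node.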
10 changes: 5 additions & 5 deletions documentation/modules/deploying/con-kafka-bridge-concepts.adoc
@@ -5,15 +5,15 @@
// books-rhel/using/master.adoc

[id='con-kafka-bridge-concepts-{context}']
= Using the Kafka Bridge to connect with a Kafka cluster
= Using the HTTP Bridge to connect with a Kafka cluster

[role="_abstract"]
You can use the Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol.
You can use the HTTP Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol.

When you set up the Kafka Bridge you configure HTTP access to the Kafka cluster.
You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as performing other operations through its REST interface.
When you set up the HTTP Bridge, you configure HTTP access to the Kafka cluster.
You can then use the HTTP Bridge to produce and consume messages from the cluster, as well as perform other operations through its REST interface.
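For example, producing records is a single HTTP request. This sketch assumes a bridge reachable at `localhost:8080` and a topic named `my-topic`:

[source,shell]
----
# Produce two JSON records to "my-topic" through the bridge.
# The content type selects the embedded data format (json here).
curl -X POST http://localhost:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"key-1","value":"hello"},{"value":"world"}]}'
----

Consuming follows a similar pattern: first create a named consumer with a `POST` to the `/consumers/<group>` endpoint, then subscribe and poll over HTTP.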

[role="_additional-resources"]
.Additional resources

* For information on installing and using the Kafka Bridge, see link:{BookURLBridge}[Using the Kafka Bridge^].
* For information on installing and using the HTTP Bridge, see link:{BookURLBridge}[Using the HTTP Bridge^].
4 changes: 2 additions & 2 deletions documentation/modules/deploying/con-service-discovery.adoc
@@ -13,7 +13,7 @@ Service discovery makes it easier for client applications running in the same Ku
A service discovery label and annotation are created for the following services:

* Internal Kafka bootstrap service
* Kafka Bridge service
* HTTP Bridge service

Service discovery label:: The service discovery label, `strimzi.io/discovery`, is set to `true` for `Service` resources to make them discoverable for client connections.
Service discovery annotation:: The service discovery annotation provides connection details in JSON format for each service for client applications to use to establish connections.
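The discovery label makes the services easy to find with a label selector, for example:

[source,shell]
----
# List all discoverable Strimzi services (bootstrap and bridge)
# in the current namespace.
kubectl get service -l strimzi.io/discovery=true
----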
@@ -47,7 +47,7 @@ spec:
#...
----

.Example Kafka Bridge service
.Example HTTP Bridge service

[source,yaml,subs="attributes+"]
----
14 changes: 7 additions & 7 deletions documentation/modules/deploying/proc-deploy-kafka-bridge.adoc
@@ -5,10 +5,10 @@
// deploying/assembly_deploy-kafka-bridge.adoc

[id='deploying-kafka-bridge-{context}']
= Deploying Kafka Bridge
= Deploying HTTP Bridge

[role="_abstract"]
This procedure shows how to deploy a Kafka Bridge cluster to your Kubernetes cluster using the Cluster Operator.
This procedure shows how to deploy an HTTP Bridge cluster to your Kubernetes cluster using the Cluster Operator.

The deployment uses a YAML file to provide the specification to create a `KafkaBridge` resource.

@@ -37,7 +37,7 @@ See the link:{BookURLConfiguring}#type-KafkaBridgeSpec-schema-reference[`KafkaBr
Use `[]` (an empty array) to trust the default Java CAs, or specify secrets containing trusted certificates. +
See the link:{BookURLConfiguring}#con-common-configuration-trusted-certificates-reference[`trustedCertificates` properties^] for configuration details.

. Deploy Kafka Bridge to your Kubernetes cluster:
. Deploy HTTP Bridge to your Kubernetes cluster:
+
[source,shell]
----
@@ -58,14 +58,14 @@ NAME READY STATUS RESTARTS
my-bridge-bridge-<pod_id> 1/1 Running 0
----
+
In this example, `my-bridge` is the name of the Kafka Bridge cluster.
In this example, `my-bridge` is the name of the HTTP Bridge cluster.
A pod ID identifies each created pod.
By default, the deployment creates a single Kafka Bridge pod.
By default, the deployment creates a single HTTP Bridge pod.
`READY` shows the number of ready versus expected replicas.
The deployment is successful when the `STATUS` is `Running`.

[role="_additional-resources"]
.Additional resources

* xref:con-config-kafka-bridge-str[Kafka Bridge cluster configuration]
* link:{BookURLBridge}[Using the Kafka Bridge^]
* xref:con-config-kafka-bridge-str[HTTP Bridge cluster configuration]
* link:{BookURLBridge}[Using the HTTP Bridge^]
@@ -5,10 +5,10 @@
// assembly-deploy-kafka-bridge.adoc

[id='proc-exposing-kafka-bridge-service-local-machine-{context}']
= Exposing the Kafka Bridge service to your local machine
= Exposing the HTTP Bridge service to your local machine

[role="_abstract"]
Use port forwarding to expose the Kafka Bridge service to your local machine on http://localhost:8080.
Use port forwarding to expose the HTTP Bridge service to your local machine on http://localhost:8080.

NOTE: Port forwarding is only suitable for development and testing purposes.

@@ -25,7 +25,7 @@ pod/kafka-consumer
pod/my-bridge-bridge-<pod_id>
----

. Connect to the Kafka Bridge pod on port `8080`:
. Connect to the HTTP Bridge pod on port `8080`:
+
[source,shell,subs=attributes+]
----
@@ -34,4 +34,4 @@ kubectl port-forward pod/my-bridge-bridge-<pod_id> 8080:8080 &
+
NOTE: If port 8080 on your local machine is already in use, use an alternative HTTP port, such as `8008`.

API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod.
API requests are now forwarded from port 8080 on your local machine to port 8080 in the HTTP Bridge pod.
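As a quick check (assuming the default forwarded port), you can query the bridge root endpoint, which returns information about the bridge instance as JSON:

[source,shell]
----
# Confirm the bridge is reachable through the forwarded port.
curl http://localhost:8080/
----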
@@ -46,7 +46,7 @@ sed -i '' 's/namespace: .*/namespace: _my-namespace_/' prometheus.yaml
+
Update the `namespaceSelector.matchNames` property with the namespace where the pods to scrape the metrics from are running.
+
`PodMonitor` is used to scrape data directly from pods for Apache Kafka, Operators, the Kafka Bridge and Cruise Control.
`PodMonitor` is used to scrape data directly from pods for Apache Kafka, Operators, the HTTP Bridge, and Cruise Control.
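As a sketch of the shape of that configuration (resource name, labels, and port name are assumptions; use the values from the installation file), a `PodMonitor` targeting bridge pods in a given namespace might look like this:

[source,yaml]
----
# Hypothetical PodMonitor excerpt: scrape metrics from HTTP Bridge
# pods running in the "my-namespace" namespace.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: bridge-metrics
spec:
  selector:
    matchLabels:
      strimzi.io/kind: KafkaBridge
  namespaceSelector:
    matchNames:
      - my-namespace
  podMetricsEndpoints:
    - path: /metrics
      port: rest-api
----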

. Edit the `prometheus.yaml` installation file to include additional configuration for scraping metrics directly from nodes.
+
10 changes: 5 additions & 5 deletions documentation/modules/oauth/proc-oauth-kafka-config.adoc
@@ -14,7 +14,7 @@ You can configure OAuth 2.0 authentication for the following components:

* Kafka Connect
* Kafka MirrorMaker
* Kafka Bridge
* HTTP Bridge

In this scenario, the Kafka component and the authorization server are running in the same cluster.

@@ -33,7 +33,7 @@ The schema reference includes examples of configuration options.

. Create a client secret and mount it to the component as an environment variable.
+
For example, here we are creating a client `Secret` for the Kafka Bridge:
For example, here we are creating a client `Secret` for the HTTP Bridge:
+
[source,yaml,subs="+quotes,attributes"]
----
@@ -60,7 +60,7 @@ For OAuth 2.0 authentication, you can use the following options:
* TLS
--
+
For example, here OAuth 2.0 is assigned to the Kafka Bridge client using a client ID and secret, and TLS:
For example, here OAuth 2.0 is assigned to the HTTP Bridge client using a client ID and secret, and TLS:
+
--
.Example OAuth 2.0 authentication configuration using the client secret
@@ -88,7 +88,7 @@ spec:
<3> Certificates stored in X.509 format within the specified secrets for TLS connection to the authorization server.
--
+
In this example, OAuth 2.0 is assigned to the Kafka Bridge client using a client ID and the location of a client assertion file, with TLS to connect to the authorization server:
In this example, OAuth 2.0 is assigned to the HTTP Bridge client using a client ID and the location of a client assertion file, with TLS to connect to the authorization server:
+
--
.Example OAuth 2.0 authentication configuration using client assertion
@@ -114,7 +114,7 @@ This file is typically added to the deployed pod by an external operator service
Alternatively, use `clientAssertion` to refer to a secret containing the client assertion value.
--
+
Here, OAuth 2.0 is assigned to the Kafka Bridge client using a service account token:
Here, OAuth 2.0 is assigned to the HTTP Bridge client using a service account token:
+
--
.Example OAuth 2.0 authentication configuration using the service account token
@@ -8,7 +8,7 @@
= Enabling tracing in supported Kafka components

[role="_abstract"]
Distributed tracing is supported for MirrorMaker, MirrorMaker 2, Kafka Connect, and the Kafka Bridge.
Distributed tracing is supported for MirrorMaker, MirrorMaker 2, Kafka Connect, and the HTTP Bridge.
Enable tracing using OpenTelemetry by setting the `spec.tracing.type` property to `opentelemetry`.
Configure the custom resource of the component to specify and enable a tracing system using `spec.template` properties.

@@ -26,7 +26,7 @@ Enabling tracing in a resource triggers the following events:

* For MirrorMaker, MirrorMaker 2, and Kafka Connect, the tracing agent initializes a tracer based on the tracing configuration defined in the resource.

* For the Kafka Bridge, a tracer based on the tracing configuration defined in the resource is initialized by the Kafka Bridge itself.
* For the HTTP Bridge, a tracer based on the tracing configuration defined in the resource is initialized by the HTTP Bridge itself.

.Tracing in MirrorMaker 2

@@ -36,9 +36,9 @@ For MirrorMaker 2, messages are traced from the source cluster to the target clu

For Kafka Connect, only messages produced and consumed by Kafka Connect are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems.

.Tracing in the Kafka Bridge
.Tracing in the HTTP Bridge

For the Kafka Bridge, messages produced and consumed by the Kafka Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the Kafka Bridge are also traced.
For the HTTP Bridge, messages produced and consumed by the HTTP Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the HTTP Bridge are also traced.
To have end-to-end tracing, you must configure tracing in your HTTP clients.
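For example, an HTTP client can join the trace by propagating a W3C trace context header on its requests to the bridge (the endpoint, topic, and trace IDs below are illustrative assumptions):

[source,shell]
----
# Hypothetical example: send a traceparent header so the bridge
# continues the client's trace (the IDs shown are dummy values).
curl -X POST http://localhost:8080/topics/my-topic \
  -H 'traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01' \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"value":"traced message"}]}'
----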

.Procedure
@@ -95,7 +95,7 @@ spec:
#...
----

.Example tracing configuration for the Kafka Bridge using OpenTelemetry
.Example tracing configuration for the HTTP Bridge using OpenTelemetry
[source,yaml,subs=attributes+]
----
apiVersion: {KafkaBridgeApiVersion}