diff --git a/documentation/api/io.strimzi.api.kafka.model.bridge.KafkaBridgeSpec.adoc b/documentation/api/io.strimzi.api.kafka.model.bridge.KafkaBridgeSpec.adoc index 12870bd2e73..97204c4aad5 100644 --- a/documentation/api/io.strimzi.api.kafka.model.bridge.KafkaBridgeSpec.adoc +++ b/documentation/api/io.strimzi.api.kafka.model.bridge.KafkaBridgeSpec.adoc @@ -10,7 +10,7 @@ Configuration options relate to: * Producer configuration * HTTP configuration -[id='property-kafka-bridge-logging-{context}'] +[id='property-http-bridge-logging-{context}'] = Logging Kafka Bridge has its own preconfigured loggers: diff --git a/documentation/assemblies/configuring/assembly-config.adoc b/documentation/assemblies/configuring/assembly-config.adoc index 1d430a35488..f9703f615b3 100644 --- a/documentation/assemblies/configuring/assembly-config.adoc +++ b/documentation/assemblies/configuring/assembly-config.adoc @@ -140,7 +140,7 @@ include::../../modules/configuring/proc-manual-restart-mirrormaker2-connector-ta include::../../modules/configuring/con-config-mirrormaker2-producers-consumers.adoc[leveloffset=+2] //`KafkaBridge` resource config -include::../../modules/configuring/con-config-kafka-bridge.adoc[leveloffset=+1] +include::../../modules/configuring/con-config-http-bridge.adoc[leveloffset=+1] //common config examples include::../../modules/configuring/con-common-configuration.adoc[leveloffset=+1] diff --git a/documentation/assemblies/deploying/assembly-deploy-http-bridge.adoc b/documentation/assemblies/deploying/assembly-deploy-http-bridge.adoc new file mode 100644 index 00000000000..075b60ca2ed --- /dev/null +++ b/documentation/assemblies/deploying/assembly-deploy-http-bridge.adoc @@ -0,0 +1,21 @@ +:_mod-docs-content-type: ASSEMBLY + +// This assembly is included in the following assemblies: +// +// assembly-getting-started.adoc +// deploying/assembly-deploy-tasks.adoc + +[id='http-bridge-{context}'] += Deploying HTTP Bridge + +[role="_abstract"] +HTTP Bridge provides an API for integrating HTTP-based clients with a Kafka cluster. 
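+
+For example, an HTTP client can produce messages by sending a POST request to the bridge REST API.
+The following request is an illustrative sketch that assumes a `KafkaBridge` resource named `my-bridge` (reachable within the cluster through its `my-bridge-bridge-service` service on port 8080) and an existing topic named `my-topic`:
+
+[source,shell]
+----
+# Send two JSON-encoded records to the my-topic topic through the bridge REST API
+curl -X POST http://my-bridge-bridge-service:8080/topics/my-topic \
+  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
+  -d '{"records":[{"key":"key-1","value":"hello"},{"value":"world"}]}'
+----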
+
+//Procedure to deploy an HTTP Bridge cluster
+include::../../modules/deploying/proc-deploy-http-bridge.adoc[leveloffset=+1]
+//exposing the bridge to a local machine
+include::../../modules/deploying/proc-exposing-http-bridge-service-local-machine.adoc[leveloffset=+1]
+//accessing the bridge outside the cluster
+include::../../modules/deploying/con-accessing-http-bridge-from-outside.adoc[leveloffset=+1]
+//Resources created for HTTP Bridge
+include::../../modules/configuring/ref-list-of-http-bridge-resources.adoc[leveloffset=+1]
diff --git a/documentation/assemblies/deploying/assembly-deploy-intro.adoc b/documentation/assemblies/deploying/assembly-deploy-intro.adoc
index d041b4f5a13..cfb7452d884 100644
--- a/documentation/assemblies/deploying/assembly-deploy-intro.adoc
+++ b/documentation/assemblies/deploying/assembly-deploy-intro.adoc
@@ -31,7 +31,7 @@ include::assembly-deploy-intro-custom-resources.adoc[leveloffset=+1]
 //operators
 include::assembly-deploy-intro-operators.adoc[leveloffset=+1]
 //info on kafka bridge
-include::../../modules/deploying/con-kafka-bridge-concepts.adoc[leveloffset=+1]
+include::../../modules/deploying/con-http-bridge-concepts.adoc[leveloffset=+1]
 //info on FIPS
 include::../../modules/deploying/con-fips-support.adoc[leveloffset=+1]
 //formatting conventions used in guide
diff --git a/documentation/assemblies/deploying/assembly-deploy-kafka-bridge.adoc b/documentation/assemblies/deploying/assembly-deploy-kafka-bridge.adoc
deleted file mode 100644
index f27a932ef5b..00000000000
--- a/documentation/assemblies/deploying/assembly-deploy-kafka-bridge.adoc
+++ /dev/null
@@ -1,21 +0,0 @@
-:_mod-docs-content-type: ASSEMBLY
-
-// This assembly is included in the following assemblies:
-//
-// assembly-getting-started.adoc
-// deploying/assembly-deploy-tasks.adoc
-
-[id='kafka-bridge-{context}']
-= Deploying Kafka Bridge
-
-[role="_abstract"]
-Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster.
-
-//Procedure to deploy a Kafka Bridge cluster
-include::../../modules/deploying/proc-deploy-kafka-bridge.adoc[leveloffset=+1]
-//exposing the bridge to a local machine
-include::../../modules/deploying/proc-exposing-kafka-bridge-service-local-machine.adoc[leveloffset=+1]
-//accessing the bridge outside the cluster
-include::../../modules/deploying/con-accessing-kafka-bridge-from-outside.adoc[leveloffset=+1]
-//Resources created for Kafka Bridge
-include::../../modules/configuring/ref-list-of-kafka-bridge-resources.adoc[leveloffset=+1]
diff --git a/documentation/assemblies/deploying/assembly-deploy-tasks.adoc b/documentation/assemblies/deploying/assembly-deploy-tasks.adoc
index a66a1005012..0e356d01e0d 100644
--- a/documentation/assemblies/deploying/assembly-deploy-tasks.adoc
+++ b/documentation/assemblies/deploying/assembly-deploy-tasks.adoc
@@ -29,7 +29,7 @@ The steps to deploy Strimzi using the installation files are as follows:
 . Optionally, deploy the following Kafka components according to your requirements:
 * xref:kafka-connect-{context}[Kafka Connect]
 * xref:kafka-mirror-maker-{context}[Kafka MirrorMaker]
-* xref:kafka-bridge-{context}[Kafka Bridge]
+* xref:http-bridge-{context}[HTTP Bridge]

 NOTE: To run the commands in this guide, a Kubernetes user must have the rights to manage role-based access control (RBAC) and CRDs.
@@ -44,7 +44,7 @@ include::assembly-deploy-kafka-connect-with-plugins.adoc[leveloffset=+1] //Procedure to deploy Kafka MirrorMaker include::assembly-deploy-kafka-mirror-maker.adoc[leveloffset=+1] //Procedure to deploy Kafka Bridge -include::assembly-deploy-kafka-bridge.adoc[leveloffset=+1] +include::assembly-deploy-http-bridge.adoc[leveloffset=+1] //Alternative standalone deployment of Topic Operator and Cluster Operator include::assembly-deploy-standalone-operators.adoc[leveloffset=+1] diff --git a/documentation/assemblies/metrics/assembly-metrics-config-files.adoc b/documentation/assemblies/metrics/assembly-metrics-config-files.adoc index 31e462334f9..389c4d358c0 100644 --- a/documentation/assemblies/metrics/assembly-metrics-config-files.adoc +++ b/documentation/assemblies/metrics/assembly-metrics-config-files.adoc @@ -82,11 +82,11 @@ metrics <8> Alerting rules examples for use with Prometheus Alertmanager (deployed with Prometheus). <9> Installation resource file for the Prometheus image. <10> Grafana dashboards for components using the Strimzi Metrics Reporter. -<11> `KafkaBridge` resource for deploying Kafka Bridge with Strimzi Metrics Reporter. +<11> `KafkaBridge` resource for deploying HTTP Bridge with Strimzi Metrics Reporter. <12> `KafkaConnect` resource for deploying Kafka Connect with Strimzi Metrics Reporter. <13> `Kafka` resource for deploying Kafka with Strimzi Metrics Reporter. <14> `KafkaMirrorMaker2` resource for deploying MirrorMaker 2 with Strimzi Metrics Reporter. -<15> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka Bridge. +<15> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for HTTP Bridge. <16> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka Connect. <17> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Cruise Control. <18> Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka. 
diff --git a/documentation/assemblies/oauth/assembly-oauth-authentication.adoc b/documentation/assemblies/oauth/assembly-oauth-authentication.adoc index b82ca6bccaf..8010bfff1fc 100644 --- a/documentation/assemblies/oauth/assembly-oauth-authentication.adoc +++ b/documentation/assemblies/oauth/assembly-oauth-authentication.adoc @@ -16,7 +16,7 @@ Brokers and clients communicate with the authorization server, as necessary, to For a deployment of Strimzi, OAuth 2.0 integration provides the following support: * Server-side OAuth 2.0 authentication for Kafka brokers -* Client-side OAuth 2.0 authentication for Kafka MirrorMaker, Kafka Connect, and the Kafka Bridge +* Client-side OAuth 2.0 authentication for Kafka MirrorMaker, Kafka Connect, and the HTTP Bridge include::../../modules/oauth/con-oauth-authentication-broker.adoc[leveloffset=+1] include::../../modules/oauth/con-oauth-authentication-client.adoc[leveloffset=+1] diff --git a/documentation/assemblies/tracing/assembly-distributed-tracing.adoc b/documentation/assemblies/tracing/assembly-distributed-tracing.adoc index b0ae5f8ae78..909a5efa904 100644 --- a/documentation/assemblies/tracing/assembly-distributed-tracing.adoc +++ b/documentation/assemblies/tracing/assembly-distributed-tracing.adoc @@ -19,7 +19,7 @@ Strimzi provides built-in support for tracing for the following Kafka components * MirrorMaker to trace messages from a source cluster to a target cluster * Kafka Connect to trace messages consumed and produced by Kafka Connect -* Kafka Bridge to trace messages between Kafka and HTTP client applications +* HTTP Bridge to trace messages between Kafka and HTTP client applications Tracing is not supported for Kafka brokers. diff --git a/documentation/modules/configuring/con-config-examples.adoc b/documentation/modules/configuring/con-config-examples.adoc index e6eb21068fa..ec4769820d8 100644 --- a/documentation/modules/configuring/con-config-examples.adoc +++ b/documentation/modules/configuring/con-config-examples.adoc @@ -50,4 +50,4 @@ examples `Kafka` configuration examples enable auto-rebalancing on scaling events and set default optimization goals. `KakaRebalance` configuration examples set proposal-specific optimization goals and generate optimization proposals in various supported modes. <8> `KafkaConnect` and `KafkaConnector` custom resource configuration for a deployment of Kafka Connect. Includes example configurations for a single or multi-node deployment. -<9> `KafkaBridge` custom resource configuration for a deployment of Kafka Bridge. \ No newline at end of file +<9> `KafkaBridge` custom resource configuration for a deployment of HTTP Bridge. \ No newline at end of file diff --git a/documentation/modules/configuring/con-config-kafka-bridge.adoc b/documentation/modules/configuring/con-config-http-bridge.adoc similarity index 79% rename from documentation/modules/configuring/con-config-kafka-bridge.adoc rename to documentation/modules/configuring/con-config-http-bridge.adoc index 8e015f4d49d..975a2d8a3d7 100644 --- a/documentation/modules/configuring/con-config-kafka-bridge.adoc +++ b/documentation/modules/configuring/con-config-http-bridge.adoc @@ -4,17 +4,17 @@ // // assembly-config.adoc -[id='con-config-kafka-bridge-{context}'] -= Configuring the Kafka Bridge +[id='con-config-http-bridge-{context}'] += Configuring the HTTP Bridge [role="_abstract"] -Update the `spec` properties of the `KafkaBridge` custom resource to configure your Kafka Bridge deployment. 
+Update the `spec` properties of the `KafkaBridge` custom resource to configure your HTTP Bridge deployment.

-In order to prevent issues arising when client consumer requests are processed by different Kafka Bridge instances, address-based routing must be employed to ensure that requests are routed to the right Kafka Bridge instance.
-Additionally, each independent Kafka Bridge instance must have a replica.
-A Kafka Bridge instance has its own state which is not shared with another instances.
+To prevent issues when client consumer requests are processed by different HTTP Bridge instances, address-based routing must be employed to ensure that requests are routed to the correct HTTP Bridge instance.
+Additionally, each independent HTTP Bridge instance must have a replica.
+An HTTP Bridge instance has its own state, which is not shared with other instances.

-For a deeper understanding of the Kafka Bridge and its cluster configuration options, refer to the link:{BookURLBridge}[Using the Kafka Bridge^] guide and the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^].
+For a deeper understanding of the HTTP Bridge and its cluster configuration options, refer to the link:{BookURLBridge}[Using the HTTP Bridge^] guide and the link:{BookURLConfiguring}[Strimzi Custom Resource API Reference^].

 .Example `KafkaBridge` custom resource configuration
 [source,yaml,subs="+quotes,attributes"]
 ----
@@ -125,13 +125,13 @@ spec:
 <4> CORS access specifying selected resources and access methods. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster.
 <5> Requests for reservation of supported resources, currently `cpu` and `memory`, and limits to specify the maximum resources that can be consumed.
 <6> TLS configuration for encrypted connections to the Kafka cluster, with trusted certificates stored in X.509 format within the specified secrets.
-<7> Authentication for the Kafka Bridge cluster, specified as `tls`, `scram-sha-256`, `scram-sha-512`, `plain`, or `oauth`.
-By default, the Kafka Bridge connects to Kafka brokers without authentication.
+<7> Authentication for the HTTP Bridge cluster, specified as `tls`, `scram-sha-256`, `scram-sha-512`, `plain`, or `oauth`.
+By default, the HTTP Bridge connects to Kafka brokers without authentication.
 For details on configuring authentication, see the link:{BookURLConfiguring}#type-KafkaBridgeSpec-schema-reference[`KafkaBridgeSpec` schema properties^]
 <8> Consumer configuration options.
 <9> Producer configuration options.
-<10> Kafka Bridge loggers and log levels added directly (`inline`) or indirectly (`external`) through a `ConfigMap`. Custom Log4j configuration must be placed under the `log4j2.properties` key in the `ConfigMap`. You can set log levels to `INFO`, `ERROR`, `WARN`, `TRACE`, `DEBUG`, `FATAL` or `OFF`.
-<11> JVM configuration options to optimize performance for the Virtual Machine (VM) running the Kafka Bridge.
+<10> HTTP Bridge loggers and log levels added directly (`inline`) or indirectly (`external`) through a `ConfigMap`. Custom Log4j configuration must be placed under the `log4j2.properties` key in the `ConfigMap`. You can set log levels to `INFO`, `ERROR`, `WARN`, `TRACE`, `DEBUG`, `FATAL` or `OFF`.
+<11> JVM configuration options to optimize performance for the Virtual Machine (VM) running the HTTP Bridge.
 <12> Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness).
 <13> Optional: Container image configuration, which is recommended only in special situations.
 <14> Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname.
diff --git a/documentation/modules/configuring/con-config-large-messages.adoc b/documentation/modules/configuring/con-config-large-messages.adoc
index e3b4f860474..a3d3a43d5ca 100644
--- a/documentation/modules/configuring/con-config-large-messages.adoc
+++ b/documentation/modules/configuring/con-config-large-messages.adoc
@@ -58,16 +58,16 @@ fetch.max.bytes: 10000000
 max.partition.fetch.bytes: 10485760
 ----

-It's also possible to configure the producers and consumers used by other Kafka components like Kafka Bridge, Kafka Connect, and MirrorMaker 2 to handle larger messages more effectively.
+It's also possible to configure the producers and consumers used by other Kafka components like HTTP Bridge, Kafka Connect, and MirrorMaker 2 to handle larger messages more effectively.

-Kafka Bridge:: Configure the Kafka Bridge using specific producer and consumer configuration properties:
+HTTP Bridge:: Configure the HTTP Bridge using specific producer and consumer configuration properties:
 +
 --
 * `producer.config` for producers
 * `consumer.config` for consumers
 --
 +
-.Example Kafka Bridge configuration
+.Example HTTP Bridge configuration
 [source,yaml,subs="+attributes"]
 ----
 apiVersion: {KafkaBridgeApiVersion}
diff --git a/documentation/modules/configuring/ref-list-of-kafka-bridge-resources.adoc b/documentation/modules/configuring/ref-list-of-http-bridge-resources.adoc
similarity index 56%
rename from documentation/modules/configuring/ref-list-of-kafka-bridge-resources.adoc
rename to documentation/modules/configuring/ref-list-of-http-bridge-resources.adoc
index 4988a3e5314..e433bf4014a 100644
--- a/documentation/modules/configuring/ref-list-of-kafka-bridge-resources.adoc
+++ b/documentation/modules/configuring/ref-list-of-http-bridge-resources.adoc
@@ -2,15 +2,15 @@
 // Module included in the following assemblies:
 //
-// assembly-deploy-kafka-bridge.adoc
+// assembly-deploy-http-bridge.adoc

-[id='ref-list-of-kafka-bridge-resources-{context}']
-= List of Kafka Bridge cluster resources
+[id='ref-list-of-http-bridge-resources-{context}']
+= List of HTTP Bridge cluster resources

 [role="_abstract"]
 The following resources are created by the Cluster Operator in the Kubernetes cluster:

--bridge:: Deployment which is in charge to create the Kafka Bridge worker node pods.
--bridge-service:: Service which exposes the REST interface of the Kafka Bridge cluster.
--bridge-config:: ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka broker pods.
--bridge:: Pod Disruption Budget configured for the Kafka Bridge worker nodes.
+-bridge:: Deployment in charge of creating the HTTP Bridge worker node pods.
+-bridge-service:: Service which exposes the HTTP Bridge REST interface.
+-bridge-config:: ConfigMap which contains the HTTP Bridge ancillary configuration and is mounted as a volume by the HTTP Bridge pods.
+-bridge:: Pod Disruption Budget configured for the HTTP Bridge worker nodes.
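+
+To check a deployment, you can list the resources created for a bridge by label.
+The following command is an illustrative sketch that assumes a `KafkaBridge` resource named `my-bridge` and the standard `strimzi.io/cluster` label applied by the Cluster Operator:
+
+[source,shell]
+----
+# List the Deployment, Service, ConfigMap, and PodDisruptionBudget created for the bridge
+kubectl get deployment,service,configmap,poddisruptionbudget -l strimzi.io/cluster=my-bridge
+----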
diff --git a/documentation/modules/deploying/con-accessing-kafka-bridge-from-outside.adoc b/documentation/modules/deploying/con-accessing-http-bridge-from-outside.adoc
similarity index 56%
rename from documentation/modules/deploying/con-accessing-kafka-bridge-from-outside.adoc
rename to documentation/modules/deploying/con-accessing-http-bridge-from-outside.adoc
index b8006fd3ab0..bb28d36990a 100644
--- a/documentation/modules/deploying/con-accessing-kafka-bridge-from-outside.adoc
+++ b/documentation/modules/deploying/con-accessing-http-bridge-from-outside.adoc
@@ -2,17 +2,17 @@
 // This assembly is included in the following assemblies:
 //
-// assembly-kafka-bridge-overview.adoc
+// assembly-http-bridge-overview.adoc

-[id='con-accessing-kafka-bridge-from-outside-{context}']
+[id='con-accessing-http-bridge-from-outside-{context}']

-= Accessing the Kafka Bridge outside of Kubernetes
+= Accessing the HTTP Bridge outside of Kubernetes

 [role="_abstract"]
-After deployment, the Kafka Bridge can only be accessed by applications running in the same Kubernetes cluster.
+After deployment, the HTTP Bridge can only be accessed by applications running in the same Kubernetes cluster.
 These applications use the `__-bridge-service` service to access the API.

-If you want to make the Kafka Bridge accessible to applications running outside of the Kubernetes cluster, you can expose it manually by creating one of the following features:
+If you want to make the HTTP Bridge accessible to applications running outside of the Kubernetes cluster, you can expose it manually by creating one of the following features:

 * `LoadBalancer` or `NodePort` type services

@@ -30,4 +30,4 @@ If you decide to create Services, use the labels from the `selector` of the `_
-<1> Name of the Kafka Bridge custom resource in your Kubernetes cluster.
+<1> Name of the HTTP Bridge custom resource in your Kubernetes cluster.
diff --git a/documentation/modules/deploying/con-http-bridge-concepts.adoc b/documentation/modules/deploying/con-http-bridge-concepts.adoc
new file mode 100644
index 00000000000..f92ca60daf2
--- /dev/null
+++ b/documentation/modules/deploying/con-http-bridge-concepts.adoc
@@ -0,0 +1,19 @@
+:_mod-docs-content-type: CONCEPT
+
+// Module included in the following assemblies:
+//
+// books-rhel/using/master.adoc
+
+[id='con-http-bridge-concepts-{context}']
+= Using the HTTP Bridge to connect with a Kafka cluster
+
+[role="_abstract"]
+You can use the HTTP Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol.
+
+When you set up the HTTP Bridge, you configure HTTP access to the Kafka cluster.
+You can then use the HTTP Bridge to produce and consume messages from the cluster, as well as perform other operations through its REST interface.
+
+[role="_additional-resources"]
+.Additional resources
+
+* For information on installing and using the HTTP Bridge, see link:{BookURLBridge}[Using the HTTP Bridge^].
diff --git a/documentation/modules/deploying/con-kafka-bridge-concepts.adoc b/documentation/modules/deploying/con-kafka-bridge-concepts.adoc
deleted file mode 100644
index 63b1eba8b48..00000000000
--- a/documentation/modules/deploying/con-kafka-bridge-concepts.adoc
+++ /dev/null
@@ -1,19 +0,0 @@
-:_mod-docs-content-type: CONCEPT
-
-// Module included in the following assemblies:
-//
-// books-rhel/using/master.adoc
-
-[id='con-kafka-bridge-concepts-{context}']
-= Using the Kafka Bridge to connect with a Kafka cluster
-
-[role="_abstract"]
-You can use the Kafka Bridge API to create and manage consumers and send and receive records over HTTP rather than the native Kafka protocol.
-
-When you set up the Kafka Bridge you configure HTTP access to the Kafka cluster.
-You can then use the Kafka Bridge to produce and consume messages from the cluster, as well as performing other operations through its REST interface.
-
-[role="_additional-resources"]
-.Additional resources
-
-* For information on installing and using the Kafka Bridge, see link:{BookURLBridge}[Using the Kafka Bridge^].
diff --git a/documentation/modules/deploying/con-service-discovery.adoc b/documentation/modules/deploying/con-service-discovery.adoc
index bd2d9e487d6..65b5f8d1a36 100644
--- a/documentation/modules/deploying/con-service-discovery.adoc
+++ b/documentation/modules/deploying/con-service-discovery.adoc
@@ -13,7 +13,7 @@ Service discovery makes it easier for client applications running in the same Ku
 A service discovery label and annotation are created for the following services:

 * Internal Kafka bootstrap service
-* Kafka Bridge service
+* HTTP Bridge service

 Service discovery label:: The service discovery label, `strimzi.io/discovery`, is set to `true` for `Service` resources to make them discoverable for client connections.
 Service discovery annotation:: The service discovery annotation provides connection details in JSON format for each service for client applications to use to establish connections.
@@ -47,7 +47,7 @@ spec:
 #...
 ----

-.Example Kafka Bridge service
+.Example HTTP Bridge service
 [source,yaml,subs="attributes+"]
 ----
diff --git a/documentation/modules/deploying/proc-deploy-kafka-bridge.adoc b/documentation/modules/deploying/proc-deploy-http-bridge.adoc
similarity index 78%
rename from documentation/modules/deploying/proc-deploy-kafka-bridge.adoc
rename to documentation/modules/deploying/proc-deploy-http-bridge.adoc
index d1b2a169ba7..7ba355e59ba 100644
--- a/documentation/modules/deploying/proc-deploy-kafka-bridge.adoc
+++ b/documentation/modules/deploying/proc-deploy-http-bridge.adoc
@@ -2,13 +2,13 @@
 // Module included in the following assemblies:
 //
-// deploying/assembly_deploy-kafka-bridge.adoc
+// deploying/assembly_deploy-http-bridge.adoc

-[id='deploying-kafka-bridge-{context}']
-= Deploying Kafka Bridge
+[id='deploying-http-bridge-{context}']
+= Deploying HTTP Bridge

 [role="_abstract"]
-This procedure shows how to deploy a Kafka Bridge cluster to your Kubernetes cluster using the Cluster Operator.
+This procedure shows how to deploy an HTTP Bridge cluster to your Kubernetes cluster using the Cluster Operator.

 The deployment uses a YAML file to provide the specification to create a `KafkaBridge` resource.

@@ -37,7 +37,7 @@ See the link:{BookURLConfiguring}#type-KafkaBridgeSpec-schema-reference[`KafkaBr
 Use `[]` (an empty array) to trust the default Java CAs, or specify secrets containing trusted certificates.
+ See the link:{BookURLConfiguring}#con-common-configuration-trusted-certificates-reference[`trustedCertificates` properties^] for configuration details. -. Deploy Kafka Bridge to your Kubernetes cluster: +. Deploy HTTP Bridge to your Kubernetes cluster: + [source,shell] ---- @@ -58,14 +58,14 @@ NAME READY STATUS RESTARTS my-bridge-bridge- 1/1 Running 0 ---- + -In this example, `my-bridge` is the name of the Kafka Bridge cluster. +In this example, `my-bridge` is the name of the HTTP Bridge cluster. A pod ID identifies each created pod. -By default, the deployment creates a single Kafka Bridge pod. +By default, the deployment creates a single HTTP Bridge pod. `READY` shows the number of ready versus expected replicas. The deployment is successful when the `STATUS` is `Running`. [role="_additional-resources"] .Additional resources -* xref:con-config-kafka-bridge-str[Kafka Bridge cluster configuration] -* link:{BookURLBridge}[Using the Kafka Bridge^] +* xref:con-config-http-bridge-str[HTTP Bridge cluster configuration] +* link:{BookURLBridge}[Using the HTTP Bridge^] diff --git a/documentation/modules/deploying/proc-exposing-kafka-bridge-service-local-machine.adoc b/documentation/modules/deploying/proc-exposing-http-bridge-service-local-machine.adoc similarity index 65% rename from documentation/modules/deploying/proc-exposing-kafka-bridge-service-local-machine.adoc rename to documentation/modules/deploying/proc-exposing-http-bridge-service-local-machine.adoc index 1ad68b55265..7afc70a8565 100644 --- a/documentation/modules/deploying/proc-exposing-kafka-bridge-service-local-machine.adoc +++ b/documentation/modules/deploying/proc-exposing-http-bridge-service-local-machine.adoc @@ -2,13 +2,13 @@ // Module included in the following assemblies: // -// assembly-deploy-kafka-bridge.adoc +// assembly-deploy-http-bridge.adoc -[id='proc-exposing-kafka-bridge-service-local-machine-{context}'] -= Exposing the Kafka Bridge service to your local machine +[id='proc-exposing-http-bridge-service-local-machine-{context}'] += Exposing the HTTP Bridge service to your local machine [role="_abstract"] -Use port forwarding to expose the Kafka Bridge service to your local machine on http://localhost:8080. +Use port forwarding to expose the HTTP Bridge service to your local machine on http://localhost:8080. NOTE: Port forwarding is only suitable for development and testing purposes. @@ -25,7 +25,7 @@ pod/kafka-consumer pod/my-bridge-bridge- ---- -. Connect to the Kafka Bridge pod on port `8080`: +. Connect to the HTTP Bridge pod on port `8080`: + [source,shell,subs=attributes+] ---- @@ -34,4 +34,4 @@ kubectl port-forward pod/my-bridge-bridge- 8080:8080 & + NOTE: If port 8080 on your local machine is already in use, use an alternative HTTP port, such as `8008`. -API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod. +API requests are now forwarded from port 8080 on your local machine to port 8080 in the HTTP Bridge pod. diff --git a/documentation/modules/glossary/k.adoc b/documentation/modules/glossary/k.adoc index 5fbce732f27..8209f4942c5 100644 --- a/documentation/modules/glossary/k.adoc +++ b/documentation/modules/glossary/k.adoc @@ -9,15 +9,15 @@ A custom resource for deploying and configuring a Kafka cluster, including setti For more information, see the link:{BookURLConfiguring}#type-Kafka-reference[`Kafka` schema reference^]. 
-== Kafka Bridge
-[id="glossary-kafka-bridge_{context}"]
+== HTTP Bridge
+[id="glossary-http-bridge_{context}"]

 Provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster.

-For more information, see link:{BookURLBridge}[Using the Kafka Bridge^].
+For more information, see link:{BookURLBridge}[Using the HTTP Bridge^].

 == KafkaBridge (custom resource)
 [id="glossary-kafkabridge-cr_{context}"]

-A custom resource used to deploy and configure a Kafka Bridge instance, specifying replicas, authentication, and connection details.
+A custom resource used to deploy and configure an HTTP Bridge instance, specifying replicas, authentication, and connection details.

 For more information, see the link:{BookURLConfiguring}#type-KafkaBridge-reference[`KafkaBridge` schema reference^].

diff --git a/documentation/modules/metrics/proc_metrics-deploying-prometheus.adoc b/documentation/modules/metrics/proc_metrics-deploying-prometheus.adoc
index 300c1fe189d..0bf4ea5982a 100644
--- a/documentation/modules/metrics/proc_metrics-deploying-prometheus.adoc
+++ b/documentation/modules/metrics/proc_metrics-deploying-prometheus.adoc
@@ -46,7 +46,7 @@ sed -i '' 's/namespace: .*/namespace: _my-namespace_/' prometheus.yaml
 +
 Update the `namespaceSelector.matchNames` property with the namespace where the pods to scrape the metrics from are running.
 +
-`PodMonitor` is used to scrape data directly from pods for Apache Kafka, Operators, the Kafka Bridge and Cruise Control.
+`PodMonitor` is used to scrape data directly from pods for Apache Kafka, Operators, the HTTP Bridge, and Cruise Control.

 . Edit the `prometheus.yaml` installation file to include additional configuration for scraping metrics directly from nodes.
 +
diff --git a/documentation/modules/oauth/proc-oauth-kafka-config.adoc b/documentation/modules/oauth/proc-oauth-kafka-config.adoc
index 25c7f17f46f..138ac733640 100644
--- a/documentation/modules/oauth/proc-oauth-kafka-config.adoc
+++ b/documentation/modules/oauth/proc-oauth-kafka-config.adoc
@@ -14,7 +14,7 @@ You can configure OAuth 2.0 authentication for the following components:

 * Kafka Connect
 * Kafka MirrorMaker
-* Kafka Bridge
+* HTTP Bridge

 In this scenario, the Kafka component and the authorization server are running in the same cluster.

@@ -33,7 +33,7 @@ The schema reference includes examples of configuration options.

 . Create a client secret and mount it to the component as an environment variable.
 +
-For example, here we are creating a client `Secret` for the Kafka Bridge:
+For example, here we are creating a client `Secret` for the HTTP Bridge:
 +
 [source,yaml,subs="+quotes,attributes"]
 ----
@@ -60,7 +60,7 @@ For OAuth 2.0 authentication, you can use the following options:

 * TLS
 --
 +
-For example, here OAuth 2.0 is assigned to the Kafka Bridge client using a client ID and secret, and TLS:
+For example, here OAuth 2.0 is assigned to the HTTP Bridge client using a client ID and secret, and TLS:
 +
 --
 .Example OAuth 2.0 authentication configuration using the client secret
@@ -88,7 +88,7 @@ spec:
 <3> Certificates stored in X.509 format within the specified secrets for TLS connection to the authorization server.
-- + -In this example, OAuth 2.0 is assigned to the Kafka Bridge client using a client ID and the location of a client assertion file, with TLS to connect to the authorization server: +In this example, OAuth 2.0 is assigned to the HTTP Bridge client using a client ID and the location of a client assertion file, with TLS to connect to the authorization server: + -- .Example OAuth 2.0 authentication configuration using client assertion @@ -114,7 +114,7 @@ This file is typically added to the deployed pod by an external operator service Alternatively, use `clientAssertion` to refer to a secret containing the client assertion value. -- + -Here, OAuth 2.0 is assigned to the Kafka Bridge client using a service account token: +Here, OAuth 2.0 is assigned to the HTTP Bridge client using a service account token: + -- .Example OAuth 2.0 authentication configuration using the service account token diff --git a/documentation/modules/overview/con-overview-components-kafka-bridge-clients.adoc b/documentation/modules/overview/con-overview-components-http-bridge-clients.adoc similarity index 90% rename from documentation/modules/overview/con-overview-components-kafka-bridge-clients.adoc rename to documentation/modules/overview/con-overview-components-http-bridge-clients.adoc index d5b8949eb0e..925b92e49c7 100644 --- a/documentation/modules/overview/con-overview-components-kafka-bridge-clients.adoc +++ b/documentation/modules/overview/con-overview-components-http-bridge-clients.adoc @@ -3,9 +3,9 @@ // Module included in the following assemblies: // // overview/overview.adoc -// assembly-kafka-bridge-overview.adoc +// assembly-http-bridge-overview.adoc -[id='con-overview-components-kafka-bridge-clients_{context}'] +[id='con-overview-components-http-bridge-clients_{context}'] = Supported clients for the Kafka Bridge diff --git a/documentation/modules/overview/con-overview-components-kafka-bridge.adoc b/documentation/modules/overview/con-overview-components-http-bridge.adoc similarity index 97% rename from documentation/modules/overview/con-overview-components-kafka-bridge.adoc rename to documentation/modules/overview/con-overview-components-http-bridge.adoc index 871177272f3..4b7c92f21e2 100644 --- a/documentation/modules/overview/con-overview-components-kafka-bridge.adoc +++ b/documentation/modules/overview/con-overview-components-http-bridge.adoc @@ -4,7 +4,7 @@ // // // overview/overview.adoc -[id="overview-components-kafka-bridge_{context}"] +[id="overview-components-http-bridge_{context}"] = Kafka Bridge interface [role="_abstract"] diff --git a/documentation/modules/tracing/proc-enabling-tracing-in-connect-mirror-maker-bridge-resources.adoc b/documentation/modules/tracing/proc-enabling-tracing-in-connect-mirror-maker-bridge-resources.adoc index b46c152bac7..97edd6dd6c2 100644 --- a/documentation/modules/tracing/proc-enabling-tracing-in-connect-mirror-maker-bridge-resources.adoc +++ b/documentation/modules/tracing/proc-enabling-tracing-in-connect-mirror-maker-bridge-resources.adoc @@ -8,7 +8,7 @@ = Enabling tracing in supported Kafka components [role="_abstract"] -Distributed tracing is supported for MirrorMaker, MirrorMaker 2, Kafka Connect, and the Kafka Bridge. +Distributed tracing is supported for MirrorMaker, MirrorMaker 2, Kafka Connect, and the HTTP Bridge. Enable tracing using OpenTelemetry by setting the `spec.tracing.type` property to `opentelemetry`. Configure the custom resource of the component to specify and enable a tracing system using `spec.template` properties. 
@@ -26,7 +26,7 @@ Enabling tracing in a resource triggers the following events: * For MirrorMaker, MirrorMaker 2, and Kafka Connect, the tracing agent initializes a tracer based on the tracing configuration defined in the resource. -* For the Kafka Bridge, a tracer based on the tracing configuration defined in the resource is initialized by the Kafka Bridge itself. +* For the HTTP Bridge, a tracer based on the tracing configuration defined in the resource is initialized by the HTTP Bridge itself. .Tracing in MirrorMaker 2 @@ -36,9 +36,9 @@ For MirrorMaker 2, messages are traced from the source cluster to the target clu For Kafka Connect, only messages produced and consumed by Kafka Connect are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. -.Tracing in the Kafka Bridge +.Tracing in the HTTP Bridge -For the Kafka Bridge, messages produced and consumed by the Kafka Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the Kafka Bridge are also traced. +For the HTTP Bridge, messages produced and consumed by the HTTP Bridge are traced. Incoming HTTP requests from client applications to send and receive messages through the HTTP Bridge are also traced. To have end-to-end tracing, you must configure tracing in your HTTP clients. .Procedure @@ -95,7 +95,7 @@ spec: #... ---- -.Example tracing configuration for the Kafka Bridge using OpenTelemetry +.Example tracing configuration for the HTTP Bridge using OpenTelemetry [source,yaml,subs=attributes+] ---- apiVersion: {KafkaBridgeApiVersion} diff --git a/documentation/overview/overview.adoc b/documentation/overview/overview.adoc index 1911b2d7d57..dfc2a12992f 100644 --- a/documentation/overview/overview.adoc +++ b/documentation/overview/overview.adoc @@ -20,9 +20,9 @@ include::assemblies/overview/assembly-kafka-concepts.adoc[leveloffset=+1] include::assemblies/overview/assembly-kafka-connect-components.adoc[leveloffset=+1] //MirrorMaker replication modes include::modules/overview/con-overview-mirrormaker2.adoc[leveloffset=+1] -//description of kafka bridge -include::modules/overview/con-overview-components-kafka-bridge.adoc[leveloffset=+1] -include::modules/overview/con-overview-components-kafka-bridge-clients.adoc[leveloffset=+2] +//description of HTTP bridge +include::modules/overview/con-overview-components-http-bridge.adoc[leveloffset=+1] +include::modules/overview/con-overview-components-http-bridge-clients.adoc[leveloffset=+2] //Main configuration points include::modules/overview/con-configuration-points.adoc[leveloffset=+1] //security options