diff --git a/documentation/assemblies/assembly-http-bridge-config.adoc b/documentation/assemblies/assembly-http-bridge-config.adoc new file mode 100644 index 000000000..b2538b2f6 --- /dev/null +++ b/documentation/assemblies/assembly-http-bridge-config.adoc @@ -0,0 +1,19 @@ +// This assembly is included in the following assemblies: +// +// bridge.adoc + +[id='assembly-http-bridge-config-{context}'] += HTTP Bridge configuration + +[role="_abstract"] +Configure a deployment of the HTTP Bridge with Kafka-related properties and specify the HTTP connection details needed to interact with Kafka. +Additionally, enable metrics in Prometheus format using either the https://github.com/prometheus/jmx_exporter[Prometheus JMX Exporter] or the https://github.com/strimzi/metrics-reporter[Strimzi Metrics Reporter]. +You can also use configuration properties to enable and use distributed tracing with the HTTP Bridge. +Distributed tracing allows you to track the progress of transactions between applications in a distributed system. + +NOTE: Use the `KafkaBridge` resource to configure properties when you are xref:overview-components-running-http-bridge-cluster-{context}[running the HTTP Bridge on Kubernetes]. 
+ +include::modules/proc-configuring-http-bridge.adoc[leveloffset=+1] +include::modules/proc-configuring-http-bridge-jmx-metrics.adoc[leveloffset=+1] +include::modules/proc-configuring-http-bridge-smr-metrics.adoc[leveloffset=+1] +include::modules/proc-configuring-http-bridge-tracing.adoc[leveloffset=+1] diff --git a/documentation/assemblies/assembly-http-bridge-overview.adoc b/documentation/assemblies/assembly-http-bridge-overview.adoc new file mode 100644 index 000000000..6780a5c2d --- /dev/null +++ b/documentation/assemblies/assembly-http-bridge-overview.adoc @@ -0,0 +1,29 @@ +// This assembly is included in the following assemblies: +// +// bridge.adoc + +[id='assembly-http-bridge-overview-{context}'] += HTTP Bridge overview + +[role="_abstract"] +Use the HTTP Bridge to make HTTP requests to a Kafka cluster. + +You can use the HTTP Bridge to integrate HTTP client applications with your Kafka cluster. + +.HTTP client integration + +image:kafka-bridge.png[Internal and external HTTP producers and consumers exchange data with the Kafka brokers through the HTTP Bridge] + +include::modules/con-overview-running-http-bridge.adoc[leveloffset=+1] + +include::modules/con-overview-components-http-bridge.adoc[leveloffset=+1] + +include::modules/con-overview-open-api-spec-http-bridge.adoc[leveloffset=+1] + +include::modules/con-securing-http-bridge.adoc[leveloffset=+1] + +include::modules/con-securing-http-interface.adoc[leveloffset=+1] + +include::modules/con-requests-http-bridge.adoc[leveloffset=+1] + +include::modules/con-loggers-http-bridge.adoc[leveloffset=+1] diff --git a/documentation/assemblies/assembly-kafka-bridge-quickstart.adoc b/documentation/assemblies/assembly-http-bridge-quickstart.adoc similarity index 75% rename from documentation/assemblies/assembly-kafka-bridge-quickstart.adoc rename to documentation/assemblies/assembly-http-bridge-quickstart.adoc index 960c0da91..b52000cb0 100644 --- a/documentation/assemblies/assembly-kafka-bridge-quickstart.adoc +++ 
b/documentation/assemblies/assembly-http-bridge-quickstart.adoc @@ -2,16 +2,16 @@ // // bridge.adoc -[id='assembly-kafka-bridge-quickstart-{context}'] -= Kafka Bridge quickstart +[id='assembly-http-bridge-quickstart-{context}'] += HTTP Bridge quickstart [role="_abstract"] -Use this quickstart to try out the Kafka Bridge in your local development environment. +Use this quickstart to try out the HTTP Bridge in your local development environment. You will learn how to do the following: * Produce messages to topics and partitions in your Kafka cluster -* Create a Kafka Bridge consumer +* Create an HTTP Bridge consumer * Perform basic consumer operations, such as subscribing the consumer to topics and retrieving the messages that you produced In this quickstart, HTTP requests are formatted as curl commands that you can copy and paste to your terminal. @@ -24,13 +24,13 @@ In this quickstart, you will produce and consume messages in JSON format. * A Kafka cluster is running on the host machine. 
-include::modules/proc-downloading-kafka-bridge.adoc[leveloffset=+1] +include::modules/proc-downloading-http-bridge.adoc[leveloffset=+1] -include::modules/proc-installing-kafka-bridge.adoc[leveloffset=+1] +include::modules/proc-installing-http-bridge.adoc[leveloffset=+1] include::modules/proc-producing-messages-from-bridge-topics-partitions.adoc[leveloffset=+1] -include::modules/proc-creating-kafka-bridge-consumer.adoc[leveloffset=+1] +include::modules/proc-creating-http-bridge-consumer.adoc[leveloffset=+1] include::modules/proc-bridge-subscribing-consumer-topics.adoc[leveloffset=+1] diff --git a/documentation/assemblies/assembly-kafka-bridge-config.adoc b/documentation/assemblies/assembly-kafka-bridge-config.adoc deleted file mode 100644 index 0302a6bc2..000000000 --- a/documentation/assemblies/assembly-kafka-bridge-config.adoc +++ /dev/null @@ -1,19 +0,0 @@ -// This assembly is included in the following assemblies: -// -// bridge.adoc - -[id='assembly-kafka-bridge-config-{context}'] -= Kafka Bridge configuration - -[role="_abstract"] -Configure a deployment of the Kafka Bridge with Kafka-related properties and specify the HTTP connection details needed to be able to interact with Kafka. -Additionally, enable metrics in Prometheus format using either the https://github.com/prometheus/jmx_exporter[Prometheus JMX Exporter] or the https://github.com/strimzi/metrics-reporter[Strimzi Metrics Reporter]. -You can also use configuration properties to enable and use distributed tracing with the Kafka Bridge. -Distributed tracing allows you to track the progress of transactions between applications in a distributed system. - -NOTE: Use the `KafkaBridge` resource to configure properties when you are xref:overview-components-running-kafka-bridge-cluster-{context}[running the Kafka Bridge on Kubernetes]. 
- -include::modules/proc-configuring-kafka-bridge.adoc[leveloffset=+1] -include::modules/proc-configuring-kafka-bridge-jmx-metrics.adoc[leveloffset=+1] -include::modules/proc-configuring-kafka-bridge-smr-metrics.adoc[leveloffset=+1] -include::modules/proc-configuring-kafka-bridge-tracing.adoc[leveloffset=+1] diff --git a/documentation/assemblies/assembly-kafka-bridge-overview.adoc b/documentation/assemblies/assembly-kafka-bridge-overview.adoc deleted file mode 100644 index de7533ca3..000000000 --- a/documentation/assemblies/assembly-kafka-bridge-overview.adoc +++ /dev/null @@ -1,29 +0,0 @@ -// This assembly is included in the following assemblies: -// -// bridge.adoc - -[id='assembly-kafka-bridge-overview-{context}'] -= Kafka Bridge overview - -[role="_abstract"] -Use the Kafka Bridge to make HTTP requests to a Kafka cluster. - -You can use the Kafka Bridge to integrate HTTP client applications with your Kafka cluster. - -.HTTP client integration - -image:kafka-bridge.png[Internal and external HTTP producers and consumers exchange data with the Kafka brokers through the Kafka Bridge] - -include::modules/con-overview-running-kafka-bridge.adoc[leveloffset=+1] - -include::modules/con-overview-components-kafka-bridge.adoc[leveloffset=+1] - -include::modules/con-overview-open-api-spec-kafka-bridge.adoc[leveloffset=+1] - -include::modules/con-securing-kafka-bridge.adoc[leveloffset=+1] - -include::modules/con-securing-http-interface.adoc[leveloffset=+1] - -include::modules/con-requests-kafka-bridge.adoc[leveloffset=+1] - -include::modules/con-loggers-kafka-bridge.adoc[leveloffset=+1] diff --git a/documentation/book/api/snippet/consumers/{groupid}/POST/http-response.adoc b/documentation/book/api/snippet/consumers/{groupid}/POST/http-response.adoc index de1717128..79cb567c5 100644 --- a/documentation/book/api/snippet/consumers/{groupid}/POST/http-response.adoc +++ b/documentation/book/api/snippet/consumers/{groupid}/POST/http-response.adoc @@ -15,7 +15,7 @@ ---- { 
"error_code" : 409, - "message" : "A consumer instance with the specified name already exists in the Kafka Bridge." + "message" : "A consumer instance with the specified name already exists in the HTTP Bridge." } ---- diff --git a/documentation/book/bridge.adoc b/documentation/book/bridge.adoc index 81bf9878f..463b7d026 100644 --- a/documentation/book/bridge.adoc +++ b/documentation/book/bridge.adoc @@ -3,13 +3,13 @@ include::common/attributes.adoc[] :context: bridge [id='using_book-{context}'] -= Using the Strimzi Kafka Bridge += Using the Strimzi HTTP Bridge -include::assemblies/assembly-kafka-bridge-overview.adoc[leveloffset=+1] +include::assemblies/assembly-http-bridge-overview.adoc[leveloffset=+1] -include::assemblies/assembly-kafka-bridge-quickstart.adoc[leveloffset=+1] +include::assemblies/assembly-http-bridge-quickstart.adoc[leveloffset=+1] -include::assemblies/assembly-kafka-bridge-config.adoc[leveloffset=+1] +include::assemblies/assembly-http-bridge-config.adoc[leveloffset=+1] [id='api_reference-{context}'] include::api/index.adoc[leveloffset=+1] diff --git a/documentation/modules/con-loggers-kafka-bridge.adoc b/documentation/modules/con-loggers-http-bridge.adoc similarity index 89% rename from documentation/modules/con-loggers-kafka-bridge.adoc rename to documentation/modules/con-loggers-http-bridge.adoc index 6bacd4ea7..fc90d8b75 100644 --- a/documentation/modules/con-loggers-kafka-bridge.adoc +++ b/documentation/modules/con-loggers-http-bridge.adoc @@ -1,14 +1,14 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-overview.adoc +// assembly-http-bridge-overview.adoc -[id='con-loggers-kafka-bridge-{context}'] +[id='con-loggers-http-bridge-{context}'] [role="_abstract"] -= Configuring loggers for the Kafka Bridge += Configuring loggers for the HTTP Bridge [role="_abstract"] -You can set a different log level for each operation that is defined by the Kafka Bridge OpenAPI specification. 
+You can set a different log level for each operation that is defined by the HTTP Bridge OpenAPI specification. Each operation has a corresponding API endpoint through which the bridge receives requests from HTTP clients. You can change the log level on each endpoint to produce more or less fine-grained logging information about the incoming and outgoing HTTP requests. diff --git a/documentation/modules/con-overview-components-kafka-bridge.adoc b/documentation/modules/con-overview-components-http-bridge.adoc similarity index 72% rename from documentation/modules/con-overview-components-kafka-bridge.adoc rename to documentation/modules/con-overview-components-http-bridge.adoc index 6777bcb14..f3c4a4278 100644 --- a/documentation/modules/con-overview-components-kafka-bridge.adoc +++ b/documentation/modules/con-overview-components-http-bridge.adoc @@ -1,18 +1,18 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-overview.adoc +// assembly-http-bridge-overview.adoc -[id="overview-components-kafka-bridge_{context}"] -= Kafka Bridge interface +[id="overview-components-http-bridge_{context}"] += HTTP Bridge interface [role="_abstract"] -The Kafka Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster.  +The HTTP Bridge provides a RESTful interface that allows HTTP-based clients to interact with a Kafka cluster.  It offers the advantages of a web API connection to Strimzi, without the need for client applications to interpret the Kafka protocol. -The API has two main resources — `consumers` and `topics` — that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the Kafka Bridge, not the consumers and producers connected directly to Kafka. 
+The API has two main resources — `consumers` and `topics` — that are exposed and made accessible through endpoints to interact with consumers and producers in your Kafka cluster. The resources relate only to the HTTP Bridge, not the consumers and producers connected directly to Kafka. == HTTP requests -The Kafka Bridge supports HTTP requests to a Kafka cluster, with methods to: +The HTTP Bridge supports HTTP requests to a Kafka cluster, with methods to: * Send messages to a topic. * Retrieve messages from topics. @@ -32,4 +32,4 @@ Clients can produce and consume messages without the requirement to use the nati [role="_additional-resources"] .Additional resources -* xref:api_reference-{context}[Kafka Bridge API reference] +* xref:api_reference-{context}[HTTP Bridge API reference] diff --git a/documentation/modules/con-overview-open-api-spec-kafka-bridge.adoc b/documentation/modules/con-overview-open-api-spec-http-bridge.adoc similarity index 61% rename from documentation/modules/con-overview-open-api-spec-kafka-bridge.adoc rename to documentation/modules/con-overview-open-api-spec-http-bridge.adoc index 2130f7f79..e50787696 100644 --- a/documentation/modules/con-overview-open-api-spec-kafka-bridge.adoc +++ b/documentation/modules/con-overview-open-api-spec-http-bridge.adoc @@ -1,16 +1,16 @@ // This assembly is included in the following assemblies: // -// assembly-kafka-bridge-overview.adoc +// assembly-http-bridge-overview.adoc -[id='overview-open-api-spec-kafka-bridge-{context}'] -= Kafka Bridge OpenAPI specification +[id='overview-open-api-spec-http-bridge-{context}'] += HTTP Bridge OpenAPI specification [role="_abstract"] -Kafka Bridge APIs use the OpenAPI Specification (OAS). +HTTP Bridge APIs use the OpenAPI Specification (OAS). OAS provides a standard framework for describing and implementing HTTP APIs. -The Kafka Bridge OpenAPI specification is in JSON format. 
-You can find the OpenAPI JSON files in the `src/main/resources/` folder of the Kafka Bridge source download files. +The HTTP Bridge OpenAPI specification is in JSON format. +You can find the OpenAPI JSON files in the `src/main/resources/` folder of the HTTP Bridge source download files. The download files are available from the {ReleaseDownload}. You can also use the xref:openapi[`GET /openapi` method] to retrieve the OpenAPI v3 specification in JSON format. diff --git a/documentation/modules/con-overview-running-http-bridge.adoc b/documentation/modules/con-overview-running-http-bridge.adoc new file mode 100644 index 000000000..5d4e486d7 --- /dev/null +++ b/documentation/modules/con-overview-running-http-bridge.adoc @@ -0,0 +1,30 @@ +// Module included in the following assemblies: +// +// assembly-http-bridge-overview.adoc + +[id="overview-components-running-http-bridge-{context}"] += Running the HTTP Bridge + +[role="_abstract"] +Install the HTTP Bridge to run in the same environment as your Kafka cluster. + +You can download and add the HTTP Bridge installation artifacts to your host machine. +To try out the HTTP Bridge in your local environment, see the xref:assembly-http-bridge-quickstart-{context}[HTTP Bridge quickstart]. + +Each instance of the HTTP Bridge maintains its own set of in-memory consumers (and subscriptions) that connect to the Kafka brokers on behalf of the HTTP clients. +This means that each HTTP client must maintain affinity to the same HTTP Bridge instance in order to access any subscriptions that are created. +Additionally, when an instance of the HTTP Bridge restarts, the in-memory consumers and subscriptions are lost. 
+**It is the responsibility of the HTTP client to recreate any consumers and subscriptions if the HTTP Bridge restarts.** + +[id="overview-components-running-http-bridge-cluster-{context}"] +== Running the HTTP Bridge on Kubernetes + +If you deployed Strimzi on Kubernetes, you can use the Strimzi Cluster Operator to deploy the HTTP Bridge to the Kubernetes cluster. +Configure and deploy the HTTP Bridge as a `KafkaBridge` resource. +You'll need a running Kafka cluster that was deployed by the Cluster Operator in a Kubernetes namespace. +You can configure your deployment to access the HTTP Bridge outside the Kubernetes cluster. + +HTTP clients must maintain affinity to the same instance of the HTTP Bridge to access any consumers or subscriptions that they create. Hence, running multiple replicas of the HTTP Bridge per Kubernetes Deployment is not recommended. +If the HTTP Bridge pod restarts (for instance, due to Kubernetes relocating the workload to another node), the HTTP client must recreate any consumers or subscriptions. + +For information on deploying and configuring the HTTP Bridge as a `KafkaBridge` resource, see the {BookURLConfiguring}. diff --git a/documentation/modules/con-overview-running-kafka-bridge.adoc b/documentation/modules/con-overview-running-kafka-bridge.adoc deleted file mode 100644 index 7a0679111..000000000 --- a/documentation/modules/con-overview-running-kafka-bridge.adoc +++ /dev/null @@ -1,30 +0,0 @@ -// Module included in the following assemblies: -// -// assembly-kafka-bridge-overview.adoc - -[id="overview-components-running-kafka-bridge-{context}"] -= Running the Kafka Bridge - -[role="_abstract"] -Install the Kafka Bridge to run in the same environment as your Kafka cluster. - -You can download and add the Kafka Bridge installation artifacts to your host machine. -To try out the Kafka Bridge in your local environment, see the xref:assembly-kafka-bridge-quickstart-{context}[Kafka Bridge quickstart]. 
- -It's important to note that each instance of the Kafka Bridge maintains its own set of in-memory consumers (and subscriptions) that connect to the Kafka Brokers on behalf of the HTTP clients. -This means that each HTTP client must maintain affinity to the same Kafka Bridge instance in order to access any subscriptions that are created. -Additionally, when an instance of the Kafka Bridge restarts, the in-memory consumers and subscriptions are lost. -**It is the responsibility of the HTTP client to recreate any consumers and subscriptions if the Kafka Bridge restarts.** - -[id="overview-components-running-kafka-bridge-cluster-{context}"] -== Running the Kafka Bridge on Kubernetes - -If you deployed Strimzi on Kubernetes, you can use the Strimzi Cluster Operator to deploy the Kafka Bridge to the Kubernetes cluster. -Configure and deploy the Kafka Bridge as a `KafkaBridge` resource. -You'll need a running Kafka cluster that was deployed by the Cluster Operator in a Kubernetes namespace. -You can configure your deployment to access the Kafka Bridge outside the Kubernetes cluster. - -HTTP clients must maintain affinity to the same instance of the Kafka Bridge to access any consumers or subscriptions that they create. Hence, running multiple replicas of the Kafka Bridge per Kubernetes Deployment is not recommended. -If the Kafka Bridge pod restarts (for instance, due to Kubernetes relocating the workload to another node), the HTTP client must recreate any consumers or subscriptions. - -For information on deploying and configuring the Kafka Bridge as a `KafkaBridge` resource, see the {BookURLConfiguring}. 
diff --git a/documentation/modules/con-requests-kafka-bridge.adoc b/documentation/modules/con-requests-http-bridge.adoc similarity index 87% rename from documentation/modules/con-requests-kafka-bridge.adoc rename to documentation/modules/con-requests-http-bridge.adoc index 63049007a..4a0ac3de3 100644 --- a/documentation/modules/con-requests-kafka-bridge.adoc +++ b/documentation/modules/con-requests-http-bridge.adoc @@ -1,12 +1,12 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-overview.adoc +// assembly-http-bridge-overview.adoc -[id='con-requests-kafka-bridge-{context}'] -= Requests to the Kafka Bridge +[id='con-requests-http-bridge-{context}'] += Requests to the HTTP Bridge [role="_abstract"] -Specify data formats and HTTP headers to ensure valid requests are submitted to the Kafka Bridge. +Specify data formats and HTTP headers to ensure valid requests are submitted to the HTTP Bridge. == Content Type headers @@ -45,7 +45,7 @@ An empty body can be used to create a consumer with the default values. == Embedded data format -The embedded data format is the format of the Kafka messages that are transmitted, over HTTP, from a producer to a consumer using the Kafka Bridge. Three embedded data formats are supported: JSON, binary and text. +The embedded data format is the format of the Kafka messages that are transmitted, over HTTP, from a producer to a consumer using the HTTP Bridge. Three embedded data formats are supported: JSON, binary and text. When creating a consumer using the `/consumers/_groupid_` endpoint, the `POST` request body must specify an embedded data format of either JSON, binary or text. 
This is specified in the `format` field, for example: @@ -132,15 +132,15 @@ For example, when retrieving records for a subscribed consumer using an embedded Accept: application/vnd.kafka.json.v2+json ---- -[id='con-requests-kafka-bridge-cors-{context}'] +[id='con-requests-http-bridge-cors-{context}'] = CORS In general, it is not possible for an HTTP client to issue requests across different domains. -For example, suppose the Kafka Bridge you deployed alongside a Kafka cluster is accessible using the `\http://my-bridge.io` domain. -HTTP clients can use the URL to interact with the Kafka Bridge and exchange messages through the Kafka cluster. +For example, suppose the HTTP Bridge you deployed alongside a Kafka cluster is accessible using the `\http://my-bridge.io` domain. +HTTP clients can use the URL to interact with the HTTP Bridge and exchange messages through the Kafka cluster. However, your client is running as a web application in the `\http://my-web-application.io` domain. -The client (source) domain is different from the Kafka Bridge (target) domain. +The client (source) domain is different from the HTTP Bridge (target) domain. Because of same-origin policy restrictions, requests from the client fail. You can avoid this situation by using Cross-Origin Resource Sharing (CORS). @@ -155,9 +155,9 @@ and use non-standard headers. All requests require an _origins_ value in their header, which is the source of the HTTP request. -CORS allows you to specify allowed methods and originating URLs for accessing the Kafka cluster in your Kafka Bridge HTTP configuration. +CORS allows you to specify allowed methods and originating URLs for accessing the Kafka cluster in your HTTP Bridge HTTP configuration. -.Example CORS configuration for Kafka Bridge +.Example CORS configuration for HTTP Bridge [source,properties,subs="attributes+"] ---- # ... 
@@ -184,7 +184,7 @@ curl -v -X GET _HTTP-BRIDGE-ADDRESS_/consumers/my-group/instances/my-consumer/re -H 'content-type: application/vnd.kafka.v2+json' ---- -In the response from the Kafka Bridge, an `Access-Control-Allow-Origin` header is returned. +In the response from the HTTP Bridge, an `Access-Control-Allow-Origin` header is returned. It contains the list of domains from where HTTP requests can be issued to the bridge. [source,http,subs=+quotes] @@ -196,8 +196,8 @@ Access-Control-Allow-Origin: * <1> == Preflighted request -An initial preflight request is sent to Kafka Bridge using an `OPTIONS` method. -The _HTTP OPTIONS_ request sends header information to check that Kafka Bridge will allow the actual request. +An initial preflight request is sent to HTTP Bridge using an `OPTIONS` method. +The _HTTP OPTIONS_ request sends header information to check that HTTP Bridge will allow the actual request. Here the preflight request checks that a `POST` request is valid from `\http://my-web-application.io`. @@ -208,7 +208,7 @@ Origin: http://my-web-application.io Access-Control-Request-Method: POST <1> Access-Control-Request-Headers: Content-Type <2> ---- -<1> Kafka Bridge is alerted that the actual request is a `POST` request. +<1> HTTP Bridge is alerted that the actual request is a `POST` request. <2> The actual request will be sent with a `Content-Type` header. `OPTIONS` is added to the header information of the preflight request. @@ -220,7 +220,7 @@ curl -v -X OPTIONS -H 'Origin: http://my-web-application.io' \ -H 'content-type: application/vnd.kafka.v2+json' ---- -Kafka Bridge responds to the initial request to confirm that the request will be accepted. +HTTP Bridge responds to the initial request to confirm that the request will be accepted. The response header returns allowed origins, methods and headers. 
[source,http,subs=+quotes] diff --git a/documentation/modules/con-securing-http-bridge.adoc b/documentation/modules/con-securing-http-bridge.adoc new file mode 100644 index 000000000..a542d4208 --- /dev/null +++ b/documentation/modules/con-securing-http-bridge.adoc @@ -0,0 +1,18 @@ +// This assembly is included in the following assemblies: +// +// assembly-http-bridge-overview.adoc + +[id='con-securing-http-bridge-{context}'] += Securing connectivity to the Kafka cluster + +[role="_abstract"] +You can configure the following between the HTTP Bridge and your Kafka cluster: + +* TLS or SASL-based authentication +* A TLS-encrypted connection + +You configure the HTTP Bridge for authentication through its xref:proc-configuring-http-bridge-{context}[properties file]. + +You can also use ACLs in Kafka brokers to restrict the topics that can be consumed and produced using the HTTP Bridge. + +NOTE: Use the `KafkaBridge` resource to configure authentication when you are xref:overview-components-running-http-bridge-cluster-{context}[running the HTTP Bridge on Kubernetes]. \ No newline at end of file diff --git a/documentation/modules/con-securing-http-interface.adoc b/documentation/modules/con-securing-http-interface.adoc index 8b38e304d..5cd410a32 100644 --- a/documentation/modules/con-securing-http-interface.adoc +++ b/documentation/modules/con-securing-http-interface.adoc @@ -1,17 +1,17 @@ // This assembly is included in the following assemblies: // -// assembly-kafka-bridge-overview.adoc +// assembly-http-bridge-overview.adoc [id='con-securing-http-interface-{context}'] -= Securing the Kafka Bridge HTTP interface += Securing the HTTP Bridge HTTP interface [role="_abstract"] -Authentication and encryption between HTTP clients and the Kafka Bridge is not supported directly by the Kafka Bridge. -Requests sent from clients to the Kafka Bridge are sent without authentication or encryption. 
+Authentication and encryption between HTTP clients and the HTTP Bridge are not supported directly by the HTTP Bridge. +Requests sent from clients to the HTTP Bridge are sent without authentication or encryption. Requests must use HTTP rather than HTTPS. -You can combine the Kafka Bridge with the following tools to secure it: +You can combine the HTTP Bridge with the following tools to secure it: -* Network policies and firewalls that define which pods can access the Kafka Bridge +* Network policies and firewalls that define which pods can access the HTTP Bridge * Reverse proxies (for example, OAuth 2.0) * API gateways diff --git a/documentation/modules/con-securing-kafka-bridge.adoc b/documentation/modules/con-securing-kafka-bridge.adoc deleted file mode 100644 index 13607061b..000000000 --- a/documentation/modules/con-securing-kafka-bridge.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// This assembly is included in the following assemblies: -// -// assembly-kafka-bridge-overview.adoc - -[id='con-securing-kafka-bridge-{context}'] -= Securing connectivity to the Kafka cluster - -[role="_abstract"] -You can configure the following between the Kafka Bridge and your Kafka cluster: - -* TLS or SASL-based authentication -* A TLS-encrypted connection - -You configure the Kafka Bridge for authentication through its xref:proc-configuring-kafka-bridge-{context}[properties file]. - -You can also use ACLs in Kafka brokers to restrict the topics that can be consumed and produced using the Kafka Bridge. - -NOTE: Use the `KafkaBridge` resource to configure authentication when you are xref:overview-components-running-kafka-bridge-cluster-{context}[running the Kafka Bridge on Kubernetes]. 
\ No newline at end of file diff --git a/documentation/modules/proc-bridge-committing-consumer-offsets-to-log.adoc b/documentation/modules/proc-bridge-committing-consumer-offsets-to-log.adoc index 56839b2f5..ced41b6a2 100644 --- a/documentation/modules/proc-bridge-committing-consumer-offsets-to-log.adoc +++ b/documentation/modules/proc-bridge-committing-consumer-offsets-to-log.adoc @@ -1,12 +1,12 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-quickstart.adoc +// assembly-http-bridge-quickstart.adoc [id='proc-bridge-committing-consumer-offsets-to-log-{context}'] = Commiting offsets to the log [role="_abstract"] -Use the xref:commit[offsets] endpoint to manually commit offsets to the log for all messages received by the Kafka Bridge consumer. This is required because the Kafka Bridge consumer that you created earlier, in xref:proc-creating-kafka-bridge-consumer-{context}[Creating a Kafka Bridge consumer], was configured with the `enable.auto.commit` setting as `false`. +Use the xref:commit[offsets] endpoint to manually commit offsets to the log for all messages received by the HTTP Bridge consumer. This is required because the HTTP Bridge consumer that you created earlier, in xref:proc-creating-http-bridge-consumer-{context}[Creating an HTTP Bridge consumer], was configured with the `enable.auto.commit` setting as `false`. .Procedure @@ -19,7 +19,7 @@ curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/in + Because no request body is submitted, offsets are committed for all the records that have been received by the consumer. Alternatively, the request body can contain an array of (xref:OffsetCommitSeek[OffsetCommitSeek]) that specifies the topics and partitions that you want to commit offsets for. + -If the request is successful, the Kafka Bridge returns a `204` code only. +If the request is successful, the HTTP Bridge returns a `204` code only. 
.What to do next diff --git a/documentation/modules/proc-bridge-deleting-consumer.adoc b/documentation/modules/proc-bridge-deleting-consumer.adoc index de54bb03d..72385d107 100644 --- a/documentation/modules/proc-bridge-deleting-consumer.adoc +++ b/documentation/modules/proc-bridge-deleting-consumer.adoc @@ -1,23 +1,23 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-quickstart.adoc +// assembly-http-bridge-quickstart.adoc [id='proc-bridge-deleting-consumer-{context}'] -= Deleting a Kafka Bridge consumer += Deleting an HTTP Bridge consumer [role="_abstract"] -Delete the Kafka Bridge consumer that you used throughout this quickstart. +Delete the HTTP Bridge consumer that you used throughout this quickstart. .Procedure -* Delete the Kafka Bridge consumer by sending a `DELETE` request to the xref:deleteconsumer[instances] endpoint. +* Delete the HTTP Bridge consumer by sending a `DELETE` request to the xref:deleteconsumer[instances] endpoint. + [source,curl,subs=attributes+] ---- curl -X DELETE http://localhost:8080/consumers/bridge-quickstart-consumer-group/instances/bridge-quickstart-consumer ---- + -If the request is successful, the Kafka Bridge returns a `204` code. +If the request is successful, the HTTP Bridge returns a `204` code. 
[role="_additional-resources"] .Additional resources diff --git a/documentation/modules/proc-bridge-retrieving-latest-messages-from-consumer.adoc b/documentation/modules/proc-bridge-retrieving-latest-messages-from-consumer.adoc index 54d896f41..66c58f574 100644 --- a/documentation/modules/proc-bridge-retrieving-latest-messages-from-consumer.adoc +++ b/documentation/modules/proc-bridge-retrieving-latest-messages-from-consumer.adoc @@ -1,16 +1,16 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-quickstart.adoc +// assembly-http-bridge-quickstart.adoc [id='proc-bridge-retrieving-latest-messages-from-consumer-{context}'] -= Retrieving the latest messages from a Kafka Bridge consumer += Retrieving the latest messages from an HTTP Bridge consumer [role="_abstract"] -Retrieve the latest messages from the Kafka Bridge consumer by requesting data from the xref:poll[records] endpoint. In production, HTTP clients can call this endpoint repeatedly (in a loop). +Retrieve the latest messages from the HTTP Bridge consumer by requesting data from the xref:poll[records] endpoint. In production, HTTP clients can call this endpoint repeatedly (in a loop). .Procedure -. Produce additional messages to the Kafka Bridge consumer, as described in xref:proc-producing-messages-from-bridge-topics-partitions-{context}[Producing messages to topics and partitions]. +. Produce additional messages to the HTTP Bridge consumer, as described in xref:proc-producing-messages-from-bridge-topics-partitions-{context}[Producing messages to topics and partitions]. . Submit a `GET` request to the `records` endpoint: + @@ -20,11 +20,11 @@ curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/ins -H 'accept: application/vnd.kafka.json.v2+json' ---- + -After creating and subscribing to a Kafka Bridge consumer, a first GET request will return an empty response because the poll operation starts a rebalancing process to assign partitions. 
+After creating and subscribing to an HTTP Bridge consumer, the first GET request will return an empty response because the poll operation starts a rebalancing process to assign partitions. -. Repeat step two to retrieve messages from the Kafka Bridge consumer. +. Repeat step two to retrieve messages from the HTTP Bridge consumer. + -The Kafka Bridge returns an array of messages -- describing the topic name, key, value, partition, and offset -- in the response body, along with a `200` code. Messages are retrieved from the latest offset by default. +The HTTP Bridge returns an array of messages -- describing the topic name, key, value, partition, and offset -- in the response body, along with a `200` code. Messages are retrieved from the latest offset by default. + [source,json,subs=attributes+] ---- @@ -53,7 +53,7 @@ NOTE: If an empty response is returned, produce more records to the consumer as .What to do next -After retrieving messages from a Kafka Bridge consumer, try xref:proc-bridge-committing-consumer-offsets-to-log-{context}[committing offsets to the log]. +After retrieving messages from an HTTP Bridge consumer, try xref:proc-bridge-committing-consumer-offsets-to-log-{context}[committing offsets to the log].
[role="_additional-resources"] .Additional resources diff --git a/documentation/modules/proc-bridge-seeking-offsets-for-partition.adoc b/documentation/modules/proc-bridge-seeking-offsets-for-partition.adoc index 7b065b711..46b56fd29 100644 --- a/documentation/modules/proc-bridge-seeking-offsets-for-partition.adoc +++ b/documentation/modules/proc-bridge-seeking-offsets-for-partition.adoc @@ -1,12 +1,12 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-quickstart.adoc +// assembly-http-bridge-quickstart.adoc [id='proc-bridge-seeking-offset-for-partition-{context}'] = Seeking to offsets for a partition [role="_abstract"] -Use the xref:seek[positions] endpoints to configure the Kafka Bridge consumer to retrieve messages for a partition from a specific offset, and then from the latest offset. This is referred to in Apache Kafka as a seek operation. +Use the xref:seek[positions] endpoints to configure the HTTP Bridge consumer to retrieve messages for a partition from a specific offset, and then from the latest offset. This is referred to in Apache Kafka as a seek operation. .Procedure @@ -27,7 +27,7 @@ curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/in }' ---- + -If the request is successful, the Kafka Bridge returns a `204` code only. +If the request is successful, the HTTP Bridge returns a `204` code only. . Submit a `GET` request to the `records` endpoint: + @@ -37,7 +37,7 @@ curl -X GET http://localhost:8080/consumers/bridge-quickstart-consumer-group/ins -H 'accept: application/vnd.kafka.json.v2+json' ---- + -The Kafka Bridge returns messages from the offset that you seeked to. +The HTTP Bridge returns messages from the offset that you seeked to. . Restore the default message retrieval behavior by seeking to the last offset for the same partition. This time, use the xref:seektoend[positions/end] endpoint. 
+ @@ -55,13 +55,13 @@ curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/in }' ---- + -If the request is successful, the Kafka Bridge returns another `204` code. +If the request is successful, the HTTP Bridge returns another `204` code. NOTE: You can also use the xref:seektobeginning[positions/beginning] endpoint to seek to the first offset for one or more partitions. .What to do next -In this quickstart, you have used the Kafka Bridge to perform several common operations on a Kafka cluster. You can now xref:proc-bridge-deleting-consumer-{context}[delete the Kafka Bridge consumer] that you created earlier. +In this quickstart, you have used the HTTP Bridge to perform several common operations on a Kafka cluster. You can now xref:proc-bridge-deleting-consumer-{context}[delete the HTTP Bridge consumer] that you created earlier. [role="_additional-resources"] .Additional resources diff --git a/documentation/modules/proc-bridge-subscribing-consumer-topics.adoc b/documentation/modules/proc-bridge-subscribing-consumer-topics.adoc index a663ca9b3..4f75c40cb 100644 --- a/documentation/modules/proc-bridge-subscribing-consumer-topics.adoc +++ b/documentation/modules/proc-bridge-subscribing-consumer-topics.adoc @@ -1,12 +1,12 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-quickstart.adoc +// assembly-http-bridge-quickstart.adoc [id='proc-bridge-subscribing-consumer-topics-{context}'] -= Subscribing a Kafka Bridge consumer to topics += Subscribing an HTTP Bridge consumer to topics [role="_abstract"] -After you have created a Kafka Bridge consumer, subscribe it to one or more topics by using the xref:subscribe[subscription] endpoint. +After you have created an HTTP Bridge consumer, subscribe it to one or more topics by using the xref:subscribe[subscription] endpoint. When subscribed, the consumer starts receiving all messages that are produced to the topic.
.Procedure @@ -26,7 +26,7 @@ curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group/in + The `topics` array can contain a single topic (as shown here) or multiple topics. If you want to subscribe the consumer to multiple topics that match a regular expression, you can use the `topic_pattern` string instead of the `topics` array. + -If the request is successful, the Kafka Bridge returns a `204` (No Content) code only. +If the request is successful, the HTTP Bridge returns a `204` (No Content) code only. When using an Apache Kafka client, the HTTP subscribe operation adds topics to the local consumer's subscriptions. Joining a consumer group and obtaining partition assignments occur after running multiple HTTP poll operations, starting the partition rebalance and join-group process. @@ -34,7 +34,7 @@ It's important to note that the initial HTTP poll operations may not return any .What to do next -After subscribing a Kafka Bridge consumer to topics, you can xref:proc-bridge-retrieving-latest-messages-from-consumer-{context}[retrieve messages from the consumer]. +After subscribing an HTTP Bridge consumer to topics, you can xref:proc-bridge-retrieving-latest-messages-from-consumer-{context}[retrieve messages from the consumer].
[role="_additional-resources"] .Additional resources diff --git a/documentation/modules/proc-configuring-kafka-bridge-jmx-metrics.adoc b/documentation/modules/proc-configuring-http-bridge-jmx-metrics.adoc similarity index 60% rename from documentation/modules/proc-configuring-kafka-bridge-jmx-metrics.adoc rename to documentation/modules/proc-configuring-http-bridge-jmx-metrics.adoc index b954891c1..c628df0c8 100644 --- a/documentation/modules/proc-configuring-kafka-bridge-jmx-metrics.adoc +++ b/documentation/modules/proc-configuring-http-bridge-jmx-metrics.adoc @@ -1,12 +1,12 @@ -[id='proc-configuring-kafka-bridge-jmx-metrics-{context}'] +[id='proc-configuring-http-bridge-jmx-metrics-{context}'] = Configuring Prometheus JMX Exporter metrics [role="_abstract"] -Enable the Prometheus JMX Exporter to collect Kafka Bridge metrics by setting the `bridge.metrics` option to `jmxPrometheusExporter`. +Enable the Prometheus JMX Exporter to collect HTTP Bridge metrics by setting the `bridge.metrics` option to `jmxPrometheusExporter`. .Prerequisites -* xref:proc-downloading-kafka-bridge-{context}[The Kafka Bridge installation archive is downloaded]. +* xref:proc-downloading-http-bridge-{context}[The HTTP Bridge installation archive is downloaded]. .Procedure @@ -22,12 +22,12 @@ bridge.metrics=jmxPrometheusExporter Optionally, you can add a custom Prometheus JMX Exporter configuration using the `bridge.metrics.exporter.config.path` property. If not configured, a default embedded configuration file is used. -. Run the Kafka Bridge run script. +. Run the HTTP Bridge run script. + -.Running the Kafka Bridge +.Running the HTTP Bridge [source,shell] ---- ./bin/kafka_bridge_run.sh --config-file=/application.properties ---- + -With metrics enabled, you can scrape metrics in Prometheus format from the `/metrics` endpoint of the Kafka Bridge. +With metrics enabled, you can scrape metrics in Prometheus format from the `/metrics` endpoint of the HTTP Bridge. 
diff --git a/documentation/modules/proc-configuring-kafka-bridge-smr-metrics.adoc b/documentation/modules/proc-configuring-http-bridge-smr-metrics.adoc similarity index 67% rename from documentation/modules/proc-configuring-kafka-bridge-smr-metrics.adoc rename to documentation/modules/proc-configuring-http-bridge-smr-metrics.adoc index ce49d3243..468e9ecfb 100644 --- a/documentation/modules/proc-configuring-kafka-bridge-smr-metrics.adoc +++ b/documentation/modules/proc-configuring-http-bridge-smr-metrics.adoc @@ -1,12 +1,12 @@ -[id='proc-configuring-kafka-bridge-smr-metrics-{context}'] +[id='proc-configuring-http-bridge-smr-metrics-{context}'] = Configuring Strimzi Metrics Reporter metrics [role="_abstract"] -Enable the Strimzi Metrics Reporter to collect Kafka Bridge metrics by setting the `bridge.metrics` option to `strimziMetricsReporter`. +Enable the Strimzi Metrics Reporter to collect HTTP Bridge metrics by setting the `bridge.metrics` option to `strimziMetricsReporter`. .Prerequisites -* xref:proc-downloading-kafka-bridge-{context}[The Kafka Bridge installation archive is downloaded]. +* xref:proc-downloading-http-bridge-{context}[The HTTP Bridge installation archive is downloaded]. .Procedure @@ -27,16 +27,16 @@ When needed, it is possible to configure the `allowlist` per client type. For example, by using the `kafka.admin` prefix and setting `kafka.admin.prometheus.metrics.reporter.allowlist=`, all admin client metrics are excluded. + -You can add any plugin configuration to the Kafka Bridge properties file using `kafka.`, `kafka.admin.`, `kafka.producer.`, and `kafka.consumer.` prefixes. +You can add any plugin configuration to the HTTP Bridge properties file using `kafka.`, `kafka.admin.`, `kafka.producer.`, and `kafka.consumer.` prefixes. In the event that the same property is configured with multiple prefixes, the most specific prefix takes precedence. 
For example, `kafka.producer.prometheus.metrics.reporter.allowlist` takes precedence over `kafka.prometheus.metrics.reporter.allowlist`. -. Run the Kafka Bridge run script. +. Run the HTTP Bridge run script. + -.Running the Kafka Bridge +.Running the HTTP Bridge [source,shell] ---- ./bin/kafka_bridge_run.sh --config-file=/application.properties ---- + -With metrics enabled, you can scrape metrics in Prometheus format from the `/metrics` endpoint of the Kafka Bridge. +With metrics enabled, you can scrape metrics in Prometheus format from the `/metrics` endpoint of the HTTP Bridge. diff --git a/documentation/modules/proc-configuring-kafka-bridge-tracing.adoc b/documentation/modules/proc-configuring-http-bridge-tracing.adoc similarity index 80% rename from documentation/modules/proc-configuring-kafka-bridge-tracing.adoc rename to documentation/modules/proc-configuring-http-bridge-tracing.adoc index 8ad75b655..01d1c9925 100644 --- a/documentation/modules/proc-configuring-kafka-bridge-tracing.adoc +++ b/documentation/modules/proc-configuring-http-bridge-tracing.adoc @@ -1,12 +1,12 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-config.adoc +// assembly-http-bridge-config.adoc -[id='proc-configuring-kafka-bridge-tracing-{context}'] +[id='proc-configuring-http-bridge-tracing-{context}'] = Configuring distributed tracing [role="_abstract"] -Enable distributed tracing to trace messages consumed and produced by the Kafka Bridge, and HTTP requests from client applications. +Enable distributed tracing to trace messages consumed and produced by the HTTP Bridge, and HTTP requests from client applications. Properties to enable tracing are present in the `application.properties` file. To enable distributed tracing, do the following: @@ -23,7 +23,7 @@ OpenTelemetry defines an API specification for collecting tracing data as _spans Spans represent a specific operation. A trace is a collection of one or more spans. 
-Traces are generated when the Kafka Bridge does the following: +Traces are generated when the HTTP Bridge does the following: * Sends messages from Kafka to consumer HTTP clients * Receives messages from producer HTTP clients to send to Kafka @@ -37,11 +37,11 @@ If you were previously using OpenTracing with the `bridge.tracing=jaeger` option .Prerequisites -* xref:proc-downloading-kafka-bridge-{context}[The Kafka Bridge installation archive is downloaded]. +* xref:proc-downloading-http-bridge-{context}[The HTTP Bridge installation archive is downloaded]. .Procedure -. Edit the `application.properties` file provided with the Kafka Bridge installation archive. +. Edit the `application.properties` file provided with the HTTP Bridge installation archive. + Use the `bridge.tracing` property to enable the tracing you want to use. + @@ -52,7 +52,7 @@ bridge.tracing=opentelemetry # <1> ---- <1> The property for enabling OpenTelemetry is uncommented by removing the `#` at the beginning of the line. + -With tracing enabled, you initialize tracing when you run the Kafka Bridge script. +With tracing enabled, you initialize tracing when you run the HTTP Bridge script. . Save the configuration file. . Set the environment variables for tracing. @@ -66,15 +66,15 @@ OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317 # <2> <1> The name of the OpenTelemetry tracer service. <2> The gRPC-based OTLP endpoint that listens for spans on port 4317. -. Run the Kafka Bridge script with the property enabled for tracing. +. Run the HTTP Bridge script with the property enabled for tracing. + -.Running the Kafka Bridge with OpenTelemetry enabled +.Running the HTTP Bridge with OpenTelemetry enabled [source,shell,subs="+quotes,attributes"] ---- ./bin/kafka_bridge_run.sh --config-file=__/application.properties ---- + -The internal consumers and producers of the Kafka Bridge are now enabled for tracing. +The internal consumers and producers of the HTTP Bridge are now enabled for tracing. 
== Specifying tracing systems with OpenTelemetry @@ -97,7 +97,7 @@ OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans <2> == Supported Span attributes -The Kafka Bridge adds, in addition to the standard OpenTelemetry attributes, the following attributes from the https://opentelemetry.io/docs/specs/semconv/http/http-spans/#http-server-semantic-conventions[OpenTelemetry standard conventions for HTTP] to its spans. +In addition to the standard OpenTelemetry attributes, the HTTP Bridge adds the following attributes from the https://opentelemetry.io/docs/specs/semconv/http/http-spans/#http-server-semantic-conventions[OpenTelemetry standard conventions for HTTP] to its spans. [cols="1,1"] |=== | Attribute key | Attribute value diff --git a/documentation/modules/proc-configuring-kafka-bridge.adoc b/documentation/modules/proc-configuring-http-bridge.adoc similarity index 76% rename from documentation/modules/proc-configuring-kafka-bridge.adoc rename to documentation/modules/proc-configuring-http-bridge.adoc index 792f8ad87..54ac819e1 100644 --- a/documentation/modules/proc-configuring-kafka-bridge.adoc +++ b/documentation/modules/proc-configuring-http-bridge.adoc @@ -1,31 +1,31 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-config.adoc +// assembly-http-bridge-config.adoc -[id='proc-configuring-kafka-bridge-{context}'] -= Configuring Kafka Bridge properties +[id='proc-configuring-http-bridge-{context}'] += Configuring HTTP Bridge properties [role="_abstract"] -This procedure describes how to configure the Kafka and HTTP connection properties used by the Kafka Bridge. +This procedure describes how to configure the Kafka and HTTP connection properties used by the HTTP Bridge. -You configure the Kafka Bridge, as any other Kafka client, using appropriate prefixes for Kafka-related properties. +You configure the HTTP Bridge, like any other Kafka client, using appropriate prefixes for Kafka-related properties.
* `kafka.` for general configuration that applies to producers and consumers, such as server connection and security. * `kafka.consumer.` for consumer-specific configuration passed only to the consumer. * `kafka.producer.` for producer-specific configuration passed only to the producer. -As well as enabling HTTP access to a Kafka cluster, HTTP properties provide the capability to enable and define access control for the Kafka Bridge through Cross-Origin Resource Sharing (CORS). +In addition to enabling HTTP access to a Kafka cluster, HTTP properties let you enable and define access control for the HTTP Bridge through Cross-Origin Resource Sharing (CORS). CORS is a HTTP mechanism that allows browser access to selected resources from more than one origin. To configure CORS, you define a list of allowed resource origins and HTTP methods to access them. Additional HTTP headers in requests describe the CORS origins that are permitted access to the Kafka cluster. .Prerequisites -* xref:proc-downloading-kafka-bridge-{context}[The Kafka Bridge installation archive is downloaded] +* xref:proc-downloading-http-bridge-{context}[The HTTP Bridge installation archive is downloaded] .Procedure -. Edit the `application.properties` file provided with the Kafka Bridge installation archive. +. Edit the `application.properties` file provided with the HTTP Bridge installation archive. + Use the properties file to specify Kafka and HTTP-related properties. @@ -52,7 +52,7 @@ http.cors.enabled=true <2> http.cors.allowedOrigins=https://strimzi.io <3> http.cors.allowedMethods=GET,POST,PUT,DELETE,OPTIONS,PATCH <4> ---- -<1> The default HTTP configuration for the Kafka Bridge to listen on port 8080. +<1> The default HTTP configuration for the HTTP Bridge to listen on port 8080. <2> Set to `true` to enable CORS. <3> Comma-separated list of allowed CORS origins. You can use a URL or a Java regular expression. <4> Comma-separated list of allowed HTTP methods for CORS.
diff --git a/documentation/modules/proc-creating-kafka-bridge-consumer.adoc b/documentation/modules/proc-creating-http-bridge-consumer.adoc similarity index 69% rename from documentation/modules/proc-creating-kafka-bridge-consumer.adoc rename to documentation/modules/proc-creating-http-bridge-consumer.adoc index a3cf5bb0c..c7538bdf0 100644 --- a/documentation/modules/proc-creating-kafka-bridge-consumer.adoc +++ b/documentation/modules/proc-creating-http-bridge-consumer.adoc @@ -1,16 +1,16 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-quickstart.adoc +// assembly-http-bridge-quickstart.adoc -[id='proc-creating-kafka-bridge-consumer-{context}'] -= Creating a Kafka Bridge consumer +[id='proc-creating-http-bridge-consumer-{context}'] += Creating an HTTP Bridge consumer [role="_abstract"] -Before you can perform any consumer operations in the Kafka cluster, you must first create a consumer by using the xref:createconsumer[consumers] endpoint. The consumer is referred to as a __Kafka Bridge consumer__. +Before you can perform any consumer operations in the Kafka cluster, you must first create a consumer by using the xref:createconsumer[consumers] endpoint. The consumer is referred to as an __HTTP Bridge consumer__. .Procedure -. Create a Kafka Bridge consumer in a new consumer group named `bridge-quickstart-consumer-group`: +. Create an HTTP Bridge consumer in a new consumer group named `bridge-quickstart-consumer-group`: + [source,curl,subs=attributes+] ---- @@ -30,7 +30,7 @@ curl -X POST http://localhost:8080/consumers/bridge-quickstart-consumer-group \ * Some basic configuration settings are defined. * The consumer will not commit offsets to the log automatically because the `enable.auto.commit` setting is `false`. You will commit the offsets manually later in this quickstart. + -If the request is successful, the Kafka Bridge returns the consumer ID (`instance_id`) and base URL (`base_uri`) in the response body, along with a `200` code.
+If the request is successful, the HTTP Bridge returns the consumer ID (`instance_id`) and base URL (`base_uri`) in the response body, along with a `200` code. + .Example response @@ -47,7 +47,7 @@ If the request is successful, the Kafka Bridge returns the consumer ID (`instanc .What to do next -Now that you have created a Kafka Bridge consumer, you can xref:proc-bridge-subscribing-consumer-topics-{context}[subscribe it to topics]. +Now that you have created an HTTP Bridge consumer, you can xref:proc-bridge-subscribing-consumer-topics-{context}[subscribe it to topics]. [role="_additional-resources"] .Additional resources diff --git a/documentation/modules/proc-downloading-http-bridge.adoc b/documentation/modules/proc-downloading-http-bridge.adoc new file mode 100644 index 000000000..d2334e3e3 --- /dev/null +++ b/documentation/modules/proc-downloading-http-bridge.adoc @@ -0,0 +1,14 @@ +// Module included in the following assemblies: +// +// assembly-http-bridge-quickstart.adoc + +[id='proc-downloading-http-bridge-{context}'] + += Downloading an HTTP Bridge archive + +[role="_abstract"] +A zipped distribution of the HTTP Bridge is available for download. + +.Procedure + +- Download the latest version of the HTTP Bridge archive from the {ReleaseDownload}. diff --git a/documentation/modules/proc-downloading-kafka-bridge.adoc b/documentation/modules/proc-downloading-kafka-bridge.adoc deleted file mode 100644 index e88382825..000000000 --- a/documentation/modules/proc-downloading-kafka-bridge.adoc +++ /dev/null @@ -1,14 +0,0 @@ -// Module included in the following assemblies: -// -// assembly-kafka-bridge-quickstart.adoc - -[id='proc-downloading-kafka-bridge-{context}'] - -= Downloading a Kafka Bridge archive - -[role="_abstract"] -A zipped distribution of the Kafka Bridge is available for download. - -.Procedure - -- Download the latest version of the Kafka Bridge archive from the {ReleaseDownload}.
diff --git a/documentation/modules/proc-installing-kafka-bridge.adoc b/documentation/modules/proc-installing-http-bridge.adoc similarity index 50% rename from documentation/modules/proc-installing-kafka-bridge.adoc rename to documentation/modules/proc-installing-http-bridge.adoc index 6ed51134e..256b30376 100644 --- a/documentation/modules/proc-installing-kafka-bridge.adoc +++ b/documentation/modules/proc-installing-http-bridge.adoc @@ -1,15 +1,15 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-quickstart.adoc +// assembly-http-bridge-quickstart.adoc -[id='proc-installing-kafka-bridge-{context}'] -= Installing the Kafka Bridge +[id='proc-installing-http-bridge-{context}'] += Installing the HTTP Bridge [role="_abstract"] -Use the script provided with the Kafka Bridge archive to install the Kafka Bridge. +Use the script provided with the HTTP Bridge archive to install the HTTP Bridge. The `application.properties` file provided with the installation archive provides default configuration settings. -The following default property values configure the Kafka Bridge to listen for requests on port 8080. +The following default property values configure the HTTP Bridge to listen for requests on port 8080. .Default configuration properties [source,shell,subs=attributes+] @@ -20,13 +20,13 @@ http.port=8080 .Prerequisites -* xref:proc-downloading-kafka-bridge-{context}[The Kafka Bridge installation archive is downloaded] +* xref:proc-downloading-http-bridge-{context}[The HTTP Bridge installation archive is downloaded] .Procedure -. If you have not already done so, unzip the Kafka Bridge installation archive to any directory. +. If you have not already done so, unzip the HTTP Bridge installation archive to any directory. -. Run the Kafka Bridge script using the configuration properties as a parameter: +. 
Run the HTTP Bridge script using the configuration properties as a parameter: + For example: + @@ -39,8 +39,8 @@ For example: + [source,shell] ---- -HTTP-Kafka Bridge started and listening on port 8080 -HTTP-Kafka Bridge bootstrap servers localhost:9092 +HTTP Bridge started and listening on port 8080 +HTTP Bridge bootstrap servers localhost:9092 ---- .What to do next diff --git a/documentation/modules/proc-producing-messages-from-bridge-topics-partitions.adoc b/documentation/modules/proc-producing-messages-from-bridge-topics-partitions.adoc index 685d41731..994104ccf 100644 --- a/documentation/modules/proc-producing-messages-from-bridge-topics-partitions.adoc +++ b/documentation/modules/proc-producing-messages-from-bridge-topics-partitions.adoc @@ -1,12 +1,12 @@ // Module included in the following assemblies: // -// assembly-kafka-bridge-quickstart.adoc +// assembly-http-bridge-quickstart.adoc [id='proc-producing-messages-from-bridge-topics-partitions-{context}'] = Producing messages to topics and partitions [role="_abstract"] -Use the Kafka Bridge to produce messages to a Kafka topic in JSON format by using the topics endpoint. +Use the HTTP Bridge to produce messages to a Kafka topic in JSON format. You can produce messages to topics in JSON format by using the xref:send[topics] endpoint. You can specify destination partitions for messages in the request body. @@ -38,7 +38,7 @@ NOTE: If you deployed Strimzi on Kubernetes, you can create a topic using the `K .Procedure -. Using the Kafka Bridge, produce three messages to the topic you created: +. Using the HTTP Bridge, produce three messages to the topic you created: + [source,curl,subs=attributes+] ---- @@ -66,7 +66,7 @@ curl -X POST \ * `sales-lead-0002` is sent directly to partition 2. * `sales-lead-0003` is sent to a partition in the `bridge-quickstart-topic` topic using a round-robin method. -.
If the request is successful, the Kafka Bridge returns an `offsets` array, along with a `200` code and a `content-type` header of `application/vnd.kafka.v2+json`. For each message, the `offsets` array describes: +. If the request is successful, the HTTP Bridge returns an `offsets` array, along with a `200` code and a `content-type` header of `application/vnd.kafka.v2+json`. For each message, the `offsets` array describes: + * The partition that the message was sent to * The current message offset of the partition @@ -367,7 +367,7 @@ curl -X GET \ .What to do next -After producing messages to topics and partitions, xref:proc-creating-kafka-bridge-consumer-{context}[create a Kafka Bridge consumer]. +After producing messages to topics and partitions, xref:proc-creating-http-bridge-consumer-{context}[create an HTTP Bridge consumer].