
Commit 7b2ce9a

Add v0.8.0 docs

Copied from kroxylicious/kroxylicious@dd852e2
Signed-off-by: Robert Young <[email protected]>

1 parent: c723799

28 files changed: +2331 −0 lines

_data/kroxylicious.yml

Lines changed: 2 additions & 0 deletions

@@ -1,6 +1,8 @@
 versions:
 - title: 'Development'
   url: '/kroxylicious'
+- title: 'v0.8.0'
+  url: '/docs/v0.8.0/'
 - title: 'v0.7.0'
   url: '/docs/v0.7.0/'
 - title: 'v0.6.0'
Lines changed: 44 additions & 0 deletions

// AsciiDoc settings
:data-uri!:
:doctype: book
:experimental:
:idprefix:
:imagesdir: images
:numbered:
:sectanchors!:
:sectnums:
:source-highlighter: highlight.js
:toc: left
:linkattrs:
:toclevels: 2
:icons: font

// Latest version
:ProductVersion: 0.8
:gitRef: releases/tag/v0.8.0
:ApicurioVersion: 2.6.x

// Proxy links
:github: https://github.com/kroxylicious/kroxylicious
:github-releases: https://github.com/kroxylicious/kroxylicious/{gitRef}
:github-issues: https://github.com/kroxylicious/kroxylicious/issues[Kroxylicious issues^]
:api-javadoc: https://javadoc.io/doc/io.kroxylicious/kroxylicious-api/{ProductVersion}
:kms-api-javadoc: https://javadoc.io/doc/io.kroxylicious/kroxylicious-kms/{ProductVersion}
:encryption-api-javadoc: https://javadoc.io/doc/io.kroxylicious/kroxylicious-encryption/{ProductVersion}
:start-script: https://github.com/kroxylicious/kroxylicious/blob/{gitRef}/kroxylicious-app/src/assembly/kroxylicious-start.sh

// Kafka links
:ApacheKafkaSite: https://kafka.apache.org[Apache Kafka website^]
:kafka-protocol: https://kafka.apache.org/protocol.html

// Java links
:java-17-javadoc: https://docs.oracle.com/en/java/javase/17/docs/api

// Vault links
:hashicorp-vault: https://developer.hashicorp.com/vault

// AWS links
:aws: https://docs.aws.amazon.com/

// Apicurio links
:apicurio-docs: https://www.apicur.io/registry/docs/apicurio-registry/{ApicurioVersion}/
Lines changed: 5 additions & 0 deletions

= Trademark notice

* HashiCorp Vault is a registered trademark of HashiCorp, Inc.
* AWS Key Management Service is a trademark of Amazon.com, Inc. or its affiliates.
* Apache Kafka is a registered trademark of The Apache Software Foundation.
Lines changed: 19 additions & 0 deletions

// file included in the following:
//
// assembly-record-encryption-filter.adoc

[id='assembly-aws-kms-{context}']
= Setting up AWS KMS

[role="_abstract"]
To use {aws}/kms/latest/developerguide/overview.html[AWS Key Management Service] with the Record Encryption filter, use the following setup:

* Establish an AWS KMS aliasing convention for keys
* Configure the AWS KMS
* Create AWS KMS keys

You need a privileged AWS user, capable of creating users and policies, to perform the setup.

include::../modules/record-encryption/aws-kms/con-aws-kms-setup.adoc[leveloffset=+1]
include::../modules/record-encryption/aws-kms/con-aws-kms-service-config.adoc[leveloffset=+1]
include::../modules/record-encryption/aws-kms/con-aws-kms-key-creation.adoc[leveloffset=+1]
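The key-creation step above could be sketched with the AWS CLI along the following lines. This is a hypothetical illustration, not the procedure from the included modules: the `KEK_<topic>` alias convention and the example topic name are assumptions for the sketch.

```shell
# Hypothetical sketch: create a symmetric KMS key to act as a Key
# Encryption Key (KEK) for one topic, then attach an alias so the
# filter can resolve the key by topic name. The alias convention
# (alias/KEK_<topic>) is illustrative only.
KEY_ID=$(aws kms create-key \
  --description "KEK for topic 'orders'" \
  --query KeyMetadata.KeyId --output text)

aws kms create-alias \
  --alias-name "alias/KEK_orders" \
  --target-key-id "$KEY_ID"
```

Repeating this per topic gives each topic its own KEK, which keeps key rotation and access control independent between topics.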
Lines changed: 14 additions & 0 deletions

// file included in the following:
//
// index.adoc

[id='assembly-built-in-filters-{context}']
= Built-in filters

[role="_abstract"]
Kroxylicious comes with a suite of built-in filters designed to enhance the functionality and security of your Kafka clusters.

include::assembly-record-encryption-filter.adoc[leveloffset=+1]
include::assembly-multi-tenancy-filter.adoc[leveloffset=+1]
include::assembly-record-validation-filter.adoc[leveloffset=+1]
include::../modules/oauthbearer/con-oauthbearer.adoc[leveloffset=+1]
Lines changed: 17 additions & 0 deletions

// file included in the following:
//
// assembly-record-encryption-filter.adoc

[id='assembly-hashicorp-vault-{context}']
= Setting up HashiCorp Vault

[role="_abstract"]
To use HashiCorp Vault with the Record Encryption filter, use the following setup:

* Enable the Transit secrets engine, as the Record Encryption filter relies on its APIs.
* Create a Vault policy specifically for the filter, with permissions for generating and decrypting Data Encryption Keys (DEKs) for envelope encryption.
* Obtain a Vault token that includes the filter policy.

include::../modules/record-encryption/hashicorp-vault/con-vault-setup.adoc[leveloffset=+1]
include::../modules/record-encryption/hashicorp-vault/con-vault-service-config.adoc[leveloffset=+1]
include::../modules/record-encryption/hashicorp-vault/con-vault-key-creation.adoc[leveloffset=+1]
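The three bullets above might translate into `vault` CLI commands roughly as follows. This is a hedged sketch, not the documented procedure: the policy name and the exact Transit paths granted are assumptions for illustration, and the included modules remain the authority on the policy the filter actually requires.

```shell
# Hypothetical sketch of the Vault-side setup; policy name and paths
# are illustrative assumptions, not the filter's documented policy.

# 1. Enable the Transit secrets engine
vault secrets enable transit

# 2. Policy granting only what envelope encryption needs:
#    generating DEKs under named KEKs, and decrypting them
vault policy write kroxylicious-record-encryption - <<'EOF'
path "transit/datakey/plaintext/*" {
  capabilities = ["update"]
}
path "transit/decrypt/*" {
  capabilities = ["update"]
}
EOF

# 3. Token scoped to that policy, for the filter's KMS configuration
vault token create -policy=kroxylicious-record-encryption -orphan
```

Scoping the token to a single-purpose policy keeps the proxy's credentials least-privileged: it can work with DEKs but cannot read or delete the KEKs themselves.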
Lines changed: 31 additions & 0 deletions

// file included in the following:
//
// assembly-built-in-filters.adoc

[id='assembly-multi-tenancy-filter-{context}']
= (Preview) Multi-tenancy filter

[role="_abstract"]
Kroxylicious's Multi-tenancy filter presents a single Kafka cluster to tenants as if it were multiple clusters.
Operations are isolated to a single tenant by prefixing resources with an identifier.

NOTE: This filter is currently in incubation and available as a preview.
We do not recommend using it in a production environment.

The Multi-tenancy filter works by intercepting all Kafka RPCs (remote procedure calls) that reference resources, such as topic names and consumer group names:

Request path:: On the request path, resource names are prefixed with a tenant identifier.
Response path:: On the response path, the prefix is removed.

Kafka RPCs that list resources are filtered so that only resources belonging to the tenant are returned, effectively creating a private cluster experience for each tenant.

To set up the filter, configure it in Kroxylicious.

IMPORTANT: While the Multi-tenancy filter isolates operations on resources, it does not isolate user identities across tenants.
User authentication and ACLs (Access Control Lists) are shared across all tenants, meaning that identity is not scoped to individual tenants.
For more information on open issues related to this filter, see {github-issues}.

NOTE: For more information on Kafka's support for multi-tenancy, see the {ApacheKafkaSite}.

//configuring the multi-tenancy filter
include::../modules/multi-tenancy/proc-multi-tenancy.adoc[leveloffset=+1]
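The prefix-and-strip behaviour on the request and response paths can be illustrated in a few lines of shell. This is a conceptual sketch only, not the filter's code; the tenant and topic names are made up.

```shell
# Illustration of the renaming scheme, not the filter's implementation.
tenant="tenant-a"

# Request path: the client's topic name is prefixed with the tenant
# identifier before the RPC is forwarded to the broker
upstream_topic="${tenant}-orders"
echo "$upstream_topic"               # tenant-a-orders

# Response path: the prefix is stripped before the name is returned
# to the client, so the tenant never sees it
client_topic="${upstream_topic#"${tenant}"-}"
echo "$client_topic"                 # orders
```

Because the broker only ever sees prefixed names, two tenants can each create a topic called `orders` without colliding.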
Lines changed: 24 additions & 0 deletions

// file included in the following:
//
// index.adoc

[id='assembly-overview-{context}']
= Kroxylicious overview

[role="_abstract"]
Kroxylicious is an Apache Kafka protocol-aware ("Layer 7") proxy designed to enhance Kafka-based systems.
Through its filter mechanism it allows additional behavior to be introduced into a Kafka-based system without requiring changes to either your applications or the Kafka cluster itself.
Built-in filters are provided as part of the solution.

Functioning as an intermediary, Kroxylicious mediates communication between a Kafka cluster and its clients.
It takes on the responsibility of receiving, filtering, and forwarding messages.

An API provides a convenient means for implementing custom logic within the proxy.

[role="_additional-resources"]
.Additional resources

* {ApacheKafkaSite}

//broker config (upstream)
include::../modules/con-proxy-overview.adoc[leveloffset=+1]
Lines changed: 32 additions & 0 deletions

// file included in the following:
//
// assembly-built-in-filters.adoc

[id='assembly-record-encryption-filter-{context}']
= Record Encryption filter

[role="_abstract"]
Kroxylicious's Record Encryption filter enhances the security of Kafka messages.
The filter uses industry-standard cryptographic techniques to apply encryption to Kafka messages, ensuring the confidentiality of data stored in the Kafka cluster.
Kroxylicious centralizes topic-level encryption, ensuring streamlined encryption across Kafka clusters.

There are three steps to using the filter:

1. Setting up a Key Management System (KMS).
2. Establishing the encryption keys within the KMS that will be used to encrypt the topics.
3. Configuring the filter within Kroxylicious.

The filter relies on a Key Management Service (KMS) implementation, which has ultimate responsibility for the safe storage of sensitive key material.
Currently, Kroxylicious integrates with either HashiCorp Vault or AWS Key Management Service.
You can provide implementations for your own KMS systems.
Additional KMS support will be added based on demand.

//overview of the record encryption process
include::../modules/record-encryption/con-record-encryption-overview.adoc[leveloffset=+1]
//setting up hashicorp vault
include::assembly-hashicorp-vault.adoc[leveloffset=+1]
//setting up AWS KMS
include::assembly-aws-kms.adoc[leveloffset=+1]
//configuring the record encryption filter
include::../modules/record-encryption/proc-configuring-record-encryption-filter.adoc[leveloffset=+1]
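Once a KMS is in place and the encryption keys exist, the third step is wiring the filter into the proxy configuration. The following sketch writes a hypothetical configuration fragment; every key name in the YAML below is an assumption made for illustration, not the documented schema (the included configuration module is the authority).

```shell
# Write a hypothetical Record Encryption filter configuration.
# All YAML keys below are illustrative assumptions, not the
# documented configuration schema.
cat > record-encryption-config.yaml <<'EOF'
filters:
  - type: RecordEncryption
    config:
      kms: VaultKmsService              # or an AWS KMS implementation
      kmsConfig:
        vaultTransitEngineUrl: https://vault.example.com:8200/v1/transit
        vaultToken:
          passwordFile: /etc/vault/token
      selector: TemplateKekSelector     # maps topics to KEK names
      selectorConfig:
        template: KEK_${topicName}      # assumes one KEK per topic
EOF
echo "wrote record-encryption-config.yaml"
```

The general shape, a KMS service plus a selector that maps each topic to a key, follows from the three steps above; only the concrete property names are invented here.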
Lines changed: 27 additions & 0 deletions

// file included in the following:
//
// assembly-built-in-filters.adoc

[id='assembly-record-validation-filter-{context}']
= (Preview) Record Validation filter

[role="_abstract"]
The Record Validation filter validates records sent by a producer.
Only records that pass the validation are sent to the broker.
This filter can be used to prevent _poison messages_, such as those containing corrupted data or invalid formats, from entering the Kafka system, which may otherwise lead to consumer failure.

The filter currently supports two modes of operation:

1. Schema Validation ensures the content of the record conforms to a schema stored in an https://www.apicur.io/registry/[Apicurio Registry].
2. JSON Syntax Validation ensures the content of the record contains syntactically valid JSON.

Validation rules can be applied to check the content of the Kafka record key or value.

If the validation fails, the produce request is rejected and the producing application receives an error response.
The broker will not receive the rejected records.

NOTE: This filter is currently in incubation and available as a preview.
We do not recommend using it in a production environment.

//configuring the record-validation filter
include::../modules/record-validation/proc-record-validation.adoc[leveloffset=+1]
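The JSON Syntax Validation mode can be approximated outside the proxy with a simple check. This is an illustration of the accept/reject behaviour only, not the filter's implementation; the example payloads are made up.

```shell
# Illustration only: the filter performs this kind of syntax check
# inside the proxy, before the produce request reaches the broker.
check() {
  if printf '%s' "$1" | python3 -m json.tool >/dev/null 2>&1; then
    echo "forwarded to broker"
  else
    echo "produce request rejected"
  fi
}

check '{"order_id": 42}'    # forwarded to broker
check '{"order_id": 42'     # produce request rejected (unbalanced brace)
```

Rejecting at the proxy means the malformed record is never stored, so downstream consumers cannot be poisoned by it.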
