
Commit f3df080

Merge branch 'beta' into DOC-1045-Iceberg-Databricks-integration
2 parents: 707b72d + be2027a

File tree

6 files changed: +163, -44 lines


.github/workflows/build.yml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 name: Build Production Site
 on:
   push:
-    branches: [main, 'v/*', shared, api, site-search, beta]
+    branches: ['v/*', shared, api, site-search]
 jobs:
   dispatch:
     runs-on: ubuntu-latest

antora.yml

Lines changed: 3 additions & 5 deletions
@@ -1,15 +1,13 @@
 name: ROOT
 title: Self-Managed
 version: 25.1
-display_version: '25.1 Beta'
-prerelease: true
 start_page: home:index.adoc
 nav:
 - modules/ROOT/nav.adoc
 asciidoc:
   attributes:
     # Date of release in the format YYYY-MM-DD
-    page-release-date: 2024-12-03
+    page-release-date: 2025-04-07
     # Only used in the main branch (latest version)
     page-header-data:
       order: 2
@@ -20,8 +18,8 @@ asciidoc:
     # We try to fetch the latest versions from GitHub at build time
     # --
 
-    full-version: 24.3.9
-    latest-redpanda-tag: 'v24.3.9'
+    full-version: 25.1.1
+    latest-redpanda-tag: 'v25.1.1'
     latest-console-tag: 'v2.8.5'
     latest-release-commit: 'afe1a3f'
     latest-operator-version: 'v2.3.8-24.3.6'

local-antora-playbook.yml

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ content:
   - url: .
     branches: HEAD
   - url: https://github.com/redpanda-data/docs
-    branches: [main, v/*, api, shared, site-search,'!v-end-of-life/*']
+    branches: [v/*, api, shared, site-search,'!v-end-of-life/*']
   - url: https://github.com/redpanda-data/cloud-docs
     branches: 'main'
   - url: https://github.com/redpanda-data/redpanda-labs

modules/console/pages/ui/programmable-push-filters.adoc

Lines changed: 151 additions & 35 deletions
@@ -2,58 +2,174 @@
 :page-aliases: console:features/programmable-push-filters.adoc, reference:console/programmable-push-filters.adoc
 // Do not put page aliases in the single-sourced content
 // tag::single-source[]
-:description: Learn how to filter Kafka records in {ui} based on your provided JavaScript code.
+:description: Learn how to filter Kafka records using custom JavaScript code within {ui}.
 
-You can use push-down filters in {ui} to search for specific records within a Kafka topic.
+You can use push-down filters in {ui} to search through large Kafka topics that may contain millions of records. Filters are JavaScript functions executed on the backend, evaluating each record individually. Your function must return a boolean:
 
-Push-down filters are TypeScript/JavaScript function bodies that you define in {ui} and that are executed on the backend for
-every individual record in a topic. The code must return a boolean. If your code returns `true`, the backend sends the record to the results in the frontend.
-Otherwise the record is skipped and {ui} continues to consume records until either the selected number
-of maximum search results or the end of the topic has been reached.
+* `true`: record is included in the frontend results.
+* `false`: record is skipped.
 
-On a topic's *Messages* page, click *Add filter* > *JavaScript Filter*.
+Multiple filters combine logically with `AND` conditions.
 
-{ui} can inject the following properties into your function, which you can use in your filter code:
+== Add a JavaScript filter
 
-* `partitionId` - The record's partition ID
-* `offset` - The record's offset within its partition
-* `key` - The record's key in its decoded form
-* `value` - The record's value in its decoded form
-* `headers` - The record's header value in its decoded form
+To add a JavaScript filter:
 
-NOTE: Keys, values, and headers are passed into your JavaScript code in their decoded form. The
-deserialization logic (for example, decode an Avro serialized byte array to a JSON object) is applied first, before injecting it into
-the JavaScript function. If your record is presented as a JSON object in the UI, you can also access it
-like a JavaScript object in your filter code.
+. Navigate to the topic's *Messages* page.
+. Click *Add filter > JavaScript Filter*.
+. Define your JavaScript filtering logic in the provided input area.
 
-Suppose you have a series of Avro, JSON, or Protobuf encoded record values that deserialize to JSON objects like this:
+ifndef::env-cloud[]
+image::ROOT:console:js-filter.png[alt="JavaScript filter in {ui}"]
+endif::[]
 
-[,json]
+== Resource usage and performance
+JavaScript filters are executed on the backend, consuming CPU and network resources. The performance of your filter depends on the complexity of your JavaScript code and the volume of data being processed.
+Complex JavaScript logic or large data volumes may increase CPU load and network usage.
+
+== Available JavaScript properties
+
+{ui} injects these properties into your JavaScript context:
+
+[cols="1a,2a,1a"]
+|===
+| Property | Description | Type
+
+| `headers` | Record headers as key-value pairs (ArrayBuffers) | Object
+| `key` | Decoded record key | String
+| `keySchemaID` | Schema Registry ID for key (if present) | Number
+| `partitionId` | Partition ID of the record | Number
+| `offset` | Record offset within partition | Number
+| `timestamp` | Timestamp as JavaScript Date object | Date
+| `value` | Decoded record value | Object/String
+| `valueSchemaID` | Schema Registry ID for value (if present) | Number
+|===
+
+NOTE: Values, keys, and headers are deserialized before being injected into your script.
+
+== JavaScript filter examples
+
+=== Filter by header value
+
+*Scenario:* Records tagged with headers specifying customer plan type.
+
+.Sample header data (string value)
+[source,json]
+----
+headers: {
+  "plan_type": "premium"
+}
+----
+
+.JavaScript filter
+[source,javascript]
+----
+let headerValue = headers["plan_type"];
+if (headerValue) {
+  let stringValue = String.fromCharCode(...new Uint8Array(headerValue));
+  return stringValue === "premium";
+}
+return false;
+----
+
+*Scenario:* Records include a header with JSON-encoded customer metadata.
+
+.Sample header data (JSON value)
+[source,json]
+----
+headers: {
"customer": "{"orgID":"123-abc","name":"ACME Inc."}"
+}
+----
+
+.JavaScript filter
+[source,javascript]
+----
+let headerValue = headers["customer"];
+if (headerValue) {
+  let stringValue = String.fromCharCode(...new Uint8Array(headerValue));
+  let valueObj = JSON.parse(stringValue);
+  return valueObj["orgID"] === "123-abc";
+}
+return false;
+----
+
+=== Filter by timestamp
+
+*Scenario:* Retrieve records from a promotional event.
+
+.JavaScript filter
+[source,javascript]
+----
+return timestamp.getMonth() === 10 && timestamp.getDate() === 24;
+----
+
+=== Filter by schema ID
+
+*Scenario:* Filter customer activity records based on Avro schema version.
+
+.JavaScript filter
+[source,javascript]
+----
+return valueSchemaID === 204;
+----
+
+=== Filter JSON record values
+
+*Scenario:* Filter transactions by customer ID.
+
+.Sample JSON record
+[source,json]
 ----
 {
-  "event_type": "BASKET_ITEM_ADDED",
-  "event_id": "777036dd-1bac-499c-993a-8cc86cee3ccc"
-  "item": {
-    "id": "895e443a-f1b7-4fe5-ad66-b9adfe5420b9",
-    "name": "milk"
+  "transaction_id": "abc123",
+  "customer_id": "cust789",
+  "amount": 59.99
+}
+----
+
+.JavaScript filter (top-level property)
+[source,javascript]
+----
+return value.customer_id === "cust789";
+----
+
+*Scenario:* Filter orders by item availability.
+
+.Sample JSON record
+[source,json]
+----
+{
+  "order_id": "ord456",
+  "inventory": {
+    "item_id": "itm001",
+    "status": "in_stock"
   }
 }
 ----
 
-[,ts]
+.JavaScript filter (nested property)
+[source,javascript]
 ----
-return value.item.id == "895e443a-f1b7-4fe5-ad66-b9adfe5420b9"
+return value.inventory.status === "in_stock";
 ----
 
-When the filter function returns `true`, the record is sent to the front end. If you use more than one filter function at the same time, filters are combined with a logical `AND`, so records must pass every filter. The offset specified also is effectively combined using an `AND` operator.
+*Scenario:* Filter products missing price information.
 
-== Resource usage and performance
+.JavaScript filter (property absence)
+[source,javascript]
+----
+return !value.hasOwnProperty("price");
+----
+
+=== Filter string keys
 
-You can use the filter engine against topics with millions of records, as the filter code is evaluated in the backend
-where more resources are available. However, while the filter engine is fairly efficient, it could potentially consume all available CPU
-resources and cause significant network traffic due to the number of consumed Kafka records.
+*Scenario:* Filter sensor data records by IoT device ID.
+
+.JavaScript filter
+[source,javascript]
+----
+return key === "sensor-device-1234";
+----
 
-Usually, performance is constrained by available CPU resources. Depending on the JavaScript code and the records, the expected
-performance is around 15,000 -20,000 filtered records per second for each available core. The request is only processed on a single instance of {ui} and
-cannot be shared across multiple instances.
-// end::single-source[]
+// end::single-source[]
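
The updated page states that multiple filters are combined with a logical `AND`. As an illustrative sketch only (not part of this commit's diff), the same conditions can also be combined inside a single filter; this assumes the injected `key`, `value`, and `timestamp` properties documented in the table above:

[source,javascript]
----
// Illustrative only: combine several of the documented conditions in one filter.
// Keep November 24 records from one device whose payload carries a price field.
if (key !== "sensor-device-1234") {
  return false;
}
if (timestamp.getMonth() !== 10 || timestamp.getDate() !== 24) {
  return false;
}
return value.hasOwnProperty("price");
----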

modules/deploy/pages/deployment-option/self-hosted/manual/production/production-deployment-automation.adoc

Lines changed: 3 additions & 2 deletions
@@ -178,8 +178,9 @@ If you didn't use Terraform, then you must manually update the `[redpanda]` sect
 [,ini]
 ----
 [redpanda]
-ip ansible_user=ssh_user ansible_become=True private_ip=pip id=0
-ip ansible_user=ssh_user ansible_become=True private_ip=pip id=1
+ip ansible_user=ssh_user ansible_become=True private_ip=pip
+ip ansible_user=ssh_user ansible_become=True private_ip=pip
+ip ansible_user=ssh_user ansible_become=True private_ip=pip
 
 [monitor]
 ip ansible_user=ssh_user ansible_become=True private_ip=pip id=1

modules/get-started/pages/release-notes/redpanda.adoc

Lines changed: 4 additions & 0 deletions
@@ -72,6 +72,10 @@ Support for https://protobuf.dev/reference/protobuf/google.protobuf/[Protobuf we
 
 You now can configure Kafka clients to authenticate using xref:manage:security/authentication#enable-sasl.adoc[SASL/PLAIN] with a single account using the same username and password. Unlike SASL/SCRAM, which uses a challenge response with hashed credentials, SASL/PLAIN transmits plaintext passwords. You enable SASL/PLAIN by appending `PLAIN` to the list of SASL mechanisms.
 
+== Pause and resume uploads
+
+Redpanda now supports xref:manage:tiered-storage.adoc#pause-and-resume-uploads[pausing and resuming uploads] to object storage when running Tiered Storage, with no risk to data consistency or data loss. You can use the xref:reference:properties/object-storage-properties.adoc#cloud_storage_enable_segment_uploads[`cloud_storage_enable_segment_uploads`] property to pause or resume uploads to help you troubleshoot any issues that occur in your cluster during uploads.
+
 == Metrics
 
 The following metrics are new in this version:
