Bump Cli-testing to 1.3.0, update kafka libraries #2341
base: development/2.14
Changes from all commits (20): 9f68638, 2e26ece, cd7fd01, c37446f, d7e565a, c728107, 5f3b575, d49a9bb, c25591a, 61fa240, 9d3b913, 3403315, b8ef05f, b5e165a, 70ba790, 585d1bd, f98827e, cc4c5e9, 37f611f, 879ebee
```diff
@@ -4,7 +4,7 @@ import { CacheHelper, Constants, Identity, IdentityEnum, S3, Utils } from 'cli-t
 import Zenko from 'world/Zenko';
 import { safeJsonParse } from './utils';
 import assert from 'assert';
-import { Admin, Kafka } from 'kafkajs';
+import { Admin } from '@platformatic/kafka';
 import {
     createBucketWithConfiguration,
     putMpuObject,
```

Comments on the import change:

Contributor (Author): Switched the kafka library. The previous ones internally relied on librdkafka, a C library. Because of that, we needed to set up quite a bit of extra tooling to make them work: node-gyp (which was throwing weird Python errors when running `yarn install` in the Codespace), plus a bunch of packages installed in the Dockerfile (you'll see them later in the PR, librd-XXX).

Contributor: notes:

Contributor: Thanks for the kafka change!
```diff
@@ -99,8 +99,24 @@ async function addUserMetadataToObject(this: Zenko, objectName: string | undefin
 async function getTopicsOffsets(topics: string[], kafkaAdmin: Admin) {
     const offsets = [];
     for (const topic of topics) {
-        const partitions: ({ high: string; low: string; })[] =
-            await kafkaAdmin.fetchTopicOffsets(topic);
+        const metadata = await kafkaAdmin.metadata({ topics: [topic] });
+        const partitionCount = metadata.topics.get(topic)?.partitionsCount ?? 0;
+        const partitionIndexes = Array.from({ length: partitionCount }, (_, i) => ({
+            partitionIndex: i,
+            timestamp: BigInt(-2),
+        }));
+        const earliestResult = await kafkaAdmin.listOffsets({
+            topics: [{ name: topic, partitions: partitionIndexes }],
+        });
+        const latestResult = await kafkaAdmin.listOffsets({
+            topics: [{ name: topic, partitions: partitionIndexes.map(p => ({ ...p, timestamp: BigInt(-1) })) }],
+        });
+        const partitions = [];
+        for (let i = 0; i < partitionCount; i++) {
+            const low = earliestResult[0]?.partitions.find(p => p.partitionIndex === i)?.offset ?? BigInt(0);
+            const high = latestResult[0]?.partitions.find(p => p.partitionIndex === i)?.offset ?? BigInt(0);
+            partitions.push({ low: String(low), high: String(high) });
+        }
         offsets.push({ topic, partitions });
     }
     return offsets;
```

Comments on lines +116 to +117:

Contributor: partitions are lists, right? So searching is not efficient...

Contributor (Author): I think we don't have a guarantee about the order, but the list should be small enough that in practice this find is not too impactful.
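For context on the timestamps in this hunk: Kafka's ListOffsets API treats a timestamp of -2 as "earliest available offset" and -1 as "latest", which is what the two listOffsets calls rely on. The review concern about the repeated find can be addressed by indexing each result once. A minimal sketch of that idea, using plain objects shaped like the diff's results (partitionIndex/offset) instead of a live Admin client; toOffsetMap and buildPartitionRanges are names made up for this illustration:

```typescript
// Sketch: index listOffsets-style results by partition once, instead of a
// linear .find() per partition. The shapes mirror the diff above; this is
// an illustration, not the @platformatic/kafka API itself.
type PartitionOffset = { partitionIndex: number; offset: bigint };

function toOffsetMap(partitions: PartitionOffset[]): Map<number, bigint> {
    return new Map(partitions.map(p => [p.partitionIndex, p.offset] as [number, bigint]));
}

function buildPartitionRanges(
    partitionCount: number,
    earliest: PartitionOffset[],
    latest: PartitionOffset[],
): { low: string; high: string }[] {
    const lowByIndex = toOffsetMap(earliest);   // one O(n) pass per result,
    const highByIndex = toOffsetMap(latest);    // then O(1) lookups below
    const partitions: { low: string; high: string }[] = [];
    for (let i = 0; i < partitionCount; i++) {
        partitions.push({
            low: String(lowByIndex.get(i) ?? BigInt(0)),
            high: String(highByIndex.get(i) ?? BigInt(0)),
        });
    }
    return partitions;
}

// Example with made-up offsets (note the results need not be ordered):
const ranges = buildPartitionRanges(
    2,
    [{ partitionIndex: 0, offset: BigInt(0) }, { partitionIndex: 1, offset: BigInt(5) }],
    [{ partitionIndex: 1, offset: BigInt(9) }, { partitionIndex: 0, offset: BigInt(3) }],
);
console.log(ranges); // [ { low: '0', high: '3' }, { low: '5', high: '9' } ]
```

As the author notes, with a handful of partitions the difference is negligible; the map only pays off for topics with many partitions.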
```diff
@@ -305,10 +321,15 @@ Then('kafka consumed messages should not take too much place on disk', { timeout
         assert.fail('Kafka cleaner did not clean the topics within the expected time');
     }, checkInterval * 10); // Timeout after 10 Kafka cleaner intervals
 
+    const kafkaAdmin = new Admin({
+        clientId: 'ctst-kafka-cleaner-check',
+        bootstrapBrokers: [this.parameters.KafkaHosts],
+    });
+
     try {
         const ignoredTopics = ['dead-letter'];
-        const kafkaAdmin = new Kafka({ brokers: [this.parameters.KafkaHosts] }).admin();
-        const topics: string[] = (await kafkaAdmin.listTopics())
+        const allTopics = await kafkaAdmin.listTopics();
+        const topics: string[] = allTopics
             .filter(t => (t.includes(this.parameters.InstanceID) &&
                 !ignoredTopics.some(e => t.includes(e))));
@@ -368,6 +389,7 @@ Then('kafka consumed messages should not take too much place on disk', { timeout
         assert(topics.length === 0, `Topics ${topics.join(', ')} still have not been cleaned`);
     } finally {
         clearTimeout(timeoutID);
+        await kafkaAdmin.close();
     }
 });
```
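Moving the client construction before the try and closing it in the finally guarantees cleanup on both the success and failure paths. A generic sketch of that shape; Closable, withClosable, and the fake admin are hypothetical names for this illustration, not part of @platformatic/kafka:

```typescript
// Generic "create, use, always close" helper mirroring the try/finally
// restructuring in the diff above.
type Closable = { close(): Promise<void> | void };

async function withClosable<R extends Closable, T>(
    resource: R,
    body: (r: R) => Promise<T>,
): Promise<T> {
    try {
        return await body(resource);
    } finally {
        // Runs whether body resolved or threw, like the PR's finally block.
        await resource.close();
    }
}

// Fake admin client that records when it is closed:
const events: string[] = [];
const fakeAdmin = {
    listTopics: async (): Promise<string[]> => ['topic-a', 'topic-b'],
    close: async (): Promise<void> => { events.push('closed'); },
};

void (async () => {
    const topics = await withClosable(fakeAdmin, admin => admin.listTopics());
    console.log(topics, events); // [ 'topic-a', 'topic-b' ] [ 'closed' ]
})();
```

The same guarantee is what `await kafkaAdmin.close()` in the finally block provides: the admin connection is released even when the topic assertion fails.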
```diff
@@ -16,6 +16,10 @@ import {
     cleanupAccount,
 } from './utils';
 
+import 'cli-testing/hooks/KeycloakSetup';
+import 'cli-testing/hooks/Logger';
+import 'cli-testing/hooks/versionTags';
+
 // HTTPS should not cause any error for CTST
 process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
```

Comment on the hooks imports:

Contributor (Author): We have BeforeAll hooks in Cli-testing; with no import, they are not run.
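The side-effect imports matter because hook registration happens when a module body is evaluated: a BeforeAll that lives in a file nobody imports is never registered. A small self-contained simulation of that mechanism (the registry and BeforeAll stand-ins are illustrative, not cucumber's real internals):

```typescript
// Simulation of why `import 'cli-testing/hooks/KeycloakSetup'` is required.
// Assumption (from the review comment): hooks are collected into a registry
// at module-evaluation time.
type Hook = () => void;
const registry: Hook[] = [];

// Stand-in for cucumber's BeforeAll: it only records the hook.
function BeforeAll(fn: Hook): void {
    registry.push(fn);
}

let keycloakSeeded = false;

// Stand-in for evaluating the KeycloakSetup module: its body runs once and
// registers its hook as a side effect.
function evaluateKeycloakSetupModule(): void {
    BeforeAll(() => { keycloakSeeded = true; });
}

// Before the "import", the runner finds no hook to execute:
registry.forEach(h => h());
console.log(keycloakSeeded); // false

// After the "import", the hook is registered and the runner executes it:
evaluateKeycloakSetupModule();
registry.forEach(h => h());
console.log(keycloakSeeded); // true
```

This is why the PR adds bare `import '…'` statements with no bindings: the import exists purely for its evaluation side effect.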
```diff
@@ -5,8 +5,7 @@ module.exports = {
     require: ['steps/**/*.ts', 'common/**/*.ts', 'world/**/*.ts'],
     paths: ['features/**/*.feature'],
     format: [
-        'progress-bar',
-        '@cucumber/pretty-formatter',
+        'pretty',
         'json:reports/cucumber-report.json',
         'html:reports/report.html',
     ],
```

Comments on the formatter change:

Contributor: why are we removing

Contributor (Author): The pretty formatter is archived (https://github.com/cucumber/cucumber-js-pretty-formatter?tab=readme-ov-file). It's integrated directly into cucumber now (that's why I removed the pretty formatter in cli-testing), and the pretty formatter is now just "pretty": https://github.com/cucumber/cucumber-js/blob/main/docs/formatters.md

Seed keycloak is now done in a BeforeAll from Cli-testing.
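Assembled from the hunk above, the cucumber profile after this change would look roughly like the sketch below; any other keys in the real file are elided:

```javascript
// Sketch of the cucumber profile after the formatter change, based only on
// the lines visible in the hunk above.
const profile = {
    require: ['steps/**/*.ts', 'common/**/*.ts', 'world/**/*.ts'],
    paths: ['features/**/*.feature'],
    format: [
        'pretty',                            // built-in formatter; replaces the
                                             // archived '@cucumber/pretty-formatter'
        'json:reports/cucumber-report.json', // machine-readable report
        'html:reports/report.html',          // standalone HTML report
    ],
};

module.exports = profile;
```

The `name:path` syntax routes a formatter's output to a file, so the pretty output goes to the terminal while JSON and HTML reports are written to `reports/`.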