[FLINK-36646] Test different versions of the JDK in the Flink image #910
Conversation
cc: @tomncooper & @robobario
| - "v1_18" | ||
| uses: ./.github/workflows/e2e.yaml | ||
| with: | ||
| java-version: ${{ matrix.java-version }} |
I'm wondering if it's a bit funky that we are using java-version to control both the JDK/JRE used to build and run the operator and the runtime of the Flink image. Aren't those different dimensions of the matrix?
This would show that a JDK 17 operator works with a JDK 17 Flink image, but it would not cover the current default operator runtime with a JDK 17 Flink image.
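For illustration, splitting them into independent axes might look roughly like this (a sketch with a hypothetical flink-java-version input, not the actual workflow in this PR):

```yaml
# Sketch only: operator JDK and Flink image JDK as independent matrix axes.
jobs:
  e2e_ci:
    strategy:
      matrix:
        operator-java-version: [ "11", "17" ]
        flink-java-version: [ "11", "17" ]   # hypothetical input
    uses: ./.github/workflows/e2e.yaml
    with:
      java-version: ${{ matrix.operator-java-version }}
      flink-java-version: ${{ matrix.flink-java-version }}
```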
Yeah, I had debated making a flink-tag parameter but didn't, as we would then have to manually encode the matrix.

> This would show that a JDK 17 operator works with a JDK 17 Flink image, but it would not cover the current default operator runtime with a JDK 17 Flink image.

I'm not convinced that the operator JDK really matters, so I don't think it's particularly important to have a JDK 11 operator deploying a JDK 17 Flink. I can go that direction if others see it as valuable.
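(For reference, a flink-tag parameter with a manually encoded matrix would mean listing every combination by hand, roughly like this hypothetical include: block with illustrative image tags.)

```yaml
# Sketch only: every JDK/image combination enumerated explicitly.
strategy:
  matrix:
    include:
      - java-version: "11"
        flink-tag: "1.18-java11"   # illustrative tag
      - java-version: "17"
        flink-tag: "1.18-java17"   # illustrative tag
```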
Yeah, you're right, we aren't likely to see some incompatibility arising from different JRE combinations. Maybe it would be better to hold the operator JDK constant and vary only the Flink runtime, just to make it clearer which thing is varied.
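Concretely, that would collapse the matrix to a single axis, something like (hypothetical input name again):

```yaml
strategy:
  matrix:
    flink-java-version: [ "11", "17" ]   # operator JDK stays at its default
```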
Yeah, I agree with @SamBarker. The issue here is actually not the interplay of JDK versions but rather the config supplied by the operator's logic not being compatible with (or, more accurately, not enabling) Java 17 support. So that's what we need to test.
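For context, "enabling Java 17" here comes down to the operator-generated Flink configuration carrying the JVM flags newer JDKs need. A minimal sketch, assuming a Java 17 image tag; the flag set shown is illustrative and incomplete:

```yaml
# Hypothetical FlinkDeployment snippet; the image tag and add-opens flags
# are placeholders, and the full list Flink actually needs is much longer.
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: java17-smoke-test
spec:
  image: flink:1.18-java17
  flinkVersion: v1_18
  flinkConfiguration:
    env.java.opts.all: >-
      --add-opens=java.base/java.lang=ALL-UNNAMED
      --add-opens=java.base/java.util=ALL-UNNAMED
```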
```yaml
      create-namespace:
        type: boolean
        default: false
      append-java-version:
```
Maybe this should be an optional flink-java-version, and the matrix could contain the versions we want to test.
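Something along these lines on the reusable workflow's inputs, with assumed defaults:

```yaml
# Sketch only: an optional input on e2e.yaml.
on:
  workflow_call:
    inputs:
      flink-java-version:
        type: string
        required: false
        default: ""   # empty = keep the image's default JDK
```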
Co-authored-by: Robert Young <[email protected]>
tomncooper left a comment:
LGTM, just need to update the changelog in the PR description.
E2E test failed as expected: https://github.com/apache/flink-kubernetes-operator/actions/runs/11696768142/job/32793460638?pr=910#step:9:4256
Closing this as it was merged in another PR.
What is the purpose of the change
Brief change log
Verifying this change
This change is already covered by existing tests, such as test_application_operations.sh, which should pass with each of the supported JDK versions.
Does this pull request potentially affect one of the following parts:
CustomResourceDescriptors: no
Documentation