feat: add example for GKE confidential nodes with GPU #2426
base: main
Conversation
@apeabody could you please run the build for this PR?

/gcbrun

@apeabody I don't have access to the GCP cloud build project. Could you please send me the error?

@apeabody could you please re-run the build?

/gcbrun

@apeabody I think the build wasn't properly triggered, could you please take a look?

/gcbrun |
Might have been too quick after the merge, it's running now.
Thanks @arthurlapertosa!
/gemini review
Code Review
This pull request introduces a new example for creating a GKE cluster with confidential nodes and GPUs. This is a valuable addition. The changes include modifications to several Terraform modules to support confidential_instance_type and guest_accelerator configurations, along with the new example files and corresponding integration tests. The implementation is mostly correct, but I've found a few issues related to version constraints, external dependencies, and a bug in the for_each logic that need to be addressed.
enabled                     = confidential_nodes.enabled
confidential_instance_type = confidential_nodes.confidential_instance_type
There seems to be an issue with how the values are read from the for_each iterator. The iterator of a dynamic block is named after the block itself (confidential_nodes in this case), and the current item is accessed through .value. The correct way to read the properties of the object produced by the for_each list is confidential_nodes.value.enabled and confidential_nodes.value.confidential_instance_type.
This issue is present in every file where the pattern is repeated, including the root cluster.tf and the cluster.tf files under modules/.
enabled = confidential_nodes.value.enabled
confidential_instance_type = confidential_nodes.value.confidential_instance_type
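For reference, a minimal sketch of how the full dynamic block would look with the .value accessor, assuming the same for_each expression used in this PR's cluster.tf; the surrounding node pool resource and its each.value are assumed from context and not shown:

# Sketch only: dynamic block as it would sit inside the node pool resource,
# reading the current for_each element through the block-named iterator.
dynamic "confidential_nodes" {
  for_each = lookup(each.value, "enable_confidential_nodes", null) != null ? [{
    enabled                    = each.value.enable_confidential_nodes
    confidential_instance_type = lookup(each.value, "confidential_instance_type", null)
  }] : []
  content {
    enabled                    = confidential_nodes.value.enabled
    confidential_instance_type = confidential_nodes.value.confidential_instance_type
  }
}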
The syntax is different because of the for_each expression above it (for_each = lookup(each.value, "enable_confidential_nodes", null) != null ? [{ enabled = each.value.enable_confidential_nodes, confidential_instance_type = lookup(each.value, "confidential_instance_type", null) }] : []).
Once that object ({ enabled = each.value.enable_confidential_nodes, confidential_instance_type = lookup(each.value, "confidential_instance_type", null) }) is built, it already carries the value from lookup(each.value, "confidential_instance_type", null).
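To illustrate that reasoning, a standalone sketch of what the for_each expression evaluates to; node_pool and its values are hypothetical stand-ins for each.value, not taken from this PR:

# Sketch only: the ternary yields a single-element list holding one object
# with both attributes already resolved, or an empty list when
# enable_confidential_nodes is not set on the node pool entry.
locals {
  node_pool = {
    enable_confidential_nodes  = "true"
    confidential_instance_type = "SEV"
  }

  confidential_nodes_for_each = lookup(local.node_pool, "enable_confidential_nodes", null) != null ? [{
    enabled                    = local.node_pool.enable_confidential_nodes
    confidential_instance_type = lookup(local.node_pool, "confidential_instance_type", null)
  }] : []
  # => [{ enabled = "true", confidential_instance_type = "SEV" }]
}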
@apeabody I don't have access to the build error, could you please send it to me?

@apeabody could you please rerun the build and the checks?

/gcbrun

It's expired; triggered a fresh test.
Thanks - this change looks pretty good, but we still need to enable the new test. Can you please add it to the cloudbuild config, similar to https://github.com/terraform-google-modules/terraform-google-kubernetes-engine/pull/2458/files#diff-35f3e17c28a8a8de710f4be35fb5448d0e33c2d9b89fb2be7499b9830f890d12