
Add resource for Cluster Director (hypercomputecluster.googleapis.com) Cluster resource #16274

Open

annuay-google wants to merge 7 commits into GoogleCloudPlatform:main from annuay-google:cluster-director-terraform

Conversation


@annuay-google commented Feb 2, 2026

Creating a new Terraform resource for Cluster Director (https://docs.cloud.google.com/cluster-director/docs):

`google_hypercomputecluster_cluster`
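
For context, a minimal configuration for the proposed resource might look like the sketch below. This is not a verified working example: every value (project, network, subnetwork, and bucket names) is an illustrative placeholder, and the block and field names are taken from the schema skeleton that the review tooling reports later in this thread. Required top-level arguments such as a cluster name or location are not shown in that skeleton, so they are omitted here.

```hcl
# Hypothetical minimal configuration for the proposed resource.
# All values are placeholders; field names follow the CI-generated
# schema skeleton reported in this PR, not verified documentation.
resource "google_hypercomputecluster_cluster" "example" {
  network_resources {
    config {
      existing_network {
        # Placeholder self-links for a pre-existing VPC and subnet.
        network    = "projects/my-project/global/networks/my-vpc"
        subnetwork = "projects/my-project/regions/us-central1/subnetworks/my-subnet"
      }
    }
  }

  storage_resources {
    config {
      existing_bucket {
        # Placeholder name of a pre-existing Cloud Storage bucket.
        bucket = "my-existing-bucket"
      }
    }
  }
}
```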

@github-actions bot requested a review from hao-nan-li on February 2, 2026 at 17:52

github-actions bot commented Feb 2, 2026

Hello! I am a robot. Tests will require approval from a repository maintainer to run.

Googlers: For automatic test runs see go/terraform-auto-test-runs.

@hao-nan-li, a repository maintainer, has been assigned to review your changes. If you have not received review feedback within 2 business days, please leave a comment on this PR asking them to take a look.

You can help make sure that review is quick by doing a self-review and by running impacted tests locally.

@annuay-google force-pushed the cluster-director-terraform branch from de8ae0b to fc4cb0b on February 9, 2026 at 12:54
@annuay-google force-pushed the cluster-director-terraform branch from a7702f7 to 94f500f on February 10, 2026 at 09:56
@modular-magician
Collaborator

Hi there, I'm the Modular magician. I've detected the following information about your changes:

Diff report

Your PR generated some diffs in downstreams - here they are.

google provider: Diff ( 16 files changed, 5699 insertions(+), 2 deletions(-))
google-beta provider: Diff ( 14 files changed, 5689 insertions(+), 2 deletions(-))
terraform-google-conversion: Diff ( 3 files changed, 1679 insertions(+))
Open in Cloud Shell: Diff ( 4 files changed, 160 insertions(+))

Missing test report

Your PR includes resource fields which are not covered by any test.

Resource: google_hypercomputecluster_cluster (3 total tests)
Please add an acceptance test which includes these fields. The test should include the following:

resource "google_hypercomputecluster_cluster" "primary" {
  compute_resources {
    config {
      new_flex_start_instances {
        machine_type = # value needed
        max_duration = # value needed
        zone         = # value needed
      }
      new_reserved_instances {
        reservation = # value needed
      }
    }
  }
  network_resources {
    config {
      existing_network {
        network    = # value needed
        subnetwork = # value needed
      }
    }
  }
  storage_resources {
    config {
      existing_bucket {
        bucket = # value needed
      }
      existing_filestore {
        filestore = # value needed
      }
      existing_lustre {
        lustre = # value needed
      }
      new_bucket {
        autoclass {
          enabled                = # value needed
          terminal_storage_class = # value needed
        }
        hierarchical_namespace {
          enabled = # value needed
        }
      }
      new_filestore {
        description = # value needed
        file_shares {
          capacity_gb = # value needed
          file_share  = # value needed
        }
        filestore = # value needed
        protocol  = # value needed
        tier      = # value needed
      }
      new_lustre {
        capacity_gb = # value needed
        description = # value needed
        filesystem  = # value needed
        lustre      = # value needed
      }
    }
    id = # value needed
  }
}

Missing service labels

The following new resources do not have corresponding service labels:

  • google_hypercomputecluster_cluster

If you believe this detection to be incorrect, please raise the concern with your reviewer. Googlers: This error is safe to ignore once you've completed go/fix-missing-service-labels.
An override-missing-service-label label can be added to allow merging.

@modular-magician
Copy link
Collaborator

Hi there, I'm the Modular magician. I've detected the following information about your changes:

Diff report

Your PR generated some diffs in downstreams - here they are.

google provider: Diff ( 16 files changed, 5699 insertions(+), 2 deletions(-))
google-beta provider: Diff ( 14 files changed, 5689 insertions(+), 2 deletions(-))
terraform-google-conversion: Diff ( 3 files changed, 1679 insertions(+))
Open in Cloud Shell: Diff ( 4 files changed, 160 insertions(+))

Missing test report

Your PR includes resource fields which are not covered by any test.

Resource: google_hypercomputecluster_cluster (3 total tests)
Please add an acceptance test which includes these fields. The test should include the following:

resource "google_hypercomputecluster_cluster" "primary" {
  compute_resources {
    config {
      new_flex_start_instances {
        machine_type = # value needed
        max_duration = # value needed
        zone         = # value needed
      }
      new_reserved_instances {
        reservation = # value needed
      }
    }
  }
  network_resources {
    config {
      existing_network {
        network    = # value needed
        subnetwork = # value needed
      }
    }
  }
  storage_resources {
    config {
      existing_bucket {
        bucket = # value needed
      }
      existing_filestore {
        filestore = # value needed
      }
      existing_lustre {
        lustre = # value needed
      }
      new_bucket {
        autoclass {
          enabled                = # value needed
          terminal_storage_class = # value needed
        }
        hierarchical_namespace {
          enabled = # value needed
        }
      }
      new_filestore {
        description = # value needed
        file_shares {
          capacity_gb = # value needed
          file_share  = # value needed
        }
        filestore = # value needed
        protocol  = # value needed
        tier      = # value needed
      }
      new_lustre {
        capacity_gb = # value needed
        description = # value needed
        filesystem  = # value needed
        lustre      = # value needed
      }
    }
  }
}

Missing service labels

The following new resources do not have corresponding service labels:

  • google_hypercomputecluster_cluster

If you believe this detection to be incorrect, please raise the concern with your reviewer. Googlers: This error is safe to ignore once you've completed go/fix-missing-service-labels.
An override-missing-service-label label can be added to allow merging.

@modular-magician
Collaborator

Tests analytics

Total tests: 6012
Passed tests: 5374
Skipped tests: 634
Affected tests: 4

Click here to see the affected service packages

All service packages are affected

Action taken

Found 4 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Click here to see the affected tests
  • TestAccAccessContextManager__service_perimeter_dry_run_egress_policy
  • TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy
  • TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample
  • TestAccHypercomputeclusterCluster_update

Get to know how VCR tests work

@modular-magician
Collaborator

Tests analytics

Total tests: 6013
Passed tests: 5374
Skipped tests: 634
Affected tests: 5

Click here to see the affected service packages

All service packages are affected

Action taken

Found 5 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Click here to see the affected tests
  • TestAccAccessContextManager__service_perimeter_dry_run_egress_policy
  • TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy
  • TestAccFolderIamPolicy_basic
  • TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample
  • TestAccHypercomputeclusterCluster_update

Get to know how VCR tests work

@modular-magician
Collaborator

🟢 Tests passed during RECORDING mode:
TestAccAccessContextManager__access_level [Debug log]
TestAccAccessContextManager__access_level_condition [Debug log]
TestAccAccessContextManager__access_level_custom [Debug log]
TestAccAccessContextManager__access_level_full [Debug log]
TestAccAccessContextManager__access_levels [Debug log]
TestAccAccessContextManager__access_policy_scoped [Debug log]
TestAccAccessContextManager__service_perimeter [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy [Debug log]

🟢 No issues found for passed tests after REPLAYING rerun.


🔴 Tests failed during RECORDING mode:
TestAccAccessContextManager__access_policy [Error message] [Debug log]
TestAccAccessContextManager__authorized_orgs_desc [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_egress_policy [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_update [Error message] [Debug log]
TestAccAccessContextManager__service_perimeters [Error message] [Debug log]
TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample [Error message] [Debug log]
TestAccHypercomputeclusterCluster_update [Error message] [Debug log]

🔴 Errors occurred during RECORDING mode. Please fix them to complete your PR.

View the build log or the debug log for each test

@modular-magician
Collaborator

🟢 Tests passed during RECORDING mode:
TestAccAccessContextManager__access_level_full [Debug log]
TestAccAccessContextManager__access_policy_scoped [Debug log]

🟢 No issues found for passed tests after REPLAYING rerun.


🔴 Tests failed during RECORDING mode:
TestAccAccessContextManager__access_level [Error message] [Debug log]
TestAccAccessContextManager__access_level_condition [Error message] [Debug log]
TestAccAccessContextManager__access_level_custom [Error message] [Debug log]
TestAccAccessContextManager__access_levels [Error message] [Debug log]
TestAccAccessContextManager__access_policy [Error message] [Debug log]
TestAccAccessContextManager__authorized_orgs_desc [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_egress_policy [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_update [Error message] [Debug log]
TestAccAccessContextManager__service_perimeters [Error message] [Debug log]
TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample [Error message] [Debug log]
TestAccHypercomputeclusterCluster_update [Error message] [Debug log]

🔴 Errors occurred during RECORDING mode. Please fix them to complete your PR.

View the build log or the debug log for each test

@modular-magician
Collaborator

🟢 Tests passed during RECORDING mode:
TestAccAccessContextManager__access_level [Debug log]
TestAccAccessContextManager__access_level_custom [Debug log]
TestAccAccessContextManager__access_level_full [Debug log]
TestAccAccessContextManager__access_levels [Debug log]
TestAccAccessContextManager__access_policy [Debug log]
TestAccAccessContextManager__access_policy_scoped [Debug log]
TestAccAccessContextManager__authorized_orgs_desc [Debug log]
TestAccAccessContextManager__service_perimeter [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_egress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_update [Debug log]
TestAccAccessContextManager__service_perimeters [Debug log]
TestAccFolderIamPolicy_basic [Debug log]

🟢 No issues found for passed tests after REPLAYING rerun.


🔴 Tests failed during RECORDING mode:
TestAccAccessContextManager__access_level_condition [Error message] [Debug log]
TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample [Error message] [Debug log]
TestAccHypercomputeclusterCluster_update [Error message] [Debug log]

🔴 Errors occurred during RECORDING mode. Please fix them to complete your PR.

View the build log or the debug log for each test

@c2thorn
Member

c2thorn commented Feb 10, 2026

/gcbrun

@modular-magician
Collaborator

Hi there, I'm the Modular magician. I've detected the following information about your changes:

Diff report

Your PR generated some diffs in downstreams - here they are.

google provider: Diff ( 16 files changed, 5699 insertions(+), 2 deletions(-))
google-beta provider: Diff ( 14 files changed, 5689 insertions(+), 2 deletions(-))
terraform-google-conversion: Diff ( 3 files changed, 1679 insertions(+))
Open in Cloud Shell: Diff ( 4 files changed, 160 insertions(+))

Missing test report

Your PR includes resource fields which are not covered by any test.

Resource: google_hypercomputecluster_cluster (3 total tests)
Please add an acceptance test which includes these fields. The test should include the following:

resource "google_hypercomputecluster_cluster" "primary" {
  compute_resources {
    config {
      new_flex_start_instances {
        machine_type = # value needed
        max_duration = # value needed
        zone         = # value needed
      }
      new_reserved_instances {
        reservation = # value needed
      }
    }
  }
  network_resources {
    config {
      existing_network {
        network    = # value needed
        subnetwork = # value needed
      }
    }
  }
  storage_resources {
    config {
      existing_bucket {
        bucket = # value needed
      }
      existing_filestore {
        filestore = # value needed
      }
      existing_lustre {
        lustre = # value needed
      }
      new_bucket {
        autoclass {
          enabled                = # value needed
          terminal_storage_class = # value needed
        }
        hierarchical_namespace {
          enabled = # value needed
        }
      }
      new_filestore {
        description = # value needed
        file_shares {
          capacity_gb = # value needed
          file_share  = # value needed
        }
        filestore = # value needed
        protocol  = # value needed
        tier      = # value needed
      }
      new_lustre {
        capacity_gb = # value needed
        description = # value needed
        filesystem  = # value needed
        lustre      = # value needed
      }
    }
  }
}

Missing service labels

The following new resources do not have corresponding service labels:

  • google_hypercomputecluster_cluster

If you believe this detection to be incorrect, please raise the concern with your reviewer. Googlers: This error is safe to ignore once you've completed go/fix-missing-service-labels.
An override-missing-service-label label can be added to allow merging.

@modular-magician
Collaborator

Tests analytics

Total tests: 6013
Passed tests: 5375
Skipped tests: 634
Affected tests: 4

Click here to see the affected service packages

All service packages are affected

Action taken

Found 4 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Click here to see the affected tests
  • TestAccAccessContextManager__service_perimeter_dry_run_egress_policy
  • TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy
  • TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample
  • TestAccHypercomputeclusterCluster_update

Get to know how VCR tests work

@modular-magician
Collaborator

🟢 Tests passed during RECORDING mode:
TestAccAccessContextManager__access_level [Debug log]
TestAccAccessContextManager__access_level_condition [Debug log]
TestAccAccessContextManager__access_level_custom [Debug log]
TestAccAccessContextManager__access_level_full [Debug log]
TestAccAccessContextManager__access_levels [Debug log]
TestAccAccessContextManager__access_policy [Debug log]
TestAccAccessContextManager__access_policy_scoped [Debug log]
TestAccAccessContextManager__authorized_orgs_desc [Debug log]
TestAccAccessContextManager__service_perimeter [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_egress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_update [Debug log]
TestAccAccessContextManager__service_perimeters [Debug log]
TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample [Debug log]

🔴 Tests failed when rerunning REPLAYING mode:
TestAccAccessContextManager__service_perimeter_dry_run_egress_policy [Error message] [Debug log]

Tests failed due to non-determinism or randomness when the VCR replayed the response after the HTTP request was made.

Please fix these to complete your PR. If you believe these test failures to be incorrect or unrelated to your change, or if you have any questions, please raise the concern with your reviewer.


🔴 Tests failed during RECORDING mode:
TestAccHypercomputeclusterCluster_update [Error message] [Debug log]

🔴 Errors occurred during RECORDING mode. Please fix them to complete your PR.

View the build log or the debug log for each test

@annuay-google
Author

/gcbrun

@modular-magician
Collaborator

Hi there, I'm the Modular magician. I've detected the following information about your changes:

Diff report

Your PR generated some diffs in downstreams - here they are.

google provider: Diff ( 16 files changed, 5690 insertions(+), 2 deletions(-))
google-beta provider: Diff ( 14 files changed, 5680 insertions(+), 2 deletions(-))
terraform-google-conversion: Diff ( 3 files changed, 1679 insertions(+))
Open in Cloud Shell: Diff ( 4 files changed, 160 insertions(+))

Missing test report

Your PR includes resource fields which are not covered by any test.

Resource: google_hypercomputecluster_cluster (3 total tests)
Please add an acceptance test which includes these fields. The test should include the following:

resource "google_hypercomputecluster_cluster" "primary" {
  compute_resources {
    config {
      new_flex_start_instances {
        machine_type = # value needed
        max_duration = # value needed
        zone         = # value needed
      }
      new_reserved_instances {
        reservation = # value needed
      }
    }
  }
  network_resources {
    config {
      existing_network {
        network    = # value needed
        subnetwork = # value needed
      }
    }
  }
  storage_resources {
    config {
      existing_bucket {
        bucket = # value needed
      }
      existing_filestore {
        filestore = # value needed
      }
      existing_lustre {
        lustre = # value needed
      }
      new_bucket {
        autoclass {
          enabled                = # value needed
          terminal_storage_class = # value needed
        }
        hierarchical_namespace {
          enabled = # value needed
        }
      }
      new_filestore {
        description = # value needed
        file_shares {
          capacity_gb = # value needed
          file_share  = # value needed
        }
        filestore = # value needed
        protocol  = # value needed
        tier      = # value needed
      }
      new_lustre {
        capacity_gb = # value needed
        description = # value needed
        filesystem  = # value needed
        lustre      = # value needed
      }
    }
  }
}

Missing service labels

The following new resources do not have corresponding service labels:

  • google_hypercomputecluster_cluster

If you believe this detection to be incorrect please raise the concern with your reviewer. Googlers: This error is safe to ignore once you've completed go/fix-missing-service-labels.
An override-missing-service-label label can be added to allow merging.

@modular-magician
Collaborator

Hi there, I'm the Modular magician. I've detected the following information about your changes:

Diff report

Your PR generated some diffs in downstreams - here they are.

google provider: Diff ( 16 files changed, 5689 insertions(+), 2 deletions(-))
google-beta provider: Diff ( 14 files changed, 5679 insertions(+), 2 deletions(-))
terraform-google-conversion: Diff ( 3 files changed, 1679 insertions(+))
Open in Cloud Shell: Diff ( 4 files changed, 160 insertions(+))

Missing test report

Your PR includes resource fields which are not covered by any test.

Resource: google_hypercomputecluster_cluster (3 total tests)
Please add an acceptance test which includes these fields. The test should include the following:

resource "google_hypercomputecluster_cluster" "primary" {
  compute_resources {
    config {
      new_flex_start_instances {
        machine_type = # value needed
        max_duration = # value needed
        zone         = # value needed
      }
      new_reserved_instances {
        reservation = # value needed
      }
    }
  }
  network_resources {
    config {
      existing_network {
        network    = # value needed
        subnetwork = # value needed
      }
    }
  }
  storage_resources {
    config {
      existing_bucket {
        bucket = # value needed
      }
      existing_filestore {
        filestore = # value needed
      }
      existing_lustre {
        lustre = # value needed
      }
      new_bucket {
        autoclass {
          enabled                = # value needed
          terminal_storage_class = # value needed
        }
        hierarchical_namespace {
          enabled = # value needed
        }
      }
      new_filestore {
        description = # value needed
        file_shares {
          capacity_gb = # value needed
          file_share  = # value needed
        }
        filestore = # value needed
        protocol  = # value needed
        tier      = # value needed
      }
      new_lustre {
        capacity_gb = # value needed
        description = # value needed
        filesystem  = # value needed
        lustre      = # value needed
      }
    }
  }
}

Missing service labels

The following new resources do not have corresponding service labels:

  • google_hypercomputecluster_cluster

If you believe this detection to be incorrect, please raise the concern with your reviewer. Googlers: This error is safe to ignore once you've completed go/fix-missing-service-labels.
An override-missing-service-label label can be added to allow merging.

@modular-magician
Collaborator

Tests analytics

Total tests: 6031
Passed tests: 5399
Skipped tests: 625
Affected tests: 7

Click here to see the affected service packages

All service packages are affected

Action taken

Found 7 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Click here to see the affected tests
  • TestAccAccessContextManager__service_perimeter_dry_run_egress_policy
  • TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy
  • TestAccComputeRegionTargetHttpsProxy_update
  • TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate
  • TestAccGKEHubFeatureMembership_gkehubFeaturePolicyController
  • TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample
  • TestAccHypercomputeclusterCluster_update

Get to know how VCR tests work

@modular-magician
Collaborator

Tests analytics

Total tests: 6030
Passed tests: 5398
Skipped tests: 625
Affected tests: 7

Click here to see the affected service packages

All service packages are affected

Action taken

Found 7 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Click here to see the affected tests
  • TestAccAccessContextManager__service_perimeter_dry_run_egress_policy
  • TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy
  • TestAccComputeRegionTargetHttpsProxy_update
  • TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate
  • TestAccGKEHubFeatureMembership_gkehubFeaturePolicyController
  • TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample
  • TestAccHypercomputeclusterCluster_update

Get to know how VCR tests work

@modular-magician
Collaborator

Tests analytics

Total tests: 6030
Passed tests: 5397
Skipped tests: 625
Affected tests: 8

Click here to see the affected service packages

All service packages are affected

Action taken

Found 8 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Click here to see the affected tests
  • TestAccAccessContextManager__service_perimeter_dry_run_egress_policy
  • TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy
  • TestAccAlloydbInstance_updateInstanceWithPscInterfaceConfigs
  • TestAccComputeRegionTargetHttpsProxy_update
  • TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate
  • TestAccGKEHubFeatureMembership_gkehubFeaturePolicyController
  • TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample
  • TestAccHypercomputeclusterCluster_update

Get to know how VCR tests work

@modular-magician
Collaborator

🟢 Tests passed during RECORDING mode:
TestAccAccessContextManager__access_level_condition [Debug log]
TestAccAccessContextManager__access_level_full [Debug log]
TestAccAccessContextManager__access_policy [Debug log]
TestAccAccessContextManager__access_policy_iam_binding [Debug log]
TestAccAccessContextManager__access_policy_iam_member [Debug log]
TestAccAccessContextManager__access_policy_iam_policy [Debug log]
TestAccAccessContextManager__access_policy_scoped [Debug log]
TestAccAccessContextManager__authorized_orgs_desc [Debug log]
TestAccAccessContextManager__data_source_access_policy_basic [Debug log]
TestAccAccessContextManager__data_source_access_policy_scoped [Debug log]
TestAccAccessContextManager__service_perimeter [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_egress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_egress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_ingress_policy [Debug log]
TestAccAccessContextManager__service_perimeters [Debug log]
TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate [Debug log]
TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample [Debug log]

🟢 No issues found for passed tests after REPLAYING rerun.


🔴 Tests failed during RECORDING mode:
TestAccAccessContextManager__access_level [Error message] [Debug log]
TestAccAccessContextManager__access_level_custom [Error message] [Debug log]
TestAccAccessContextManager__access_levels [Error message] [Debug log]
TestAccAccessContextManager__gcp_user_access_binding [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_resource [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_resource [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_update [Error message] [Debug log]
TestAccComputeRegionTargetHttpsProxy_update [Error message] [Debug log]
TestAccGKEHubFeatureMembership_gkehubFeaturePolicyController [Error message] [Debug log]
TestAccHypercomputeclusterCluster_update [Error message] [Debug log]

🔴 Errors occurred during RECORDING mode. Please fix them to complete your PR.

View the build log or the debug log for each test

@modular-magician
Collaborator

🟢 Tests passed during RECORDING mode:
TestAccAccessContextManager__access_level_custom [Debug log]
TestAccAccessContextManager__access_level_full [Debug log]
TestAccAccessContextManager__access_policy_iam_member [Debug log]
TestAccAccessContextManager__access_policy_iam_policy [Debug log]
TestAccAccessContextManager__access_policy_scoped [Debug log]
TestAccAccessContextManager__authorized_orgs_desc [Debug log]
TestAccAccessContextManager__data_source_access_policy_scoped [Debug log]
TestAccAccessContextManager__gcp_user_access_binding [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_egress_policy [Debug log]
TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate [Debug log]

🔴 Tests failed when rerunning REPLAYING mode:
TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate [Error message] [Debug log]

Tests failed due to non-determinism or randomness when the VCR replayed the response after the HTTP request was made.

Please fix these to complete your PR. If you believe these test failures to be incorrect or unrelated to your change, or if you have any questions, please raise the concern with your reviewer.


🔴 Tests failed during RECORDING mode:
TestAccAccessContextManager__access_level [Error message] [Debug log]
TestAccAccessContextManager__access_level_condition [Error message] [Debug log]
TestAccAccessContextManager__access_levels [Error message] [Debug log]
TestAccAccessContextManager__access_policy [Error message] [Debug log]
TestAccAccessContextManager__access_policy_iam_binding [Error message] [Debug log]
TestAccAccessContextManager__data_source_access_policy_basic [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_resource [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_egress_policy [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_ingress_policy [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_resource [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_update [Error message] [Debug log]
TestAccAccessContextManager__service_perimeters [Error message] [Debug log]
TestAccComputeRegionTargetHttpsProxy_update [Error message] [Debug log]
TestAccGKEHubFeatureMembership_gkehubFeaturePolicyController [Error message] [Debug log]
TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample [Error message] [Debug log]
TestAccHypercomputeclusterCluster_update [Error message] [Debug log]

🔴 Errors occurred during RECORDING mode. Please fix them to complete your PR.

View the build log or the debug log for each test

@modular-magician
Collaborator

🟢 Tests passed during RECORDING mode:
TestAccAccessContextManager__access_level_condition [Debug log]
TestAccAccessContextManager__access_level_custom [Debug log]
TestAccAccessContextManager__access_levels [Debug log]
TestAccAccessContextManager__access_policy [Debug log]
TestAccAccessContextManager__access_policy_iam_policy [Debug log]
TestAccAccessContextManager__access_policy_scoped [Debug log]
TestAccAccessContextManager__data_source_access_policy_scoped [Debug log]
TestAccAccessContextManager__gcp_user_access_binding [Debug log]
TestAccAccessContextManager__service_perimeter [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_egress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_ingress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_egress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_ingress_policy [Debug log]
TestAccAccessContextManager__service_perimeter_resource [Debug log]
TestAccAccessContextManager__service_perimeter_update [Debug log]
TestAccAlloydbInstance_updateInstanceWithPscInterfaceConfigs [Debug log]
TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate [Debug log]

🔴 Tests failed when rerunning REPLAYING mode:
TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate [Error message] [Debug log]

Tests failed due to non-determinism or randomness when the VCR replayed the response after the HTTP request was made.

Please fix these to complete your PR. If you believe these test failures to be incorrect or unrelated to your change, or if you have any questions, please raise the concern with your reviewer.


🔴 Tests failed during RECORDING mode:
TestAccAccessContextManager__access_level [Error message] [Debug log]
TestAccAccessContextManager__access_level_full [Error message] [Debug log]
TestAccAccessContextManager__access_policy_iam_binding [Error message] [Debug log]
TestAccAccessContextManager__access_policy_iam_member [Error message] [Debug log]
TestAccAccessContextManager__authorized_orgs_desc [Error message] [Debug log]
TestAccAccessContextManager__data_source_access_policy_basic [Error message] [Debug log]
TestAccAccessContextManager__service_perimeter_dry_run_resource [Error message] [Debug log]
TestAccAccessContextManager__service_perimeters [Error message] [Debug log]
TestAccComputeRegionTargetHttpsProxy_update [Error message] [Debug log]
TestAccGKEHubFeatureMembership_gkehubFeaturePolicyController [Error message] [Debug log]
TestAccHypercomputeclusterCluster_hypercomputeclusterClusterBasicExample [Error message] [Debug log]
TestAccHypercomputeclusterCluster_update [Error message] [Debug log]

🔴 Errors occurred during RECORDING mode. Please fix them to complete your PR.

View the build log or the debug log for each test

@modular-magician
Collaborator

Hi there, I'm the Modular magician. I've detected the following information about your changes:

Diff report

Your PR generated some diffs in downstreams - here they are.

google provider: Diff ( 16 files changed, 5677 insertions(+), 2 deletions(-))
google-beta provider: Diff ( 14 files changed, 5667 insertions(+), 2 deletions(-))
terraform-google-conversion: Diff ( 3 files changed, 1679 insertions(+))
Open in Cloud Shell: Diff ( 4 files changed, 160 insertions(+))

Missing test report

Your PR includes resource fields which are not covered by any test.

Resource: google_hypercomputecluster_cluster (3 total tests)
Please add an acceptance test which includes these fields. The test should include the following:

resource "google_hypercomputecluster_cluster" "primary" {
  compute_resources {
    config {
      new_flex_start_instances {
        machine_type = # value needed
        max_duration = # value needed
        zone         = # value needed
      }
      new_reserved_instances {
        reservation = # value needed
      }
    }
  }
  network_resources {
    config {
      existing_network {
        network    = # value needed
        subnetwork = # value needed
      }
    }
  }
  storage_resources {
    config {
      existing_bucket {
        bucket = # value needed
      }
      existing_filestore {
        filestore = # value needed
      }
      existing_lustre {
        lustre = # value needed
      }
      new_bucket {
        autoclass {
          enabled                = # value needed
          terminal_storage_class = # value needed
        }
        hierarchical_namespace {
          enabled = # value needed
        }
      }
      new_filestore {
        description = # value needed
        file_shares {
          capacity_gb = # value needed
          file_share  = # value needed
        }
        filestore = # value needed
        protocol  = # value needed
        tier      = # value needed
      }
      new_lustre {
        capacity_gb = # value needed
        description = # value needed
        filesystem  = # value needed
        lustre      = # value needed
      }
    }
  }
}

Missing service labels

The following new resources do not have corresponding service labels:

  • google_hypercomputecluster_cluster

If you believe this detection to be incorrect please raise the concern with your reviewer. Googlers: This error is safe to ignore once you've completed go/fix-missing-service-labels.
An override-missing-service-label label can be added to allow merging.

@annuay-google
Author

/gcbrun

@modular-magician
Collaborator

Tests analytics

Total tests: 6036
Passed tests: 5407
Skipped tests: 625
Affected tests: 4

Click here to see the affected service packages

All service packages are affected

Action taken

Found 4 affected test(s) by replaying old test recordings. Starting RECORDING based on the most recent commit. Click here to see the affected tests
  • TestAccComputeRegionTargetHttpsProxy_update
  • TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate
  • TestAccGKEHubFeatureMembership_gkehubFeaturePolicyController
  • TestAccHypercomputeclusterCluster_update

Get to know how VCR tests work

@modular-magician
Collaborator

🟢 Tests passed during RECORDING mode:
TestAccGKEHubFeatureMembership_gkehubFeatureAcmUpdate [Debug log]
TestAccHypercomputeclusterCluster_update [Debug log]

🟢 No issues found for passed tests after REPLAYING rerun.


🔴 Tests failed during RECORDING mode:
TestAccComputeRegionTargetHttpsProxy_update [Error message] [Debug log]
TestAccGKEHubFeatureMembership_gkehubFeaturePolicyController [Error message] [Debug log]

🔴 Errors occurred during RECORDING mode. Please fix them to complete your PR.

View the build log or the debug log for each test
