
Conversation


@lantoli lantoli commented Feb 5, 2025

Description

Converts clusters with one replication_specs, regions_config and shard

Link to any related issue(s): CLOUDP-298699

Type of change:

  • Bug fix (non-breaking change which fixes an issue). Please, add the "bug" label to the PR.
  • New feature (non-breaking change which adds functionality). Please, add the "enhancement" label to the PR. A migration guide must be created or updated if the new feature will go in a major version.
  • Breaking change (fix or feature that would cause existing functionality to not work as expected). Please, add the "breaking change" label to the PR. A migration guide must be created or updated.
  • This change requires a documentation update
  • Documentation fix/enhancement

Required Checklist:

  • I have signed the MongoDB CLA
  • I have read the contributing guides
  • I have checked that this change does not generate any credentials and that they are NOT accidentally logged anywhere.
  • I have added tests that prove my fix is effective or that my feature works per HashiCorp requirements
  • I have added any necessary documentation (if appropriate)
  • I have run make fmt and formatted my code
  • If changes include deprecations or removals I have added appropriate changelog entries.
  • If changes include removal or addition of 3rd party GitHub actions, I updated our internal document. Reach out to the APIx Integration slack channel to get access to the internal document.

Further comments

@github-actions github-actions bot added the enhancement New feature or request label Feb 5, 2025
@lantoli lantoli changed the title feat: Converts clusters with replication_specs feat: Converts clusters with one replication_specs and one regions_config Feb 5, 2025
@lantoli lantoli changed the title feat: Converts clusters with one replication_specs and one regions_config feat: Converts clusters with one replication_specs, regions_config and shard Feb 5, 2025
# but plugin doesn't help here.
ignore_changes = [provider_instance_size_name]
}
backup_enabled = true

@lantoli lantoli Feb 6, 2025


Note that there are some empty lines added where replication_specs and regions_config were, and new attributes are added at the end of the current resource, in this case after lifecycle.

Will see if it can be improved in follow-up PRs.
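For context, here is a minimal sketch of the kind of conversion this PR performs on clusters with one replication_specs, one regions_config, and one shard. The attribute values and resource labels are illustrative, not taken from the diff, and the exact output formatting differs as discussed above:

```hcl
# Before: legacy mongodbatlas_cluster (one replication_specs / regions_config / shard)
resource "mongodbatlas_cluster" "this" {
  project_id                  = var.project_id
  name                        = "cluster"
  provider_name               = "AWS"
  provider_instance_size_name = "M10"

  replication_specs {
    num_shards = 1
    regions_config {
      region_name     = "US_EAST_1"
      electable_nodes = 3
      priority        = 7
    }
  }
}

# After: equivalent mongodbatlas_advanced_cluster
resource "mongodbatlas_advanced_cluster" "this" {
  project_id   = var.project_id
  name         = "cluster"
  cluster_type = "REPLICASET"

  replication_specs {
    region_configs {
      provider_name = "AWS"
      region_name   = "US_EAST_1"
      priority      = 7
      electable_specs {
        instance_size = "M10"
        node_count    = 3
      }
    }
  }
}
```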

@lantoli lantoli marked this pull request as ready for review February 6, 2025 10:00
@lantoli lantoli requested a review from a team as a code owner February 6, 2025 10:00
nConfigSrc = "regions_config"
nElectableSpecs = "electable_specs"
nAutoScaling = "auto_scaling"
nRegionNameSrc = "provider_region_name"

@lantoli lantoli Feb 6, 2025


Added the Src suffix when the cluster attribute name is different from the adv_cluster one, so it's easier to see the equivalence.
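To make the equivalence concrete, the *Src constants pair up with advanced_cluster names roughly like this. A tiny illustrative sketch (this mapping table is hypothetical; the PR tracks the names via individual constants, not a map):

```go
package main

import "fmt"

// attrRename pairs legacy mongodbatlas_cluster attribute names (the *Src
// constants above) with their mongodbatlas_advanced_cluster counterparts.
var attrRename = map[string]string{
	"regions_config":       "region_configs",
	"provider_region_name": "region_name",
}

func main() {
	fmt.Println(attrRename["regions_config"]) // prints "region_configs"
}
```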


func getElectableSpecs(configSrc *hclwrite.Block, root attrVals) (hclwrite.Tokens, error) {
file := hclwrite.NewEmptyFile()
fileb := file.Body()
Contributor


specBlock?

Collaborator Author


specBlock for which var do you mean?


@EspenAlbert EspenAlbert Feb 6, 2025


instead of fileb

Collaborator Author


b is for body.

I'm using file because that's how the HCL library calls it, instead of calling it electableSpecs inside getElectableSpecs, scalingSpecs inside getAutoScalingOpt, etc. We're inside that func, so it's implicit which block we're working with.




lifecycle { // To simulate there being a new instance size name, to avoid scaling the cluster down to its original value
Contributor


I would prefer having lifecycle at the end, but understand it is because we are appending to the resource block?

Collaborator Author


Correct. Once the functionality is done, we can try to improve these formatting issues.

Collaborator Author


I've created this ticket: CLOUDP-299208

"configuration_file_error": "failed to parse Terraform config file",
"free_cluster_missing_attribute": "free cluster (because no replication_specs): attribute backing_provider_name not found",
"autoscaling_missing_attribute": "setting replication_specs: attribute provider_instance_size_name not found"
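The table above maps test-case names to expected error messages. As a minimal stdlib-only sketch of how such a table can drive substring assertions (the helper and variable names here are hypothetical, not the PR's test code):

```go
package main

import (
	"fmt"
	"strings"
)

// expectedErrors pairs each test-case name with the error substring it is
// expected to produce (messages taken from the table above).
var expectedErrors = map[string]string{
	"configuration_file_error":       "failed to parse Terraform config file",
	"free_cluster_missing_attribute": "attribute backing_provider_name not found",
	"autoscaling_missing_attribute":  "attribute provider_instance_size_name not found",
}

// errMatches reports whether got contains the message registered under name.
func errMatches(name, got string) bool {
	want, ok := expectedErrors[name]
	return ok && strings.Contains(got, want)
}

func main() {
	fmt.Println(errMatches("configuration_file_error",
		"could not convert: failed to parse Terraform config file")) // prints "true"
}
```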
Contributor


Love this!


@EspenAlbert EspenAlbert left a comment


nice!

@lantoli lantoli merged commit b7e4ee3 into main Feb 6, 2025
6 checks passed
@lantoli lantoli deleted the CLOUDP-298699_convert_rep_specs branch February 6, 2025 11:34

Labels

enhancement New feature or request


2 participants