2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -2,7 +2,7 @@

ENHANCEMENTS:

* Supports `dynamic` blocks for `tags`, `labels`, `regions_config` and `replication_specs`

## 1.0.0 (Mar 6, 2025)

44 changes: 33 additions & 11 deletions README.md
@@ -30,6 +30,8 @@ atlas plugin list

### Usage

You can find more information in the [Migration Guide: Cluster to Advanced Cluster](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/guides/cluster-to-advanced-cluster-migration-guide).

**Note**: In order to use the **Preview for MongoDB Atlas Provider 2.0.0** of `mongodbatlas_advanced_cluster`, you need to set the environment variable `MONGODB_ATLAS_PREVIEW_PROVIDER_V2_ADVANCED_CLUSTER` to `true`.

If you want to convert a Terraform configuration from `mongodbatlas_cluster` to `mongodbatlas_advanced_cluster`, use the following command:
@@ -75,31 +77,51 @@ dynamic "tags" {

#### Dynamic blocks in regions_config

You can use `dynamic` blocks for `regions_config`. The plugin assumes that the `for_each` expression evaluates to a `list` or `set` of objects. See this [guide](./docs/guide_clu2adv_dynamic_block.md) to learn more about the limitations.
This is an example of how to use a dynamic block in `regions_config`:
```hcl
replication_specs {
  num_shards = var.replication_specs.num_shards
  zone_name  = var.replication_specs.zone_name # only needed if you're using zones
  dynamic "regions_config" {
    for_each = var.replication_specs.regions_config
    content {
      priority        = regions_config.value.priority
      region_name     = regions_config.value.region_name
      electable_nodes = regions_config.value.electable_nodes
      read_only_nodes = regions_config.value.read_only_nodes
    }
  }
}
```
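The example above assumes a `replication_specs` variable shaped roughly as follows (a sketch; the exact fields depend on your configuration):
```hcl
variable "replication_specs" {
  type = object({
    num_shards = number
    zone_name  = string # only needed if you're using zones
    regions_config = list(object({
      priority        = number
      region_name     = string
      electable_nodes = number
      read_only_nodes = number
    }))
  })
}
```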

#### Dynamic blocks in replication_specs

You can use `dynamic` blocks for `replication_specs`. The plugin assumes that the `for_each` expression evaluates to a `list` of objects. See this [guide](./docs/guide_clu2adv_dynamic_block.md) to learn more about the limitations.
This is an example of how to use a dynamic block in `replication_specs`:
```hcl
dynamic "replication_specs" {
for_each = var.replication_specs
content {
num_shards = replication_specs.value.num_shards
zone_name = replication_specs.value.zone_name # only needed if you're using zones
dynamic "regions_config" {
for_each = var.replication_specs.regions_config
for_each = replication_specs.value.regions_config
content {
priority = regions_config.value.priority
region_name = regions_config.value.region_name
electable_nodes = regions_config.value.electable_nodes
priority = regions_config.value.priority
read_only_nodes = regions_config.value.read_only_nodes
region_name = regions_config.value.region_name
}
}
}
}
```
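Here the plugin expects `var.replication_specs` to be a `list` of objects, one per spec, e.g. (a sketch):
```hcl
variable "replication_specs" {
  type = list(object({
    num_shards = number
    zone_name  = string # only needed if you're using zones
    regions_config = list(object({
      priority        = number
      region_name     = string
      electable_nodes = number
      read_only_nodes = number
    }))
  }))
}
```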
A `dynamic` block and individual blocks for `regions_config` cannot be used at the same time. If you need this use case, please send us [feedback](https://github.com/mongodb-labs/atlas-cli-plugin-terraform/issues). There are currently two main approaches to handle this:
- (Recommended) Remove the individual `regions_config` blocks and add their information to the variable you're using in the `for_each` expression, e.g. using [concat](https://developer.hashicorp.com/terraform/language/functions/concat) if you're using a list or [setunion](https://developer.hashicorp.com/terraform/language/functions/setunion) for sets. This way, you don't need to change the generated `mongodbatlas_advanced_cluster` configuration; see the sketch after this list.
- Change the generated `mongodbatlas_advanced_cluster` configuration to merge the individual blocks into the code generated for the `dynamic` block. This approach is more error-prone.
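For example, a minimal sketch of the recommended approach, assuming the `for_each` expression uses a hypothetical `var.regions_config` list (see the [guide](./docs/guide_clu2adv_dynamic_block.md) for a complete example):
```hcl
locals {
  # former individual regions_config block merged into the collection
  regions_config_all = concat(var.regions_config, [
    {
      priority        = 0
      region_name     = "US_EAST_1"
      electable_nodes = 0
      read_only_nodes = 1
    },
  ])
}
```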

### Limitations

- [`num_shards`](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cluster#num_shards-2) in `replication_specs` must be a numeric [literal expression](https://developer.hashicorp.com/nomad/docs/job-specification/hcl2/expressions#literal-expressions), e.g. `var.num_shards` is not supported. This is to allow creating a `replication_specs` element per shard in `mongodbatlas_advanced_cluster`. This limitation doesn't apply if you're using `dynamic` blocks in `regions_config` or `replication_specs`.
- `dynamic` blocks are supported with some [limitations](./docs/guide_clu2adv_dynamic_block.md).

## Feedback

98 changes: 98 additions & 0 deletions docs/guide_clu2adv_dynamic_block.md
@@ -0,0 +1,98 @@
# Guide to handling dynamic block limitations in regions_config and replication_specs

The plugin command to convert `mongodbatlas_cluster` resources to `mongodbatlas_advanced_cluster` supports `dynamic` blocks for `regions_config` and `replication_specs`. However, there are some limitations when using `dynamic` blocks in these fields. This guide explains how to handle these limitations.

If you need to use the plugin for use cases not yet supported, please send us [feedback](https://github.com/mongodb-labs/atlas-cli-plugin-terraform/issues).

## Dynamic block and individual blocks in the same resource

A `dynamic` block and individual blocks for `regions_config` or `replication_specs` cannot be used at the same time. The recommended way to handle this is to remove the individual `regions_config` or `replication_specs` blocks and use a local variable to add the individual block information to the variable you're using in the `for_each` expression, using [concat](https://developer.hashicorp.com/terraform/language/functions/concat) if you're using a list or [setunion](https://developer.hashicorp.com/terraform/language/functions/setunion) for sets.

Let's look at an example with `regions_config`; the same approach applies to `replication_specs`. In the original configuration file, the `mongodbatlas_cluster` resource is used inside a module that receives the `regions_config` elements in a `list` variable, and we want to add an additional `regions_config` block with a read-only node.
```hcl
variable "replication_specs" {
type = object({
num_shards = number
regions_config = list(object({
region_name = string
electable_nodes = number
priority = number
read_only_nodes = number
}))
})
}
resource "mongodbatlas_cluster" "this" {
project_id = var.project_id
name = var.cluster_name
cluster_type = var.cluster_type
provider_name = var.provider_name
provider_instance_size_name = var.provider_instance_size_name
replication_specs {
num_shards = var.replication_specs.num_shards
dynamic "regions_config" {
for_each = var.replication_specs.regions_config
content {
region_name = regions_config.value.region_name
electable_nodes = regions_config.value.electable_nodes
priority = regions_config.value.priority
read_only_nodes = regions_config.value.read_only_nodes
}
}
regions_config { # individual region
region_name = "US_EAST_1"
read_only_nodes = 1
}
}
}
```

We modify the configuration file, creating an intermediate local variable that merges the elements of the `regions_config` variable with the additional region:
```hcl
variable "replication_specs" {
type = object({
num_shards = number
regions_config = list(object({
region_name = string
electable_nodes = number
priority = number
read_only_nodes = number
}))
})
}
locals {
regions_config_all = concat(
var.replication_specs.regions_config,
[
{
region_name = "US_EAST_1"
electable_nodes = 0
priority = 0
read_only_nodes = 1
},
]
)
}
resource "mongodbatlas_cluster" "this" {
project_id = var.project_id
name = var.cluster_name
cluster_type = var.cluster_type
provider_name = var.provider_name
provider_instance_size_name = var.provider_instance_size_name
replication_specs {
num_shards = var.replication_specs.num_shards
dynamic "regions_config" {
for_each = local.regions_config_all # changed to use the local variable
content {
region_name = regions_config.value.region_name
electable_nodes = regions_config.value.electable_nodes
priority = regions_config.value.priority
read_only_nodes = regions_config.value.read_only_nodes
}
}
}
}
```
This modified configuration behaves the same as the original one, but it no longer has individual blocks, only the `dynamic` block, so it is supported by the plugin.
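For reference, the plugin converts a `dynamic` block like this into a `for` expression over the same collection in `mongodbatlas_advanced_cluster`. A simplified sketch of the generated shape (attributes abbreviated; the actual output also groups regions by priority and sets the remaining spec fields):
```hcl
region_configs = [
  for region in local.regions_config_all : {
    provider_name = var.provider_name
    priority      = region.priority
    region_name   = region.region_name
    electable_specs = region.electable_nodes == 0 ? null : {
      node_count    = region.electable_nodes
      instance_size = var.provider_instance_size_name
    }
    read_only_specs = region.read_only_nodes == 0 ? null : {
      node_count    = region.read_only_nodes
      instance_size = var.provider_instance_size_name
    }
  }
]
```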
1 change: 1 addition & 0 deletions internal/convert/const_names.go
@@ -55,4 +55,5 @@ const (
nForEach = "for_each"
nContent = "content"
nRegion = "region"
nSpec = "spec"
)
83 changes: 58 additions & 25 deletions internal/convert/convert.go
@@ -37,7 +37,7 @@
)

var (
dynamicBlockAllowList = []string{nTags, nLabels, nConfigSrc, nRepSpecs}
)

type attrVals struct {
@@ -91,17 +91,22 @@ func convertResource(block *hclwrite.Block) (bool, error) {
}

var err error
if isFreeTierCluster(blockb) {
err = fillFreeTierCluster(blockb)
} else {
err = fillCluster(blockb)
}
if err != nil {
return false, err
}
return true, nil
}

func isFreeTierCluster(resourceb *hclwrite.Body) bool {
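// a cluster with neither an individual replication_specs block nor a dynamic
// replication_specs block is treated as a free-tier cluster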
d, _ := getDynamicBlock(resourceb, nRepSpecs)
return resourceb.FirstMatchingBlock(nRepSpecs, nil) == nil && !d.IsPresent()
}

func convertDataSource(block *hclwrite.Block) bool {
if block.Type() != dataSourceType {
return false
@@ -190,6 +195,15 @@ func fillCluster(resourceb *hclwrite.Body) error {
}

func fillReplicationSpecs(resourceb *hclwrite.Body, root attrVals) error {
d, err := fillReplicationSpecsWithDynamicBlock(resourceb, root)
if err != nil {
return err
}
if d.IsPresent() {
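// a dynamic "replication_specs" block was found: replace it with the generated expression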
resourceb.RemoveBlock(d.block)
resourceb.SetAttributeRaw(nRepSpecs, d.tokens)
return nil
}
// at least one replication_specs block exists here; otherwise it would be a free-tier cluster
var specbs []*hclwrite.Body
for {
@@ -202,7 +216,7 @@
break
}
specbSrc := specSrc.Body()
d, err := fillReplicationSpecsWithDynamicRegionConfigs(specbSrc, root, false)
if err != nil {
return err
}
@@ -312,8 +326,26 @@ func fillBlockOpt(resourceb *hclwrite.Body, name string) {
resourceb.SetAttributeRaw(name, hcl.TokensObject(block.Body()))
}

// fillReplicationSpecsWithDynamicBlock is used for dynamic blocks in replication_specs
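// It converts the block into an expression of roughly this shape (a sketch):
//   replication_specs = flatten([for spec in var.replication_specs : [ ...region configs... ]])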
func fillReplicationSpecsWithDynamicBlock(resourceb *hclwrite.Body, root attrVals) (dynamicBlock, error) {
dSpec, err := getDynamicBlock(resourceb, nRepSpecs)
if err != nil || !dSpec.IsPresent() {
return dynamicBlock{}, err
}
transformDynamicBlockReferences(dSpec.content.Body(), nRepSpecs, nSpec)
dConfig, err := fillReplicationSpecsWithDynamicRegionConfigs(dSpec.content.Body(), root, true)
if err != nil {
return dynamicBlock{}, err
}
forSpec := hcl.TokensFromExpr(fmt.Sprintf("for %s in %s : ", nSpec, hcl.GetAttrExpr(dSpec.forEach)))
forSpec = append(forSpec, dConfig.tokens...)
tokens := hcl.TokensFuncFlatten(forSpec)
dSpec.tokens = tokens
return dSpec, nil
}

// fillReplicationSpecsWithDynamicRegionConfigs is used for dynamic blocks in region_configs
func fillReplicationSpecsWithDynamicRegionConfigs(specbSrc *hclwrite.Body, root attrVals, transformRegionReferences bool) (dynamicBlock, error) {
d, err := getDynamicBlock(specbSrc, nConfigSrc)
if err != nil || !d.IsPresent() {
return dynamicBlock{}, err
@@ -323,7 +355,11 @@ func fillReplicationSpecsWithDynamicRegionConfigs(specbSrc *hclwrite.Body, root
if zoneName := hcl.GetAttrExpr(specbSrc.GetAttribute(nZoneName)); zoneName != "" {
repSpecb.SetAttributeRaw(nZoneName, hcl.TokensFromExpr(zoneName))
}
forEach := hcl.GetAttrExpr(d.forEach)
if transformRegionReferences {
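// e.g. replication_specs.value.regions_config becomes spec.regions_config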
forEach = replaceDynamicBlockReferences(forEach, nRepSpecs, nSpec)
}
regionFor, err := getDynamicBlockRegionConfigsRegionArray(forEach, d.content, root)
if err != nil {
return dynamicBlock{}, err
}
@@ -414,7 +450,7 @@ func getSpecs(configSrc *hclwrite.Block, countName string, root attrVals, isDyna
}
tokens := hcl.TokensObject(fileb)
if isDynamicBlock {
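// e.g. electable_specs = region.electable_nodes == 0 ? null : { ... }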
tokens = append(hcl.TokensFromExpr(fmt.Sprintf("%s == 0 ? null :", hcl.GetAttrExpr(count))), tokens...)
}
return tokens, nil
}
@@ -520,41 +556,38 @@ func replaceDynamicBlockExpr(attr *hclwrite.Attribute, blockName, attrName strin
return strings.ReplaceAll(expr, fmt.Sprintf("%s.%s", blockName, attrName), attrName)
}


// getDynamicBlockRegionConfigsRegionArray returns the region array for a dynamic block in replication_specs.
// e.g. [ for region in var.replication_specs.regions_config : { ... } if priority == region.priority ]
func getDynamicBlockRegionConfigsRegionArray(forEach string, configSrc *hclwrite.Block, root attrVals) (hclwrite.Tokens, error) {
transformDynamicBlockReferences(configSrc.Body(), nConfigSrc, nRegion)
priorityStr := hcl.GetAttrExpr(configSrc.Body().GetAttribute(nPriority))
if priorityStr == "" {
return nil, fmt.Errorf("%s: %s not found", errRepSpecs, nPriority)
}
region, err := getRegionConfig(configSrc, root, true)
if err != nil {
return nil, err
}
tokens := hcl.TokensFromExpr(fmt.Sprintf("for %s in %s :", nRegion, forEach))
tokens = append(tokens, hcl.EncloseBraces(region.BuildTokens(nil), true)...)
tokens = append(tokens, hcl.TokensFromExpr(fmt.Sprintf("if %s == %s", nPriority, priorityStr))...)
return hcl.EncloseBracketsNewLines(tokens), nil
}

// transformDynamicBlockReferences changes value references in all attributes, e.g. regions_config.value.electable_nodes to region.electable_nodes
func transformDynamicBlockReferences(configSrcb *hclwrite.Body, blockName, varName string) {
for name, attr := range configSrcb.Attributes() {
expr := replaceDynamicBlockReferences(hcl.GetAttrExpr(attr), blockName, varName)
configSrcb.SetAttributeRaw(name, hcl.TokensFromExpr(expr))
}
}

// replaceDynamicBlockReferences changes value references, e.g. regions_config.value.electable_nodes to region.electable_nodes
func replaceDynamicBlockReferences(expr, blockName, varName string) string {
return strings.ReplaceAll(expr,
fmt.Sprintf("%s.%s.", blockName, nValue),
fmt.Sprintf("%s.", varName))
}

func sortConfigsByPriority(configs []*hclwrite.Body) []*hclwrite.Body {
for _, config := range configs {
if _, err := hcl.GetAttrInt(config.GetAttribute(nPriority), errPriority); err != nil {
@@ -23,27 +23,27 @@ resource "mongodbatlas_advanced_cluster" "cluster" {
provider_name = var.provider_name
region_name = region.region_name
priority = region.priority
electable_specs = region.electable_nodes == 0 ? null : {
> **lantoli (author):** nit: easier to read as we only need to change the first line of the object but not the last one
>
> **@EspenAlbert (Mar 20, 2025):** Is it possible that any of these are nullable? (electable_nodes, read_only_nodes, etc.)
>
> **lantoli (author):** yes, read_only_nodes for sure, and in the latest PR I also allow electable_nodes to be null (e.g. a region only with read-only nodes)
>
> **@EspenAlbert:** @lantoli, will the comparison crash if region.read_only_nodes is null?
node_count = region.electable_nodes
instance_size = var.provider_instance_size_name
disk_size_gb = var.disk_size_gb
ebs_volume_type = var.provider_volume_type
disk_iops = var.provider_disk_iops
}
read_only_specs = region.read_only_nodes == 0 ? null : {
node_count = region.read_only_nodes
instance_size = var.provider_instance_size_name
disk_size_gb = var.disk_size_gb
ebs_volume_type = var.provider_volume_type
disk_iops = var.provider_disk_iops
}
analytics_specs = region.analytics_nodes == 0 ? null : {
node_count = region.analytics_nodes
instance_size = var.provider_instance_size_name
disk_size_gb = var.disk_size_gb
ebs_volume_type = var.provider_volume_type
disk_iops = var.provider_disk_iops
}
auto_scaling = {
disk_gb_enabled = var.auto_scaling_disk_gb_enabled
}