Merged

37 commits
c7857b0
doc assumption for dynamic blocks in tags and labels
lantoli Mar 17, 2025
72a2fbc
dynamic_regions_config example
lantoli Mar 17, 2025
26d2183
range for priorities
lantoli Mar 17, 2025
cf5e3f7
allow dynamic block in regions_config
lantoli Mar 18, 2025
771845c
doc
lantoli Mar 18, 2025
5dfbede
update comment
lantoli Mar 18, 2025
17a6444
minimum implementation to have test failing because difference in gol…
lantoli Mar 18, 2025
0f10bb9
export enclose funcs
lantoli Mar 18, 2025
44a4bc7
create EncloseNewLines and remove SetAttrExpr
lantoli Mar 18, 2025
130009b
root replication_specs
lantoli Mar 18, 2025
b1dc050
remove priority checks about numerical literal
lantoli Mar 18, 2025
c5e6d5f
reuse getRegionConfig from dynamic block logic
lantoli Mar 18, 2025
ae8471b
only sort by priority if all priorities are numerical literals
lantoli Mar 18, 2025
3f93627
remove limitations for priority and electable_nodes
lantoli Mar 18, 2025
ff3fc22
use config in dynamic blocks from individual
lantoli Mar 18, 2025
87bf0fc
passing test
lantoli Mar 19, 2025
b4db307
add auto_scaling example
lantoli Mar 19, 2025
969da96
fix region_configs name replacement
lantoli Mar 19, 2025
55b93fd
refactor isDynamicBlock
lantoli Mar 19, 2025
6186dcd
go back to unexported tokenNewLine
lantoli Mar 19, 2025
1883ff0
add analytics specs
lantoli Mar 19, 2025
032c072
Merge branch 'main' into CLOUDP-303941_regions_config
lantoli Mar 19, 2025
4684c7c
example in readme
lantoli Mar 19, 2025
861a6c4
clarify num_shards limitation
lantoli Mar 19, 2025
165256e
feedback section
lantoli Mar 19, 2025
11314b5
getDynamicBlockRegionConfigsRegionArray
lantoli Mar 19, 2025
b2c3161
refactor fillRegionConfigsDynamicBlock
lantoli Mar 19, 2025
8f6e967
EncloseBracketsNewLines
lantoli Mar 19, 2025
0f2dfeb
fillRegionConfigsDynamicBlock doc
lantoli Mar 19, 2025
876b6eb
move shards closer to where it's used
lantoli Mar 19, 2025
4ab2d3a
add comment for priority loop
lantoli Mar 19, 2025
32ca654
add dynamic block doc
lantoli Mar 19, 2025
0d61fe7
small doc adjustment
lantoli Mar 19, 2025
5fa1cb7
rename to fillReplicationSpecsWithDynamicRegionConfigs
lantoli Mar 19, 2025
b8258ae
Update README.md
lantoli Mar 19, 2025
f90d5a2
link to limitations
lantoli Mar 19, 2025
2bec1f9
how to handle limitation
lantoli Mar 19, 2025
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -2,7 +2,7 @@

ENHANCEMENTS:

* Supports `dynamic` block for `tags` and `labels`
* Supports `dynamic` block for `tags`, `labels` and `regions_config`

## 1.0.0 (Mar 6, 2025)

33 changes: 28 additions & 5 deletions README.md
@@ -57,7 +57,8 @@ Given the different ways of using dynamic blocks, we recommend reviewing the out

#### Dynamic blocks in tags and labels

You can use `dynamic` blocks for `tags` and `labels`. You can also combine the use of dynamic blocks in `tags` and `labels` with individual blocks in the same cluster definition, e.g.:
You can use `dynamic` blocks for `tags` and `labels`. The plugin assumes that the `for_each` expression evaluates to a `map` of strings.
You can also combine dynamic blocks in `tags` and `labels` with individual blocks in the same cluster definition, e.g.:
```hcl
tags {
key = "environment"
@@ -72,12 +73,34 @@ dynamic "tags" {
}
```

#### Dynamic blocks in regions_config

You can use `dynamic` blocks for `regions_config`. The plugin assumes that the `for_each` expression evaluates to a `list` or `set` of objects.
Dynamic and individual `regions_config` blocks are not supported at the same time within a single `replication_specs`. This is an example of how to use dynamic blocks in `regions_config`:
Contributor:

  1. Should we move what's not supported into the limitations section?
  2. Is there an "easy" reason why we don't support dynamic and individual blocks together?
  3. Would a workaround be to execute the command twice, once with dynamic blocks only and once with individual blocks?

Collaborator Author (@lantoli, Mar 19, 2025):

  1. As we have specific sections for the different dynamic blocks where we explain them in more detail, I think it's better to keep the limitation for each dynamic block in its specific section, but I've added a note here: 32ca654
  2. The main reasons are:
  • Customers probably won't have definition files like that.
  • The effort and complexity to support it is not trivial.
  • The resulting output would be quite complicated.
    I've added a comment in the previous commit regarding this so they can give feedback if some customer needs it. For example, we support it in tags and labels as some customers are using it, e.g.:
  3. That workaround might help; in the end I think the solution would be to use the merge function like in tags.
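A hypothetical sketch of that alternative (all variable names illustrative): fold what would otherwise be an individual `regions_config` block into the collection the dynamic block iterates over, e.g. with `concat`, so the plugin only sees a single dynamic block:

```hcl
dynamic "regions_config" {
  # prepend the region that was previously an individual block
  for_each = concat(
    [{ priority = 7, region_name = "US_EAST_1", electable_nodes = 3, read_only_nodes = 0 }],
    var.replication_specs.regions_config,
  )
  content {
    priority        = regions_config.value.priority
    region_name     = regions_config.value.region_name
    electable_nodes = regions_config.value.electable_nodes
    read_only_nodes = regions_config.value.read_only_nodes
  }
}
```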

Contributor:

The point I was trying to get at is: customers will come to us with questions. For limitations we are aware of, it's useful to have clear instructions on "why" the limitation exists and "what is the alternative" (this internal thread should teach us that).

That said, it looks like the "why" is more "we are not investing in it", which is fine. Can we at least have "what's the alternative" clearly written in the docs?

Collaborator Author:

added here: 2bec1f9

```hcl
replication_specs {
  num_shards = var.replication_specs.num_shards
  zone_name  = var.replication_specs.zone_name # only needed if you're using zones
  dynamic "regions_config" {
    for_each = var.replication_specs.regions_config
    content {
      priority        = regions_config.value.priority
      region_name     = regions_config.value.region_name
      electable_nodes = regions_config.value.electable_nodes
      read_only_nodes = regions_config.value.read_only_nodes
    }
  }
}
```
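For reference, based on the conversion logic in this PR, the example above should convert to something along these lines (an illustrative sketch, not the plugin's verbatim output; the `provider_name` and `instance_size` values stand in for the root attributes the plugin carries over):

```hcl
replication_specs = [
  for i in range(var.replication_specs.num_shards) : {
    zone_name = var.replication_specs.zone_name
    region_configs = flatten([
      for priority in range(7, 0, -1) : [
        for region in var.replication_specs.regions_config : {
          provider_name = "AWS" # carried over from the root attributes
          priority      = region.priority
          region_name   = region.region_name
          electable_specs = region.electable_nodes > 0 ? {
            node_count    = region.electable_nodes
            instance_size = var.provider_instance_size_name
          } : null
        } if priority == region.priority
      ]
    ])
  }
]
```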

### Limitations

- The plugin doesn't support `regions_config` without `electable_nodes`, as there can be issues with `priority` when a region only has `analytics_nodes` and/or `read_only_nodes`.
- [`priority`](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cluster#priority-1) is required in `regions_config` and must be a numeric [literal expression](https://developer.hashicorp.com/nomad/docs/job-specification/hcl2/expressions#literal-expressions) between 1 and 7, e.g. `var.priority` is not supported. This allows reordering the configs by descending priority, as expected in `mongodbatlas_advanced_cluster`.
- [`num_shards`](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cluster#num_shards-2) in `replication_specs` must be a numeric [literal expression](https://developer.hashicorp.com/nomad/docs/job-specification/hcl2/expressions#literal-expressions), e.g. `var.num_shards` is not supported. This is to allow creating a `replication_specs` element per shard in `mongodbatlas_advanced_cluster`.
- `dynamic` blocks are currently supported only for `tags` and `labels`. **Coming soon**: support for `replication_specs` and `regions_config`.
- [`num_shards`](https://registry.terraform.io/providers/mongodb/mongodbatlas/latest/docs/resources/cluster#num_shards-2) in `replication_specs` must be a numeric [literal expression](https://developer.hashicorp.com/nomad/docs/job-specification/hcl2/expressions#literal-expressions), e.g. `var.num_shards` is not supported. This is to allow creating a `replication_specs` element per shard in `mongodbatlas_advanced_cluster`. This limitation doesn't apply if you're using `dynamic` blocks in `regions_config` or `replication_specs`.
Contributor:

so great to see our limitations going away. Great stuff @lantoli

Contributor:

> This limitation doesn't apply if you're using dynamic blocks in regions_config or replication_specs

I'm not fully clear on why we can support this case but not when it's a regular literal replication_specs block.

Collaborator Author (@lantoli, Mar 20, 2025):

Because in the literal case we physically replicate the block num_shards times, we avoid introducing for loops and the adv_cluster output is straightforward. However, in the dynamic-block case we already need to create the loops and introduce some complexity, so it's fine to also iterate through the priorities. We could potentially support that case if some customers are interested.

- `dynamic` blocks are currently supported only for `tags`, `labels` and `regions_config`. **Coming soon**: support for `replication_specs`.

## Feedback

If you find any issues or have any suggestions, please open an [issue](https://github.com/mongodb-labs/atlas-cli-plugin-terraform/issues) in this repository.

## Contributing

1 change: 1 addition & 0 deletions internal/convert/const_names.go
@@ -54,4 +54,5 @@
nDynamic = "dynamic"
nForEach = "for_each"
nContent = "content"
nRegion = "region"
)
171 changes: 121 additions & 50 deletions internal/convert/convert.go
@@ -22,22 +22,21 @@
advClusterPlural = "mongodbatlas_advanced_clusters"
valClusterType = "REPLICASET"
valMaxPriority = 7
valMinPriority = 1

errFreeCluster = "free cluster (because no " + nRepSpecs + ")"
errRepSpecs = "setting " + nRepSpecs
errConfigs = "setting " + nConfig
errPriority = "setting " + nPriority
errNumShards = "setting " + nNumShards
valMinPriority = 0
errFreeCluster = "free cluster (because no " + nRepSpecs + ")"
errRepSpecs = "setting " + nRepSpecs
errConfigs = "setting " + nConfig
errPriority = "setting " + nPriority
errNumShards = "setting " + nNumShards

commentGeneratedBy = "Generated by atlas-cli-plugin-terraform."
commentConfirmReferences = "Please confirm that all references to this resource are updated."
commentConfirmReferences = "Please review the changes and confirm that references to this resource are updated."
commentMovedBlock = "Moved blocks"
commentRemovedOld = "Note: Remember to remove or comment out the old cluster definitions."
)

var (
dynamicBlockAllowList = []string{nTags, nLabels}
dynamicBlockAllowList = []string{nTags, nLabels, nConfigSrc}
)

type attrVals struct {
@@ -129,8 +128,8 @@ func fillMovedBlocks(body *hclwrite.Body, moveLabels []string) {
for i, moveLabel := range moveLabels {
block := body.AppendNewBlock(nMoved, nil)
blockb := block.Body()
hcl.SetAttrExpr(blockb, nFrom, fmt.Sprintf("%s.%s", cluster, moveLabel))
hcl.SetAttrExpr(blockb, nTo, fmt.Sprintf("%s.%s", advCluster, moveLabel))
blockb.SetAttributeRaw(nFrom, hcl.TokensFromExpr(fmt.Sprintf("%s.%s", cluster, moveLabel)))
blockb.SetAttributeRaw(nTo, hcl.TokensFromExpr(fmt.Sprintf("%s.%s", advCluster, moveLabel)))
if i < len(moveLabels)-1 {
body.AppendNewline()
}
@@ -202,9 +201,15 @@ func fillReplicationSpecs(resourceb *hclwrite.Body, root attrVals) error {
break
}
specbSrc := specSrc.Body()
if err := checkDynamicBlock(specbSrc); err != nil {
d, err := fillRegionConfigsDynamicBlock(specbSrc, root)
if err != nil {
return err
}
if d.IsPresent() {
resourceb.RemoveBlock(specSrc)
resourceb.SetAttributeRaw(nRepSpecs, d.tokens)
return nil
}
// ok to fail as zone_name is optional
_ = hcl.MoveAttr(specbSrc, specb, nZoneName, nZoneName, errRepSpecs)
shards := specbSrc.GetAttribute(nNumShards)
@@ -251,7 +256,7 @@

func extractTagsLabelsDynamicBlock(resourceb *hclwrite.Body, name string) (hclwrite.Tokens, error) {
d, err := getDynamicBlock(resourceb, name)
if err != nil || d.forEach == nil {
if err != nil || !d.IsPresent() {
return nil, err
}
key := d.content.Body().GetAttribute(nKey)
@@ -306,14 +311,43 @@ func fillBlockOpt(resourceb *hclwrite.Body, name string) {
resourceb.SetAttributeRaw(name, hcl.TokensObject(block.Body()))
}

// fillRegionConfigsDynamicBlock is used for dynamic blocks in region_configs
func fillRegionConfigsDynamicBlock(specbSrc *hclwrite.Body, root attrVals) (dynamicBlock, error) {
d, err := getDynamicBlock(specbSrc, nConfigSrc)
if err != nil || !d.IsPresent() {
return dynamicBlock{}, err
}
repSpec := hclwrite.NewEmptyFile()
repSpecb := repSpec.Body()
if zoneName := hcl.GetAttrExpr(specbSrc.GetAttribute(nZoneName)); zoneName != "" {
repSpecb.SetAttributeRaw(nZoneName, hcl.TokensFromExpr(zoneName))
}
regionFor, err := getDynamicBlockRegionConfigsRegionArray(d, root)
if err != nil {
return dynamicBlock{}, err
}
priorityFor := hcl.TokensFromExpr(fmt.Sprintf("for %s in range(%d, %d, -1) : ", nPriority, valMaxPriority, valMinPriority))
priorityFor = append(priorityFor, regionFor...)
repSpecb.SetAttributeRaw(nConfig, hcl.TokensFuncFlatten(priorityFor))

shards := specbSrc.GetAttribute(nNumShards)
if shards == nil {
return dynamicBlock{}, fmt.Errorf("%s: %s not found", errRepSpecs, nNumShards)
}
tokens := hcl.TokensFromExpr(fmt.Sprintf("for i in range(%s) :", hcl.GetAttrExpr(shards)))
tokens = append(tokens, hcl.EncloseBraces(repSpec.BuildTokens(nil), true)...)
d.tokens = hcl.EncloseBracketsNewLines(tokens)
return d, nil
}
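The `range` expression built above relies on Terraform's `range(start, limit, step)` treating the limit as exclusive, so with `valMaxPriority = 7` and `valMinPriority = 0` it yields priorities 7 down to 1. A small Go sketch of those semantics for a negative step (the function name is illustrative):

```go
package main

import "fmt"

// terraformRange mimics Terraform's range(start, limit, step) for a
// negative step: the limit is exclusive, so range(7, 0, -1) yields
// 7, 6, ..., 1 (never 0).
func terraformRange(start, limit, step int) []int {
	var out []int
	for v := start; v > limit; v += step {
		out = append(out, v)
	}
	return out
}

func main() {
	fmt.Println(terraformRange(7, 0, -1)) // [7 6 5 4 3 2 1]
}
```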

func fillRegionConfigs(specb, specbSrc *hclwrite.Body, root attrVals) error {
var configs []*hclwrite.Body
for {
configSrc := specbSrc.FirstMatchingBlock(nConfigSrc, nil)
if configSrc == nil {
break
}
config, err := getRegionConfig(configSrc, root)
config, err := getRegionConfig(configSrc, root, false)
if err != nil {
return err
}
@@ -323,34 +357,28 @@ func fillRegionConfigs(specb, specbSrc *hclwrite.Body, root attrVals) error {
if len(configs) == 0 {
return fmt.Errorf("%s: %s not found", errRepSpecs, nConfigSrc)
}
sort.Slice(configs, func(i, j int) bool {
pi, _ := hcl.GetAttrInt(configs[i].GetAttribute(nPriority), errPriority)
pj, _ := hcl.GetAttrInt(configs[j].GetAttribute(nPriority), errPriority)
return pi > pj
})
configs = sortConfigsByPriority(configs)
specb.SetAttributeRaw(nConfig, hcl.TokensArray(configs))
return nil
}

func getRegionConfig(configSrc *hclwrite.Block, root attrVals) (*hclwrite.File, error) {
func getRegionConfig(configSrc *hclwrite.Block, root attrVals, isDynamicBlock bool) (*hclwrite.File, error) {
file := hclwrite.NewEmptyFile()
fileb := file.Body()
fileb.SetAttributeRaw(nProviderName, root.req[nProviderName])
if err := hcl.MoveAttr(configSrc.Body(), fileb, nRegionName, nRegionName, errRepSpecs); err != nil {
return nil, err
}
if err := setPriority(fileb, configSrc.Body().GetAttribute(nPriority)); err != nil {
if err := hcl.MoveAttr(configSrc.Body(), fileb, nPriority, nPriority, errRepSpecs); err != nil {
return nil, err
}
electableSpecs, errElec := getSpecs(configSrc, nElectableNodes, root)
if errElec != nil {
return nil, errElec
if electable, _ := getSpecs(configSrc, nElectableNodes, root, isDynamicBlock); electable != nil {
fileb.SetAttributeRaw(nElectableSpecs, electable)
}
fileb.SetAttributeRaw(nElectableSpecs, electableSpecs)
if readOnly, _ := getSpecs(configSrc, nReadOnlyNodes, root); readOnly != nil {
if readOnly, _ := getSpecs(configSrc, nReadOnlyNodes, root, isDynamicBlock); readOnly != nil {
fileb.SetAttributeRaw(nReadOnlySpecs, readOnly)
}
if analytics, _ := getSpecs(configSrc, nAnalyticsNodes, root); analytics != nil {
if analytics, _ := getSpecs(configSrc, nAnalyticsNodes, root, isDynamicBlock); analytics != nil {
fileb.SetAttributeRaw(nAnalyticsSpecs, analytics)
}
if autoScaling := getAutoScalingOpt(root.opt); autoScaling != nil {
@@ -359,7 +387,7 @@
return file, nil
}

func getSpecs(configSrc *hclwrite.Block, countName string, root attrVals) (hclwrite.Tokens, error) {
func getSpecs(configSrc *hclwrite.Block, countName string, root attrVals, isDynamicBlock bool) (hclwrite.Tokens, error) {
var (
file = hclwrite.NewEmptyFile()
fileb = file.Body()
@@ -382,7 +410,11 @@ func getSpecs(configSrc *hclwrite.Block, countName string, root attrVals) (hclwr
if root.opt[nDiskIOPSSrc] != nil {
fileb.SetAttributeRaw(nDiskIOPS, root.opt[nDiskIOPSSrc])
}
return hcl.TokensObject(fileb), nil
tokens := hcl.TokensObject(fileb)
if isDynamicBlock {
tokens = encloseDynamicBlockRegionSpec(tokens, countName)
}
return tokens, nil
}

func getAutoScalingOpt(opt map[string]hclwrite.Tokens) hclwrite.Tokens {
@@ -440,6 +472,17 @@
return labels[1]
}

type dynamicBlock struct {
block *hclwrite.Block
forEach *hclwrite.Attribute
content *hclwrite.Block
tokens hclwrite.Tokens
}

func (d dynamicBlock) IsPresent() bool {
return d.block != nil
}

func checkDynamicBlock(body *hclwrite.Body) error {
for _, block := range body.Blocks() {
name := getResourceName(block)
@@ -451,12 +494,6 @@ func checkDynamicBlock(body *hclwrite.Body) error {
return nil
}

type dynamicBlock struct {
block *hclwrite.Block
forEach *hclwrite.Attribute
content *hclwrite.Block
}

func getDynamicBlock(body *hclwrite.Body, name string) (dynamicBlock, error) {
for _, block := range body.Blocks() {
if block.Type() != nDynamic || name != getResourceName(block) {
@@ -481,6 +518,55 @@ func replaceDynamicBlockExpr(attr *hclwrite.Attribute, blockName, attrName strin
return strings.ReplaceAll(expr, fmt.Sprintf("%s.%s", blockName, attrName), attrName)
}

func encloseDynamicBlockRegionSpec(specTokens hclwrite.Tokens, countName string) hclwrite.Tokens {
tokens := hcl.TokensFromExpr(fmt.Sprintf("%s.%s > 0 ?", nRegion, countName))
tokens = append(tokens, specTokens...)
return append(tokens, hcl.TokensFromExpr(": null")...)
}

// getDynamicBlockRegionConfigsRegionArray returns the region array for a dynamic block in replication_specs.
// e.g. [ for region in var.replication_specs.regions_config : { ... } if priority == region.priority ]
func getDynamicBlockRegionConfigsRegionArray(d dynamicBlock, root attrVals) (hclwrite.Tokens, error) {
transformDynamicBlockReferences(d.content.Body())
priorityStr := hcl.GetAttrExpr(d.content.Body().GetAttribute(nPriority))
if priorityStr == "" {
return nil, fmt.Errorf("%s: %s not found", errRepSpecs, nPriority)
}
region, err := getRegionConfig(d.content, root, true)
Collaborator Author:

@AgustinBettati FYI region config object creation is reused in individual and dynamic block

if err != nil {
return nil, err
}
tokens := hcl.TokensFromExpr(fmt.Sprintf("for %s in %s :", nRegion, hcl.GetAttrExpr(d.forEach)))
tokens = append(tokens, hcl.EncloseBraces(region.BuildTokens(nil), true)...)
tokens = append(tokens, hcl.TokensFromExpr(fmt.Sprintf("if %s == %s", nPriority, priorityStr))...)
return hcl.EncloseBracketsNewLines(tokens), nil
}

// transformDynamicBlockReferences changes value references in all attributes, e.g. regions_config.value.electable_nodes to region.electable_nodes
func transformDynamicBlockReferences(configSrcb *hclwrite.Body) {
for name, attr := range configSrcb.Attributes() {
expr := hcl.GetAttrExpr(attr)
expr = strings.ReplaceAll(expr,
fmt.Sprintf("%s.%s.", nConfigSrc, nValue),
fmt.Sprintf("%s.", nRegion))
configSrcb.SetAttributeRaw(name, hcl.TokensFromExpr(expr))
}
}
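The replacement above can be illustrated with a standalone sketch (simplified to plain strings; the real code rewrites hclwrite attribute expressions and derives the names from its constants):

```go
package main

import (
	"fmt"
	"strings"
)

// transformExpr mirrors transformDynamicBlockReferences for a single
// expression: dynamic-block references like "regions_config.value.X"
// become "region.X".
func transformExpr(expr string) string {
	return strings.ReplaceAll(expr, "regions_config.value.", "region.")
}

func main() {
	fmt.Println(transformExpr("regions_config.value.electable_nodes")) // region.electable_nodes
}
```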

func sortConfigsByPriority(configs []*hclwrite.Body) []*hclwrite.Body {
for _, config := range configs {
if _, err := hcl.GetAttrInt(config.GetAttribute(nPriority), errPriority); err != nil {
return configs // don't sort priorities if any is not a numerical literal
}
}
sort.Slice(configs, func(i, j int) bool {
pi, _ := hcl.GetAttrInt(configs[i].GetAttribute(nPriority), errPriority)
pj, _ := hcl.GetAttrInt(configs[j].GetAttribute(nPriority), errPriority)
return pi > pj
})
return configs
}
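The sort-only-if-all-literals rule above can be sketched standalone (configs modeled as string maps for illustration; the real code inspects hclwrite attributes via hcl.GetAttrInt):

```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// sortByPriority sorts region configs by descending priority, but only
// when every priority is a numeric literal; otherwise the original
// order is preserved.
func sortByPriority(configs []map[string]string) []map[string]string {
	for _, c := range configs {
		if _, err := strconv.Atoi(c["priority"]); err != nil {
			return configs // e.g. "var.priority": don't sort
		}
	}
	sort.SliceStable(configs, func(i, j int) bool {
		pi, _ := strconv.Atoi(configs[i]["priority"])
		pj, _ := strconv.Atoi(configs[j]["priority"])
		return pi > pj
	})
	return configs
}

func main() {
	configs := []map[string]string{
		{"region_name": "US_WEST_2", "priority": "6"},
		{"region_name": "US_EAST_1", "priority": "7"},
	}
	fmt.Println(sortByPriority(configs)[0]["region_name"]) // US_EAST_1
}
```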

func setKeyValue(body *hclwrite.Body, key, value *hclwrite.Attribute) {
keyStr, err := hcl.GetAttrString(key, "")
if err == nil {
@@ -494,21 +580,6 @@ func setKeyValue(body *hclwrite.Body, key, value *hclwrite.Attribute) {
body.SetAttributeRaw(keyStr, value.Expr().BuildTokens(nil))
}

func setPriority(body *hclwrite.Body, priority *hclwrite.Attribute) error {
if priority == nil {
return fmt.Errorf("%s: %s not found", errRepSpecs, nPriority)
}
valPriority, err := hcl.GetAttrInt(priority, errPriority)
if err != nil {
return err
}
if valPriority < valMinPriority || valPriority > valMaxPriority {
return fmt.Errorf("%s: %s is %d but must be between %d and %d", errPriority, nPriority, valPriority, valMinPriority, valMaxPriority)
}
hcl.SetAttrInt(body, nPriority, valPriority)
return nil
}

// popRootAttrs deletes the attributes common to all replication_specs/regions_config and returns them.
func popRootAttrs(body *hclwrite.Body) (attrVals, error) {
var (
@@ -32,5 +32,5 @@ resource "mongodbatlas_advanced_cluster" "this" {
}

# Generated by atlas-cli-plugin-terraform.
# Please confirm that all references to this resource are updated.
# Please review the changes and confirm that references to this resource are updated.
}
@@ -18,3 +18,24 @@ resource "mongodbatlas_cluster" "ar" {
}
}
}

resource "mongodbatlas_cluster" "ar_not_electable" {
  project_id                  = var.project_id
  name                        = "ar"
  cluster_type                = "REPLICASET"
  provider_name               = "AWS"
  provider_instance_size_name = "M10"
  disk_size_gb                = 90
  provider_volume_type        = "PROVISIONED"
  provider_disk_iops          = 100
  replication_specs {
    num_shards = 1
    regions_config {
      region_name     = "US_EAST_1"
      priority        = 7
      electable_nodes = 0
      analytics_nodes = 2
      read_only_nodes = 1
    }
  }
}