fix: docs, clusters, and object store improvements (#68)
Pull request overview
This PR updates the Terraform provider to better handle Valkey Cluster and Object Store lifecycle operations, and aligns docs/examples with the removal of the cluster description field from the API.
Changes:
- Reorders Valkey Cluster update operations to allow multiple update types in a single `terraform apply` (replica updates before shard updates).
- Adds retry logic for Object Store creation and improves object store request/response JSON handling with `omitempty` for optional configs.
- Removes the deprecated `description` field from docs/examples.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| internal/provider/valkey_cluster_resource.go | Reorders update steps (replication factor before shard count) and keeps replication group updates, enabling combined updates. |
| internal/provider/object_store_resource.go | Adds create retries, makes optional configs JSON-omitempty-safe, and improves HTTP body closing. |
| examples/resources/momento_valkey_cluster/resource.tf | Removes description from the cluster example. |
| examples/resources/momento_object_store/resource.tf | Removes description from the embedded cluster example. |
| docs/resources/valkey_cluster.md | Removes description from the documented example. |
| docs/resources/object_store.md | Removes description from the documented example. |
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.
Comments suppressed due to low confidence (1)
internal/provider/object_store_resource.go:456
`Read` only overwrites `state.AccessLoggingConfig`/`state.MetricsConfig` when the API returns those blocks. If the API omits them, the previous state's pointers remain set, which can cause permanent drift (Terraform thinks the config is still present). Reset these fields to nil before the conditional assignments (or explicitly nil them out in an else case).
```go
if foundObjectStore.AccessLoggingConfig != nil && foundObjectStore.AccessLoggingConfig.Cloudwatch != nil {
	state.AccessLoggingConfig = &AccessLoggingConfig{
		LogGroupName: types.StringValue(foundObjectStore.AccessLoggingConfig.Cloudwatch.LogGroupName),
		IamRoleArn:   types.StringValue(foundObjectStore.AccessLoggingConfig.Cloudwatch.IamRoleArn),
		Region:       types.StringValue(foundObjectStore.AccessLoggingConfig.Cloudwatch.Region),
	}
}
if foundObjectStore.MetricsConfig != nil && foundObjectStore.MetricsConfig.Cloudwatch != nil {
	state.MetricsConfig = &MetricsConfig{
		IamRoleArn: types.StringValue(foundObjectStore.MetricsConfig.Cloudwatch.IamRoleArn),
		Region:     types.StringValue(foundObjectStore.MetricsConfig.Cloudwatch.Region),
	}
}
```
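The suggested fix can be sketched as a small self-contained example. The types here are minimal stand-ins named after the snippet, not the provider's real types; the key line is resetting the pointer before the conditional assignment:

```go
package main

import "fmt"

// Minimal stand-ins for the API response and Terraform state types.
type APICloudwatch struct {
	LogGroupName string
}
type APIAccessLogging struct {
	Cloudwatch *APICloudwatch
}
type APIObjectStore struct {
	AccessLoggingConfig *APIAccessLogging
}
type AccessLoggingConfig struct {
	LogGroupName string
}
type State struct {
	AccessLoggingConfig *AccessLoggingConfig
}

// refresh mirrors the Read logic: clear the optional block first, so a
// config removed server-side is also removed from state instead of
// carrying over a stale pointer (the drift described above).
func refresh(state *State, found APIObjectStore) {
	state.AccessLoggingConfig = nil
	if found.AccessLoggingConfig != nil && found.AccessLoggingConfig.Cloudwatch != nil {
		state.AccessLoggingConfig = &AccessLoggingConfig{
			LogGroupName: found.AccessLoggingConfig.Cloudwatch.LogGroupName,
		}
	}
}

func main() {
	state := &State{AccessLoggingConfig: &AccessLoggingConfig{LogGroupName: "old"}}
	refresh(state, APIObjectStore{}) // API response omits the block
	fmt.Println(state.AccessLoggingConfig == nil) // true
}
```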
```go
// Make copy of current shard placements to modify for the replication factor update, so that
// we don't mutate the current state shard placements in case we need to use them for a
// subsequent shard count update if both replication_factor and shard_count are changing
updatedCurrentShardPlacements = make([]ShardPlacementModel, len(currentState.ShardPlacements))
```
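The copy semantics this snippet relies on can be sketched in isolation. The `ShardPlacementModel` shape and AZ values below are assumptions for illustration:

```go
package main

import "fmt"

// Hypothetical shard placement model holding replica availability zones.
type ShardPlacementModel struct {
	ReplicaAZs []string
}

func main() {
	current := []ShardPlacementModel{{ReplicaAZs: []string{"us-east-1a"}}}

	// Copy before mutating, so the original state's placements stay
	// intact for a possible follow-up shard count update.
	updated := make([]ShardPlacementModel, len(current))
	copy(updated, current)
	updated[0] = ShardPlacementModel{ReplicaAZs: []string{"us-east-1a", "us-east-1b"}}

	fmt.Println(current[0].ReplicaAZs) // [us-east-1a]
	fmt.Println(updated[0].ReplicaAZs) // [us-east-1a us-east-1b]
}
```

Note that replacing whole slice elements (rather than appending to a shared inner slice) is what keeps the original state untouched.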
Hm, there is an interesting effect here where you are using the current shard placements instead of the plan's shard placements, which means that if the user's specified replica AZs changed from the existing state, they will be dropped/ignored in this step.
You only use the plan's / user's provided shard placements in the subsequent diff[shard_count] logic, which means the new shards will have the right placement, but the existing shards won't.
I don't know how an ElastiCache update handles the case where you are changing the shard count and the shard placement is also different from the existing state - does it just update the shard placement to match the user-provided one?
OK, I looked into it. It seems like ElastiCache doesn't even support updating the AZ placement of an existing shard with a single update API call; you have to increase replicas, then decrease replicas with specified replicas to remove...
I'm ok with our implementation right now given the complexities here.
Main improvements:
- Allow multiple update types in a single `terraform apply` (had to rearrange the steps and account for nuances in replica updates before shard updates)
- Use `omitempty` when marshaling the object store terraform config into JSON so it doesn't fail when the access logs or metrics config are not provided
- Remove the `description` field from cluster docs since we removed that in the API