feat: Create streams_workspace as an equivalent resource for the streams_instance resource #3564

Draft · wants to merge 3 commits into base: master

3 changes: 3 additions & 0 deletions .changelog/3559.txt
@@ -0,0 +1,3 @@
```release-note:new-resource
`mongodbatlas_stream_workspace`
```
38 changes: 38 additions & 0 deletions docs/data-sources/stream_workspace.md
@@ -0,0 +1,38 @@
# Data Source: mongodbatlas_stream_workspace

`mongodbatlas_stream_workspace` describes a stream workspace.

## Example Usage

```terraform
data "mongodbatlas_stream_workspace" "example" {
project_id = "<PROJECT_ID>"
workspace_name = "<INSTANCE_NAME>"
}
```

## Argument Reference

* `project_id` - (Required) Unique 24-hexadecimal digit string that identifies your project.
* `workspace_name` - (Required) Human-readable label that identifies the stream workspace.

## Attributes Reference

* `data_process_region` - Defines the cloud service provider and region where MongoDB Cloud performs stream processing. See [data process region](#data-process-region).
* `hostnames` - List that contains the hostnames assigned to the stream workspace.
* `stream_config` - Defines the configuration options for an Atlas Stream Processing Workspace. See [stream config](#stream-config).
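
These exported attributes can be referenced like any other data source attribute. A minimal sketch, assuming the data source from the example above (the output name is illustrative):

```terraform
# Surface the hostnames assigned to the stream workspace.
output "stream_workspace_hostnames" {
  value = data.mongodbatlas_stream_workspace.example.hostnames
}
```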


### Data Process Region

* `cloud_provider` - Label that identifies the cloud service provider where MongoDB Cloud performs stream processing. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
* `region` - Name of the cloud provider region hosting Atlas Stream Processing. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.

### Stream Config

* `tier` - Selected tier for the Stream Workspace. Configures Memory / VCPU allowances. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
* `defaultTier` - Default tier selected for the Stream Workspace. Configures Memory / VCPU allowances. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
* `maxTierSize` - Maximum tier size selected for the Stream Workspace. Configures Memory / VCPU allowances. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.

To learn more, see the [MongoDB Atlas API - Stream Instance](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) documentation.
The [Terraform Provider Examples Section](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/examples/mongodbatlas_stream_instance/atlas-streams-user-journey.md) also contains details on the overall support for Atlas Stream Processing in Terraform.
48 changes: 48 additions & 0 deletions docs/data-sources/stream_workspaces.md
@@ -0,0 +1,48 @@
# Data Source: mongodbatlas_stream_workspaces

`mongodbatlas_stream_workspaces` describes the stream workspaces defined in a project.

## Example Usage

```terraform
data "mongodbatlas_stream_workspaces" "test" {
project_id = "<PROJECT_ID>"
}
```

## Argument Reference

* `project_id` - (Required) Unique 24-hexadecimal digit string that identifies your project.

* `page_num` - (Optional) Number of the page that displays the current set of the total objects that the response returns. Defaults to `1`.
* `items_per_page` - (Optional) Number of items that the response returns per page, up to a maximum of `500`. Defaults to `100`.
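
For example, a sketch that requests a specific page of results (the values are illustrative):

```terraform
data "mongodbatlas_stream_workspaces" "paged" {
  project_id     = "<PROJECT_ID>"
  page_num       = 2
  items_per_page = 50
}
```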


## Attributes Reference

In addition to all arguments above, it also exports the following attributes:

* `results` - A list where each element contains a Stream Workspace.
* `total_count` - Count of the total number of items in the result set. The count might be greater than the number of objects in the results array if the entire result set is paginated.
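
As a sketch, `results` can be iterated with a `for` expression, assuming the data source from the example above (the output name is illustrative):

```terraform
# Collect the names of all stream workspaces in the project.
output "workspace_names" {
  value = [for w in data.mongodbatlas_stream_workspaces.test.results : w.workspace_name]
}
```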

### Stream Workspace

* `project_id` - Unique 24-hexadecimal digit string that identifies your project.
* `workspace_name` - Human-readable label that identifies the stream workspace.
* `data_process_region` - Defines the cloud service provider and region where MongoDB Cloud performs stream processing. See [data process region](#data-process-region).
* `hostnames` - List that contains the hostnames assigned to the stream workspace.
* `stream_config` - Defines the configuration options for an Atlas Stream Processing Workspace. See [stream config](#stream-config).

### Data Process Region

* `cloud_provider` - Label that identifies the cloud service provider where MongoDB Cloud performs stream processing. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
* `region` - Name of the cloud provider region hosting Atlas Stream Processing. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.

### Stream Config

* `tier` - Selected tier for the Stream Workspace. Configures Memory / VCPU allowances. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
* `defaultTier` - Default tier selected for the Stream Workspace. Configures Memory / VCPU allowances. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
* `maxTierSize` - Maximum tier size selected for the Stream Workspace. Configures Memory / VCPU allowances. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.

To learn more, see the [MongoDB Atlas API - Stream Instance](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) documentation.
The [Terraform Provider Examples Section](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/examples/mongodbatlas_stream_instance/atlas-streams-user-journey.md) also contains details on the overall support for Atlas Stream Processing in Terraform.
52 changes: 52 additions & 0 deletions docs/resources/stream_workspace.md
@@ -0,0 +1,52 @@
# Resource: mongodbatlas_stream_workspace

`mongodbatlas_stream_workspace` provides a Stream Workspace resource. The resource lets you create, edit, and delete stream workspaces in a project.

## Example Usage

```terraform
resource "mongodbatlas_stream_workspace" "test" {
project_id = var.project_id
workspace_name = "WorkspaceName"
data_process_region = {
region = "VIRGINIA_USA"
cloud_provider = "AWS"
}
}
```

## Argument Reference

* `project_id` - (Required) Unique 24-hexadecimal digit string that identifies your project.
* `workspace_name` - (Required) Human-readable label that identifies the stream workspace.
* `data_process_region` - (Required) Cloud service provider and region where MongoDB Cloud performs stream processing. See [data process region](#data-process-region).
* `stream_config` - (Optional) Configuration options for an Atlas Stream Processing Workspace. See [stream config](#stream-config).


### Data Process Region

* `cloud_provider` - (Required) Label that identifies the cloud service provider where MongoDB Cloud performs stream processing. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
* `region` - (Required) Name of the cloud provider region hosting Atlas Stream Processing. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.

### Stream Config

* `tier` - (Required) Selected tier for the Stream Workspace. Configures Memory / VCPU allowances. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
* `defaultTier` - (Optional) Default tier selected for the Stream Workspace. Configures Memory / VCPU allowances. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
* `maxTierSize` - (Optional) Maximum tier size selected for the Stream Workspace. Configures Memory / VCPU allowances. The [MongoDB Atlas API](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) describes the valid values.
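
For illustration, a sketch of a workspace that pins the tier. The `SP30` value mirrors this PR's acceptance test; the resource name and the omission of `defaultTier`/`maxTierSize` are illustrative assumptions:

```terraform
resource "mongodbatlas_stream_workspace" "sp30" {
  project_id     = var.project_id
  workspace_name = "WorkspaceWithTier"
  data_process_region = {
    region         = "VIRGINIA_USA"
    cloud_provider = "AWS"
  }
  stream_config = {
    tier = "SP30"
  }
}
```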

## Attributes Reference

In addition to all arguments above, the following attributes are exported:

* `hostnames` - List that contains the hostnames assigned to the stream workspace.

## Import

You can import a stream workspace resource using the project ID and the workspace name, in the format `PROJECT_ID-WORKSPACE_NAME`. For example:

```
$ terraform import mongodbatlas_stream_workspace.test 650972848269185c55f40ca1-WorkspaceName
```
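
With Terraform 1.5 or later, a config-driven import block is an equivalent sketch (the resource address and ID are illustrative):

```terraform
import {
  to = mongodbatlas_stream_workspace.test
  id = "650972848269185c55f40ca1-WorkspaceName"
}
```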

To learn more, see the [MongoDB Atlas API - Stream Instance](https://www.mongodb.com/docs/atlas/reference/api-resources-spec/#tag/Streams/operation/createStreamInstance) documentation.
The [Terraform Provider Examples Section](https://github.com/mongodb/terraform-provider-mongodbatlas/blob/master/examples/mongodbatlas_stream_instance/atlas-streams-user-journey.md) also contains details on the overall support for Atlas Stream Processing in Terraform.
4 changes: 4 additions & 0 deletions internal/provider/provider.go
@@ -48,6 +48,7 @@ import (
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streaminstance"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamprivatelinkendpoint"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamprocessor"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/service/streamworkspace"
"github.com/mongodb/terraform-provider-mongodbatlas/version"
)

@@ -444,6 +445,8 @@ func (p *MongodbtlasProvider) DataSources(context.Context) []func() datasource.DataSource {
projectipaddresses.DataSource,
streamprocessor.DataSource,
streamprocessor.PluralDataSource,
streamworkspace.DataSource,
streamworkspace.PluralDataSource,
encryptionatrest.DataSource,
encryptionatrestprivateendpoint.DataSource,
encryptionatrestprivateendpoint.PluralDataSource,
@@ -478,6 +481,7 @@ func (p *MongodbtlasProvider) Resources(context.Context) []func() resource.Resource {
searchdeployment.Resource,
pushbasedlogexport.Resource,
streaminstance.Resource,
streamworkspace.Resource,
streamconnection.Resource,
streamprocessor.Resource,
encryptionatrestprivateendpoint.Resource,
@@ -0,0 +1,54 @@
package streamworkspace

import (
"context"

"github.com/hashicorp/terraform-plugin-framework/datasource"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
)

var _ datasource.DataSource = &streamWorkspaceDS{}
var _ datasource.DataSourceWithConfigure = &streamWorkspaceDS{}

func DataSource() datasource.DataSource {
return &streamWorkspaceDS{
DSCommon: config.DSCommon{
DataSourceName: streamWorkspaceName,
},
}
}

type streamWorkspaceDS struct {
config.DSCommon
}

func (d *streamWorkspaceDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
resp.Schema = conversion.DataSourceSchemaFromResource(ResourceSchema(ctx), &conversion.DataSourceSchemaRequest{
RequiredFields: []string{"project_id", "workspace_name"},
})
}

func (d *streamWorkspaceDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
var streamWorkspaceConfig TFStreamWorkspaceModel
resp.Diagnostics.Append(req.Config.Get(ctx, &streamWorkspaceConfig)...)
if resp.Diagnostics.HasError() {
return
}

connV2 := d.Client.AtlasV2
projectID := streamWorkspaceConfig.ProjectID.ValueString()
workspaceName := streamWorkspaceConfig.WorkspaceName.ValueString()
apiResp, _, err := connV2.StreamsApi.GetStreamInstance(ctx, projectID, workspaceName).Execute()
if err != nil {
resp.Diagnostics.AddError("error fetching resource", err.Error())
return
}

newStreamWorkspaceModel, diags := NewTFStreamWorkspace(ctx, apiResp)
if diags.HasError() {
resp.Diagnostics.Append(diags...)
return
}
resp.Diagnostics.Append(resp.State.Set(ctx, newStreamWorkspaceModel)...)
}
@@ -0,0 +1,43 @@
package streamworkspace_test

import (
"fmt"
"testing"

"github.com/hashicorp/terraform-plugin-testing/helper/resource"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/testutil/acc"
)

func TestAccStreamDSStreamWorkspace_basic(t *testing.T) {
var (
dataSourceName = "data.mongodbatlas_stream_workspace.test"
projectID = acc.ProjectIDExecution(t)
workspaceName = acc.RandomName()
region = "VIRGINIA_USA" // assumed value: region and cloudProvider are referenced below but not declared in this diff
cloudProvider = "AWS" // assumed value
)

resource.ParallelTest(t, resource.TestCase{
PreCheck: func() { acc.PreCheckBasic(t) },
ProtoV6ProviderFactories: acc.TestAccProviderV6Factories,
CheckDestroy: acc.CheckDestroyStreamInstance,
Steps: []resource.TestStep{
{
Config: streamWorkspaceDataSourceConfig(projectID, workspaceName, region, cloudProvider),
Check: resource.ComposeAggregateTestCheckFunc(
streamWorkspaceAttributeChecks(dataSourceName, workspaceName, region, cloudProvider),
resource.TestCheckResourceAttr(dataSourceName, "stream_config.tier", "SP30"),
),
},
},
})
}

func streamWorkspaceDataSourceConfig(projectID, workspaceName, region, cloudProvider string) string {
return fmt.Sprintf(`
%s

data "mongodbatlas_stream_workspace" "test" {
project_id = mongodbatlas_stream_workspace.test.project_id
workspace_name = mongodbatlas_stream_workspace.test.workspace_name
}
`, acc.StreamInstanceConfig(projectID, workspaceName, region, cloudProvider))
}
@@ -0,0 +1,72 @@
package streamworkspace

import (
"context"
"fmt"

"github.com/hashicorp/terraform-plugin-framework/datasource"
"github.com/hashicorp/terraform-plugin-framework/types"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/common/conversion"
"github.com/mongodb/terraform-provider-mongodbatlas/internal/config"
"go.mongodb.org/atlas-sdk/v20250312005/admin"
)

var _ datasource.DataSource = &streamWorkspacesDS{}
var _ datasource.DataSourceWithConfigure = &streamWorkspacesDS{}

func PluralDataSource() datasource.DataSource {
return &streamWorkspacesDS{
DSCommon: config.DSCommon{
DataSourceName: fmt.Sprintf("%ss", streamWorkspaceName),
},
}
}

type streamWorkspacesDS struct {
config.DSCommon
}

func (d *streamWorkspacesDS) Schema(ctx context.Context, req datasource.SchemaRequest, resp *datasource.SchemaResponse) {
resp.Schema = conversion.PluralDataSourceSchemaFromResource(ResourceSchema(ctx), &conversion.PluralDataSourceSchemaRequest{
RequiredFields: []string{"project_id"},
HasLegacyFields: true,
})
}

func (d *streamWorkspacesDS) Read(ctx context.Context, req datasource.ReadRequest, resp *datasource.ReadResponse) {
var streamWorkspacesConfig TFStreamWorkspacesModel
resp.Diagnostics.Append(req.Config.Get(ctx, &streamWorkspacesConfig)...)
if resp.Diagnostics.HasError() {
return
}

connV2 := d.Client.AtlasV2
projectID := streamWorkspacesConfig.ProjectID.ValueString()
itemsPerPage := streamWorkspacesConfig.ItemsPerPage.ValueInt64Pointer()
pageNum := streamWorkspacesConfig.PageNum.ValueInt64Pointer()
apiResp, _, err := connV2.StreamsApi.ListStreamInstancesWithParams(ctx, &admin.ListStreamInstancesApiParams{
GroupId: projectID,
ItemsPerPage: conversion.Int64PtrToIntPtr(itemsPerPage),
PageNum: conversion.Int64PtrToIntPtr(pageNum),
}).Execute()
if err != nil {
resp.Diagnostics.AddError("error fetching results", err.Error())
return
}

newStreamWorkspacesModel, diags := NewTFStreamWorkspaces(ctx, &streamWorkspacesConfig, apiResp)
if diags.HasError() {
resp.Diagnostics.Append(diags...)
return
}
resp.Diagnostics.Append(resp.State.Set(ctx, newStreamWorkspacesModel)...)
}

type TFStreamWorkspacesModel struct {
ID types.String `tfsdk:"id"`
ProjectID types.String `tfsdk:"project_id"`
Results []TFStreamWorkspaceModel `tfsdk:"results"`
PageNum types.Int64 `tfsdk:"page_num"`
ItemsPerPage types.Int64 `tfsdk:"items_per_page"`
TotalCount types.Int64 `tfsdk:"total_count"`
}