
Commit 8479e75

CLOUDP-368426-Stream-Workspace Review comments
1 parent 6e53b2e commit 8479e75

2 files changed: +21 additions, -208 deletions

cfn-resources/stream-workspace/mongodb-atlas-streamworkspace.json

Lines changed: 4 additions & 6 deletions
```diff
@@ -30,15 +30,13 @@
     "properties": {
       "Tier": {
         "type": "string",
-        "description": "Selected tier for the Stream Workspace. Configures Memory / VCPU allowances.",
-        "title": "Stream Workspace Tier",
-        "enum": ["SP2", "SP5", "SP10", "SP30", "SP50"]
+        "description": "Selected tier for the Stream Workspace. Configures Memory / VCPU allowances. Valid values: SP2, SP5, SP10, SP30, SP50.",
+        "title": "Stream Workspace Tier"
       },
       "MaxTierSize": {
         "type": "string",
-        "description": "Max tier size for the Stream Workspace. Configures Memory / VCPU allowances.",
-        "title": "Stream Workspace Max Tier Size",
-        "enum": ["SP2", "SP5", "SP10", "SP30", "SP50"]
+        "description": "Max tier size for the Stream Workspace. Configures Memory / VCPU allowances. Valid values: SP2, SP5, SP10, SP30, SP50.",
+        "title": "Stream Workspace Max Tier Size"
       }
     },
     "required": ["Tier"],
```
Lines changed: 17 additions & 202 deletions
```diff
@@ -1,207 +1,22 @@
-# MongoDB::Atlas::StreamWorkspace Examples
+# How to create a MongoDB::Atlas::StreamWorkspace
 
-This directory contains an example CloudFormation template for creating Stream Workspaces in MongoDB Atlas.
+## Step 1: Activate the stream workspace resource in CloudFormation
+Step a: Create the execution role using [execution-role.yaml](https://github.com/mongodb/mongodbatlas-cloudformation-resources/blob/master/examples/execution-role.yaml) from the linked examples folder.
 
-## Prerequisites
+Step b: Search for the MongoDB::Atlas::StreamWorkspace resource.
 
-1. **Atlas Project**: You need an existing Atlas project. Get your Project ID from the Atlas UI or using:
+(CloudFormation > Public extensions > choose 'Third party' > search with "Execution name prefix = MongoDB")
+Step c: Select and activate.
+Enter the RoleArn created in Step a.
 
-   ```bash
-   atlas projects list
-   ```
+Your StreamWorkspace resource is ready to use.
 
-2. **AWS Credentials**: Ensure your AWS credentials are configured with permissions to:
-
-   - Create/update/delete CloudFormation stacks
-   - Access AWS Secrets Manager (for storing Atlas API keys)
-
-3. **Atlas API Keys**: Store your Atlas API keys in AWS Secrets Manager:
-
-   ```bash
-   aws secretsmanager create-secret \
-     --name cfn/atlas/profile/default \
-     --secret-string '{"PublicKey":"YOUR_PUBLIC_KEY","PrivateKey":"YOUR_PRIVATE_KEY","BaseURL":"https://cloud.mongodb.com"}' \
-     --region eu-west-1
-   ```
-
-4. **Resource Type Registered**: Ensure the `MongoDB::Atlas::StreamWorkspace` resource type is registered in your AWS CloudFormation Private Registry:
-   ```bash
-   aws cloudformation describe-type \
-     --type RESOURCE \
-     --type-name MongoDB::Atlas::StreamWorkspace \
-     --region eu-west-1
-   ```
-
-## Example Template
-
-### Stream Workspace (`stream-workspace.json`)
-
-Creates a Stream Workspace with configurable tier and data processing region. All parameters have sensible defaults, so you can deploy with just the required `ProjectId` and `WorkspaceName`, or customize all settings as needed.
-
-**Parameters:**
-
-- `ProjectId`: Your Atlas project ID (24-hexadecimal characters, required)
-- `WorkspaceName`: Name for the Stream Workspace (optional, will auto-generate if empty)
-- `CloudProvider`: Cloud provider for data processing region (default: "AWS", AWS only for CloudFormation)
-- `Region`: Region for data processing (default: "VIRGINIA_USA")
-- `Tier`: Stream Workspace Tier - "SP2", "SP5", "SP10", "SP30", or "SP50" (default: "SP30")
-- `Profile`: AWS Secrets Manager profile name (default: "default")
-
-**Deploy:**
-
-```bash
-# Setup credentials first (if using local credentials)
-# source ./prompts/setup-credentials.sh
-
-# Deploy the stack
-aws cloudformation create-stack \
-  --stack-name stream-workspace-example-$(date +%s) \
-  --template-body file://examples/atlas-streams/stream-workspace/stream-workspace.json \
-  --parameters \
-    ParameterKey=ProjectId,ParameterValue=YOUR_PROJECT_ID \
-    ParameterKey=WorkspaceName,ParameterValue=my-stream-workspace \
-    ParameterKey=CloudProvider,ParameterValue=AWS \
-    ParameterKey=Region,ParameterValue=VIRGINIA_USA \
-    ParameterKey=Tier,ParameterValue=SP30 \
-    ParameterKey=Profile,ParameterValue=default \
-  --capabilities CAPABILITY_IAM \
-  --region eu-west-1
-```
-
-**Monitor Stack Creation:**
-
-```bash
-# Check stack status
-aws cloudformation describe-stacks \
-  --stack-name <stack-name> \
-  --region eu-west-1 \
-  --query 'Stacks[0].StackStatus' \
-  --output text
-
-# Check resource status
-aws cloudformation describe-stack-resources \
-  --stack-name <stack-name> \
-  --region eu-west-1
-
-# Check CloudWatch logs for handler execution
-aws logs describe-log-groups \
-  --log-group-name-prefix "mongodb-atlas-streamworkspace" \
-  --region eu-west-1
-```
-
-**Expected Stack Creation Time:**
-
-- Typically 5-10 seconds for stream workspace creation
-- Stack status should transition: `CREATE_IN_PROGRESS` → `CREATE_COMPLETE`
-
-**Verify with Atlas CLI:**
-
-```bash
-# List all stream workspaces
-atlas streams instances list --projectId <PROJECT_ID>
-
-# Get specific workspace details
-atlas streams instances describe <WORKSPACE_NAME> --projectId <PROJECT_ID>
-```
-
-**Expected Output:**
-
-- Workspace should appear in the list with the specified name
-- Workspace should show:
-  - `name`: Matches the WorkspaceName parameter
-  - `dataProcessRegion.cloudProvider`: "AWS"
-  - `dataProcessRegion.region`: Matches the Region parameter (e.g., "VIRGINIA_USA")
-  - `streamConfig.tier`: Matches the Tier parameter (e.g., "SP30")
-  - `hostnames`: Array of hostnames for connecting to the workspace
-
-**Cross-Reference with CloudFormation:**
-
-```bash
-# Get physical resource ID from stack
-aws cloudformation describe-stack-resources \
-  --stack-name <stack-name> \
-  --region eu-west-1 \
-  --query 'StackResources[?LogicalResourceId==`StreamWorkspace`].PhysicalResourceId' \
-  --output text
-
-# Get stack outputs
-aws cloudformation describe-stacks \
-  --stack-name <stack-name> \
-  --region eu-west-1 \
-  --query 'Stacks[0].Outputs' \
-  --output json
-```
-
-**Stack Outputs:**
-
-- `StreamWorkspaceId`: The unique identifier for the Stream Workspace
-- `StreamWorkspaceName`: The name of the Stream Workspace
-- `StreamWorkspaceHostnames`: Comma-separated list of hostnames assigned to the stream workspace
-
-**Cleanup:**
-
-```bash
-# Delete the stack (will also delete the stream workspace)
-aws cloudformation delete-stack \
-  --stack-name <stack-name> \
-  --region eu-west-1
-
-# Wait for deletion to complete
-aws cloudformation wait stack-delete-complete \
-  --stack-name <stack-name> \
-  --region eu-west-1
-
-# Verify workspace is deleted
-atlas streams instances list --projectId <PROJECT_ID>
-```
-
-**Quick Deploy (Using Defaults):**
-
-If you want to use all default values, you only need to provide the required parameters:
-
-```bash
-aws cloudformation create-stack \
-  --stack-name stream-workspace-$(date +%s) \
-  --template-body file://examples/atlas-streams/stream-workspace/stream-workspace.json \
-  --parameters \
-    ParameterKey=ProjectId,ParameterValue=YOUR_PROJECT_ID \
-    ParameterKey=WorkspaceName,ParameterValue=my-workspace \
-  --capabilities CAPABILITY_IAM \
-  --region eu-west-1
-```
-
-This will use the default values:
-
-- `CloudProvider`: AWS
-- `Region`: VIRGINIA_USA
-- `Tier`: SP30
-- `MaxTierSize`: SP50
-- `Profile`: default
-
-## Notes
-
-- **AWS Only**: This CloudFormation resource is designed for AWS deployments. The CloudProvider parameter is constrained to "AWS" only.
-- **Create-Only Properties**: `WorkspaceName`, `ProjectId`, and `Profile` are create-only properties. To change these, you must delete and recreate the stack.
-- **Updateable Properties**: `StreamConfig.Tier` and `StreamConfig.MaxTierSize` can be updated after creation.
-- **Read-Only Properties**: `Id`, `Hostnames`, and `Connections` are read-only and returned by CloudFormation but cannot be set.
-
-## Troubleshooting
-
-**Stack Creation Fails:**
-
-- Verify Atlas API keys are correctly stored in AWS Secrets Manager
-- Check CloudWatch logs for handler execution errors
-- Ensure the resource type is registered in your private registry
-- Verify your IP address is on the Atlas IP Access List
-
-**Workspace Not Found in Atlas:**
-
-- Wait a few seconds after stack creation completes
-- Verify the Project ID is correct
-- Check Atlas UI for the workspace
-
-**Handler Execution Errors:**
-
-- Review CloudWatch logs: `aws logs tail /aws/lambda/mongodb-atlas-streamworkspace-role-stack-* --follow --region eu-west-1`
-- Verify execution role has `secretsmanager:GetSecretValue` permission
-- Check Atlas API key permissions in Atlas UI
+## Step 2: Create a template using [stream-workspace.json](stream-workspace.json)
+Note: Make sure you provide appropriate values for:
+1. ProjectId
+2. WorkspaceName (optional)
+3. CloudProvider: AWS (optional, default: AWS)
+4. Region (optional, default: VIRGINIA_USA)
+5. Tier: SP2, SP5, SP10, SP30, or SP50 (optional, default: SP30)
+6. MaxTierSize (optional, default: SP50)
+7. Profile (optional)
```
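Step 2 of the new README lists the template parameters but no longer shows a concrete deploy command. A minimal sketch adapted from the deploy example removed above: the stack name, YOUR_PROJECT_ID, the workspace name, the template path, and the eu-west-1 region are placeholders, and the MaxTierSize parameter key is an assumption based on item 6 in the list (the previous README only set it via its default):

```bash
# Deploy stream-workspace.json with explicit values for the parameters listed in Step 2.
# YOUR_PROJECT_ID, the workspace name, the stack name, the template path, and the region
# are placeholders. MaxTierSize is assumed to be an exposed template parameter (item 6);
# drop that line if the template does not define it.
aws cloudformation create-stack \
  --stack-name stream-workspace-example-$(date +%s) \
  --template-body file://stream-workspace.json \
  --parameters \
    ParameterKey=ProjectId,ParameterValue=YOUR_PROJECT_ID \
    ParameterKey=WorkspaceName,ParameterValue=my-stream-workspace \
    ParameterKey=CloudProvider,ParameterValue=AWS \
    ParameterKey=Region,ParameterValue=VIRGINIA_USA \
    ParameterKey=Tier,ParameterValue=SP30 \
    ParameterKey=MaxTierSize,ParameterValue=SP50 \
    ParameterKey=Profile,ParameterValue=default \
  --capabilities CAPABILITY_IAM \
  --region eu-west-1
```

The verification step from the previous README still applies: once the stack reaches CREATE_COMPLETE, `atlas streams instances list --projectId <PROJECT_ID>` should show the new workspace.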
