Adds the Pulumi code to:
- Deploy the registry (and associated services, e.g. MongoDB) to Google
Cloud Platform (GCP), on top of Google Kubernetes Engine (GKE)
- Set up proper environments and secrets management
- Use the real container image, now that it's published in #225. At the
moment it's attached to `latest`; we might want to pin the version later
(or perhaps always use `latest` in staging, and pin prod)
- Use real domains (`staging.registry.modelcontextprotocol.io`) rather
than example placeholders
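
The latest-vs-pinned idea above could be expressed in the Pulumi program as a small helper. This is only a sketch: the image path, tag, and stack names are hypothetical placeholders, not what this PR actually configures.

```go
package main

import "fmt"

// imageForStack sketches a per-stack image policy: production pins a
// specific release tag while other stacks track `latest`. The repo path,
// tag, and stack names below are assumptions for illustration only.
func imageForStack(stack string) string {
	const repo = "ghcr.io/modelcontextprotocol/registry" // assumed image path
	if stack == "gcpProd" {
		return fmt.Sprintf("%s:%s", repo, "v1.0.0") // prod pinned to a release
	}
	return repo + ":latest" // staging/local track the newest build
}

func main() {
	fmt.Println(imageForStack("gcpProd")) // pinned reference
	fmt.Println(imageForStack("staging")) // latest reference
}
```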
## Motivation and Context
Sets up the infrastructure to deploy the registry. I set something up on
Azure in #227, although it's not super robust (e.g. no service
accounts). I think we should use GCP because:
- the maintainers have experience with GCP, but none with Azure
- costs are quite low, and Anthropic is happy to cover them in the short
term
- means we only have to maintain one login system (just Google Cloud
Identity), not two (Google Workspace + Azure)
## How Has This Been Tested?
Deployed this to a staging and production cluster. Try it yourself at:
```bash
curl -H "Host: staging.registry.modelcontextprotocol.io" -k https://35.222.36.75/v0/ping
```
(we'll be sorting out DNS for the real domains very soon)
## Breaking Changes
None: this just adds support for GCP deployment.
## Types of changes
<!-- What types of changes does your code introduce? Put an `x` in all
the boxes that apply: -->
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing
functionality to change)
- [ ] Documentation update
## Checklist
<!-- Go over all the following points, and put an `x` in all the boxes
that apply. -->
- [x] I have read the [MCP
Documentation](https://modelcontextprotocol.io)
- [x] My code follows the repository's style guidelines
- [ ] New and existing tests pass locally
- [x] I have added appropriate error handling
- [x] I have added or updated documentation as needed
## Additional context
<!-- Add any other context, implementation notes, or design decisions
-->
Expected follow-ups:
- A GitHub Actions workflow to deploy to the cluster from GitHub, so
deployments aren't gated on the few people who hold the secrets.
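
That follow-up might look roughly like this. Purely a hypothetical sketch: the workflow name, trigger, and secret names are assumptions, not anything this PR ships.

```yaml
# Hypothetical CI deploy sketch; GCP_SA_KEY and PULUMI_PASSPHRASE are
# placeholder secret names.
name: deploy-prod
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - run: curl -fsSL https://get.pulumi.com | sh && echo "$HOME/.pulumi/bin" >> "$GITHUB_PATH"
      - run: |
          cd deploy
          pulumi login gs://mcp-registry-prod-pulumi-state
          PULUMI_CONFIG_PASSPHRASE="${{ secrets.PULUMI_PASSPHRASE }}" \
            pulumi up --yes --stack gcpProd
```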
## deploy/README.md (+52 −10)
````diff
@@ -1,6 +1,6 @@
 # MCP Registry Kubernetes Deployment
 
-This directory contains Pulumi infrastructure code to deploy the MCP Registry service to a Kubernetes cluster. It supports multiple Kubernetes providers: Azure Kubernetes Service (AKS) and local (using existing kubeconfig).
+This directory contains Pulumi infrastructure code to deploy the MCP Registry service to a Kubernetes cluster. It supports deploying the infrastructure locally (using an existing kubeconfig, e.g. with minikube) or to Google Cloud Platform (GCP).
 
 ## Quick Start
 
@@ -19,17 +19,53 @@ Pre-requisites:
 # To use your local kubeconfig (default)
 pulumi config set mcp-registry:provider local
-# Alternative: To use AKS
-# pulumi config set mcp-registry:provider aks
 
 # GitHub OAuth
 pulumi config set mcp-registry:githubClientId <your-github-client-id>
 pulumi config set --secret mcp-registry:githubClientSecret <your-github-client-secret>
 ```
-4. Deploy: `go build && PULUMI_CONFIG_PASSPHRASE="" pulumi up --yes`
-5. Access the repository via the ingress load balancer. You can find its external IP with `kubectl get svc nginx-ingress-ingress-nginx-controller -n ingress-nginx` (with minikube, if it's 'pending' you might need `minikube tunnel`). Then run `curl -H "Host: mcp-registry-local.example.com" -k https://<EXTERNAL-IP>/v0/ping` to check that the service is up.
+4. Deploy: `make local-up`
+5. Access the repository via the ingress load balancer. You can find its external IP with `kubectl get svc ingress-nginx-controller -n ingress-nginx` (with minikube, if it's 'pending' you might need `minikube tunnel`). Then run `curl -H "Host: local.registry.modelcontextprotocol.io" -k https://<EXTERNAL-IP>/v0/ping` to check that the service is up.
 
-### Production Deployment (AKS)
+### Production Deployment (GCP)
+
+**Note:** This is how the production deployment will be set up once. But then the plan will be future updates are effectively a login + `pulumi up` from GitHub Actions.
…
+   gcloud iam service-accounts keys create sa-key.json --iam-account=pulumi-svc@mcp-registry-prod.iam.gserviceaccount.com
+   ```
+5. Create a GCS bucket for Pulumi state: `gsutil mb gs://mcp-registry-prod-pulumi-state`
+6. Set Pulumi's backend to GCS: `pulumi login gs://mcp-registry-prod-pulumi-state`
+7. Get the passphrase file `passphrase.prod.txt` from @domdomegg
+   - TODO: avoid dependence on one person! Probably will shift all of this into CI.
+8. Init the GCP stack: `PULUMI_CONFIG_PASSPHRASE_FILE=passphrase.prod.txt pulumi stack init gcpProd`
+9. Set the GCP credentials in Pulumi config:
+   ```bash
+   # Base64 encode the service account key and set it
+   pulumi config set --secret gcp:credentials "$(base64 < sa-key.json)"
+   ```
+10. Deploy: `make prod-up`
+11. Access the repository via the ingress load balancer. You can find its external IP with `kubectl get svc ingress-nginx-controller -n ingress-nginx`.
+    Then run `curl -H "Host: prod.registry.modelcontextprotocol.io" -k https://<EXTERNAL-IP>/v0/ping` to check that the service is up.
+
+<!--
+
+### Production Deployment (Azure)
 
 **Note:** This is how the production deployment will be set up once. But then the plan will be future updates are effectively a login + `pulumi up` from GitHub Actions.
 
@@ -45,18 +81,21 @@ Pre-requisites:
 5. Add the 'Storage Blob Data Contributor' role assignment for yourself on the storage account: `az role assignment create --assignee $(az ad signed-in-user show --query id -o tsv) --role "Storage Blob Data Contributor" --scope "/subscriptions/$(az account show --query id -o tsv)/resourceGroups/official-mcp-registry-prod"`
 7. Set Pulumi's backend to Azure: `pulumi login 'azblob://pulumi-state?storage_account=officialmcpregistryprod'`
-8. Init the production stack: `pulumi stack init prod`
+8. Init the production stack: `pulumi stack init aksProd`
    - TODO: This has a password that maybe needs to be shared with select contributors?
 9. Deploy: `go build && PULUMI_CONFIG_PASSPHRASE="" pulumi up --yes`
-10. Access the repository via the ingress load balancer. You can find its external IP with `kubectl get svc nginx-ingress-ingress-nginx-controller -n ingress-nginx` or view it in the Pulumi outputs. Then run `curl -H "Host: mcp-registry-prod.example.com" -k https://<EXTERNAL-IP>/v0/ping` to check that the service is up.
+10. Access the repository via the ingress load balancer. You can find its external IP with `kubectl get svc ingress-nginx-controller -n ingress-nginx` or view it in the Pulumi outputs. Then run `curl -H "Host: prod.registry.modelcontextprotocol.io" -k https://<EXTERNAL-IP>/v0/ping` to check that the service is up.
+
+-->
 
 ## Structure
 
 ```
 ├── main.go            # Pulumi program entry point
 ├── Pulumi.yaml        # Project configuration
 ├── Pulumi.local.yaml  # Local stack configuration
-├── Pulumi.prod.yaml   # Production stack configuration
+├── Pulumi.prod.yaml   # Production stack configuration (Azure)
+├── Pulumi.gcp.yaml    # Production stack configuration (GCP)
````
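
The `gcp:credentials` step in the README (step 9) base64-encodes the service-account key before storing it in Pulumi config. That round-trip can be sanity-checked locally. A minimal sketch using a stand-in key file (`demo-sa-key.json` is hypothetical, so your real `sa-key.json` is never touched):

```shell
# Sanity-check the base64 round-trip used for gcp:credentials.
# Write a stand-in key file, encode it the same way the README does,
# decode it again, and confirm the bytes are identical.
printf '%s' '{"type":"service_account"}' > demo-sa-key.json
encoded=$(base64 < demo-sa-key.json)               # what Pulumi config would store
printf '%s' "$encoded" | base64 -d > decoded.json  # what the cluster would decode
cmp -s demo-sa-key.json decoded.json && echo "round-trip ok"  # prints "round-trip ok"
```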