This is an installation of TrustGraph on OVHcloud using the Managed Kubernetes Service (MKS).
The full stack includes:
- A managed Kubernetes cluster in OVHcloud
- Node pool containing 2 nodes (configurable)
- Private network with subnet configuration
- Service account and credentials for AI access
- A complete TrustGraph stack of resources deployed to MKS
- Integration with OVHcloud AI Endpoints
Keys and other configuration for the AI components are supplied to TrustGraph using Kubernetes secrets.
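For illustration only, the AI configuration ends up in a Kubernetes secret of roughly this shape; the secret and key names here are hypothetical, as the real names are defined by the Pulumi code and resources.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: ai-endpoints           # hypothetical name
  namespace: trustgraph
type: Opaque
stringData:
  ai-endpoint: "<AI endpoint URL>"          # placeholder value
  ai-token: "<your AI Endpoints token>"     # placeholder value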
The Pulumi configuration uses OVHcloud AI Endpoints with the Mistral Nemo Instruct model by default.
This project uses https://github.com/ovh/pulumi-ovh, which at the time of writing does not support provisioning AI Endpoints keys, so you have to create the key yourself using the console.
This uses Pulumi, a deployment framework similar to Terraform, but:
- Pulumi has an open source licence
- Pulumi uses general-purpose programming languages, which is particularly useful because you can use standard test frameworks to test your infrastructure code (a tiny sketch follows this list)
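As a rough, hypothetical illustration (not code from this repository), a Pulumi program is ordinary TypeScript, so the objects it creates can be exercised with a test framework such as Jest:
import * as k8s from "@pulumi/kubernetes";

// Resources are plain TypeScript objects: they can be declared,
// composed and unit-tested like any other code.
const ns = new k8s.core.v1.Namespace("trustgraph", {
    metadata: { name: "trustgraph" },
});

export const namespaceName = ns.metadata.name;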
Roadmap to deploy:
- Install Pulumi
- Setup Pulumi
- Configure your environment with OVHcloud credentials
- Modify the local configuration to do what you want
- Deploy
- Use the system
First, create OVHcloud API credentials:
- Go to https://www.ovh.com/auth/api/createToken
- Create API keys with the following rights:
- GET /cloud/project/*
- POST /cloud/project/*
- PUT /cloud/project/*
- DELETE /cloud/project/*
- Note down your credentials:
- Application Key
- Application Secret
- Consumer Key
export OVH_ENDPOINT=ovh-eu # or ovh-ca, ovh-us
export OVH_APPLICATION_KEY=your_application_key
export OVH_APPLICATION_SECRET=your_application_secret
export OVH_CONSUMER_KEY=your_consumer_key
export PULUMI_CONFIG_PASSPHRASE=  # an empty passphrase is fine for local experimentation
You'll need your OVHcloud project ID (service name). You can find this in the OVHcloud Control Panel under Public Cloud; it's the hex string shown at the top left of the screen.
Next, create an AI Endpoints access token:
- Go to https://endpoints.ai.cloud.ovh.net/
- Click on "Get your free token"
- Follow the process to create your AI Endpoints access token
- Keep this token safe - you'll need it for the deployment
Now install the Pulumi project dependencies:
cd pulumi
npm install
You need to tell Pulumi where to keep its state. This can live in an S3 bucket, but for experimentation, you can just use local state:
pulumi login --local
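If you would rather keep state remotely, Pulumi can also log in to an S3-compatible bucket (the bucket name here is hypothetical; OVHcloud Object Storage exposes an S3-compatible API):
pulumi login s3://my-pulumi-state-bucket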
Pulumi operates in stacks; each stack is a separate deployment. To create a new stack for OVHcloud:
pulumi stack init ovhcloud
This will use the configuration in Pulumi.ovhcloud.yaml. Edit Pulumi.ovhcloud.yaml and update the following values:
- trustgraph-ovhcloud:service-name - Your OVHcloud project ID
- trustgraph-ovhcloud:region - OVHcloud region (e.g., GRA11, BHS5, WAW1, SBG5, UK1, DE1)
- trustgraph-ovhcloud:environment - Environment name (dev, prod, etc.)
- trustgraph-ovhcloud:ai-model - AI model to use (default: mistral-nemo-instruct-2407)
- trustgraph-ovhcloud:node-size - Node flavor (default: b2-15)
- trustgraph-ovhcloud:node-count - Number of nodes (default: 2)
- trustgraph-ovhcloud:ai-endpoints-token - Your AI Endpoints access token (encrypted)
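For reference, the stack file then looks something like this sketch (illustrative values; the ai-endpoints-token entry is written as an encrypted secure value by pulumi config set --secret rather than by hand):
config:
  trustgraph-ovhcloud:service-name: 0123456789abcdef0123456789abcdef
  trustgraph-ovhcloud:region: GRA11
  trustgraph-ovhcloud:environment: dev
  trustgraph-ovhcloud:ai-model: mistral-nemo-instruct-2407
  trustgraph-ovhcloud:node-size: b2-15
  trustgraph-ovhcloud:node-count: "2"
  trustgraph-ovhcloud:ai-endpoints-token:
    secure: <ciphertext written by Pulumi>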
Available AI models in OVHcloud AI Endpoints include:
- mistral-nemo-instruct-2407
- mixtral-8x7b-instruct-0123
- llama-3-8b-instruct
- codestral-2405
Available node flavors:
- b2-7 - 2 vCPUs, 7GB RAM
- b2-15 - 4 vCPUs, 15GB RAM
- b2-30 - 8 vCPUs, 30GB RAM
- b2-60 - 16 vCPUs, 60GB RAM
- b2-120 - 32 vCPUs, 120GB RAM
You can edit resources.yaml to customize what gets deployed to the cluster. The resources.yaml file was created using the TrustGraph config portal, so you can re-generate your own (the configurator commands are listed at the end of this guide).
Before deploying, set your AI Endpoints token:
pulumi config set --secret trustgraph-ovhcloud:ai-endpoints-token YOUR_AI_ENDPOINTS_TOKEN
pulumi up
Review the planned changes and confirm by typing "yes".
If everything works:
- A file kube.cfg will be created which provides access to the Kubernetes cluster
- The TrustGraph application will be deployed to the cluster
- AI credentials will be configured automatically
To connect to the Kubernetes cluster:
kubectl --kubeconfig kube.cfg -n trustgraph get pods
If something goes wrong while deploying, retry before giving up. pulumi up is a retryable command and will continue from where it left off.
To access TrustGraph services, set up port-forwarding. You'll need multiple terminal windows to run each of these commands:
kubectl --kubeconfig kube.cfg -n trustgraph port-forward service/api-gateway 8088:8088
kubectl --kubeconfig kube.cfg -n trustgraph port-forward service/workbench-ui 8888:8888
kubectl --kubeconfig kube.cfg -n trustgraph port-forward service/grafana 3000:3000
This will allow you to access:
- API Gateway: http://localhost:8088
- Workbench UI: http://localhost:8888
- Grafana: http://localhost:3000
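As a quick sanity check that the forwards are up, you can hit Grafana's standard health endpoint (assuming the default Grafana configuration):
curl -s http://localhost:3000/api/health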
The deployment automatically configures access to OVHcloud AI Endpoints. The AI endpoint URL and authentication token are stored as Kubernetes secrets.
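You can list the secrets to confirm they were created; the exact secret names depend on the deployment, so this simply shows everything in the namespace:
kubectl --kubeconfig kube.cfg -n trustgraph get secrets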
To use a different AI model, update the ai-model setting in your Pulumi stack configuration.
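For example, to switch to one of the other models listed above and roll the change out (a sketch, not a tested configuration):
pulumi config set trustgraph-ovhcloud:ai-model mixtral-8x7b-instruct-0123
pulumi up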
For production use, you should generate a proper AI Endpoints token through the OVHcloud Control Panel instead of using the automatically generated service account credentials.
To tear down all the infrastructure:
pulumi destroy
Type "yes" to confirm the destruction of all resources.
If you get authentication errors, verify:
- All four environment variables are set correctly (a quick check is shown after this list)
- Your API credentials have the necessary permissions
- The endpoint matches your account region (ovh-eu, ovh-ca, ovh-us)
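A quick way to confirm the variables are visible in your current shell:
env | grep '^OVH_'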
If cluster creation fails:
- Check that your project has sufficient quota
- Verify the region is available for Kubernetes
- Ensure the node flavor is available in your selected region
If AI features aren't working:
- Check that the AI model name is correct
- Verify AI Endpoints are available in your region
- Consider generating a dedicated AI token in the OVHcloud Control Panel (a direct test of the token is sketched below)
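One way to test the token outside the cluster is to call the endpoint directly. This is a hedged sketch: it assumes the model exposes an OpenAI-compatible chat completions API, and $AI_ENDPOINT_URL stands for the base URL shown for the model in the AI Endpoints catalogue:
curl -s "$AI_ENDPOINT_URL/chat/completions" \
  -H "Authorization: Bearer $AI_ENDPOINTS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "mistral-nemo-instruct-2407", "messages": [{"role": "user", "content": "ping"}]}'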
To re-generate resources.yaml with the TrustGraph configurator, install it into a Python virtual environment and run it for the OVHcloud Kubernetes platform:
python3 -m venv env
. env/bin/activate
pip install git+https://github.com/trustgraph-ai/trustgraph-templates@master
tg-configurator -t 1.3 -v 1.3.18 --platform ovh-k8s -R > resources.yaml