The official Helm chart for deploying interLink virtual nodes in Kubernetes clusters. interLink enables hybrid cloud deployments by creating virtual nodes that can execute workloads on remote computing resources while appearing as regular nodes to the Kubernetes scheduler.
- Prerequisites
- Installation
- Deployment Modes
- Configuration
- Examples
- Post-Installation
- Troubleshooting
- Development
- Kubernetes cluster (v1.20+)
- Helm 3.8+
- Appropriate RBAC permissions for virtual node operations
- For REST mode: an OAuth2 provider configured for token issuance
- For socket mode: SSH access to the remote resources
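A quick way to sanity-check the first three requirements from a workstation with cluster access; the exact RBAC verbs needed depend on the chart's templates and your values, so treat the permission checks below as a minimal sketch rather than a complete list:

```bash
# Client and cluster versions (the cluster should report v1.20+, Helm 3.8+)
kubectl version
helm version

# Spot-check a couple of permissions a virtual node installation relies on
# (illustrative only; the full set is defined by the chart's RBAC templates)
kubectl auth can-i create nodes
kubectl auth can-i create deployments -n interlink
```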
```bash
# Add the repository
# Will be available once published:
# helm repo add interlink https://intertwin-eu.github.io/interlink-helm-chart/

# Update repositories
helm repo update

# Install with custom values
helm install --create-namespace -n interlink virtual-node \
  interlink/interlink --values my-values.yaml
```

Alternatively, install directly from the OCI registry:

```bash
helm install --create-namespace -n interlink virtual-node \
  oci://ghcr.io/intertwin-eu/interlink-helm-chart/interlink \
  --values my-values.yaml
```

Or install from a local checkout of this repository:

```bash
git clone https://github.com/interTwin-eu/interlink-helm-chart.git
cd interlink-helm-chart
helm install --create-namespace -n interlink virtual-node \
  ./interlink --values my-values.yaml
```
REST mode. Architecture: the virtual kubelet and an OAuth2 token refresher run in the cluster; the plugin, the interLink API server, and an OAuth2 proxy run on the remote side.
Use Case: secure communication over HTTPS with OAuth2 authentication.
```yaml
# values-rest.yaml
nodeName: interlink-rest-node

interlink:
  address: https://your-remote-endpoint.com
  port: 443

OAUTH:
  enabled: true
  TokenURL: "https://your-oauth-provider.com/token"
  ClientID: "your-client-id"
  ClientSecret: "your-client-secret"
  RefreshToken: "your-refresh-token"
  Audience: "your-audience"

virtualNode:
  resources:
    CPUs: 16
    memGiB: 64
    pods: 200
```
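Assuming the snippet above is saved as `values-rest.yaml`, the chart is installed exactly as shown in the Installation section; only the values file changes:

```bash
helm install --create-namespace -n interlink virtual-node \
  oci://ghcr.io/intertwin-eu/interlink-helm-chart/interlink \
  --values values-rest.yaml
```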
Socket mode. Architecture: the interLink API server, the virtual kubelet, and an SSH bastion run in the cluster; only the plugin runs on the remote side.
Use Case: secure communication via SSH tunnels using Unix sockets.
```yaml
# values-socket.yaml
nodeName: interlink-socket-node

interlink:
  enabled: true
  socket: unix:///var/run/interlink.sock

plugin:
  socket: unix:///var/run/plugin.sock

sshBastion:
  enabled: true
  clientKeys:
    authorizedKeys: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI..."
  port: 31022

virtualNode:
  resources:
    CPUs: 8
    memGiB: 32
    pods: 100
```
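The public key placed in `sshBastion.clientKeys.authorizedKeys` is the one the remote side uses to open the tunnel to the bastion. A minimal sketch for generating a dedicated key pair with OpenSSH on whichever host initiates the SSH connection (the file name and comment below are arbitrary choices, not chart requirements):

```bash
# Generate an ed25519 key pair without a passphrase; paste the .pub content
# into sshBastion.clientKeys.authorizedKeys in the values file above
ssh-keygen -t ed25519 -f ~/.ssh/interlink-tunnel -C "interlink-tunnel" -N ""
cat ~/.ssh/interlink-tunnel.pub
```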
In-cluster mode. Architecture: all components (virtual kubelet, interLink API server, plugin) are deployed in the cluster and communicate over Unix sockets.
Use Case: testing, development, or when remote resources support direct API access.
```yaml
# values-incluster.yaml
nodeName: interlink-incluster-node

interlink:
  enabled: true
  socket: unix:///var/run/interlink.sock

plugin:
  enabled: true
  image: "ghcr.io/intertwin-eu/interlink/plugin-docker:latest"
  socket: unix:///var/run/plugin.sock
  config: |
    InterlinkURL: "unix:///var/run/interlink.sock"
    SidecarURL: "unix:///var/run/plugin.sock"
    VerboseLogging: true

virtualNode:
  resources:
    CPUs: 4
    memGiB: 16
    pods: 50
```
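In this mode the plugin runs as an additional container in the virtual node pod. Assuming that container is simply named `plugin` (confirm the actual names with the `jsonpath` query below), its logs can be inspected like the other containers listed under Troubleshooting:

```bash
# List the containers in the virtual node deployment
kubectl get deployment -n interlink <nodeName>-node \
  -o jsonpath='{.spec.template.spec.containers[*].name}'

# Check the plugin logs; the container name "plugin" is an assumption,
# adjust it to whatever the query above reports
kubectl logs -n interlink deployment/<nodeName>-node -c plugin
```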
| Parameter | Description | Default |
|---|---|---|
| `nodeName` | Name of the virtual node | `virtual-node` |
| `virtualNode.image` | Virtual kubelet image | `ghcr.io/interlink-hq/interlink/virtual-kubelet-inttw:latest` |
| `virtualNode.resources.CPUs` | Node CPU capacity | `8` |
| `virtualNode.resources.memGiB` | Node memory capacity in GiB | `49` |
| `virtualNode.resources.pods` | Maximum pods per node | `100` |
| `interlink.enabled` | Deploy interLink API server | `false` |
| `plugin.enabled` | Deploy plugin component | `false` |
```yaml
virtualNode:
  resources:
    accelerators:
      - resourceType: nvidia.com/gpu
        model: a100
        available: 2
      - resourceType: xilinx.com/fpga
        model: u55c
        available: 1
```
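Pods request these devices through the standard Kubernetes extended-resource mechanism; the resource name in the limit must match `resourceType` above. Whether the request turns into a real device allocation is up to the plugin on the remote side, so the manifest below is only a scheduling sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  nodeSelector:
    kubernetes.io/hostname: <nodeName>   # target the virtual node
  containers:
    - name: gpu-test
      image: busybox
      command: ["sleep", "3600"]
      resources:
        limits:
          nvidia.com/gpu: 1   # matches resourceType above
```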
```yaml
virtualNode:
  nodeLabels:
    - "node-type=virtual"
    - "accelerator=gpu"
  nodeTaints:
    - key: "virtual-node"
      value: "true"
      effect: "NoSchedule"
```
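With that taint in place, only pods that explicitly tolerate it are scheduled onto the virtual node. The standard Kubernetes toleration matching the key, value, and effect above looks like this (pod spec fragment):

```yaml
spec:
  tolerations:
    - key: "virtual-node"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
```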
Complete examples are available in the `examples/` directory:

- `edge_with_rest.yaml` - REST communication setup
- `edge_with_socket.yaml` - Socket communication setup
After installation, verify that the virtual node has registered and is healthy:

```bash
# Check virtual node status
kubectl get node <nodeName>

# Check pod status
kubectl get pods -n interlink

# View virtual node details
kubectl describe node <nodeName>

# Check logs
kubectl logs -n interlink deployment/<nodeName>-node -c vk
```
Then schedule a test workload on the virtual node:

```yaml
# test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-workload
spec:
  nodeSelector:
    kubernetes.io/hostname: <nodeName>
  containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
```
```bash
kubectl apply -f test-pod.yaml
kubectl get pod test-workload -o wide
```
If the virtual node is not Ready:

```bash
# Check node conditions
kubectl describe node <nodeName>

# Check virtual kubelet logs
kubectl logs -n interlink deployment/<nodeName>-node -c vk

# Verify interLink connectivity
kubectl logs -n interlink deployment/<nodeName>-node -c interlink
```

If pods are not being scheduled onto the virtual node:

```bash
# Check node resources
kubectl describe node <nodeName>

# Verify taints and tolerations
kubectl get node <nodeName> -o yaml | grep -A5 taints

# Check scheduler logs
kubectl logs -n kube-system deployment/kube-scheduler
```

For REST mode, if OAuth2 authentication is failing:

```bash
# Check OAuth token refresh
kubectl logs -n interlink deployment/<nodeName>-node -c refresh-token

# Verify token file
kubectl exec -n interlink deployment/<nodeName>-node -c vk -- cat /opt/interlink/token
```

For socket mode, if the SSH tunnel is not working:

```bash
# Check SSH bastion logs
kubectl logs -n interlink deployment/<nodeName>-node -c ssh-bastion

# Test SSH connectivity
kubectl exec -n interlink deployment/<nodeName>-node -c ssh-bastion -- ssh -T interlink@remote-host
```
Enable verbose logging:
```yaml
virtualNode:
  debug: true
```
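The same flag can also be flipped on an existing release without editing the values file; for example, assuming the release was installed from the OCI registry as shown above:

```bash
helm upgrade -n interlink virtual-node \
  oci://ghcr.io/intertwin-eu/interlink-helm-chart/interlink \
  --reuse-values --set virtualNode.debug=true
```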
The chart includes readiness and liveness probes. Check their status:
```bash
kubectl get pods -n interlink -o wide
kubectl describe pod <pod-name> -n interlink
```
For chart development and testing:

```bash
# Lint the chart
helm lint interlink/

# Template and preview
helm template virtual-node ./interlink --values examples/edge_with_socket.yaml

# Test installation
helm install --dry-run --debug virtual-node ./interlink --values my-values.yaml
```
This chart uses chartpress for automated versioning:
```bash
# Update version and publish
chartpress --push

# Reset to development
chartpress --reset
```
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
For detailed contribution guidelines, see CONTRIBUTING.md.
- interLink Documentation
- Official Cookbook
- GitHub Repository
- Chart Repository (GitHub Pages coming soon)
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.