Commit 97d0438

Enhance AWS BYOC documentation
Updated the README files for AWS BYOC examples to improve clarity and usability. Added detailed instructions for deployment and configuration, including IAM permissions for BYOC-I. This update aims to provide users with better guidance and best practices for utilizing Zilliz BYOC Terraform.
# EKS Cluster Access and Management Guide

This guide explains how customers can access and manage the EKS cluster created by the BYOC-I deployment.

## Important: Private Endpoint Configuration

**By default, BYOC-I deployments create EKS clusters with private endpoint access only** for security reasons. This means:

- The Kubernetes API server endpoint is **only accessible from within the VPC**
- Public internet access to the API server is **disabled by default**
- You **must establish network connectivity** to the VPC before you can access the cluster

For more information about EKS endpoint access modes, see the [AWS EKS Cluster Endpoint Documentation](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html).
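
You can confirm how your cluster's endpoint is configured with a read-only AWS CLI call. This is a sketch, not part of the BYOC-I tooling; it assumes `$CLUSTER_NAME` and `$AWS_REGION` are set to your cluster name and region:

```bash
# Inspect the current endpoint access configuration (read-only).
# For a default BYOC-I deployment this should report private access
# enabled and public access disabled.
aws eks describe-cluster \
  --name "$CLUSTER_NAME" \
  --region "$AWS_REGION" \
  --query 'cluster.resourcesVpcConfig.{publicAccess:endpointPublicAccess,privateAccess:endpointPrivateAccess}'
```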
### Network Connectivity Requirements

Before accessing the EKS cluster, you must ensure network connectivity to the VPC. Common options include:

1. **VPN Connection**: Connect your local network to the VPC via VPN
2. **AWS Direct Connect**: Use AWS Direct Connect for a dedicated network connection
3. **Bastion Host**: Launch an EC2 instance in a public subnet and SSH into it
4. **AWS Systems Manager Session Manager**: Use SSM to connect to an EC2 instance in the VPC
5. **AWS Cloud9 IDE**: Create a Cloud9 environment within the VPC
6. **Transit Gateway**: Connect networks via AWS Transit Gateway

**Note**: The cluster's control plane security group must allow ingress traffic on port 443 from your source network. For more details, see [Accessing a private only API server](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access).
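
Once one of these options is in place, a quick way to confirm reachability is to probe the API server endpoint from a host inside the VPC. This is a hedged sketch (it assumes `nc` and `curl` are available on that host, and relies on EKS exposing the unauthenticated `/healthz` path to anonymous callers, which it does by default via the `system:public-info-viewer` binding):

```bash
# Look up the cluster's API server endpoint
ENDPOINT=$(aws eks describe-cluster --name "$CLUSTER_NAME" \
  --region "$AWS_REGION" --query 'cluster.endpoint' --output text)

# TCP reachability on port 443 (strip the https:// prefix for nc)
nc -zv "${ENDPOINT#https://}" 443

# The API server's unauthenticated health endpoint should print "ok"
curl -sk "$ENDPOINT/healthz"
```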
## Prerequisites

Before accessing the EKS cluster, ensure you have:

1. **Network Connectivity**: Established connection to the VPC (see above)
2. **AWS CLI** installed and configured with appropriate credentials
3. **kubectl** installed (version compatible with your EKS cluster version)
4. **AWS Credentials**: Use AWS credentials from the IAM role that was used to create the EKS cluster (recommended). This role automatically has cluster-admin permissions configured.

> **Note**: If you need to use a different IAM role or user, see the [Granting Access to Other IAM Roles or Users](#granting-access-to-other-iam-roles-or-users) section below.
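
A quick sanity check for these prerequisites (a minimal sketch using standard AWS CLI and kubectl invocations):

```bash
# Confirm the AWS CLI is installed and which identity it will use
aws --version
aws sts get-caller-identity

# Confirm kubectl is installed and check its client version
kubectl version --client
```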
## Configuring kubectl Access

The simplest way to configure kubectl is using the AWS CLI `update-kubeconfig` command.

**Recommended**: Use AWS credentials from the IAM role that was used to create the EKS cluster (the same role used during Terraform deployment). This role automatically has cluster-admin permissions configured, so no additional setup is needed.

```bash
# Set your AWS region and cluster name
export AWS_REGION=<your-region>  # e.g., us-east-1
export CLUSTER_NAME=$(terraform output -json | jq -r '.data_plane_id.value')

# Update kubeconfig (must be run from a location with VPC network access)
aws eks update-kubeconfig --region $AWS_REGION --name $CLUSTER_NAME
```

This command:
- Retrieves cluster information (endpoint, certificate authority, etc.) from AWS
- Configures authentication using your current AWS credentials via EKS access entries
- Adds the cluster to your `~/.kube/config` file
- Sets the current context to the new cluster

**Prerequisites**:
1. **Network Connectivity**: This command must be run from a location that has network connectivity to the VPC (since the endpoint is private by default)
2. **IAM Role**: Ensure you're using AWS credentials from the role that created the cluster (or see [Granting Access to Other IAM Roles or Users](#granting-access-to-other-iam-roles-or-users) if using a different role)

For more details, see [Connect kubectl to an EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html).
## Granting Access to Other IAM Roles or Users

BYOC-I EKS clusters use **EKS Access Entries** for authentication. If you need to grant access to IAM roles or users other than the one that created the cluster, you must create an access entry for them first. For more information about EKS Access Entries, see the [AWS EKS Access Entries Documentation](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html).

Follow these steps:

1. **Create an access entry** for the IAM principal:

   ```bash
   aws eks create-access-entry \
     --cluster-name $CLUSTER_NAME \
     --principal-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME \
     --region $AWS_REGION
   ```

2. **Associate an access policy** with the access entry:

   ```bash
   aws eks associate-access-policy \
     --cluster-name $CLUSTER_NAME \
     --principal-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME \
     --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
     --access-scope type=cluster \
     --region $AWS_REGION
   ```

Available access policies:
- `AmazonEKSClusterAdminPolicy`: Full cluster administrator access
- `AmazonEKSAdminPolicy`: Administrative access (can manage most resources)
- `AmazonEKSViewPolicy`: Read-only access

For more details, see:
- [Create access entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html#creating-access-entries)
- [Associate access policies](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html#associating-access-policies)
- [Review access policy permissions](https://docs.aws.amazon.com/eks/latest/userguide/access-policy-permissions.html)
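
To confirm which policies ended up associated with an access entry, you can list them afterward (hedged example; substitute your own account ID and role name):

```bash
# List the access policies associated with a principal's access entry
aws eks list-associated-access-policies \
  --cluster-name $CLUSTER_NAME \
  --principal-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME \
  --region $AWS_REGION
```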
## Deploying Kubernetes Resources

After configuring kubectl access, you can deploy Kubernetes resources to the cluster. There are two main approaches:

### Method 1: Using kubectl (Recommended for Manual Operations)

Once kubectl is configured and you have network connectivity to the VPC:

```bash
# Verify access
kubectl cluster-info
kubectl get nodes

# Deploy resources using kubectl
kubectl apply -f your-manifest.yaml

# Or create resources directly
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
```

**Note**: All `kubectl` commands must be executed from a location with network connectivity to the VPC, as the API server endpoint is private.
### Method 2: Using Terraform Kubernetes Provider

You can also deploy Kubernetes resources directly using Terraform, which is useful for infrastructure-as-code workflows. The Terraform Kubernetes provider can authenticate using the EKS cluster's access entry system.

#### Step 1: Configure Terraform Kubernetes Provider

Add the Kubernetes provider to your Terraform configuration:

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.23"
    }
  }
}

# Data source to get EKS cluster information
data "aws_eks_cluster" "example" {
  name = var.cluster_name # Use your cluster name (dataplane_id or custom name)
}

data "aws_eks_cluster_auth" "example" {
  name = data.aws_eks_cluster.example.name
}

# Configure Kubernetes provider
provider "kubernetes" {
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.example.token
}
```
#### Step 2: Deploy Kubernetes Resources

Now you can create Kubernetes resources using Terraform:

```hcl
# Example: Create a namespace
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example-namespace"
  }
}

# Example: Create a ConfigMap
resource "kubernetes_config_map" "example" {
  metadata {
    name      = "example-config"
    namespace = kubernetes_namespace.example.metadata[0].name
  }

  data = {
    config_key = "config_value"
  }
}

# Example: Create a Deployment
resource "kubernetes_deployment" "example" {
  metadata {
    name      = "example-deployment"
    namespace = kubernetes_namespace.example.metadata[0].name
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "example"
      }
    }

    template {
      metadata {
        labels = {
          app = "example"
        }
      }

      spec {
        container {
          image = "nginx:latest"
          name  = "nginx"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}

# Example: Create a Service
resource "kubernetes_service" "example" {
  metadata {
    name      = "example-service"
    namespace = kubernetes_namespace.example.metadata[0].name
  }

  spec {
    selector = {
      app = kubernetes_deployment.example.spec[0].selector[0].match_labels.app
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
```
#### Step 3: Apply Terraform Configuration

```bash
# Initialize Terraform (if not already done)
terraform init

# Plan the changes
terraform plan

# Apply the configuration
terraform apply
```

**Important Notes**:
- The Terraform Kubernetes provider uses the AWS credentials configured in your environment
- The IAM identity used must have an access entry configured for the EKS cluster (the role used to create the cluster works by default)
- Terraform must be run from a location with network connectivity to the VPC (or use a CI/CD system within the VPC)
- The `aws_eks_cluster_auth` data source automatically retrieves a short-lived authentication token using your AWS credentials
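
Because the `aws_eks_cluster_auth` token is short-lived, long-running plans or applies can hit expired-token errors. A common alternative, sketched here under the assumption that the AWS CLI is available wherever Terraform runs, is to let the provider fetch a fresh token at apply time via an `exec` block:

```hcl
# Alternative provider configuration: fetch a token on every Terraform
# run with `aws eks get-token` instead of the aws_eks_cluster_auth
# data source (hypothetical variation on the Step 1 config above).
provider "kubernetes" {
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", data.aws_eks_cluster.example.name]
  }
}
```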
## Troubleshooting

### Cannot Connect to Cluster

**Error**: `Unable to connect to the server: dial tcp: lookup <endpoint>`

**Solutions**:
1. Verify your AWS credentials are configured: `aws sts get-caller-identity`
2. Check if the cluster endpoint is accessible (for private clusters, ensure you're in the VPC)
3. Verify the cluster name is correct: `aws eks list-clusters --region $AWS_REGION`
4. Re-run the `aws eks update-kubeconfig` command

### Authentication Errors

**Error**: `error: You must be logged in to the server (Unauthorized)`

**Solutions**:
1. Verify your IAM user/role has EKS access permissions
2. Check if your AWS credentials are expired: `aws sts get-caller-identity`
3. Ensure you're using the correct AWS profile: `export AWS_PROFILE=<profile-name>`
4. Verify a cluster access entry exists for your identity (access entries require EKS clusters running Kubernetes 1.23 or later)
### Private Cluster Access

**Issue**: Cannot access private EKS cluster from local machine

**Solutions**:
1. **Establish VPC Network Connectivity**: Ensure you have network connectivity to the VPC (see [Network Connectivity Requirements](#network-connectivity-requirements) above)

2. **Verify Security Group Rules**: Ensure the EKS control plane security group allows ingress on port 443 from your source network

3. **Verify DNS Configuration**: For private endpoints, ensure your VPC has:
   - `enableDnsHostnames = true`
   - `enableDnsSupport = true`
   - A DHCP options set that includes `AmazonProvidedDNS`

4. **Temporary Public Access** (Not recommended for production):

   ```bash
   # Enable public access temporarily for troubleshooting
   aws eks update-cluster-config \
     --name $CLUSTER_NAME \
     --region $AWS_REGION \
     --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true

   # Remember to disable it after troubleshooting
   aws eks update-cluster-config \
     --name $CLUSTER_NAME \
     --region $AWS_REGION \
     --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
   ```

5. **Verify Access Entry**: Ensure your IAM identity has an access entry configured (if you are working under an assumed role, pass the underlying IAM role ARN rather than the STS assumed-role ARN returned by `get-caller-identity`):

   ```bash
   # List access entries
   aws eks list-access-entries --cluster-name $CLUSTER_NAME --region $AWS_REGION

   # Describe your access entry
   aws eks describe-access-entry \
     --cluster-name $CLUSTER_NAME \
     --principal-arn $(aws sts get-caller-identity --query Arn --output text) \
     --region $AWS_REGION
   ```
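
The DNS settings from step 3 can be inspected with the AWS CLI (a sketch; `$VPC_ID` is assumed to be set to the ID of the VPC your cluster runs in):

```bash
# Each call prints whether the attribute is enabled for the VPC
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsHostnames
```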
## Security Best Practices

1. **Use Private Endpoints**: For production, disable public endpoint access (the default for BYOC-I)
2. **IAM Authentication**: Always use IAM for cluster authentication via EKS Access Entries
3. **Least Privilege**: Grant only necessary permissions to users/roles when creating access entries
4. **Audit Logging**: Enable EKS control plane logging for audit purposes
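
For item 4, control plane audit logging can be switched on with `update-cluster-config` (a hedged example; the log types shown are from the standard EKS set, and logs are delivered to CloudWatch Logs):

```bash
# Enable audit and authenticator logs for the control plane
aws eks update-cluster-config \
  --name $CLUSTER_NAME \
  --region $AWS_REGION \
  --logging '{"clusterLogging":[{"types":["audit","authenticator"],"enabled":true}]}'
```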
## Additional Resources

### AWS Documentation
- [AWS EKS Cluster Endpoint Documentation](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) - Detailed information about endpoint access modes
- [AWS EKS Access Entries Documentation](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) - Managing IAM access to Kubernetes clusters
- [Connect kubectl to EKS Cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html) - Configuring kubectl for EKS
- [AWS EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/) - Complete EKS documentation
- [Accessing a Private Only API Server](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access) - Network connectivity options

### Kubernetes Documentation
- [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) - Quick reference for kubectl commands
- [Kubernetes Documentation](https://kubernetes.io/docs/) - Official Kubernetes documentation

### Best Practices
- [EKS Best Practices](https://aws.github.io/aws-eks-best-practices/) - AWS EKS best practices guide

### Zilliz Documentation
- [Zilliz Cloud Documentation](https://docs.zilliz.com/) - Zilliz Cloud platform documentation

## Getting Help

If you encounter issues:

1. Check Terraform outputs: `terraform output`
2. Review AWS CloudWatch logs for EKS
3. Check node group status in the AWS Console
4. Verify IAM permissions and roles
5. Contact Zilliz Cloud support with cluster details