Commit 6b79d7a (parent 778b6cb)

Enhance AWS BYOC documentation

Updated the README files for AWS BYOC examples to improve clarity and usability. Added detailed instructions for deployment and configuration, including IAM permissions for BYOC-I. This update aims to provide users with better guidance and best practices for utilizing Zilliz BYOC Terraform.

1 file changed: +379 −0 lines

# EKS Cluster Access and Management Guide

This guide explains how customers can access and manage the EKS cluster created by the BYOC-I deployment.

## Important: Private Endpoint Configuration

**By default, BYOC-I deployments create EKS clusters with private endpoint access only** for security reasons. This means:

- The Kubernetes API server endpoint is **only accessible from within the VPC**
- Public internet access to the API server is **disabled by default**
- You **must establish network connectivity** to the VPC before you can access the cluster

> **Note**: While not recommended for production, you can configure the cluster to use both public and private endpoint access modes. For details, see [Temporary Public Access](#temporary-public-access-not-recommended-for-production).

For more information about EKS endpoint access modes, see the [AWS EKS Cluster Endpoint Documentation](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html).

### Network Connectivity Requirements

Before accessing the EKS cluster, you must ensure network connectivity to the VPC. Common options include:

1. **AWS CloudShell**: Use AWS CloudShell from the AWS Console (requires VPC peering or a VPN connection to the EKS VPC)
2. **VPN Connection**: Connect your local network to the VPC via VPN
3. **AWS Direct Connect**: Use AWS Direct Connect for a dedicated network connection
4. **Bastion Host**: Launch an EC2 instance in a public subnet and SSH into it
5. **AWS Systems Manager Session Manager**: Use SSM to connect to an EC2 instance in the VPC
6. **AWS Cloud9 IDE**: Create a Cloud9 environment within the VPC
7. **VPC Peering**: Establish a VPC peering connection between your VPC and the EKS VPC
8. **Transit Gateway**: Connect networks via AWS Transit Gateway

**Note**: The cluster's control plane security group must allow ingress traffic on port 443 from your source network. For more details, see [Accessing a private only API server](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access).

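As a concrete illustration of option 5, Session Manager lets you open a shell on an EC2 instance inside the VPC without SSH keys or open inbound ports. This is a sketch; the instance ID is a placeholder, and it assumes the Session Manager plugin for the AWS CLI is installed:

```shell
# Placeholder instance ID of an EC2 instance inside the EKS VPC.
INSTANCE_ID=i-0123456789abcdef0

# Open an interactive shell on the instance via SSM (no inbound ports required).
aws ssm start-session --target "$INSTANCE_ID"
```

From that shell you can install kubectl and run the `update-kubeconfig` step described below.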
## Prerequisites

Before accessing the EKS cluster, ensure you have:

1. **Network Connectivity**: An established connection to the VPC (see above)
2. **AWS CLI**: Installed and configured with appropriate credentials
3. **kubectl**: Installed, at a version compatible with your EKS cluster version
4. **AWS Credentials**: Use credentials from the IAM role that created the EKS cluster (recommended). This role automatically has cluster-admin permissions configured.

> **Note**: If you need to use a different IAM role or user, see the [Granting Access to Other IAM Roles or Users](#granting-access-to-other-iam-roles-or-users) section below.

## Configuring kubectl Access

The simplest way to configure kubectl is with the AWS CLI `update-kubeconfig` command.

**Recommended**: Use AWS credentials from the IAM role that created the EKS cluster (the same role used during the Terraform deployment). This role automatically has cluster-admin permissions configured, so no additional setup is needed.

```bash
# Set your AWS region and cluster name
export AWS_REGION=<your-region>  # e.g., us-east-1
export CLUSTER_NAME=<your-cluster-name>

# Update kubeconfig (must be run from a location with VPC network access)
aws eks update-kubeconfig --region $AWS_REGION --name $CLUSTER_NAME
```

This command:
- Retrieves cluster information (endpoint, certificate authority, etc.) from AWS
- Configures authentication using your current AWS credentials via EKS access entries
- Adds the cluster to your `~/.kube/config` file
- Sets the current context to the new cluster

**Prerequisites**:
1. **Network Connectivity**: This command must be run from a location with network connectivity to the VPC (the endpoint is private by default)
2. **IAM Role**: Ensure you are using AWS credentials from the role that created the cluster (or see [Granting Access to Other IAM Roles or Users](#granting-access-to-other-iam-roles-or-users) if using a different role)

For more details, see [Connect kubectl to an EKS cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html).

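If you work with multiple clusters or AWS profiles, a couple of optional `update-kubeconfig` flags can help. This is a sketch; the profile name and context alias are hypothetical:

```shell
# Hypothetical profile and context alias; adjust to your setup.
PROFILE=byoc-admin
ALIAS=byoc-eks

aws eks update-kubeconfig \
  --region $AWS_REGION \
  --name $CLUSTER_NAME \
  --profile "$PROFILE" \
  --alias "$ALIAS"

# Confirm which context kubectl will use.
kubectl config current-context
```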
## Granting Access to Other IAM Roles or Users

BYOC-I EKS clusters use **EKS Access Entries** for authentication. If you need to grant access to IAM roles or users other than the one that created the cluster, you must create an access entry for them first. For more information, see the [AWS EKS Access Entries Documentation](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html).

Follow these steps:

1. **Create an access entry** for the IAM principal:
   ```bash
   aws eks create-access-entry \
     --cluster-name $CLUSTER_NAME \
     --principal-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME \
     --region $AWS_REGION
   ```

2. **Associate an access policy** with the access entry:
   ```bash
   aws eks associate-access-policy \
     --cluster-name $CLUSTER_NAME \
     --principal-arn arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME \
     --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
     --access-scope type=cluster \
     --region $AWS_REGION
   ```

Available access policies:
- `AmazonEKSClusterAdminPolicy`: Full cluster administrator access
- `AmazonEKSAdminPolicy`: Administrative access (can manage most resources)
- `AmazonEKSViewPolicy`: Read-only access

For more details, see:
- [Create access entries](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html#creating-access-entries)
- [Associate access policies](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html#associating-access-policies)
- [Review access policy permissions](https://docs.aws.amazon.com/eks/latest/userguide/access-policy-permissions.html)

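Beyond cluster-wide grants, an access policy can also be scoped to specific namespaces. A sketch, where the role `team-a-readonly` and namespace `team-a` are hypothetical placeholders:

```shell
# Read-only access limited to one namespace (placeholders throughout).
POLICY_ARN=arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
aws eks associate-access-policy \
  --cluster-name $CLUSTER_NAME \
  --principal-arn arn:aws:iam::ACCOUNT_ID:role/team-a-readonly \
  --policy-arn "$POLICY_ARN" \
  --access-scope type=namespace,namespaces=team-a \
  --region $AWS_REGION
```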
## Temporary Public Access (Not recommended for production)

While not recommended for production environments, you can configure the cluster to use both public and private endpoint access modes. This lets you reach the cluster from the internet while keeping private access within the VPC.

**Important**: When enabling public access, always configure a CIDR whitelist to restrict access to specific IP addresses or networks.

```bash
# Enable public access with a CIDR whitelist
# Replace <YOUR_IP_CIDR> with your IP address or CIDR block (e.g., 203.0.113.0/24)
aws eks update-cluster-config \
  --name $CLUSTER_NAME \
  --region $AWS_REGION \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs=["<YOUR_IP_CIDR>"]

# Example with multiple CIDR blocks:
# aws eks update-cluster-config \
#   --name $CLUSTER_NAME \
#   --region $AWS_REGION \
#   --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs=["203.0.113.0/24","198.51.100.0/24"]

# Remember to disable public access after use
aws eks update-cluster-config \
  --name $CLUSTER_NAME \
  --region $AWS_REGION \
  --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true
```

For more details about configuring endpoint access and the CIDR whitelist, see [Configure network access to cluster API server endpoint](https://docs.aws.amazon.com/eks/latest/userguide/config-cluster-endpoint.html).

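To confirm an endpoint change took effect, you can inspect the cluster's current access configuration. A sketch using a JMESPath `--query`; the output labels (`public`, `private`, `cidrs`) are arbitrary names chosen here:

```shell
# JMESPath expression selecting the endpoint access fields from describe-cluster.
QUERY='cluster.resourcesVpcConfig.{public:endpointPublicAccess,private:endpointPrivateAccess,cidrs:publicAccessCidrs}'

aws eks describe-cluster \
  --name $CLUSTER_NAME \
  --region $AWS_REGION \
  --query "$QUERY"
```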
## Deploying Kubernetes Resources

After configuring kubectl access, you can deploy Kubernetes resources to the cluster. There are two main approaches.

### Method 1: Using kubectl (Recommended for Manual Operations)

Once kubectl is configured and you have network connectivity to the VPC:

```bash
# Verify access
kubectl cluster-info
kubectl get nodes

# Deploy resources using kubectl
kubectl apply -f your-manifest.yaml

# Or create resources directly
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
```

**Note**: All `kubectl` commands must be executed from a location with network connectivity to the VPC, as the API server endpoint is private.

### Method 2: Using Terraform Kubernetes Provider

You can also deploy Kubernetes resources directly with Terraform, which is useful for infrastructure-as-code workflows. The Terraform Kubernetes provider can authenticate through the EKS cluster's access entry system.

#### Step 1: Configure Terraform Kubernetes Provider

Add the Kubernetes provider to your Terraform configuration:

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.23"
    }
  }
}

# Data source to get EKS cluster information
data "aws_eks_cluster" "example" {
  name = var.cluster_name  # Use your cluster name (dataplane_id or custom name)
}

data "aws_eks_cluster_auth" "example" {
  name = data.aws_eks_cluster.example.name
}

# Configure Kubernetes provider
provider "kubernetes" {
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.example.token
}
```

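The token returned by `aws_eks_cluster_auth` is short-lived, which can cause authentication failures during long-running operations. As an alternative (a sketch, assuming the AWS CLI is available wherever Terraform runs), the provider's `exec` block can be used in place of the `token` argument to fetch a fresh token on each operation:

```hcl
provider "kubernetes" {
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)

  # Fetch a fresh token via the AWS CLI instead of a static data-source token.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", var.cluster_name]
  }
}
```

Use one approach or the other; Terraform allows only a single unaliased `kubernetes` provider block.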
#### Step 2: Deploy Kubernetes Resources

Now you can create Kubernetes resources using Terraform:

```hcl
# Example: Create a namespace
resource "kubernetes_namespace" "example" {
  metadata {
    name = "example-namespace"
  }
}

# Example: Create a ConfigMap
resource "kubernetes_config_map" "example" {
  metadata {
    name      = "example-config"
    namespace = kubernetes_namespace.example.metadata[0].name
  }

  data = {
    config_key = "config_value"
  }
}

# Example: Create a Deployment
resource "kubernetes_deployment" "example" {
  metadata {
    name      = "example-deployment"
    namespace = kubernetes_namespace.example.metadata[0].name
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "example"
      }
    }

    template {
      metadata {
        labels = {
          app = "example"
        }
      }

      spec {
        container {
          image = "nginx:latest"
          name  = "nginx"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}

# Example: Create a Service
resource "kubernetes_service" "example" {
  metadata {
    name      = "example-service"
    namespace = kubernetes_namespace.example.metadata[0].name
  }

  spec {
    selector = {
      app = kubernetes_deployment.example.spec[0].selector[0].match_labels.app
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
```

#### Step 3: Apply Terraform Configuration

```bash
# Initialize Terraform (if not already done)
terraform init

# Plan the changes
terraform plan

# Apply the configuration
terraform apply
```

**Important Notes**:
- The Terraform Kubernetes provider uses the AWS credentials configured in your environment
- The IAM identity used must have an access entry configured for the EKS cluster (the role used to create the cluster works by default)
- Terraform must be run from a location with network connectivity to the VPC (or use a CI/CD system within the VPC)
- The `aws_eks_cluster_auth` data source automatically retrieves a short-lived authentication token using your AWS credentials

## Troubleshooting

### Cannot Connect to Cluster

**Error**: `Unable to connect to the server: dial tcp: lookup <endpoint>`

**Solutions**:
1. Verify your AWS credentials are configured: `aws sts get-caller-identity`
2. Check that the cluster endpoint is reachable (for private clusters, ensure you have network access to the VPC)
3. Verify the cluster name is correct: `aws eks list-clusters --region $AWS_REGION`
4. Re-run the `aws eks update-kubeconfig` command

### Authentication Errors

**Error**: `error: You must be logged in to the server (Unauthorized)`

**Solutions**:
1. Verify your IAM user/role has EKS access permissions
2. Check whether your AWS credentials have expired: `aws sts get-caller-identity`
3. Ensure you are using the correct AWS profile: `export AWS_PROFILE=<profile-name>`
4. Verify a cluster access entry exists for your identity (supported on EKS 1.23+)

### Private Cluster Access

**Issue**: Cannot access a private EKS cluster from a local machine

**Solutions**:
1. **Establish VPC Network Connectivity**: Ensure you have network connectivity to the VPC (see [Network Connectivity Requirements](#network-connectivity-requirements) above)

2. **Verify Security Group Rules**: Ensure the EKS control plane security group allows ingress on port 443 from your source network

3. **Verify DNS Configuration**: For private endpoints, ensure your VPC has:
   - `enableDnsHostnames = true`
   - `enableDnsSupport = true`
   - A DHCP options set that includes `AmazonProvidedDNS`

4. **Verify Access Entry**: Ensure your IAM identity has an access entry configured:
   ```bash
   # List access entries
   aws eks list-access-entries --cluster-name $CLUSTER_NAME --region $AWS_REGION

   # Describe your access entry
   aws eks describe-access-entry \
     --cluster-name $CLUSTER_NAME \
     --principal-arn $(aws sts get-caller-identity --query Arn --output text) \
     --region $AWS_REGION
   ```

   > **Note**: If you are working under an assumed role, `aws sts get-caller-identity` returns an STS `assumed-role` ARN, while access entries are keyed by the underlying IAM role ARN; in that case, pass the IAM role ARN to `--principal-arn` directly.

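The VPC DNS attributes mentioned in the DNS solution above can be checked from the CLI. A sketch; the VPC ID is a placeholder:

```shell
# Placeholder VPC ID; use the VPC that hosts the EKS cluster.
VPC_ID=vpc-0123456789abcdef0

# Each call reports one DNS attribute of the VPC.
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsSupport --region $AWS_REGION
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsHostnames --region $AWS_REGION
```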
## Security Best Practices

1. **Use Private Endpoints**: For production, disable public endpoint access (the default for BYOC-I)
2. **IAM Authentication**: Always use IAM for cluster authentication via EKS Access Entries
3. **Least Privilege**: Grant only the necessary permissions to users/roles when creating access entries
4. **Audit Logging**: Enable EKS control plane logging for audit purposes

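Control plane audit logging can be turned on with `update-cluster-config`. A sketch; the chosen log types (`audit`, `authenticator`) are illustrative and can be extended with `api`, `controllerManager`, and `scheduler`:

```shell
# Enable audit and authenticator logs on the control plane (illustrative types).
LOGGING='{"clusterLogging":[{"types":["audit","authenticator"],"enabled":true}]}'

aws eks update-cluster-config \
  --name $CLUSTER_NAME \
  --region $AWS_REGION \
  --logging "$LOGGING"
```

Enabled log types are delivered to a CloudWatch Logs group named `/aws/eks/<cluster-name>/cluster`.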
## Additional Resources

### AWS Documentation
- [AWS EKS Cluster Endpoint Documentation](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html) - Detailed information about endpoint access modes
- [Configure network access to cluster API server endpoint](https://docs.aws.amazon.com/eks/latest/userguide/config-cluster-endpoint.html) - How to configure endpoint access and the CIDR whitelist
- [AWS EKS Access Entries Documentation](https://docs.aws.amazon.com/eks/latest/userguide/access-entries.html) - Managing IAM access to Kubernetes clusters
- [Connect kubectl to an EKS Cluster](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html) - Configuring kubectl for EKS
- [AWS EKS User Guide](https://docs.aws.amazon.com/eks/latest/userguide/) - Complete EKS documentation
- [Accessing a Private Only API Server](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#private-access) - Network connectivity options

### Kubernetes Documentation
- [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) - Quick reference for kubectl commands
- [Kubernetes Documentation](https://kubernetes.io/docs/) - Official Kubernetes documentation

### Best Practices
- [EKS Best Practices](https://aws.github.io/aws-eks-best-practices/) - AWS EKS best practices guide

### Zilliz Documentation
- [Zilliz Cloud Documentation](https://docs.zilliz.com/) - Zilliz Cloud platform documentation

## Getting Help

If you encounter issues:

1. Check Terraform outputs: `terraform output`
2. Review AWS CloudWatch logs for EKS
3. Check node group status in the AWS Console
4. Verify IAM permissions and roles
5. Contact Zilliz Cloud support with cluster details