Commit f6b33a0

Support updating kube-router in a local VM cluster (#116)
* gofmt
* docs: Remove manual AWS config reference. It's automatic now.
* Support updating kube-router in a running local VM cluster
  - "make vagrant-image-update" target added
  - Documentation added and small reorganization
1 parent d3f43fc commit f6b33a0

File tree: 9 files changed, +175 −87 lines changed


Documentation/bootkube.md

Lines changed: 0 additions & 8 deletions

@@ -54,14 +54,6 @@
 curl -L https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/contrib/bootkube/kube-router.yaml -o assets/manifests/kube-router.yaml
 ```
 
-## Additional steps when deploying on AWS
-
-Since kube-router uses node routing rules to directly route pod-to-pod traffic to the destination node, nodes send and receive IP traffic with source and destination IPs from the pod CIDR. By default, AWS prevents an instance from sending and receiving traffic with IPs other than its own private IP, so this restriction must be relaxed. Run the command below on each node in the cluster to allow traffic to and from pod IPs:
-
-```
-aws ec2 modify-instance-attribute --instance-id <instance id> --no-source-dest-check
-```
-
 ## Cluster Startup
 
 Finally, proceed by following the Bootkube documentation, which generally
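
For reference, the manual step removed above can also be scripted across a whole cluster. A hedged sketch, assuming the AWS CLI is configured; the Name tag filter is hypothetical and should match however your nodes are actually tagged:
```
# Sketch: disable the EC2 source/dest check on every node instance at once.
# The tag filter below is an assumption, not part of kube-router's tooling.
for id in $(aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=*kube*" \
    --query 'Reservations[].Instances[].InstanceId' --output text); do
  aws ec2 modify-instance-attribute --instance-id "${id}" --no-source-dest-check
done
```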

Documentation/developing.md

Lines changed: 4 additions & 70 deletions

@@ -49,76 +49,6 @@ kube-router images, cloudnativelabs/kube-router. You can push to a different
 repository by changing a couple settings, as described in [Image Options](#image-options)
 below.
 
-## Testing Code Changes
-
-### Running Your Code On A Local VM Cluster
-
-Running your code changes in a real Kubernetes cluster is easy. Just make sure
-you have VirtualBox, VMware Fusion, or VMware Workstation installed and run:
-```
-make vagrant-up-single-node
-```
-
-Alternatively, if you have 6GB of RAM for the VMs, you can run a multi-node cluster
-that consists of a dedicated etcd node, a controller node, and a worker node:
-```
-make vagrant-up-multi-node
-```
-
-You will see lots of output as the VMs are provisioned, and the first run may
-take some time as VM and container images are downloaded. After the cluster is
-up you will receive instructions for using kubectl and gaining ssh access:
-```
-SUCCESS! The local cluster is ready.
-
-### kubectl usage ###
-# Quickstart - Use this kubeconfig for individual commands
-KUBECONFIG=/tmp/kr-vagrant-shortcut/cluster/auth/kubeconfig kubectl get pods --all-namespaces -o wide
-#
-## OR ##
-#
-# Use this kubeconfig for the current terminal session
-KUBECONFIG=/tmp/kr-vagrant-shortcut/cluster/auth/kubeconfig
-export KUBECONFIG
-kubectl get pods --all-namespaces -o wide
-#
-## OR ##
-#
-# Backup and replace your default kubeconfig
-# Note: This will continue to work on recreated local clusters
-mv ~/.kube/config ~/.kube/config-backup
-ln -s /tmp/kr-vagrant-shortcut/cluster/auth/kubeconfig ~/.kube/config
-
-### SSH ###
-# Get node names
-make vagrant status
-# SSH into the controller node (c1)
-make vagrant ssh c1
-```
-
-#### Managing A Local VM Cluster
-
-You can use [Vagrant](https://www.vagrantup.com/docs/cli/) commands against the
-running cluster with `make vagrant COMMANDS`.
-
-For example, `make vagrant status` outputs:
-```
-Current machine states:
-
-e1 not created (virtualbox)
-c1 not created (virtualbox)
-w1 not created (virtualbox)
-
-This environment represents multiple VMs. The VMs are all listed
-above with their current state. For more information about a specific
-VM, run `vagrant status NAME`.
-```
-
-With this information you can ssh into any of the VMs listed:
-```
-make vagrant ssh c1
-```
-
 ### Makefile Options
 
 There are several variables which can be modified in the Makefile to customize
@@ -169,6 +99,10 @@ Successfully tagged quay.io/bzub/kube-router-git:custom
 for testing purposes.
 Example (DEV-SUFFIX=master-latest): quay.io/cloudnativelabs/kube-router-git:master-latest
 
+## Testing kube-router
+
+Please read the [testing documentation](testing.md) for details.
+
 ## Release Workflow
 
 These instructions show how official kube-router releases are performed.

Documentation/testing.md

Lines changed: 107 additions & 0 deletions

@@ -0,0 +1,107 @@
+# Testing kube-router
+
+Our end-user testing goals are to:
+- Support easily running kube-router in any Kubernetes environment, new or
+  existing.
+- Provide tools to quickly collect information about a cluster to help with
+  troubleshooting kube-router issues.
+
+Our developer testing goals are to:
+- Provide tools to quickly build and test kube-router code and container images.
+- Provide well-documented code testing protocols to ensure consistent code
+  quality for all contributions.
+- Support quickly testing code changes by spinning up test clusters in local
+  VMs, cloud environments, and via CI systems in pull requests.
+- Support running official Kubernetes e2e tests as well as custom e2e tests for
+  kube-router's exclusive features.
+
+## End Users
+
+We currently support running kube-router on local VMs via Vagrant. Follow the
+instructions in [Starting A Local VM Cluster](#starting-a-local-vm-cluster)
+to get started.
+
+## Developers
+
+### Option 1: Local VM Cluster
+
+#### Starting A Local VM Cluster
+
+Running your code changes or simply trying out kube-router as-is in a real
+Kubernetes cluster is easy. Just make sure you have VirtualBox, VMware Fusion,
+or VMware Workstation installed and run:
+```
+make vagrant-up-single-node
+```
+
+Alternatively, if you have 6GB of RAM for the VMs, you can run a multi-node cluster
+that consists of a dedicated etcd node, a controller node, and a worker node:
+```
+make vagrant-up-multi-node
+```
+
+You will see lots of output as the VMs are provisioned, and the first run may
+take some time as VM and container images are downloaded. After the cluster is
+up you will receive instructions for using kubectl and gaining ssh access:
+```
+SUCCESS! The local cluster is ready.
+
+### kubectl usage ###
+# Quickstart - Use this kubeconfig for individual commands
+KUBECONFIG=/tmp/kr-vagrant-shortcut/cluster/auth/kubeconfig kubectl get pods --all-namespaces -o wide
+#
+## OR ##
+#
+# Use this kubeconfig for the current terminal session
+KUBECONFIG=/tmp/kr-vagrant-shortcut/cluster/auth/kubeconfig
+export KUBECONFIG
+kubectl get pods --all-namespaces -o wide
+#
+## OR ##
+#
+# Backup and replace your default kubeconfig
+# Note: This will continue to work on recreated local clusters
+mv ~/.kube/config ~/.kube/config-backup
+ln -s /tmp/kr-vagrant-shortcut/cluster/auth/kubeconfig ~/.kube/config
+
+### SSH ###
+# Get node names
+make vagrant status
+# SSH into the controller node (c1)
+make vagrant ssh c1
+```
+
+#### Managing Your Local VM Cluster
+
+You can use [Vagrant](https://www.vagrantup.com/docs/cli/) commands against the
+running cluster with `make vagrant COMMANDS`.
+
+For example, `make vagrant status` outputs:
+```
+Current machine states:
+
+e1 not created (virtualbox)
+c1 not created (virtualbox)
+w1 not created (virtualbox)
+
+This environment represents multiple VMs. The VMs are all listed
+above with their current state. For more information about a specific
+VM, run `vagrant status NAME`.
+```
+
+With this information you can ssh into any of the VMs listed:
+```
+make vagrant ssh c1
+```
+
+#### Upgrading kube-router In Your Local VM Cluster
+
+If you make code changes or check out a different branch/tag, you can easily
+build, install, and run these changes in your previously started local VM
+cluster:
+
+`make vagrant-image-update`
+
+Unlike the `make vagrant-up-*` targets, this does not destroy and recreate the
+VMs; it updates them live. This saves time if you aren't concerned about having
+a pristine OS/Kubernetes environment to test against.
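
Taken together with the Makefile target added in this commit, the intended edit/test loop reads roughly like this (a sketch of documented commands, not additional tooling):
```
make vagrant-up-single-node   # bring the local VM cluster up once
# ... edit kube-router source, or check out another branch/tag ...
make vagrant-image-update     # rebuild the image, load it into the VMs,
                              # and restart the kube-router pods
```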

Makefile

Lines changed: 5 additions & 0 deletions

@@ -43,6 +43,11 @@ vagrant-destroy: ## Destroy a previously created local VM cluster
 vagrant-clean: vagrant-destroy ## Destroy a previously created local VM cluster and remove all downloaded/generated assets
 	@rm -rf hack/_output
 
+vagrant-image-update: export docker=$(DOCKER)
+vagrant-image-update: export DEV_IMG=$(REGISTRY_DEV):$(IMG_TAG)
+vagrant-image-update: all ## Rebuild kube-router, update image in local VMs, and restart kube-router pods.
+	@hack/vagrant-image-update.sh
+
 run: kube-router ## Runs "kube-router --help".
 	./kube-router --help

app/watchers/network_policy_watcher.go

Lines changed: 2 additions & 2 deletions

@@ -1,10 +1,10 @@
 package watchers
 
 import (
+	"errors"
 	"reflect"
 	"strconv"
 	"time"
-	"errors"
 
 	"github.com/cloudnativelabs/kube-router/utils"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
@@ -81,7 +81,7 @@ func StartNetworkPolicyWatcher(clientset *kubernetes.Clientset, resyncPeriod tim
 	v1NetworkPolicy := true
 	v, err := clientset.Discovery().ServerVersion()
 	if err != nil {
-		return nil, errors.New("Failed to get API server version due to " + err.Error())
+		return nil, errors.New("Failed to get API server version due to " + err.Error())
 	}
 
 	minorVer, _ := strconv.Atoi(v.Minor)

hack/sync-image-cache.sh

Lines changed: 2 additions & 2 deletions

@@ -36,9 +36,9 @@ else
     echo "INFO: Location: ${HACK_DOCKER_CACHE_FILE}"
   else
     echo "INFO: Fetching ${HYPERKUBE_IMG_URL} Docker image."
-    "${docker}" pull "${HYPERKUBE_IMG_URL}"
+    eval "${docker}" pull "${HYPERKUBE_IMG_URL}"
 
     echo "INFO: Saving ${HYPERKUBE_IMG_URL} Docker image to cache directory."
-    "${docker}" save "${HYPERKUBE_IMG_URL}" -o "${HACK_DOCKER_CACHE_FILE}"
+    eval "${docker}" save "${HYPERKUBE_IMG_URL}" -o "${HACK_DOCKER_CACHE_FILE}"
  fi
fi
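
The switch to `eval` here (and in the helpers added below) presumably lets the `docker` variable hold more than a single word. A minimal illustration of the difference, assuming `docker` is set to something like `sudo docker`:
```
docker="sudo docker"

"${docker}" pull alpine       # fails: the shell looks for a command literally named "sudo docker"
eval "${docker}" pull alpine  # works: expands and runs `sudo docker pull alpine`
```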

hack/vagrant-common.sh

Lines changed: 19 additions & 0 deletions

@@ -50,3 +50,22 @@ HACK_ACI_CACHE_FILE="${HACK_IMG_CACHE_DIR}/hyperkube-${HYPERKUBE_IMG_TAG}.aci"
 export HACK_ACI_CACHE_FILE
 HACK_DOCKER_CACHE_FILE="${HACK_IMG_CACHE_DIR}/hyperkube-${HYPERKUBE_IMG_TAG}.docker"
 export HACK_DOCKER_CACHE_FILE
+
+# Export the kube-router container image
+export_latest_image() {
+  mkdir -p "${HACK_IMG_CACHE_DIR}"
+  eval "${docker}" tag ${DEV_IMG} "${KR_IMAGE_TAG}"
+  eval "${docker}" save "${KR_IMAGE_TAG}" -o "${HACK_IMG_CACHE_DIR}/kube-router.docker"
+}
+
+# Load the exported kube-router container image inside a VM
+# Usage: update_image_in_vm VM_NAME
+update_image_in_vm() {
+  if [ -z "${1}" ]; then
+    echo "ERROR: VM name required."
+    echo "Usage: update_image_in_vm VM_NAME"
+    return 1
+  fi
+
+  vagrant ssh "${1}" -c "docker load -i /var/tmp/images/kube-router.docker"
+}

hack/vagrant-image-update.sh

Lines changed: 35 additions & 0 deletions

@@ -0,0 +1,35 @@
+#!/usr/bin/env sh
+# vim: noai:ts=2:sw=2:set expandtab
+set -e
+
+HACK_DIR="$(CDPATH='' cd -- "$(dirname -- "$0")" && pwd -P)"
+export HACK_DIR
+
+# shellcheck source=vagrant-common.sh
+. "${HACK_DIR}/vagrant-common.sh"
+
+if [ ! -d "${BK_SHORTCUT_DIR}" ]; then
+  echo "INFO: bootkube hack shortcut is not initialized."
+  echo "INFO: \"vagrant up\" has not been run yet."
+  exit 0
+fi
+
+echo "INFO: Exporting your kube-router container image."
+export_latest_image
+
+cd "${BK_SHORTCUT_DIR}"
+
+if [ "$(basename "$(readlink "${PWD}")")" = "single-node" ]; then
+  NODES="default"
+else # multi-node
+  NODES="c1 w1"
+fi
+
+for i in ${NODES}; do
+  echo "INFO: Importing your kube-router container image in VM \"${i}\""
+  update_image_in_vm "${i}"
+done
+
+echo "INFO: Restarting all kube-router pods"
+kubectl --kubeconfig="${BK_SHORTCUT_DIR}/cluster/auth/kubeconfig" \
+  --namespace=kube-system delete pod -l k8s-app=kube-router
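
After the script deletes the pods, their controller recreates them from the freshly loaded image. A quick way to watch that happen is sketched below; it reuses the shortcut kubeconfig path documented above:
```
KUBECONFIG=/tmp/kr-vagrant-shortcut/cluster/auth/kubeconfig \
  kubectl --namespace=kube-system get pods -l k8s-app=kube-router -o wide -w
```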

hack/vagrant-up.sh

Lines changed: 1 addition & 5 deletions

@@ -39,13 +39,9 @@ else
   git clone --depth=1 --branch "${BK_VERSION}" "${BK_CLONE_URL}" "${BK_CLONE_DIR}"
 fi
 
-# Export the kube-router container image
 echo "INFO: Exporting your kube-router container image."
-mkdir -p "${HACK_IMG_CACHE_DIR}"
-eval "${docker} tag ${DEV_IMG} ${KR_IMAGE_TAG}"
-eval "${docker} save ${KR_IMAGE_TAG} -o ${HACK_IMG_CACHE_DIR}/kube-router.docker"
+export_latest_image
 
-# Copy cached images to Bootkube local-images directory
 echo "INFO: Caching hyperkube images to Bootkube local-images directory."
 "${HACK_DIR}/sync-image-cache.sh"
