This doc explains how to set up a development environment so you can get started contributing to Knative Serving. Start by working through the following:
- Create a GitHub account
- Set up GitHub access via SSH
- Install requirements
- Set up a Kubernetes cluster
- Set up a Docker repository you can push to
- Set up your shell environment
- Create and check out a repo fork
Once you meet these requirements, you can start Knative Serving!
Before submitting a PR, see also CONTRIBUTING.md.
You must install these tools:
- `go`: The language Knative Serving is built in
- `git`: For source control
- `dep`: For managing external Go dependencies.
- `ko`: For development.
- `kubectl`: For managing development environments.
To start your environment you'll need to set these environment
variables (we recommend adding them to your `.bashrc`):
- `GOPATH`: If you don't have one, simply pick a directory and add `export GOPATH=...`
- `$GOPATH/bin` on `PATH`: This is so that tooling installed via `go get` will work properly.
- `KO_DOCKER_REPO` and `DOCKER_REPO_OVERRIDE`: The Docker repository to which developer images should be pushed (e.g. `gcr.io/[gcloud-project]`).
- `K8S_CLUSTER_OVERRIDE`: The Kubernetes cluster on which development environments should be managed.
- `K8S_USER_OVERRIDE`: The Kubernetes user that you use to manage your cluster. This depends on your cluster setup; please take a look at the cluster setup instructions.
`.bashrc` example:

```shell
export GOPATH="$HOME/go"
export PATH="${PATH}:${GOPATH}/bin"
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'
export DOCKER_REPO_OVERRIDE="${KO_DOCKER_REPO}"
export K8S_CLUSTER_OVERRIDE='my-k8s-cluster-name'
export K8S_USER_OVERRIDE='my-k8s-user'
```

Make sure to configure authentication for your `KO_DOCKER_REPO` if required. To be able to push images to `gcr.io/<project>`, you need to run this once:

```shell
gcloud auth configure-docker
```

For `K8S_CLUSTER_OVERRIDE`, we expect that this name matches a cluster with authentication configured with `kubectl`. You can list the clusters you currently have configured via:

```shell
kubectl config get-contexts
```

For the cluster you want to target, the value in the `CLUSTER` column should be put in this variable.
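For example, given output shaped like the following, the current context's `CLUSTER` value can be pulled out with `awk` (a sketch; the context and cluster names here are hypothetical):

```shell
# Hypothetical `kubectl config get-contexts` output, captured as a string:
contexts='CURRENT   NAME      CLUSTER               AUTHINFO
*         my-ctx    my-k8s-cluster-name   my-k8s-user'

# The starred row is the current context; its third field is the CLUSTER column.
cluster=$(printf '%s\n' "$contexts" | awk '$1 == "*" { print $3 }')
echo "$cluster"   # prints: my-k8s-cluster-name
```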
The Go tools require that you clone the repository to the src/github.com/knative/serving directory
in your GOPATH.
To check out this repository:
- Create your own fork of this repo
- Clone it to your machine:
```shell
mkdir -p ${GOPATH}/src/github.com/knative
cd ${GOPATH}/src/github.com/knative
git clone git@github.com:${YOUR_GITHUB_USERNAME}/serving.git
cd serving
git remote add upstream git@github.com:knative/serving.git
git remote set-url --push upstream no_push
```

Adding the `upstream` remote sets you up nicely for regularly syncing your fork.
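To sync, you would typically `git fetch upstream` and rebase onto its default branch. The sketch below demonstrates that flow end-to-end using two local scratch repositories in place of the real GitHub remotes (all paths and names here are placeholders, not part of the Knative setup):

```shell
# Placeholder identity so the scratch commits work anywhere.
export GIT_AUTHOR_NAME=dev GIT_AUTHOR_EMAIL=dev@example.com
export GIT_COMMITTER_NAME=dev GIT_COMMITTER_EMAIL=dev@example.com

# Stand-ins for github.com/knative/serving and your fork:
work=$(mktemp -d)
git init -q "$work/upstream_repo"
git -C "$work/upstream_repo" commit -q --allow-empty -m 'upstream commit'
git clone -q "$work/upstream_repo" "$work/fork"
cd "$work/fork"
git remote add upstream "$work/upstream_repo"

# The actual sync, as you would run it inside your clone of serving:
branch=$(git -C "$work/upstream_repo" symbolic-ref --short HEAD)
git fetch -q upstream
git rebase -q "upstream/$branch"
```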
Once you reach this point, you are ready to do a full build and deploy as described below.
Once you've set up your development environment, stand up
Knative Serving:
- Set up cluster admin
- Deploy Istio
- Deploy Build
- Deploy Knative Serving
- Enable log and metric collection
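Before running the steps below, you might sanity-check that the environment variables from the setup section are set; a minimal sketch, assuming the variable names from this doc:

```shell
# Check that each required variable (names from this doc) is non-empty.
missing=""
for v in GOPATH KO_DOCKER_REPO DOCKER_REPO_OVERRIDE K8S_CLUSTER_OVERRIDE K8S_USER_OVERRIDE; do
  eval "val=\${$v:-}"
  [ -n "$val" ] || missing="$missing $v"
done
if [ -z "$missing" ]; then
  echo "environment looks good"
else
  echo "unset variables:$missing"
fi
```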
Your $K8S_USER_OVERRIDE must be a cluster admin to perform
the setup needed for Knative:
```shell
kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user="${K8S_USER_OVERRIDE}"
```

Deploy Istio:

```shell
kubectl apply -f ./third_party/istio-1.0.2/istio.yaml
```

Follow the instructions if you need to set up a static IP for Ingresses in the cluster.
Deploy Build:

```shell
kubectl apply -f ./third_party/config/build/release.yaml
```

Next, deploy Knative Serving itself. This step includes building Knative Serving, creating and pushing developer images, and deploying them to your Kubernetes cluster.
First, edit config-network.yaml as instructed within the file. If this file is edited and deployed after Knative Serving installation, the changes in it will be effective only for newly created revisions.
Next, run:

```shell
ko apply -f config/
```

You can see things running with:
```shell
kubectl -n knative-serving get pods
NAME                          READY     STATUS    RESTARTS   AGE
controller-77897cc687-vp27q   1/1       Running   0          16s
webhook-5cb5cfc667-k7mcg      1/1       Running   0          16s
```

You can access the Knative Serving Controller's logs with:

```shell
kubectl -n knative-serving logs $(kubectl -n knative-serving get pods -l app=controller -o name)
```

If you're using a GCP project to host your Kubernetes cluster, it's good to check the Discovery & load balancing page to ensure that all services are up and running (and not blocked by a quota issue, for example).
Run:
```shell
kubectl apply -R -f config/monitoring/100-common \
    -f config/monitoring/150-elasticsearch \
    -f third_party/config/monitoring/common \
    -f third_party/config/monitoring/elasticsearch \
    -f config/monitoring/200-common \
    -f config/monitoring/200-common/100-istio.yaml
```

As you make changes to the code-base, there are two special cases to be aware of:
- If you change an input to generated code, then you must run `./hack/update-codegen.sh`. Inputs include:
  - API type definitions in `pkg/apis/serving/v1alpha1/`,
  - Type definitions annotated with `// +k8s:deepcopy-gen=true`.
- If you change a package's deps (including adding an external dep), then you must run `./hack/update-deps.sh`.
These are both idempotent, and we expect running them at HEAD to produce no diffs.
Code generation is automatically checked to produce no diffs for each pull request.
Dependencies are not yet automatically checked (see issue 1711).
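The automated check effectively requires a clean working tree after regeneration. A local sketch of the same check, demonstrated here in a scratch repository (the `true` placeholder stands in for the real `./hack` scripts, which you would run from the repo root):

```shell
# Create a scratch git repo to demonstrate the "no diffs" check.
scratch=$(mktemp -d)
cd "$scratch"
git init -q .
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m 'initial commit'

# Placeholder for: ./hack/update-codegen.sh && ./hack/update-deps.sh
true

# A clean tree means regeneration produced no diffs.
if [ -z "$(git status --porcelain)" ]; then
  echo "no diffs"
else
  echo "tree is dirty; commit the regenerated files"
fi
```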
Once the codegen and dependency information is correct, redeploying the controller is simply:
```shell
ko apply -f config/controller.yaml
```

Or you can clean it up completely and redeploy Knative Serving.
You can delete all of the service components with:
```shell
ko delete --ignore-not-found=true \
  -f config/monitoring/100-common \
  -f config/ \
  -f ./third_party/config/build/release.yaml \
  -f ./third_party/istio-1.0.2/istio.yaml
```