The High-Level Design document for Alpine can be found here.
- Clone the SONiC repo:
git clone https://github.com/sonic-net/sonic-buildimage.git
- Initialize
export NOJESSIE=1 NOSTRETCH=1 NOBUSTER=1 NOBULLSEYE=1 NOBOOKWORM=0 NOTRIXIE=0
cd sonic-buildimage
make init
- Enable the build for modules of interest
These modules are optional and not required for the base Alpine image.
echo "INCLUDE_SYSTEM_GNMI = y" >> rules/config.user
echo "ENABLE_TRANSLIB_WRITE = y" >> rules/config.user
echo "INCLUDE_P4RT = y" >> rules/config.user
Pull the latest version of sonic-pins:
cd src/sonic-p4rt
git submodule update --remote sonic-pins
cd ..
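Before configuring, you can sanity-check that the optional flags above actually landed in rules/config.user. A minimal sketch; it is demonstrated on a temporary file so it runs anywhere, but in a real tree set cfg=rules/config.user instead:

```shell
# Confirm the three optional flags are set to 'y' in the config file.
# (Temp file stands in for rules/config.user so the sketch is self-contained.)
cfg=$(mktemp)
printf '%s\n' 'INCLUDE_SYSTEM_GNMI = y' 'ENABLE_TRANSLIB_WRITE = y' 'INCLUDE_P4RT = y' > "$cfg"
for flag in INCLUDE_SYSTEM_GNMI ENABLE_TRANSLIB_WRITE INCLUDE_P4RT; do
  grep -q "^$flag *= *y" "$cfg" && echo "$flag enabled"
done
rm -f "$cfg"
```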
- Configure
PLATFORM=alpinevs make configure
- Build
SONIC_BUILD_JOBS specifies the number of build tasks that run in parallel. The ideal value depends on the available CPU and memory, but 8 or 16 is reasonable for most systems.
SONIC_BUILD_JOBS=16 make target/sonic-alpinevs.img.gz
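If you'd rather not hard-code the job count, a sketch that derives it from the host's core count, capped at 16 per the guidance above (assumes nproc, present on any Linux build host; the echo keeps the sketch side-effect free, drop it to actually run the build):

```shell
# Derive a parallel-build job count from the host's cores, capped at 16.
JOBS=$(nproc)
if [ "$JOBS" -gt 16 ]; then JOBS=16; fi
echo "SONIC_BUILD_JOBS=$JOBS make target/sonic-alpinevs.img.gz"
```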
- Build the alpinevs container
platform/alpinevs/src/build/build_alpinevs_container.sh
Prerequisite: a KVM-enabled workstation (or VM) that can itself run VMs.
- Set up the KNE cluster
kne deploy deploy/kne/kind-bridge.yaml
- Load alpinevs container image in KNE
kind load docker-image alpine-vs:latest --name kne
- Download Lemming, then build and load the Lucius dataplane:
gh repo clone openconfig/lemming
cd lemming
bazel clean --expunge
bazel build --output_groups=+tarball //dataplane/standalone/lucius:image-tar
docker load -i bazel-bin/dataplane/standalone/lucius/image-tar/tarball.tar
kind load docker-image us-west1-docker.pkg.dev/openconfig-lemming/release/lucius:ga --name kne
- Create the two-switch Alpine topology:
- Open the twodut-alpine-vs.pb.txt file and ensure that it points to the correct Alpine and Lucius images. You can find the image names in the output of 'docker images -a'. For example:
docker images -a | grep lucius
us-west1-docker.pkg.dev/openconfig-lemming/release/lucius:ga 01b58c448eaf 217MB 0B
docker images -a | grep alpine
alpine-vs:latest ebd8a4a5b357 5.04GB 0B
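To get the exact image:tag strings to paste into the topology file, one option is docker's --format flag (on a live host: docker images --format '{{.Repository}}:{{.Tag}}'). The awk sketch below does the same join on the standard REPOSITORY and TAG columns, demonstrated on a sample line so it runs anywhere:

```shell
# Join the REPOSITORY and TAG columns of 'docker images' into image:tag.
# Sample line stands in for live output; pipe 'docker images' in instead.
sample='us-west1-docker.pkg.dev/openconfig-lemming/release/lucius   ga   01b58c448eaf   217MB'
printf '%s\n' "$sample" | awk '{print $1 ":" $2}'
```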
- Create the KNE topology
kne create twodut-alpine-vs.pb.txt
Confirm that the alpine-ctl and alpine-dut pods are in the Running state.
kubectl get pods -A | grep alpine
twodut-alpine alpine-ctl 2/2 Running 0 16h
twodut-alpine alpine-dut 2/2 Running 0 16h
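Instead of polling by hand, kubectl can block until the pods report Ready. A sketch, assuming the twodut-alpine namespace and pod names above; the guard makes it a no-op where no cluster is reachable:

```shell
# Wait up to 5 minutes for both Alpine pods to become Ready.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl wait --for=condition=Ready pod/alpine-dut pod/alpine-ctl \
    -n twodut-alpine --timeout=300s
else
  echo "no reachable cluster; run this on the KNE host"
fi
```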
- Terminals
- [Terminal1] SSH to the AlpineVS DUT Switch VM inside the deployment:
ssh-keygen -f /tmp/id_rsa -N ""
#Set IPDUT var to the EXTERNAL-IP of "kubectl get svc -n twodut-alpine service-alpine-dut"
export IPDUT=$(kubectl get svc service-alpine-dut -n twodut-alpine -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
ssh-copy-id -i /tmp/id_rsa.pub -oProxyCommand=none admin@$IPDUT
ssh -i /tmp/id_rsa -oProxyCommand=none admin@$IPDUT
- [Terminal2] SSH to the AlpineVS Control Switch VM inside the deployment:
ssh-keygen -f /tmp/id_rsa -N ""
#Set IPCTL var to the EXTERNAL-IP of "kubectl get svc -n twodut-alpine service-alpine-ctl"
export IPCTL=$(kubectl get svc service-alpine-ctl -n twodut-alpine -o jsonpath='{.status.loadBalancer.ingress[*].ip}')
ssh-copy-id -i /tmp/id_rsa.pub -oProxyCommand=none admin@$IPCTL
ssh -i /tmp/id_rsa -oProxyCommand=none admin@$IPCTL
Alternatively, you can read the external IP addresses directly from the kubectl output and use them for ssh:
kubectl get services -A | grep alpine
twodut-alpine service-alpine-ctl LoadBalancer 10.96.215.178 192.168.8.51 22/TCP,9339/TCP,9559/TCP 16h
twodut-alpine service-alpine-dut LoadBalancer 10.96.195.3 192.168.8.50 22/TCP,9339/TCP,9559/TCP 16h
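The EXTERNAL-IP can also be pulled out of that listing with awk (with -A, it is column 5). A sketch demonstrated on a sample line mirroring the output above; on a live cluster, pipe 'kubectl get services -A' into the awk instead:

```shell
# With 'kubectl get services -A', columns are:
# NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -> EXTERNAL-IP is $5.
sample='twodut-alpine   service-alpine-dut   LoadBalancer   10.96.195.3   192.168.8.50   22/TCP,9339/TCP,9559/TCP   16h'
IPDUT=$(printf '%s\n' "$sample" | awk '/service-alpine-dut/ {print $5}')
echo "$IPDUT"
```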
The password is set in your sonic-buildimage/rules/config. You may want to change it to something simpler.
- Useful commands
- Log in to the host
kubectl exec -it -n twodut-alpine alpine-dut -- bash
- Dataplane logs
kubectl logs -n twodut-alpine alpine-dut -c dataplane