
Lab 01 Instructions

Overview

This lab uses snappi to control the free Ixia-c Community Edition (OTG test tool) to send traffic between 2 hosts in the same AWS VPC. The lab has 2 parts. In the first part (traffic only) we deploy the Ixia-c traffic engines and the controller using docker run; in the second part (protocols and traffic) we add the Ixia-c protocol engine component and simplify the deployment by using docker compose.

The test scripts have already been created; the goal of the lab is to show how to send traffic between these 2 servers with various parameters.

Prerequisites

Let's open terminals to VM1 and VM2.

  • SSH command example below for Windows using PowerShell or CMD.

ssh -i C:\\Users\\USER\\Downloads\\ENA8FWiOpusuWSA3PIMPcocw2_aws_rsa ubuntu@VM_EXTERNAL_IP

  • SSH command example below for a Linux / macOS terminal. On macOS you may have to set the permissions of the private key first using chmod 600 /path/to/keyfile

ssh -i /home/USER/Downloads/ENA8FWiOpusuWSA3PIMPcocw2_aws_rsa ubuntu@VM_EXTERNAL_IP

  • Or optionally, you can set this up in Visual Studio Code, which makes it much easier to modify files and start new terminals. You will need to install the “Remote - SSH” and “Remote Explorer” extensions, then add the VM hosts by editing the SSH config to add each host with its IdentityFile parameter, as seen below. Then back in VS Code, refresh the list of hosts, connect, follow the prompts to continue, and open the /home/ubuntu folder.

remote

new remote

alt text

  • docker must be installed and ready. Check the docker version and service status on both VM1 and VM2. Press q (or CTRL+C) to exit the status pager.
docker version && sudo systemctl status docker

alt text

  • Install snappi on VM1 (controller host).
python3 -m pip install --upgrade snappi --break-system-packages
  • Clone the git repository associated with this workshop on VM1 and VM2.
git clone https://github.com/open-traffic-generator/ac4-workshop.git

Execution

Part 1

  • Let's pull the docker images needed for this lab. Go to both VM1 and VM2 terminals and run the commands below.
docker pull ghcr.io/open-traffic-generator/keng-controller:1.40.0-15
docker pull ghcr.io/open-traffic-generator/ixia-c-traffic-engine:1.8.0.245

We'll deploy 2 containers on VM1 (the KENG controller and an Ixia-c traffic engine) and one container on VM2 (an Ixia-c traffic engine) using docker run.

alt text

On VM1 deploy the controller. Here we're using the network mode "host", but Ixia-c containers could also be deployed on a custom bridge network; see the Ixia-c deployment examples.

docker run -d --name=keng-controller --network host ghcr.io/open-traffic-generator/keng-controller:1.40.0-15 \
--accept-eula \
--http-port 8443
  • On VM1 deploy the Ixia-c traffic engine. Note the NIC name ens6 used in ARG_IFACE_LIST.
docker run --privileged -d                    \
   --name=lab-01-traffic-engine               \
   --network host                             \
   -e OPT_LISTEN_PORT="5551"                  \
   -e ARG_IFACE_LIST="virtual@af_packet,ens6" \
   -e OPT_NO_HUGEPAGES="Yes"                  \
   -e OPT_NO_PINNING="Yes"                    \
   -e WAIT_FOR_IFACE="Yes"                    \
   -e OPT_ADAPTIVE_CPU_USAGE="Yes"            \
   ghcr.io/open-traffic-generator/ixia-c-traffic-engine:1.8.0.245             
  • On VM2 deploy the Ixia-C traffic engine.
docker run --privileged -d                    \
   --name=lab-01-traffic-engine               \
   --network host                             \
   -e OPT_LISTEN_PORT="5551"                  \
   -e ARG_IFACE_LIST="virtual@af_packet,ens6" \
   -e OPT_NO_HUGEPAGES="Yes"                  \
   -e OPT_NO_PINNING="Yes"                    \
   -e WAIT_FOR_IFACE="Yes"                    \
   -e OPT_ADAPTIVE_CPU_USAGE="Yes"            \
   ghcr.io/open-traffic-generator/ixia-c-traffic-engine:1.8.0.245             
  • On VM1 open lab-01-part1.py and set the controller and port location attributes. The location is the management IP for each Ixia-c port. Since the controller runs on VM1, the location of port1 should be 'localhost', while the location of port2 should point to the management IP of the VM2 host. This address is usually shown in the shell prompt.

alt text
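As a sketch of what those settings typically look like for a traffic-engine-only deployment (the variable names and the example IP below are placeholders, not the script's actual contents):

```python
# Hypothetical sketch of the location settings in lab-01-part1.py.
# The controller was started with --http-port 8443, and both traffic
# engines listen on OPT_LISTEN_PORT=5551.
CONTROLLER = "https://localhost:8443"

VM2_MGMT_IP = "10.0.2.123"  # example only; use VM2's real management IP

# Traffic-engine-only ports are addressed as "<host>:<listen-port>".
port1_location = "localhost:5551"
port2_location = f"{VM2_MGMT_IP}:5551"

print(port1_location, port2_location)
```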

  • Since we're running in an AWS VPC, promiscuous mode is disabled and the traffic endpoints must match the interface configuration. On VM1 run arp to find out the MAC address of your gateway (10.0.2.1), then run ip address to find out the interface MAC and IPv4 addresses.

alt text alt text

  • On the VM2 terminal, run ip address to retrieve the MAC and IPv4 interface information. The gateway address is the same as on VM1.

alt text

  • Back on VM1, let's open the script lab-01-part1.py and make these changes. There are 2 flows, one for each transmitting port.

alt text

  • Run the script and observe the results.
python3 lab-01-part1.py

alt text

  • Change the script to generate 2 Mbps on each flow and rerun it.

alt text

alt text
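In snappi the flow rate can usually be set directly in Mbps (for example, flow.rate.mbps = 2). The equivalent frame rate depends on the configured frame size; a quick back-of-the-envelope helper (plain Python, no snappi required):

```python
def pps_for_rate(rate_mbps: float, frame_size_bytes: int) -> float:
    """Frames per second needed to reach rate_mbps at a given L2 frame size.

    Counts only the frame bytes themselves (no preamble/IFG overhead),
    which is how a simple L2 rate is usually interpreted.
    """
    bits_per_frame = frame_size_bytes * 8
    return rate_mbps * 1_000_000 / bits_per_frame

# 2 Mbps with 1000-byte frames -> 250 frames/s
print(pps_for_rate(2, 1000))
```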

  • Stop and remove the containers on both VMs for part 1 of this lab.
docker stop $(docker ps -aq) && docker rm $(docker ps -aq)

Part 2

This time we're adding the Ixia-c protocol engine component, which ensures that ARP is resolved and the destination MAC address is populated automatically. As you can see in the diagram below, we need to deploy 3 containers on each VM, but we will only use the VM1 controller (the KENG controller container on VM2 is optional).

alt text

The configuration will include devices, which will be used as endpoints for traffic.

  • Open the compose.yml file and notice the 3 containers. We're binding the traffic engine to ens6, and the Ixia-c protocol engine shares the traffic engine's network namespace.

alt text

  • On both VMs run docker compose to deploy the containers.
cd ~/ac4-workshop/lab-01/
docker compose up -d
  • Check the containers and the traffic engine log from one of them.
docker ps
docker logs lab-01-traffic_engine-1
  • Check for the Interface ens6 found entry in the logs on both VMs to confirm the traffic engine is ready.

alt text

  • Let's open the script lab-01-part2.py and make the changes to match the interface information: management IP, test IP and MAC address. You can use the arp and ip address commands to retrieve these. Because we're now using the protocol engine, this must be specified in the port "location" attribute. Unlike the traffic engine, whose port can be changed, the protocol engine always listens on port 50071. Also, don't forget to set the controller location.

alt text
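A sketch of the convention used in the Ixia-c deployment examples (treat the exact format as an assumption if your script differs): when a protocol engine is present, the port location combines the traffic-engine and protocol-engine endpoints.

```python
def port_location(mgmt_ip: str, te_port: int = 5551, pe_port: int = 50071) -> str:
    """Build a combined traffic-engine + protocol-engine location string.

    Format assumption: "<ip>:<te-port>+<ip>:<pe-port>". The protocol
    engine always listens on 50071; the traffic engine port matches
    OPT_LISTEN_PORT (5551 in this lab).
    """
    return f"{mgmt_ip}:{te_port}+{mgmt_ip}:{pe_port}"

print(port_location("localhost"))
```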

Notice that the flow configuration now has no destination MAC address set. That is because the flow's tx_rx parameter is set to device, which populates the destination MAC address upon completion of the ARP request.

alt text
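To illustrate the difference, here is a hedged sketch of the two tx_rx shapes as OTG-style JSON fragments; the port and device names are placeholders, not the lab's actual names:

```python
# Hypothetical OTG flow tx_rx fragments; "p1"/"p2" and the device
# Ethernet names are placeholders for illustration only.
port_mode = {
    "choice": "port",
    "port": {"tx_name": "p1", "rx_names": ["p2"]},
}

# In device mode the endpoints are emulated interfaces, so the
# controller resolves ARP and fills in the destination MAC itself.
device_mode = {
    "choice": "device",
    "device": {"tx_names": ["d1.eth"], "rx_names": ["d2.eth"]},
}

print(port_mode["choice"], device_mode["choice"])
```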

  • On VM1 run the test and observe stats.

alt text

  • Enable capture in the script by uncommenting the capture-specific steps, set pktCount to 200, and run the test again.

alt text

  • Run ls at the end to see 2 files containing the captured packets. With KENG we can only capture incoming packets. These files can be opened in Wireshark or tshark for further analysis.

alt text
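If you just want to confirm a file is a valid capture without opening Wireshark, you can check the libpcap magic number in its global header (this assumes classic .pcap output, not pcapng):

```python
import struct

# Classic libpcap magic number, in both byte orders.
PCAP_MAGICS = {0xA1B2C3D4, 0xD4C3B2A1}

def looks_like_pcap(header: bytes) -> bool:
    """Return True if the first 4 bytes carry a classic pcap magic number."""
    if len(header) < 4:
        return False
    (magic_le,) = struct.unpack("<I", header[:4])
    (magic_be,) = struct.unpack(">I", header[:4])
    return magic_le in PCAP_MAGICS or magic_be in PCAP_MAGICS

# Self-contained demo: build a minimal 24-byte little-endian pcap
# global header (magic, version 2.4, thiszone, sigfigs, snaplen, linktype).
demo = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(looks_like_pcap(demo))   # True
```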

  • Let's run some manual curl commands to retrieve the port stats, flow stats and ARP table.
curl -k -X POST https://127.0.0.1:8443/monitor/metrics -d '{"choice":"port"}'
curl -k -X POST https://127.0.0.1:8443/monitor/metrics -d '{"choice":"flow"}'
curl -k -X POST https://127.0.0.1:8443/monitor/states -d '{"choice":"ipv4_neighbors"}'
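The responses are JSON, so they are easy to post-process. A small sketch that flags lossy flows from a flow-metrics payload (the sample values below are made up; the field names follow the OTG metrics model):

```python
import json

# Made-up sample shaped like an OTG /monitor/metrics flow response.
sample = json.loads("""
{
  "choice": "flow_metrics",
  "flow_metrics": [
    {"name": "flow-1", "frames_tx": 1000, "frames_rx": 1000},
    {"name": "flow-2", "frames_tx": 1000, "frames_rx": 998}
  ]
}
""")

def lossy_flows(metrics: dict) -> list:
    """Names of flows that received fewer frames than were sent."""
    return [m["name"] for m in metrics.get("flow_metrics", [])
            if m["frames_rx"] < m["frames_tx"]]

print(lossy_flows(sample))   # ['flow-2']
```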
  • As you can see in the port metrics, we're receiving more frames on P1 than P2 is sending; KENG filters out the extra (non-flow) frames correctly, as seen in the flow metrics.

alt text

  • See the entire controller configuration in JSON format.
curl -k https://127.0.0.1:8443/config
  • Stop and remove the containers on both VMs.
cd ~/ac4-workshop/lab-01/ && docker compose down