Commit e65673c: Update README.md (1 parent: 245b1c6)

1 file changed: README.md (12 additions, 13 deletions)
@@ -24,10 +24,9 @@ Some highlights:
   * For other DPDK-compatible NICs, a system-wide installation from DPDK
     19.11.5 LTS sources (i.e., `sudo make install T=x86_64-native-linuxapp-gcc
     DESTDIR=/usr`). Other DPDK versions are not supported.
-* NICs: Fast (10 GbE+) bare-metal NICs are needed for good performance. eRPC
-  works best with Mellanox Ethernet and InfiniBand NICs. Any DPDK-capable NICs
-  also work well. Slower/virtual NICs can still be used for testing and
-  development.
+* NICs: Fast (10 GbE+) NICs are needed for good performance. eRPC works best
+  with Mellanox Ethernet and InfiniBand NICs. Any DPDK-capable NICs
+  also work well.
 * System configuration:
   * At least 1024 huge pages on every NUMA node, and unlimited SHM limits
   * On a machine with `n` eRPC processes, eRPC uses kernel UDP ports `{31850,
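
The system-configuration bullets above (1024 huge pages per NUMA node, generous SHM limits) can be satisfied with a few shell commands. The following is a minimal sketch only, assuming two NUMA nodes and the default 2 MB huge page size; the exact SHM values are illustrative "effectively unlimited" numbers, not taken from the README.

```bash
# Sketch: reserve 1024 huge pages per NUMA node (assumes nodes 0-1, 2 MB pages).
for node in /sys/devices/system/node/node[0-1]; do
  echo 1024 | sudo tee "$node/hugepages/hugepages-2048kB/nr_hugepages"
done

# Sketch: raise System V shared-memory limits (illustrative values only).
sudo sysctl -w kernel.shmmax=9223372036854775807
sudo sysctl -w kernel.shmall=1152921504606846720
```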
@@ -55,10 +54,10 @@ Some highlights:
 ## Supported bare-metal NICs:
 * Ethernet/UDP mode:
   * ConnectX-4 or newer Mellanox Ethernet NICs: Use `DTRANSPORT=raw`
-  * DPDK-compatible NICs that support flow-director: Use `DTRANSPORT=dpdk`
+  * DPDK-enabled NICs that support flow-director: Use `DTRANSPORT=dpdk`
     * Intel 82599 and Intel X710 NICs have been tested
-    * Virtual NICs have not been tested
     * `raw` transport is faster for Mellanox NICs, which also support DPDK
+  * DPDK-enabled NICs on Microsoft Azure: Use `-DTRANSPORT=dpdk -DAZURE=on`
   * ConnectX-3 Ethernet NICs are supported in eRPC's RoCE mode
 * RDMA (InfiniBand/RoCE) NICs: Use `DTRANSPORT=infiniband`. Add `DROCE=on`
   if using RoCE.
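
To see the transport options from this hunk in one place, here is a sketch of the corresponding configure commands, assuming eRPC's CMake-based build; only the `TRANSPORT`, `ROCE`, and `AZURE` options named above come from the README, and the rest of the invocation is an assumption.

```bash
# Sketch: selecting a transport at configure time (assumes a CMake-based eRPC build).
cmake . -DTRANSPORT=raw                    # ConnectX-4 or newer Mellanox Ethernet NICs
cmake . -DTRANSPORT=dpdk                   # DPDK-enabled NICs with flow director
cmake . -DTRANSPORT=dpdk -DAZURE=on        # DPDK-enabled NICs on Microsoft Azure
cmake . -DTRANSPORT=infiniband -DROCE=on   # RoCE; drop -DROCE=on for InfiniBand
make -j
```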
@@ -71,19 +70,19 @@ Some highlights:
   supports only one RPC ID per machine on Azure.
 
 * Configure two Ubuntu 18.04 VMs as below. Use the same resource group and
-  availability zone for both VMs
+  availability zone for both VMs.
 
-  * Uncheck "Accelerated Networking" when launching the VM from the Azure
-    portal (e.g., F32s-v2). This VM should have just the control network
-    (i.e., `eth0`) and `lo` interfaces.
+  * Uncheck "Accelerated Networking" when launching each VM from the Azure
+    portal (e.g., F32s-v2). For now, this VM should have just the control
+    network (i.e., `eth0`) and `lo` interfaces.
   * Add a NIC to Azure via the Azure CLI: `az network nic create
     --resource-group <your resource group> --name <a name for the NIC>
     --vnet-name <name of the VMs' virtual network> --subnet default
     --accelerated-networking true --subscription <Azure subscription, if
     any>`
   * Stop the VM launched earlier, and attach the NIC created in the previous
-    step ("Networking" -> "Attach network interface").
-  * Start the VM. It should have a new interface called `eth1`, which eRPC
+    step to the VM (i.e., in "Networking" -> "Attach network interface").
+  * Re-start the VM. It should have a new interface called `eth1`, which eRPC
     will use for DPDK traffic.
 
 * Prepare DPDK 19.11.5:
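
For readers following the Azure steps in this hunk, the NIC-creation command could look like the sketch below. The resource names (`erpc-rg`, `erpc-vnet`, `erpc-dpdk-nic`) are hypothetical placeholders; only the `az network nic create` flags shown in the README are taken from the source.

```bash
# Sketch: create an accelerated-networking NIC for the second interface.
# erpc-rg, erpc-dpdk-nic, and erpc-vnet are placeholder names, not from the README.
az network nic create \
  --resource-group erpc-rg \
  --name erpc-dpdk-nic \
  --vnet-name erpc-vnet \
  --subnet default \
  --accelerated-networking true

# Then stop the VM, attach the NIC ("Networking" -> "Attach network interface"),
# and restart it; the new interface should appear as eth1.
```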
@@ -114,7 +113,7 @@ sudo mount -t hugetlbfs nodev /mnt/huge
 <Public IPv4 address of VM #2> 31850 0
 ```
 
-* Run the eRPC application (a latency benchmark by default):
+* Run the eRPC application (the latency benchmark by default):
   * At VM #1: `./scripts/do.sh 0 0`
   * At VM #2: `./scripts/do.sh 1 0`
 
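To make the final step concrete, a sketch of the run sequence is below; the server-list file name and the IP addresses are placeholders, while the port (31850) and the `do.sh` arguments come from the README.

```bash
# Sketch: a two-entry server list in the format shown above
# (the file name and IP addresses are placeholders).
cat > server_list <<'EOF'
203.0.113.10 31850 0
203.0.113.11 31850 0
EOF

# Launch the latency benchmark, one process per VM:
./scripts/do.sh 0 0   # at VM #1 (process 0)
./scripts/do.sh 1 0   # at VM #2 (process 1)
```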