Commit 04de363
Doc: add raven performance test
1 parent 1698fd9 commit 04de363
7 files changed: +623 −1 lines changed
---
title: Raven Performance Test
---

## Background

Raven is the OpenYurt component that connects edge-edge and edge-cloud networks. It establishes encrypted tunnels over the public network to interconnect the node networks of different edge sites, enabling cross-edge communication for both business and management traffic. As the key component that connects container networks across edges, Raven's performance has a significant impact on an OpenYurt cluster, so we need a deeper understanding of it.
## Test Environment

### Kubernetes Version

`Major:"1", Minor:"20+", GitVersion:"v1.20.11-aliyunedge.1", GitCommit:"1403255", GitTreeState:"", BuildDate:"2021-12-27T08:07:37Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"`

### OpenYurt Version

`GitVersion:"v0.7.0", GitCommit:"d331a42", BuildDate:"2022-08-29T13:33:43Z", GoVersion:"go1.17.12", Compiler:"gc", Platform:"linux/amd64"`

### Raven Version

raven-agent: `GitVersion:"v0.1.0", GitCommit:"a8d13a52", BuildDate:"2022-05-09T10:30:42Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"`

raven-controller-manager: `GitVersion:"v0.1.0", GitCommit:"1f65198", BuildDate:"2022-05-09T10:31:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"`

### Test Setup

The cluster contains one master node in Beijing, two edge nodes in Zhangjiakou, and one edge node in Shanghai. The master node and the edge nodes all have the same specifications.
## Test Method

Use iperf3 and ping to measure container network bandwidth and latency within a single edge node pool and across node pools:

- Same edge (baseline): container network bandwidth and latency
- Cloud-edge (comparison 1): container network bandwidth and latency
- Edge-cloud-edge (comparison 2): container network bandwidth and latency

The overall test architecture is shown below:

![](../../static/img/docs/test-report/raven/arch.png)
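All three scenarios reuse the same iperf3 invocation against different server Pod IPs, with only the target address and the `-R` flag changing. A minimal sketch of how the client command line is assembled (the helper name `build_iperf_cmd` is ours, not part of the report; port 1314 matches the commands used throughout the results):

```shell
# Build the iperf3 client command for one scenario.
# $1: server Pod IP, $2: "forward" or "reverse".
build_iperf_cmd() {
  local server_ip=$1 direction=$2
  local cmd="iperf3 -c ${server_ip} -i 1 -t 100 -p 1314"
  # -R reverses the test: the server sends and the client receives.
  if [ "$direction" = "reverse" ]; then
    cmd="$cmd -R"
  fi
  echo "$cmd"
}

build_iperf_cmd 10.58.4.131 forward
build_iperf_cmd 10.58.4.131 reverse
```

The server side only needs `iperf3 -s -p 1314` running in the target Pod.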
## Test Results

### Pod-to-Pod Within the Same Edge

This test measures traffic between Pods on Node-1 and Node-2, with the Pod on Node-2 as the client and the Pod on Node-1 as the server.

- Bandwidth

```bash
$ iperf3 -c 10.58.4.131 -i 1 -t 100 -p 1314 # Forward: client sends, server receives
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-100.00 sec  42.8 GBytes  3.67 Gbits/sec  12742  sender
[  5]   0.00-100.04 sec  42.8 GBytes  3.67 Gbits/sec         receiver

$ iperf3 -c 10.58.4.131 -i 1 -R -t 100 -p 1314 # Reverse: server sends, client receives
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-100.04 sec  38.9 GBytes  3.34 Gbits/sec  11024  sender
[  5]   0.00-100.00 sec  38.8 GBytes  3.34 Gbits/sec         receiver
```

| Forward        | Reverse        |
|----------------|----------------|
| 3.67 Gbits/sec | 3.34 Gbits/sec |
- Latency

```bash
$ ping -A -c 10000 -q 10.58.4.131
--- 10.58.4.131 ping statistics ---
10000 packets transmitted, 10000 received, 0% packet loss, time 766ms
rtt min/avg/max/mdev = 0.061/0.065/0.355/0.008 ms, ipg/ewma 0.076/0.066 ms

$ ping -c 10 10.58.4.131
--- 10.58.4.131 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9000ms
rtt min/avg/max/mdev = 0.130/0.190/0.347/0.065 ms
```

| Flooding                   | Non Flooding               |
|----------------------------|----------------------------|
| 0.061/0.065/0.355/0.008 ms | 0.130/0.190/0.347/0.065 ms |
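In the latency tables, "Flooding" refers to the `ping -A` runs (adaptive mode, which sends the next probe as soon as a reply arrives, stressing the path), while "Non Flooding" is a plain 10-packet run at the default interval. The rtt values tabulated are taken from ping's summary line; extracting a single field can be scripted, for example with awk (a convenience sketch of ours, not part of the original test):

```shell
# Pull the average RTT out of ping's "rtt min/avg/max/mdev = ..." summary line.
rtt_line="rtt min/avg/max/mdev = 0.130/0.190/0.347/0.065 ms"
# Split on " = " to isolate the values, then on "/" to take the 2nd (avg) field.
avg_rtt=$(echo "$rtt_line" | awk -F' = ' '{print $2}' | awk -F'/' '{print $2}')
echo "$avg_rtt"   # 0.190
```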
### Cloud-Edge Pod-to-Pod

This test measures traffic between Pods on Node-2 and Node-4, with the Pod on Node-2 as the client and the Pod on Node-4 as the server.

- Bandwidth

```bash
$ iperf3 -c 10.58.2.3 -i 1 -t 100 -p 1314 # Forward: client sends, server receives
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-100.00 sec  1.02 GBytes  87.8 Mbits/sec  5269  sender
[  5]   0.00-100.04 sec  1.02 GBytes  87.6 Mbits/sec        receiver

$ iperf3 -c 10.58.2.3 -i 1 -R -t 100 -p 1314 # Reverse: server sends, client receives
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-100.04 sec  13.1 MBytes  1.10 Mbits/sec  1692  sender
[  5]   0.00-100.00 sec  10.1 MBytes  845 Kbits/sec         receiver
```

| Forward        | Reverse        |
|----------------|----------------|
| 87.8 Mbits/sec | 1.10 Mbits/sec |
- Latency

```bash
$ ping -A -c 10000 -q 10.58.2.3
--- 10.58.2.3 ping statistics ---
10000 packets transmitted, 10000 received, 0% packet loss, time 339961ms
rtt min/avg/max/mdev = 33.865/33.919/44.063/0.147 ms, pipe 2, ipg/ewma 33.999/33.909 ms

$ ping -c 10 10.58.2.3
--- 10.58.2.3 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 306ms
rtt min/avg/max/mdev = 33.902/33.938/34.036/0.037 ms, ipg/ewma 34.009/33.960 ms
```

| Flooding                      | Non Flooding                  |
|-------------------------------|-------------------------------|
| 33.865/33.919/44.063/0.147 ms | 33.902/33.938/34.036/0.037 ms |
### Edge-Cloud-Edge Pod-to-Pod

This test measures traffic between Pods on Node-2 and Node-3, with the Pod on Node-2 as the client and the Pod on Node-3 as the server.

- Bandwidth

```bash
$ iperf3 -c 10.58.0.9 -i 1 -t 100 -p 1314 # Forward: client sends, server receives
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-100.00 sec  12.3 MBytes  1.03 Mbits/sec  1345  sender
[  5]   0.00-100.06 sec  9.39 MBytes  787 Kbits/sec         receiver

$ iperf3 -c 10.58.0.9 -i 1 -R -t 100 -p 1314 # Reverse: server sends, client receives
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-100.06 sec  17.5 MBytes  1.47 Mbits/sec  1234  sender
[  5]   0.00-100.00 sec  9.43 MBytes  791 Kbits/sec         receiver
```

| Forward        | Reverse        |
|----------------|----------------|
| 1.03 Mbits/sec | 1.47 Mbits/sec |
- Latency

```bash
$ ping -A -c 10000 -q 10.58.0.9
--- 10.58.0.9 ping statistics ---
10000 packets transmitted, 10000 received, 0% packet loss, time 631991ms
rtt min/avg/max/mdev = 63.242/63.325/80.732/0.189 ms, pipe 2, ipg/ewma 63.205/63.364 ms

$ ping -c 10 10.58.0.9
--- 10.58.0.9 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9010ms
rtt min/avg/max/mdev = 63.373/63.413/63.492/0.033 ms
```

| Flooding                      | Non Flooding                  |
|-------------------------------|-------------------------------|
| 63.242/63.325/80.732/0.189 ms | 63.373/63.413/63.492/0.033 ms |
## Conclusions and Analysis

- Compared with same-edge access, the cloud-edge bandwidth ratio is 42.37 (same-edge bandwidth / cloud-edge bandwidth), and the edge-cloud-edge bandwidth ratio is 3579.02 (same-edge bandwidth / edge-cloud-edge bandwidth).
- Compared with same-edge access, the cloud-edge latency ratio is 0.02 (same-edge latency / cloud-edge latency), and the edge-cloud-edge latency ratio is 0.01 (same-edge latency / edge-cloud-edge latency).
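As a rough sanity check, the bandwidth ratios can be recomputed from the rounded headline figures above (3.67 Gbit/s same-edge, 87.8 Mbit/s cloud-edge, 1.03 Mbit/s edge-cloud-edge). These land near, though not exactly on, the reported 42.37 and 3579.02, which were presumably derived from the raw transfer totals rather than the rounded bitrates:

```shell
# Convert the same-edge figure to Mbit/s (iperf3 uses SI units, so 1 Gbit = 1000 Mbit)
# and divide by each cross-edge figure.
awk 'BEGIN { printf "%.1f\n", (3.67 * 1000) / 87.8 }'   # cloud-edge ratio
awk 'BEGIN { printf "%.1f\n", (3.67 * 1000) / 1.03 }'   # edge-cloud-edge ratio
```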