Commit c1d0685: address hopping
1 parent 54422b6 commit c1d0685

21 files changed: +1277 additions, -569 deletions

README.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -103,9 +103,10 @@ encryption_algorithm=AES-GCM
 | Name | Possible Values | Required | Remarks |
 | ---- | ---- | ---- | ---- |
 | mode | client<br>server || client side<br>server side |
+| listen_on | domain name or IP address || domain name / IP address only; separate multiple addresses with commas |
 | listen_port | 1 - 65535 || a port range can be specified when running as a server |
 | destination_port | 1 - 65535 || a port range can be specified when running as a client |
-| destination_address | IP address or domain name || IPv6 addresses do not need square brackets |
+| destination_address | IP address or domain name || IPv6 addresses do not need square brackets; separate multiple addresses with commas |
 | dport_refresh | 20 - 65535 || unit: seconds. Default 60; values below 20 are treated as 20, values above 65535 as 65536 |
 | encryption_algorithm | AES-GCM<br>AES-OCB<br>chacha20<br>xchacha20 || AES-256-GCM-AEAD<br>AES-256-OCB-AEAD<br>ChaCha20-Poly1305<br>XChaCha20-Poly1305 |
 | encryption_password | any characters | as needed | required when encryption_algorithm is set |
```

README_EN.md

Lines changed: 2 additions & 1 deletion

```diff
@@ -101,9 +101,10 @@ encryption_algorithm=AES-GCM
 | Name | Possible Values | Required | Remarks |
 | ---- | ---- | ---- | ---- |
 | mode | client<br>server | Yes | Choose between client and server mode |
+| listen_on | domain name or IP address | No | Domain name / IP address only. Multiple addresses should be comma-separated. |
 | listen_port | 1 - 65535 | Yes | Specify the port range when running as a server |
 | destination_port | 1 - 65535 | Yes | Specify the port range when running as a client |
-| destination_address | IP address, domain name | Yes | When inputting an IPv6 address, no need for square brackets |
+| destination_address | IP address, domain name | Yes | When inputting an IPv6 address, no need for square brackets. Multiple addresses should be comma-separated. |
 | dport_refresh | 20 - 65535 | No | Unit: seconds. Default value is 60 seconds. If less than 20 seconds, it will be considered as 20 seconds; if greater than 65535, it will be considered as 65536 seconds |
 | encryption_algorithm | AES-GCM<br>AES-OCB<br>chacha20<br>xchacha20 | No | Select from AES-256-GCM-AEAD, AES-256-OCB-AEAD, ChaCha20-Poly1305, XChaCha20-Poly1305 |
 | encryption_password | Any characters | Depending on situation | Required when encryption_algorithm is set |
```
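Taken together, the two new table rows describe comma-separated multi-address support. A client configuration combining the new options might look like the following sketch (every address, port, and the password are illustrative placeholders, not values taken from the commit):

```ini
mode=client
listen_on=192.168.1.1,172.16.20.1
listen_port=50000
destination_address=127.0.0.1,::1,10.200.30.1
destination_port=13000-13050,14000-14050,15000
encryption_algorithm=AES-GCM
encryption_password=qwerty1234
```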

docs/client_server_en.md

Lines changed: 39 additions & 1 deletion

````diff
@@ -90,7 +90,7 @@ stun_server=stun.qq.com
 log_path=./
 ```
 
-When using STUN for NAT Hole punching, the server cannot listen on multiple ports and can only use single-port mode. This is because the port number obtained after NAT Hole punching using STUN is not fixed. Even if the server's own port range is continuous, it cannot be guaranteed that the port number range obtained during NAT Hole punching is also continuous. Therefore, in this mode, UDPHop is limited to using only single-port mode.
+When using STUN for NAT hole punching, the server cannot listen on multiple ports and can only use single-port mode; listening on multiple addresses is not supported either. This is because the port number obtained after NAT hole punching with STUN is not fixed. Even if the server's own port range is continuous, there is no guarantee that the port range obtained during hole punching is also continuous. Therefore, in this mode, UDPHop is limited to single-port mode.
 
 ## Specify the listening NIC
 
@@ -100,6 +100,44 @@ Both the client and the server can specify the NIC to listen to, and only need t
 listen_on=192.168.1.1
 ```
 
+or multiple addresses
+
+```
+listen_on=192.168.1.1,172.16.20.1
+```
+
+## Multiple Destination Addresses
+
+Both client and relay modes can specify multiple destination addresses, which must point to the same server.
+
+```
+destination_address=127.0.0.1,::1,10.200.30.1
+```
+
+**Note**: When using multiple addresses, it is recommended that the client's `destination_address` matches the server's `listen_on`.
+
+If the server's `listen_on` is not specified, ensure that each address in the client's `destination_address` is in a different network segment.
+
+For example, if the client specifies `destination_address=192.168.0.1,FDCA:1234::1`, the server's `listen_on` can be left blank, since `192.168.0.1` and `FDCA:1234::1` are guaranteed to be in different network segments.
+
+However, if the client specifies `destination_address=192.168.0.1,192.168.0.2,FDCA:1234::1,FDCA:1234::2`, it is better to explicitly specify these addresses in the server's `listen_on` to avoid data packets being sent from unintended addresses.
+
+## Non-continuous Port Range
+
+To use a non-continuous port range, separate the ranges with commas.
+
+### Server
+
+```
+listen_port=13000-13050,14000-14050,15000
+```
+
+### Client
+
+```
+destination_port=13000-13050,14000-14050,15000
+```
+
 ## Multiple Configuration Files
 
 If you want to listen to multiple ports and multiple NICs, you can pass multiple configuration files to kcptube and use them at the same time
````

docs/client_server_zh-hans.md

Lines changed: 38 additions & 1 deletion

````diff
@@ -90,7 +90,7 @@ stun_server=stun.qq.com
 log_path=./
 ```
 
-Note: when using STUN hole punching, the server cannot listen on multiple ports and can only use single-port mode. This is because the port number obtained after STUN hole punching is not fixed; even if the server's own port range is continuous, there is no guarantee that the port range obtained during hole punching is also continuous. Therefore, UDPHop is limited to single-port mode in this case.
+Note: when using STUN hole punching, the server cannot listen on multiple ports and can only use single-port mode; custom listening addresses are not supported. This is because the port number obtained after STUN hole punching is not fixed; even if the server's own port range is continuous, there is no guarantee that the port range obtained during hole punching is also continuous. Therefore, UDPHop is limited to single-port mode in this case.
 
 ## Specify the listening NIC
 
@@ -100,6 +100,43 @@ log_path=./
 listen_on=192.168.1.1
 ```
 
+or multiple addresses
+
+```
+listen_on=192.168.1.1,172.16.20.1
+```
+
+## Multiple Destination Addresses
+
+Both client and relay modes can specify multiple destination addresses; these addresses must point to the same server.
+
+```
+destination_address=127.0.0.1,::1,10.200.30.1
+```
+
+**Note**: when using multiple addresses, it is recommended to keep the client's `destination_address` consistent with the server's `listen_on`.
+
+If the server's `listen_on` is left empty, make sure each address in the client's `destination_address` is in a different network segment.
+
+For example, if the client specifies `destination_address=192.168.0.1,FDCA:1234::1`, the server's `listen_on` can be left empty, because `192.168.0.1` and `FDCA:1234::1` cannot be in the same network segment.
+
+If the client specifies `destination_address=192.168.0.1,192.168.0.2,FDCA:1234::1,FDCA:1234::2`, it is better to specify these addresses in the server's `listen_on`, so that packets are not sent from an unexpected address.
+
+## Non-continuous Port Range
+
+To use a non-continuous port range, separate the ranges with commas.
+
+### Server
+
+```
+listen_port=13000-13050,14000-14050,15000
+```
+
+### Client
+```
+destination_port=13000-13050,14000-14050,15000
+```
+
 ## Multiple Configuration Files
 
 If you want to listen on multiple ports and multiple NICs, split the settings into multiple configuration files and use them at the same time
````

src/3rd_party/thread_pool.hpp

Lines changed: 69 additions & 43 deletions

```diff
@@ -44,13 +44,6 @@ namespace ttp
 			return thread_number;
 		}
 
-		[[nodiscard]]
-		static size_t assign_thread_odd(size_t input_value, concurrency_t thread_count) noexcept
-		{
-			static calculate_func calc[2] = { calculate_odd, always_zero };
-			return (calc[thread_count == 1])(input_value, thread_count);
-		}
-
 		[[nodiscard]]
 		static size_t calculate_even(size_t input_value, concurrency_t thread_count) noexcept
 		{
@@ -59,13 +52,6 @@ namespace ttp
 			return thread_number;
 		}
 
-		[[nodiscard]]
-		static size_t assign_thread_even(size_t input_value, concurrency_t thread_count) noexcept
-		{
-			static calculate_func calc[2] = { calculate_even, always_zero };
-			return (calc[thread_count == 1])(input_value, thread_count);
-		}
-
 		[[nodiscard]]
 		static size_t calculate_assign(size_t input_value, concurrency_t thread_count) noexcept
 		{
@@ -74,13 +60,6 @@ namespace ttp
 			return thread_number;
 		}
 
-		[[nodiscard]]
-		static size_t assign_thread(size_t input_value, concurrency_t thread_count) noexcept
-		{
-			static calculate_func calc[2] = { calculate_assign, always_zero };
-			return (calc[thread_count == 1])(input_value, thread_count);
-		}
-
 		/**
 		 * @brief A fast, lightweight, and easy-to-use C++17 thread pool class. This is a lighter version of the main thread pool class.
 		 */
@@ -401,13 +380,25 @@ namespace ttp
 		task_group_pool(const concurrency_t thread_count_ = 0) :
 			thread_count(determine_thread_count(thread_count_)),
 			threads(std::make_unique<std::thread[]>(thread_count)),
-			local_network_tasks_total_of_threads(std::make_unique<std::atomic<size_t>[]>(thread_count)),
-			peer_network_tasks_total_of_threads(std::make_unique<std::atomic<size_t>[]>(thread_count))
+			listener_network_tasks_total_of_threads(std::make_unique<std::atomic<size_t>[]>(thread_count)),
+			forwarder_network_tasks_total_of_threads(std::make_unique<std::atomic<size_t>[]>(thread_count))
 		{
 			task_queue_of_threads = std::make_unique<task_queue[]>(thread_count);
 			tasks_total_of_threads = std::make_unique<std::atomic<size_t>[]>(thread_count);
 			tasks_mutex_of_threads = std::make_unique<std::mutex[]>(thread_count);
 			task_available_cv = std::make_unique<std::condition_variable[]>(thread_count);
+			if (thread_count == 1)
+			{
+				assign_thread_odd = always_zero;
+				assign_thread_even = always_zero;
+				assign_thread = always_zero;
+			}
+			else
+			{
+				assign_thread_odd = calculate_odd;
+				assign_thread_even = calculate_even;
+				assign_thread = calculate_assign;
+			}
 
 			create_threads();
 		}
@@ -452,35 +443,35 @@ namespace ttp
 		}
 
 		[[nodiscard]]
-		size_t get_local_network_task_count_all() const
+		size_t get_listener_network_task_count_all() const
 		{
 			size_t total = 0;
 			for (size_t i = 0; i < thread_count; ++i)
-				total += local_network_tasks_total_of_threads[i].load();
+				total += listener_network_tasks_total_of_threads[i].load();
 			return total;
 		}
 
 		[[nodiscard]]
-		size_t get_peer_network_task_count_all() const
+		size_t get_forwarder_network_task_count_all() const
 		{
 			size_t total = 0;
 			for (size_t i = 0; i < thread_count; ++i)
-				total += peer_network_tasks_total_of_threads[i].load();
+				total += forwarder_network_tasks_total_of_threads[i].load();
 			return total;
 		}
 
 		[[nodiscard]]
-		size_t get_local_network_task_count(size_t number) const
+		size_t get_listener_network_task_count(size_t number) const
 		{
 			size_t thread_number = assign_thread_odd(number, thread_count);
-			return local_network_tasks_total_of_threads[thread_number].load();
+			return listener_network_tasks_total_of_threads[thread_number].load();
 		}
 
 		[[nodiscard]]
-		size_t get_peer_network_task_count(size_t number) const
+		size_t get_forwarder_network_task_count(size_t number) const
 		{
 			size_t thread_number = assign_thread_even(number, thread_count);
-			return peer_network_tasks_total_of_threads[thread_number].load();
+			return forwarder_network_tasks_total_of_threads[thread_number].load();
 		}
 
 		bool thread_id_exists(std::thread::id tid)
@@ -495,12 +486,28 @@ namespace ttp
 		 */
 		void push_task(size_t number, task_void_callback void_task_function)
 		{
-			std::unique_ptr<uint8_t[]> data = nullptr;
 			size_t thread_number = assign_thread(number, thread_count);
 			{
 				std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
 				auto task_function = [void_task_function](std::unique_ptr<uint8_t[]> data) { void_task_function(); };
-				task_queue_of_threads[thread_number].push_back({ task_function, std::move(data) });
+				task_queue_of_threads[thread_number].push_back({ task_function, std::unique_ptr<uint8_t[]>{} });
+				++tasks_total_of_threads[thread_number];
+			}
+			task_available_cv[thread_number].notify_one();
+		}
+
+		void push_task(std::thread::id tid, task_void_callback void_task_function)
+		{
+			size_t thread_number = 0;
+			if (auto iter = thread_ids.find(tid); iter == thread_ids.end())
+				thread_number = assign_thread(std::hash<std::thread::id>{}(tid), thread_count);
+			else
+				thread_number = iter->second;
+
+			{
+				std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
+				auto task_function = [void_task_function](std::unique_ptr<uint8_t[]> data) { void_task_function(); };
+				task_queue_of_threads[thread_number].push_back({ task_function, std::unique_ptr<uint8_t[]>{} });
 				++tasks_total_of_threads[thread_number];
 			}
 			task_available_cv[thread_number].notify_one();
@@ -523,36 +530,52 @@ namespace ttp
 			task_available_cv[thread_number].notify_one();
 		}
 
-		void push_task_local(size_t number, task_callback task_function, std::unique_ptr<uint8_t[]> data)
+		void push_task(std::thread::id tid, task_callback task_function, std::unique_ptr<uint8_t[]> data)
+		{
+			size_t thread_number = 0;
+			if (auto iter = thread_ids.find(tid); iter == thread_ids.end())
+				thread_number = assign_thread(std::hash<std::thread::id>{}(tid), thread_count);
+			else
+				thread_number = iter->second;
+
+			{
+				std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
+				task_queue_of_threads[thread_number].push_back({ task_function, std::move(data) });
+				++tasks_total_of_threads[thread_number];
+			}
+			task_available_cv[thread_number].notify_one();
+		}
+
+		void push_task_listener(size_t number, task_callback task_function, std::unique_ptr<uint8_t[]> data)
 		{
 			size_t thread_number = assign_thread_odd(number, thread_count);
 			{
 				std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
 				auto task_func = [task_function, this, thread_number](std::unique_ptr<uint8_t[]> data)
 				{
 					task_function(std::move(data));
-					local_network_tasks_total_of_threads[thread_number]--;
+					listener_network_tasks_total_of_threads[thread_number]--;
 				};
 				task_queue_of_threads[thread_number].push_back({ task_func, std::move(data) });
 				tasks_total_of_threads[thread_number]++;
-				local_network_tasks_total_of_threads[thread_number]++;
+				listener_network_tasks_total_of_threads[thread_number]++;
 			}
 			task_available_cv[thread_number].notify_one();
 		}
 
-		void push_task_peer(size_t number, task_callback task_function, std::unique_ptr<uint8_t[]> data)
+		void push_task_forwarder(size_t number, task_callback task_function, std::unique_ptr<uint8_t[]> data)
 		{
 			size_t thread_number = assign_thread_even(number, thread_count);
 			{
 				std::scoped_lock tasks_lock(tasks_mutex_of_threads[thread_number]);
 				auto task_func = [task_function, this, thread_number](std::unique_ptr<uint8_t[]> data)
 				{
 					task_function(std::move(data));
-					peer_network_tasks_total_of_threads[thread_number]--;
+					forwarder_network_tasks_total_of_threads[thread_number]--;
 				};
 				task_queue_of_threads[thread_number].push_back({ task_func, std::move(data) });
 				tasks_total_of_threads[thread_number]++;
-				peer_network_tasks_total_of_threads[thread_number]++;
+				forwarder_network_tasks_total_of_threads[thread_number]++;
 			}
 			task_available_cv[thread_number].notify_one();
 		}
@@ -682,7 +705,7 @@ namespace ttp
 			for (concurrency_t i = 0; i < thread_count; ++i)
 			{
 				threads[i] = std::thread(&task_group_pool::worker, this, i);
-				thread_ids.insert(threads[i].get_id());
+				thread_ids[threads[i].get_id()] = i;
 			}
 		}
 
@@ -797,9 +820,12 @@ namespace ttp
 		 */
 		std::atomic<bool> waiting = false;
 
-		std::unique_ptr<std::atomic<size_t>[]> local_network_tasks_total_of_threads;
-		std::unique_ptr<std::atomic<size_t>[]> peer_network_tasks_total_of_threads;
-		std::set<std::thread::id> thread_ids;
+		std::unique_ptr<std::atomic<size_t>[]> listener_network_tasks_total_of_threads;
+		std::unique_ptr<std::atomic<size_t>[]> forwarder_network_tasks_total_of_threads;
+		std::map<std::thread::id, size_t> thread_ids;
+		calculate_func assign_thread_odd;
+		calculate_func assign_thread_even;
+		calculate_func assign_thread;
 	};
 
```
