
Commit 464960c

Merge pull request #80 from chainbound/docs/ipc
feat(docs): document IPC transport
2 parents 0d4cc89 + bac3338

1 file changed (+58, -6)


book/src/usage/transport-layers.md

Lines changed: 58 additions & 6 deletions
@@ -6,9 +6,9 @@ layers are supported:
 
 - [TCP](#tcp)
 - [QUIC](#quic)
+- [IPC](#ipc)
 
 <!--
-- [IPC](#ipc)
 - [Inproc](#inproc)
 - [UDP](#udp)
 - [TLS](#tls)
@@ -59,14 +59,22 @@ async fn main() {
 QUIC is a new transport layer protocol that is built on top of UDP. It is designed to provide the same
 reliability & security guarantees as TCP + TLS, while solving some of the issues that it has, like
 
-- **Head-of-line blocking**: If a packet is lost, all subsequent packets are held up until the lost packet is retransmitted. This can be a problem especially when multiplexing multiple streams over a single connection because it can cause a single slow stream to block all other streams.
-- **Slow connection setup**: TCP + TLS requires 2-3 round trips to establish a connection, which can be slow on high latency networks.
-- **No support for multiplexing**: TCP does not support multiplexing multiple streams over a single connection. This means that if you want to send multiple streams of data over a single connection, you have to implement your own multiplexing layer on top of TCP, which can run into issues like head-of-line blocking that we've seen above.
+- **Head-of-line blocking**: If a packet is lost, all subsequent packets are held up until the lost packet
+  is retransmitted. This can be a problem especially when multiplexing multiple streams over a single
+  connection because it can cause a single slow stream to block all other streams.
+- **Slow connection setup**: TCP + TLS requires 2-3 round trips to establish a connection, which can be
+  slow on high latency networks.
+- **No support for multiplexing**: TCP does not support multiplexing multiple streams over a single connection.
+  This means that if you want to send multiple streams of data over a single connection, you have to
+  implement your own multiplexing layer on top of TCP, which can run into issues like head-of-line
+  blocking that we've seen above.
 
 ### QUIC in MSG
 
-The MSG QUIC implementation is based on [quinn](https://github.com/quinn-rs/quinn). It relies on self-signed certificates and does not verify
-server certificates. Also, due to how our `Transport` abstraction works, we don't support QUIC connections with multiple streams. This means that the `Quic` transport implementation will do all its work over a single, bi-directional stream for now.
+The MSG QUIC implementation is based on [quinn](https://github.com/quinn-rs/quinn). It relies on self-signed
+certificates and does not verify server certificates. Also, due to how our `Transport` abstraction works, we
+don't support QUIC connections with multiple streams. This means that the `Quic` transport implementation will
+do all its work over a single, bi-directional stream for now.
 
 ### How to use QUIC
 
@@ -93,4 +101,48 @@ async fn main() {
 }
 ```
 
+## IPC
+
+More precisely, MSG-RS supports [Unix Domain Sockets (UDS)][uds] for IPC.
+
+### Why choose IPC?
+
+IPC is a transport layer that allows for communication between processes on the same machine.
+The main difference between IPC and other transport layers is that IPC sockets use the filesystem
+as the address namespace.
+
+IPC is useful when you want to avoid the overhead of network sockets and want to have a low-latency
+communication link between processes on the same machine, all while being able to use the same API
+as the other transport layers that MSG-RS supports.
+
+Due to its simplicity, IPC is typically faster than TCP and QUIC, but the exact performance improvements
+also depend on the throughput of the underlying UDS implementation. We only recommend using IPC when you
+know that avoiding the overhead of a network socket yields a meaningful benefit for your workload.
+### How to use IPC
+
+In MSG, here is how you can set up any socket type with the IPC transport:
+
+```rust
+use msg::{RepSocket, ReqSocket, Ipc};
+
+#[tokio::main]
+async fn main() {
+    // Initialize the reply socket (server side) with default IPC
+    let mut rep = RepSocket::new(Ipc::default());
+    // Bind the socket to the address. This will start listening for incoming connections.
+    // You can use any path that is valid for a Unix Domain Socket.
+    rep.bind("/tmp/msg.sock").await.unwrap();
+
+    // Initialize the request socket (client side) with default IPC
+    let mut req = ReqSocket::new(Ipc::default());
+    // Connect the socket to the address. This will initiate a connection to the server.
+    req.connect("/tmp/msg.sock").await.unwrap();
+
+    // ...
+}
+```
+
+[uds]: https://en.wikipedia.org/wiki/Unix_domain_socket
+
 {{#include ../links.md}}
