IPC
Interprocess communication (IPC) provides a way for a thread of execution to cross Protection Domain (PD) boundaries.
The Vesper microkernel provides a message-passing IPC mechanism for communication between threads. The same mechanism is also used for communication with kernel-provided services. Messages are sent by invoking a capability to a kernel object. Messages sent to an Endpoint are destined for other threads, while messages sent to other objects are processed by the kernel. This chapter describes the common message format, endpoints, and how they can be used for communication between applications.
The IPC mechanism also allows an area of memory to be shared between the kernel and one or more user applications, to avoid costly copy-in and copy-out of message data.
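A rough Rust sketch of what such a shared per-thread buffer could look like is shown below. The name `IpcBuffer`, the field names, and the sizes are all illustrative assumptions, not Vesper's actual layout.

```rust
/// Hypothetical per-thread IPC buffer shared between the kernel and userspace.
/// Because the kernel can read and write it in place, message payloads do not
/// need to be copied into and out of kernel-private memory.
#[repr(C)]
pub struct IpcBuffer {
    /// Message tag: a label chosen by the sender plus the payload length.
    pub tag: u64,
    /// Untyped message registers (the payload words).
    pub msg_regs: [u64; 64],
    /// Slots naming capabilities to transfer along with the message.
    pub cap_slots: [usize; 4],
}
```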
Basic IPC is done via the following syscalls, each performed on a capability:
- Send: send to a capability
- Non-blocking Send: send to a capability without blocking
- Call: call a capability (a send followed by a receive on the same object)
- Recv: receive from an endpoint
- Non-blocking Recv: receive, but do not block if there is nothing to receive
- Reply: send to a one-off reply capability
- Reply-then-Recv: send a reply, atomically followed by a receive
Additional kernel operations are implemented as invocations of specific exposed capabilities; a sketch of how these operations could look from userspace follows below.
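The following Rust sketch declares wrappers corresponding to the operations above. The names (`CapPtr`, `MsgInfo`, the function signatures) are hypothetical and chosen for illustration; Vesper's real system-call interface may look different, and the bodies are stubs standing in for the actual kernel traps.

```rust
/// Hypothetical capability address within the caller's capability space.
#[derive(Clone, Copy)]
pub struct CapPtr(pub usize);

/// Hypothetical message descriptor: a label plus the number of payload
/// words placed in the sender's IPC buffer.
#[derive(Clone, Copy)]
pub struct MsgInfo {
    pub label: u64,
    pub length: usize,
}

// Each wrapper would reduce to loading arguments into registers and executing
// the architecture's trap instruction; the bodies are elided stubs here.
pub fn send(_dest: CapPtr, _info: MsgInfo) {}                            // Send: blocking send
pub fn nb_send(_dest: CapPtr, _info: MsgInfo) -> bool { false }          // Non-blocking Send: false if no receiver was ready
pub fn call(_dest: CapPtr, info: MsgInfo) -> MsgInfo { info }            // Call: send, then block for the reply
pub fn recv(_src: CapPtr) -> MsgInfo { MsgInfo { label: 0, length: 0 } } // Recv: block until a sender arrives
pub fn nb_recv(_src: CapPtr) -> Option<MsgInfo> { None }                 // Non-blocking Recv
pub fn reply(_info: MsgInfo) {}                                          // Reply: consume the one-off reply capability
pub fn reply_recv(src: CapPtr, _reply: MsgInfo) -> MsgInfo { recv(src) } // Reply-then-Recv: atomic reply + receive
```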
Logically, the kernel provides three system calls, Send, Receive and Yield. However, there are also combinations and variants of the basic Send and Receive calls, e.g. the Call operation, which consists of a send followed by a Receive from the same object. Methods on kernel objects other than endpoints and notifications are all mapped to Send or Call, depending on whether or not the method returns a result. The Yield system call is not associated with any kernel object and is the only operation that does not invoke a capability.
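For example, a method on a kernel object that returns a result would be mapped onto Call roughly as follows. This continues the hypothetical `CapPtr`/`MsgInfo`/`call()` wrappers from the sketch above; the `TCB_SET_PRIORITY` selector and the wrapper name are invented for illustration and are not Vesper's actual object methods.

```rust
// Continues the hypothetical CapPtr / MsgInfo / call() wrappers sketched above.

/// Hypothetical method selector understood by a TCB object.
const TCB_SET_PRIORITY: u64 = 7;

/// A kernel-object method that returns a result maps onto Call: marshal the
/// selector and arguments into the message, invoke the object's capability,
/// and decode the result from the reply.
fn tcb_set_priority(tcb: CapPtr, _priority: u64) -> Result<(), u64> {
    // (Writing the priority into the IPC buffer's message registers is elided.)
    let reply = call(tcb, MsgInfo { label: TCB_SET_PRIORITY, length: 1 });
    if reply.label == 0 { Ok(()) } else { Err(reply.label) }
}
```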
Endpoints allow a small amount of data and capabilities (namely the IPC buffer) to be transferred between two threads. Endpoint objects are invoked directly using the system calls described above.
IPC Endpoints use a rendezvous model and as such are synchronous and blocking. An Endpoint object may queue threads either to send or to receive. If no receiver is ready, threads performing the Send() or Call() system calls will wait in a queue for the first available receiver. Likewise, if no sender is ready, threads performing the Recv() system call or the second half of ReplyRecv() will wait for the first available sender.
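Under this rendezvous model, a simple server loop could look like the sketch below, again continuing the hypothetical wrappers above; `handle()` is a made-up request handler, not part of any real API.

```rust
// Continues the hypothetical CapPtr / MsgInfo / recv() / reply_recv() wrappers.

/// The server blocks in Recv until a client rendezvouses with the endpoint,
/// then answers each request with Reply-then-Recv: the reply is delivered and
/// the server atomically goes back to waiting for the next sender.
fn serve(endpoint: CapPtr) -> ! {
    let mut request = recv(endpoint); // blocks until the first sender arrives
    loop {
        let response = handle(request);
        request = reply_recv(endpoint, response); // reply, then wait for the next sender
    }
}

/// Placeholder request handler: echo the request label back to the caller.
fn handle(request: MsgInfo) -> MsgInfo {
    MsgInfo { label: request.label, length: 0 }
}
```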
Messages may contain capabilities, which will be transferred to the receiver, provided that the endpoint capability invoked by the sending thread has Grant rights. An attempt to send capabilities using an endpoint capability without the Grant right will result in transfer of the raw message, without any capability transfer.
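From the sender's side, a capability transfer could look roughly like the sketch below, reusing the hypothetical `IpcBuffer` and wrapper types from earlier. The slot layout is an assumption; the behaviour of dropping the capability when Grant is missing follows the rule just described.

```rust
// Continues the hypothetical IpcBuffer / CapPtr / MsgInfo / send() sketches.

/// Send a message that also asks the kernel to transfer one capability.
/// If the invoked endpoint capability lacks the Grant right, the kernel
/// delivers only the raw payload words and omits the capability transfer.
fn send_with_cap(endpoint: CapPtr, ipc_buf: &mut IpcBuffer, cap_to_grant: CapPtr) {
    ipc_buf.msg_regs[0] = 42;              // one payload word
    ipc_buf.cap_slots[0] = cap_to_grant.0; // capability the sender wants to transfer
    send(endpoint, MsgInfo { label: 1, length: 1 });
}
```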
IPC makes use of portals: small pieces of code used as trampolines into the kernel to perform the thread switch. These portals can be regenerated by the optimizing loader (OMOS) without giving up flexibility or protection, gaining additional speed by specializing generic operations such as the thread_self() call, which can be performed entirely in userspace (a sketch of this specialization follows below).
- One-way
- Idempotent
- Reliable
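As one illustration of what such specialization can buy: if the kernel publishes the current thread's identity in a user-readable register, a regenerated thread_self() portal needs no kernel entry at all. The sketch below assumes AArch64 and assumes the thread ID lives in TPIDRRO_EL0; both are assumptions made for this example, not a statement of how Vesper or OMOS actually work.

```rust
/// Sketch of a portal specialized away entirely: read the current thread's
/// identifier straight from a user-readable system register instead of
/// trapping into the kernel. Assumes the kernel stores the thread ID in
/// TPIDRRO_EL0 (an assumption made for this example).
#[cfg(target_arch = "aarch64")]
fn thread_self() -> u64 {
    let tid: u64;
    // MRS from tpidrro_el0 is permitted at EL0, so no kernel entry happens here.
    unsafe { core::arch::asm!("mrs {tid}, tpidrro_el0", tid = out(reg) tid) };
    tid
}
```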
[lrpc.pdf]
Multiple processors are used to reduce LRPC latency by caching domain contexts on idle processors. As we show in Section 4, the context switch that occurs during an LRPC is responsible for a large part of the transfer time. This time is due partly to the code required to update the hardware’s virtual memory registers and partly to the extra memory fetches that occur as a result of invalidating the translation lookaside buffer (TLB).
LRPC reduces context-switch overhead by caching domains on idle processors. When a call is made, the kernel checks for a processor idling in the context of the server domain. If one is found, the kernel exchanges the processors of the calling and idling threads, placing the calling thread on a processor where the context of the server domain is already loaded; the called server procedure can then execute on that processor without requiring a context switch. The idling thread continues to idle, but on the client’s original processor in the context of the client domain. On return from the server, a check is also made. If a processor is idling in the client domain (likely for calls that return quickly), then the processor exchange can be done again.
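The processor-exchange heuristic described above amounts to a check at call time (and symmetrically at return time). The sketch below is a loose pseudo-Rust rendering of that decision, with invented helper names standing in for scheduler internals; it is not the LRPC kernel's actual code.

```rust
// Invented placeholder types and scheduler stubs, for illustration only.
struct Processor;
struct Domain;
struct Thread;

fn find_idle_processor_in(_domain: &Domain) -> Option<Processor> { None }
fn exchange_processors(_caller: &mut Thread, _idle: Processor) {}
fn context_switch_to(_domain: &Domain) {}

/// On an LRPC call, prefer migrating the calling thread onto a processor that
/// is already idling in the server's domain, so no address-space switch (and
/// no TLB invalidation) is needed; otherwise fall back to a normal context
/// switch. The same check is repeated on return to the client.
fn lrpc_dispatch(caller: &mut Thread, server: &Domain) {
    if let Some(idle_cpu) = find_idle_processor_in(server) {
        // The caller now runs where the server's context is already loaded;
        // the idling thread keeps idling on the caller's original processor,
        // in the client domain's context.
        exchange_processors(caller, idle_cpu);
    } else {
        context_switch_to(server);
    }
}
```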
Utah-Mach [Ford and Lepreau 1994, thread-migrate.pdf] changed Mach IPC semantics to migrating RPC, which is based on thread migration between address spaces, similar to the Clouds model [Bernabeu-Auban et al. 1988]. A substantial performance gain was achieved, a factor of 3 to 4.
