Background
Currently, when a new client connects to an existing ipykernel (e.g. when refreshing a web browser tab), there is no way for the client to query the kernel's status. The client has to make assumptions about the kernel's state, and a wrong assumption can lead to commands that hang.
In addition, when the IOPub channel is overloaded (for example, when many display messages hit the ZeroMQ high-water-mark limit), the status update messages the kernel sends can be dropped. In that case, existing clients are also unable to learn the kernel's status.
Proposal
- Add an optional field named execution_state to kernel_info_reply for kernel status. When a new client connects, it can poll the kernel's status with kernel_info_request and inspect execution_state in the reply to learn the current kernel status. The field is optional for backward compatibility.
- Since there is currently no source of truth in the kernel class that tracks execution status, I propose adding a new class variable execution_state (type: str) that tracks the execution status of the main shell thread; see the sketch after this list. Since kernel_info_request can be sent and processed on the Control channel, reflecting the Control channel's status is less useful. Reflecting the status of subshells would be useful, but we could start with a single string value representing the main shell's status to validate the idea, and later expand to a map of (channel, status) to stay forward compatible with subshell status.
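A minimal sketch of the idea, not ipykernel's actual implementation: the class name SimpleKernel and the methods kernel_info / handle_execute are hypothetical stand-ins used only to show where the proposed execution_state attribute would be read and written.

```python
"""Illustrative sketch only: track a per-kernel execution_state string and
surface it in kernel_info_reply. Names here are hypothetical, not ipykernel's
real API."""


class SimpleKernel:
    # Proposed source of truth for the main shell thread's status.
    execution_state: str = "starting"

    def kernel_info(self) -> dict:
        # kernel_info_reply content, with the proposed optional field added.
        return {
            "status": "ok",
            "protocol_version": "5.4",
            "implementation": "simple",
            "implementation_version": "0.1",
            # New optional field: a freshly connected client can learn the
            # current state without waiting for an IOPub status message.
            "execution_state": self.execution_state,
        }

    def handle_execute(self, code: str) -> None:
        # Mirror the busy/idle transitions that are normally only broadcast
        # on IOPub, so kernel_info_request can report them on demand.
        self.execution_state = "busy"
        try:
            exec(code, {})
        finally:
            self.execution_state = "idle"


if __name__ == "__main__":
    k = SimpleKernel()
    print(k.kernel_info()["execution_state"])  # "starting"
    k.handle_execute("x = 1 + 1")
    print(k.kernel_info()["execution_state"])  # "idle"
```

On the client side, a reconnecting frontend could poll the field roughly like this (a hedged sketch using jupyter_client; the connection file path is a placeholder, and .get() handles older kernels that do not send the field):

```python
from jupyter_client import BlockingKernelClient

# Placeholder connection file; in practice this comes from the running kernel.
client = BlockingKernelClient(connection_file="kernel-12345.json")
client.load_connection_file()
client.start_channels()

client.kernel_info()                              # send kernel_info_request
reply = client.get_shell_msg(timeout=5)           # read kernel_info_reply
state = reply["content"].get("execution_state")   # None on older kernels
print("kernel execution_state:", state)
```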
Major Alternatives
- We could stipulate that dispatch_shell MUST send a status update on the IOPub channel before it returns. However, this requires parsing out the message headers, and in the case of a deserialization failure (which could indicate message tampering) this could compromise ipykernel's security posture. It also does not solve the problem of an overloaded IOPub channel.
- We could implement a kernel-tending process that tracks the kernel's status. The downside is that this raises questions such as how to handle multiple threads of execution and whether an extra process adds kernel overhead.
Shepherd: @jasongrout