
Commit 612d578

feat(python): redesign runtimed public API with high-level wrappers (#1030)
* feat(python): redesign runtimed public API with high-level wrappers

  Replace the flat Session/AsyncSession API with a layered design:
  - `Client` wraps `NativeAsyncClient`, returns `Notebook` objects
  - `Notebook` wraps `AsyncSession` with sync reads and async writes
  - `CellHandle` provides sync property access (source, cell_type, outputs) and async mutations (set_source, run, delete)
  - `CellCollection` on `notebook.cells` for sync iteration and async cell creation
  - `NotebookInfo` dataclass for structured room metadata

  Rust changes:
  - Add sync read methods to AsyncSession (blocking_lock for local Automerge reads)
  - Rename PyRuntimeState/PyKernelState/PyEnvState to clean names via #[pyclass(name)]
  - Rename Client/AsyncClient to NativeClient/NativeAsyncClient
  - Rename list_rooms to list_active_notebooks
  - Fix AsyncClient.list_rooms returning strings for int/bool fields
  - Delete DaemonClient and deprecated Session/AsyncSession constructors

  Closes #983

* fix(python): update demo files to use new runtimed API

  Replace DaemonClient/Session references with NativeClient in presence_cursor.py and test-presence.ipynb demos.

* fix(python): address review — add start_kernel, fix Windows path detection

  - Add start_kernel() and shutdown_kernel() to Notebook so non-Python runtimes (e.g. deno) can be launched through the high-level API
  - Fix NotebookInfo.path to use Path.is_absolute() instead of checking for "/" — works on both POSIX and Windows paths

* fix(runtimed): restart_kernel preserves kernel_type and env_source

  Previously restart_kernel hardcoded kernel_type="python" and env_source="auto", so restarting a deno or conda:inline kernel would silently switch it to a prewarmed Python kernel. Now captures the running kernel's type and env_source before shutdown and re-launches with the same configuration.
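The capture-before-shutdown pattern the restart fix describes can be sketched with stub types. `KernelState`, `shutdown`, and `launch` below are hypothetical stand-ins for illustration, not the real runtimed internals:

```python
# Hypothetical sketch of the restart fix: capture the running kernel's
# configuration before shutdown, then relaunch with the same settings.
from dataclasses import dataclass


@dataclass
class KernelState:
    kernel_type: str   # e.g. "python", "deno"
    env_source: str    # e.g. "auto", "conda:inline"


def restart_kernel(state: KernelState, shutdown, launch) -> KernelState:
    # Capture config BEFORE shutdown. The old bug effectively did
    # launch(kernel_type="python", env_source="auto") here, silently
    # swapping a deno kernel for a prewarmed Python one.
    kernel_type, env_source = state.kernel_type, state.env_source
    shutdown()
    return launch(kernel_type=kernel_type, env_source=env_source)


# Restarting a deno kernel keeps it a deno kernel.
state = KernelState(kernel_type="deno", env_source="auto")
new_state = restart_kernel(
    state,
    shutdown=lambda: None,
    launch=lambda kernel_type, env_source: KernelState(kernel_type, env_source),
)
print(new_state.kernel_type)  # deno
```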
* fix(python): address review — close lifecycle, docs, repr strings

  - Make Session.close() and AsyncSession.close() actually disconnect by dropping the doc handle and broadcast receiver
  - Fix README examples to use asyncio.run() wrapper instead of bare await
  - Update all first-party docs to use new API (docs/python-bindings.md, contributing/runtimed.md, contributing/testing.md, SKILL.md)
  - Fix NativeClient/NativeAsyncClient __repr__ to match their class names

* fix(session): hydrate kernel state, type mismatch error, queue_cell auto-start

  Incorporates fixes from #1033:
  - hydrate_kernel_state() reads RuntimeStateDoc at connect time to populate kernel_started/kernel_type/env_source from the daemon's source of truth (fixes stale state after auto-launch)
  - start_kernel() now errors when requesting a different kernel type than what's already running instead of silently succeeding
  - queue_cell() gains ensure_kernel_started guard matching execute_cell

* style(runtimed): fix cargo fmt in queue_cell

* fix(python): add missing CellHandle properties and harden deleted-cell behavior (#1035)

* fix(python): add missing CellHandle properties and harden deleted-cell behavior

  - Add tags, is_source_hidden, is_outputs_hidden sync properties to CellHandle
  - Make outputs/metadata return defaults on deleted cells instead of raising
  - Fix NotebookInfo.join() client parameter typing (Any -> Client)
  - Make test_all_exports an exact set match against __all__ (27 items)

* docs: add supervisor-first guidance for agents managing dev daemons

  CLAUDE.md, contributing/development.md, and contributing/runtimed.md all documented the two-terminal cargo xtask dev-daemon workflow without mentioning the supervisor as the preferred path. Agents with supervisor_* tools available should use those instead of manual terminal commands.
  - Add 'If you have supervisor tools, use them' section to CLAUDE.md
  - Label manual commands as fallback for when supervisor isn't available
  - Add supervisor workflow to runtimed.md fast iteration section
  - Update development.md daemon section with supervisor-first note

* docs: clarify agents should not launch the notebook app

  The app is a GUI process that blocks until the user cmd-q's it. An agent running it in a terminal will misinterpret the exit. Let the human launch it from their own terminal or Zed task.

* refactor(python): use Rust Cell helpers for tags/hidden, narrow exception catches

  - tags, is_source_hidden, is_outputs_hidden delegate to Rust Cell properties instead of manually parsing JSON metadata keys
  - metadata property uses get_cell_metadata_sync (no blob I/O)
  - except Exception narrowed to except RuntimedError so real errors surface
  - Addresses copilot review feedback on #1035

* fix(python): remove Notebook.run() — ambiguous semantics (#1037)

* fix(python): remove Notebook.run() — ambiguous semantics

  Notebook.run(code) created a persistent cell as a side effect, and the name suggests 'run all cells' to anyone coming from Jupyter conventions. The explicit two-step is clear and discoverable:

      cell = await notebook.cells.create(source)
      result = await cell.run()

  Session.run() / session_core::run are kept — they're session-level primitives used heavily in integration tests.
* docs: update examples to use cells.create + cell.run instead of notebook.run

* chore(python): clean up stale references and dead tests (#1039)

* chore(python): clean up stale references and dead tests

  - Update runtimed-py lib.rs doc comment: DaemonClient -> NativeClient/NativeAsyncClient, list rooms -> list active notebooks
  - Remove 4 unconditionally-skipped presence tests that tested a removed AsyncSession() constructor
  - Fix stale DaemonClient reference in integration test comment

* fix(test): clarify pool warmup comment — socket != pools ready

* refactor(python): use runtime terminology in user-facing API (#1040)

* refactor(python): use runtime terminology in user-facing API

  Jupyter kernels are an implementation detail — the wrapper layer should speak in terms of runtimes. Native Session/AsyncSession methods keep their kernel terminology (they ARE kernel operations).

  Notebook:
  - start_kernel(kernel_type=) -> start(runtime=)
  - shutdown_kernel() -> shutdown()
  - restart_kernel() -> restart()

  NotebookInfo:
  - kernel_type -> runtime_type
  - kernel_status -> status
  - has_kernel -> has_runtime

  _from_dict still reads the daemon's kernel_* dict keys — the mapping is internal to NotebookInfo.

* refactor(python): explicit notebook methods, drop is_ prefix on hidden props

  Client:
  - create() -> create_notebook()
  - open() -> open_notebook()
  - join() -> join_notebook()

  CellHandle:
  - is_source_hidden -> source_hidden
  - is_outputs_hidden -> outputs_hidden

  Fix test_from_dict_ephemeral to use daemon's raw has_kernel key and assert the mapped has_runtime field.

* fix: use session runtime type in ensure_kernel_started instead of hardcoding python (#1043)

  ensure_kernel_started() hardcoded "python" as the kernel type when auto-launching kernels. This caused a type mismatch error for deno sessions: the daemon already had a deno kernel running, but the Python binding requested "python".
  Add a `runtime` field to SessionState that tracks the intended kernel type, set at connection time:
  - connect_create: from the runtime parameter
  - connect_open: inferred from the notebook's kernelspec
  - connect_with_socket (join): inferred from the notebook's kernelspec

  ensure_kernel_started now reads state.runtime instead of hardcoding "python".

* refactor(python): split save into save() and save_as(path) (#1042)

  save() saves to the current path (in-place). save_as(path) saves to a new location. Clearer intent than a single method with an optional path arg.

* refactor(mcp): migrate nteract server to wrapper API, expand API surface (#1044)

* refactor(mcp): migrate nteract server to Client/Notebook wrapper API

  Replace NativeAsyncClient/AsyncSession with the high-level wrapper:
  - _client: NativeAsyncClient -> Client
  - _session: AsyncSession -> _notebook: Notebook
  - join/open/create use Client.join_notebook(), etc.
  - save uses notebook.save() / notebook.save_as(path)
  - restart/interrupt use notebook.restart() / notebook.interrupt()
  - list_active_notebooks returns NotebookInfo objects with runtime terminology
  - _get_session() helper preserved as escape hatch for advanced ops (splice_source, presence, deps, streaming, cell metadata)

  This dogfoods the wrapper API in a real consumer and validates the escape hatch pattern works for operations not on the wrapper.
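The save()/save_as(path) split can be illustrated with a toy class. `ToyNotebook` below is hypothetical and writes plain files, whereas the real Notebook persists through the daemon; whether save_as adopts the new path as the current path is an assumption of this sketch:

```python
# Illustrative sketch only: a toy Notebook showing the save()/save_as(path)
# split. Not the real runtimed API -- just the intent made visible.
from pathlib import Path
import tempfile


class ToyNotebook:
    def __init__(self, path: Path, source: str = ""):
        self.path = path
        self.source = source

    def save(self) -> None:
        """Save in place, to the notebook's current path."""
        self.path.write_text(self.source)

    def save_as(self, path: Path) -> None:
        """Save to a new location (assumed here to become the current path)."""
        path.write_text(self.source)
        self.path = path


tmp = Path(tempfile.mkdtemp())
nb = ToyNotebook(tmp / "a.ipynb", source="x = 1")
nb.save()                    # writes a.ipynb in place
nb.save_as(tmp / "b.ipynb")  # writes b.ipynb
print(nb.path.name)  # b.ipynb
```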
* refactor(python): expand wrapper API surface, dogfood in MCP server

  Add to Notebook:
  - add_dependency/remove_dependency/get_dependencies (auto-detect pkg mgr)
  - sync_environment
  - run_all
  - presence (Presence object for cursor/selection/focus)

  Add to CellHandle:
  - set_tags, set_source_hidden, set_outputs_hidden (async mutations)
  - stream (streaming execution returning async iterator)

  New Presence class (notebook.presence):
  - set_cursor, set_selection, focus
  - clear_cursor, clear_selection
  - get_remote_cursors

  Migrate nteract MCP server to use wrapper API throughout:
  - Dependencies go through notebook.add_dependency() etc.
  - Cell tags/hidden go through CellHandle methods
  - Create/delete/move/clear_outputs use CellCollection/CellHandle
  - replace_match/replace_regex use cell.source + cell.splice
  - Streaming execution uses cell.stream()
  - Presence uses notebook.presence
  - Only remaining escape hatch: get_cells/get_cell for full Cell snapshots (needed by formatting helpers that want resolved outputs)

  Exports updated: Presence added to __all__ (26 items), stubs, and tests.

* refactor(mcp): eliminate all session escape hatches

  Migrate formatting helpers from Cell (Rust snapshot) to CellHandle (live wrapper). All formatters now accept CellHandle, which has every property they need (id, source, cell_type, execution_count, outputs). Removed _get_session() — no callers remain.

  The MCP server now uses the wrapper API exclusively:
  - notebook.cells for iteration and lookup
  - CellHandle properties for reads
  - CellHandle methods for mutations
  - notebook.presence for cursor/selection/focus
  - notebook.session only for queue_state and is_connected (status resource)

* fix(python): correct get_remote_cursors return type

* refactor(python): trim public surface to wrapper-reachable types only (#1047)

  __all__ goes from 26 to 17 items — only types reachable through the wrapper API are publicly advertised.
  Removed from __all__ (still importable via runtimed.runtimed):
  - NativeAsyncClient, NativeClient, AsyncSession, Session
  - CompletionItem, CompletionResult, HistoryEntry
  - QueueState, NotebookConnectionInfo

  Removed notebook.session property — no public escape hatch. Internal code uses notebook._session directly.

  Added to Notebook:
  - is_connected() — eliminates last session.is_connected() usage
  - queue_state() — eliminates last session.get_queue_state() usage

  The nteract MCP server now has zero notebook.session references.

* chore: delete stale presence demos — will be rewritten with wrapper API
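The sync-read/async-write layering the commit describes can be sketched in a few self-contained lines. The class bodies below are stand-ins (a plain dict for the Automerge document, `asyncio.sleep(0)` for the daemon round-trip), not the real bindings:

```python
# Minimal sketch of the layered design: sync property reads backed by a
# local document, async mutations that go through a (stubbed) daemon.
import asyncio


class CellHandle:
    def __init__(self, doc: dict, cell_id: str):
        self._doc, self._id = doc, cell_id

    @property
    def source(self) -> str:
        # Sync read from the local doc -- no await needed.
        return self._doc[self._id]["source"]

    async def set_source(self, src: str) -> None:
        # Async mutation; sleep(0) stands in for the daemon round-trip.
        await asyncio.sleep(0)
        self._doc[self._id]["source"] = src


class Notebook:
    def __init__(self):
        self._doc: dict = {}

    async def create_cell(self, source: str) -> CellHandle:
        cell_id = f"cell-{len(self._doc)}"
        await asyncio.sleep(0)
        self._doc[cell_id] = {"source": source, "cell_type": "code"}
        return CellHandle(self._doc, cell_id)


async def main():
    nb = Notebook()
    cell = await nb.create_cell("print('hello')")
    print(cell.source)        # sync read, no await
    await cell.set_source("x = 1")
    print(cell.source)        # reflects the async mutation

asyncio.run(main())
```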
1 parent b4b55a7 commit 612d578

30 files changed: +1744 −2139 lines

.claude/skills/python-bindings/SKILL.md

Lines changed: 41 additions & 48 deletions
````diff
@@ -37,24 +37,29 @@ Running `maturin develop` without `VIRTUAL_ENV` installs the `.so` into whicheve
 ## Basic Usage
 
 ```python
+import asyncio
 import runtimed
 
-session = runtimed.Session()
-session.connect()
-session.start_kernel()
-
-result = session.run("print('hello')")
-print(result.stdout)   # "hello\n"
-print(result.outputs)  # [Output(stream, stdout: "hello\n")]
-
-# Rich output
-result = session.run("from IPython.display import Image, display; display(Image(filename='photo.png'))")
-for output in result.outputs:
-    for mime, value in output.data.items():
-        print(mime, type(value))
-# image/png <class 'bytes'> -- raw binary, NOT base64
-# text/llm+plain <class 'str'> -- synthesized blob URL
-# text/plain <class 'str'>
+async def main():
+    client = runtimed.Client()
+    async with await client.create() as notebook:
+        cell = await notebook.cells.create("print('hello')")
+        result = await cell.run()
+        print(result.stdout)  # "hello\n"
+
+        # Sync reads from local CRDT
+        print(cell.source)     # "print('hello')"
+        print(cell.cell_type)  # "code"
+
+asyncio.run(main())
+```
+
+For the native session API (streaming, presence, metadata), use `NativeAsyncClient`:
+
+```python
+native_client = runtimed.NativeAsyncClient()
+session = await native_client.create_notebook()
+result = await session.run("print('hello')")
 ```
 
 ## Output.data Typing
````
````diff
@@ -72,32 +77,35 @@ for output in result.outputs:
 
 When an output contains a binary image MIME type, the daemon synthesizes a `text/llm+plain` entry combining text/plain, image metadata, and blob URL. Lets LLMs reference images without decoding binary data.
 
-## Per-Cell Accessors
+## High-Level Cell Access
 
-Prefer these O(1) methods over `get_cells()` (which materializes everything):
+The `Notebook.cells` collection provides sync reads and async writes:
 
 ```python
-source = session.get_cell_source(cell_id)   # just the source string
-cell_type = session.get_cell_type(cell_id)  # "code" | "markdown" | "raw"
-cell_ids = session.get_cell_ids()           # position-sorted IDs
+# Sync reads from local CRDT
+cell = notebook.cells.get_by_index(0)
+print(cell.source, cell.cell_type, cell.outputs)
 
-# Full cell with outputs
-cell = session.get_cell(cell_id)
-print(cell.outputs, cell.position)
-
-# Move a cell
-session.move_cell("cell-id", after_cell_id="other-cell-id")
+# Search
+matches = notebook.cells.find("import")
 
 # Runtime state
-state = await async_session.get_runtime_state()  # idle, busy, etc.
+print(notebook.runtime.kernel.status)  # sync read
+```
+
+For the native session API, per-cell accessors are also available:
+
+```python
+source = session.get_cell_source(cell_id)   # just the source string
+cell_type = session.get_cell_type(cell_id)  # "code" | "markdown" | "raw"
+cell_ids = session.get_cell_ids()           # position-sorted IDs
 ```
 
 ## Socket Path Configuration
 
 **System daemon (default):**
 ```python
-session = runtimed.Session()
-session.connect()  # ~/Library/Caches/runt/runtimed.sock
+client = runtimed.Client()  # ~/Library/Caches/runt/runtimed.sock
 ```
 
 **Worktree daemon (development):**
````
````diff
@@ -106,21 +114,6 @@ export RUNTIMED_SOCKET_PATH="$(./target/debug/runt daemon status --json | python
 python your_script.py
 ```
 
-## Cross-Session Output Visibility
-
-The `Cell.outputs` field is from the Automerge document. Agents can see outputs from cells executed by other clients:
-
-```python
-s1 = runtimed.Session(notebook_id="shared")
-s1.connect(); s1.start_kernel()
-s1.run("x = 42")
-
-s2 = runtimed.Session(notebook_id="shared")
-s2.connect()
-cells = s2.get_cells()
-print(cells[0].outputs)  # Shows outputs from s1
-```
-
 ## Running Integration Tests
 
 ```bash
````
````diff
@@ -159,11 +152,11 @@ Three packages are workspace members:
 
 ### Wrong daemon
 
-If `session.run()` returns `Output(stream, stderr: "Failed to parse output: <hash>")`, the bindings are connecting to the wrong daemon. The blob store is per-daemon. Set `RUNTIMED_SOCKET_PATH` to the correct daemon socket.
+If `notebook.run()` returns `Output(stream, stderr: "Failed to parse output: <hash>")`, the bindings are connecting to the wrong daemon. The blob store is per-daemon. Set `RUNTIMED_SOCKET_PATH` to the correct daemon socket.
 
-### Empty outputs from get_cell()
+### Empty outputs from cell.outputs
 
-If `session.run()` shows outputs but `session.get_cell()` returns `outputs=[]`:
+If `cell.run()` shows outputs but `cell.outputs` returns `[]`:
 1. Check socket path — daemon needs blob store access
 2. Timing — outputs may not be written to Automerge yet. Try a small delay or re-fetch.
````

AGENTS.md

Lines changed: 32 additions & 6 deletions
````diff
@@ -6,7 +6,23 @@ This document provides guidance for AI agents working in this repository. Claude
 
 ## Quick Recipes (Common Dev Tasks)
 
-These are copy-paste-ready commands. **All commands that interact with the dev daemon require two env vars.** Without them you'll hit the system daemon and cause problems.
+### If you have `supervisor_*` tools — use them
+
+If your MCP client provides `supervisor_status`, `supervisor_restart`, `supervisor_rebuild`, etc., **prefer those over manual terminal commands**. The supervisor manages the dev daemon lifecycle for you — no env vars, no extra terminals.
+
+| Instead of… | Use… |
+|-------------|------|
+| `cargo xtask dev-daemon` (in a terminal) | `supervisor_restart(target="daemon")` |
+| `maturin develop` (rebuild bindings) | `supervisor_rebuild` |
+| `runt daemon status` (with env vars) | `supervisor_status` |
+| `runt daemon logs` | `supervisor_logs` |
+| `cargo xtask vite` | `supervisor_start_vite` |
+
+The supervisor automatically handles per-worktree isolation, env var plumbing, and daemon restarts. You only need the manual commands below when the supervisor isn't available.
+
+### Manual commands (when supervisor is not available)
+
+All commands that interact with the dev daemon require two env vars. Without them you'll hit the system daemon and cause problems.
 
 ```bash
 # ── Dev daemon env vars (required for ALL dev commands) ────────────
````
````diff
@@ -59,6 +75,14 @@ python/runtimed/.venv/bin/python -m pytest python/runtimed/tests/test_session_un
 
 ### Running the notebook app (dev mode)
 
+**Do not launch the notebook app from an agent terminal.** The app is a GUI process that blocks until the user quits it (⌘Q), and the agent will misinterpret the exit. Let the human launch it from their own terminal or Zed task.
+
+With supervisor tools, the daemon and vite are already managed — the human just runs:
+```bash
+cargo xtask notebook
+```
+
+Without supervisor (human runs both):
 ```bash
 # Terminal 1: Start dev daemon
 cargo xtask dev-daemon
````
````diff
@@ -183,10 +207,10 @@ The supervisor watches `python/nteract/src/`, `python/runtimed/src/`, `crates/ru
 
 ### Tool availability
 
-- **Inkwell active** → all supervisor + nteract tools available
-- **nteract MCP only** → nteract tools only, no `supervisor_*`
+- **Inkwell active** → all supervisor + nteract tools available. **Prefer supervisor tools for daemon lifecycle** — they handle env vars and isolation automatically.
+- **nteract MCP only** → nteract tools only, no `supervisor_*`. Use manual terminal commands for daemon management.
 - **No MCP server** → use `cargo xtask run-mcp` to set one up
-- **Dev daemon not running** → Inkwell starts it automatically
+- **Dev daemon not running** → Inkwell starts it automatically via `supervisor_restart(target="daemon")`
 
 ## Build System (`cargo xtask`)
````
````diff
@@ -206,7 +230,9 @@ Use instead:
 
 ### Per-Worktree Daemon Isolation
 
-Each git worktree runs its own isolated daemon in dev mode.
+Each git worktree runs its own isolated daemon in dev mode. If you have supervisor tools, the daemon is managed for you — use `supervisor_restart(target="daemon")` to start or restart it, and `supervisor_status` to check it.
+
+Without supervisor (manual two-terminal workflow):
 
 ```bash
 # Terminal 1: Start dev daemon
````
````diff
@@ -216,7 +242,7 @@ cargo xtask dev-daemon
 cargo xtask notebook
 ```
 
-Use `./target/debug/runt` to interact with the worktree daemon:
+Use `./target/debug/runt` to interact with the worktree daemon (or `supervisor_status`/`supervisor_logs` if available):
 
 ```bash
 ./target/debug/runt daemon status
````

contributing/development.md

Lines changed: 13 additions & 2 deletions
````diff
@@ -166,7 +166,18 @@ In production, the Tauri app auto-installs and manages the system daemon. In dev
 - Your code changes take effect immediately on daemon restart
 - No interference with the system daemon
 
-**Two-terminal workflow:**
+**With Inkwell supervisor (preferred for agents):**
+
+If you have `supervisor_*` MCP tools available (e.g. in Zed with `mcp-supervisor`), the daemon is managed for you:
+
+- `supervisor_restart(target="daemon")` — start or restart the dev daemon
+- `supervisor_status` — check daemon status (includes `daemon_managed: true/false`)
+- `supervisor_rebuild` — rebuild Python bindings + restart
+- `supervisor_logs` — tail daemon logs
+
+No env vars or extra terminals needed. The supervisor handles per-worktree isolation automatically.
+
+**Two-terminal workflow (without supervisor):**
 
 ```bash
 # Terminal 1: Start the dev daemon (stays running)
````
````diff
@@ -205,7 +216,7 @@ RUNTIMED_DEV=1 cargo xtask notebook
 
 Per-worktree state is stored in `<cache>/runt-nightly/worktrees/{hash}/` (macOS: `~/Library/Caches/`, Linux: `~/.cache/`).
 
-**For AI agents:** Use `./target/debug/runt` directly to interact with the daemon. See the "Agent Access to Dev Daemon" section in CLAUDE.md. When using a raw terminal (not Zed tasks), set the env vars manually:
+**For AI agents:** If `supervisor_*` tools are available, prefer those — they handle env vars and daemon lifecycle automatically. Otherwise, use `./target/debug/runt` directly (see "Agent Access to Dev Daemon" in CLAUDE.md). When using a raw terminal (not Zed tasks), set the env vars manually:
 
 ```bash
 export RUNTIMED_DEV=1
````

contributing/runtimed.md

Lines changed: 41 additions & 54 deletions
````diff
@@ -83,7 +83,26 @@ cat ~/.cache/runt/daemon.json # check "version" field
 
 ### Fast iteration: Daemon + bundled notebook
 
-When iterating on daemon code, you often want to test changes in the notebook app without rebuilding the frontend:
+When iterating on daemon code, you often want to test changes in the notebook app without rebuilding the frontend.
+
+**With Inkwell supervisor** (if you have `supervisor_*` MCP tools — e.g. in Zed):
+
+The supervisor manages the dev daemon for you. No env vars or extra terminals needed.
+
+- `supervisor_restart(target="daemon")` — start or restart the dev daemon after code changes
+- `supervisor_rebuild` — rebuild Python bindings (`maturin develop`) + restart
+- `supervisor_status` — check daemon status (`daemon_managed: true` confirms it's running)
+- `supervisor_logs` — tail daemon logs
+- `supervisor_start_vite` — start the Vite dev server for hot-reload
+
+Then build and run the app normally:
+```bash
+cargo xtask build             # Full build (includes frontend)
+cargo xtask build --rust-only # Fast rebuild (reuses frontend assets)
+cargo xtask run               # Run the bundled binary
+```
+
+**Without supervisor** (manual two-terminal workflow):
 
 ```bash
 # Terminal 1: Run dev daemon (restart when you change daemon code)
````
````diff
@@ -274,42 +293,29 @@ VIRTUAL_ENV=../../python/runtimed/.venv maturin develop
 ### Basic Usage
 
 ```python
+import asyncio
 import runtimed
 
-session = runtimed.Session()
-session.connect()
-session.start_kernel()
-
-result = session.run("print('hello')")
-print(result.stdout)   # "hello\n"
-print(result.outputs)  # [Output(stream, stdout: "hello\n")]
-
-# Rich output (e.g. display(Image(...)))
-result = session.run("from IPython.display import Image, display; display(Image(filename='photo.png'))")
-for output in result.outputs:
-    for mime, value in output.data.items():
-        print(mime, type(value))
-# image/png <class 'bytes'> — raw binary, NOT base64
-# text/llm+plain <class 'str'> — synthesized blob URL for LLM consumers
-# text/plain <class 'str'>
-
-# Get cell with outputs (includes historical outputs from other clients)
-cell = session.get_cell(result.cell_id)
-print(cell.outputs)  # [Output(stream, stdout: "hello\n")]
-
-# Per-cell accessors (O(1), no full-doc materialization)
-source = session.get_cell_source(result.cell_id)   # just the source string
-cell_type = session.get_cell_type(result.cell_id)  # "code" | "markdown" | "raw"
-cell_ids = session.get_cell_ids()                  # position-sorted IDs
-
-# Move a cell (updates fractional index position)
-new_position = session.move_cell("cell-id", after_cell_id="other-cell-id")
-
-# Cell objects include position
-print(cell.position)  # fractional index string e.g. "80", "C0"
+async def main():
+    client = runtimed.Client()
+    async with await client.create_notebook() as notebook:
+        # Work with cells
+        cell = await notebook.cells.create("print('hello')")
+        result = await cell.run()
+        print(result.stdout)  # "hello\n"
+
+        cell = await notebook.cells.create("x = 42")
+        await cell.run()
+
+        # Sync reads from local CRDT
+        print(cell.source)     # "x = 42"
+        print(cell.cell_type)  # "code"
+        print(cell.outputs)    # resolved outputs
+
+asyncio.run(main())
 ```
 
-**Prefer per-cell accessors** (`get_cell_source`, `get_cell_type`, `get_cell_ids`) over `get_cells()` when you only need one cell or one field. `get_cells()` materializes every cell's source, outputs, and metadata.
+See [docs/python-bindings.md](../docs/python-bindings.md) for the full API reference.
 
 ### Output.data Typing
````
````diff
@@ -342,8 +348,7 @@ The Python bindings respect the `RUNTIMED_SOCKET_PATH` environment variable. Thi
 **System daemon (default):**
 ```python
 # Connects to system daemon at ~/Library/Caches/runt/runtimed.sock
-session = runtimed.Session()
-session.connect()
+client = runtimed.Client()
 ```
 
 **Worktree daemon (for development):**
````
````diff
@@ -367,25 +372,7 @@ export RUNTIMED_SOCKET_PATH=$(cat ~/Library/Caches/runt/worktrees/*/daemon.json
   jq -r 'select(.worktree_path == "'$(pwd)'") | .endpoint')
 
 # Now Python bindings will use the worktree daemon
-python -c "import runtimed; s = runtimed.Session(); s.connect(); print('Connected!')"
-```
-
-### Cross-Session Output Visibility
-
-The `Cell.outputs` field is populated from the Automerge document, enabling agents to see outputs from cells executed by other clients:
-
-```python
-# Session 1 executes code
-s1 = runtimed.Session(notebook_id="shared")
-s1.connect()
-s1.start_kernel()
-s1.run("x = 42")
-
-# Session 2 sees outputs without executing
-s2 = runtimed.Session(notebook_id="shared")
-s2.connect()
-cells = s2.get_cells()
-print(cells[0].outputs)  # Shows outputs from s1's execution
+python -c "import asyncio, runtimed; asyncio.run(runtimed.Client().ping())"
 ```
 
 ## Troubleshooting
````

contributing/testing.md

Lines changed: 11 additions & 11 deletions
````diff
@@ -215,19 +215,19 @@ RUNTIMED_INTEGRATION_TEST=1 pytest python/runtimed/tests/ -v
 
 ```python
 # Unit test (no daemon)
-class TestSessionConstruction:
-    def test_session_with_auto_id(self):
-        session = runtimed.Session()
-        assert session.notebook_id.startswith("agent-session-")
-        assert not session.is_connected
+class TestModuleExports:
+    def test_client_exported(self):
+        assert hasattr(runtimed, "Client")
 
-# Integration test (needs daemon)
+    def test_notebook_exported(self):
+        assert hasattr(runtimed, "Notebook")
+
+# Integration test (needs daemon, uses NativeAsyncClient for direct session access)
 @pytest.mark.asyncio
-async def test_kernel_execution(self):
-    async with runtimed.AsyncSession() as session:
-        await session.start_kernel()
-        result = await session.run("1 + 1")
-        assert result.success
+async def test_kernel_execution(async_session):
+    await async_session.start_kernel()
+    result = await async_session.run("1 + 1")
+    assert result.success
 ```
 
 **Environment variables:**
````
