
feat(subscribers): add JSONL event log subscriber#6420

Open
cckellogg wants to merge 2 commits into main from chris/event-log

Conversation

@cckellogg
Contributor

Changes Made

EventLogSubscriber writes query lifecycle events (start, optimization, execution, operator, stats) to a per-run JSONL file, by default under the ~/.daft/events/ directory; the location is configurable. Includes enable_event_log()/disable_event_log() helpers for global attachment with atexit cleanup.

Each log is contained in a run directory named with the format run_YYYYMMDD_HHMMSS_<4 hex chars>.

This is experimental and the API will probably change in future PRs.

Related Issues

Sample log:

{"event": "session_started", "ts": "2026-03-17T23:33:49.916Z", "daft_version": "0.3.0-dev0"}
{"event": "query_started", "ts": "2026-03-17T23:33:53.867Z", "query_id": "nimble-falcon-57ee1d"}
{"event": "plan_unoptimized", "ts": "2026-03-17T23:33:53.867Z", "query_id": "nimble-falcon-57ee1d", "plan": "{\"children\":[{\"batch_size\":10000,\"children\":[{\"children\":[{\"children\":[{\"children\":[],\"type\":\"Source\"}],\"projection\":[\"col(VendorID)\",\"col(total_amount)\",\"col(tpep_pickup_datetime)\"],\"type\":\"Project\"}],\"limit\":100000,\"type\":\"Limit\"}],\"type\":\"IntoBatches\"}],\"type\":\"Sink\"}"}
{"event": "optimization_started", "ts": "2026-03-17T23:33:53.867Z", "query_id": "nimble-falcon-57ee1d"}
{"event": "optimization_ended", "ts": "2026-03-17T23:33:54.164Z", "query_id": "nimble-falcon-57ee1d", "duration_ms": 298}
{"event": "plan_optimized", "ts": "2026-03-17T23:33:54.165Z", "query_id": "nimble-falcon-57ee1d", "plan": "{\"children\":[{\"batch_size\":10000,\"children\":[{\"children\":[{\"children\":[{\"children\":[],\"type\":\"Source\"}],\"limit\":100000,\"type\":\"Limit\"}],\"projection\":[\"col(VendorID)\",\"col(total_amount)\",\"col(tpep_pickup_datetime)\"],\"type\":\"Project\"}],\"type\":\"IntoBatches\"}],\"type\":\"Sink\"}"}
{"event": "execution_started", "ts": "2026-03-17T23:33:54.178Z", "query_id": "nimble-falcon-57ee1d"}
{"event": "plan_physical", "ts": "2026-03-17T23:33:54.178Z", "query_id": "nimble-falcon-57ee1d", "plan": "{\"approx_stats\":{\"approx_stats\":{\"acc_selectivity\":1.0,\"num_rows\":0,\"size_bytes\":0}},\"category\":\"BlockingSink\",\"children\":[{\"approx_stats\":{\"approx_stats\":{\"acc_selectivity\":1.0,\"num_rows\":0,\"size_bytes\":0}},\"category\":\"BlockingSink\",\"children\":[{\"approx_stats\":{\"approx_stats\":{\"acc_selectivity\":1.0,\"num_rows\":100000,\"size_bytes\":2400000}},\"category\":\"Intermediate\",\"children\":[{\"approx_stats\":{\"approx_stats\":{\"acc_selectivity\":1.0,\"num_rows\":100000,\"size_bytes\":2400000}},\"category\":\"Intermediate\",\"children\":[{\"approx_stats\":{\"approx_stats\":{\"acc_selectivity\":1.0,\"num_rows\":100000,\"size_bytes\":2400000}},\"category\":\"StreamingSink\",\"children\":[{\"approx_stats\":{\"approx_stats\":{\"acc_selectivity\":1.0,\"num_rows\":100000,\"size_bytes\":2400000}},\"category\":\"Source\",\"id\":0,\"name\":\"Read Parquet\",\"type\":\"ScanTask\"}],\"id\":1,\"name\":\"Limit 100000\",\"type\":\"Limit\"}],\"id\":2,\"name\":\"Rename & Reorder\",\"type\":\"Project\"}],\"id\":3,\"name\":\"Into Batches of 10000\",\"type\":\"IntoBatches\"}],\"id\":4,\"name\":\"CSV Write\",\"type\":\"Write\"}],\"id\":5,\"name\":\"Commit Write\",\"type\":\"CommitWrite\"}"}
{"event": "operator_started", "ts": "2026-03-17T23:33:54.365Z", "query_id": "nimble-falcon-57ee1d", "node_id": 0}
{"event": "stats", "ts": "2026-03-17T23:33:54.370Z", "query_id": "nimble-falcon-57ee1d", "node_id": 0, "metrics": {"bytes.read": 0, "rows.out": 0, "duration": 0}}
{"event": "stats", "ts": "2026-03-17T23:33:54.555Z", "query_id": "nimble-falcon-57ee1d", "node_id": 0, "metrics": {"bytes.read": 0, "rows.out": 0, "duration": 0}}
{"event": "stats", "ts": "2026-03-17T23:33:54.754Z", "query_id": "nimble-falcon-57ee1d", "node_id": 0, "metrics": {"rows.out": 0, "duration": 0, "bytes.read": 65536}}
{"event": "operator_started", "ts": "2026-03-17T23:33:54.901Z", "query_id": "nimble-falcon-57ee1d", "node_id": 1}
{"event": "operator_started", "ts": "2026-03-17T23:33:54.903Z", "query_id": "nimble-falcon-57ee1d", "node_id": 2}
{"event": "operator_started", "ts": "2026-03-17T23:33:54.905Z", "query_id": "nimble-falcon-57ee1d", "node_id": 3}
{"event": "operator_started", "ts": "2026-03-17T23:33:54.906Z", "query_id": "nimble-falcon-57ee1d", "node_id": 4}
{"event": "stats", "ts": "2026-03-17T23:33:54.953Z", "query_id": "nimble-falcon-57ee1d", "node_id": 3, "metrics": {"duration": 0, "rows.in": 60000, "rows.out": 50000}}
{"event": "stats", "ts": "2026-03-17T23:33:54.953Z", "query_id": "nimble-falcon-57ee1d", "node_id": 0, "metrics": {"bytes.read": 1653009, "duration": 0, "rows.out": 78091}}
{"event": "stats", "ts": "2026-03-17T23:33:54.953Z", "query_id": "nimble-falcon-57ee1d", "node_id": 1, "metrics": {"rows.out": 78091, "duration": 0, "rows.in": 78091}}
{"event": "stats", "ts": "2026-03-17T23:33:54.953Z", "query_id": "nimble-falcon-57ee1d", "node_id": 2, "metrics": {"rows.in": 78091, "rows.out": 78091, "duration": 1}}
{"event": "stats", "ts": "2026-03-17T23:33:54.953Z", "query_id": "nimble-falcon-57ee1d", "node_id": 4, "metrics": {"rows.written": 30000, "duration": 82, "rows.in": 30000, "bytes.written": 720000}}
{"event": "stats", "ts": "2026-03-17T23:33:54.970Z", "query_id": "nimble-falcon-57ee1d", "node_id": 1, "metrics": {"rows.in": 100000, "rows.out": 100000, "duration": 0}}
{"event": "operator_ended", "ts": "2026-03-17T23:33:54.970Z", "query_id": "nimble-falcon-57ee1d", "node_id": 1, "duration_ms": 69}
{"event": "stats", "ts": "2026-03-17T23:33:54.971Z", "query_id": "nimble-falcon-57ee1d", "node_id": 0, "metrics": {"duration": 0, "rows.out": 100000, "bytes.read": 2164041}}
{"event": "operator_ended", "ts": "2026-03-17T23:33:54.971Z", "query_id": "nimble-falcon-57ee1d", "node_id": 0, "duration_ms": 606}
{"event": "stats", "ts": "2026-03-17T23:33:54.989Z", "query_id": "nimble-falcon-57ee1d", "node_id": 2, "metrics": {"duration": 1, "rows.in": 100000, "rows.out": 100000}}
{"event": "operator_ended", "ts": "2026-03-17T23:33:54.989Z", "query_id": "nimble-falcon-57ee1d", "node_id": 2, "duration_ms": 87}
{"event": "stats", "ts": "2026-03-17T23:33:55.022Z", "query_id": "nimble-falcon-57ee1d", "node_id": 3, "metrics": {"rows.out": 100000, "rows.in": 100000, "duration": 0}}
{"event": "operator_ended", "ts": "2026-03-17T23:33:55.023Z", "query_id": "nimble-falcon-57ee1d", "node_id": 3, "duration_ms": 118}
{"event": "stats", "ts": "2026-03-17T23:33:55.037Z", "query_id": "nimble-falcon-57ee1d", "node_id": 4, "metrics": {"rows.in": 100000, "bytes.written": 2400000, "rows.written": 100000, "duration": 254}}
{"event": "operator_ended", "ts": "2026-03-17T23:33:55.037Z", "query_id": "nimble-falcon-57ee1d", "node_id": 4, "duration_ms": 131}
{"event": "operator_started", "ts": "2026-03-17T23:33:55.037Z", "query_id": "nimble-falcon-57ee1d", "node_id": 5}
{"event": "stats", "ts": "2026-03-17T23:33:55.049Z", "query_id": "nimble-falcon-57ee1d", "node_id": 5, "metrics": {"rows.in": 1, "rows.out": 1, "duration": 29}}
{"event": "operator_ended", "ts": "2026-03-17T23:33:55.049Z", "query_id": "nimble-falcon-57ee1d", "node_id": 5, "duration_ms": 12}
{"event": "execution_ended", "ts": "2026-03-17T23:33:55.049Z", "query_id": "nimble-falcon-57ee1d", "duration_ms": 871}
{"event": "result_out", "ts": "2026-03-17T23:33:55.050Z", "query_id": "nimble-falcon-57ee1d", "rows": 1}
{"event": "query_ended", "ts": "2026-03-17T23:33:55.050Z", "query_id": "nimble-falcon-57ee1d", "duration_ms": 1183, "status": "ok"}

@cckellogg cckellogg requested a review from a team March 17, 2026 23:44
@github-actions github-actions bot added the feat label Mar 17, 2026
@greptile-apps
Contributor

greptile-apps bot commented Mar 17, 2026

Greptile Summary

This PR introduces EventLogSubscriber, an experimental subscriber that writes Daft query lifecycle events (session start, query start/end, optimization, execution, operator-level stats) to a per-run JSONL file under ~/.daft/events/ by default. It also adds enable_event_log() / disable_event_log() helpers for global attachment with atexit cleanup, and extends the Subscriber base class with a non-abstract close() hook.

Key changes:

  • daft/subscribers/abc.py: Adds close() as an overridable (non-abstract) no-op on Subscriber — backward-compatible and clean.
  • daft/subscribers/events.py: New EventLogSubscriber with per-run directories, monotonic-clock duration tracking, defensive I/O error handling, and idiomatic atexit registration.
  • tests/test_events.py: Good lifecycle and cleanup tests, including a full end-to-end integration test.

Issues found:

  • The callback methods (on_exec_operator_start, on_exec_operator_end, on_exec_emit_stats, _write_event) share mutable state (timing dicts, file handle) without any locking. The PR's own sample log confirms concurrent operator callbacks ~2ms apart, making this a real race condition risk — especially as Python moves toward free-threaded execution.
  • enable_event_log() returns None, giving callers no easy way to discover where the log was written without accessing private attributes.
  • record.update(payload) in _write_event can silently overwrite the event and ts meta-keys if a caller ever passes them in the payload.

Confidence Score: 3/5

  • Safe to merge with low immediate risk, but the missing thread-safety guards should be addressed before the API stabilizes.
  • The core subscriber logic is correct and the tests cover the main lifecycle paths. However, concurrent operator callbacks are confirmed by the sample log, and the shared mutable state (timing dicts + file handle) is unprotected. In CPython today this is largely safe due to the GIL, but it will become a real bug under Python 3.13+ free-threaded mode or non-CPython runtimes. Since the PR is already marked experimental, the risk is contained — but the thread-safety issue should be resolved before the API is promoted.
  • daft/subscribers/events.py — thread safety of shared timing state and file writes needs attention before the experimental label is removed.

Important Files Changed

| Filename | Overview |
| --- | --- |
| daft/subscribers/events.py | New JSONL event-log subscriber. Core logic is sound but lacks thread-safety guards (shared timing dicts and file I/O are unprotected), and enable_event_log doesn't return the log path to callers. |
| daft/subscribers/abc.py | Adds a non-abstract close() hook to the base Subscriber class, enabling resource cleanup without breaking existing implementors. Clean, minimal, backward-compatible change. |
| tests/test_events.py | Good coverage of subscriber lifecycle, timing-state cleanup, and end-to-end integration via enable_event_log. No tests for concurrent callback invocations or operator-stat events. |

Sequence Diagram

sequenceDiagram
    participant User
    participant DaftContext
    participant EventLogSubscriber
    participant JSONL as events.jsonl

    User->>DaftContext: enable_event_log(dir?)
    DaftContext->>EventLogSubscriber: __init__(log_dir, run_id)
    EventLogSubscriber->>JSONL: session_started
    DaftContext->>DaftContext: attach_subscriber("_daft_event_log", subscriber)

    User->>DaftContext: df.collect()
    DaftContext->>EventLogSubscriber: on_query_start(query_id, metadata)
    EventLogSubscriber->>JSONL: query_started + plan_unoptimized

    DaftContext->>EventLogSubscriber: on_optimization_start(query_id)
    EventLogSubscriber->>JSONL: optimization_started
    DaftContext->>EventLogSubscriber: on_optimization_end(query_id, plan)
    EventLogSubscriber->>JSONL: optimization_ended + plan_optimized

    DaftContext->>EventLogSubscriber: on_exec_start(query_id, physical_plan)
    EventLogSubscriber->>JSONL: execution_started + plan_physical

    par Concurrent operator callbacks
        DaftContext->>EventLogSubscriber: on_exec_operator_start(query_id, node_id=N)
        EventLogSubscriber->>JSONL: operator_started
        DaftContext->>EventLogSubscriber: on_exec_emit_stats(query_id, stats)
        EventLogSubscriber->>JSONL: stats (per node)
        DaftContext->>EventLogSubscriber: on_exec_operator_end(query_id, node_id=N)
        EventLogSubscriber->>JSONL: operator_ended
    end

    DaftContext->>EventLogSubscriber: on_result_out(query_id, partition)
    EventLogSubscriber->>JSONL: result_out
    DaftContext->>EventLogSubscriber: on_exec_end(query_id)
    EventLogSubscriber->>JSONL: execution_ended
    DaftContext->>EventLogSubscriber: on_query_end(query_id, result)
    EventLogSubscriber->>JSONL: query_ended

    User->>DaftContext: disable_event_log()
    DaftContext->>DaftContext: detach_subscriber("_daft_event_log")
    DaftContext->>EventLogSubscriber: close()
    EventLogSubscriber->>JSONL: (file closed)

Last reviewed commit: "fixes"

_EVENT_LOG_SUBSCRIBER: EventLogSubscriber | None = None


def enable_event_log(dir: str | Path | None = None) -> None:
Contributor

P2 Parameter name dir shadows built-in

dir is a Python built-in function. Using it as a parameter name silently shadows the built-in for the entire function scope, which can cause confusing bugs if the built-in is ever needed inside the function. A more descriptive name would also improve readability:

Suggested change
def enable_event_log(dir: str | Path | None = None) -> None:
def enable_event_log(log_dir: str | Path | None = None) -> None:

And update the body accordingly (`dir or _DEFAULT_EVENT_LOG_DIR` → `log_dir or _DEFAULT_EVENT_LOG_DIR`).

return time.monotonic() * 1000


def _generate_run_id() -> str:
Member

We made these nice human-readable IDs that show up in the dashboard (#6310). Would it be possible to reuse that, so users can quickly correlate runs in the dashboard to these events?

Contributor Author

That's going to be tricky. Those names are associated with a query and a script can execute many queries. This means we would need to maintain and write multiple files. I thought for the first version it would be easier to write to a single file per script execution. Thoughts?

Member

I am of the opinion that it makes the most sense to have the query be the atomic unit as opposed to the script. @rchowell and I had a lengthy discussion about this the other day actually, and he may have some opinions on it as well. I think there are pros/cons on both sides, but ultimately I think having everything 1-1 with the query (not the script) will make things easier to reason about. Additionally, considering the dashboard uses queries as the atomic unit, it would be odd to me if we introduced an additional grouping here that does not exist there.

Happy to have further discussions as well.

Contributor

My 2-cents is log to a file whose name is the query id.

Put it in some conventional location, then it's trivial to show logs in the dashboard for each query.

@cckellogg
Contributor Author

@greptileai

Comment on lines +89 to +103
def _write_event(self, event_name: str, payload: dict[str, Any]) -> None:
    if self._closed:
        return
    record: dict[str, Any] = {"event": event_name, "ts": _iso_now()}
    record.update(payload)
    try:
        self._file.write(json.dumps(record, default=_json_default) + "\n")
    except OSError:
        pass  # Don't let logging failures affect query execution

def close(self) -> None:
    if self._closed:
        return
    self._closed = True
    self._file.close()
Contributor

P1 Shared mutable state and file writes are not thread-safe

The sample log in the PR description shows concurrent operator_started events just 2ms apart (node IDs 1–4 at 23:33:54.901, .903, .905, .906), confirming that subscriber callbacks are invoked from multiple threads simultaneously.

The current implementation has several unprotected shared-state operations:

  • self._operator_starts[(query_id, node_id)] = _mono_ms() and self._operator_starts.pop(...) in on_exec_operator_start / on_exec_operator_end called concurrently from different threads
  • self._closed is checked and self._file.write(...) is called non-atomically in _write_event — a thread could pass the if self._closed: return guard just before another thread runs close(), then attempt a write on a closed file (though OSError is caught)
  • With Python 3.13+ free-threaded mode (--disable-gil) or non-CPython runtimes, individual dict operations are also no longer atomic

A threading.Lock should guard both the file write and any mutation of the timing-state dicts:

import threading

def __init__(self, log_dir: str | Path, run_id: str | None = None) -> None:
    ...
    self._lock = threading.Lock()
    ...

def _write_event(self, event_name: str, payload: dict[str, Any]) -> None:
    if self._closed:
        return
    record: dict[str, Any] = {"event": event_name, "ts": _iso_now()}
    record.update(payload)
    try:
        with self._lock:
            self._file.write(json.dumps(record, default=_json_default) + "\n")
    except OSError:
        pass

And similarly wrap mutations to _operator_starts / _exec_starts / etc. in the same lock (or a separate state lock).

Comment on lines +220 to +239
def enable_event_log(dir: str | Path | None = None) -> None:
"""Experimental helper that attaches an event-log subscriber.

This API is currently intended for local event-log capture through
`enable_event_log()` / `disable_event_log()`.
"""
global _EVENT_LOG_ATEXIT_REGISTERED, _EVENT_LOG_SUBSCRIBER
if _EVENT_LOG_SUBSCRIBER is not None:
disable_event_log()
if not _EVENT_LOG_ATEXIT_REGISTERED:
atexit.register(disable_event_log)
_EVENT_LOG_ATEXIT_REGISTERED = True

subscriber = EventLogSubscriber(dir or _DEFAULT_EVENT_LOG_DIR)
try:
get_context().attach_subscriber(_EVENT_LOG_ALIAS, subscriber)
except Exception:
subscriber.close()
raise
_EVENT_LOG_SUBSCRIBER = subscriber
Contributor

P2 enable_event_log returns None, making log path undiscoverable

After enable_event_log() is called, the user has no way to know where the log is being written without reaching into the private _EVENT_LOG_SUBSCRIBER._events_path attribute. Since this is the primary user-facing entry point for the feature, returning the log path would significantly improve discoverability — especially for new users who may not know the default ~/.daft/events/ location.

Consider returning the path to the generated log file:

Suggested change
def enable_event_log(dir: str | Path | None = None) -> None:
    """Experimental helper that attaches an event-log subscriber.

    This API is currently intended for local event-log capture through
    `enable_event_log()` / `disable_event_log()`.
    """
    global _EVENT_LOG_ATEXIT_REGISTERED, _EVENT_LOG_SUBSCRIBER
    if _EVENT_LOG_SUBSCRIBER is not None:
        disable_event_log()
    if not _EVENT_LOG_ATEXIT_REGISTERED:
        atexit.register(disable_event_log)
        _EVENT_LOG_ATEXIT_REGISTERED = True
    subscriber = EventLogSubscriber(dir or _DEFAULT_EVENT_LOG_DIR)
    try:
        get_context().attach_subscriber(_EVENT_LOG_ALIAS, subscriber)
    except Exception:
        subscriber.close()
        raise
    _EVENT_LOG_SUBSCRIBER = subscriber

becomes

def enable_event_log(log_dir: str | Path | None = None) -> Path:
    """Experimental helper that attaches an event-log subscriber.

    This API is currently intended for local event-log capture through
    `enable_event_log()` / `disable_event_log()`.

    Returns:
        Path to the events.jsonl file for this run.
    """
    global _EVENT_LOG_ATEXIT_REGISTERED, _EVENT_LOG_SUBSCRIBER
    if _EVENT_LOG_SUBSCRIBER is not None:
        disable_event_log()
    if not _EVENT_LOG_ATEXIT_REGISTERED:
        atexit.register(disable_event_log)
        _EVENT_LOG_ATEXIT_REGISTERED = True
    subscriber = EventLogSubscriber(log_dir or _DEFAULT_EVENT_LOG_DIR)
    try:
        get_context().attach_subscriber(_EVENT_LOG_ALIAS, subscriber)
    except Exception:
        subscriber.close()
        raise
    _EVENT_LOG_SUBSCRIBER = subscriber
    return subscriber._events_path

Comment on lines +89 to +97
def _write_event(self, event_name: str, payload: dict[str, Any]) -> None:
    if self._closed:
        return
    record: dict[str, Any] = {"event": event_name, "ts": _iso_now()}
    record.update(payload)
    try:
        self._file.write(json.dumps(record, default=_json_default) + "\n")
    except OSError:
        pass  # Don't let logging failures affect query execution
Contributor

P2 Payload keys event and ts can be silently overwritten

record.update(payload) is called after the event and ts keys are set. If any internal caller ever accidentally includes "event" or "ts" in the payload dict, those values will be silently clobbered without any error. Since this is an internal method, this is an unlikely bug today, but it's a defensive programming issue that could cause subtle, hard-to-debug corruption as the event type roster grows.

Suggested change
def _write_event(self, event_name: str, payload: dict[str, Any]) -> None:
    if self._closed:
        return
    record: dict[str, Any] = {"event": event_name, "ts": _iso_now()}
    record.update(payload)
    try:
        self._file.write(json.dumps(record, default=_json_default) + "\n")
    except OSError:
        pass  # Don't let logging failures affect query execution

becomes

def _write_event(self, event_name: str, payload: dict[str, Any]) -> None:
    if self._closed:
        return
    ts = _iso_now()
    record: dict[str, Any] = {"event": event_name, "ts": ts}
    record.update(payload)
    # Re-assert the meta-keys so payload can never override them
    record["event"] = event_name
    record["ts"] = ts
    try:
        self._file.write(json.dumps(record, default=_json_default) + "\n")
    except OSError:
        pass  # Don't let logging failures affect query execution

Or, more simply, merge in the opposite order so meta-keys always win:

record: dict[str, Any] = {**payload, "event": event_name, "ts": _iso_now()}
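That merge-order fix is easy to verify in isolation; `build_record` below is a hypothetical stand-alone version of the expression, not a function from the PR:

```python
from typing import Any


def build_record(event_name: str, ts: str, payload: dict[str, Any]) -> dict[str, Any]:
    """Merge payload first so the later meta-keys always win."""
    return {**payload, "event": event_name, "ts": ts}
```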

@codecov

codecov bot commented Mar 18, 2026

Codecov Report

❌ Patch coverage is 78.80795% with 32 lines in your changes missing coverage. Please review.
✅ Project coverage is 74.76%. Comparing base (c0f5ed4) to head (3103947).
⚠️ Report is 1 commit behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| daft/subscribers/events.py | 79.19% | 31 Missing ⚠️ |
| daft/subscribers/abc.py | 50.00% | 1 Missing ⚠️ |
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main    #6420      +/-   ##
==========================================
- Coverage   74.79%   74.76%   -0.03%     
==========================================
  Files        1021     1022       +1     
  Lines      136569   136711     +142     
==========================================
+ Hits       102142   102212      +70     
- Misses      34427    34499      +72     
| Files with missing lines | Coverage Δ |
| --- | --- |
| daft/subscribers/abc.py | 71.05% <50.00%> (-1.17%) ⬇️ |
| daft/subscribers/events.py | 79.19% <79.19%> (ø) |

... and 11 files with indirect coverage changes


@desmondcheongzx
Collaborator

Cool! Looking at the log output, is the plan to add the operator multiline displays to the output too? E.g. "Filter: col(x) > 5", "Project: col(x) + 1, col(y) * 2"? That would be super useful.
