Releases: sathish-t/nanalogue-gui

0.2.7

18 Mar 14:50

Highlights

  • plot_histogram(bins, **kwargs) — renders pre-binned histogram data as an SVG file using Vega-Lite, server-side; LLM bins the data in Python and passes dicts with bin_start/bin_end/count
  • plot_series(points, kind, **kwargs) — renders x/y point data as a line or scatter SVG using Vega-Lite; accepts kind="line" (default) or kind="scatter"
  • minimap2(reference_path, query_path, preset=None) — runs sequence alignment via a WebAssembly build of minimap2 v2.22, entirely in-process; always returns PAF format
  • Better error messages — RuntimeError feedback now includes a full Python-style traceback (file, line number, source preview with caret); external-function errors show the Python call site; SyntaxError hint expanded to guide the LLM to use print() for direct user output
  • ls() shape fix — always returns list[str]; previously returned a dict when the entry cap was hit, causing silent breakage when iterating; cap raised to 10,000 entries
  • /dump_llm_instructions transcript fix — last assistant bubble no longer cut off in the HTML output
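
The pre-binning step that plot_histogram expects can be sketched in plain Python. This is illustrative only: the helper name `make_bins`, the data, and the bin width are invented here; only the bin dict shape (bin_start/bin_end/count) comes from the release notes.

```python
# Sketch: pre-bin values in Python before handing them to plot_histogram.
# Only the bin_start/bin_end/count dict shape is from the release notes;
# the helper name, data, and bin width are illustrative.

def make_bins(values, bin_width):
    """Group numeric values into fixed-width bins."""
    counts = {}
    for v in values:
        start = (v // bin_width) * bin_width
        counts[start] = counts.get(start, 0) + 1
    return [
        {"bin_start": s, "bin_end": s + bin_width, "count": c}
        for s, c in sorted(counts.items())
    ]

read_lengths = [120, 310, 450, 470, 980]
bins = make_bins(read_lengths, bin_width=500)
# Inside the sandbox, one would then pass these bins to plot_histogram,
# e.g. plot_histogram(bins) plus whatever kwargs the tool accepts.
```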

See the full CHANGELOG for details.

0.2.6

13 Mar 11:39

Choose a tag to compare

Added

  • /dump_llm_instructions (and --dump-llm-instructions CLI flag) now writes a self-contained .html transcript alongside the existing .log file; system messages are collapsible, assistant Python is syntax-highlighted, and code execution results are rendered as structured ✓/✗ cards; copy buttons on every message bubble; links and images in message content are stripped to plain text to prevent XSS
  • bash(command) added to the AI chat sandbox — runs shell commands (grep, sed, awk, sort, jq, and standard builtins) with a deny-list blocking reads of sensitive files; wall-clock timeout via AbortSignal.timeout; writes persist to disk in ai_chat_temp_files/ via a ReadWriteFs mount while the rest of allowedDir stays read-only; a symlink guard prevents write escapes; stdout/stderr are truncated
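
A deny-list of the kind described above amounts to filename-pattern matching before any read is allowed. A minimal sketch, assuming glob-style patterns; the pattern list and the function name are illustrative, not the app's actual implementation:

```python
# Sketch of a sensitive-file deny-list check; the patterns and helper name
# are illustrative and do not reflect nanalogue-gui's actual list.
import fnmatch

SENSITIVE_PATTERNS = [
    "*.pem", "*.key", ".env", ".env.*", "id_rsa*", "*.gpg",
]

def is_sensitive(filename: str) -> bool:
    """Return True if a filename matches any deny-list pattern."""
    return any(fnmatch.fnmatch(filename, p) for p in SENSITIVE_PATTERNS)
```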

Dependencies

  • Updates @pydantic/monty to v0.0.8, which ships the sandbox execution loop natively; removes the vendored loop from monty-sandbox.ts

Infrastructure

  • Dependency bumps: electron 40.6.1→41.0.2, eslint-plugin-jsdoc 62.7.1→62.8.0, just-bash 2.12.8→2.13.0, @vitest/coverage-v8 4.0.18→4.1.0, esbuild 0.27.3→0.27.4, typescript-eslint 8.56.1→8.57.0, eslint 10.0.2→10.0.3, @biomejs/biome 2.4.4→2.4.6, html-validate 10.9.0→10.11.1, astral-sh/setup-uv v5→v7

0.2.5

10 Mar 07:51

Highlights

  • Font size tweaker — three A buttons (small / medium / large) in the landing page header scale all text in the app via a rem cascade; chart tick labels, axis titles, and legend text follow the chosen size automatically
  • --non-interactive <msg> — send a single message to nanalogue-chat, print the response, and exit; handy for scripting
  • --dump-llm-instructions — when used with --non-interactive, writes the full LLM request payload to a dated log file in ai_chat_output/
  • --system-prompt <text> — replace the built-in sandbox prompt in nanalogue-chat; SYSTEM_APPEND.md and facts still stack on top
  • --rm-tools <t1,t2,...> — remove a subset of sandbox tools from the Monty execution environment; requires --system-prompt
  • Token estimate in system prompt dialog — ~N tokens (rough) shown in the actions bar when the prompt loads
  • Widened AI chat option bounds — timeout, record counts, duration, memory, allocations, and read-size fields all accept a broader range of values
  • Clearer CLI error reporting — out-of-range flag values now report the flag name and allowed range instead of silently clamping; all bad flags reported together
  • Record-count cap fix — read_info, bam_mods, window_reads, and seq_table now correctly honour sandbox record-count caps even when the Python script passes its own limit argument
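
The cap behaviour in the last bullet reduces to one rule: a script-supplied limit may tighten the sandbox cap but never widen it. A minimal sketch (the function name is hypothetical):

```python
# Sketch of record-count cap clamping: a user-supplied limit can only
# tighten, never exceed, the sandbox cap. Function name is illustrative.

def effective_limit(requested, sandbox_cap):
    """Clamp a script-supplied record limit to the sandbox cap."""
    if requested is None:
        return sandbox_cap
    return min(requested, sandbox_cap)
```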

See the full CHANGELOG for details.

0.2.4

03 Mar 13:49

Highlights

  • SYSTEM_APPEND.md support — place a file with that name in the BAM analysis directory to append domain-specific instructions to the default system prompt; loaded once per session with a 64 KB size cap; available in both GUI and CLI
  • Sensitive file blocking — best-effort blocking of keys, certificates, dotenv, SSH keys, and GPG files from read_file and ls; consent dialog and CLI banner show a notice
  • /dump_system_prompt CLI slash command — dumps the static system prompt to a file in ai_chat_output/ at any point in the session
  • "View System Prompt" button — shows the static initial prompt built from current Advanced Options; includes a Copy button
  • Sandbox print() cap — output capped at 1 MB per execution, truncated at a UTF-8 boundary; a printsTruncated flag signals clipping; the cap can be altered in nanalogue-sandbox-exec
  • Copy-to-clipboard button in the sandbox code panel
  • nanalogue-sandbox-exec CLI — run Python scripts directly in the Monty sandbox without LLM involvement
  • write_file writes to the allowed directory instead of a fixed ai_chat_output/ subdirectory
  • max_tokens vs max_completion_tokens chosen per endpoint (Mistral/chutes.ai use max_tokens; others use max_completion_tokens)
  • Extra BED fields displayed in the swipe info strip
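
Truncating at a UTF-8 boundary, as the print() cap above does, means cutting the byte stream without splitting a multi-byte code point. A sketch under a tiny byte limit for illustration (the release uses 1 MB; the helper name and return shape are invented here):

```python
# Sketch of UTF-8-safe truncation; byte limit is tiny for illustration.
# Helper name and (text, clipped) return shape are illustrative.

def truncate_utf8(text: str, max_bytes: int):
    """Cut text to at most max_bytes bytes without splitting a code point."""
    raw = text.encode("utf-8")
    if len(raw) <= max_bytes:
        return text, False
    clipped = raw[:max_bytes]
    # errors="ignore" drops any trailing partial multi-byte sequence.
    return clipped.decode("utf-8", errors="ignore"), True

out, clipped = truncate_utf8("héllo", 3)  # "é" occupies 2 bytes in UTF-8
```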

See the full CHANGELOG for details.

0.2.3

26 Feb 20:36

Highlights

  • Configurable file size limits — maxReadMB and maxWriteMB sandbox limits with --max-read-mb/--max-write-mb CLI flags, validation specs, and settings UI
  • Windowed density plot fix — corrects a shift in windowed density plot rendering

See the full CHANGELOG for details.

0.2.2

25 Feb 15:36

Highlights

  • Standalone CLI (nanalogue-chat) — terminal REPL for LLM-powered BAM analysis without Electron
  • Native fetch rewrite — the chat orchestrator now uses a direct fetch loop instead of the Vercel AI SDK for Python sandbox code execution
  • Configurable sandbox limits — maxDurationSecs, maxMemoryMB, maxAllocations with CLI flags and UI controls
  • /dump_llm_instructions and /exec slash commands for inspecting LLM payloads and running Python files directly

See the full CHANGELOG for details.

0.2.0

20 Feb 14:54

Highlights

  • AI Chat mode — ask natural-language questions about BAM files using any OpenAI-compatible endpoint (local Ollama, remote API, etc.) with sandboxed code execution via Pydantic's Monty and Vercel's AI SDK
  • Sequences tab in QC — per-read modification highlighting with quality tooltips, row selection, read ID copy, and multi-alignment support
  • CRAM file support across all modes (QC, Swipe, Locate Reads)
  • Advanced QC filtering — MAPQ, read type, length, read ID file, base quality and probability thresholds
  • Configurable window size in Swipe mode, replacing the hardcoded 300-base default
  • Exit watchdog for reliable app shutdown even when native addon calls block the event loop
  • QC pagination with streaming histograms — reduces peak memory for large BAM files
  • Deterministic sample seed for reproducible QC subsampling

Other changes

  • Version button and dialog on landing page
  • QC loading overlay with per-source progress counters
  • Connection status indicator for AI Chat endpoint
  • Dependency updates: @nanalogue/node ^0.1.4, ai ^6.0.94, @pydantic/monty ^0.0.7, and more

See the full CHANGELOG for details.

0.1.2

09 Feb 13:06

See CHANGELOG.md for full details.

Highlights

  • Locate reads mode — new mode for converting BAM + read ID file to BED format with region filtering
  • TSV download — export per-read whole-read density data from QC analysis
  • Reusable custom elements — <output-file-input>, <bam-resource-input>, <mod-filter-input>
  • QC pipeline optimization — streaming histograms, parallelized data retrieval
  • CLI removed — application is now GUI-only
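
The streaming-histogram idea in the QC bullet can be sketched simply: bin counts are updated one chunk at a time, so the full per-read dataset never has to sit in memory at once. The function name and chunking are illustrative, not the app's actual pipeline:

```python
# Sketch of a streaming histogram: counts accumulate per chunk so only
# one chunk is ever held in memory. Name and chunking are illustrative.

def stream_histogram(chunks, bin_width):
    """Accumulate fixed-width bin counts over an iterable of value chunks."""
    counts = {}
    for chunk in chunks:          # each chunk is a small batch of values
        for v in chunk:
            b = (v // bin_width) * bin_width
            counts[b] = counts.get(b, 0) + 1
    return counts
```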