Releases: sathish-t/nanalogue-gui
0.2.7
Highlights
- `plot_histogram(bins, **kwargs)` — renders pre-binned histogram data as an SVG file using Vega-Lite, server-side; the LLM bins the data in Python and passes dicts with `bin_start`/`bin_end`/`count`
- `plot_series(points, kind, **kwargs)` — renders x/y point data as a line or scatter SVG using Vega-Lite; accepts `kind="line"` (default) or `kind="scatter"`
- `minimap2(reference_path, query_path, preset=None)` — runs sequence alignment via a WebAssembly build of minimap2 v2.22, entirely in-process; always returns PAF format
- Better error messages — `RuntimeError` feedback now includes a full Python-style traceback (file, line number, source preview with caret); external-function errors show the Python call site; the `SyntaxError` hint was expanded to guide the LLM to use `print()` for direct user output
- `ls()` shape fix — always returns `list[str]`; previously returned a dict when the entry cap was hit, causing silent breakage when iterating; cap raised to 10,000 entries
- `/dump_llm_instructions` transcript fix — the last assistant bubble is no longer cut off in the HTML output
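The `bin_start`/`bin_end`/`count` shape expected by `plot_histogram` can be produced in plain Python before the call. A minimal sketch of that binning step; the `bin_values` helper is illustrative, not part of the sandbox API:

```python
def bin_values(values, bin_width):
    """Group raw numeric values into fixed-width bins, returning the
    bin_start/bin_end/count dicts that plot_histogram expects."""
    counts = {}
    for v in values:
        start = (v // bin_width) * bin_width  # left edge of the bin containing v
        counts[start] = counts.get(start, 0) + 1
    return [
        {"bin_start": s, "bin_end": s + bin_width, "count": c}
        for s, c in sorted(counts.items())
    ]

bins = bin_values([1, 2, 2, 5, 7, 7, 7], bin_width=5)
# bins could then be handed to plot_histogram(bins) inside the sandbox
```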
See the full CHANGELOG for details.
0.2.6
Added
- `/dump_llm_instructions` (and the `--dump-llm-instructions` CLI flag) now writes a self-contained `.html` transcript alongside the existing `.log` file; system messages are collapsible, assistant Python is syntax-highlighted, and code execution results are rendered as structured ✓/✗ cards; copy buttons on every message bubble; links and images in message content are stripped to plain text to prevent XSS
- Adds `bash(command)` to the AI chat sandbox: runs shell commands (grep, sed, awk, sort, jq, and standard builtins) with a deny-list blocking reads of sensitive files; wall-clock timeout via `AbortSignal.timeout`; writes persist to disk in `ai_chat_temp_files/` via a `ReadWriteFs` mount; the rest of `allowedDir` is read-only; a symlink guard prevents write escapes; stdout/stderr are truncated
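The symlink guard described above boils down to resolving the write target and refusing anything that lands outside the allowed directory. A hedged sketch of that check, assuming a POSIX filesystem; `is_write_allowed` and its arguments are illustrative names, not the project's actual API:

```python
import os

def is_write_allowed(allowed_dir: str, relative_target: str) -> bool:
    """Return True only if the target's fully resolved path stays inside allowed_dir.

    os.path.realpath() follows symlinks, so a link pointing outside the
    sandbox directory is caught even when the literal path looks safe.
    """
    root = os.path.realpath(allowed_dir)
    resolved = os.path.realpath(os.path.join(root, relative_target))
    return resolved == root or resolved.startswith(root + os.sep)
```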
Dependencies
- Updates `@pydantic/monty` to v0.0.8, which ships the sandbox execution loop natively; removes the vendored loop from `monty-sandbox.ts`
Infrastructure
- Dependency bumps: `electron` 40.6.1 → 41.0.2, `eslint-plugin-jsdoc` 62.7.1 → 62.8.0, `just-bash` 2.12.8 → 2.13.0, `@vitest/coverage-v8` 4.0.18 → 4.1.0, `esbuild` 0.27.3 → 0.27.4, `typescript-eslint` 8.56.1 → 8.57.0, `eslint` 10.0.2 → 10.0.3, `@biomejs/biome` 2.4.4 → 2.4.6, `html-validate` 10.9.0 → 10.11.1, `astral-sh/setup-uv` v5 → v7
0.2.5
Highlights
- Font size tweaker — three A buttons (small / medium / large) in the landing page header scale all text in the app via a `rem` cascade; chart tick labels, axis titles, and legend text follow the chosen size automatically
- `--non-interactive <msg>` — send a single message to `nanalogue-chat`, print the response, and exit; clean for scripting
- `--dump-llm-instructions` — when used with `--non-interactive`, writes the full LLM request payload to a dated log file in `ai_chat_output/`
- `--system-prompt <text>` — replace the built-in sandbox prompt in `nanalogue-chat`; `SYSTEM_APPEND.md` and facts still stack on top
- `--rm-tools <t1,t2,...>` — remove a subset of sandbox tools from the Monty execution environment; requires `--system-prompt`
- Token estimate in system prompt dialog — `~N tokens (rough)` shown in the actions bar when the prompt loads
- Widened AI chat option bounds — timeout, record counts, duration, memory, allocations, and read-size fields all accept a broader range of values
- Clearer CLI error reporting — out-of-range flag values now report the flag name and allowed range instead of silently clamping; all bad flags reported together
- Record-count cap fix — `read_info`, `bam_mods`, `window_reads`, and `seq_table` now correctly honour sandbox record-count caps even when the Python script passes its own `limit` argument
See the full CHANGELOG for details.
0.2.4
Highlights
- `SYSTEM_APPEND.md` support — place a file with that name in the BAM analysis directory to append domain-specific instructions to the default system prompt; loaded once per session with a 64 KB size cap; available in both GUI and CLI
- Sensitive file blocking — best-effort blocking of keys, certificates, dotenv, SSH keys, and GPG files from `read_file` and `ls`; consent dialog and CLI banner show a notice
- `/dump_system_prompt` CLI slash command — dumps the static system prompt to a file in `ai_chat_output/` at any point in the session
- "View System Prompt" button — shows the static initial prompt built from current Advanced Options; includes a Copy button
- Sandbox `print()` cap — output capped at 1 MB per execution, truncated at a UTF-8 boundary; a `printsTruncated` flag signals clipping; can be altered in `nanalogue-sandbox-exec`
- Copy-to-clipboard button in the sandbox code panel
- `nanalogue-sandbox-exec` CLI — run Python scripts directly in the Monty sandbox without LLM involvement
- `write_file` writes to the allowed directory instead of a fixed `ai_chat_output/` subdirectory
- `max_tokens` vs `max_completion_tokens` chosen per endpoint (Mistral/chutes.ai use `max_tokens`; others use `max_completion_tokens`)
- Extra BED fields displayed in the swipe info strip
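Truncating at a UTF-8 boundary, as the `print()` cap above does, means clipping the byte stream without leaving a broken partial character at the end. A sketch of one common way to do this; the helper is illustrative, not the project's code:

```python
def truncate_utf8(text: str, max_bytes: int) -> tuple[str, bool]:
    """Clip text to at most max_bytes of UTF-8 without splitting a character.

    Returns (clipped_text, was_truncated), mirroring a printsTruncated-style flag.
    """
    raw = text.encode("utf-8")
    if len(raw) <= max_bytes:
        return text, False
    # errors="ignore" silently drops any trailing partial multi-byte sequence
    return raw[:max_bytes].decode("utf-8", errors="ignore"), True
```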
See the full CHANGELOG for details.
0.2.3
0.2.2
Highlights
- Standalone CLI (`nanalogue-chat`) — terminal REPL for LLM-powered BAM analysis without Electron
- Native fetch rewrite — the chat orchestrator replaced the Vercel AI SDK with a direct fetch loop for Python code sandbox execution
- Configurable sandbox limits — `maxDurationSecs`, `maxMemoryMB`, `maxAllocations`, with CLI flags and UI controls
- `/dump_llm_instructions` and `/exec` slash commands for inspecting LLM payloads and running Python files directly
See the full CHANGELOG for details.
0.2.0
Highlights
- AI Chat mode — ask natural-language questions about BAM files using any OpenAI-compatible endpoint (local Ollama, remote API, etc.) with sandboxed code execution via Pydantic's Monty and Vercel's AI SDK
- Sequences tab in QC — per-read modification highlighting with quality tooltips, row selection, read ID copy, and multi-alignment support
- CRAM file support across all modes (QC, Swipe, Locate Reads)
- Advanced QC filtering — MAPQ, read type, length, read ID file, base quality and probability thresholds
- Configurable window size in Swipe mode, replacing the hardcoded 300-base default
- Exit watchdog for reliable app shutdown even when native addon calls block the event loop
- QC pagination with streaming histograms — reduces peak memory for large BAM files
- Deterministic sample seed for reproducible QC subsampling
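The deterministic sample seed above is the standard trick of seeding a private RNG so that repeated QC runs select the same reads. A hedged sketch of that pattern; the function name and default seed are illustrative:

```python
import random

def subsample_reads(read_ids: list[str], k: int, seed: int = 42) -> list[str]:
    """Pick k reads reproducibly: the same seed always selects the same subset."""
    rng = random.Random(seed)  # private RNG; does not disturb global random state
    return rng.sample(read_ids, min(k, len(read_ids)))
```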
Other changes
- Version button and dialog on landing page
- QC loading overlay with per-source progress counters
- Connection status indicator for AI Chat endpoint
- Dependency updates: `@nanalogue/node` ^0.1.4, `ai` ^6.0.94, `@pydantic/monty` ^0.0.7, and more
See the full CHANGELOG for details.
0.1.2
See CHANGELOG.md for full details.
Highlights:
- Locate reads mode — new mode for converting BAM + read ID file to BED format with region filtering
- TSV download — export per-read whole-read density data from QC analysis
- Reusable custom elements — `<output-file-input>`, `<bam-resource-input>`, `<mod-filter-input>`
- QC pipeline optimization — streaming histograms, parallelized data retrieval
- CLI removed — application is now GUI-only
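Locate Reads mode converts read hits into BED records, which are tab-separated with 0-based, half-open coordinates. A minimal sketch of the formatting shape only; the helper name is illustrative, not the app's internals:

```python
def to_bed_line(chrom: str, start: int, end: int, read_id: str) -> str:
    """Format one BED record; BED coordinates are 0-based and half-open."""
    return f"{chrom}\t{start}\t{end}\t{read_id}"
```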