Releases · scottstirling/pi2llm
09-12-2025 v2.0
Features in Version 2.0
- Visual Analysis (New!): If you have access to a vision-enabled LLM (such as Qwen2.5-VL-7B-Instruct locally, or Google Gemini or OpenAI remote APIs), LLM Assistant for PixInsight can now send a snapshot of a selected nonlinear image along with its history and metadata for more thorough analysis.
- User-configurable, opt-in feature, enabled globally in Settings and optionally per image request on the main chat UI.
- The selected view's dimensions are checked before sending. As of Sept. 2025, vision LLMs generally accept images no larger than 2048 pixels on a side.
- If the selected view exceeds the configured maximum image dimensions (see the Settings), a copy is created dynamically and resized to fit within the supported maximum.
- The view is copied to a JPG file in the system temp directory, Base64-encoded and included in a JSON POST to the LLM.
- The temporary JPG is deleted after sending.
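The steps above can be sketched in Python, assuming an OpenAI-compatible chat-completions request format; the names `MAX_SIDE`, `needs_resize`, `scale_factor`, and `build_vision_payload` are illustrative, not the script's actual identifiers.

```python
import base64
import json

MAX_SIDE = 2048  # configurable maximum image dimension (pixels)

def needs_resize(width, height, max_side=MAX_SIDE):
    """Return True if either dimension exceeds the configured maximum."""
    return width > max_side or height > max_side

def scale_factor(width, height, max_side=MAX_SIDE):
    """Uniform scale that fits the longer side within max_side."""
    return min(1.0, max_side / max(width, height))

def build_vision_payload(jpeg_bytes, question, model="qwen2.5-vl-7b-instruct"):
    """Base64-encode the JPEG and wrap it in a data: URL inside an
    OpenAI-style multimodal message."""
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }

# Example: a 4096x2822 view must be scaled by 0.5 to fit 2048 on a side.
print(needs_resize(4096, 2822))   # True
print(scale_factor(4096, 2822))   # 0.5
payload = build_vision_payload(b"\xff\xd8\xff\xe0fake", "Describe this image.")
print(json.dumps(payload)[:40])
```

The data-URL form lets the image travel inside the JSON POST body, which is why the temporary JPG can be deleted immediately after sending.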
- Save/Load Configuration Profiles: Save and load configuration settings to a `.pi2llm.json` file.
- This makes it easy to switch between different LLM providers and to version or share configurations.
- NOTE: API tokens are saved in clear text in the JSON file.
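A minimal sketch of the save/load round trip described above; the key names are assumptions, not the script's actual schema. Note how the API token round-trips in clear text:

```python
import json
import tempfile
from pathlib import Path

def save_profile(path, settings):
    """Write the settings dict to a JSON profile file."""
    Path(path).write_text(json.dumps(settings, indent=2))

def load_profile(path):
    """Read a settings dict back from a JSON profile file."""
    return json.loads(Path(path).read_text())

profile = {
    "apiUrl": "http://localhost:1234/v1/chat/completions",
    "model": "qwen2.5-vl-7b-instruct",
    "apiToken": "sk-example",  # stored unencrypted -- protect this file
}
path = Path(tempfile.gettempdir()) / "demo.pi2llm.json"
save_profile(path, profile)
print(load_profile(path)["model"])  # qwen2.5-vl-7b-instruct
```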
- Improved Chat Experience:
- The chat prompt input is now a proper multi-line text box.
- The initial-configuration and default-settings-reset workflows have been reworked to remove obstacles.
- A bug with stale state change between configuration settings and chat UI has been fixed.
- URLs entered in the configuration are now validated for proper format.
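The kind of URL format check involved might look like the following sketch, which accepts only well-formed http(s) endpoints; the script's actual validation logic may differ.

```python
from urllib.parse import urlparse

def is_valid_endpoint_url(url):
    """Accept only well-formed http(s) URLs that include a host."""
    try:
        parts = urlparse(url)
    except ValueError:  # e.g. malformed IPv6 literal
        return False
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(is_valid_endpoint_url("http://localhost:1234/v1/chat/completions"))  # True
print(is_valid_endpoint_url("localhost:1234"))  # False (no scheme/host parsed)
```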
- System Prompt Updated:
- An image's metadata and history may be incomplete, and view names may be more ad hoc than informative, so the system prompt now warns the model about possible discrepancies in the data and tells it to prioritize the image itself when in doubt.
- LLM Response Error Messages:
- Error details returned by the LLM API are now surfaced to the user: if no model is selected, a wrong model name is given, or another vendor-specific error occurs, the specific details from the error message are output to the chat UI and the console.
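Extracting the useful detail string might be sketched as below, assuming the common OpenAI-style `{"error": {"message": ...}}` error body; other vendors nest details differently, so the raw body is the fallback. The function name is illustrative.

```python
import json

def extract_error_detail(status, body_text):
    """Prefer error.message from a JSON error body; fall back to raw text."""
    try:
        body = json.loads(body_text)
    except (ValueError, TypeError):
        return f"HTTP {status}: {body_text}"
    err = body.get("error") if isinstance(body, dict) else None
    if isinstance(err, dict) and "message" in err:
        return f"HTTP {status}: {err['message']}"
    return f"HTTP {status}: {body_text}"

print(extract_error_detail(404, '{"error": {"message": "model not found"}}'))
# HTTP 404: model not found
```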
- README Updated with Common LLM API Endpoints:
- Added a `Common LLM API Endpoints` table to the README.md for reference; more entries will be added as users request them.
Full Changelog: v1.1...v2.0
08-31-2025 v1.1
- Improved error message handling for server response errors.
- Updated README for the LMStudio default port.
- Default URL configuration now matches the default LMStudio server URL and port.
Public v1.0
First public release of the LLM Assistant for PixInsight script