This changelog documents all updates to this framework.
No changes are required at this time.
- None
- As the SCR endpoint isn't currently used by the register-LLMs.py script, it has been made optional
- Implemented a fix so that the fact sheet entry is no longer required for LLMs
In order for these changes to take effect, you need to update the source code of the SAS Portal Framework for SAS Viya. If you have deployed the LLM containers in Kubernetes, no additional changes are required. If you have deployed them as an Azure Container App or Azure Container Instance, please run the prompt-builder-json.py utility script with the additional option -dt aca to enable the prompt builder for these deployments as well.
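The -dt aca option mentioned above might be parsed along these lines. This is only a hedged sketch: the changelog names the -dt flag and its aca value, but the script's actual interface (long option name, default, help text) is assumed here for illustration.

```python
import argparse

# Sketch of the deployment-type flag for prompt-builder-json.py.
# Only "-dt" and the values "k8s"/"aca" come from the changelog; the
# long option name and the default are assumptions for this example.
parser = argparse.ArgumentParser(description="prompt-builder-json.py flag sketch")
parser.add_argument(
    "-dt", "--deployment-type",
    choices=["k8s", "aca"], default="k8s",
    help="aca = Azure Container App/Instance, k8s = Kubernetes",
)
args = parser.parse_args(["-dt", "aca"])
print(args.deployment_type)  # → aca
```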
- Ability for the prompt builder to also communicate with LLM containers deployed as Azure Container Apps or Azure Container Instances
- Prompt Templates now check for an environment variable called LLMCONTAINERPATH, which can be set to provide an environment-independent path to the endpoint where the LLMs are hosted.
- Documentation on how to set the environment variable
- None
- None
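The environment-variable check introduced in this release can be sketched as a simple lookup with a fallback. The variable name LLMCONTAINERPATH comes from the changelog; the default fallback URL below is purely illustrative and not part of the framework.

```python
import os

# Fall back to a hypothetical default when LLMCONTAINERPATH is unset; the
# prompt templates use this variable to build an environment-independent
# path to the endpoint where the LLMs are hosted.
DEFAULT_LLM_PATH = "http://localhost:8080/llm"  # illustrative default only
llm_path = os.environ.get("LLMCONTAINERPATH", DEFAULT_LLM_PATH)
print(llm_path)
```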
No changes are required at this time.
- Utility script for recreating the Prompt Builder JSON
- None
- None
No changes are required at this time.
- Improve documentation
- None
- None
No changes are required at this time.
- None
- None
- The verify_ssl option could previously not be set to false for the Python scripts
No changes are required at this time.
- None
- The register-LLMs.py script no longer requires an entry in the LLM Fact Sheet.
- None
No changes are required at this time.
- Significantly expanded the documentation pages
- Optional argument for the Model-Manager-Setup.py script to choose the deployment type: k8s or aca
- Added .venv to the .gitignore
- The Model-Manager-Setup.py script now provides bash commands instead of PowerShell
- None
No changes are required at this time.
- None
- None
- Fixed an argument error in the publish-Embedding script and the tag value in register-LLMs
No changes are required at this time.
- SECURITY.md and CONTRIBUTING.md
- Source code headers for copyright and license information added
- None
- None
No changes are required at this time.
- BGE Small, Base and Large EN v1.5
- Attribution note in the main README
- The register-LLMs.py now supports the additional metadata items provided by SAS Model Manager 2025.08+
- The register-LLMs.py now requires an additional parameter, -e, which specifies the endpoint (the same as for publish-LLMs.py); this enables further metadata integration with SAS Model Manager
- Typo in the Gemma Embedding model folder path
No changes are required at this time.
- All MiniLM L6 v2, an open-weight embedding model
- Embedding Gemma 300M, an open-weight embedding model
- Changed the model function from classification to text generation to make use of the new features within SAS Model Manager as of 2025.08
- None
No changes are required at this time.
- None
- None
- Fix in the Base_Definition baseScore.py LLM code
No changes are required at this time.
- Two images added in preparation for upcoming documentation enhancements
- Update main README.md
- Updated two minor versions in the Changelog as they were stuck on 0.1.20
No changes are required at this time.
- README for Tools
- None
- Incorrectly formatted API_KEYS in embedding models
No changes are required at this time.
- New Tools folder and the first tool contribution for websearch
- None
- None
Run ./SAS-Viya-Tool-Integrations/SAS-Intelligent-Decisioning-Integration/Update-Custom-SAS-Intelligent-Decisioning-Node.sas in your environment to get the update. The script has been validated not to require any changes to your existing messages, but it expands the supported character limits of both the llmBody and llmGenerated variables to the maximum length of 10,485,760 characters.
- Update-Custom-SAS-Intelligent-Decisioning-Node.sas is available as an update script for existing Call LLM nodes to support the increased character limit.
- A new entry in the Troubleshooting-Guide.md that explains the Duplicate Variable error.
- Renamed Non-SAS-Viya-Tool-Integrations and SAS-Viya-Tool-Integrations to Non-SAS-Viya-Integrations and SAS-Viya-Integrations to better reflect that these aren't tools themselves but rather integration points. The documentation has been updated accordingly.
- Create-Custom-SAS-Intelligent-Decisioning-Node.sas now provides a character limit for the llmBody and llmGenerated variables of 10,485,760 characters (the maximum supported by all publishing destinations).
No changes are required at this time.
- Token-Calculator.html now has an additional input field where you can specify how often a prompt is run per day
- None
- None
No changes are required at this time.
- Gemini Flash Lite 2.5, Flash 2.5 and Pro 2.5 have been added
- The llm_fact_sheet.csv was updated alongside the introduced models
- Implemented Issue 30 - requires an update of the Portal Framework by switching to the aaia branch
- Tag documentation for LLM Definitions
- Gemini Flash 1.5 001 and 002 are deprecated by Google and have received that tag accordingly
- Claude 2.0 and 2.1 used a Legacy tag; this has been changed to use the deprecated tag instead
- None
The Token-Calculator.html and LLM-Details-Page.html need to be uploaded to a web server (e.g. the one used for the Prompt Builder UI, or wherever the customer stores Data Driven Content object sources).
- Token-Calculator.html, an in-report utility that calculates the tokens used by a prompt and multiplies them by the pricing data from the llm_fact_sheet.csv
- LLM-Details-Page.html, an in-report utility that displays the information from the llm_fact_sheet.csv as a lightweight model card
- In Load-Fact-Sheets.sas the default path has been updated.
- LLM - Get All Prompts.step has been updated to remove a warning caused by an incorrectly spelled macro variable.
- Implemented fix suggested by Issue 29
- Fixed a typo in the phi_35_mini model id in the llm_fact_sheet.csv
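The estimate Token-Calculator.html performs boils down to multiplying a prompt's token count by the model's price from the fact sheet. A minimal sketch of that arithmetic, assuming pricing is quoted per 1,000 tokens (the actual schema of llm_fact_sheet.csv is not specified here):

```python
# Rough cost estimate: tokens used by a prompt multiplied by the model's
# price from the fact sheet. Per-1K-token pricing is an assumption for
# this example, not a documented property of llm_fact_sheet.csv.
def estimate_cost(token_count: int, price_per_1k_tokens: float) -> float:
    return token_count / 1000 * price_per_1k_tokens

# e.g. a 250-token prompt at a hypothetical $0.50 per 1K tokens
print(estimate_cost(250, 0.50))  # → 0.125
```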
No changes are required at this time.
- Update the LLM Fact Sheet to include all current models
- Add Load-Fact-Sheets.sas program to load the data into CAS
- Removed the tiktoken dependency for OpenAI models, as tokens are included in the response; this improves the total processing time
- None
No changes are required at this time.
- Get-All-Prompts.sas retrieves all prompting projects, models, and their experiments and turns them into a table for reporting
- LLM - Get All Prompts custom step introduced; it does the same as the script, just wrapped in a custom step
- LLM Fact Sheet entries for all Anthropic and Google models
- None
- None
No changes are required at this time.
- register-Embedding.py to register Embedding models to SAS Model Manager
- publish-Embedding.py to publish Embedding models to SCR
- Model-Manager-Setup.py now also creates the Embedding Model Project
- .gitignore now ignores the rag-builder.json which is used for the RAG Builder UI
- README.md was updated to reflect these changes
- Added additional embedding models from Voyage.ai
- The LLM base example was called _Base-Definitions; for consistency, this name was updated to _Base_Definitions
- None
This update requires you to switch to the prompt builder provided at https://github.com/sassoftware/sas-portal-framework-for-sas-viya
- Add Embedding Definitions
- Removed LLM Prompt Builder content from this repository and moved it to https://github.com/sassoftware/sas-portal-framework-for-sas-viya
- Leading and trailing blanks are now removed from the variables in the manifested prompts
- The name of the manifested prompt was based on the name of the prompt; this has now been fixed to adhere to proper Python package names
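A sanitization step like the naming fix above might look as follows. The exact rules the framework applies are not documented in this changelog, so the function name and the normalization logic are assumptions that merely illustrate converting an arbitrary prompt name into a valid Python package name.

```python
import re

# Hypothetical sketch: lowercase the prompt name, collapse any run of
# characters that is not a lowercase letter or digit into an underscore,
# and guard against a leading digit (invalid in Python identifiers).
def to_package_name(prompt_name: str) -> str:
    name = re.sub(r"[^0-9a-z]+", "_", prompt_name.lower()).strip("_")
    if name and name[0].isdigit():
        name = "_" + name
    return name

print(to_package_name("  My Prompt v2!  "))  # → my_prompt_v2
```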
This update requires you to update the ./js/objects/add-prompt-builder.js file and add the two lines at the end of the ./language/de.json and ./language/en.json files (it may be best to update the whole prompt builder section). Make sure to also empty your browser cache.
- New button that provides a link to the model, if one is selected
- A Troubleshooting-Guide.md was added
- None
- None
- The README.md chapter Modifying the SAS Portal Framework for SAS Viya has been removed as the Prompt Builder is now part of the main repository
No update is required, as this release is part of the design phase.
- Added Claude 2.0 as a first test
- None
- None
- Evals from the fact sheets - maybe an idea for the future in a different sheet
No update is required, as this release is part of the design phase.
- Base attributes for all default included models
- Added two additional attributes to the llm_fact_sheet.csv: model_id and deployment_type
- None
- None
No update is required, as this release is part of the design phase.
- Started designing the LLM fact sheet, which will be the new base for further reporting
- None
- None
- None
This update requires you to update the ./js/objects/add-prompt-builder.js file and add the two lines at the end of the ./language/de.json and ./language/en.json files (it may be best to update the whole prompt builder section). Make sure to also empty your browser cache.
- One new line in the language file to explain the best prompt
- An icon with hover text is now displayed next to the model name for the best response
- None
- None
- None
This update requires you to update the ./js/objects/add-prompt-builder.js file and add the two lines at the end of the ./language/de.json and ./language/en.json files (it may be best to update the whole prompt builder section). Make sure to also empty your browser cache.
- Two new lines in the language file to explain fastest and fewest token prompts
- Icons are displayed next to the model name if they had the fastest and/or fewest token prompts
- Icons display a hover text to explain themselves
- None
- None
- None
This update requires you to update the ./js/objects/add-prompt-builder.js file - make sure to also empty your browser cache.
- Base implementation for the fastest prompt and fewest token prompt has been added (no UI support yet)
- None
- None
This update requires you to update the ./js/objects/add-prompt-builder.js file - make sure to also empty your browser cache.
- None
- LLM calls are now done in parallel instead of in sequence; this should lead to a significant performance uplift for prompt engineers
- No more leading and trailing new lines in the manifested model
- Added missing semi-colons
- Fix hardcoded model in the model variable deletion
- Special characters (e.g. \n) no longer need to be escaped
- None
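The move from sequential to parallel LLM calls noted in this release follows a common fan-out pattern. The prompt builder itself is JavaScript, so this Python sketch only illustrates the idea; the model names and the call_llm body are placeholders, not the framework's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder for the real HTTP request to a model endpoint.
def call_llm(model: str, prompt: str) -> str:
    return f"{model}: ok"

models = ["model-a", "model-b", "model-c"]  # hypothetical model names

# Fan the calls out across a thread pool instead of looping sequentially;
# map() preserves the input order of the results.
with ThreadPoolExecutor(max_workers=len(models)) as pool:
    results = list(pool.map(lambda m: call_llm(m, "Hello"), models))
print(results)
```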
This update requires you to update the ./js/objects/add-prompt-builder.js file - make sure to also empty your browser cache.
- Model Responses are now renderd as Markdown instead of plain text if the model response contains Markdown syntax.
- None
- None
- None
- Documentation explaining the HF token
- Documentation was added on how to add Proprietary models
- A template for gpt_4o_mini_az_2024_07_18 was added showcasing how to deploy GPT models using Azure Cognitive Services
- Add default transfer package to implement Logging and Monitoring Assets
- Updated the Base-Definition options.json to be a collection of all options that are currently used across the models
- Improved the performance and robustness of the log parser code and custom step by moving to Python for processing
- Moved the createLLMRepository.sas script into the MM specific subfolder along with its documentation
- Moved the LLM - Log Parser.step into the Custom Step repository to be more consistent
- Typos in documentation
- None
- Two new utility functions have been added to the main portal-framework - get-model-variables.js and delete-model-variable.js
- New utility function has been added to the main portal-framework - validate-ds2-variable-name.js
- The userPrompt variable now has a description
- When using the prompt variable functionality in the userPrompt the variable is now checked for DS2 variable name compliance
- Removed an unnecessary API_KEY from the userPrompt
- The prompt experiment tracker file was called Prompt-Example-Tracker.json - it has been renamed to Prompt-Experiment-Tracker.json
- Fixed an issue in the create-model-content.js utility function via the main portal-framework
- Missing comma after top_p
- Manifesting a prompt multiple times would lead to the creation of duplicate variables; this has been fixed
- Fixed an issue where if only one input variable was provided it was not added
- Fixed an issue where if the user used a semi-colon for the last variable it created an empty input variable
- Prompt Experiments now persist when changing/creating projects/prompts
- API errors are now caught and returned
- Ensure valid variable names
- None
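The variable-name checks above (DS2 compliance via validate-ds2-variable-name.js) can be sketched with a regular expression. The JavaScript original is not reproduced here; this Python version assumes the common SAS naming rules (start with a letter or underscore, then letters, digits, or underscores, at most 32 characters), and the function name is hypothetical.

```python
import re

# Assumed DS2/SAS naming rules: leading letter or underscore, followed by
# letters, digits, or underscores, with a 32-character overall limit.
def is_valid_ds2_name(name: str) -> bool:
    return bool(re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]{0,31}", name))

print(is_valid_ds2_name("userPrompt"), is_valid_ds2_name("2bad-name"))  # → True False
```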
- Added CHANGELOG.md to the repository to communicate updates better in the future
- Added rules to .gitignore to ignore the SAS Viya CLI if present in the repository
- Added rule to .gitignore to ignore the SAS Viya CLI setup commands
- Added documentation on setting up the SAS Viya CLI
- Added documentation on the SAS Viya CLI setup commands
- Added the generation of the SAS Viya CLI setup commands to the Model-Manager-Setup.py script
- Added additional error messages to all Python scripts
- Added gpt-4o-mini-2025-01-01-preview, as an example for using Azure AI Foundry
- None
- None
- None