Releases: spydra-tech/trusys-llm-security-scan-action
v1.0.2 – Trusys LLM Security Scan for GitHub Actions
Trusys LLM Security Scan – GitHub Action
Run trusys-llm-scan in your workflows to find LLM security issues (prompt injection, unsafe model usage, data exposure) across OpenAI, Anthropic, Langchain, LlamaIndex, Hugging Face, Azure, and AWS Bedrock.
Highlights
- Zero config – Add the action; it installs the scanner from PyPI and scans your repo (default: whole repo, SARIF, Security tab).
- Python 3.11+ – Default `python-version` is 3.11 (required by trusys-llm-scan).
- SARIF → Code Scanning – Results show in Security → Code scanning; set the optional `upload-sarif-to-github: false` to upload the SARIF in your own workflow for full logs.
- AI filtering – Optional `enable-ai-filter` with OpenAI or Anthropic; installs `openai`/`anthropic` when enabled.
- Backend upload – Optional `upload-endpoint`, `application-id`, and `api-key` to send results to your API.
- Report-only runs – `no-fail-on-findings: true` keeps the job green when findings exist; SARIF still uploads.
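The optional inputs above can be combined in a single step. A sketch (input names are those listed above; the endpoint URL, variable, and secret names are placeholders):

```yaml
- uses: actions/checkout@v4
- name: Run LLM Security Scan
  uses: spydra-tech/trusys-llm-security-scan-action@v1
  with:
    python-version: "3.11"
    enable-ai-filter: true
    no-fail-on-findings: true                        # report-only: job stays green
    upload-sarif-to-github: false                    # upload SARIF yourself for full logs
    upload-endpoint: https://example.com/api/scans   # placeholder URL
    application-id: ${{ vars.TRUSYS_APP_ID }}        # placeholder variable name
    api-key: ${{ secrets.TRUSYS_API_KEY }}           # placeholder secret name
```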
Quick start
```yaml
- uses: actions/checkout@v4
- name: Run LLM Security Scan
  uses: spydra-tech/trusys-llm-security-scan-action@v1
```
Fix action.yml validation for GitHub Actions
v1.0.1 – Fix composite action validation
What changed
- Fixed: Removed the invalid `shell` key from steps that call other actions (`uses:`).
  - In composite actions, `shell` is only allowed on steps that have `run:`.
  - This was causing `Unexpected value 'shell'` at lines 103 and 109 when the action was used from the Marketplace (e.g. in `spydra-tech/trusys-llm-security-scan-action@v1.0.0`).
- Steps that run scripts ("Install LLM Security Scanner", "Run LLM Security Scan") still use `shell: bash` and are unchanged.
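The rule can be illustrated with a minimal composite `action.yml` sketch (step names and the scan command are illustrative, not the action's actual steps):

```yaml
runs:
  using: composite
  steps:
    - name: Upload SARIF
      uses: github/codeql-action/upload-sarif@v3   # a `uses:` step – no `shell` key allowed here
      with:
        sarif_file: results.sarif
    - name: Run scan
      shell: bash                                  # `shell` is valid only on `run:` steps
      run: trusys-llm-scan .
```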
Upgrade
If you were seeing validation errors with @v1.0.0, update your workflow to:
```yaml
uses: spydra-tech/trusys-llm-security-scan-action@v1.0.1
```
v1.0.0 – Scan your repo for LLM security issues in CI
LLM Security Scan – GitHub Action v1.0.0
Run trusys-llm-scan in your GitHub Actions workflow to find LLM security issues (prompt injection, unsafe model usage, data exposure) across OpenAI, Anthropic, Langchain, LlamaIndex, Hugging Face, Azure, and AWS Bedrock.
Highlights
- Zero config – Add the action to a workflow; it installs the scanner from PyPI and scans your repo.
- SARIF by default – Results show in the repository Security tab (Code scanning).
- Flexible – Custom paths, severity filters, exclude/include patterns, and output format (console, JSON, SARIF).
- Optional AI filtering – Use OpenAI or Anthropic to reduce false positives (requires API key in secrets).
- Optional backend upload – Send results to your own API.
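AI filtering needs a provider key available to the scan step. One way to wire this up, assuming the scanner reads the standard `OPENAI_API_KEY` environment variable (the secret name and env-var convention are assumptions, and `enable-ai-filter` is the input named in the v1.0.2 notes above):

```yaml
- name: Run LLM Security Scan
  uses: spydra-tech/trusys-llm-security-scan-action@v1
  with:
    enable-ai-filter: true
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}  # assumed env-var name; store the key in repo secrets
```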
Quick start
Add a workflow (e.g. .github/workflows/llm-security-scan.yml):
```yaml
name: LLM Security Scan
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run LLM Security Scan
        uses: spydra-tech/trusys-llm-security-scan-action@v1
```