Releases: spydra-tech/trusys-llm-security-scan-action

v1.0.2 – Trusys LLM Security Scan for GitHub Actions

29 Jan 12:37

Trusys LLM Security Scan – GitHub Action

Run trusys-llm-scan in your workflows to find LLM security issues (prompt injection, unsafe model usage, data exposure) across OpenAI, Anthropic, Langchain, LlamaIndex, Hugging Face, Azure, and AWS Bedrock.

Highlights

  • Zero config – Add the action; it installs the scanner from PyPI and scans your repo (default: whole repo, SARIF, Security tab).
  • Python 3.11+ – Default python-version is 3.11 (required by trusys-llm-scan).
  • SARIF → Code Scanning – Results show in Security → Code scanning; set upload-sarif-to-github: false if you want to upload the SARIF in your own workflow step and keep the full logs.
  • AI filtering – Optional enable-ai-filter with OpenAI or Anthropic; installs openai/anthropic when enabled.
  • Backend upload – Optional upload-endpoint, application-id, and api-key to send results to your API.
  • Report-only runs – no-fail-on-findings: true keeps the job green when findings exist; the SARIF is still uploaded.
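
The optional inputs above can be combined in a single step. A minimal sketch — the input names (enable-ai-filter, no-fail-on-findings) are taken from the highlights, but the secret name OPENAI_API_KEY and the way the key is passed (as an env var) are assumptions; check the action's README for the exact mechanism:

```yaml
name: LLM Security Scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required for SARIF upload to Code Scanning
    steps:
      - uses: actions/checkout@v4
      - uses: spydra-tech/trusys-llm-security-scan-action@v1
        with:
          enable-ai-filter: true       # installs openai/anthropic when enabled
          no-fail-on-findings: true    # report-only: job stays green
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}  # assumed secret name
```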

Quick start

steps:
  - uses: actions/checkout@v4
  - name: Run LLM Security Scan
    uses: spydra-tech/trusys-llm-security-scan-action@v1

Fix action.yml validation for GitHub Actions

28 Jan 08:07

v1.0.1 – Fix composite action validation

What changed

  • Fixed: Removed invalid shell key from steps that call other actions (uses:).
    • In composite actions, shell is only allowed on steps that have run:.
    • This was causing: Unexpected value 'shell' at lines 103 and 109 when the action was used from the Marketplace (e.g. in spydra-tech/trusys-llm-security-scan-action@v1.0.0).
  • Steps that run scripts ("Install LLM Security Scanner", "Run LLM Security Scan") still use shell: bash and are unchanged.
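
The rule this release fixes can be illustrated with a composite-action fragment. This is a sketch, not the actual action.yml — only the step names and the scanner's PyPI name come from these notes:

```yaml
runs:
  using: composite
  steps:
    - uses: actions/setup-python@v5    # a `uses:` step — `shell` is NOT allowed here
      with:
        python-version: "3.11"
    - name: Install LLM Security Scanner
      shell: bash                      # allowed: this step has `run:`
      run: pip install trusys-llm-scan
```

In composite actions, `shell` is mandatory on `run:` steps and rejected everywhere else, which is why the validator flagged the `uses:` steps in v1.0.0.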

Upgrade

If you were seeing validation errors with @v1.0.0, update your workflow to:

uses: spydra-tech/trusys-llm-security-scan-action@v1.0.1

v1.0.0 – Scan your repo for LLM security issues in CI

28 Jan 07:36

LLM Security Scan – GitHub Action v1.0.0

Run trusys-llm-scan in your GitHub Actions workflow to find LLM security issues (prompt injection, unsafe model usage, data exposure) across OpenAI, Anthropic, Langchain, LlamaIndex, Hugging Face, Azure, and AWS Bedrock.

Highlights

  • Zero config – Add the action to a workflow; it installs the scanner from PyPI and scans your repo.
  • SARIF by default – Results show in the repository Security tab (Code scanning).
  • Flexible – Custom paths, severity filters, exclude/include patterns, and output format (console, JSON, SARIF).
  • Optional AI filtering – Use OpenAI or Anthropic to reduce false positives (requires API key in secrets).
  • Optional backend upload – Send results to your own API.
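
A sketch of a customized scan step. The input names below (scan-path, severity, output-format) are hypothetical placeholders for the capabilities listed above — consult the action's README for the real input names:

```yaml
- uses: spydra-tech/trusys-llm-security-scan-action@v1
  with:
    scan-path: src/          # assumed name: limit the scan to a directory
    severity: high           # assumed name: filter findings by severity
    output-format: sarif     # assumed name: console, JSON, or SARIF
```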

Quick start

Add a workflow (e.g. .github/workflows/llm-security-scan.yml):

name: LLM Security Scan
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run LLM Security Scan
        uses: spydra-tech/trusys-llm-security-scan-action@v1