Automated triage of JavaScript/TypeScript security findings using LLM agents with semantic code analysis.
```bash
# 1. Set environment variables
export OPENAI_API_KEY="your-key"
export REPO_ROOT="/path/to/your/js/code"
export OUT_README_DIR="./test/"
export PROMPT_TYPE="DOMXSS"
export ENABLE_LSP="true" # Recommended for semantic analysis

# 2. Run analysis
node index.js findings.jsonl
```

trAIger analyzes security findings in JavaScript/TypeScript code using:
- 5 LLM Tools: `grepJSON`, `listFiles`, `readRange`, `lspGetDefinition`, `lspGetReferences`
- Semantic Analysis: LSP-powered code understanding for accurate taint flow tracing
- Modular Prompts: Different analysis types via the `prompts/` directory
💡 Recommendation: Enable LSP tools (`ENABLE_LSP=true`) for superior semantic analysis. LSP tools provide cross-file reference tracking, precise symbol definitions, and intelligent code clustering, all essential for accurate vulnerability triage.
JSONL file containing security findings with required fields:
- `ruleId` - Unique identifier for the rule/vulnerability type
- `file` - Path to the JavaScript/TypeScript file
- `line` - Line number where the issue occurs
Example:
```jsonl
{"ruleId": "DOMXSS", "file": "src/app.js", "line": 123, "message": "Unsanitized input flows to innerHTML"}
{"ruleId": "CVE-2023-1234", "file": "package.json", "version": "1.2.3", "component": "jquery"}
{"ruleId": "SensitiveData", "file": "config.js", "line": 45, "data": "API_KEY"}
```

Supported formats:
- Flattened SARIF results
- RetireJS vulnerability reports
- Custom taint flow analysis
- Any JSONL vulnerability list
Custom Formats: If you use a new JSONL format or special result names, create a custom prompt in the `prompts/` directory to guide the LLM on how to analyze your specific vulnerability types and output format.
| Variable | Description | Example |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI API key | `sk-proj-...` |
| `REPO_ROOT` | Path to JavaScript/TypeScript codebase | `/tmp/extracted-js/` |
| `OUT_README_DIR` | Output directory for results | `./test/` |
| `PROMPT_TYPE` | Analysis prompt to use | `DOMXSS`, `SensitiveData`, `retirejs` |
| `ENABLE_LSP` | Enable LSP tools for semantic analysis | `true` (recommended) or `false` |
| Variable | Description | Default |
|---|---|---|
| `ENABLE_LSP` | Enable LSP tools (requires LSP server) | `true` |
| `PROVIDER` | LLM provider | `openai` |
| `MODEL` | LLM model | `gpt-5` |
| `TEMPERATURE` | Sampling temperature | `1` |
| `VERBOSE` | Enable detailed logging | `false` |
| `LOG_FILE_DIR` | Log directory | `./test/` |
| `MAX_TOOL_CALLS` | Max tool calls per finding | `100` |
| `MAX_ITERATIONS` | Max agent iterations | `50` |
| `MAX_KB_TOOL_USE` | Max KB per tool output | `50` |
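These defaults can be resolved with the usual `process.env` fallback pattern. A sketch follows; the variable names and default values come from the table above, but the real `index.js` may structure its configuration differently.

```javascript
// Sketch: resolve trAIger's configuration from the environment,
// falling back to the defaults listed in the table above.
function loadConfig(env = process.env) {
  return {
    enableLsp: (env.ENABLE_LSP ?? "true") === "true",
    provider: env.PROVIDER ?? "openai",
    model: env.MODEL ?? "gpt-5",
    temperature: Number(env.TEMPERATURE ?? 1),
    verbose: (env.VERBOSE ?? "false") === "true",
    logFileDir: env.LOG_FILE_DIR ?? "./test/",
    maxToolCalls: Number(env.MAX_TOOL_CALLS ?? 100),
    maxIterations: Number(env.MAX_ITERATIONS ?? 50),
    maxKbToolUse: Number(env.MAX_KB_TOOL_USE ?? 50),
  };
}
```

Note that env vars are always strings, so booleans and numbers need explicit conversion, as above.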
Prompts define analysis behavior. The file name in `prompts/` must match `PROMPT_TYPE`:

- `DOMXSS.js` - DOM-based XSS taint flow analysis
- `SensitiveData.js` - Sensitive data exposure detection
- `retirejs.js` - JavaScript dependency vulnerability analysis
Custom Prompts: Create your own prompt file in `prompts/` for custom JSONL formats or specialized analysis. The prompt should guide the LLM on how to analyze your specific vulnerability types and output format.
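As an illustration only, a custom prompt module might look like the skeleton below. The export shape here is an assumption; mirror whatever `prompts/DOMXSS.js` actually exports in your checkout, and `MyCustomRule` is a hypothetical rule name.

```javascript
// Hypothetical skeleton for prompts/MyCustomRule.js.
// The fields below are illustrative, not trAIger's actual prompt API.
const myCustomPrompt = {
  // System instructions telling the agent what the finding means
  // and how to verify it with the available tools.
  system: `You are triaging "MyCustomRule" findings. For each finding,
read the reported line with readRange, trace callers with
lspGetReferences, and classify the finding.`,
  // Required fields of each JSONL record for this rule.
  fields: ["ruleId", "file", "line"],
  // Output contract the agent must follow.
  outputFormat: "true_positive | false_positive | uncertain",
};

module.exports = myCustomPrompt;
```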
For semantic code analysis, you need to run the TypeScript Language Server in a Docker container.
```bash
cd lspServer
docker build -t trAIger-lsp .
```

```bash
# Basic setup
docker run -d --name trAIger-lsp \
  -p 2089:2089 \
  -v /path/to/your/code:/workspace \
  trAIger-lsp

# With resource limits (recommended)
docker run -d --name trAIger-lsp \
  -p 2089:2089 \
  -v /path/to/your/code:/workspace \
  --memory="8g" --cpus="8" \
  trAIger-lsp
```

```bash
# Test LSP functionality
cd lspServer && node lspClientTest.js
```

Dockerfile Features:
- Node.js 20-slim base image
- TypeScript Language Server installed globally
- TCP bridge on port 2089
- Optimized for JavaScript/TypeScript analysis
Environment Variables:
- `LSP_HOST=127.0.0.1` (default)
- `LSP_PORT=2089` (default)
- `LSP_ROOT=/workspace` (container path)
- `HOST_WORKSPACE=/path/to/your/code` (host path)
Note: Set `ENABLE_LSP=false` to disable LSP tools and use only the basic tools (`grepJSON`, `listFiles`, `readRange`) for a simpler setup.
Container won't start:
```bash
# Check Docker logs
docker logs trAIger-lsp

# Restart container
docker restart trAIger-lsp
```

LSP connection fails:

```bash
# Verify port is open
netstat -tlnp | grep 2089

# Test TCP connectivity
nc -zv localhost 2089
```

Performance issues:

- Increase container memory: `--memory="8g"`
- Increase CPU cores: `--cpus="8"`
- Check host system resources
See the `test/` directory for:

- Input: `test.jsonl` - sample vulnerability findings
- Output: `*-SARIF_README.md` - analysis results
- Logs: `*-agent-conversation.log` - detailed execution logs
```bash
# DOMXSS analysis
PROMPT_TYPE="DOMXSS" node index.js sarif-findings.jsonl

# RetireJS analysis
PROMPT_TYPE="retirejs" node index.js retirejs-output.jsonl

# With verbose logging
VERBOSE=true node index.js findings.jsonl
```

Results are saved to `OUT_README_DIR` as timestamped markdown files containing:
- Assessment (true_positive/false_positive/uncertain)
- Confidence score
- Detailed reasoning with source→sink path
- Transformation analysis
- Exploitability assessment
- Node.js 20+
- Docker (for LSP container)
- LLM API key (OpenAI/Gemini/Claude)
```bash
npm install langchain @langchain/openai zod execa
```

Note: Designed specifically for JavaScript/TypeScript codebases. LSP tools provide semantic understanding for accurate vulnerability analysis.
