This repository contains experiments with AI-driven parsers for analyzing vulnerabilities, extracting package URLs (PURLs), and determining affected/fixed version ranges.
You can interact with all parsers through the `VulnerabilityAgent` class, which provides a single entry point for:

- extracting the Package URL (PURL) from a vulnerability summary or from a CPE string, and
- determining the affected and fixed version ranges for a package.
To extract a PURL and version ranges from a vulnerability summary:

- Create an instance of the `VulnerabilityAgent`:

  ```python
  instance = VulnerabilityAgent()
  ```

- Get the Package URL (PURL) for the given summary:

  ```python
  purl = instance.get_purl_from_summary(summary)  # Output: pkg:pypi/django-helpdesk
  ```

  Ensure the `summary` variable contains the relevant information to extract the PURL.
- Get the version ranges (affected and fixed versions) from the summary:

  ```python
  version_ranges = instance.get_version_ranges(summary, purl.type)
  ```

  This returns a tuple containing two lists:

  - `affected_versions`: versions affected by the vulnerability
  - `fixed_versions`: versions where the vulnerability has been fixed

  Example output:

  ```python
  print(version_ranges)  # Output: ([affected_versions], [fixed_versions])
  ```
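Putting these steps together, a minimal end-to-end sketch might look like the following. The import path (`vulnerability_agent`) and the advisory text are assumptions made for illustration; adjust them to the actual module layout and your input data.

```python
# Minimal sketch, assuming `VulnerabilityAgent` is importable from a module
# named `vulnerability_agent` (adjust the import to the real package layout).
from vulnerability_agent import VulnerabilityAgent

# Hypothetical advisory text used purely for illustration.
summary = (
    "django-helpdesk before 0.3.0 allows stored XSS via ticket descriptions; "
    "the issue is fixed in version 0.3.0."
)

instance = VulnerabilityAgent()

# Extract the Package URL from the free-text summary.
purl = instance.get_purl_from_summary(summary)  # e.g. pkg:pypi/django-helpdesk

# Extract the version ranges, passing the package type taken from the PURL.
affected_versions, fixed_versions = instance.get_version_ranges(summary, purl.type)

print(purl)
print("Affected:", affected_versions)
print("Fixed:", fixed_versions)
```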
To extract a PURL from a CPE string:

- Create an instance of the `VulnerabilityAgent`:

  ```python
  instance = VulnerabilityAgent()
  ```

- Get the Package URL (PURL) for the given CPE:

  ```python
  cpe = "cpe:2.3:a:django-helpdesk_project:django-helpdesk:-:*:*:*:*:*:*:*"
  pkg_type = "pypi"
  purl = instance.get_purl_from_cpe(cpe, pkg_type)
  print(purl)  # Output: pkg:pypi/django-helpdesk
  ```

  Ensure the `cpe` variable contains the relevant information to extract the PURL.
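When an advisory provides both a CPE string and a free-text summary, the two entry points can be combined. The helper below is purely illustrative (it is not part of the repository's API) and the import path is an assumption; it simply prefers the structured CPE and falls back to the summary.

```python
from typing import Optional

# Import path is an assumption; adjust to the real module layout.
from vulnerability_agent import VulnerabilityAgent


def resolve_purl(agent: VulnerabilityAgent, summary: str, cpe: Optional[str], pkg_type: str):
    """Illustrative helper: prefer the structured CPE, fall back to the summary."""
    if cpe:
        return agent.get_purl_from_cpe(cpe, pkg_type)
    return agent.get_purl_from_summary(summary)


instance = VulnerabilityAgent()
purl = resolve_purl(
    instance,
    summary="django-helpdesk before 0.3.0 allows stored XSS.",  # hypothetical text
    cpe="cpe:2.3:a:django-helpdesk_project:django-helpdesk:-:*:*:*:*:*:*:*",
    pkg_type="pypi",
)
print(purl)  # Expected: pkg:pypi/django-helpdesk
```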
To configure the model source, set the appropriate environment variables. You can choose between a local LLM served via Ollama and the OpenAI API.

If you want to use a local LLM, set the `USE_LOCAL_LLM_MODEL` environment variable to `True` and provide the necessary details for the local model:

- Set the following environment variables:

  ```
  OLLAMA_MODEL_NAME="your_model_name"
  OLLAMA_BASE_URL="http://your_local_model_url"
  ```
If you prefer to use OpenAI's API, set the `OPENAI_API_KEY` environment variable along with the model name:

- Set the following environment variables:

  ```
  OPENAI_API_KEY="your_openai_api_key"
  OPENAI_MODEL_NAME="gpt-4o-mini"
  ```
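As an alternative to exporting these variables in a shell, they can be set programmatically before the agent is created. The sketch below shows only one way to do this; the variable names come from the section above, while the import path and the assumption that the agent reads its configuration at construction time are illustrative.

```python
import os

# Option 1: local model served by Ollama (placeholder values).
os.environ["USE_LOCAL_LLM_MODEL"] = "True"
os.environ["OLLAMA_MODEL_NAME"] = "your_model_name"
os.environ["OLLAMA_BASE_URL"] = "http://your_local_model_url"

# Option 2: OpenAI API (use instead of Option 1; leave USE_LOCAL_LLM_MODEL unset).
# os.environ["OPENAI_API_KEY"] = "your_openai_api_key"
# os.environ["OPENAI_MODEL_NAME"] = "gpt-4o-mini"

# Configure the environment before creating the agent, assuming the agent
# reads these variables when it is instantiated.
from vulnerability_agent import VulnerabilityAgent  # import path is an assumption

instance = VulnerabilityAgent()
```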