Autonima is an LLM-assisted CLI for neuroimaging review workflows: search PubMed, screen studies, retrieve full text, parse coordinates, and export NiMADS artifacts for downstream meta-analysis.
Full documentation: https://adelavega.github.io/autonima/
Base install:

```bash
git clone git@github.com:adelavega/autonima.git
cd autonima
pip install -e .
```

Useful extras:

```bash
pip install -e .[llm]          # screening and other LLM-backed workflows
pip install -e .[meta]         # `autonima meta`
pip install -e .[readability]  # enhanced HTML extraction
pip install -e .[docs]         # local docs build
```

Generate a starting config:

```bash
autonima create-sample-config > config.yaml
```

Validate it:

```bash
autonima validate config.yaml
```

Run the pipeline:

```bash
autonima run config.yaml
```

If you omit OUTPUT_FOLDER, the CLI derives it from the config filename stem. For example:
- config: `projects/cue_reactivity/default.yaml`
- default output folder: `projects/cue_reactivity/default/`
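The derivation rule amounts to stripping the config file's extension and using the result as a directory path. A minimal `pathlib` sketch (illustrative only, not autonima's actual implementation):

```python
from pathlib import Path

def default_output_folder(config_path: str) -> Path:
    """Derive the default output folder from the config filename stem."""
    # Dropping the suffix turns projects/.../default.yaml into projects/.../default
    return Path(config_path).with_suffix("")

print(default_output_folder("projects/cue_reactivity/default.yaml"))
```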
You can still pass an explicit runtime output folder:
```bash
autonima run config.yaml runs/my_review
```

Run meta-analysis on the generated NiMADS outputs:

```bash
autonima meta runs/my_review/outputs
```

A sample `config.yaml` covering each pipeline stage:

```yaml
search:
  database: "pubmed"
  query: "schizophrenia AND working memory AND fMRI"
  max_results: 100

retrieval:
  sources:
    - pubget
  load_excluded: false

screening:
  abstract:
    model: "gpt-5-mini-2025-08-07"
    objective: "Identify fMRI studies of working memory in schizophrenia"
    inclusion_criteria:
      - Human participants
      - fMRI neuroimaging
  fulltext:
    model: "gpt-5-mini-2025-08-07"
    objective: "Identify fMRI studies of working memory in schizophrenia"
    inclusion_criteria:
      - Human participants
      - fMRI neuroimaging

parsing:
  parse_coordinates: false
  coordinate_model: "gpt-4o-mini"

output:
  directory: "results"

annotation:
  enabled: false
```

For the full sample config and field-by-field guidance, see the documentation: https://adelavega.github.io/autonima/
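Once parsed, a config like the one above is just a nested mapping. A minimal sanity check over its top-level sections (a hypothetical helper for illustration; `autonima validate` is the authoritative check, and the required-section list here is an assumption):

```python
# Assumed required top-level sections, mirroring the sample config above.
REQUIRED_SECTIONS = ("search", "retrieval", "screening", "parsing", "output")

def missing_sections(config: dict) -> list[str]:
    """Return required top-level sections absent from a parsed config."""
    return [s for s in REQUIRED_SECTIONS if s not in config]

# A parsed config would be a dict of this shape:
config = {
    "search": {"database": "pubmed", "max_results": 100},
    "retrieval": {"sources": ["pubget"], "load_excluded": False},
    "screening": {"abstract": {}, "fulltext": {}},
    "parsing": {"parse_coordinates": False},
    "output": {"directory": "results"},
}
print(missing_sections(config))  # -> []
```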