28 commits:

- 432f69a: updated new nilearn nv download (ljchang, May 1, 2024)
- 85067c8: added deprecation warning (ljchang, May 2, 2024)
- fb504d9: updated new nilearn nv download (ljchang, May 1, 2024)
- 2e24674: added deprecation warning (ljchang, May 2, 2024)
- 3900ff9: Merge remote-tracking branch 'refs/remotes/origin/fetch_data' into fe… (ejolly, May 21, 2025)
- a89623a: switch to uv and ensure tests pass and package builds (ejolly, May 21, 2025)
- 77baee0: initial docs revamp (ejolly, May 21, 2025)
- bf13d4a: convert sphinx gallery to jupyter notebooks (ejolly, May 21, 2025)
- b002732: update GA and fix formatting (ejolly, May 22, 2025)
- 0423abe: ga updates (ejolly, May 22, 2025)
- 29162cd: Fix linting and formatting errors (ejolly, May 22, 2025)
- 1227245: fix warnings 1 (ejolly, May 22, 2025)
- db7d3c2: fix warnings 2 (ejolly, May 22, 2025)
- 56220ba: fix us few more things (ejolly, May 22, 2025)
- fc056ad: revert r-to-z clipping (ejolly, May 22, 2025)
- 0480b16: cursor claude v1 (ejolly, Jun 10, 2025)
- 9835b6d: fix neurovault downloaders and add tests (ejolly, Jun 10, 2025)
- 2cca3b4: fix tutorial and log (ejolly, Jun 11, 2025)
- c573536: fix nilearn resampling messages (ejolly, Jul 16, 2025)
- 57f0e03: initial tutorial redesign (ejolly, Jul 16, 2025)
- 96ba2fe: remove author attributions as they're in pyproject.toml. Start refact… (ejolly, Jul 16, 2025)
- c358e55: refactor and improve brain data dunder math. improve first tutorial (ejolly, Jul 16, 2025)
- 2c7b177: suppress warnings, fixup tutorials (ejolly, Jul 16, 2025)
- bfcda69: add mni2009c (fmriprep) templates (ejolly, Nov 8, 2023)
- c4e8b97: revamp mni module (ejolly, Jul 16, 2025)
- faa825b: refactor onsets_to_dm to wrap new nilearn functionality instead (ejolly, Jul 17, 2025)
- ae63fd0: [wip] replace Brain_Data.regress with nilearn and use new ResultsCont… (ejolly, Jul 18, 2025)
- 53689c2: new Brain_Collection class that supports 2d slicing (ejolly, Jul 18, 2025)
187 changes: 187 additions & 0 deletions .cursor/rules/00-quick-reference.mdc
@@ -0,0 +1,187 @@
---
description: Quick reference and project overview for nltools development
globs:
- "**/*.py"
- "**/*.ipynb"
alwaysApply: true
---

# nltools Quick Reference

## Project Overview

nltools is a neuroimaging analysis library built around three core classes (sketched in the example after this list):

1. **Brain_Data**: Vectorized neuroimaging data (n_images × n_voxels)
2. **Design_Matrix**: Enhanced DataFrame for experimental designs with temporal metadata
3. **Adjacency**: Vectorized similarity/connectivity matrices
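
A minimal construction sketch for all three; the file name, shapes, and values are illustrative placeholders, and the exact constructor options live in each class docstring:

```python
import numpy as np
import pandas as pd
from nltools.data import Adjacency, Brain_Data, Design_Matrix

# Brain_Data from a NIfTI file on disk (hypothetical path)
brain = Brain_Data("sub-01_task-rest_bold.nii.gz")

# Design_Matrix is a DataFrame subclass that carries a sampling frequency (Hz)
dm = Design_Matrix(
    pd.DataFrame({"stim": [0, 1, 0, 1], "intercept": [1, 1, 1, 1]}),
    sampling_freq=0.5,  # e.g. TR = 2 s
)

# Adjacency wraps a square similarity/connectivity matrix
adj = Adjacency(np.corrcoef(np.random.randn(10, 50)), matrix_type="similarity")
```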

## Essential Imports

```python
# Core imports for any nltools file
import numpy as np
import pandas as pd
from copy import deepcopy

# nltools classes
from nltools.data import Brain_Data, Design_Matrix, Adjacency
from nltools.stats import threshold, fdr, regress
from nltools.utils import check_brain_data
from nltools.mask import create_sphere, expand_mask
```

## Key Patterns to Follow

### 1. Never Modify In Place
```python
# ❌ BAD
def process(self):
    self.data = new_data
    return self

# ✅ GOOD
def process(self):
    out = deepcopy(self)
    out.data = new_data
    return out
```

### 2. Return Dictionaries from Statistical Methods
```python
# Standard return format
return {
    'beta': Brain_Data(beta_values, mask=self.mask),
    't': Brain_Data(t_values, mask=self.mask),
    'p': Brain_Data(p_values, mask=self.mask),
    'df': degrees_of_freedom,
}
```
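
A sketch of consuming that dictionary downstream, assuming `regress` follows this convention and that the `threshold` helper imported above takes a statistic image, a p image, and a p-value cutoff (verify the exact signature in `nltools.stats`):

```python
# Hypothetical downstream use of the standard return dict
results = brain.regress(dm)        # expected keys: 'beta', 't', 'p', 'df'
t_map = results['t']               # each image-valued entry is itself a Brain_Data
thresholded = threshold(results['t'], results['p'], thr=0.001)
thresholded.plot()
```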

### 3. Handle Multiple Input Types
```python
# Support a single file, a list of files, or a data array
if isinstance(data, str):
    # Load a single file
    ...
elif isinstance(data, list):
    # Load multiple files
    ...
elif isinstance(data, np.ndarray):
    # Use the array directly
    ...
```

### 4. Document with Google-style Docstrings
```python
def method(self, param1, param2='default'):
    """One-line summary.

    Detailed description if needed.

    Args:
        param1: Description of param1
        param2: Description of param2 (default: 'default')

    Returns:
        result_type: Description of return value

    Raises:
        ValueError: When invalid parameters are provided
    """
```

## Common Workflows

### GLM Analysis
```python
# 1. Load data
brain = Brain_Data('functional.nii.gz')

# 2. Create design
dm = Design_Matrix(events_df, sampling_freq=2.0)
dm = dm.add_poly(2).convolve().clean()

# 3. Run regression
results = brain.regress(dm)
```

### Prediction/ML
```python
brain.Y = behavioral_scores
results = brain.predict(
    algorithm='ridge',
    cv_dict={'type': 'kfolds', 'n_folds': 5},
)
```

### Group Analysis
```python
# Load all subjects
group = Brain_Data([f'sub-{i:02d}.nii' for i in range(20)])

# Group statistics
group_mean = group.mean()
group_stats = group.ttest()
```
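
As a follow-up, per-subject ROI values are often extracted from a group object; a sketch assuming `create_sphere` takes an MNI coordinate plus a radius and `extract_roi` averages within the mask (the coordinate below is illustrative):

```python
from nltools.mask import create_sphere

# Spherical ROI around an illustrative MNI coordinate (e.g. posterior cingulate)
roi = create_sphere([0, -52, 26], radius=8)

# One mean value per subject image within the ROI
roi_means = group.extract_roi(roi)
```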

## File Structure Expectations

```
nltools/
├── data/               # Core classes
│   ├── brain_data.py
│   ├── design_matrix.py
│   └── adjacency.py
├── stats.py            # Statistical functions
├── utils.py            # Utility functions
├── mask.py             # Mask operations
├── plotting.py         # Visualization
└── tests/              # Test files
    ├── conftest.py     # Test fixtures
    └── test_*.py       # Test modules
```

## Testing Checklist

When adding new functionality:
- [ ] Add unit tests in appropriate test file
- [ ] Use fixtures from conftest.py (see the test sketch after this checklist)
- [ ] Test normal cases, edge cases, and errors
- [ ] Run: `uv run pytest nltools/tests/test_yourfile.py`
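
A minimal sketch of such a test; the `sim_brain_data` fixture name is hypothetical, so substitute a fixture that `conftest.py` actually defines:

```python
import pytest
from nltools.data import Brain_Data


def test_mean_collapses_images(sim_brain_data):
    # normal case: averaging over images should still return a Brain_Data
    out = sim_brain_data.mean()
    assert isinstance(out, Brain_Data)


def test_add_requires_matching_shapes(sim_brain_data):
    # error case (assumes the fixture holds more than two images):
    # arithmetic between objects with different image counts should raise
    with pytest.raises(ValueError):
        _ = sim_brain_data + sim_brain_data[:2]
```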

## Performance Tips

1. **Use vectorized operations** instead of loops
2. **Leverage joblib** for parallel processing with `n_jobs=-1`
3. **Memory map large files** with `np.load(file, mmap_mode='r')`
4. **Process in chunks** for very large datasets (see the combined sketch after this list)
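
A sketch combining tips 2, 3, and 4, with a hypothetical file name and a toy chunk size:

```python
import numpy as np
from joblib import Parallel, delayed

# Memory-map a large (n_images x n_voxels) array instead of reading it into RAM
data = np.load("big_dataset.npy", mmap_mode="r")  # hypothetical file


def chunk_mean(chunk):
    # vectorized mean across voxels for every image in the chunk
    return np.asarray(chunk).mean(axis=1)


chunk_size = 100
results = Parallel(n_jobs=-1)(
    delayed(chunk_mean)(data[i : i + chunk_size])
    for i in range(0, data.shape[0], chunk_size)
)
image_means = np.concatenate(results)
```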

## Common Errors and Solutions

| Error | Likely Cause | Solution |
|-------|--------------|----------|
| Shape mismatch | X and Y different lengths | Check with `.shape()` |
| Memory error | Large dataset | Use chunks or reduce data |
| Type error | Wrong input type | Check `isinstance()` |
| Import error | Missing optional dependency | Use `attempt_to_import()` (sketch below) |
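
The last row relies on `nltools.utils.attempt_to_import`; check the source for its exact signature, but the underlying idea is a guarded import, sketched generically here:

```python
# Generic optional-dependency guard; nltools wraps this pattern in
# nltools.utils.attempt_to_import (see the source for its signature)
try:
    import networkx as nx  # example optional dependency
except ImportError:
    nx = None


def to_graph(adjacency_matrix):
    # fail loudly only when the optional feature is actually used
    if nx is None:
        raise ImportError("networkx is required for to_graph(); please install it")
    return nx.from_numpy_array(adjacency_matrix)
```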

## Development Commands

```bash
# Run tests
uv run pytest

# Run specific test
uv run pytest nltools/tests/test_brain_data.py::test_mean

# Check code style
uv run ruff check

# Build documentation
uv run jupyter-book build docs/
```

## Getting Help

1. Check test files for usage examples
2. Look at similar methods in the codebase
3. Use AI assistance with specific questions
4. Refer to specialized rule files for detailed patterns
140 changes: 140 additions & 0 deletions .cursor/rules/01-base-conventions.mdc
@@ -0,0 +1,140 @@
---
description: Base conventions and imports for nltools development
globs:
- "nltools/**/*.py"
alwaysApply: true
---

# Base Conventions for nltools Development

## Import Standards

Always follow these import patterns at the beginning of files:

```python
# Standard library imports
import os
import warnings
from copy import deepcopy
from pathlib import Path

# Core scientific computing
import numpy as np
import pandas as pd
from scipy import stats

# Machine learning and neuroimaging
from sklearn.base import BaseEstimator
from sklearn.utils import check_random_state
from nilearn import masking, image

# Internal imports
from nltools.utils import check_brain_data, attempt_to_import
from nltools.stats import (
    one_sample_permutation,
    two_sample_permutation,
    threshold,
)
```

## Error Handling

Use consistent error handling patterns:

```python
# Type checking
if not isinstance(data, Brain_Data):
    raise TypeError("data must be a Brain_Data instance")

# Value validation
if axis not in ['rows', 'columns']:
    raise ValueError("axis must be 'rows' or 'columns'")

# Shape validation
if self.shape()[0] != other.shape()[0]:
    raise ValueError("Brain_Data objects must have same number of images")
```

## Method Signatures

Follow these patterns for method signatures:

```python
def method_name(self, parameter, axis='rows', n_jobs=-1,
                verbose=False, plot=False, **kwargs):
    """Short one-line description.

    Longer description explaining what the method does and any
    important implementation details.

    Args:
        parameter: Description of parameter
        axis: 'rows' for images or 'columns' for voxels
        n_jobs: Number of parallel jobs (-1 for all cores)
        verbose: Print progress information
        plot: Generate visualization
        **kwargs: Additional keyword arguments passed to underlying function

    Returns:
        result_type: Description of what is returned

    Raises:
        ValueError: When invalid parameters are provided
        TypeError: When incorrect data types are passed
    """
```

## Copy Operations

Always use deepcopy for data modifications to avoid mutations:

```python
# Creating modified copies
out = deepcopy(self)
out.data = modified_data
return out

# Never modify in place unless explicitly requested
# Bad: self.data = new_data
# Good: out = deepcopy(self); out.data = new_data
```

## Path Handling

Support both string and pathlib.Path objects:

```python
# Convert to Path object
file_path = Path(file_name)

# Check file existence
if not file_path.exists():
    raise FileNotFoundError(f"File {file_path} not found")

# Handle extensions (note: .suffix of "image.nii.gz" is only ".gz")
if "".join(file_path.suffixes).lower() in ['.nii', '.nii.gz']:
    # Handle NIfTI files
    ...
```

## Parallel Processing

Use joblib for parallel operations:

```python
from joblib import Parallel, delayed

# Standard parallel pattern
results = Parallel(n_jobs=n_jobs)(
    delayed(function)(arg) for arg in arguments
)
```
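
A toy end-to-end example of that pattern (the data and function are illustrative, not part of nltools):

```python
import numpy as np
from joblib import Parallel, delayed


def bootstrap_mean(data, seed):
    # one bootstrap resample of the image-wise mean, seeded per job
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, data.shape[0], size=data.shape[0])
    return data[idx].mean(axis=0)


data = np.random.randn(20, 1000)  # toy: 20 images x 1000 voxels
boot_means = Parallel(n_jobs=-1)(
    delayed(bootstrap_mean)(data, seed) for seed in range(100)
)
```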

## Documentation

All classes and methods must have Google-style docstrings with:
- Brief one-line summary
- Detailed description if needed
- Args section with parameter descriptions
- Returns section describing output
- Raises section for exceptions
- Examples section for complex functionality