179 changes: 179 additions & 0 deletions .github/workflows/test.yml
@@ -0,0 +1,179 @@
name: Test Suite

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.12"]

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install uv
        uses: astral-sh/setup-uv@v2
        with:
          version: "latest"

      - name: Set up virtual environment
        run: |
          uv venv
          echo "VIRTUAL_ENV=.venv" >> $GITHUB_ENV
          echo "$PWD/.venv/bin" >> $GITHUB_PATH

      - name: Install dependencies
        run: |
          uv sync --extra dev

      - name: Run linting
        run: |
          uv run ruff check .
          uv run ruff format --check .

      - name: Run unit tests
        run: |
          uv run pytest tests/unit/ -v --cov=grainchain --cov-report=xml --cov-report=term-missing

      - name: Run integration tests (local only)
        run: |
          uv run pytest tests/integration/test_local_provider.py -v -m "not slow"

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          flags: unittests
          name: codecov-umbrella
          fail_ci_if_error: false

  test-with-providers:
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && (github.ref == 'refs/heads/main' || github.ref == 'refs/heads/develop')

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python 3.12
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"

      - name: Install uv
        uses: astral-sh/setup-uv@v2
        with:
          version: "latest"

      - name: Set up virtual environment
        run: |
          uv venv
          echo "VIRTUAL_ENV=.venv" >> $GITHUB_ENV
          echo "$PWD/.venv/bin" >> $GITHUB_PATH

      - name: Install dependencies with all providers
        run: |
          uv sync --extra dev --extra all

      - name: Run integration tests with E2B
        if: env.E2B_API_KEY != ''
        env:
          E2B_API_KEY: ${{ secrets.E2B_API_KEY }}
        run: |
          uv run pytest tests/integration/test_e2b_provider.py -v -m "not slow"

      - name: Run integration tests with Modal
        if: env.MODAL_TOKEN_ID != '' && env.MODAL_TOKEN_SECRET != ''
        env:
          MODAL_TOKEN_ID: ${{ secrets.MODAL_TOKEN_ID }}
          MODAL_TOKEN_SECRET: ${{ secrets.MODAL_TOKEN_SECRET }}
        run: |
          uv run pytest tests/integration/test_modal_provider.py -v -m "not slow"

      - name: Run integration tests with Daytona
        if: env.DAYTONA_API_KEY != ''
        env:
          DAYTONA_API_KEY: ${{ secrets.DAYTONA_API_KEY }}
        run: |
          uv run pytest tests/integration/test_daytona_provider.py -v -m "not slow"

  performance-test:
    runs-on: ubuntu-latest
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python 3.12
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"

      - name: Install uv
        uses: astral-sh/setup-uv@v2
        with:
          version: "latest"

      - name: Set up virtual environment
        run: |
          uv venv
          echo "VIRTUAL_ENV=.venv" >> $GITHUB_ENV
          echo "$PWD/.venv/bin" >> $GITHUB_PATH

      - name: Install dependencies
        run: |
          uv sync --extra dev --extra benchmark

      - name: Run performance tests
        run: |
          uv run pytest tests/ -v -m "slow" --timeout=300

      - name: Run benchmark
        run: |
          uv run grainchain benchmark --provider local --output benchmarks/results/

  security-scan:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up Python 3.12
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"

      - name: Install uv
        uses: astral-sh/setup-uv@v2
        with:
          version: "latest"

      - name: Set up virtual environment
        run: |
          uv venv
          echo "VIRTUAL_ENV=.venv" >> $GITHUB_ENV
          echo "$PWD/.venv/bin" >> $GITHUB_PATH

      - name: Install dependencies
        run: |
          uv sync --extra dev

      - name: Run security scan with bandit
        run: |
          uv run pip install bandit[toml]
          uv run bandit -r grainchain/ -f json -o bandit-report.json || true

      - name: Upload security scan results
        uses: actions/upload-artifact@v3
        with:
          name: security-scan-results
          path: bandit-report.json
89 changes: 89 additions & 0 deletions README.md
@@ -370,6 +370,95 @@ grainchain benchmark --provider local
./scripts/benchmark_status.sh
```

### Testing

Grainchain includes a comprehensive test suite with >90% code coverage to ensure reliability across all providers.

### Running Tests

```bash
# Run all tests
uv run pytest

# Run only unit tests
uv run pytest tests/unit/ -v

# Run only integration tests
uv run pytest tests/integration/ -v

# Run tests with coverage
uv run pytest --cov=grainchain --cov-report=html

# Run tests for a specific provider
uv run pytest tests/integration/test_local_provider.py -v

# Run performance tests
uv run pytest -m slow

# Run tests excluding slow tests
uv run pytest -m "not slow"
```

### Test Categories

- **Unit Tests** (`tests/unit/`): Fast, isolated tests for core functionality
  - `test_sandbox.py`: Core Sandbox class tests
  - `test_providers.py`: Provider implementation tests
  - `test_config.py`: Configuration management tests
  - `test_exceptions.py`: Exception handling tests
  - `test_interfaces.py`: Interface and data structure tests

- **Integration Tests** (`tests/integration/`): Real provider interactions
  - `test_e2b_provider.py`: E2B provider integration tests
  - `test_modal_provider.py`: Modal provider integration tests
  - `test_daytona_provider.py`: Daytona provider integration tests
  - `test_local_provider.py`: Local provider integration tests
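
A minimal sketch of what a unit test in this layout looks like. The `Sandbox` import path and the `execute`/`exit_code`/`stdout` names are illustrative assumptions, not necessarily the exact API:

```python
# tests/unit/test_sandbox.py (illustrative sketch only)
import pytest

from grainchain import Sandbox  # assumed import path


@pytest.mark.unit
async def test_sandbox_runs_a_command():
    # asyncio_mode = auto (see pytest.ini) lets bare async test functions run
    # without an explicit asyncio decorator.
    async with Sandbox(provider="local") as sandbox:  # assumed constructor signature
        result = await sandbox.execute("echo hello")  # assumed method name
    assert result.exit_code == 0
    assert "hello" in result.stdout  # assumed attribute name
```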

### Test Configuration

Tests are configured via `pytest.ini` with the following markers:

- `unit`: Unit tests (fast, no external dependencies)
- `integration`: Integration tests (require provider credentials)
- `slow`: Slow tests that may take longer to run
- `e2b`: Tests requiring E2B provider
- `modal`: Tests requiring Modal provider
- `daytona`: Tests requiring Daytona provider
- `local`: Tests requiring Local provider
- `snapshot`: Tests for snapshot functionality
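
Because `--strict-markers` is enabled (see `pytest.ini`), every marker used in a test must be one of those registered above. A short sketch of how a hypothetical test opts into markers and how it is then selected from the command line:

```python
import pytest


@pytest.mark.integration
@pytest.mark.e2b
@pytest.mark.slow
async def test_e2b_handles_large_uploads():
    """Hypothetical test: selected by `-m e2b`, excluded by `-m "not slow"`."""
    ...
```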

### Provider Credentials for Integration Tests

To run integration tests with real providers, set these environment variables:

```bash
# E2B
export E2B_API_KEY=your-e2b-api-key

# Modal
export MODAL_TOKEN_ID=your-modal-token-id
export MODAL_TOKEN_SECRET=your-modal-token-secret

# Daytona
export DAYTONA_API_KEY=your-daytona-api-key
```
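
One common pattern (not necessarily the one used in `tests/integration/`) is to skip provider-specific tests when the corresponding credentials are absent, so the suite still passes locally without API keys:

```python
import os

import pytest

# Skip a test when no E2B credentials are available in the environment.
requires_e2b = pytest.mark.skipif(
    not os.environ.get("E2B_API_KEY"),
    reason="E2B_API_KEY not set",
)


@requires_e2b
@pytest.mark.e2b
@pytest.mark.integration
async def test_e2b_round_trip():
    ...  # hypothetical test body
```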

### Continuous Integration

Tests run automatically on GitHub Actions:

- **Unit tests and linting** on Python 3.12 for pull requests and pushes to `main` and `develop`
- **Integration tests** with real providers on pushes to `main` and `develop`
- **Performance tests** and benchmarks on pushes to `main`
- **Security scans** with bandit on every push and pull request

### Coverage Requirements

- Minimum coverage: **90%**
- All new code must include tests
- Integration tests must cover happy path and error scenarios
- Performance tests ensure operations complete within expected timeframes
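
As an illustration of the error-scenario requirement, failure cases are typically asserted with `pytest.raises`; the exception class, the `sandbox` fixture, and the `timeout` parameter below are placeholders rather than grainchain's confirmed names:

```python
import pytest

from grainchain.exceptions import SandboxTimeoutError  # placeholder exception name


@pytest.mark.unit
async def test_execute_times_out(sandbox):
    # Error scenario: a hung command should surface as a typed exception
    # instead of blocking the suite.
    with pytest.raises(SandboxTimeoutError):
        await sandbox.execute("sleep 60", timeout=0.1)  # assumed timeout parameter
```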

### CLI Commands

Grainchain includes a comprehensive CLI for development:
12 changes: 6 additions & 6 deletions benchmarks/scripts/benchmark_runner.py
@@ -198,9 +198,9 @@ def take_snapshot(self, snapshot_name: str) -> dict[str, Any]:
             )
             if result.exit_code == 0:
                 size_output = result.output.decode().strip()
-                snapshot["metrics"]["filesystem"][
-                    "node_modules_size"
-                ] = size_output.split()[0]
+                snapshot["metrics"]["filesystem"]["node_modules_size"] = (
+                    size_output.split()[0]
+                )

             # Package count
             result = self.container.exec_run(
@@ -224,9 +224,9 @@ def take_snapshot(self, snapshot_name: str) -> dict[str, Any]:
                 }

             if result.exit_code != 0:
-                snapshot["metrics"]["performance"][
-                    "build_error"
-                ] = result.output.decode()
+                snapshot["metrics"]["performance"]["build_error"] = (
+                    result.output.decode()
+                )

             # Test run time (if tests exist)
             start_time = time.time()
28 changes: 28 additions & 0 deletions pytest.ini
@@ -0,0 +1,28 @@
[pytest]
asyncio_mode = auto
asyncio_default_fixture_loop_scope = function
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    --cov=grainchain
    --cov-report=term-missing
    --cov-report=html:htmlcov
    --cov-report=xml
    --cov-fail-under=90
    --strict-markers
    --strict-config
    -v
markers =
    unit: Unit tests
    integration: Integration tests
    slow: Slow tests that may take longer to run
    e2b: Tests requiring E2B provider
    modal: Tests requiring Modal provider
    daytona: Tests requiring Daytona provider
    local: Tests requiring Local provider
    snapshot: Tests for snapshot functionality
filterwarnings =
    ignore::DeprecationWarning
    ignore::PendingDeprecationWarning