This directory contains a comprehensive pytest-based test suite for the GitLab package upload functionality. The test suite validates the `glpkg` command through both unit tests and end-to-end integration testing.
The test suite is organized into two categories:

- Unit tests (`tests/unit/`): Fast tests that validate individual components without external dependencies
- Integration tests (`tests/integration/`): End-to-end tests that execute the actual upload script and verify the results against the GitLab Package Registry
All test dependencies are automatically managed by uv. However, the `glpkg` package must be installed in development mode before running tests.
```bash
# Install the package in development mode (required before running tests)
uv pip install -e .

# Run all tests
uv run pytest tests/

# Run only unit tests (fast, no external dependencies)
uv run pytest tests/unit/

# Run integration tests (requires RUN_INTEGRATION_TESTS=1 and GITLAB_TOKEN)
export RUN_INTEGRATION_TESTS=1
export GITLAB_TOKEN="your-token"
uv run pytest tests/integration/ -m integration

# Run with parallel execution
uv run pytest tests/ -n auto

# Run with verbose output
uv run pytest tests/ -v
```

The `run_tests.py` script provides convenience commands that delegate to `uv run pytest`:
```bash
# Run unit tests
./run_tests.py --unit

# Run integration tests
./run_tests.py --integration

# Run all test categories
./run_tests.py --all

# Pass-through to pytest
./run_tests.py -v -k "test_import" tests/
```

```
tests/
├── conftest.py                       # Shared fixtures and configuration
├── unit/                             # Unit tests (no external dependencies)
│   ├── __init__.py
│   ├── test_cli.py                   # CLI argument parsing and validation
│   ├── test_models.py                # Data models and structures
│   ├── test_uploader.py              # Upload logic and file handling
│   └── test_validators.py            # Input validation functions
├── integration/                      # Integration tests (requires GITLAB_TOKEN)
│   ├── __init__.py
│   ├── conftest.py                   # Integration-specific fixtures
│   ├── test_single_file_upload.py    # Single file upload tests
│   ├── test_multiple_files_upload.py # Multiple files upload tests
│   ├── test_duplicate_handling.py    # Skip, replace, error policies
│   ├── test_project_resolution.py    # Auto-detection and manual spec
│   ├── test_error_scenarios.py       # Network failures, auth errors
│   └── test_end_to_end.py            # Comprehensive end-to-end scenarios
├── utils/
│   ├── __init__.py
│   ├── test_helpers.py               # Common test utilities
│   ├── artifact_factory.py           # Test file creation utilities
│   ├── gitlab_helpers.py             # GitLab API interaction utilities
│   ├── rate_limiter.py               # API rate limiting utilities
│   └── performance.py                # Performance monitoring utilities
└── README.md                         # This file

../
├── run_tests.py                      # Convenience wrapper for uv run pytest
└── pyproject.toml                    # Project configuration and pytest settings
```
All test dependencies are automatically installed by uv when running `uv run pytest`. The dependencies are defined in `pyproject.toml` under `[project.optional-dependencies]`:

- pytest
- pytest-xdist (parallel execution)
- pytest-timeout (timeout management)
- pytest-sugar (progress visualization)
- pytest-instafail (instant failure reporting)
- pytest-cov (code coverage reporting)
**Important:** The `glpkg` package itself must be installed in development mode before running tests:

```bash
uv pip install -e .
```

Alternatively, use `pip install -e .` if you are not using uv.
Integration tests require explicit opt-in and a GitLab API token:

```bash
# Required: Opt in to running integration tests
export RUN_INTEGRATION_TESTS=1

# Required: GitLab API token
export GITLAB_TOKEN="your-gitlab-token"

# Optional: Custom GitLab URL (defaults to GitLab.com)
export GITLAB_URL="https://gitlab.example.com"

# Optional: Project path (can be auto-detected from git)
export GITLAB_PROJECT_PATH="group/project"
```

Your GitLab token needs the following permissions:

- `api` scope for full API access
- Write access to the target project's Package Registry
- The ability to create and delete packages in the registry
```bash
# Run all tests
uv run pytest tests/

# Run only unit tests
uv run pytest tests/unit/

# Run only integration tests
uv run pytest tests/integration/ -m integration

# Run a specific test file
uv run pytest tests/unit/test_cli.py

# Run a specific test function
uv run pytest tests/unit/test_cli.py::test_parse_args

# Run with verbose output
uv run pytest tests/ -v

# Run with parallel execution
uv run pytest tests/ -n auto

# Stop on first failure
uv run pytest tests/ -x

# Show the 10 slowest tests
uv run pytest tests/ --durations=10
```

```bash
# Run only unit tests
uv run pytest tests/ -m unit

# Run only integration tests
uv run pytest tests/ -m integration

# Run only fast tests
uv run pytest tests/ -m fast

# Skip slow tests
uv run pytest tests/ -m "not slow"

# Run API tests
uv run pytest tests/ -m api
```

```bash
# Run unit tests
./run_tests.py --unit

# Run integration tests (requires GITLAB_TOKEN)
./run_tests.py --integration

# Run all test categories
./run_tests.py --all

# Pass-through to pytest with custom arguments
./run_tests.py -v -k "upload" tests/
./run_tests.py -n auto tests/
./run_tests.py --durations=5 tests/
```

The test suite includes code coverage reporting with dual thresholds:
- Warning threshold: 95% (CI adds a warning annotation when below)
- Failure threshold: 90% (tests fail if coverage drops below)
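The dual-threshold logic can be sketched as a small helper. This is a hypothetical illustration of the rule described above, not the actual CI check used by this repository:

```python
def evaluate_coverage(percent: float,
                      warn_below: float = 95.0,
                      fail_below: float = 90.0) -> str:
    """Classify a coverage percentage against the dual thresholds.

    Returns "ok", "warn" (CI adds a warning annotation), or
    "fail" (the test run is treated as failed).
    """
    if percent < fail_below:
        return "fail"
    if percent < warn_below:
        return "warn"
    return "ok"
```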
```bash
# Run unit tests with coverage (default configuration)
uv run pytest tests/unit/ -m unit --cov=glpkg --cov-report=term

# Generate an HTML coverage report
uv run pytest tests/unit/ -m unit --cov=glpkg --cov-report=html

# View the HTML report
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux

# Generate an XML coverage report (for CI)
uv run pytest tests/unit/ -m unit --cov=glpkg --cov-report=xml
```

Coverage is configured in `pyproject.toml`:

```toml
[tool.coverage.run]
source = ["src/glpkg"]
omit = ["tests/*", "*/tests/*"]

[tool.coverage.report]
precision = 2
show_missing = true
```

- Term-missing output: Shows which lines are not covered in the terminal
- HTML report: Provides an interactive view in `htmlcov/index.html`
- XML report: Machine-readable format for CI integration in `coverage.xml`
Integration tests require explicit opt-in and automatically validate their environment before running.
When you run integration tests, the test suite checks:

- `RUN_INTEGRATION_TESTS` environment variable: must be set to `1`
- `GITLAB_TOKEN` environment variable: must be set to a valid token
- Git repository: tests must be run from within a Git repository
- GitLab remotes: the repository must have at least one GitLab remote
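A minimal sketch of how the first two precondition checks might look inside a `conftest.py`. The helper name and messages are illustrative, not the suite's actual implementation:

```python
import os


def missing_integration_preconditions(env=os.environ) -> list[str]:
    """Return human-readable reasons why integration tests cannot run.

    An empty list means the environment-variable preconditions are met;
    callers could then pytest.skip() with the joined messages otherwise.
    """
    problems = []
    if env.get("RUN_INTEGRATION_TESTS") != "1":
        problems.append("set RUN_INTEGRATION_TESTS=1")
    if not env.get("GITLAB_TOKEN"):
        problems.append("set GITLAB_TOKEN")
    return problems
```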
```bash
# Check whether integration tests are enabled
echo $RUN_INTEGRATION_TESTS

# Check that you are in a Git repository
git remote -v

# Verify a GitLab remote exists
git remote -v | grep gitlab

# Check that the token is set
echo $GITLAB_TOKEN

# Verify the full setup
[ "$RUN_INTEGRATION_TESTS" = "1" ] && \
  echo "Integration tests enabled" || \
  echo "Integration tests NOT enabled"
[ -n "$GITLAB_TOKEN" ] && \
  echo "Token is set" || \
  echo "Token is NOT set"
```

If integration tests are skipped, the error message explains what is missing:

- Integration tests disabled: set `export RUN_INTEGRATION_TESTS=1`
- Missing `GITLAB_TOKEN`: set it with `export GITLAB_TOKEN='your-token'`
- No Git repository: navigate to a Git repository or initialize one
- No GitLab remotes: add a GitLab remote with `git remote add origin <url>`
```bash
# Verbose output
uv run pytest tests/ -v

# Extra verbose output
uv run pytest tests/ -vv

# Stop on first failure
uv run pytest tests/ -x

# Full traceback
uv run pytest tests/ --tb=long

# Short traceback
uv run pytest tests/ --tb=short

# Maximum verbosity for a specific test
uv run pytest tests/unit/test_cli.py::test_parse_args -vvv --tb=long

# Enable debug logging
uv run pytest tests/ -v --log-cli-level=DEBUG
```

- Verify `GITLAB_TOKEN` is set and valid
- Check that the token has the required permissions (`api` scope)
- Test with unit tests first: `uv run pytest tests/unit/` (does not require a token)
- Check network connectivity to GitLab
- Run sequentially: `uv run pytest tests/integration/` (without `-n auto`)
- Increase the timeout: `uv run pytest tests/ --timeout=1200`
- Run sequentially for debugging: `uv run pytest tests/` (without `-n auto`)
- Check that the rate limiter is working properly
Tests are designed for safe parallel execution using pytest-xdist:

```bash
# Auto-detect CPU cores
uv run pytest tests/ -n auto

# Specify a worker count
uv run pytest tests/ -n 4

# Use worksteal distribution for better load balancing
uv run pytest tests/ --dist=worksteal -n auto
```

Parallel safety is ensured by:

- Unique package names per test: timestamp + worker ID + random suffix
- Thread-safe rate limiting for GitLab API calls across all workers
- Isolated temporary directories per worker
- Automatic cleanup in fixture teardown
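The unique-name scheme can be sketched as follows. The function name and name format are illustrative (see `tests/utils/` for the suite's real helpers); `PYTEST_XDIST_WORKER` is the environment variable pytest-xdist sets on each worker:

```python
import os
import secrets
import time


def unique_package_name(prefix: str = "glpkg-test") -> str:
    """Build a collision-resistant package name: timestamp + worker ID + random suffix.

    pytest-xdist exposes the worker ID (e.g. "gw0") via PYTEST_XDIST_WORKER;
    the variable is unset when tests run sequentially.
    """
    timestamp = int(time.time())
    worker = os.environ.get("PYTEST_XDIST_WORKER", "main")
    suffix = secrets.token_hex(4)  # 8 random hex characters
    return f"{prefix}-{timestamp}-{worker}-{suffix}"
```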
Fast unit tests that validate individual components:

- `test_cli.py`: CLI argument parsing and validation
- `test_models.py`: Data models and structures
- `test_uploader.py`: Upload logic and file handling
- `test_validators.py`: Input validation functions

End-to-end integration tests requiring GitLab API access:

- `test_single_file_upload.py`: Single file upload tests
- `test_multiple_files_upload.py`: Multiple file upload tests
- `test_duplicate_handling.py`: Skip, replace, and error policies
- `test_project_resolution.py`: Auto-detection and manual specification
- `test_error_scenarios.py`: Network failures and auth errors
- `test_end_to_end.py`: Comprehensive end-to-end scenarios
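The three duplicate policies exercised by `test_duplicate_handling.py` can be sketched like this. The enum and decision function are hypothetical illustrations of the skip/replace/error behavior, not the uploader's actual implementation:

```python
from enum import Enum


class DuplicatePolicy(Enum):
    SKIP = "skip"
    REPLACE = "replace"
    ERROR = "error"


def resolve_duplicate(policy: DuplicatePolicy, exists: bool) -> str:
    """Decide what to do when a package file already exists in the registry."""
    if not exists:
        return "upload"
    if policy is DuplicatePolicy.SKIP:
        return "skip"     # leave the existing file untouched
    if policy is DuplicatePolicy.REPLACE:
        return "replace"  # delete the existing file, then re-upload
    raise FileExistsError("package file already exists in the registry")
```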
| Test category | Duration | Requirements |
|---|---|---|
| Unit tests | 10-30 seconds | None |
| Integration tests | 10-15 minutes | `GITLAB_TOKEN` |
| All tests (parallel) | 5-10 minutes | `GITLAB_TOKEN` |
When adding new tests:

- Choose the appropriate directory (`tests/unit/` or `tests/integration/`)
- Follow pytest naming conventions (`test_*` functions)
- Use existing fixtures from `conftest.py`
- Add appropriate markers for categorization

Example:

```python
import pytest


@pytest.mark.unit
def test_new_functionality():
    """Test new functionality."""
    # Test implementation
    assert True
```

Shared fixtures available from `conftest.py`:

- `gitlab_client`: Authenticated GitLab client (integration tests)
- `artifact_manager`: Test file creation and cleanup
- `temp_dir`: Isolated temporary directory
- `project_resolver`: Project identification utilities
- `rate_limiter`: API rate limiting management
Available markers:

- `@pytest.mark.unit`: Unit tests (no external dependencies)
- `@pytest.mark.integration`: Integration tests (requires `GITLAB_TOKEN`)
- `@pytest.mark.fast`: Quick tests
- `@pytest.mark.slow`: Slow tests
- `@pytest.mark.api`: Tests requiring API access
- `@pytest.mark.sequential`: Tests that must run sequentially
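Markers can be stacked on a single test. The test below is hypothetical and shown only to illustrate combining markers:

```python
import pytest


@pytest.mark.integration
@pytest.mark.slow
def test_bulk_upload_roundtrip():
    """Hypothetical test that needs API access and takes a while."""
    assert True
```

A test selected by `-m integration` or `-m slow` (and excluded by `-m "not slow"`) would include this one; registering custom markers in `pyproject.toml` avoids `PytestUnknownMarkWarning`.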
```bash
# Show pytest help
uv run pytest --help

# Show available markers
uv run pytest --markers

# Show available fixtures
uv run pytest --fixtures
```

When contributing new tests:
- Follow existing patterns for consistency
- Add appropriate documentation and docstrings
- Use existing fixtures to avoid duplication
- Add proper markers for categorization
- Ensure cleanup of any created resources
- Test both success and failure scenarios
- Run tests locally before submitting: `uv run pytest tests/`