
Commit c2a95cb

Merge branch 'main' into fix/remove-pypy310-from-tests
2 parents 5e92d2f + 50140dd commit c2a95cb

File tree

6 files changed: +282 -17 lines changed

CONTRIBUTING.rst

Lines changed: 85 additions & 2 deletions
@@ -73,8 +73,80 @@ For merging, you should:
 3. Add a note to ``CHANGELOG.rst`` about the changes.
 4. Add yourself to ``AUTHORS.rst``.
 
-Tips
-----
+Running Tests
+-------------
+
+Quick Start (Using Make)
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The easiest way to run tests::
+
+    # First time setup - create virtual environment
+    make venv
+    source .venv/bin/activate
+
+    # Install dependencies
+    make install
+
+    # Run all tests
+    make test
+
+    # Run just the catalog vendor tests
+    make test-vendor
+
+    # Run tests with coverage
+    make test-cov
+
+Manual Test Commands
+~~~~~~~~~~~~~~~~~~~~
+
+If you prefer to run pytest directly::
+
+    # Activate virtual environment
+    source .venv/bin/activate
+
+    # Run catalog extra fields tests
+    python -m pytest tests/test_vendor/test_catalog_v1.py -v
+
+All Test Commands
+~~~~~~~~~~~~~~~~~
+
+::
+
+    # Run all catalog vendor tests
+    python -m pytest tests/test_vendor/ -v
+
+    # Run specific test file
+    python -m pytest tests/test_vendor/test_catalog_v1.py -v
+
+    # Run specific test class
+    python -m pytest tests/test_vendor/test_catalog_v1.py::TestMetadataExtraFields -v
+
+    # Run specific test method
+    python -m pytest tests/test_vendor/test_catalog_v1.py::TestMetadataExtraFields::test_metadata_accepts_extra_fields -v
+
+    # Run with more verbose output
+    python -m pytest tests/test_vendor/test_catalog_v1.py -vv
+
+    # Run and show print statements
+    python -m pytest tests/test_vendor/test_catalog_v1.py -v -s
+
+    # Run all tests in the project
+    python -m pytest tests/ -v
+
+Using tox
+~~~~~~~~~
+
+The GitHub Actions CI uses tox to run tests across multiple Python and Pydantic versions::
+
+    # Run tests with Python 3.10 and Pydantic 2.10 (no coverage)
+    python3 -m tox -e py310-pydantic210-nocov
+
+    # Run tests with coverage
+    python3 -m tox -e py310-pydantic210-cover
+
+    # Run specific tests with tox
+    python3 -m tox -e py310-pydantic210-nocov -- tests/test_vendor/test_catalog_v1.py
 
 To run a subset of tests::
 
@@ -83,3 +155,14 @@ To run a subset of tests::
 To run all the test environments in *parallel*::
 
     tox -p auto
+
+Continuous Integration
+~~~~~~~~~~~~~~~~~~~~~~
+
+Tests run automatically on every push and pull request via GitHub Actions (``.github/workflows/github-actions.yml``).
+
+The CI runs tests across:
+
+* Python versions: 3.10, 3.11, 3.12, PyPy 3.9
+* Pydantic versions: 2.8, 2.10
+* With and without coverage reports
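As a local convenience alongside the tox matrix described above, you can map your current interpreter and Pydantic install to a matching environment name. The snippet below is a minimal sketch, not part of this commit; it assumes the ``py<ver>-pydantic<ver>-nocov`` naming shown in the examples above and that Pydantic is already installed in the active environment::

    # Hypothetical helper, not part of the repository.
    import sys
    from importlib import metadata

    # Build a tox environment name of the form py310-pydantic210-nocov
    py_tag = f"py{sys.version_info.major}{sys.version_info.minor}"
    pydantic_version = metadata.version("pydantic")  # e.g. "2.10.3"
    pydantic_tag = "pydantic" + "".join(pydantic_version.split(".")[:2])

    print(f"Suggested tox environment: {py_tag}-{pydantic_tag}-nocov")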

Makefile

Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
+.PHONY: help venv install test test-vendor test-cov test-all clean lint format
+
+## help - Display help about make targets for this Makefile
+help:
+	@cat Makefile | grep '^## ' --color=never | cut -c4- | sed -e "`printf 's/ - /\t- /;'`" | column -s "`printf '\t'`" -t
+
+## venv - Create virtual environment
+venv:
+	python3 -m venv .venv
+	.venv/bin/pip install --upgrade pip
+	@echo ""
+	@echo "Virtual environment created. Activate with:"
+	@echo " source .venv/bin/activate"
+
+## install - Install package and dependencies in development mode
+install:
+	pip install -e .
+	pip install pytest pytest-cov tox pre-commit ruff
+
+## test - Run tests quickly
+test:
+	python -m pytest tests/ -v
+
+## test-vendor - Run catalog vendor tests
+test-vendor:
+	python -m pytest tests/test_vendor/test_catalog_v1.py -v
+
+## test-cov - Run tests with coverage report
+test-cov:
+	python -m pytest --cov=src --cov-report=term-missing --cov-report=html tests/ -v
+
+## test-all - Run full test suite with tox (all Python/Pydantic versions)
+test-all:
+	tox
+
+## lint - Run code quality checks
+lint:
+	pre-commit run --all-files
+
+## format - Format code with ruff
+format:
+	ruff format src/ tests/
+
+## clean - Remove build artifacts and cache
+clean:
+	rm -rf build/
+	rm -rf dist/
+	rm -rf *.egg-info
+	rm -rf .pytest_cache/
+	rm -rf .tox/
+	rm -rf htmlcov/
+	rm -rf .coverage
+	find . -type d -name __pycache__ -exec rm -rf {} +
+	find . -type f -name '*.pyc' -delete

README.md

Lines changed: 8 additions & 0 deletions
@@ -1,3 +1,11 @@
+# DataPilot CLI
+
+[![Build Status](https://github.com/AltimateAI/datapilot-cli/workflows/build/badge.svg)](https://github.com/AltimateAI/datapilot-cli/actions)
+[![PyPI version](https://badge.fury.io/py/altimate-datapilot-cli.svg)](https://pypi.org/project/altimate-datapilot-cli/)
+[![Python Version](https://img.shields.io/pypi/pyversions/altimate-datapilot-cli.svg)](https://pypi.org/project/altimate-datapilot-cli/)
+[![License](https://img.shields.io/github/license/AltimateAI/datapilot-cli.svg)](https://github.com/AltimateAI/datapilot-cli/blob/main/LICENSE)
+[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
+[![Maintained](https://img.shields.io/badge/Maintained%3F-yes-green.svg)](https://github.com/AltimateAI/datapilot-cli/graphs/commit-activity)
 
 ## Introduction
 
README.rst

Lines changed: 22 additions & 15 deletions
@@ -4,32 +4,39 @@ Overview
 
 .. start-badges
 
-.. list-table::
-    :stub-columns: 1
-
-    * - docs
-      - |docs|
-    * - tests
-      - | |github-actions|
-        | |codecov|
-        | |scrutinizer|
-    * - package
-      - | |version| |wheel| |supported-versions| |supported-implementations|
-        | |commits-since|
 .. |docs| image:: https://readthedocs.org/projects/datapilot/badge/?style=flat
     :target: https://datapilot.readthedocs.io/
     :alt: Documentation Status
 
-.. |github-actions| image:: https://github.com/AltimateAI/datapilot-cli/actions/workflows/github-actions.yml/badge.svg
-    :alt: GitHub Actions Build Status
-    :target: https://github.com/AltimateAI/datapilot/actions
+.. |build| image:: https://github.com/AltimateAI/datapilot-cli/workflows/build/badge.svg
+    :target: https://github.com/AltimateAI/datapilot-cli/actions
+    :alt: Build Status
 
 .. |codecov| image:: https://codecov.io/gh/anandgupta42/datapilot/branch/main/graphs/badge.svg?branch=main
     :alt: Coverage Status
     :target: https://app.codecov.io/github/anandgupta42/datapilot
 
+.. |pypi| image:: https://badge.fury.io/py/altimate-datapilot-cli.svg
+    :target: https://pypi.org/project/altimate-datapilot-cli/
+    :alt: PyPI Package
+
+.. |pyversion| image:: https://img.shields.io/pypi/pyversions/altimate-datapilot-cli.svg
+    :target: https://pypi.org/project/altimate-datapilot-cli/
+    :alt: Python Versions
+
+.. |license| image:: https://img.shields.io/github/license/AltimateAI/datapilot-cli.svg
+    :target: https://github.com/AltimateAI/datapilot-cli/blob/main/LICENSE
+    :alt: License
+
+.. |ruff| image:: https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json
+    :target: https://github.com/astral-sh/ruff
+    :alt: Ruff
 
+.. |maintained| image:: https://img.shields.io/badge/Maintained%3F-yes-green.svg
+    :target: https://github.com/AltimateAI/datapilot-cli/graphs/commit-activity
+    :alt: Maintained
 
+|docs| |build| |codecov| |pypi| |pyversion| |license| |ruff| |maintained|
 
 .. end-badges
 
tests/test_vendor/__init__.py

Whitespace-only changes.

tests/test_vendor/test_catalog_v1.py

Lines changed: 113 additions & 0 deletions
@@ -0,0 +1,113 @@
+"""Tests for catalog v1 parser, specifically testing extra fields handling."""
+import pytest
+
+from vendor.dbt_artifacts_parser.parsers.catalog.catalog_v1 import Metadata
+
+
+class TestMetadataExtraFields:
+    """Test that Metadata class accepts extra fields from dbt."""
+
+    def test_metadata_accepts_extra_fields(self):
+        """Test that metadata accepts fields not explicitly defined in the model."""
+        # Test with a new field that dbt might add in the future
+        data = {
+            "dbt_schema_version": "https://schemas.getdbt.com/dbt/catalog/v1.json",
+            "dbt_version": "1.9.0",
+            "generated_at": "2025-11-05T10:00:00Z",
+            "invocation_id": "test-invocation-123",
+            "invocation_started_at": "2025-11-05T09:59:00Z",  # New field
+            "new_future_field": "some_value",  # Another potential future field
+        }
+
+        # This should not raise a validation error
+        metadata = Metadata(**data)
+
+        # Verify that known fields are accessible normally
+        assert metadata.dbt_schema_version == "https://schemas.getdbt.com/dbt/catalog/v1.json"
+        assert metadata.dbt_version == "1.9.0"
+        assert metadata.generated_at == "2025-11-05T10:00:00Z"
+        assert metadata.invocation_id == "test-invocation-123"
+
+    def test_metadata_extra_fields_in_pydantic_extra(self):
+        """Test that extra fields are stored in __pydantic_extra__."""
+        data = {
+            "dbt_version": "1.9.0",
+            "invocation_started_at": "2025-11-05T09:59:00Z",
+            "new_field_1": "value1",
+            "new_field_2": 123,
+        }
+
+        metadata = Metadata(**data)
+
+        # Extra fields should be stored in __pydantic_extra__
+        assert metadata.__pydantic_extra__ is not None
+        assert "invocation_started_at" in metadata.__pydantic_extra__
+        assert "new_field_1" in metadata.__pydantic_extra__
+        assert "new_field_2" in metadata.__pydantic_extra__
+        assert metadata.__pydantic_extra__["invocation_started_at"] == "2025-11-05T09:59:00Z"
+        assert metadata.__pydantic_extra__["new_field_1"] == "value1"
+        assert metadata.__pydantic_extra__["new_field_2"] == 123
+
+    def test_metadata_model_dump_includes_extra_fields(self):
+        """Test that model_dump() includes extra fields."""
+        data = {
+            "dbt_version": "1.9.0",
+            "invocation_id": "test-123",
+            "invocation_started_at": "2025-11-05T09:59:00Z",
+            "future_field": "future_value",
+        }
+
+        metadata = Metadata(**data)
+        dumped = metadata.model_dump()
+
+        # All fields including extra should be in the dump
+        assert dumped["dbt_version"] == "1.9.0"
+        assert dumped["invocation_id"] == "test-123"
+        assert dumped["invocation_started_at"] == "2025-11-05T09:59:00Z"
+        assert dumped["future_field"] == "future_value"
+
+    def test_metadata_with_no_extra_fields(self):
+        """Test that metadata works normally when no extra fields are provided."""
+        data = {
+            "dbt_version": "1.9.0",
+            "generated_at": "2025-11-05T10:00:00Z",
+        }
+
+        metadata = Metadata(**data)
+
+        assert metadata.dbt_version == "1.9.0"
+        assert metadata.generated_at == "2025-11-05T10:00:00Z"
+
+    def test_metadata_with_only_extra_fields(self):
+        """Test that metadata accepts data with only extra fields (all known fields are Optional)."""
+        data = {
+            "some_new_field": "value",
+            "another_new_field": 42,
+        }
+
+        # This should work since all defined fields are Optional
+        metadata = Metadata(**data)
+
+        assert metadata.__pydantic_extra__["some_new_field"] == "value"
+        assert metadata.__pydantic_extra__["another_new_field"] == 42
+
+    def test_invocation_started_at_as_extra_field(self):
+        """Test the specific case of invocation_started_at being handled as an extra field."""
+        # This is the real-world scenario: dbt adds invocation_started_at
+        data = {
+            "dbt_schema_version": "https://schemas.getdbt.com/dbt/catalog/v1.json",
+            "dbt_version": "1.9.0",
+            "generated_at": "2025-11-05T10:00:00Z",
+            "invocation_id": "abc-123-def-456",
+            "invocation_started_at": "2025-11-05T09:55:30.123456Z",
+        }
+
+        # Should not raise ValidationError
+        metadata = Metadata(**data)
+
+        # The field should be accessible via __pydantic_extra__
+        assert metadata.__pydantic_extra__["invocation_started_at"] == "2025-11-05T09:55:30.123456Z"
+
+        # And should be included in model_dump()
+        dumped = metadata.model_dump()
+        assert dumped["invocation_started_at"] == "2025-11-05T09:55:30.123456Z"
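These tests rely on the vendored ``Metadata`` model tolerating unknown keys. In Pydantic v2 that behaviour typically comes from ``model_config = ConfigDict(extra="allow")``, which stores unrecognised keys in ``__pydantic_extra__`` and includes them in ``model_dump()``. The snippet below is a minimal standalone illustration of that pattern; the class shown here is a simplified stand-in for the vendored model, and its field list is assumed rather than taken from this commit::

    from typing import Optional

    from pydantic import BaseModel, ConfigDict

    class Metadata(BaseModel):
        # extra="allow" keeps unknown keys instead of raising ValidationError
        model_config = ConfigDict(extra="allow")

        dbt_schema_version: Optional[str] = None
        dbt_version: Optional[str] = None
        generated_at: Optional[str] = None
        invocation_id: Optional[str] = None

    m = Metadata(dbt_version="1.9.0", invocation_started_at="2025-11-05T09:59:00Z")
    print(m.__pydantic_extra__)                      # {'invocation_started_at': '2025-11-05T09:59:00Z'}
    print(m.model_dump()["invocation_started_at"])   # extra field appears in dumps as well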
