Allow custom metadata for replay file #82
Conversation
Refactor replay metadata handling with dataclasses. Introduced `ReplayTestMetadata` dataclass for cleaner metadata management. Updated serialization and deserialization logic to use the new dataclass. Added new tests to validate metadata handling and functionality.
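The dataclass itself is not quoted in this thread; a minimal sketch of what such a `ReplayTestMetadata` could look like, with field names assumed from the JSON lines quoted later in the review (this is an illustration, not the actual pytest-replay source):

```python
import json
from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional


@dataclass
class ReplayTestMetadata:
    """One per-test entry written to the replay file (field names assumed)."""

    nodeid: str
    start: Optional[float] = None
    finish: Optional[float] = None
    outcome: Optional[str] = None
    metadata: Dict[str, Any] = field(default_factory=dict)

    def to_json(self) -> str:
        # Skip unset fields so replay lines stay compact.
        data = {k: v for k, v in asdict(self).items() if v not in (None, {})}
        return json.dumps(data)


# Round-trip: deserialize a replay line back into the dataclass.
line = '{"nodeid": "test_1.py::test_bar", "start": 3.0, "outcome": "passed"}'
meta = ReplayTestMetadata(**json.loads(line))
```

Deserialization is just `ReplayTestMetadata(**json.loads(line))`, which is why unknown keys in a replay line would raise a `TypeError` under this sketch.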
Replaced PEP 604 union syntax (|) with Optional for type hints to maintain compatibility with Python versions earlier than 3.10.
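For illustration, the kind of change this involves (the function and field here are hypothetical, not quoted from the diff):

```python
from typing import Optional

# Python 3.10+ allows the PEP 604 spelling:
#     def fmt(value: float | None) -> str: ...
# Optional[float] is the equivalent that also works on 3.8/3.9:
def fmt(value: Optional[float]) -> str:
    return "unset" if value is None else f"{value:.1f}"
```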
Force-pushed from ad55871 to a0dae93
This is ready for review, please.
nicoddemus
left a comment
Great work!
Left some comments, please take a look.
src/pytest_replay/__init__.py
Outdated
node_metadata = json.loads(stripped)
nodeid = node_metadata["nodeid"]
if "finish" in node_metadata:
    self.nodes[nodeid] = ReplayTestMetadata(**node_metadata)
I believe that if somehow the test crashes before the finish, your metadata would not be saved, but it could be relevant to reproducing the crash.
You possibly want to emit "update metadata" instructions to the replay file so they could be loaded back to reproduce the problem.
Something along the lines of:
{"nodeid": "test_1.py::test_bar", "start": 3.0}
{"nodeid": "test_1.py::test_bar", "start": 3.0, "metadata": {"a": "asd"}}
{"nodeid": "test_1.py::test_bar", "start": 3.0, "metadata": {"a": "asd", "b": 7}}
{"nodeid": "test_1.py::test_bar", "start": 3.0, "metadata": {"a": "asd", "b": 7}, "finish": 4.0, "outcome": "passed"}
There we see the metadata evolving.
And it probably could help pinpoint the crash source if the output is just:
{"nodeid": "test_1.py::test_bar", "start": 3.0}
{"nodeid": "test_1.py::test_bar", "start": 3.0, "metadata": {"a": "asd"}}
(a crash caused some lines to not be present)
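The reader side of this suggestion could merge the evolving lines by letting the last entry for each nodeid win. A sketch under that assumption (this is not pytest-replay's actual parsing code):

```python
import json
from typing import Any, Dict, Iterable


def load_replay(lines: Iterable[str]) -> Dict[str, Dict[str, Any]]:
    """Keep only the most recent entry per nodeid; later lines supersede
    earlier ones, so a crash simply leaves the last state that was flushed."""
    nodes: Dict[str, Dict[str, Any]] = {}
    for line in lines:
        entry = json.loads(line)
        nodes[entry["nodeid"]] = entry
    return nodes


# Crash scenario: the "finish"/"outcome" line was never written.
lines = [
    '{"nodeid": "test_1.py::test_bar", "start": 3.0}',
    '{"nodeid": "test_1.py::test_bar", "start": 3.0, "metadata": {"a": "asd"}}',
]
state = load_replay(lines)
# The partial metadata survives and hints at where the crash happened.
```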
Thanks for the feedback! You're absolutely right, and this is something I already noted in the issue under "Limitations". In my specific scenario, it's quite rare for tests to crash; they usually just fail, so the current approach works well enough for my use case. However, I do see the value in capturing incremental metadata.
To fully address this broader scenario, we'd likely need to track all setups and all teardowns associated with a test, ensuring the metadata reflects each setup and teardown, since a crash could happen at any point.
For now, I'd prefer to move forward with the current implementation, but I completely agree that your suggestion would be the ideal enhancement. It's definitely something worth considering for future improvements. Could you create an issue about it so we can tackle it in the future, or when someone sees value in it, please?
Co-authored-by: Bruno Oliveira <bruno@soliv.dev>
Thank you for your feedback! I have addressed all your comments in the review; could you please take a second look? Thanks!
nicoddemus
left a comment
Nice, thanks!
After @prusse-martin's approval we can make a new release.
Allow custom metadata in replay files
Refactor replay metadata handling with dataclasses. Introduced `ReplayTestMetadata` dataclass for cleaner metadata management. Updated serialization and deserialization logic to use the new dataclass. Added new tests to validate metadata handling and functionality.

Closes #78