Conversation

augustebaum commented Feb 2, 2026

Closes #186 (feat: "Update output" mode).

augustebaum (Author) commented

Note that this PR was produced using LLMs, with heavy editing after.

augustebaum (Author) commented Feb 2, 2026

Still todo:

  • Check coverage
  • Add changelog entry
  • Write all examples at once; don't stop after the first failing line
  • Deal with the ellipsis case
  • Deal with empty tests
  • Lint
  • Make PyPy tests pass if possible (likely not related to this PR)
  • Test for pathological cases
    • Ellipses definitely go in there (see the sketch below)
    • What else could there be?
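
For context, here is a minimal sketch of why ellipsis is pathological for an update mode. The function and doctest below are illustrative, not taken from this PR:

```python
import uuid

def make_id():
    """
    The expected output is a bare `...`, which under ELLIPSIS-style
    matching acts as a wildcard, so this doctest passes for any value:

    >>> print(make_id())
    ...
    """
    return str(uuid.uuid4())
```

An updater that blindly substitutes the captured output would pin the wildcard to one concrete uuid, making the doctest fail on the very next run; ellipsis lines have to be detected and preserved.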

codecov bot commented Feb 2, 2026

Codecov Report

❌ Patch coverage is 93.97590% with 5 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.57%. Comparing base (b143525) to head (5baa0d3).
⚠️ Report is 15 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #187      +/-   ##
==========================================
+ Coverage   82.07%   82.57%   +0.49%     
==========================================
  Files          28       29       +1     
  Lines        3549     3701     +152     
  Branches      736      775      +39     
==========================================
+ Hits         2913     3056     +143     
- Misses        504      507       +3     
- Partials      132      138       +6     

augustebaum (Author) commented

@Erotemic Feel free to review even if it's still in draft state

Erotemic (Owner) commented Feb 3, 2026

Overall it looks like the right place to add the new code.

I've never been a huge fan of pytest fixtures, but with AI-generated code I think they are fine.

Some of the tests added look like they don't do anything (e.g. test_strip_ansi_codes). You will want to audit those more carefully.

We will definitely need tests for pathological cases. Ideally we should get the linter to pass, but I've not been great at maintaining type annotations, so if the mypy issue is truly out of scope we can ignore it. In a future PR I want to use AI to add proper inline type annotations, but based on my experience with other libraries, you have to go slow; fixing an entire package in one pass isn't feasible at this point.

I'm also not sure what the pypy error is. I'm testing CI here: #188. If it fails there too, then we can conclude it's a different issue.

In this PR you will also need to add a CHANGELOG entry. Add it to 1.3.1 for now.

augustebaum (Author) commented

> I've never been a huge fan of pytest fixtures, but with AI-generated code I think they are fine.

That's fair; in this case it could go either way. The code is clearer with the fixtures, but it wouldn't be hard to inline them, as the sketch below suggests. Let me know.
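
For illustration, the kind of tradeoff in question; the fixture and test names here are hypothetical, not the PR's actual code:

```python
import pytest

@pytest.fixture
def failing_module(tmp_path):
    # Hypothetical arrange step: write a module whose doctest's expected
    # output is wrong, so an "update output" run has something to fix.
    path = tmp_path / 'mod.py'
    path.write_text(
        'def f():\n'
        '    """\n'
        '    >>> f()\n'
        '    2\n'
        '    """\n'
        '    return 1\n'
    )
    return path

def test_failing_module_setup(failing_module):
    # With a fixture the setup lives in one place; inlining it would
    # copy the write_text block into each test body instead.
    assert failing_module.read_text().startswith('def f():')
```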

> Some of the tests added look like they don't do anything (e.g. test_strip_ansi_codes). You will want to audit those more carefully.

It turns out these were actual tests, in the form of doctests. I've moved them to their respective functions (roughly the shape sketched below); sorry for the oversight.
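
A minimal sketch of that shape, assuming a simple SGR-only pattern (the real implementation may cover more escape forms):

```python
import re

# Matches SGR color codes such as '\x1b[31m' (illustrative pattern).
ANSI_SGR = re.compile(r'\x1b\[[0-9;]*m')

def strip_ansi_codes(text):
    """
    Remove ANSI color codes from text.

    Example:
        >>> strip_ansi_codes('\\x1b[31mred\\x1b[0m')
        'red'
    """
    return ANSI_SGR.sub('', text)
```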

> We will definitely need tests for pathological cases.

I'd welcome any ideas you have for this; in the meantime I think I'll test the features manually and see if I like the outputs.

> Ideally we should get the linter to pass, but I've not been great at maintaining type annotations, so if the mypy issue is truly out of scope we can ignore it.

I'm actually not sure why the mypy check failed in my PR but not on main. I do in fact get the same errors on main, locally (ran with `uv run --with mypy mypy ./src/xdoctest`). Either way, I've produced a fix; happy to put it in a separate PR or keep it here.

> I'm also not sure what the pypy error is.

The log feels quite far removed from what we're doing in this PR, so I'll leave it for later (if we deal with it at all).

> In this PR you will also need to add a CHANGELOG entry. Add it to 1.3.1 for now.

You got it, thanks for the reminder.
