Support for python 3.14 #6267
Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@            Coverage Diff             @@
##           master    #6267      +/-   ##
==========================================
+ Coverage   68.24%   68.25%    +0.01%
==========================================
  Files         138      138
  Lines       18815    18815
  Branches     3167     3167
==========================================
+ Hits        12840    12842        +2
+ Misses       5302     5300        -2
  Partials      673      673
```
Adjusted numpy & numba versions to support 3.14.
Force-pushed from 5df13eb to f3c1e31.
Hey - I've found 2 issues, and left some high-level feedback:
- The global `patch(...).start()` call inside `start_server` never gets stopped, which can leak the mock across tests; consider using a context manager or storing the patcher and explicitly stopping it once the server shuts down.
- The change to `MPCClient.get_response` from checking `if force_multi or any(responses)` to `if isinstance(force_multi, int)`, plus filtering `None` in `send_commands`, subtly alters the behavior and number of returned responses; verify that all call sites still behave as expected and that the potential loss of placeholder entries is intentional.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The global `patch(...).start()` call inside `start_server` never gets stopped, which can leak the mock across tests; consider using a context manager or storing the patcher and explicitly stopping it once the server shuts down.
- The change to `MPCClient.get_response` from checking `if force_multi or any(responses)` to `if isinstance(force_multi, int)` plus filtering `None` in `send_commands` subtly alters the behavior and number of returned responses; verify that all call sites still behave as expected and that the potential loss of placeholder entries is intentional.
## Individual Comments
### Comment 1
<location> `test/plugins/test_bpd.py:250-256` </location>
<code_context>
def start_server(args, assigned_port, listener_patch):
"""Start the bpd server, writing the port to `assigned_port`."""
+ # FIXME: This is used in the test_cmd_decoders test. Patch does not apply in
+ # mp.Process anymore (change in 3.14)
+ # There might be a better way to fix this but as I have already spent
+ # way more time here than planned I will keep it as is
+ patch(
+ "beetsplug.bpd.gstplayer.GstPlayer.get_decoders",
+ MagicMock(return_value={"default": ({"audio/mpeg"}, {"mp3"})}),
+ ).start()
+
</code_context>
<issue_to_address>
**suggestion (testing):** Scope the GstPlayer.get_decoders patch to the specific test or server lifetime to avoid surprising cross-test effects and make the 3.14 regression fix more explicit.
Switching from a decorator to a global `patch(...).start()` inside `start_server` fixes the 3.14 multiprocessing issue, but it also:
1. Applies the mock to every use of this helper, not just `test_cmd_decoders`.
2. Starts a patch with no matching `stop()`, which obscures its effective scope.
To keep the fix clear and contained, please either scope this patch to the specific test (e.g., via a fixture/context manager that wraps `test_cmd_decoders` and injects a patched `start_server`), or wrap the patch in a small helper (e.g. `_patch_decoders_for_tests()`) that documents the 3.14 rationale and is easy to remove later.
Also, in `test_cmd_decoders`, consider asserting that the returned value matches the mocked payload (e.g. `{"default": ({"audio/mpeg"}, {"mp3"})}`) so the regression is explicitly tested rather than implied by the setup.
Suggested implementation:
```python
def start_server(args, assigned_port, listener_patch):
"""Start the bpd server, writing the port to `assigned_port`."""
```
```python
import contextlib
from unittest.mock import MagicMock, patch

import confuse
import pytest

bpd = pytest.importorskip("beetsplug.bpd")
@contextlib.contextmanager
def _patch_decoders_for_tests():
"""Temporarily patch GstPlayer.get_decoders for tests affected by Python 3.14 mp change.
This centralizes the regression workaround and ensures the patch is scoped
to a single test/server lifetime.
"""
patcher = patch(
"beetsplug.bpd.gstplayer.GstPlayer.get_decoders",
MagicMock(return_value={"default": ({"audio/mpeg"}, {"mp3"})}),
)
patcher.start()
try:
yield
finally:
patcher.stop()
```
To fully implement the review suggestion, you should also:
1. **Scope the patch to `test_cmd_decoders`:**
- Locate the `test_cmd_decoders` test in `test/plugins/test_bpd.py`.
- Wrap the part that uses `run_bpd()` / `start_server()` with the new context manager:
```python
def test_cmd_decoders(...):
with _patch_decoders_for_tests():
with self.run_bpd() as client:
# existing assertions / commands
...
```
or, if `test_cmd_decoders` is a method on a test class, the same pattern applies inside the method body.
Alternatively, convert `_patch_decoders_for_tests` into a fixture:
```python
@pytest.fixture
def patch_decoders():
with _patch_decoders_for_tests():
yield
```
and then add `patch_decoders` as a parameter to `test_cmd_decoders`.
2. **Add an explicit assertion on the mocked payload:**
- In `test_cmd_decoders`, after invoking the command that returns decoders, assert that the returned value matches the mocked payload, e.g.:
```python
def test_cmd_decoders(..., patch_decoders): # or inside the context manager
...
decoders = client.send_command("decoders")
assert decoders == {"default": ({"audio/mpeg"}, {"mp3"})}
```
Adjust the exact assertion to match how the test currently retrieves and parses the decoder information. The key point is to assert that the command reflects the `{"default": ({"audio/mpeg"}, {"mp3"})}` structure from `_patch_decoders_for_tests()` so the Python 3.14 regression is explicitly covered.
</issue_to_address>
### Comment 2
<location> `test/plugins/test_bpd.py:197-194` </location>
<code_context>
return self.get_response()
- def send_commands(self, *commands):
+ def send_commands(self, *commands: str) -> list[MPCResponse]:
"""Use MPD command batching to send multiple commands at once.
Each item of commands is a tuple containing a command followed by
</code_context>
<issue_to_address>
**question (testing):** Clarify and test the expectation around None responses in send_commands now that filter(None, ...) is used.
Wrapping `get_response(force_multi=...)` in `list(filter(None, ...))` changes the semantics: it now assumes `get_response` may yield `None` values that should be dropped, which can hide unexpected `None`s and desynchronize the count of commands vs. responses.
To make this helper’s behavior explicit and stable for tests, please either:
1. Add a test that calls `send_commands` with a known set of commands and asserts that the response count matches `len(commands)` and that no `None` values are present; or
2. If `None` is a valid outcome, add a test that covers that case and documents which responses may be `None` and why they should be filtered out.
This will lock in the intended semantics and prevent silent behavior changes in the future.
</issue_to_address>Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
```diff
-    def send_command(self, command, *args):
+    def send_command(self, command: str, *args: str) -> MPCResponse:
         request = self.serialise_command(command, *args)
         self.sock.sendall(request)
```
Pull request overview
This PR adds support for Python 3.14 by updating testing infrastructure, refactoring test code to work around multiprocessing patching changes in Python 3.14, and updating dependency version requirements.
- Adds Python 3.14 to the testing matrix and project classifiers
- Refactors `test_bpd.py` to address unittest.mock.patch behavior changes with multiprocessing in Python 3.14
- Updates minimum versions for numpy (2.3.5) and numba (0.63.1) dependencies for Python 3.13+
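A hedged sketch of what such conditional pins can look like in `pyproject.toml` (the exact table layout and marker placement in the PR may differ; the version numbers are taken from the summary above):

```toml
[project]
dependencies = [
    # Newer minimums only where Python 3.13+ needs them:
    "numpy>=2.3.5; python_version >= '3.13'",
    "numba>=0.63.1; python_version >= '3.13'",
]
```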
Reviewed changes
Copilot reviewed 5 out of 7 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| `.github/workflows/ci.yaml` | Adds Python 3.14 to the CI test matrix |
| `pyproject.toml` | Updates Python version support to include 3.14, updates numpy/numba minimum versions, and modifies docstrfmt command with `-pA` flag |
| `test/plugins/test_bpd.py` | Refactors test class from unittest.TestCase to plain class, adds comprehensive type annotations, splits `run_bpd` into `bpd_server` and `bpd_client` methods, and implements workaround for multiprocessing.Process patching issues in Python 3.14 |
| `docs/plugins/playlist.rst` | Fixes RST reference formatting by adding spaces before angle brackets |
| `docs/plugins/duplicates.rst` | Fixes RST reference formatting by adding spaces before angle brackets |
| `docs/changelog.rst` | Updates changelog to document Python 3.14 support and clarifies Python 3.9 EOL status |
The failing test stems from differences in how the patch decorator from unittest works in Python 3.14 in comparison to earlier versions: it no longer patches code running in a multiprocessing.Process. Also generally enhanced type hints in the file.
henry-oberholtzer left a comment:
I'm not super familiar with the bpd plugin, but these changes seem like a fine enough way to get it cleaned up & working with 3.14 - if not a bit clearer to understand than it was before!
Could we please have support for Python 3.14 and refactors in separate PRs?
I don't think it is worth our time to move the refactor into another PR. It was just a happy little side effect of the debugging and improves the readability. If we move it to another PR the scope will change, and I really don't think we should invest much time in the refactor of this specific test file. The only thing required to fix the patch issue is test_bpd.py R250-R257. Let's just remove it if we feel it is an issue 🤷
```python
# code running in mp.Process anymore (change in 3.14)
# There might be a better way to fix this but as I have already spent
# way more time here than planned this seems like the easiest way forward
patch(
```
An alternative would be to just skip the tests on python 3.14.
Although, running tests is better than skipping them imo.
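The skip alternative mentioned above could look roughly like this (the test name and reason string are illustrative, not from the PR):

```python
import sys

import pytest

# Illustrative marker: skip on 3.14+, where unittest.mock.patch no longer
# propagates into code running in a multiprocessing.Process.
requires_mp_patching = pytest.mark.skipif(
    sys.version_info >= (3, 14),
    reason="mock.patch does not apply inside mp.Process on Python 3.14",
)

@requires_mp_patching
def test_cmd_decoders():
    ...  # body elided; the real test lives in test/plugins/test_bpd.py
```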
@semohr, look, I get why you bundled these together - you were debugging and naturally cleaned things up as you went. But I need to push back on this one. The actual Python 3.14 fix is 7 lines: the patch workaround. Here's the issue: right now I can't review "does this work with Python 3.14?" without also reviewing "is this refactoring solid?" And Sourcery's already flagged some concerns that deserve proper attention. Those concerns shouldn't be blocking version support.

Could we just split this? Python 3.14 support goes out this week with the minimal patch fix, and the typing improvements get their own PR where we can actually discuss them properly. The work's already done - it's just cleanly separating two different things.

Regarding the BPD fix, I think the below may do the job:

```diff
 bpd = pytest.importorskip("beetsplug.bpd")
+if hasattr(mp, "set_start_method"):
+    try:
+        mp.set_start_method("fork", force=True)
+    except RuntimeError:
+        # Already set, which is fine
+        pass
+
 class CommandParseTest(unittest.TestCase):
     def test_no_args(self):
```
There is no pushback needed. I get that it is not too related, that's why I said let's remove it if it is inconvenient here. The only thing I'm saying is that, in my opinion, we can spend our time better than on the refactor of this test file. I'm going to push the refactor into another branch for you later and maybe create a draft PR. Feel free to take ownership.

Haven't tested it, but it makes sense if that is the underlying issue. Although we probably want to somehow reset it after running the test to minimise possible future side effects in the testing suite. Let's focus on the more relevant things here: what do we think about the pyacoustid issue/PR linked above? I would like the fix included before we "officially" start to support 3.14, as it will otherwise introduce issues for all 3.14 users (who use the chroma plugin). This might delay things further, as we do not have a proper release setup yet and in general it is not aligned with the other packages under beetbox.
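One way to address the reset concern, sketched as a context manager (a hypothetical helper, not code from the PR):

```python
import contextlib
import multiprocessing as mp


@contextlib.contextmanager
def forced_start_method(method):
    """Temporarily force a multiprocessing start method, restoring the
    previous setting on exit so the rest of the suite is unaffected."""
    previous = mp.get_start_method(allow_none=True)
    mp.set_start_method(method, force=True)
    try:
        yield
    finally:
        # Restoring None resets multiprocessing to its platform default.
        mp.set_start_method(previous, force=True)
```

Wrapped in a module-scoped pytest fixture, this would cover the whole test module while still cleaning up afterwards; note that "fork" is unavailable on Windows, so the method may need to be chosen per platform.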
Can we have Python 3.14 migration and refactors in separate PRs?
Seems like the issue is only present in the tests - and they run fine with this fix.
Potentially a job for a module-level pytest fixture?
I added a comment under that PR. We do indeed want to fix this!
I'm happy to create a PR to address this in the evening :)
Description
This PR adds testing targets for version 3.14, enabling us to verify compatibility with Python 3.14.
closes #6232
I would also like to see this addition included here as it will introduce issues for 3.14 users if not
beetbox/pyacoustid#90
Note:
I refactored the test_bpd.py file slightly to triage why there was an issue running tests on 3.14: it seems like the patch decorator from unittest works differently in Python 3.14 in comparison to earlier versions. It no longer patches code running in a multiprocessing.Process (possibly because of GIL-related changes). I did not find any documentation for this, but I seem to have found a (slightly hacky) fix for it.
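As an aside, one pattern that sidesteps the start-method question entirely (a sketch of an alternative, not the fix used in this PR) is to apply the patch inside the child process itself, since a patch created in the child's own interpreter takes effect regardless of how the process was started:

```python
import multiprocessing as mp
from unittest.mock import patch


def _child(result_queue):
    # The patch is created inside the child's own interpreter, so it is
    # unaffected by fork/spawn/forkserver differences.
    with patch("os.getpid", return_value=-1):
        import os
        result_queue.put(os.getpid())


def run_patched_child():
    # Explicit "fork" context (POSIX-only) keeps this example deterministic;
    # the patch-inside-the-child pattern itself works for any start method.
    ctx = mp.get_context("fork")
    queue = ctx.Queue()
    proc = ctx.Process(target=_child, args=(queue,))
    proc.start()
    result = queue.get()  # drain before join to avoid queue-flush deadlock
    proc.join()
    return result
```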