
Conversation

@semohr
Contributor

@semohr semohr commented Jan 5, 2026

Description

This PR adds testing targets for Python 3.14, enabling us to verify compatibility with it.

closes #6232

I would also like to see beetbox/pyacoustid#90 included as part of this effort, as it will otherwise introduce issues for 3.14 users.

Note:
I refactored the test_bpd.py file slightly while triaging why tests failed on 3.14: the patch decorator from unittest works differently in Python 3.14 compared to earlier versions. It no longer patches code running in a multiprocessing.Process (possibly because of GIL-related changes). I did not find any documentation for this, but I seem to have found a (slightly hacky) fix for it.
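The failure mode and workaround can be sketched in isolation. This is a toy example, not the actual bpd code: `os.cpu_count` stands in for `GstPlayer.get_decoders`, and `serve` stands in for the server entry point.

```python
import multiprocessing as mp
from unittest.mock import MagicMock, patch


def serve(queue):
    # Toy stand-in for the bpd server entry point. The patch is started
    # *inside* the function that runs in the child process: a decorator
    # patch applied in the parent is not inherited when the child is
    # created with the "spawn" start method, because spawn re-imports
    # every module from scratch.
    patcher = patch("os.cpu_count", MagicMock(return_value=999))
    patcher.start()
    try:
        import os
        queue.put(os.cpu_count())  # sees the mocked value
    finally:
        patcher.stop()  # keep the mock scoped to the child's lifetime


def run_patched_child():
    # "fork" keeps this sketch self-contained on POSIX; the
    # patch-inside-the-child pattern works under any start method.
    ctx = mp.get_context("fork")
    queue = ctx.Queue()
    proc = ctx.Process(target=serve, args=(queue,))
    proc.start()
    result = queue.get()
    proc.join()
    return result
```

Starting and stopping the patcher inside the child keeps the mock scoped to that process, which is the same idea as the `patch(...).start()` call added to `start_server` in this PR.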

@codecov

codecov bot commented Jan 5, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 68.25%. Comparing base (ea2e7bf) to head (d82b1fa).
⚠️ Report is 23 commits behind head on master.
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #6267      +/-   ##
==========================================
+ Coverage   68.24%   68.25%   +0.01%     
==========================================
  Files         138      138              
  Lines       18815    18815              
  Branches     3167     3167              
==========================================
+ Hits        12840    12842       +2     
+ Misses       5302     5300       -2     
  Partials      673      673              

see 1 file with indirect coverage changes


@semohr

This comment was marked as outdated.

@semohr semohr force-pushed the bump-3_14 branch 3 times, most recently from 5df13eb to f3c1e31 on January 5, 2026 at 23:21
@semohr semohr marked this pull request as ready for review January 5, 2026 23:29
@semohr semohr requested a review from a team as a code owner January 5, 2026 23:29
Copilot AI review requested due to automatic review settings January 5, 2026 23:29
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues, and left some high level feedback:

  • The global patch(...).start() call inside start_server never gets stopped, which can leak the mock across tests; consider using a context manager or storing the patcher and explicitly stopping it once the server shuts down.
  • The change to MPCClient.get_response from checking if force_multi or any(responses) to if isinstance(force_multi, int) plus filtering None in send_commands subtly alters the behavior and number of returned responses; verify that all call sites still behave as expected and that the potential loss of placeholder entries is intentional.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The global `patch(...).start()` call inside `start_server` never gets stopped, which can leak the mock across tests; consider using a context manager or storing the patcher and explicitly stopping it once the server shuts down.
- The change to `MPCClient.get_response` from checking `if force_multi or any(responses)` to `if isinstance(force_multi, int)` plus filtering `None` in `send_commands` subtly alters the behavior and number of returned responses; verify that all call sites still behave as expected and that the potential loss of placeholder entries is intentional.

## Individual Comments

### Comment 1
<location> `test/plugins/test_bpd.py:250-256` </location>
<code_context>
 def start_server(args, assigned_port, listener_patch):
     """Start the bpd server, writing the port to `assigned_port`."""

+    # FIXME: This is used in the test_cmd_decoders test. Patch does not apply in
+    # mp.Process anymore (change in 3.14)
+    # There might be a better way to fix this but as I have already spent
+    # way more time here than planned I will keep it as is
+    patch(
+        "beetsplug.bpd.gstplayer.GstPlayer.get_decoders",
+        MagicMock(return_value={"default": ({"audio/mpeg"}, {"mp3"})}),
+    ).start()
+
</code_context>

<issue_to_address>
**suggestion (testing):** Scope the GstPlayer.get_decoders patch to the specific test or server lifetime to avoid surprising cross-test effects and make the 3.14 regression fix more explicit.

Switching from a decorator to a global `patch(...).start()` inside `start_server` fixes the 3.14 multiprocessing issue, but it also:

1. Applies the mock to every use of this helper, not just `test_cmd_decoders`.
2. Starts a patch with no matching `stop()`, which obscures its effective scope.

To keep the fix clear and contained, please either scope this patch to the specific test (e.g., via a fixture/context manager that wraps `test_cmd_decoders` and injects a patched `start_server`), or wrap the patch in a small helper (e.g. `_patch_decoders_for_tests()`) that documents the 3.14 rationale and is easy to remove later.

Also, in `test_cmd_decoders`, consider asserting that the returned value matches the mocked payload (e.g. `{"default": ({"audio/mpeg"}, {"mp3"})}`) so the regression is explicitly tested rather than implied by the setup.

Suggested implementation:

```python
def start_server(args, assigned_port, listener_patch):
    """Start the bpd server, writing the port to `assigned_port`."""

```

```python
import confuse
from unittest.mock import MagicMock, patch
import contextlib

bpd = pytest.importorskip("beetsplug.bpd")


@contextlib.contextmanager
def _patch_decoders_for_tests():
    """Temporarily patch GstPlayer.get_decoders for tests affected by Python 3.14 mp change.

    This centralizes the regression workaround and ensures the patch is scoped
    to a single test/server lifetime.
    """
    patcher = patch(
        "beetsplug.bpd.gstplayer.GstPlayer.get_decoders",
        MagicMock(return_value={"default": ({"audio/mpeg"}, {"mp3"})}),
    )
    patcher.start()
    try:
        yield
    finally:
        patcher.stop()

```

To fully implement the review suggestion, you should also:

1. **Scope the patch to `test_cmd_decoders`:**
   - Locate the `test_cmd_decoders` test in `test/plugins/test_bpd.py`.
   - Wrap the part that uses `run_bpd()` / `start_server()` with the new context manager:

   ```python
   def test_cmd_decoders(...):
       with _patch_decoders_for_tests():
           with self.run_bpd() as client:
               # existing assertions / commands
               ...
   ```

   or, if `test_cmd_decoders` is a method on a test class, the same pattern applies inside the method body.

   Alternatively, convert `_patch_decoders_for_tests` into a fixture:

   ```python
   @pytest.fixture
   def patch_decoders():
       with _patch_decoders_for_tests():
           yield
   ```

   and then add `patch_decoders` as a parameter to `test_cmd_decoders`.

2. **Add an explicit assertion on the mocked payload:**
   - In `test_cmd_decoders`, after invoking the command that returns decoders, assert that the returned value matches the mocked payload, e.g.:

   ```python
   def test_cmd_decoders(..., patch_decoders):  # or inside the context manager
       ...
       decoders = client.send_command("decoders")
       assert decoders == {"default": ({"audio/mpeg"}, {"mp3"})}
   ```

   Adjust the exact assertion to match how the test currently retrieves and parses the decoder information. The key point is to assert that the command reflects the `{"default": ({"audio/mpeg"}, {"mp3"})}` structure from `_patch_decoders_for_tests()` so the Python 3.14 regression is explicitly covered.
</issue_to_address>

### Comment 2
<location> `test/plugins/test_bpd.py:197-194` </location>
<code_context>
         return self.get_response()

-    def send_commands(self, *commands):
+    def send_commands(self, *commands: str) -> list[MPCResponse]:
         """Use MPD command batching to send multiple commands at once.
         Each item of commands is a tuple containing a command followed by
</code_context>

<issue_to_address>
**question (testing):** Clarify and test the expectation around None responses in send_commands now that filter(None, ...) is used.

Wrapping `get_response(force_multi=...)` in `list(filter(None, ...))` changes the semantics: it now assumes `get_response` may yield `None` values that should be dropped, which can hide unexpected `None`s and desynchronize the count of commands vs. responses.

To make this helper’s behavior explicit and stable for tests, please either:

1. Add a test that calls `send_commands` with a known set of commands and asserts that the response count matches `len(commands)` and that no `None` values are present; or
2. If `None` is a valid outcome, add a test that covers that case and documents which responses may be `None` and why they should be filtered out.

This will lock in the intended semantics and prevent silent behavior changes in the future.
</issue_to_address>
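
The semantic shift can be illustrated with a self-contained toy model (hypothetical helpers, not the real MPCClient):

```python
def get_response(raw, force_multi=None):
    # Toy stand-in for get_response: returns the full list when
    # batching, where entries may be None placeholders for commands
    # without a payload.
    return list(raw) if force_multi is not None else raw[0]


def send_commands_old(raw):
    # Old behaviour: placeholders preserved, so the response count
    # always matches the command count.
    return get_response(raw, force_multi=len(raw))


def send_commands_new(raw):
    # New behaviour: filter(None, ...) drops every falsy entry (None,
    # but also e.g. empty responses), so the response count can
    # silently diverge from the command count.
    return list(filter(None, get_response(raw, force_multi=len(raw))))
```

For a batch like `["a", None, "b"]`, the old helper returns three entries and the new one returns two, which is exactly the desynchronisation the comment warns about.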


-    def send_command(self, command, *args):
+    def send_command(self, command: str, *args: str) -> MPCResponse:
         request = self.serialise_command(command, *args)
         self.sock.sendall(request)
Contributor

Copilot AI left a comment


Pull request overview

This PR adds support for Python 3.14 by updating testing infrastructure, refactoring test code to work around multiprocessing patching changes in Python 3.14, and updating dependency version requirements.

  • Adds Python 3.14 to the testing matrix and project classifiers
  • Refactors test_bpd.py to address unittest.mock.patch behavior changes with multiprocessing in Python 3.14
  • Updates minimum versions for numpy (2.3.5) and numba (0.63.1) dependencies for Python 3.13+

Reviewed changes

Copilot reviewed 5 out of 7 changed files in this pull request and generated 7 comments.

Summary per file:
  • .github/workflows/ci.yaml: Adds Python 3.14 to the CI test matrix
  • pyproject.toml: Updates Python version support to include 3.14, updates numpy/numba minimum versions, and modifies the docstrfmt command with the -pA flag
  • test/plugins/test_bpd.py: Refactors the test class from unittest.TestCase to a plain class, adds comprehensive type annotations, splits run_bpd into bpd_server and bpd_client methods, and implements a workaround for multiprocessing.Process patching issues in Python 3.14
  • docs/plugins/playlist.rst: Fixes RST reference formatting by adding spaces before angle brackets
  • docs/plugins/duplicates.rst: Fixes RST reference formatting by adding spaces before angle brackets
  • docs/changelog.rst: Updates the changelog to document Python 3.14 support and clarifies Python 3.9 EOL status

semohr added 3 commits January 7, 2026 12:09
The failing test stems from differences in how the patch decorator
from unittest works in Python 3.14 compared to earlier versions. It
does not patch code running in a multiprocessing.Process anymore.

Also generally enhanced type hints in the file.
@JOJ0 JOJ0 added the core Pull requests that modify the beets core `beets` label Jan 10, 2026
Member

@henry-oberholtzer henry-oberholtzer left a comment


I'm not super familiar with the bpd plugin, but these changes seem like a fine enough way to get it cleaned up & working with 3.14 - if not a bit clearer to understand than it was before!

Member

@snejus snejus left a comment


Could we please have support for Python 3.14 and refactors in separate PRs?

@semohr
Contributor Author

semohr commented Jan 13, 2026

I don't think it is worth our time to move the refactor into another PR. It was just a happy little side effect of the debugging, and it improves readability. If we move it to another PR, the scope will change, and I really do not think we should invest much time in the refactor of this specific test file.

The only thing required to fix the patch issue is test_bpd.py R250-R257.

Let's just remove it if we feel it is an issue 🤷

# code running in mp.Process anymore (change in 3.14)
# There might be a better way to fix this but as I have already spent
# way more time here than planned this seems like the easiest way forward
patch(
Contributor Author

An alternative would be to just skip the tests on Python 3.14.
Although, running tests is better than skipping them imo.

@snejus
Member

snejus commented Jan 14, 2026

@semohr, look, I get why you bundled these together - you were debugging and naturally cleaned things up as you went. But I need to push back on this one.

The actual Python 3.14 fix is 7 lines: the patch workaround in start_server. Everything else in test_bpd.py - the type hints, splitting run_bpd, the logic changes in send_commands - that's refactoring work. Useful refactoring, but not required for 3.14 support.

Here's the issue: right now I can't review "does this work with Python 3.14?" without also reviewing "is this refactoring solid?" And Sourcery's already flagged some concerns that deserve proper attention. Those concerns shouldn't be blocking version support.

Could we just split this? Python 3.14 support goes out this week with the minimal patch fix, and the typing improvements get their own PR where we can actually discuss them properly. The work's already done - it's just cleanly separating two different things.

Regarding the BPD fix, I think below may do the job in test_bpd.py?

bpd = pytest.importorskip("beetsplug.bpd")

+if hasattr(mp, "set_start_method"):
+    try:
+        mp.set_start_method("fork", force=True)
+    except RuntimeError:
+        # Already set, which is fine
+        pass
+ 

class CommandParseTest(unittest.TestCase):
    def test_no_args(self):

@semohr
Contributor Author

semohr commented Jan 14, 2026

There is no pushback needed. I get that it is not too related; that's why I said let's remove it if it is inconvenient here. The only thing I'm saying is, in my opinion, we can spend our time better than on the refactor of this test file.

I'm going to push the refactor to another branch for you later and maybe create a draft PR. Feel free to take ownership.

Regarding the BPD fix, I think below may do the job in test_bpd.py? ...

Haven't tested it, but it makes sense if that is the underlying issue. Although we probably want to somehow reset it after running the test, to minimise possible future side effects in the testing suite.


Let's focus on the more relevant things here: what do we think about the pyacoustid issue/PR linked above?

I would like the fix included before we "officially" start to support 3.14, as it will otherwise introduce issues for all 3.14 users who use the chroma plugin. This might delay things further, as we do not have a proper release setup yet and in general it is not aligned with the other packages under beetbox.

@snejus
Member

snejus commented Jan 14, 2026

Can we have Python 3.14 migration and refactors in separate PRs?

Haven't tested it but makes sense if that is the underlying issue.

Seems like the issue is only present in the tests - and they run fine with this fix. beet bpd seems to run fine.

Although we probably want to somehow reset it after running the test, to minimise possible future side effects in the testing suite.

Potentially a job for a module-level pytest fixture?
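
Such a reset could be sketched roughly as follows (hypothetical, untested against the beets suite; shown as a plain context manager that a pytest fixture could wrap):

```python
import contextlib
import multiprocessing as mp


@contextlib.contextmanager
def fork_start_method():
    # Force the "fork" start method so parent-side patches remain
    # visible in child processes, then restore the previous setting to
    # avoid side effects on the rest of the test suite.
    previous = mp.get_start_method(allow_none=True)
    mp.set_start_method("fork", force=True)
    try:
        yield
    finally:
        # Passing None with force=True resets to the platform default.
        mp.set_start_method(previous, force=True)
```

In pytest this could become a module-scoped autouse fixture, e.g. a `@pytest.fixture(scope="module", autouse=True)` function whose body is `with fork_start_method(): yield`.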

Let's focus on the more relevant things here: what do we think about the pyacoustid issue/PR linked above?

I added a comment under that PR. We do indeed want to fix this!

This might delay this further as we do not have a proper release setup yet and in general it is not aligned with the other packages under beetbox.

I'm happy to create a PR to address this in the evening :)


Labels

core Pull requests that modify the beets core `beets`

Projects

None yet

Development

Successfully merging this pull request may close these issues.

tracking: python 3.14 support

5 participants