
Conversation

@aisk
Contributor

@aisk aisk commented Oct 27, 2024

aisk and others added 2 commits October 27, 2024 19:19
Member

@sobolevn sobolevn left a comment


LGTM now, I will wait for an MS expert to also approve :)

@sobolevn sobolevn requested a review from a team October 28, 2024 08:26
old = msvcrt.GetErrorMode()
self.addCleanup(msvcrt.SetErrorMode, old)

returned = msvcrt.SetErrorMode(0)
Member

Maybe test it also with a non-zero value, and also with the out-of-range values -1 and 2**32? If this is possible (i.e. if the test does not crash or hang).
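
A rough sketch of what that could look like inside the test above (the exception types for the out-of-range values are an assumption; the rest of the thread questions whether the wrapper validates them at all):

old = msvcrt.GetErrorMode()
self.addCleanup(msvcrt.SetErrorMode, old)  # restore even if an assert fails

# A non-zero value: SetErrorMode replaces the whole mode, so reading it
# back should give exactly the value that was set.
msvcrt.SetErrorMode(msvcrt.SEM_FAILCRITICALERRORS)
self.assertEqual(msvcrt.GetErrorMode(), msvcrt.SEM_FAILCRITICALERRORS)

# Out-of-range values: expected to raise rather than silently truncate.
for bad in (-1, 2**32):
    with self.subTest(bad=bad):
        with self.assertRaises((OverflowError, ValueError)):
            msvcrt.SetErrorMode(bad)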

Contributor Author

This SetErrorMode function has weird behavior: it behaves differently in the test case and in a standalone test script, sometimes returning a different mode than the one that was set. I'm still trying to figure it out and will update it tomorrow. Thank you for the review!

Contributor Author

Figured out the case of SetErrorMode: it actually does not set the value as the current mode, but sets the bit flags on it. I've made the test work for now, but I still have some concerns about it.

Added some details as a separate comment below.

@aisk
Contributor Author

aisk commented Jul 17, 2025

Update: I misunderstood the SetErrorMode API; its mode can be restored after the test. Now this PR is ready for review.
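
(For context, a minimal standalone sketch of the save/restore pattern this relies on; the Win32 SetErrorMode replaces the whole mode and returns the previous one:)

import msvcrt

old = msvcrt.GetErrorMode()  # snapshot the current mode without changing it
try:
    # Replaces the mode outright; the return value is the previous mode.
    msvcrt.SetErrorMode(msvcrt.SEM_NOGPFAULTERRORBOX)
finally:
    msvcrt.SetErrorMode(old)  # replacement semantics make the restore exact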

cc @serhiy-storchaka

The comment below is wrong: Hi, I found that [`SetErrorMode`](https://learn.microsoft.com/en-us/windows/win32/api/errhandlingapi/nf-errhandlingapi-seterrormode) actually sets the value as a bit flag on the current state, and `SetErrorMode(0)` sets it to 0.

I have some code like this:

import msvcrt
import unittest


class TestOther(unittest.TestCase):

    def test_SetErrorMode(self):
        origin = msvcrt.GetErrorMode()
        print(f"origin mode: {origin}")
        msvcrt.SetErrorMode(0)
        current_default = msvcrt.GetErrorMode()
        print(f"current default: {current_default}")
        for v in (msvcrt.SEM_FAILCRITICALERRORS, msvcrt.SEM_NOGPFAULTERRORBOX,
                  msvcrt.SEM_NOALIGNMENTFAULTEXCEPT, msvcrt.SEM_NOOPENFILEERRORBOX):
            if origin & v:
                print("set v:", v)
                msvcrt.SetErrorMode(v)
        self.assertEqual(origin, msvcrt.GetErrorMode())


if __name__ == "__main__":
    unittest.main()

It reads the original mode, sets the current mode to 0, then checks each of the four available bit flags against the original mode and, if the flag was set, sets it again.

Here is the running result:

python .\a.py
origin mode: 32771  (0x1 + 0x2 + 0x8000)
current default: 0
set v: 1 (0x1)
set v: 2 (0x2)
set v: 32768 (0x8000)
F
======================================================================
FAIL: test_SetErrorMode (__main__.TestOther)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Users\xxxxx\Source\cpython\a.py", line 18, in test_SetErrorMode
    self.assertEqual(origin, msvcrt.GetErrorMode())
AssertionError: 32771 != 32768

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=1)

But the final result is not the same as the original mode. There is a difference of 3, so I guess the bit flags 0x1 and 0x2 don't have any effect.
This means we cannot restore the mode to the original value after we set it in the test case. I have no idea why this happens; maybe there are some limitations in the Windows OS. So should we keep this test?

@zooba
Member

zooba commented Jul 29, 2025

These should pretty much all be used in every test run anyway, since we need them to keep failed/erroring tests from hanging due to the various popups that the UCRT causes. I don't think we really gain anything by having additional tests.

I haven't reviewed the test implementations yet (might be a while before I can get to that), but make sure that there's 0% chance of ever failing and leaving the state corrupt here. That's the kind of thing that will make test runs hang instead of abort, which is far more of a problem than not having additional tests for these functions.
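
(The diff above already follows the pattern that gives this guarantee; spelled out as a sketch:)

old = msvcrt.GetErrorMode()                # 1. snapshot first
self.addCleanup(msvcrt.SetErrorMode, old)  # 2. register the restore before any
                                           #    mutation, so it runs even if the
                                           #    test body raises
returned = msvcrt.SetErrorMode(0)          # 3. only now touch process state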

@aisk
Contributor Author

aisk commented Jul 30, 2025

Hi @zooba, do you mean putting the test in a separate process, to avoid corrupting the process's state?
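
(As an illustration of that idea, not something proposed verbatim in the thread: the test suite's script_helper can run the state-changing calls in a child interpreter, whose error mode dies with it:)

from test.support.script_helper import assert_python_ok

code = """if 1:
    import msvcrt
    msvcrt.SetErrorMode(msvcrt.SEM_FAILCRITICALERRORS)
    print(msvcrt.GetErrorMode())
"""
rc, out, err = assert_python_ok('-c', code)
# The parent test process's error mode is never touched.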

@zooba
Member

zooba commented Jul 30, 2025

do you mean putting the test in a separate process, to avoid corrupting the process's state

We need to configure this state correctly for child processes as well, otherwise we risk the same hanging behaviour. So it doesn't make a huge difference.

What I really mean is that we don't need to add these tests, because the functions are always used (when available). If they were going to fail, the test run would never have started; and if they do fail, the entire test run is going to hang (because e.g. test_os will pop up assert dialogs that prevent progress). The situations where we would run these new tests and they provide any useful information are basically non-existent, as far as I can tell.

And if we run the new tests and anything at all goes wrong, then chances are we won't get a failed test, but a broken test run. So it makes it harder to diagnose the problem, because we won't see any info about it.

So I think we should close the PR without merging. But if someone else wants to make an argument for why we should include these tests despite the lack of usefulness and the risk of greater breakage, they should make those arguments and I might be convinced.

@serhiy-storchaka
Member

I think it would be interesting to test these functions at least with out-of-range values. These calls should always fail and should have no side effects. I am not sure that all errors are correctly handled in the implementation, so we could get a successful call instead of an exception.

Of course, we can also test successful calls, if this is safe.

@aisk
Contributor Author

aisk commented Jul 31, 2025

I think there are some read-only functions, like GetErrorMode, which we can at least test.

Another thought: could we add a special resource (for example, violate_process) so that we can test them in a local development environment? I'm not sure about this, because maybe there are some build environments that enable the tests with -uall?

@zooba
Member

zooba commented Sep 15, 2025

These APIs literally just pass a number to the CRT and pass its result back. They don't even attempt to raise errors; they're only incredibly thin wrappers. So we're just testing the CRT implementation, and that isn't really our responsibility (we're not going to fix it, or shim its return values).

And yeah, putting tests that deliberately corrupt test state behind a resource is bad. They need to be in independent processes, and can't really be validated because they involve deliberately failing tests. A standalone script that sets settings and deliberately fails would be fine, but it's not going to work as part of the test suite.

I'm still not convinced this is worth anything to us.

@serhiy-storchaka
Member

A number cannot just be passed from Python to the CRT and back; there has to be some conversion between Python and C. We need to test the wrappers -- how they convert arguments from Python to C, and the result (or error) from C to Python. After looking at the code, I suspect that they do it poorly -- they do not check for integer overflow, do not check the resulting error, and silently truncate integers or interpret signed as unsigned or vice versa. So as a result of this PR, I expect stricter converters and checks to be added to the wrappers.

To perform these tests, we should first pass arguments of the wrong type and out-of-range values. This tests the first half of the wrapper. Then we should pass invalid arguments that can be successfully converted, but for which the CRT function will return an error; if this is possible, we can test the second half. Finally, we can test valid values to check that the success path works as well, although if this has irreversible effects, we cannot do it.
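
A sketch of those layers for SetErrorMode (the exact exception types are assumptions; the point above is precisely that the current wrappers may not raise them yet):

import msvcrt
import unittest


class WrapperConversionTest(unittest.TestCase):

    def test_wrong_type(self):
        # First half of the wrapper: Python -> C argument conversion.
        self.assertRaises(TypeError, msvcrt.SetErrorMode, "0")
        self.assertRaises(TypeError, msvcrt.SetErrorMode, 0.5)

    def test_out_of_range(self):
        # Values no C unsigned int can hold.
        for bad in (-1, 2**32):
            with self.subTest(bad=bad):
                self.assertRaises((OverflowError, ValueError),
                                  msvcrt.SetErrorMode, bad)

    # Second half (C error -> Python exception) needs an argument that
    # converts cleanly but makes the CRT call fail; the thread leaves
    # finding such a value for SetErrorMode open.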


Labels

awaiting merge · skip news · tests (Tests in the Lib/test dir)
