gh-126028: Add more tests for msvcrt module #126029
base: main
Conversation
Co-authored-by: Tomas R. <[email protected]>
Co-authored-by: sobolevn <[email protected]>
LGTM now, I will wait for an MS expert to also approve :)
Lib/test/test_msvcrt.py (Outdated)

    old = msvcrt.GetErrorMode()
    self.addCleanup(msvcrt.SetErrorMode, old)

    returned = msvcrt.SetErrorMode(0)
Maybe test it also with a non-zero value, and also with the out-of-range values -1 and 2**32? If this is possible (if the test does not crash or hang).
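(A minimal sketch of what that suggestion could look like, assuming the original mode is restored via addCleanup and that out-of-range values are rejected by the argument converter; none of this is confirmed behaviour:)

    import msvcrt
    import unittest

    class SetErrorModeValueTests(unittest.TestCase):
        def test_set_error_mode_values(self):
            old = msvcrt.GetErrorMode()
            self.addCleanup(msvcrt.SetErrorMode, old)

            # A documented non-zero flag value.
            msvcrt.SetErrorMode(msvcrt.SEM_FAILCRITICALERRORS)

            # Out-of-range values; assumed (not verified) to be rejected
            # rather than silently truncated.
            for bad in (-1, 2**32):
                with self.subTest(bad=bad):
                    self.assertRaises((OverflowError, ValueError),
                                      msvcrt.SetErrorMode, bad)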
The SetErrorMode function has weird behavior: it behaves differently in the test case than in a standalone test script, and it sometimes returns a different mode than the one that was set. I'm still trying to figure it out and will update it tomorrow. Thank you for the review!
I figured out the SetErrorMode case and found that it does not actually set the current value directly; it sets the bit flags on it. I've made the test work for now, but I still have some concerns about it.
I added some details as a separate comment below.
Updated: I misunderstood; the comment below is wrong.

Hi, I found that [`SetErrorMode`](https://learn.microsoft.com/en-us/windows/win32/api/errhandlingapi/nf-errhandlingapi-seterrormode) actually sets the value as a bit flag on the current state, and `SetErrorMode(0)` sets it to 0. I have some code like this:

    import msvcrt
    import unittest

    class TestOther(unittest.TestCase):
        def test_SetErrorMode(self):
            origin = msvcrt.GetErrorMode()
            print(f"origin mode: {origin}")
            msvcrt.SetErrorMode(0)
            current_default = msvcrt.GetErrorMode()
            print(f"current default: {current_default}")
            for v in (msvcrt.SEM_FAILCRITICALERRORS, msvcrt.SEM_NOGPFAULTERRORBOX,
                      msvcrt.SEM_NOALIGNMENTFAULTEXCEPT, msvcrt.SEM_NOOPENFILEERRORBOX):
                if origin & v:
                    print("set v:", v)
                    msvcrt.SetErrorMode(v)
            self.assertEqual(origin, msvcrt.GetErrorMode())

    if __name__ == "__main__":
        unittest.main()

It reads the original mode, sets the current mode to 0, checks each of the 4 available bit flags against the original mode, and sets each flag that was present. When I run it, the final result is not the same as the original mode: there is a difference of 3, so I guess the bit flags 0x1 and 0x2 don't have any effect.
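(Side note, my own sketch rather than anything from the PR: XOR-ing the mode you set against what GetErrorMode() reports shows exactly which bits were dropped; a difference of 3 would correspond to SEM_FAILCRITICALERRORS (0x1) plus SEM_NOGPFAULTERRORBOX (0x2).)

    # Hypothetical diagnostic: re-apply the full original mode in one call
    # and report which SEM_* bits do not read back.
    import msvcrt

    origin = msvcrt.GetErrorMode()
    msvcrt.SetErrorMode(origin)          # one call with the combined flags
    dropped = origin ^ msvcrt.GetErrorMode()
    for name in ("SEM_FAILCRITICALERRORS", "SEM_NOGPFAULTERRORBOX",
                 "SEM_NOALIGNMENTFAULTEXCEPT", "SEM_NOOPENFILEERRORBOX"):
        flag = getattr(msvcrt, name)
        if dropped & flag:
            print(f"{name} ({flag:#x}) did not stick")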
These should pretty much all be used in every test run anyway, since we need them to keep failed/errored tests from hanging due to the various popups that the UCRT causes. I don't think we really gain anything by having additional tests.

I haven't reviewed the test implementations yet (it might be a while before I can get to that), but make sure that there's 0% chance of ever failing and leaving the state corrupted here. That's the kind of thing that will make test runs hang instead of abort, which is far more of a problem than not having additional tests for these functions.
Hi @zooba, do you mean putting the test in a separate process to avoid corrupting the process's state?
We need to configure this state correctly for child processes as well, otherwise we risk the same hanging behaviour. So it doesn't make a huge difference.

What I really mean is that we don't need to add these tests, because the functions are always used (when available), so if they were going to fail then the test run would never have started, and if they do fail, the entire test run is going to hang (because e.g. …).

And if we run the new tests and anything at all goes wrong, then chances are we won't get a failed test, but a broken test run. That makes it harder to diagnose the problem, because we won't see any info about it.

So I think we should close the PR without merging. But if someone else wants to make an argument for why we should include these tests despite the lack of usefulness and the risk of greater breakage, they should make those arguments and I might be convinced.
I think that it would be interesting to test these functions at least with out-of-range values. Such calls should always fail and should not have side effects. I am not sure that all errors are correctly handled in the implementation, so we could get a successful call instead of an exception. Of course, we can also test successful calls if that is safe.
I think there are some read-only functions, like …, that could still be tested. Another thought is: can we add a special resource (for example, …) and run the state-changing tests only when that resource is explicitly enabled?
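(If the resource route were taken, a sketch might look like the following; the resource name "msvcrt_state" is made up for illustration and would also need to be registered with regrtest's -u handling:)

    # Hypothetical: gate the state-mutating checks behind an opt-in resource,
    # e.g. run with:  python -m test test_msvcrt -u msvcrt_state
    import unittest
    from test import support

    class MutatingErrorModeTests(unittest.TestCase):
        def setUp(self):
            # Skips (via ResourceDenied) unless the resource is enabled.
            support.requires('msvcrt_state')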
These APIs literally just pass a number to the CRT and pass its result back. They don't even attempt to raise errors; they're only incredibly thin wrappers. So we're just testing the CRT implementation, and that isn't really our responsibility (we're not going to fix them, or shim their return values).

And yeah, putting tests that deliberately corrupt test state behind a resource is bad. They need to be in independent processes, and can't really be validated because they involve deliberately failing tests. A standalone script that sets settings and deliberately fails would be fine, but it's not going to work as part of the test suite.

I'm still not convinced this is worth anything to us.
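(For the "independent processes" idea mentioned above, a minimal sketch using CPython's existing test.support.script_helper could look like this; the exact code run in the child interpreter is illustrative only:)

    # Sketch: exercise SetErrorMode in a throwaway child interpreter so a
    # failure cannot leave the main test process with a corrupted error mode.
    import unittest
    from test.support.script_helper import assert_python_ok

    class ChildProcessErrorModeTests(unittest.TestCase):
        def test_set_error_mode_in_child(self):
            code = (
                "import msvcrt\n"
                "old = msvcrt.GetErrorMode()\n"
                "msvcrt.SetErrorMode(old | msvcrt.SEM_NOOPENFILEERRORBOX)\n"
                "assert msvcrt.GetErrorMode() & msvcrt.SEM_NOOPENFILEERRORBOX\n"
            )
            assert_python_ok('-c', code)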
A number cannot just be passed from Python to the CRT and back. There has to be some conversion between Python and C. We need to test the wrappers -- how they convert arguments from Python to C and the result (or error) from C back to Python. After looking at the code, I suspect that they do it poorly: they do not check for integer overflow, do not check the resulting error, and silently truncate integers or interpret signed as unsigned or vice versa. So as a result of this PR I expect stricter converters and checks to be added to the wrappers.

To perform these tests, we should first pass arguments of the wrong type and out-of-range values. This tests the first half of the wrapper. Then, we should pass invalid arguments which can be successfully converted, but for which the CRT function will return an error; if this is possible, we can test the second part. Finally, we can test valid values to check that the success path works as well, but if this has irreversible effects, we cannot do this.
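(Translating that strategy into test shape might look roughly like the sketch below; the exception types are assumptions, since the point of the comment is that the current wrappers may not perform these checks yet:)

    import msvcrt
    import unittest

    class WrapperConversionTests(unittest.TestCase):
        def test_wrong_argument_type(self):
            # First half of the wrapper: Python -> C argument conversion.
            self.assertRaises(TypeError, msvcrt.SetErrorMode, "0")
            self.assertRaises(TypeError, msvcrt.get_osfhandle, 1.5)

        def test_crt_error_is_reported(self):
            # Valid type, but a value the CRT should reject; assumed to
            # surface as OSError rather than a bogus handle.
            self.assertRaises(OSError, msvcrt.get_osfhandle, -1)

        def test_success_path(self):
            # Second half: the C result converted back to a Python int.
            old = msvcrt.GetErrorMode()
            self.addCleanup(msvcrt.SetErrorMode, old)
            self.assertIsInstance(msvcrt.SetErrorMode(old), int)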
msvcrt don't have tests #126028