Use a float for the tolerance in the timer tests #347
Conversation
The `event_loop` fixture is deprecated and `event_loop_policy` should be used instead. The option `asyncio_default_fixture_loop_scope = "function"` is also added to `pyproject.toml`, as relying on the default is also deprecated.

Signed-off-by: Leandro Lucarella <[email protected]>
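A minimal sketch of that migration, assuming pytest-asyncio ≥ 0.24; the fixture below simply returns the default policy, whereas the repository may use a custom one:

```python
# conftest.py -- sketch: override `event_loop_policy` instead of the
# deprecated `event_loop` fixture (assumes pytest-asyncio >= 0.24).
import asyncio

import pytest


@pytest.fixture(scope="session")
def event_loop_policy() -> asyncio.AbstractEventLoopPolicy:
    """Return the event loop policy pytest-asyncio uses to create test loops."""
    return asyncio.DefaultEventLoopPolicy()


# And the pyproject.toml option referenced in the commit message:
#
#   [tool.pytest.ini_options]
#   asyncio_default_fixture_loop_scope = "function"
```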
Tests failed again when queuing a PR, so here is another attempt.
Hypothesis usually tells you how to reproduce the error by temporarily adding @reproduce_failure({PARAMETERS, FOR, THE, TEST, HERE}) as a decorator on the test case. Have you tried that to be sure the patch solves the issue?
Good tip. I actually validated it manually, but for a previous attempt and forgot to do it with the new approach. I will check with the decorator 👍 💯
FYI, this was the failure:

I see that in this case Hypothesis only mentioned how to reproduce the error/falsify the example.
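For reference, applying that decorator looks roughly like this; the version string and blob are placeholders (paste the exact line Hypothesis prints), and the test signature is simplified:

```python
import hypothesis
from hypothesis import strategies as st


# Placeholder values: copy the exact `@reproduce_failure(...)` line from the
# Hypothesis output, then remove it again once the failure is fixed.
@hypothesis.reproduce_failure("6.98.0", b"AXicY2BgZAACAAAMAAI=")
@hypothesis.given(tolerance=st.integers(min_value=0))
def test_policy_skip_missed_and_drift(tolerance: int) -> None:
    ...
```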
    
When using an `int`, we need to do a double conversion, first to `float` and then back to `int`; due to rounding errors, this leads to inconsistencies between the expected and actual values.
This is an example failure:
```
______________________ test_policy_skip_missed_and_drift _______________________
    @hypothesis.given(
>       tolerance=st.integers(min_value=0, max_value=_max_timedelta_microseconds),
        **_calculate_next_tick_time_args,
    )
tests/test_timer.py:148:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tolerance = 171726190479152817, now = 171726190479152817
scheduled_tick_time = -1, interval = 1
    @hypothesis.given(
        tolerance=st.integers(min_value=0, max_value=_max_timedelta_microseconds),
        **_calculate_next_tick_time_args,
    )
    def test_policy_skip_missed_and_drift(
        tolerance: int, now: int, scheduled_tick_time: int, interval: int
    ) -> None:
        """Test the SkipMissedAndDrift policy."""
        hypothesis.assume(now >= scheduled_tick_time)
        next_tick_time = SkipMissedAndDrift(
            delay_tolerance=timedelta(microseconds=tolerance)
        ).calculate_next_tick_time(
            now=now, interval=interval, scheduled_tick_time=scheduled_tick_time
        )
        if tolerance < interval:
            assert next_tick_time > now
        drift = now - scheduled_tick_time
        if drift > tolerance:
>           assert next_tick_time == now + interval
E           assert 0 == (171726190479152817 + 1)
E           Falsifying example: test_policy_skip_missed_and_drift(
E               tolerance=171_726_190_479_152_817,
E               now=171_726_190_479_152_817,
E               scheduled_tick_time=-1,
E               interval=1,  # or any other generated value
E           )
tests/test_timer.py:166: AssertionError
```
Using `float` directly ensures we are comparing the same values in the tests and in the code (see the sketch below). Some explicit examples are now included in the hypothesis tests to ensure this issue is not reintroduced.

Signed-off-by: Leandro Lucarella <[email protected]>
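To make the rounding problem concrete, here is a small standalone sketch; the exact conversion path inside the timer code may differ, but it shows that an `int` → `float` → `int` round trip is lossy at this magnitude:

```python
from datetime import timedelta

# Tolerance from the falsifying example above, in microseconds.
tolerance = 171_726_190_479_152_817

# timedelta stores the value exactly, but total_seconds() returns a float,
# and a float's 53-bit mantissa cannot hold ~18 significant decimal digits,
# so converting back to integer microseconds usually does not round-trip.
roundtrip = round(timedelta(microseconds=tolerance).total_seconds() * 1_000_000)

print(tolerance, roundtrip, tolerance == roundtrip)  # the two values differ
```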
Tests failed because of the double conversion, which was fixed in the previous commit, so we can remove this hack now.

This reverts commit 1084381.

Signed-off-by: Leandro Lucarella <[email protected]>
Yeah, I don't know how… With this in mind, it seems like it fixes the issue. I pushed some updates, adding the examples so they are always tested, just in case.
Hopefully this finally fixes the flaky hypothesis tests.
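For completeness, a minimal sketch of the explicit-example pattern mentioned above; the test name matches the failing test, but the strategy bounds and body are illustrative, not the repository's exact code:

```python
import hypothesis
from hypothesis import strategies as st


# `@hypothesis.example(...)` pins a specific case so it runs on every test
# invocation, in addition to the randomly generated ones.
@hypothesis.given(tolerance=st.integers(min_value=0, max_value=2**62))
@hypothesis.example(tolerance=171_726_190_479_152_817)
def test_policy_skip_missed_and_drift(tolerance: int) -> None:
    # The real test exercises the SkipMissedAndDrift policy; this body is a stub.
    assert tolerance >= 0
```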