
Commit 34f86f4

Merge pull request #1444 from Textualize/strip-optimization
adds Strip primitive
2 parents e59d606 + 734b742 commit 34f86f4

File tree

21 files changed: +2955 −2203 lines


CHANGELOG.md

Lines changed: 12 additions & 1 deletion
@@ -5,12 +5,19 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](http://keepachangelog.com/)
 and this project adheres to [Semantic Versioning](http://semver.org/).
 
-## [0.8.3] - Unreleased
+## [0.9.0] - Unreleased
 
 ### Added
 
+- Added textual.strip.Strip primitive
+- Added textual._cache.FIFOCache
 - Added an option to clear columns in DataTable.clear() https://github.com/Textualize/textual/pull/1427
 
+### Changed
+
+- Widget.render_line now returns a Strip
+- Fix for slow updates on Windows
+
 ## [0.8.2] - 2022-12-28
 
 ### Fixed
@@ -308,6 +315,10 @@ https://textual.textualize.io/blog/2022/11/08/version-040/#version-040
 - New handler system for messages that doesn't require inheritance
 - Improved traceback handling
 
+[0.9.0]: https://github.com/Textualize/textual/compare/v0.8.2...v0.9.0
+[0.8.2]: https://github.com/Textualize/textual/compare/v0.8.1...v0.8.2
+[0.8.1]: https://github.com/Textualize/textual/compare/v0.8.0...v0.8.1
+[0.8.0]: https://github.com/Textualize/textual/compare/v0.7.0...v0.8.0
 [0.7.0]: https://github.com/Textualize/textual/compare/v0.6.0...v0.7.0
 [0.6.0]: https://github.com/Textualize/textual/compare/v0.5.0...v0.6.0
 [0.5.0]: https://github.com/Textualize/textual/compare/v0.4.0...v0.5.0
Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
+---
+draft: false
+date: 2022-12-30
+categories:
+  - DevLog
+authors:
+  - willmcgugan
+---
+
+# A better asyncio sleep for Windows to fix animation
+
+I spent some time optimizing Textual on Windows recently, and discovered something which may be of interest to anyone working with async code on that platform.
+
+<!-- more -->
+
+Animation, scrolling, and fading had always been unsatisfactory on Windows. Textual was usable, but the lag when scrolling made it a little unpleasant to use. On macOS and Linux, scrolling is fast enough that it feels close to a native app, and not something running in a terminal. Yet the Windows experience never improved, even as Textual got faster with each release.
+
+I had chalked this up to Windows Terminal being slow to render updates. After all, the classic Windows terminal was (and still is) glacially slow. Perhaps Microsoft just weren't focusing on performance.
+
+In retrospect, that was highly improbable. Like all modern terminals, Windows Terminal uses the GPU to render updates. Even without focusing on performance, it should be fast.
+
+I figured I'd give it one last attempt to speed up Textual on Windows. If I failed, Windows would forever be a third-class platform for Textual apps.
+
+It turned out that it had nothing to do with performance, per se. The issue was with a single asyncio function: `asyncio.sleep`.
+
+Textual has a `Timer` class which creates events at regular intervals. It powers the JS-like `set_interval` and `set_timer` functions. It is also used internally to do animation (such as smooth scrolling). This Timer class calls `asyncio.sleep` to wait the time between one event and the next.
+
+On macOS and Linux, calling `asyncio.sleep` is fairly accurate. If you call `sleep(3.14)`, it will return within 1% of 3.14 seconds. This is not the case on Windows, which for historical reasons uses a timer with a granularity of 15 milliseconds. The upshot is that sleep times will be rounded up to the nearest multiple of 15 milliseconds.
+
+This limit appears to hold true for all async primitives on Windows. If you wait for something with a timeout, it will return on a multiple of 15 milliseconds. Fortunately there is work in the CPython pipeline to make this more accurate. Thanks to [Steve Dower](https://twitter.com/zooba) for pointing this out.
+
+This lack of accuracy in the timer meant that timer events were created at a far slower rate than intended. Animation was slower because Textual was waiting too long between updates.
+
+Once I had figured that out, I needed an alternative to `asyncio.sleep` for Textual's Timer class. And I found one. The following version of `sleep` is accurate to well within 1%:
+
+```python
+from time import sleep as time_sleep
+from asyncio import get_running_loop
+
+async def sleep(sleep_for: float) -> None:
+    """An asyncio sleep.
+
+    On Windows this achieves a better granularity than asyncio.sleep.
+
+    Args:
+        sleep_for (float): Seconds to sleep for.
+    """
+    await get_running_loop().run_in_executor(None, time_sleep, sleep_for)
+```
+
+That is a drop-in replacement for sleep on Windows. With it, Textual runs a *lot* smoother. Easily on par with macOS and Linux.
+
+It's not quite perfect. There is a little *tearing* during full "screen" updates, but performance is decent all round. I suspect that when [this bug](https://bugs.python.org/issue37871) is fixed (big thanks to [Paul Moore](https://twitter.com/pf_moore) for looking into that), and Microsoft implements [this protocol](https://gist.github.com/christianparpart/d8a62cc1ab659194337d73e399004036), then Textual on Windows will be A+.
+
+This Windows improvement will be in v0.9.0 of [Textual](https://github.com/Textualize/textual), which will be released in a few days.
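The executor-based workaround from the post can be exercised with a standalone timing sketch. This is illustrative only: the names `executor_sleep` and `measure`, and the 10 ms figure, are not from the commit, and the gap between the two sleeps is only visible on Windows, where `asyncio.sleep` rounds up to the 15 ms timer granularity.

```python
import asyncio
import time


async def executor_sleep(sleep_for: float) -> None:
    """Sleep by blocking a worker thread instead of the event loop's timer.

    time.sleep runs in the default thread-pool executor, so the wait is not
    subject to the event loop's (Windows) timer granularity.
    """
    await asyncio.get_running_loop().run_in_executor(None, time.sleep, sleep_for)


async def measure(sleep_coro, sleep_for: float) -> float:
    """Return the wall-clock time actually spent in the given sleep coroutine."""
    start = time.perf_counter()
    await sleep_coro(sleep_for)
    return time.perf_counter() - start


async def main() -> None:
    requested = 0.01  # 10 ms: below the 15 ms granularity described in the post
    stock = await measure(asyncio.sleep, requested)
    threaded = await measure(executor_sleep, requested)
    print(f"asyncio.sleep:  {stock * 1000:.1f} ms")
    print(f"executor sleep: {threaded * 1000:.1f} ms")


if __name__ == "__main__":
    asyncio.run(main())
```

On macOS and Linux both numbers should land close to 10 ms; the post's claim is that on Windows only the executor-based version does.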

src/textual/_cache.py

Lines changed: 168 additions & 49 deletions
@@ -14,13 +14,14 @@
 
 from __future__ import annotations
 
-from threading import Lock
 from typing import Dict, Generic, KeysView, TypeVar, overload
 
 CacheKey = TypeVar("CacheKey")
 CacheValue = TypeVar("CacheValue")
 DefaultValue = TypeVar("DefaultValue")
 
+__all__ = ["LRUCache", "FIFOCache"]
+
 
 class LRUCache(Generic[CacheKey, CacheValue]):
     """
@@ -37,12 +38,22 @@ class LRUCache(Generic[CacheKey, CacheValue]):
 
     """
 
+    __slots__ = [
+        "_maxsize",
+        "_cache",
+        "_full",
+        "_head",
+        "hits",
+        "misses",
+    ]
+
     def __init__(self, maxsize: int) -> None:
         self._maxsize = maxsize
         self._cache: Dict[CacheKey, list[object]] = {}
         self._full = False
         self._head: list[object] = []
-        self._lock = Lock()
+        self.hits = 0
+        self.misses = 0
         super().__init__()
 
     @property
@@ -60,6 +71,11 @@ def __bool__(self) -> bool:
     def __len__(self) -> int:
         return len(self._cache)
 
+    def __repr__(self) -> str:
+        return (
+            f"<LRUCache maxsize={self._maxsize} hits={self.hits} misses={self.misses}>"
+        )
+
     def grow(self, maxsize: int) -> None:
         """Grow the maximum size to at least `maxsize` elements.
@@ -70,10 +86,9 @@ def grow(self, maxsize: int) -> None:
 
     def clear(self) -> None:
         """Clear the cache."""
-        with self._lock:
-            self._cache.clear()
-            self._full = False
-            self._head = []
+        self._cache.clear()
+        self._full = False
+        self._head = []
 
     def keys(self) -> KeysView[CacheKey]:
         """Get cache keys."""
@@ -87,29 +102,28 @@ def set(self, key: CacheKey, value: CacheValue) -> None:
             key (CacheKey): Key.
             value (CacheValue): Value.
         """
-        with self._lock:
-            link = self._cache.get(key)
-            if link is None:
+        link = self._cache.get(key)
+        if link is None:
+            head = self._head
+            if not head:
+                # First link references itself
+                self._head[:] = [head, head, key, value]
+            else:
+                # Add a new root to the beginning
+                self._head = [head[0], head, key, value]
+                # Updated references on previous root
+                head[0][1] = self._head  # type: ignore[index]
+                head[0] = self._head
+            self._cache[key] = self._head
+
+            if self._full or len(self._cache) > self._maxsize:
+                # Cache is full, we need to evict the oldest one
+                self._full = True
                 head = self._head
-                if not head:
-                    # First link references itself
-                    self._head[:] = [head, head, key, value]
-                else:
-                    # Add a new root to the beginning
-                    self._head = [head[0], head, key, value]
-                    # Updated references on previous root
-                    head[0][1] = self._head  # type: ignore[index]
-                    head[0] = self._head
-                self._cache[key] = self._head
-
-                if self._full or len(self._cache) > self._maxsize:
-                    # Cache is full, we need to evict the oldest one
-                    self._full = True
-                    head = self._head
-                    last = head[0]
-                    last[0][1] = head  # type: ignore[index]
-                    head[0] = last[0]  # type: ignore[index]
-                    del self._cache[last[2]]  # type: ignore[index]
+                last = head[0]
+                last[0][1] = head  # type: ignore[index]
+                head[0] = last[0]  # type: ignore[index]
+                del self._cache[last[2]]  # type: ignore[index]
 
     __setitem__ = set
 
@@ -135,31 +149,136 @@ def get(
         """
         link = self._cache.get(key)
         if link is None:
+            self.misses += 1
             return default
-        with self._lock:
-            if link is not self._head:
-                # Remove link from list
-                link[0][1] = link[1]  # type: ignore[index]
-                link[1][0] = link[0]  # type: ignore[index]
-                head = self._head
-                # Move link to head of list
-                link[0] = head[0]
-                link[1] = head
-                self._head = head[0][1] = head[0] = link  # type: ignore[index]
+        if link is not self._head:
+            # Remove link from list
+            link[0][1] = link[1]  # type: ignore[index]
+            link[1][0] = link[0]  # type: ignore[index]
+            head = self._head
+            # Move link to head of list
+            link[0] = head[0]
+            link[1] = head
+            self._head = head[0][1] = head[0] = link  # type: ignore[index]
+        self.hits += 1
+        return link[3]  # type: ignore[return-value]
+
+    def __getitem__(self, key: CacheKey) -> CacheValue:
+        link = self._cache.get(key)
+        if link is None:
+            self.misses += 1
+            raise KeyError(key)
+        if link is not self._head:
+            link[0][1] = link[1]  # type: ignore[index]
+            link[1][0] = link[0]  # type: ignore[index]
+            head = self._head
+            link[0] = head[0]
+            link[1] = head
+            self._head = head[0][1] = head[0] = link  # type: ignore[index]
+        self.hits += 1
+        return link[3]  # type: ignore[return-value]
+
+    def __contains__(self, key: CacheKey) -> bool:
+        return key in self._cache
+
+
+class FIFOCache(Generic[CacheKey, CacheValue]):
+    """A simple cache that discards the first added key when full (First In First Out).
+
+    This has a lower overhead than LRUCache, but won't manage a working set as efficiently.
+    It is most suitable for a cache with a relatively low maximum size that is not expected to
+    do many lookups.
+
+    Args:
+        maxsize (int): Maximum size of the cache.
+    """
+
+    __slots__ = [
+        "_maxsize",
+        "_cache",
+        "hits",
+        "misses",
+    ]
+
+    def __init__(self, maxsize: int) -> None:
+        self._maxsize = maxsize
+        self._cache: dict[CacheKey, CacheValue] = {}
+        self.hits = 0
+        self.misses = 0
+
+    def __bool__(self) -> bool:
+        return bool(self._cache)
+
+    def __len__(self) -> int:
+        return len(self._cache)
+
+    def __repr__(self) -> str:
+        return (
+            f"<FIFOCache maxsize={self._maxsize} hits={self.hits} misses={self.misses}>"
+        )
+
+    def clear(self) -> None:
+        """Clear the cache."""
+        self._cache.clear()
+
+    def keys(self) -> KeysView[CacheKey]:
+        """Get cache keys."""
+        # Mostly for tests
+        return self._cache.keys()
+
+    def set(self, key: CacheKey, value: CacheValue) -> None:
+        """Set a value.
+
+        Args:
+            key (CacheKey): Key.
+            value (CacheValue): Value.
+        """
+        if key not in self._cache and len(self._cache) >= self._maxsize:
+            for first_key in self._cache:
+                self._cache.pop(first_key)
+                break
+        self._cache[key] = value
 
-        return link[3]  # type: ignore[return-value]
+    __setitem__ = set
+
+    @overload
+    def get(self, key: CacheKey) -> CacheValue | None:
+        ...
+
+    @overload
+    def get(self, key: CacheKey, default: DefaultValue) -> CacheValue | DefaultValue:
+        ...
+
+    def get(
+        self, key: CacheKey, default: DefaultValue | None = None
+    ) -> CacheValue | DefaultValue | None:
+        """Get a value from the cache, or return a default if the key is not present.
+
+        Args:
+            key (CacheKey): Key
+            default (Optional[DefaultValue], optional): Default to return if key is not present. Defaults to None.
+
+        Returns:
+            Union[CacheValue, Optional[DefaultValue]]: Either the value or a default.
+        """
+        try:
+            result = self._cache[key]
+        except KeyError:
+            self.misses += 1
+            return default
+        else:
+            self.hits += 1
+            return result
 
     def __getitem__(self, key: CacheKey) -> CacheValue:
-        link = self._cache[key]
-        with self._lock:
-            if link is not self._head:
-                link[0][1] = link[1]  # type: ignore[index]
-                link[1][0] = link[0]  # type: ignore[index]
-                head = self._head
-                link[0] = head[0]
-                link[1] = head
-                self._head = head[0][1] = head[0] = link  # type: ignore[index]
-        return link[3]  # type: ignore[return-value]
+        try:
+            result = self._cache[key]
+        except KeyError:
+            self.misses += 1
+            raise KeyError(key) from None
+        else:
+            self.hits += 1
+            return result
 
     def __contains__(self, key: CacheKey) -> bool:
         return key in self._cache