Thread safety in PyThaiNLP word tokenization
============================================

Summary
-------

PyThaiNLP's core word tokenization engines are designed with thread safety
in mind. Internal implementations (``mm``, ``newmm``, ``newmm-safe``,
``longest``, ``icu``) are thread-safe.

For engines that wrap external libraries (``attacut``, ``budoux``, ``deepcut``,
``nercut``, ``nlpo3``, ``oskut``, ``sefr_cut``, ``tltk``, ``wtsplit``), the
wrapper code is thread-safe, but we cannot guarantee thread safety of the
underlying external libraries themselves.

Thread safety implementation
----------------------------

**Internal implementations (fully thread-safe):**

- ``mm``, ``newmm``, ``newmm-safe``: stateless implementations;
  all data is local to each call
- ``longest``: uses a lock-protected check-then-act pattern to manage
  a global cache shared across threads
- ``icu``: each thread gets its own ``BreakIterator`` instance

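The lock-protected check-then-act pattern mentioned above can be sketched as
follows. This is a minimal illustration with stand-in names (``get_tokenizer``,
``_cache``), not PyThaiNLP's actual code; the point is that the membership
check and the insertion happen under one lock, so two threads can never both
decide to build the same cache entry:

.. code-block:: python

   import threading

   _cache = {}
   _cache_lock = threading.Lock()

   def get_tokenizer(key):
       """Return the cached object for ``key``, building it at most once."""
       with _cache_lock:
           # Check-then-act under the lock: no other thread can run
           # between the "key not in _cache" check and the insertion.
           if key not in _cache:
               _cache[key] = object()  # stand-in for an expensive build
           return _cache[key]

Without the lock, two threads could both see the key missing and both build
the entry, wasting work or corrupting shared state in the built object.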
**External library wrappers (wrapper code is thread-safe):**

- ``attacut``: uses a lock-protected check-then-act pattern to manage
  a global cache; thread safety of the underlying library is not guaranteed
- ``budoux``: uses lock-protected lazy initialization of its parser;
  thread safety of the underlying library is not guaranteed
- ``deepcut``, ``nercut``, ``nlpo3``, ``tltk``: stateless wrappers;
  thread safety of the underlying libraries is not guaranteed
- ``oskut``, ``sefr_cut``, ``wtsplit``: use lock-protected model loading
  when switching models/engines; thread safety of the underlying libraries
  is not guaranteed

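Lock-protected lazy initialization, as described for the ``budoux`` wrapper,
can be sketched like this. Again, the names (``_get_parser``, ``_parser``)
are stand-ins for illustration, not PyThaiNLP's actual code:

.. code-block:: python

   import threading

   _parser = None
   _parser_lock = threading.Lock()

   def _get_parser():
       global _parser
       if _parser is None:          # fast path: already initialized
           with _parser_lock:
               if _parser is None:  # re-check under the lock
                   _parser = object()  # stand-in for loading the real parser
       return _parser

The re-check inside the lock handles the race where two threads both see
``None`` on the fast path; only one of them performs the initialization.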
Usage in multi-threaded applications
------------------------------------

Using a tokenization engine safely in multi-threaded contexts:

.. code-block:: python

   import threading
   from pythainlp.tokenize import word_tokenize

   def tokenize_worker(text, results, index):
       # Thread-safe for all engines
       results[index] = word_tokenize(text, engine="longest")

   texts = ["ผมรักประเทศไทย", "วันนี้อากาศดี", "เขาไปโรงเรียน"]
   results = [None] * len(texts)
   threads = []

   for i, text in enumerate(texts):
       thread = threading.Thread(target=tokenize_worker, args=(text, results, i))
       threads.append(thread)
       thread.start()

   for thread in threads:
       thread.join()

   # All results are correctly populated
   print(results)

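The same fan-out pattern can be written with ``concurrent.futures``, which
collects results without manual index bookkeeping. The sketch below uses
``str.split`` as a stand-in tokenizer so it is self-contained; in real code
you would call ``word_tokenize`` in its place:

.. code-block:: python

   from concurrent.futures import ThreadPoolExecutor

   def tokenize(text):
       # Stand-in for word_tokenize(text, engine="longest")
       return text.split()

   texts = ["a b c", "d e", "f"]

   # executor.map preserves input order, so results line up with texts
   with ThreadPoolExecutor(max_workers=4) as executor:
       results = list(executor.map(tokenize, texts))

   print(results)  # [['a', 'b', 'c'], ['d', 'e'], ['f']]

Because ``executor.map`` returns results in input order, no locking or
shared result list is needed in the caller.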
Performance considerations
--------------------------

1. **Lock-based synchronization** (``longest``, ``attacut``):

   - Minimal overhead for cache access
   - Cache lookups are very fast
   - Lock contention is minimal in typical usage

2. **Thread-local storage** (``icu``):

   - Each thread maintains its own instance
   - No synchronization overhead after initialization
   - Slightly higher memory usage (one instance per thread)

3. **Stateless engines** (``newmm``, ``mm``):

   - Zero synchronization overhead
   - Best performance in multi-threaded scenarios
   - Recommended for high-throughput applications

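The thread-local storage approach can be sketched with ``threading.local()``.
This is a minimal illustration of the per-thread instance pattern with
stand-in names, not the actual ``icu`` engine code:

.. code-block:: python

   import threading

   _local = threading.local()

   def _get_iterator():
       # Each thread sees its own _local attributes, so no lock is needed:
       # the instance is created once per thread, on first use.
       if not hasattr(_local, "iterator"):
           _local.iterator = object()  # stand-in for BreakIterator creation
       return _local.iterator

Repeated calls in one thread return the same instance, while each new
thread transparently gets its own copy on first access.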
Best practices
--------------

1. **For high-throughput applications**: Consider using stateless engines like
   ``newmm`` or ``mm`` for optimal performance.

2. **For custom dictionaries**: The ``longest`` engine with custom dictionaries
   maintains a cache per dictionary object. Reuse dictionary objects across
   threads to maximize cache efficiency.

3. **For process pools**: All engines work correctly with multiprocessing, as
   each process has its own memory space.

4. **IMPORTANT: Do not modify custom dictionaries during tokenization**:

   - Create your custom Trie/dictionary before starting threads
   - Never call ``trie.add()`` or ``trie.remove()`` while tokenization is in progress
   - If you need to update the dictionary, create a new Trie instance and
     pass it to subsequent tokenization calls
   - The Trie data structure itself is NOT thread-safe for concurrent modifications

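The "create a new Trie instance" advice amounts to a copy-on-write swap:
readers keep using whichever dictionary object they already picked up, while
the writer builds a full replacement and publishes it in a single reference
assignment. A minimal sketch, with ``frozenset`` standing in for the Trie:

.. code-block:: python

   import threading

   # frozenset stands in for a Trie that is never modified while shared
   current_dict = frozenset({"old"})
   _swap_lock = threading.Lock()

   def update_dictionary(new_words):
       global current_dict
       # Build the replacement completely, then publish it in one assignment.
       replacement = frozenset(current_dict | set(new_words))
       with _swap_lock:
           current_dict = replacement

   def worker():
       # Snapshot the reference once; this call then sees one consistent
       # dictionary even if update_dictionary() runs concurrently.
       d = current_dict
       return "old" in d

   update_dictionary(["new"])
   print(sorted(current_dict))  # ['new', 'old']

Readers never observe a half-built dictionary, because the only shared
mutation is the rebinding of ``current_dict`` to an already-complete object.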
Example of safe custom dictionary usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

   from pythainlp.tokenize import word_tokenize
   from pythainlp.corpus.common import thai_words
   from pythainlp.util import dict_trie
   import threading

   # SAFE: Create dictionary once before threading
   custom_words = set(thai_words())
   custom_words.add("คำใหม่")
   custom_dict = dict_trie(custom_words)

   texts = ["ผมรักประเทศไทย", "วันนี้อากาศดี", "เขาไปโรงเรียน"]
   results = []

   def worker(text, custom_dict):
       # SAFE: Only reading from the dictionary
       results.append(word_tokenize(text, engine="newmm", custom_dict=custom_dict))

   # All threads share the same dictionary (read-only)
   threads = []
   for text in texts:
       t = threading.Thread(target=worker, args=(text, custom_dict))
       threads.append(t)
       t.start()

   # Wait for all threads to finish
   for t in threads:
       t.join()

Example of UNSAFE usage (DO NOT DO THIS)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

   from pythainlp.tokenize import word_tokenize
   from pythainlp.corpus.common import thai_words
   from pythainlp.util import dict_trie

   # UNSAFE: Modifying dictionary while threads are using it
   custom_dict = dict_trie(thai_words())

   def unsafe_worker(text, custom_dict):
       result = word_tokenize(text, engine="newmm", custom_dict=custom_dict)
       # DANGER: Modifying the shared dictionary
       custom_dict.add("คำใหม่")  # This is NOT thread-safe!
       return result

Testing
-------

Comprehensive thread safety tests are available in:

- ``tests/core/test_tokenize_thread_safety.py``

The test suite includes:

- Concurrent tokenization with multiple threads
- Race condition testing with multiple dictionaries
- Verification of result consistency across threads
- Stress testing with up to 200 concurrent operations (20 threads × 10 iterations)

Maintenance notes
-----------------

When adding new tokenization engines to PyThaiNLP:

1. **Avoid global mutable state** whenever possible
2. If caching is necessary, protect it with a lock
3. If per-thread state is needed, use ``threading.local()``
4. Always add thread safety tests for new engines
5. Document thread safety guarantees in docstrings

Related files
-------------

- Core implementation: ``pythainlp/tokenize/core.py``
- Engine implementations: ``pythainlp/tokenize/*.py``
- Tests: ``tests/core/test_tokenize_thread_safety.py``