Commit 07740d1

add bigram compression to makeqstrdata
Compress common unicode bigrams by making code points in the range 0x80 - 0xbf (inclusive) represent them. Then, they can be greedily encoded and the substituted code points handled by the existing Huffman compression. Normally code points in the range 0x80-0xbf are not used in Unicode, so we stake our own claim. Using the more arguably correct "Private Use Area" (PUA) would mean that for scripts that only use code points under 256 we would use more memory for the "values" table.

bigram means "two letters", and is also sometimes called a "digram". It has nothing to do with "big RAM". For our purposes, a bigram represents two successive unicode code points, so for instance in our build on trinket m0 for english the most frequent are: ['t ', 'e ', 'in', 'd ', ...].

The bigrams are selected based on frequency in the corpus, but the selection is not necessarily optimal, for these reasons I can think of:

* Suppose the corpus was just "tea" repeated 100 times. The top bigrams would be "te" and "ea". However, due to overlap, "ea" could never be used. Thus, some bigrams might actually waste space. (A small sketch of this follows below.)
* I _assume_ this has to be why e.g., bigram 0x86 "s " is more frequent than bigram 0x85 " a" in English for Trinket M0, because sequences like "can't add" would get the "t " digram and then be unable to use the " a" digram.
* And generally, if a bigram is frequent then so are its constituents. Say that "i" and "n" both encode to just 5 or 6 bits; then the huffman code for "in" had better compress to 10 or fewer bits or it's a net loss!
* I checked though! "i" is 5 bits, "n" is 6 bits (lucky guess), but the bigram 0x83 is also just 6 bits, so this one is a win of 5 bits for every "in", minus overhead. Yay, this round goes to team compression.
* On the other hand, the least frequent bigram 0x9d " n" is 10 bits long and its constituent code points are 4+6 bits, so there's no savings, but there is the cost of the table entry.
* And somehow 0x9f 'an' is never used at all!

With or without accounting for overlaps, there is some optimum number of bigrams. Adding one more bigram uses at least 2 bytes (for the entry in the bigram table; 4 bytes if code points >255 are in the source text) and also needs a slot in the Huffman dictionary, so adding bigrams beyond the optimum number makes compression worse again.

If it's an improvement, the fact that it's not guaranteed optimal doesn't seem to matter too much. It just leaves a little more fruit for the next sweep to pick up. A possible refinement: keep adding the most frequent bigram not yet present until it no longer improves compression overall.

Right now, de_DE is again the "fullest" build on trinket_m0. (It has reclaimed that spot from the ja translation somehow.) This change saves 104 bytes there, increasing free space about 6.8%. In the larger (but not critically full) pyportal build it saves 324 bytes.

The specific number of bigrams used (32) was chosen as it is the max number that fits within the 0x80..0xbf range. Larger tables would require the use of 16 bit code points in the de_DE build, losing savings overall.

(Side note: The most frequent letters in English have been said to be: ETA OIN SHRDLU; but we have UAC EIL MOPRST in our corpus.)
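As a quick illustration of the overlap point above, here is a minimal Python sketch of the greedy substitution. It is not part of the commit; the corpus and the helper name frequent_bigrams are made up for the example.

import collections

def frequent_bigrams(corpus, n):
    # Count every overlapping two-character window and keep the n most common.
    return [g for g, _ in collections.Counter(
        corpus[i:i+2] for i in range(len(corpus) - 1)).most_common(n)]

corpus = "tea" * 100                    # the hypothetical corpus from above
bigrams = frequent_bigrams(corpus, 2)   # ['te', 'ea'] -- 'at' loses the tie

# Greedy substitution into the otherwise-unused 0x80..0xbf range.
encoded = corpus
for i, g in enumerate(bigrams):
    encoded = encoded.replace(g, chr(0x80 + i))

# 'te' is replaced first, so no 'ea' remains to match; the second table
# entry goes unused, which is exactly the overlap effect described above.
print(encoded.count(chr(0x80)), encoded.count(chr(0x81)))   # prints: 100 0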
1 parent f27b896 commit 07740d1

File tree: 2 files changed (+38, -4 lines)
py/makeqstrdata.py

Lines changed: 29 additions & 4 deletions
@@ -100,9 +100,30 @@ def translate(translation_file, i18ns):
             translations.append((original, translation))
         return translations
 
+def frequent_ngrams(corpus, sz, n):
+    return collections.Counter(corpus[i:i+sz] for i in range(len(corpus)-sz)).most_common(n)
+
+def ngrams_to_pua(translation, ngrams):
+    if len(ngrams) > 32:
+        start = 0xe000
+    else:
+        start = 0x80
+    for i, g in enumerate(ngrams):
+        translation = translation.replace(g, chr(start + i))
+    return translation
+
+def pua_to_ngrams(compressed, ngrams):
+    if len(ngrams) > 32:
+        start, end = 0xe000, 0xf8ff
+    else:
+        start, end = 0x80, 0xbf
+    return "".join(ngrams[ord(c) - start] if (start <= ord(c) <= end) else c for c in compressed)
+
 def compute_huffman_coding(translations, qstrs, compression_filename):
     all_strings = [x[1] for x in translations]
     all_strings_concat = "".join(all_strings)
+    ngrams = [i[0] for i in frequent_ngrams(all_strings_concat, 2, 32)]
+    all_strings_concat = ngrams_to_pua(all_strings_concat, ngrams)
     counts = collections.Counter(all_strings_concat)
     cb = huffman.codebook(counts.items())
     values = []
@@ -128,18 +149,20 @@ def compute_huffman_coding(translations, qstrs, compression_filename):
     for i in range(1, max(length_count) + 2):
         lengths.append(length_count.get(i, 0))
     print("// values", values, "lengths", len(lengths), lengths)
-    print("// estimated total memory size", len(lengths) + 2*len(values) + sum(len(cb[u]) for u in all_strings_concat))
+    ngramdata = [ord(ni) for i in ngrams for ni in i]
+    print("// estimated total memory size", len(lengths) + 2*len(values) + 2 * len(ngramdata) + sum((len(cb[u]) + 7)//8 for u in all_strings_concat))
     print("//", values, lengths)
     values_type = "uint16_t" if max(ord(u) for u in values) > 255 else "uint8_t"
     max_translation_encoded_length = max(len(translation.encode("utf-8")) for original,translation in translations)
     with open(compression_filename, "w") as f:
         f.write("const uint8_t lengths[] = {{ {} }};\n".format(", ".join(map(str, lengths))))
         f.write("const {} values[] = {{ {} }};\n".format(values_type, ", ".join(str(ord(u)) for u in values)))
         f.write("#define compress_max_length_bits ({})\n".format(max_translation_encoded_length.bit_length()))
-    return values, lengths
+        f.write("const {} ngrams[] = {{ {} }};\n".format(values_type, ", ".join(str(u) for u in ngramdata)))
+    return values, lengths, ngrams
 
 def decompress(encoding_table, encoded, encoded_length_bits):
-    values, lengths = encoding_table
+    values, lengths, ngrams = encoding_table
     dec = []
     this_byte = 0
     this_bit = 7
@@ -187,14 +210,16 @@ def decompress(encoding_table, encoded, encoded_length_bits):
         searched_length += lengths[bit_length]
 
         v = values[searched_length + bits - max_code]
+        v = pua_to_ngrams(v, ngrams)
         i += len(v.encode('utf-8'))
         dec.append(v)
     return ''.join(dec)
 
 def compress(encoding_table, decompressed, encoded_length_bits, len_translation_encoded):
     if not isinstance(decompressed, str):
         raise TypeError()
-    values, lengths = encoding_table
+    values, lengths, ngrams = encoding_table
+    decompressed = ngrams_to_pua(decompressed, ngrams)
     enc = bytearray(len(decompressed) * 3)
     #print(decompressed)
     #print(lengths)

supervisor/shared/translate.c

Lines changed: 9 additions & 0 deletions
@@ -34,6 +34,7 @@
 #include "genhdr/compression.generated.h"
 #endif
 
+#include "py/misc.h"
 #include "supervisor/serial.h"
 
 void serial_write_compressed(const compressed_string_t* compressed) {
@@ -46,10 +47,18 @@ STATIC int put_utf8(char *buf, int u) {
     if(u <= 0x7f) {
         *buf = u;
         return 1;
+    } else if(MP_ARRAY_SIZE(ngrams) <= 64 && u <= 0xbf) {
+        int n = (u - 0x80) * 2;
+        int ret = put_utf8(buf, ngrams[n]);
+        return ret + put_utf8(buf + ret, ngrams[n+1]);
     } else if(u <= 0x07ff) {
         *buf++ = 0b11000000 | (u >> 6);
         *buf = 0b10000000 | (u & 0b00111111);
         return 2;
+    } else if(MP_ARRAY_SIZE(ngrams) > 64 && u >= 0xe000 && u <= 0xf8ff) {
+        int n = (u - 0xe000) * 2;
+        int ret = put_utf8(buf, ngrams[n]);
+        return ret + put_utf8(buf + ret, ngrams[n+1]);
     } else { // u <= 0xffff)
         *buf++ = 0b11000000 | (u >> 12);
         *buf = 0b10000000 | ((u >> 6) & 0b00111111);
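
A rough Python model of the two new branches (not part of the commit; behavior inferred from the diff above, and the table values are invented): a substituted code point selects a pair in the flat ngrams table, and each half is emitted recursively through the ordinary path.

ngrams = [116, 32, 101, 32, 105, 110, 100, 32]     # "t ", "e ", "in", "d "

def put_utf8(u):
    # Return the UTF-8 bytes for code point u, expanding bigram substitutions.
    if u <= 0x7f:
        return bytes([u])
    elif len(ngrams) <= 64 and u <= 0xbf:
        n = (u - 0x80) * 2                          # small table: 0x80..0xbf range
        return put_utf8(ngrams[n]) + put_utf8(ngrams[n + 1])
    elif len(ngrams) > 64 and 0xe000 <= u <= 0xf8ff:
        n = (u - 0xe000) * 2                        # large table: Private Use Area
        return put_utf8(ngrams[n]) + put_utf8(ngrams[n + 1])
    else:
        return chr(u).encode("utf-8")               # ordinary multi-byte code point

assert put_utf8(0x82) == b"in"                      # third table entry expands to "in"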
