Saving memory while converting to utf-16be #11886
Unanswered
scruss asked this question in Using MicroPython
Replies: 1 comment, 8 replies
-
I have a device that expects data in UTF-16BE format. MicroPython doesn't have a `.encode('utf-16be')` method, so I had to roll my own function (see the sketch below). The routine works well for the range of characters the device expects to see in its input stream, including those in the CJK Unified Ideographs block, U+4E00 - U+9FFF.
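A minimal sketch of such an encoder, assuming every code point stays within the Basic Multilingual Plane (which covers the CJK block above); the name `encode_utf16be` and the exact shape are illustrative, not the original listing:

```python
def encode_utf16be(s):
    # Two bytes per character: every BMP code point fits in 16 bits.
    buf = bytearray(2 * len(s))
    i = 0
    for ch in s:
        c = ord(ch)
        buf[i] = c >> 8        # high byte first: big-endian
        buf[i + 1] = c & 0xFF  # low byte
        i += 2
    return bytes(buf)          # second allocation: the concern below
```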
I'm concerned, however, that allocating a bytearray of a known size and then converting it to bytes is wasteful of memory, since `bytes(buf)` copies the whole buffer. Some of the strings a user might send to the device could be up to 4096 characters long, and I'd want to avoid allocating that amount twice.
-
Hello scruss, what prevents you from returning the bytearray itself? A bytearray supports the buffer protocol, so most APIs that accept bytes will accept it as well. As a workaround you could recode your function in viper, or use a viper subfunction that operates on the buffer of the bytes object that you allocate in your function (instead of the bytearray) and have to return.
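A sketch of that second suggestion, assuming viper's `ptr8` cast gives raw write access to the buffer of the freshly allocated bytes object (the helper name `_put_be16` is hypothetical):

```python
import micropython

@micropython.viper
def _put_be16(buf, i: int, c: int):
    # ptr8 casts the object's buffer to a raw byte pointer, letting us
    # fill the (normally immutable) bytes object in place.
    p = ptr8(buf)
    p[i] = (c >> 8) & 0xFF  # high byte first: big-endian
    p[i + 1] = c & 0xFF

def encode_utf16be(s):
    out = bytes(2 * len(s))  # the only full-size allocation
    i = 0
    for ch in s:
        _put_be16(out, i, ord(ch))  # BMP-only, as in the question
        i += 2
    return out
```

The per-character helper keeps the viper part trivial; recoding the whole loop in viper, as also suggested, would avoid the per-call overhead.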