Replies: 3 comments
-
The next step could be to freeze the code into the firmware as frozen modules. I did it once to see if it helped my datalogger. Unfortunately, the code is always changing, and freezing it each time is an added hassle; there are a few files I use like that. For big jobs I use a big processor, i.e. the SPIRAM variant of the ESP32 ... but that doesn't help you. Other recent posts have suggested a document on fragmentation.
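For reference, freezing modules into the firmware goes through a build manifest. A minimal sketch follows; the directory name `modules` is an assumption, not anyone's actual setup, and this fragment is consumed by the firmware build, not run on the board:

```python
# manifest.py -- MicroPython build manifest (config fragment)
include("$(PORT_DIR)/boards/manifest.py")  # keep the port's default modules
freeze("modules")  # freeze every .py found under ./modules into the firmware
```

Frozen bytecode executes from flash, so it costs almost no heap at import time, but every code change means rebuilding and reflashing the firmware, which is exactly the hassle described above.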
-
Apart from executing your user code from .mpy files instead of .py (a .mpy is about a quarter the size of the .py), you can reserve memory for a byte array at the start of your program. I loathed byte arrays when I first got forced into using them because of the dreaded ENOMEM (many of the search features available for other variables don't work), but I've got used to them now, and they have some nice editing/resizing features that use heaps less RAM than the variable equivalent. Not sure if it's relevant to your situation, but a .mpy, being precompiled, will also load quicker. My programs are only about 500 lines, but they used to take 5-6 s to boot after a deepsleep vs 1-2 s these days. Why board makers are stingy with RAM in their designs is beyond me. Hours of wasted head-scratching for the sake of a few cents' worth of RAM!
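The reserved-byte-array idea above can be sketched as follows; the buffer size and the `fill_record` helper are illustrative assumptions, not anyone's actual code:

```python
# Reserve one big buffer at program start and reuse slices of it, instead
# of growing lists/strings mid-run (which fragments the heap on small boards).
SCRATCH = bytearray(2048)       # reserved once, at startup
view = memoryview(SCRATCH)      # zero-copy window into the buffer

def fill_record(offset, payload):
    n = len(payload)
    view[offset:offset + n] = payload   # in-place edit, no new allocation
    return n

used = fill_record(0, b"T=21.5C")
print(bytes(view[:used]))               # → b'T=21.5C'
```

Because the `memoryview` slices point into the one pre-allocated buffer, repeated sensor readings reuse the same memory rather than allocating fresh objects each cycle.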
-
THANK YOU for your suggestions and the article. In my case, precompilation to an .mpy file was the key solution to my acute problem, and I have not seen the ENOMEM errors so far. For comparison with yesterday, the numbers now show only 95k free, max free block only 865. The weird thing is that the numbers seem worse than yesterday, yet the memory-critical ssl.wrap_socket() call now works OK. Anyway, I will keep doing the basic code streamlining work, and the article mentioned contains useful advice. Just in case it is of interest, here are the MPY-CROSS steps, which I did on my W11 PC, for other Linux noobies:
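The author's exact step-by-step list did not survive in this copy of the thread; a typical sequence for a PC, assuming the PyPI wrapper package and mpremote (both assumptions, not the author's confirmed setup), looks roughly like this:

```shell
# install the prebuilt cross-compiler wheel (no firmware build needed)
pip install mpy-cross
# compile a module; this writes main.mpy next to main.py
python -m mpy_cross main.py
# copy the .mpy to the board's filesystem, then `import main` as usual
mpremote cp main.mpy :
```

On the board, `import main` will pick up `main.mpy` automatically; the bytecode compiler is skipped, which is where the RAM and boot-time savings come from.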
-
I am building a MicroPython script on my RPi Pico W for house automation. It measures OneWire temperatures and ADC voltages and uses an SSL/TLS socket to communicate with a Google doGet anonymous API, to report to Google Sheets and to read parameter files from Google Drive. Currently it has some 2600 lines of code. My problem is that, as I implement more functionality, the available free memory has shrunk and fragmented, and some calls like ssl.wrap_socket() have started to produce ENOMEM errors.
As a basic technique, I am trying to keep arrays and other large variables local so they are released after use, and I have gc.collect() calls here and there. Still, I need something more to help the memory.
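The keep-it-local-then-collect pattern can be sketched like this; the function name and buffer size are illustrative assumptions, not the script's actual code:

```python
import gc

# Do the large, temporary work inside a function so its buffers become
# garbage on return, then collect right before the allocation-hungry call.
def read_samples(n):
    buf = bytearray(n)   # large scratch buffer, local to this function
    # ... fill buf from the ADC / OneWire bus here ...
    return sum(buf)      # only a small result escapes; buf becomes garbage

total = read_samples(4096)   # the buffer is unreachable once this returns
gc.collect()                 # reclaim it before the memory-hungry call,
                             # e.g. ssl.wrap_socket() on the Pico
print(total)                 # → 0 (buf was never filled in this sketch)
```

Collecting immediately before the big allocation maximizes the chance that a contiguous free block is available for it.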
Today, I was testing some optimization techniques, though mostly by guessing:
With my standard script version, just between the gc.collect() and ssl.wrap_socket() calls, micropython.mem_info() reports total 191k, used 71k, free 120k, with max free block size 840.
The same script, with all the comments stripped away so that the file size dropped from 130k to 80k and the line count from 2600 to 2000, reports just the same in mem_info. So I guess stripping comments away doesn't help.
The same script, after experimentally moving a couple of inline strings from the code into static string variables at the beginning of the script, resulted in 119k free, max free block 987.
So, after these experiments, moving strings from inline code into static string variables seemed to slightly help the max free block size. I would guess that it places the static strings nearer the beginning of the heap, allowing bigger free areas elsewhere.
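The hoisting experiment described above looks roughly like this; the URL fragments and function names are made up for illustration, not taken from the actual script:

```python
# before: the URL is assembled inline on every call, allocating
# fresh strings mid-run, after the heap has already fragmented
def report_before(sheet_id, value):
    url = "https://script.google.com/macros/s/" + sheet_id + "/exec"
    return url + "?v=" + str(value)

# after: the fixed parts live at module level, allocated once at import
# time while the heap is still largely contiguous
_BASE = "https://script.google.com/macros/s/"
_EXEC = "/exec?v="

def report_after(sheet_id, value):
    return _BASE + sheet_id + _EXEC + str(value)

print(report_after("ABC123", 42))
```

Both versions build the same string; the difference is only *when* the constant pieces are allocated, which matches the small improvement in max free block size observed above.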
But overall, my question is: what should I concentrate on to help with the memory allocation problems in my script? Maybe something module-import related? Etc.?