More on understanding Mem_info() and stack size #2210
-
> I was actually hoping the gc.enable() would do more of the automatic memory cleaning

That is because it is enabled by default.

> At the end of the project and reviewing Pybricks, I found the documentation was good to a point, but was lacking in detailed information around the memory aspects as mentioned. (Thanks to the maintainers for their assistance)

I don't think anyone has pushed Pybricks to the limits like you have, since we don't get very many reports of people running out of memory! 😄 So thanks for sharing!

> The final observation was the actual stack memory - 680 used out of 5180 means 4500 is free! Reallocating 3000 bytes to the variable memory would have saved a ton of time re-writing code and re-abbreviating names. Not sure that's possible but please let me know!

If you are measuring memory in your main loop, then it will show not much stack used. To measure how much your program actually uses, you would need to measure it at the end of the most deeply nested functions in your program. Each time one function calls another, it adds to the stack. If you really want to, you could build your own firmware and adjust the stack-to-heap ratio.
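For example, a minimal sketch of measuring at the deepest point of a call chain, assuming the standard gc and micropython modules as available on a Pybricks hub (the level_one/two/three function names are just illustrative):

```python
import gc
import micropython

def level_three():
    # Deepest point of this call chain: the stack figure printed here
    # includes the frames of level_one(), level_two() and level_three().
    gc.collect()
    micropython.mem_info()

def level_two():
    level_three()

def level_one():
    level_two()

# Calling mem_info() out here, at "main loop" depth, would show very
# little stack used; calling it inside level_three() shows the peak
# for this particular call chain.
level_one()
```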
-
Hi David, writing new firmware sounds like a big undertaking. Can you point me in the right direction with documentation, examples, etc., and what development environment to use? Or at least a web page that can help me get started?
-
Having recently completed a three-month dabble into Pybricks MicroPython code, there were a number of learnings that I thought I would share, mainly around memory information.
Firstly, any mem_info() call needs to be immediately preceded by gc.collect(), and preferably placed somewhere in the code after the 'main loop' has been executed at least once. When you call mem_info(), hopefully you will see output along the lines of the three lines described below.
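As a minimal sketch of that pattern (the report_memory helper name is my own, not anything special), assuming the standard gc and micropython modules available in Pybricks firmware:

```python
import gc
import micropython

def report_memory(label):
    # Collect garbage first so the "used" figures reflect live objects only.
    gc.collect()
    print("---", label)
    micropython.mem_info()   # prints the stack and GC (heap) usage lines

loop_count = 0
while loop_count < 3:
    # ... main loop work goes here ...
    loop_count += 1
    report_memory("after loop pass {}".format(loop_count))
```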
The first line, the stack memory, covers all the variables used within the procedures.
The second line, GC:, is the space used by the port / hub / remote drivers and the source code after it is compiled.
The third line is for the constants, variables, strings, function names and parameters passed to functions. This memory is quickly consumed, and long names must be avoided! This was the major issue I had throughout the development phase: I kept running out of memory. In my case there were 25 bytes free, and this number fluctuates +/- 20 bytes depending on the routines that have recently been executed.
During development, in order to keep memory in a workable condition, anything to do with the REMOTE, PORT, HUB and BLE functions had the gc.collect() / mem_info() calls included, so I could keep track of what consumed memory. As an example, connecting a Technic motor to a PORT consumed another 160 bytes of codespace memory.
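A hedged sketch of that bookkeeping around a device setup, using gc.mem_free() to take a before/after reading (Port.A is just an example, and the exact delta will depend on the device and firmware):

```python
import gc
from pybricks.pupdevices import Motor
from pybricks.parameters import Port

gc.collect()
before = gc.mem_free()

motor = Motor(Port.A)   # attach a Technic motor connected to port A

gc.collect()
after = gc.mem_free()
print("Motor setup consumed", before - after, "bytes of heap")
```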
I was actually hoping the gc.enable() would do more of the automatic memory cleaning, but as far as I could tell it did nothing.
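If the goal is more aggressive automatic collection, one option is gc.threshold(), which asks the collector to run after roughly a given number of bytes have been allocated. This is a standard MicroPython call; whether a particular Pybricks firmware build includes it is an assumption on my part, so treat this as a sketch:

```python
import gc

# Run a collection automatically after (roughly) every 2000 bytes allocated.
# The 2000 figure is only an example; tune it to how much free heap the hub has.
gc.threshold(2000)
```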
The other thing to watch out for is that one wrong line of code can suddenly produce the dreaded 'insufficient memory' message. Realistically, it is probably the compiler consuming lots of memory while trying to generate the error message and then running out of memory itself; I presume the variable memory was exhausted and that was causing the issue. (Generating an out-of-memory error is not useful when you have simply forgotten a == or a :.) My workaround was to delete portions of code, recompile, fix the errant line, and then restore the previously deleted code.
At the end of the project, reviewing Pybricks, I found the documentation was good to a point but lacking in detailed information around the memory aspects mentioned above. (Thanks to the maintainers for their assistance.)
The final observation was the actual stack memory: 680 used out of 5180 means 4500 is free! Reallocating 3000 bytes to the variable memory would have saved a ton of time re-writing code and re-abbreviating names. Not sure that's possible, but please let me know!