Replies: 4 comments 1 reply
-
Hi
-
@sophiedegran thanks for the pointer. I found the examples and tried to use StreamReader. I have the following issues:
The normal
-
Use esp_get_free_heap_size() and esp_get_minimum_free_heap_size() to monitor heap availability during the burst. Check whether the PBUF pool size or allocation failures correlate with the packet loss.
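As a rough sketch of that kind of monitoring (the helper name `log_heap` is mine, not from any library): on MicroPython ports, `gc.mem_free()` tracks the Python heap, while the IDF functions above cover the native heap, so logging both around the receive loop shows whether a heap dip lines up with the lost packet.

```python
import gc

def log_heap(tag):
    # gc.mem_free() exists on MicroPython ports only; on CPython we
    # just report that the counter is unavailable.
    free = gc.mem_free() if hasattr(gc, "mem_free") else None
    print(tag, "python-heap bytes free:", free)
    return free

# Call around the receive loop to see whether free-heap dips
# correlate with the lost 3rd packet.
before = log_heap("before burst")
# ... receive packets here ...
after = log_heap("after burst")
```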
-
I have not found how to access these exact IDF functions from MicroPython, but that put me on the right path. I do have access to
This is the memory situation when I receive all 3 UDP packets correctly:
This is what I have after I imported a bunch of additional modules, which is also when I systematically lose the 3rd UDP packet of each burst:
At that point, MicroPython seems to have allocated memory from the 2nd and 4th pools, and the maximum available allocation (the 3rd number of each tuple) is down to 1344 bytes: not enough for an MTU=1500-byte packet buffer. Are any of these pools also used for PBUFs?
I played with this a bit and allocated bytearrays in 10 KB increments until the 2nd pool's maximum allocatable size was 1600 bytes; pool 4 was unaffected. Then I ran my packet-capture function, which silently failed to receive the 3rd UDP packet. After that, even the 4th pool had been allocated from and was also low on resources. So there seems to be a correlation between free memory in heap pools 2 and/or 4 and UDP packet loss.
I admire MicroPython for its tenacity in scraping memory from wherever it can to keep the program running. Is there a way to prevent it from drawing too much from a particular heap (without recompiling it)? It would be nice to be able to ensure deterministic UDP packet processing, independent of the dynamics of MicroPython's memory allocation.
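For reference, a hedged sketch of reading those per-region tuples: on ESP32 ports, the `esp32` module's `idf_heap_info()` returns one 4-tuple per heap region, `(total, free, largest free block, minimum free seen)`, and the "largest free block" field appears to match the "maximum available allocation" number described above. On non-ESP32 platforms the import simply fails.

```python
try:
    import esp32
    # One 4-tuple per heap region:
    # (total, free, largest_free_block, min_free_seen)
    regions = esp32.idf_heap_info(esp32.HEAP_DATA)
except ImportError:
    regions = []  # not running on an ESP32 port

for total, free, largest, min_free in regions:
    print("total=%d free=%d largest=%d min_free=%d"
          % (total, free, largest, min_free))
```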
-
I have implemented, on my ESP32C3, a simple SSDP discovery function that multicasts a UDP request and receives unicast responses from all servers on the subnet. The UDP responses are around 400 bytes long and arrive in a very rapid burst of 3 packets, sometimes separated by only a few microseconds.
I observe that if my memory is heavily used and fragmented, I almost systematically lose the 3rd UDP packet.
I have tried reading the packets in a tight loop with `socket.recv()`, `socket.read()`, and `socket.readinto()`, with blocking and non-blocking sockets, with and without `select.poll()`, but I always get the same packet loss. It does not happen if I load only the discovery module and not the rest of the application.
To me it looks like the UDP packets arrive faster than I can process them, the underlying LWIP code cannot allocate PBUF space, and it just silently drops the packet. Is that expected? I would not have thought that LWIP shares heap memory with Python, but there somehow seems to be a link with Python memory utilization. I thought that maybe the garbage collector kicks in more often and slows things down enough to lose packets, but calling `gc.collect()` before the packet-receiving loop does not help.
Any thoughts?
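For what it's worth, a minimal loopback sketch of the tight loop described above, using a single preallocated buffer so the loop itself does no per-packet allocation. The helper name `drain_burst` is mine; CPython spells the receive call `recv_into`, while MicroPython streams use `readinto`, so the line would need adjusting on-device.

```python
import select
import socket

def drain_burst(sock, n_expected, timeout_ms=500, bufsize=1500):
    """Drain up to n_expected datagrams using one preallocated buffer."""
    buf = bytearray(bufsize)           # reused for every packet
    mv = memoryview(buf)
    poller = select.poll()
    poller.register(sock, select.POLLIN)
    packets = []
    while len(packets) < n_expected:
        if not poller.poll(timeout_ms):
            break                      # burst over, or a packet was lost
        nbytes = sock.recv_into(mv)    # MicroPython: sock.readinto(mv)
        packets.append(bytes(mv[:nbytes]))
    return packets

# Usage sketch: loop 3 datagrams back through localhost.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i in range(3):
    tx.sendto(b"resp%d" % i, rx.getsockname())
got = drain_burst(rx, 3)
print(len(got))  # 3 on a loopback test
rx.close()
tx.close()
```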