Using _thread library #10638
-
I've been experimenting with the _thread library to run a thread on the second core (core 1). ISR code associated with IRQ specifications for Pins and Timers appears to run on core 0 (the first core), irrespective of where the IRQ was specified. Similarly, a function invoked using micropython.schedule() always appears to run on core 0. I assume there is a single queue for functions scheduled using micropython.schedule(), and that the function at the head of the queue is always executed on core 0.

I know that, in theory, micropython.schedule() is intended for use in 'hard' interrupt service routines, allowing ISR code to continue in an interruptible context once the time-critical elements of the ISR have been dealt with. But I was wondering whether it is thread safe, and therefore whether using it from core 1 to execute a function on core 0 is permissible.

Thanks
Paul
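For what it's worth, the "single queue, drained on one core" model I'm assuming can be sketched in plain CPython. This is only an analogy built on `queue.Queue` and `threading` (the `schedule` function and sentinel here are my own invention, not MicroPython's implementation): producers on any thread can safely enqueue callables, while a single consumer thread executes them in order.

```python
# CPython analogy (NOT MicroPython internals): a single scheduler
# queue drained by one "core". Threads (standing in for code on
# core 1) enqueue callables; one consumer thread (standing in for
# core 0) executes them in FIFO order.
import queue
import threading

sched_q = queue.Queue()   # hypothetical stand-in for the schedule queue
results = []

def schedule(func, arg):
    """Thread-safe enqueue, loosely analogous to micropython.schedule(func, arg)."""
    sched_q.put((func, arg))

def consumer():
    # Drain the queue on a single thread, like the head-of-queue
    # execution on core 0 described above.
    while True:
        func, arg = sched_q.get()
        if func is None:        # sentinel: stop consuming
            break
        func(arg)

t = threading.Thread(target=consumer)
t.start()

def record(n):
    results.append(n)

# "core 1" code scheduling work onto the single consumer:
for i in range(3):
    schedule(record, i)
schedule(None, None)            # sentinel to stop the consumer
t.join()
print(results)                  # -> [0, 1, 2]
```

Because `queue.Queue` is thread safe, the producers need no extra locking; whether MicroPython's real scheduler queue gives the same guarantee is exactly the question above.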
-
hi @redhead-p -- this is a known issue, see #9124. It's on the TODO list. Some thought required as to how this should work...
-
@redhead-p @rkompass I can confirm that the scheduled code sometimes runs on core 1. I have raised #10690 - in that code sample the main code and the hard ISR run on core 0, but the scheduled code runs on core 1. In my view this is a bug, for the reason explained in the issue. On the wider questions I'm looking forward to a response from @jimmo: he is the expert :)
-
@rkompass @peterhinch @redhead-p I will try to respond in more detail ASAP, but one thing to note: looking at the thread id can only prove the hypothesis that the scheduler can run on both cores; it cannot disprove it. There are many reasons why it might appear that the scheduler always runs on the same core, despite being able to run on both.

In summary, the VM (which runs identically on both cores) will try to take items from the scheduler queue at certain times (i.e. at the pending exception check, executed just after a branch instruction). But there are lots of circumstances where one core essentially gets "starved" because the other thread will ~always get there first, especially if the two cores are not executing exactly the same code.
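That starvation effect can be mimicked in plain CPython (again only an analogy with `threading` and `queue`, not the MicroPython VM): two workers are both *allowed* to take items from a shared queue, but one reaches its "check point" far more often, so in practice it takes nearly every item. The interval values below are illustrative assumptions.

```python
# CPython analogy of scheduler "starvation": both workers may take
# queued items, but the one that checks more frequently ~always
# gets there first, so the other rarely sees any work.
import queue
import threading
import time

q = queue.Queue()
counts = {"busy": 0, "idle": 0}
done = threading.Event()

def core(name, check_interval):
    # check_interval models how often this "core" reaches a point
    # (like the pending exception check) where it can take work.
    while not done.is_set():
        try:
            q.get_nowait()
        except queue.Empty:
            pass
        else:
            counts[name] += 1
        time.sleep(check_interval)

a = threading.Thread(target=core, args=("busy", 0.001))  # checks often
b = threading.Thread(target=core, args=("idle", 0.1))    # checks rarely
a.start()
b.start()

for _ in range(200):
    q.put(object())      # trickle in "scheduled" items
    time.sleep(0.001)

while not q.empty():     # wait for the queue to drain
    time.sleep(0.01)
done.set()
a.join()
b.join()
print(counts)            # "busy" takes the overwhelming majority
```

Even though both workers are symmetric in capability, the observed distribution is wildly lopsided, which is why sampling thread ids can't disprove that both cores run the scheduler.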
-
Just to throw my hat into the ring here: while looking into a (possibly related) hardlock when using Timers, I found that the timer callback would hop from core0 to core1 in a simplified repro when enabling the second core. E.g.:

```python
import machine  # needed for machine.mem32
from machine import Timer
import _thread
import time

def Tick(timer):
    # 0xd0000000 is the RP2040 SIO CPUID register
    print(f"CPUID (Tick): {machine.mem32[0xd0000000]}")

def core1_task():
    while True:
        pass

t = Timer(period=1000, callback=Tick)
print(f"CPUID (main): {machine.mem32[0xd0000000]}")
for x in range(2):
    time.sleep_ms(1000)
_thread.start_new_thread(core1_task, ())
while True:
    time.sleep_ms(1000)
```

The output shows the CPUID printed from Tick switching from core 0 to core 1 once the second thread is started.
Reading the above, it seems there's no guarantee about which core a timer callback will run on.

The original issue I was investigating was timers simply hardlocking the Pico when core1 is enabled. The following example will fire up both timers, which will be seemingly happy until core1 is enabled, whereupon the system will invariably hardfault:

```python
from machine import Timer
import _thread
import time

def TickA(timer):
    pass

def TickB(timer):
    pass

def core1_task():
    while True:
        pass

t1 = Timer(period=1000, callback=TickA)
t2 = Timer(period=1000, callback=TickB)
for w in ("Three", "Two", "One", "Hardlock..."):
    print(w)
    time.sleep_ms(1000)
_thread.start_new_thread(core1_task, ())
while True:
    print("Hello World")
    time.sleep_ms(1000)
```

I mention this on the off chance that these are somehow related issues, though I'm as yet unsure why this hardlock occurs.