doc/src/atomvm-internals.md (24 additions & 3 deletions)
@@ -57,10 +57,13 @@ As a BEAM implementation, AtomVM must be capable of spawning and managing the li
The `GlobalContext` structure maintains a list of running processes and contains the following fields for managing the running Erlang processes in the VM:

* `processes_table` the list of all processes running in the system
-* `waiting_processes` the subset of processes that are waiting to run (e.g., waiting for a message or timeout condition). This set is the complement of the set of ready processes.
-* `ready_processes` the subset of processes that are ready to run. This set is the complement of the set of waiting processes.
+* `waiting_processes` the subset of processes that are waiting to run (e.g., waiting for a message or timeout condition).
+* `running_processes` the subset of processes that are currently running.
+* `ready_processes` the subset of processes that are ready to run.

-Each of these fields are doubly-linked list (ring) structures, i.e, structs containing a `prev` and `next` pointer field. The `Context` data structure begins with two such structures, the first of which links the `Context` struct in the `processes_table` field, and the second of which is used for either the `waiting_processes` or the `ready_processes` field.
+A process is always on one of the `waiting_processes`, `running_processes` or `ready_processes` lists. A running process can be moved to the ready list while it is still running, to signify that if it yields it will be eligible to run again, typically because it received a message. Also, native handlers (ports) are never moved to the `running_processes` list: they stay on the `waiting_processes` list while they run (and can be moved to the `ready_processes` list if they are made ready while running).
+
+Each of these fields is a doubly-linked list (ring) structure, i.e., a struct containing `prev` and `next` pointer fields. The `Context` data structure begins with two such structures: the first links the `Context` struct into the `processes_table` list, and the second is used for the `waiting_processes`, `ready_processes` or `running_processes` list.

> Note. The C programming language treats structures in memory as contiguous sequences of fields of given types. Structures have no hidden preamble data, such as you might find in C++ or who knows what in even higher-level languages. The size of a struct, therefore, is determined simply by the sizes of its component fields.

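To make the ring layout concrete, here is a minimal C sketch of such an intrusive doubly-linked list, with two list nodes embedded at the start of the context structure. The names used here (`ListHead`, `Context`, `processes_table_head`, `processes_list_head`, `GET_LIST_ENTRY`) are illustrative assumptions, not necessarily AtomVM's actual definitions.

```c
#include <stddef.h>

/* Intrusive doubly-linked ring node: the node is embedded directly in the
 * structure it links, so no separate list-cell allocation is needed. */
struct ListHead
{
    struct ListHead *prev;
    struct ListHead *next;
};

/* Hypothetical sketch of a process context: the first embedded node links the
 * context into the global processes table, the second links it into exactly
 * one of the waiting/ready/running rings. */
struct Context
{
    struct ListHead processes_table_head;
    struct ListHead processes_list_head; /* waiting, ready or running */
    /* ... registers, heap, stack, mailbox, ... */
};

/* An empty ring points at itself. */
static void list_init(struct ListHead *head)
{
    head->prev = head;
    head->next = head;
}

/* Insert a node just before the ring's head, i.e. append it to the ring. */
static void list_append(struct ListHead *head, struct ListHead *node)
{
    node->prev = head->prev;
    node->next = head;
    head->prev->next = node;
    head->prev = node;
}

/* Unlink a node from whatever ring it is currently on. */
static void list_remove(struct ListHead *node)
{
    node->prev->next = node->next;
    node->next->prev = node->prev;
}

/* Because structs have no hidden preamble (see the note above), a pointer to
 * an embedded node can be converted back to the enclosing struct with a
 * simple offset computation. */
#define GET_LIST_ENTRY(ptr, type, member) \
    ((type *) (((char *) (ptr)) - offsetof(type, member)))
```

Moving a process from one scheduling list to another is then just `list_remove` on its `processes_list_head` followed by `list_append` onto the target ring.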
@@ -90,6 +93,24 @@ The relationship between the `GlobalContext` fields that manage BEAM processes a
## The Scheduler

+In SMP builds, AtomVM runs one scheduler thread per core, and scheduler threads are actually started on demand. The number of scheduler threads can be queried with `erlang:system_info/1` and modified with `erlang:system_flag/2`. All scheduler threads are considered equal: there is no notion of a main thread, except when shutting down (the main thread is shut down last).
+
+Each scheduler thread picks a ready process and executes it until it yields. Erlang processes yield when they are waiting (for a message) or after a number of reductions has elapsed. Native processes yield when they are done consuming messages (i.e., when the handler returns).
+
+Once a scheduler thread is done executing a process, if no other thread is waiting in `sys_poll_events`, it calls `sys_poll_events` with a timeout corresponding to the time to wait until the next scheduled execution. If there are ready processes, the timeout is 0. If there is no ready process, the scheduler thread will wait in `sys_poll_events` and, depending on the platform implementation, CPU usage can drop.
+
+If there is already one thread in `sys_poll_events`, the other scheduler threads pick the next ready process and, if there is none, wait. Other scheduler threads can also interrupt the wait in `sys_poll_events` when a process is made ready to run; they do so using the platform function `sys_signal`.
+
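A rough sketch of one scheduler thread's loop under the description above. Only `sys_poll_events` and `sys_signal` are names taken from the text; their signatures, and all the `scheduler_*`/`context_execute` helpers, are assumptions made for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical types and helpers, for illustration only. */
struct GlobalContext;
struct Context;

struct Context *scheduler_next_ready(struct GlobalContext *glb);  /* assumed */
void context_execute(struct Context *ctx);            /* runs until it yields */
bool scheduler_try_acquire_poller(struct GlobalContext *glb); /* one poller only */
void scheduler_release_poller(struct GlobalContext *glb);
uint32_t scheduler_next_wakeup_ms(struct GlobalContext *glb); /* 0 if ready work */
bool scheduler_should_stop(struct GlobalContext *glb);

/* Platform-provided, as described above; exact signatures are assumptions. */
void sys_poll_events(struct GlobalContext *glb, uint32_t timeout_ms);
void sys_signal(struct GlobalContext *glb);

/* One such loop runs per scheduler thread (one thread per core in SMP builds). */
static void scheduler_thread_loop(struct GlobalContext *glb)
{
    while (!scheduler_should_stop(glb)) {
        struct Context *ctx = scheduler_next_ready(glb);
        if (ctx) {
            /* Execute the process until it yields: waiting for a message,
             * out of reductions, or (for a native handler) handler returned. */
            context_execute(ctx);
        }

        if (scheduler_try_acquire_poller(glb)) {
            /* This thread is the only one waiting in sys_poll_events.
             * Timeout is 0 if ready processes exist, otherwise the time until
             * the next timer fires; CPU usage can drop while blocked here. */
            sys_poll_events(glb, scheduler_next_wakeup_ms(glb));
            scheduler_release_poller(glb);
        }
        /* Other threads loop back and pick the next ready process; when a
         * process is made ready elsewhere, sys_signal(glb) interrupts the
         * thread blocked in sys_poll_events. */
    }
}
```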
+## Mailboxes and signals
+
+Erlang processes receive messages in a mailbox. The mailbox is the interface with other processes.
+
+When a sender process sends a message to a recipient process, the message is first enqueued into an outer mailbox. The recipient process eventually moves all messages from the outer mailbox to the inner mailbox. The reason for having both an inner and an outer mailbox is to allow lock-free data structures based on atomic CAS operations.
+
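One common way to implement such a lock-free outer mailbox is a CAS-based push list that the owning process detaches with a single atomic exchange and then reverses. The sketch below uses C11 atomics with hypothetical `Message`/`Mailbox` layouts; it illustrates the idea rather than AtomVM's actual mailbox code.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical message and mailbox layouts, for illustration only. */
struct Message
{
    struct Message *next;
    /* ... term payload ... */
};

struct Mailbox
{
    _Atomic(struct Message *) outer_first; /* pushed to by any sender, lock-free */
    struct Message *inner_first;           /* only touched by the owning process */
};

/* Sender side: push onto the outer mailbox with a CAS loop (no lock). */
static void mailbox_outer_enqueue(struct Mailbox *mbox, struct Message *msg)
{
    struct Message *head = atomic_load(&mbox->outer_first);
    do {
        msg->next = head;
    } while (!atomic_compare_exchange_weak(&mbox->outer_first, &head, msg));
}

/* Receiver side: atomically detach the whole outer list, reverse it (pushes
 * come out newest-first), then append it to the inner mailbox so messages
 * keep their send order. */
static void mailbox_move_outer_to_inner(struct Mailbox *mbox)
{
    struct Message *outer = atomic_exchange(&mbox->outer_first, NULL);

    struct Message *reversed = NULL;
    while (outer) {
        struct Message *next = outer->next;
        outer->next = reversed;
        reversed = outer;
        outer = next;
    }

    if (!reversed) {
        return;
    }
    if (!mbox->inner_first) {
        mbox->inner_first = reversed;
        return;
    }
    /* Append the new batch after the messages already in the inner mailbox. */
    struct Message *tail = mbox->inner_first;
    while (tail->next) {
        tail = tail->next;
    }
    tail->next = reversed;
}
```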
+Sometimes, Erlang processes need to query information from other processes without sending a regular message, for example when using the `process_info/1,2` nifs. This is handled by signals. Signals are special messages that are enqueued in the outer mailbox of a process. They are processed by the recipient when regular messages are moved from the outer mailbox to the inner mailbox. Signal processing is part of the main loop and transparent to recipient processes. Both native handlers and Erlang processes can receive signals. Signals are also used to run specific operations on other processes that cannot be done from another thread; for example, signals are used to perform garbage collection on another process.
+
+When an Erlang process calls a nif that requires such information from another process, such as `process_info/1,2`, the nif returns a special value and sets the Trap flag on the calling process. The calling process is effectively blocked until the other process is scheduled and the information is sent back using another signal message. This mechanism can also be used by nifs that want to block until a condition is true.
+
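Schematically, a trapping nif of this kind might look like the sketch below. Every name here (`TRAP_ANSWER`, `CTX_FLAG_TRAP`, the signal helper) is hypothetical and merely stands in for whatever the VM actually uses; the point is the shape of the mechanism: enqueue a signal on the target, mark the caller as trapped, and return a sentinel value.

```c
/* Hypothetical types and helpers, for illustration only. */
typedef unsigned long term;
struct Context;

#define TRAP_ANSWER   ((term) 0)  /* sentinel "no result yet" value          */
#define CTX_FLAG_TRAP (1u << 0)   /* calling process must not be run further */

struct Context *globalcontext_get_process(struct Context *caller, term pid); /* assumed */
void context_flags_set(struct Context *ctx, unsigned flags);                 /* assumed */
void mailbox_send_info_request_signal(struct Context *target,                /* assumed */
                                      struct Context *reply_to, term item);

/* Sketch of a process_info-like nif: instead of reading the other process's
 * state directly (which would race with the thread running it), it enqueues a
 * signal on the target's outer mailbox and traps the caller. */
static term nif_process_info(struct Context *caller, term pid, term item)
{
    struct Context *target = globalcontext_get_process(caller, pid);

    /* Ask the target process for the information; the request travels as a
     * signal, so it is handled by the target's own scheduler thread. */
    mailbox_send_info_request_signal(target, caller, item);

    /* Block the caller until the reply signal arrives: it is descheduled and
     * only made ready again when the answer is delivered. */
    context_flags_set(caller, CTX_FLAG_TRAP);
    return TRAP_ANSWER;
}
```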
## Stacktraces

Stacktraces are computed from information gathered at load time from BEAM modules loaded into the application, together with information in the runtime stack that is maintained during the execution of a program. In addition, if a BEAM file contains a `Line` chunk, additional information is added to stack traces, including the file name (as defined at compile time), as well as the line number of a function call.