See also the description of the :keyword:`try` statement in section :ref:`try`
and the :keyword:`raise` statement in section :ref:`raise`.


.. _execcomponents:

Runtime Components
==================

Python's execution model does not operate in a vacuum.  It runs on a
computer.  When a program runs, the conceptual layers of how it runs
on the computer look something like this::

   host computer (or VM or container)
     process
       OS thread (runs machine code)

..  (Sometimes there may even be an extra layer right after "thread"
   for light-weight threads or coroutines.)

While a program always starts with exactly one of each of those, it may
grow to include multiple of each.  Hosts and processes are isolated and
independent from one another.  However, threads are not.  Each thread
does *run* independently, for the small segments of time it is
scheduled to execute its code on the CPU.  Otherwise, all threads
in a process share all the process' resources, including memory.
This is exactly what can make threads a pain: two threads running
at the same arbitrary time on different CPU cores can accidentally
interfere with each other's use of some shared data.  The initial
thread is known as the "main" thread.
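
As a rough illustration of that interference (a minimal sketch; the
names ``counter`` and ``worker`` are arbitrary), two threads updating
the same variable need some form of synchronization, such as a lock,
to avoid losing updates::

   import threading

   counter = 0
   lock = threading.Lock()

   def worker():
       global counter
       for _ in range(100_000):
           # Without the lock, the read-modify-write here could
           # interleave with the other thread and lose updates.
           with lock:
               counter += 1

   threads = [threading.Thread(target=worker) for _ in range(2)]
   for t in threads:
       t.start()
   for t in threads:
       t.join()

   assert counter == 200_000
   print(threading.main_thread().name)   # the initial, "main" thread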

The same layers apply to each Python program, with some extra layers
specific to Python::

   host
     process
       Python runtime
         interpreter
           Python thread (runs bytecode)

When a Python program starts, it looks exactly like that, with one
of each.  The process has a single global runtime to manage global
resources.  Each Python thread has all the state it needs to run
Python code (and use any supported C-API) in its OS thread.

..  , including its stack of call frames.

..  If the program uses coroutines (async) then the thread will end up
   juggling multiple stacks.
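
As a small, non-normative check of that initial state: a freshly
started program normally has exactly one Python thread, the "main"
thread (assuming nothing imported so far has started extra threads)::

   import threading

   # At startup there is a single Python thread, running in the
   # main interpreter.
   assert threading.active_count() == 1
   assert threading.current_thread() is threading.main_thread()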

In between the global runtime and the threads lies the interpreter.
It encapsulates all of the non-global runtime state that the
interpreter's Python threads share.  For example, all those threads
share :data:`sys.modules`.  When a Python thread is created, it belongs
to an interpreter.
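
For example (a minimal sketch; the choice of :mod:`json` is
arbitrary), a module imported by one thread becomes visible to every
other thread in the same interpreter, because they share
:data:`sys.modules`::

   import sys
   import threading

   def importer():
       # The import happens in the worker thread...
       import json

   t = threading.Thread(target=importer)
   t.start()
   t.join()

   # ...but the shared ``sys.modules`` makes it visible everywhere
   # in this interpreter, including the main thread.
   assert 'json' in sys.modules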

If the runtime supports using multiple interpreters, then each OS thread
will have at most one Python thread for each interpreter.  However,
only one is active in the OS thread at a time.  Switching between
interpreters means changing the active Python thread.
The initial interpreter is known as the "main" interpreter.

..  (The interpreter is different from the "bytecode interpreter",
   of which each thread has one to execute Python code.)
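
Here is a brief sketch of what that can look like, assuming the
:mod:`concurrent.interpreters` API added in Python 3.14; the code
passed to ``exec()`` runs in the new interpreter, in the current
OS thread::

   from concurrent import interpreters

   # Create a second interpreter in this process.  It has its own
   # ``sys.modules`` and other per-interpreter state.
   interp = interpreters.create()

   # Running code in it switches the active Python thread of this
   # OS thread to one belonging to the new interpreter.
   interp.exec("print('running in a second interpreter')")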

Once a program is running, new Python threads can be created using the
:mod:`threading` module.  Additional processes can be created using the
:mod:`multiprocessing` and :mod:`subprocess` modules.  You can run
coroutines (async) in the main thread using :mod:`asyncio`.
Interpreters can be created using the :mod:`concurrent.interpreters`
module.
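
A short, non-normative sketch tying those together (the printed
messages are arbitrary)::

   import asyncio
   import subprocess
   import sys
   import threading

   # A second Python thread in this process.
   t = threading.Thread(target=print, args=('hello from another thread',))
   t.start()
   t.join()

   # A coroutine run in the main thread.
   async def coro():
       return 'hello from a coroutine'

   print(asyncio.run(coro()))

   # A separate child process.
   subprocess.run([sys.executable, '-c', "print('hello from a child')"])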


.. rubric:: Footnotes

.. [#] This limitation occurs because the code that is executed by these operations