Conversation
|
Probably a better idea: add a thread mode as an alternative to the multiprocessing mode for parallel execution. |
|
I've tried with this fabfile: and this is what I got: |
|
Yeah, this assumes that you decorate your task functions with |
|
It kind of works... Output: Notice the error. We leave it for parallel mode to be able to re-import tasks, because python-3.8 will not pickle the task:

```
Can't pickle <function shellcmd at 0x10312c9d0>: it's not the same object as fabfile.shellcmd
```
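For illustration only (not fab-classic code, just a minimal sketch of the mechanism): pickle serializes functions *by reference*, looking them up by module and name; when a decorator rebinds the name to a different object, pickling the original function fails with exactly this "not the same object" error. The `task`/`Wrapper` names below are hypothetical stand-ins for fabric's `@task` decorator.

```python
import pickle

def task(func):
    # Stand-in for a decorator like fabric's @task: it rebinds the
    # function's module-level name to a different object (a wrapper).
    class Wrapper:
        def __init__(self, f):
            self.f = f
    return Wrapper(func)

def shellcmd():
    pass

raw = shellcmd             # keep a reference to the undecorated function
shellcmd = task(shellcmd)  # the module attribute now points at the Wrapper

# pickle looks up "shellcmd" in this module, finds the Wrapper instead of
# the function object it was asked to pickle, and refuses:
try:
    pickle.dumps(raw)
except pickle.PicklingError as exc:
    print(exc)  # Can't pickle <function shellcmd ...>: it's not the same object as ...shellcmd
```

This is why the error only shows up under the "spawn" start method: "spawn" must pickle tasks to send them to the child interpreter, while "fork" inherits them directly.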
|
I'm seeing the error as well (upgrading from fabric 1.4 to fab-classic 1.18 with python 2.x)
|
|
That's not related to this PR or the issue this PR attempts to solve (parallel under python-3.8); it only relates to getting a PTY. |
|
Sorry, I just searched for the error and that was the first thing that popped up, so I thought it's somehow related. Happy to open a separate issue. EDIT: reported #46 |
|
Any update on this? We are seeing the same issue. |
|
Unfortunately this still struggles when running under parallel mode (in 3.8+), one of the two problems being this traceback:

```
File ~/host_finder.py:348, in HostDataCollector.host_report(self, datacenter)
    345 output = execute(get_kvm_data, roles=[datacenter])
    347 # Trim trash results, and log a warning why this happened
--> 348 for key, value in output.items():
    349     if not isinstance(value, collections.abc.MutableMapping):
    350         logger.warning("%s raised when retrieving data from %s", value, key)

RuntimeError: dictionary changed size during iteration
```

Edit: The above test was run on Python 3.8.6. I'd considered this might be related to python/cpython#83541, but this still occurs under Python 3.8.13. |
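As a generic illustration of that RuntimeError (hypothetical data, not the actual fab-classic structures): mutating a dict while iterating over it raises exactly this error, and iterating over a snapshot of the items avoids it.

```python
# Hypothetical results dict: one good entry, one error string
output = {"host1": {"cpu": 4}, "host2": "error: timed out"}

# Deleting entries during iteration raises the error from the traceback above
try:
    for key, value in output.items():
        if not isinstance(value, dict):
            del output[key]  # mutation while the iterator is live
except RuntimeError as exc:
    print(exc)  # dictionary changed size during iteration

# Safe variant: iterate over a snapshot of the items
output = {"host1": {"cpu": 4}, "host2": "error: timed out"}
for key, value in list(output.items()):
    if not isinstance(value, dict):
        del output[key]
print(output)  # {'host1': {'cpu': 4}}
```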
|
Ignore the above - I incorrectly attributed this to dict updates after execute finished. This is not a bug in fab-classic; it was me being stupid when putting together a test. However, there are still behavioural changes in 3.8 compared to 3.6/3.7 - one is around execution time of parallel jobs, which is significantly higher: on the above test against 68 hosts (with parallel pool size set to 100), Python 3.7 will take ~8s to finish execution, with Python 3.8 taking ~45s for the same task. |
|
I've had great success with ensuring the default processing call uses the `fork` start method:

```diff
diff --git a/fabric/tasks.py b/fabric/tasks.py
index 47a2b49f..dcb34d5f 100644
--- a/fabric/tasks.py
+++ b/fabric/tasks.py
@@ -329,6 +329,7 @@ def execute(task, *args, **kwargs):
     # if it can't.
     try:
         import multiprocessing
+        multiprocessing.set_start_method('fork')
     except ImportError:
         import traceback
         tb = traceback.format_exc()
```

@ploxiln it might be worth making this change for non-Windows (or for all platforms, if we don't maintain Windows support), until such a time as the multi-thread method can be rewritten as you suggest. |
|
I'm pretty sure parallel mode never worked with Windows anyway, so we could just switch to the "fork" method now.
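A sketch of what that could look like (an illustration, not the actual patch): prefer "fork" everywhere it exists, guarding against the start method already having been fixed by the embedding application.

```python
import multiprocessing
import sys

# Illustrative sketch: "fork" is unavailable on Windows, so guard on the
# platform. set_start_method() raises RuntimeError if the start method
# has already been fixed, hence the try/except.
if sys.platform != "win32":
    try:
        multiprocessing.set_start_method("fork")
    except RuntimeError:
        pass  # already set elsewhere; leave it alone
print(multiprocessing.get_start_method())
```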
|
Just came across your work, 1000% thank you for forking fab-classic. As for this issue: after dealing with so many old / fragile py2-related dependencies, I really like the looks of this ability to restore parallel tasks in current python. Mg. |
|
I know this is not a proper fix or anything, but one can use this workaround without doing any monkey-patching. In my main fabfile I did this:

```python
import multiprocessing
multiprocessing.set_start_method('fork')
```

And it seems to be working nicely. I just realized this as I was stuck to ...

Also @ploxiln - based on the docs for multiprocessing: https://docs.python.org/3/library/multiprocessing.html
I believe that this is the reason why the problem started happening for me. As in, it would have also started on earlier python versions if they used "spawn" as the default start method. |
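That matches the documented behaviour: the default start method is platform-dependent, and macOS switched its default from "fork" to "spawn" in Python 3.8 (my reading of the docs, worth double-checking). The defaults can be inspected at runtime:

```python
import multiprocessing

# "fork" is the default on Linux; "spawn" is the default on Windows and,
# since Python 3.8, on macOS. Task pickling only breaks under "spawn",
# which re-imports the fabfile in a fresh interpreter.
print(multiprocessing.get_start_method())       # e.g. "fork" on Linux
print(multiprocessing.get_all_start_methods())  # e.g. ['fork', 'spawn', 'forkserver']
```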
|
Thanks @Stealthii for figuring this one out (so long ago ... sorry). Closing this PR in favor of #88
attempt to fix #27
(but probably only works for basic cases)