support for gpu queue #3642
```python
@@ -821,6 +821,11 @@ def update(self, **opts):
        """Update inputs"""
        self.inputs.update(**opts)

    def is_gpu_node(self):
        return (hasattr(self.inputs, 'use_cuda') and self.inputs.use_cuda) or (
            hasattr(self.inputs, 'use_gpu') and self.inputs.use_gpu
        )
```

Suggested change:

```diff
-        return (hasattr(self.inputs, 'use_cuda') and self.inputs.use_cuda) or (
-            hasattr(self.inputs, 'use_gpu') and self.inputs.use_gpu
-        )
+        return bool(getattr(self.inputs, 'use_cuda', False)) or bool(
+            getattr(self.inputs, 'use_gpu', False))
```
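As an aside, the suggested `getattr` form behaves the same whether or not the attribute exists, and always returns a plain `bool`. A minimal sketch with a stand-in inputs class (the class and variable names here are hypothetical, not from the PR):

```python
class _Inputs:
    """Stand-in for a node's inputs object (hypothetical)."""
    pass

def is_gpu_node(inputs):
    # getattr with a default folds the hasattr-then-read pattern into a
    # single lookup and yields a plain bool even when the trait is absent
    return bool(getattr(inputs, 'use_cuda', False)) or bool(
        getattr(inputs, 'use_gpu', False))

cpu_inputs = _Inputs()
gpu_inputs = _Inputs()
gpu_inputs.use_cuda = True
```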
```python
@@ -100,6 +100,7 @@ class MultiProcPlugin(DistributedPluginBase):

    - non_daemon: boolean flag to execute as non-daemon processes
    - n_procs: maximum number of threads to be executed in parallel
    - n_gpu_procs: maximum number of GPU threads to be executed in parallel
    - memory_gb: maximum memory (in GB) that can be used at once.
    - raise_insufficient: raise error if the requested resources for
      a node over the maximum `n_procs` and/or `memory_gb`

@@ -130,10 +131,23 @@ def __init__(self, plugin_args=None):
        )
        self.raise_insufficient = self.plugin_args.get("raise_insufficient", True)

        # GPUs found on system
        self.n_gpus_visible = MultiProcPlugin.gpu_count()
        # procs per GPU set by user
        self.n_gpu_procs = self.plugin_args.get('n_gpu_procs', self.n_gpus_visible)

        # total no. of processes allowed on all gpus
        if self.n_gpu_procs > self.n_gpus_visible:
            logger.info(
                'Total number of GPUs proc requested (%d) exceeds the available number of GPUs (%d) on the system. Using requested GPU slots at your own risk!'
                % (self.n_gpu_procs, self.n_gpus_visible)
            )
```
Suggested change:

```diff
-                'Total number of GPUs proc requested (%d) exceeds the available number of GPUs (%d) on the system. Using requested GPU slots at your own risk!'
-                % (self.n_gpu_procs, self.n_gpus_visible)
+                'Total number of GPUs proc requested (%d) exceeds the available number of GPUs (%d) on the system. Using requested GPU slots at your own risk!',
+                self.n_gpu_procs, self.n_gpus_visible)
```
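The suggestion passes the values as arguments instead of %-formatting the string eagerly, so `logging` only interpolates when the INFO level is actually enabled. A self-contained sketch of the lazy form (the logger name and values here are made up for the demo):

```python
import io
import logging

logger = logging.getLogger("gpu_demo")  # hypothetical logger name
logger.setLevel(logging.INFO)
buf = io.StringIO()
logger.addHandler(logging.StreamHandler(buf))

n_gpu_procs, n_gpus_visible = 4, 2
if n_gpu_procs > n_gpus_visible:
    # lazy %-style args: interpolation happens inside logging,
    # and only if a handler will actually emit the record
    logger.info(
        'Total number of GPU procs requested (%d) exceeds the available '
        'number of GPUs (%d) on the system.',
        n_gpu_procs, n_gpus_visible,
    )

message = buf.getvalue().strip()
```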
I would expect this to be hit by your test, but coverage shows it's not. Can you look into this?
Maybe I missed that because I never used `updatehash=True`, but it seems that no test covers that option. Should we add a test with it?
Moreover, that error does not impact "common" use (I have a project that includes this GPU support code).
While I was looking into this I found two errors in the `updatehash` functionality. I sent pull request #3709 to fix the bigger one.
The second is that in the MultiProc plugin EVERY node is executed in the main thread if `updatehash=True`, so no multiprocessing happens. I will try to send a pull request for that too (maybe after this GPU support is merged, to avoid having to handle merge conflicts).
Note that this is releasing resource claims that were made around line 356 so the next time through the loop sees available resources.
```python
# claim (around line 356):
if is_gpu_node:
    free_gpu_slots -= next_job_gpu_th

# release:
if is_gpu_node:
    free_gpu_slots += next_job_gpu_th
```
This is a general utility; I would put it into `nipype.pipeline.plugins.tools` as a function, not a static method.
Also consider:
```python
    @staticmethod
    def gpu_count():
        n_gpus = 1
        try:
            import GPUtil
            return len(GPUtil.getGPUs())
        except ImportError:
            return n_gpus
```

Suggested change:

```python
    @staticmethod
    def gpu_count():
        try:
            import GPUtil
        except ImportError:
            return 1
        else:
            return len(GPUtil.getGPUs())
```
As a rule, I try to keep the section inside a `try` block as short as possible, to avoid accidentally catching other exceptions that are raised. An `else` block can contain anything that depends on the success of the `try` block.
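To make the rule concrete outside the import case, here is a minimal sketch (the function and its behavior are invented for illustration): only the call that can raise sits in `try`, while the logic that depends on its success sits in `else`, so a bug in that logic is not silently swallowed by the handler.

```python
def parse_port(text):
    # keep only the risky conversion in `try`
    try:
        port = int(text)
    except ValueError:
        return None
    else:
        # runs only if int() succeeded; an exception raised here
        # would propagate rather than be caught as a ValueError above
        return port if 0 < port < 65536 else None
```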
Hard pins are a very bad idea. If you need a particular API, use `>=` to ensure it's present. We should avoid upper bounds as much as possible, although they are not always avoidable.
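For illustration, a requirements-style fragment contrasting the two (the version number is a placeholder, not a recommendation):

```
# avoid a hard pin:
#   GPUtil==1.4.0
# prefer a lower bound that guarantees the needed API:
GPUtil>=1.4.0
```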