Basically my question is: in SnugDock, is running 10 jobs that each output 100 structures equivalent to running 1 job that outputs 1000 structures?
With the compute I currently have access to, it is much easier to submit 10 small jobs than 1 big job, so I was wondering whether I could crudely parallelise my SnugDock runs this way.
However, that of course depends on whether each docking trajectory is generated independently at random, or whether there is some ordered search/refinement that spans the whole run.
Anyone have any experience with this?
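For context, the crude parallelisation I have in mind is just generating 10 independent job commands, one per chunk of 100 structures. This is only a sketch: the `snugdock` executable name and the `-s`, `-nstruct`, `-out:suffix`, and `-out:file:scorefile` flags are my assumptions about the standard Rosetta command line, and the input PDB name is a placeholder. The per-job suffix is there so the outputs from the 10 jobs don't collide; my understanding is that each Rosetta process picks its own random seed unless a constant seed is explicitly requested.

```shell
#!/bin/sh
# Sketch: write one SnugDock command per job to jobs.txt, then submit each
# line to the scheduler however is convenient. Flag names and the input PDB
# are assumptions about a typical Rosetta setup, not a verified recipe.
: > jobs.txt
for i in $(seq 1 10); do
  # _job${i} suffix keeps the 10 x 100 output structures separate
  echo "snugdock -s antibody_antigen.pdb -nstruct 100 -out:suffix _job${i} -out:file:scorefile score_job${i}.sc" >> jobs.txt
done
cat jobs.txt
```

Whether concatenating the 10 score files afterwards is statistically the same as one 1000-structure run is exactly my question.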