bob / bob.pipelines

Commit 4b4ec3fc authored Dec 07, 2020 by Tiago de Freitas Pereira
[dask] Moved dask_get_partition_size from bob.bio.base to bob.pipelines
parent a3210f40
Pipeline #46381 passed with stage in 3 minutes and 43 seconds
Showing 1 changed file with 35 additions and 0 deletions

bob/pipelines/distributed/__init__.py  +35 -0
@@ -16,3 +16,38 @@ __path__ = extend_path(__path__, __name__)
# )
VALID_DASK_CLIENT_STRINGS = ("single-threaded", "sync", "threaded", "processes")


def dask_get_partition_size(cluster, n_objects, lower_bound=200):
    """
    Heuristic that gives you a number for dask.partition_size.

    The heuristic is pretty simple: given the maximum number of workers that
    can run in a queue (not the number of workers currently running) and the
    total number of objects to be processed, compute n_objects / n_max_workers.

    Check https://docs.dask.org/en/latest/best-practices.html#avoid-very-large-partitions
    for best practices.

    Parameters
    ----------

    cluster: :any:`bob.pipelines.distributed.sge.SGEMultipleQueuesCluster`
        Cluster of the type :any:`bob.pipelines.distributed.sge.SGEMultipleQueuesCluster`

    n_objects: int
        Number of objects to be processed

    lower_bound: int
        Minimum partition size.

    """
    from .sge import SGEMultipleQueuesCluster

    if not isinstance(cluster, SGEMultipleQueuesCluster):
        return None

    max_jobs = cluster.sge_job_spec["default"]["max_jobs"]

    # Split the work evenly across the maximum number of jobs in the "default"
    # queue, while keeping the partition size above the requested lower bound.
    return (
        max(n_objects // max_jobs, lower_bound) if n_objects > max_jobs else n_objects
    )
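Below is a minimal usage sketch of the new helper. It is illustrative only: the sample list is synthetic, SGEMultipleQueuesCluster is constructed with defaults and assumes a reachable SGE grid, and the max_jobs figure in the comment is an assumed value, not one taken from this commit.

    import dask.bag as db

    from bob.pipelines.distributed import dask_get_partition_size
    from bob.pipelines.distributed.sge import SGEMultipleQueuesCluster

    samples = list(range(100000))  # synthetic stand-in for the objects to process

    cluster = SGEMultipleQueuesCluster()  # assumes an SGE grid is reachable
    partition_size = dask_get_partition_size(cluster, n_objects=len(samples))

    # If the "default" queue spec had max_jobs=48 (assumed value), then
    # 100000 // 48 == 2083 > lower_bound (200), so partition_size == 2083.
    bag = db.from_sequence(samples, partition_size=partition_size)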