bob / bob.pipelines · Merge request !56: Two new features
Merged · Two new features

Tiago de Freitas Pereira requested to merge updates into master · Dec 7, 2020

Overview (0) · Commits (2) · Pipelines (1) · Changes (2)
- Moved dask_get_partition_size from bob.bio.base to bob.pipelines
- Updated the target duration of a task to 10s, to be very aggressive in scale-up (see the sketch below)
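The 10s task target duration refers to Dask's adaptive scaling: the adaptive scaler sizes the worker pool so that the outstanding work would finish in roughly that time, so a short target makes it request workers aggressively. A minimal sketch of how such a setting is applied, assuming a dask_jobqueue SGECluster with placeholder queue and resource values rather than the actual bob.pipelines configuration:

from dask.distributed import Client
from dask_jobqueue import SGECluster

# Placeholder SGE cluster; queue, cores and memory are illustrative values only.
cluster = SGECluster(queue="all.q", cores=1, memory="4GB")

# target_duration tells the adaptive scaler how long the pending workload should
# take with the requested workers; a short value such as "10s" scales up quickly.
cluster.adapt(minimum=1, maximum=48, target_duration="10s")

client = Client(cluster)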
Merge request reports

Compare master (base) and latest version cf626943 · 2 commits, Dec 7, 2020 · 2 files changed, +40 −3
Files (2)
bob/pipelines/distributed/__init__.py  +33 −0
@@ -16,3 +16,36 @@ __path__ = extend_path(__path__, __name__)
# )

VALID_DASK_CLIENT_STRINGS = ("single-threaded", "sync", "threaded", "processes")


def dask_get_partition_size(cluster, n_objects, lower_bound=200):
    """
    Heuristic that gives you a number for dask.partition_size.

    The heuristic is pretty simple: given the maximum number of workers that can
    run in a queue (not the number of workers currently running) and the total
    number of objects to be processed, use n_objects / n_max_workers.

    Check https://docs.dask.org/en/latest/best-practices.html#avoid-very-large-partitions
    for best practices.

    Parameters
    ----------

    cluster: :any:`bob.pipelines.distributed.sge.SGEMultipleQueuesCluster`
        Cluster of the type :any:`bob.pipelines.distributed.sge.SGEMultipleQueuesCluster`

    n_objects: int
        Number of objects to be processed

    lower_bound: int
        Minimum partition size.
    """
    # Only SGE clusters expose the job spec this heuristic relies on.
    if not isinstance(cluster, SGEMultipleQueuesCluster):
        return None

    max_jobs = cluster.sge_job_spec["default"]["max_jobs"]

    # Trying to set a lower bound for the partition size.
    return (
        max(n_objects // max_jobs, lower_bound) if n_objects > max_jobs else n_objects
    )
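As a usage sketch (not part of this merge request), the returned value is typically passed to dask.bag as the partition size; the helper name and import path come from the diff above, while the to_bag wrapper and the fallback of 200 for non-SGE clusters are assumptions mirroring the lower_bound default:

import dask.bag

from bob.pipelines.distributed import dask_get_partition_size


def to_bag(samples, cluster):
    # Size the partitions with the heuristic above; for cluster types other
    # than SGEMultipleQueuesCluster it returns None, so fall back to a fixed
    # size (200 here, mirroring lower_bound, is an assumption).
    partition_size = dask_get_partition_size(cluster, len(samples))
    if partition_size is None:
        partition_size = 200
    return dask.bag.from_sequence(samples, partition_size=partition_size)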