Fixed multiqueue
1 unresolved thread
I'm fixing the issue raised with the multiqueue here: I was wrongly setting all tasks to run under a single resource restriction.
Now the problem is fixed.
To get it running, wrap your pipeline the same way as before and fetch the resources like this:
pipeline = bob.pipelines.wrap(
    ["sample", "checkpoint", "dask"],
    pipeline,
    model_path="./",
    transform_extra_arguments=(("metadata", "metadata"),),
    fit_tag="q_short_gpu",
)

from bob.pipelines.distributed.sge import get_resource_requirements

resources = get_resource_requirements(pipeline)
pipeline.fit_transform(X_as_sample).compute(
    scheduler=client, resources=resources
)
Merge request reports
Activity
added 6 commits
- dfdb7402...3caab253 - 5 commits from branch master
- 5895262c - Merge branch 'master' into 'multi'
enabled an automatic merge when the pipeline for 5895262c succeeds
enabled an automatic merge when the pipeline for f9ec8e30 succeeds
A :py:class:`sklearn.pipeline.Pipeline` wrapper with :any:`bob.pipelines.DaskWrapper`

Example
-------
>>> cluster = SGEMultipleQueuesCluster(sge_job_spec=Q_1DAY_GPU_SPEC)  # doctest: +SKIP
>>> client = Client(cluster)  # doctest: +SKIP
>>> from bob.pipelines.sge import get_resource_requirements  # doctest: +SKIP
>>> resources = get_resource_requirements(pipeline)  # doctest: +SKIP
>>> my_delayed_task.compute(scheduler=client, resources=resources)  # doctest: +SKIP
"""

resources = dict()
for s in pipeline:
    if hasattr(s, "resource_tags"):
        resources.update(s.resource_tags)
return resources

mentioned in commit ac67a380