bob issues (https://gitlab.idiap.ch/groups/bob/-/issues)

https://gitlab.idiap.ch/bob/bob.bio.base/-/issues/29
Preprocessor does not use the load method of the BioFile class (2018-06-03, André Anjos)

*Created by: 183amir*
The databases provide a `load` method in their `File` class, but it is not used by the preprocessor when data is read.
`read_original_data`: https://github.com/bioidiap/bob.bio.base/blob/master/bob/bio/base/preprocessor/Preprocessor.py#L87
Some databases may contain more than one sample per file (like videos, or audio files with two channels); while I understand that this is handled in `bob.bio.video` for video files, it is not clear how this can be handled for two-channel audio files.
If the preprocessor called the `load` method of the `File` (`BioFile`) class, we could use logical paths for `File.path` instead of the actual path and handle this in the `load` method. For example, `File.path` would be `origpath_A` or `origpath_B` depending on the channel, and the `load` method would then return only channel A or B, depending on the logical path that was requested.
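To illustrate, a minimal sketch of such a channel-aware `load` (the `TwoChannelBioFile` class and the `_A`/`_B` suffix convention are hypothetical; it assumes `scipy` is available and that `BioFile.load` may be overridden this way):

```python
import os

from scipy.io import wavfile

from bob.bio.base.database import BioFile


class TwoChannelBioFile(BioFile):
    """Hypothetical file class whose logical path encodes the channel."""

    def load(self, directory=None, extension=".wav"):
        # self.path is logical, e.g. "session1/sample_A" or ".../sample_B";
        # strip the channel suffix to recover the real file on disk.
        base, channel = self.path.rsplit("_", 1)
        real_path = os.path.join(directory or "", base + extension)
        _rate, data = wavfile.read(real_path)  # data: (n_samples, 2)
        return data[:, 0] if channel == "A" else data[:, 1]
```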
Milestone: Refactoring 2016 and gitlab migration. Assignee: Amir MOHAMMADI.

https://gitlab.idiap.ch/bob/bob.bio.base/-/issues/195
download_file extracts the archive file every time it is called. (2023-10-31, Yannick DAYER)

When the `extract` flag is set in [`download_file`](https://gitlab.idiap.ch/bob/bob.bio.base/-/blob/v8.0.0/src/bob/bio/base/database/utils.py?ref_type=tags#L474), the archive file is extracted at every call, even if the first call that downloaded the archive already extracted it. The issue is that, for some archives and on some systems, this extraction takes time (more than 10s).

Choices:
- Keep the extraction even if the archive was not re-downloaded:
  - Takes time at the start of every run.
  - Ensures the extracted files are correct and were not modified.
- Only extract the archive just after downloading it (see the sketch below):
  - If the extracted files are modified between runs, the next run will use those modified files.
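A minimal sketch of the second option, using a marker file to remember that an archive was already extracted (the `_extract_once` helper is hypothetical, not the current `download_file` code):

```python
import shutil
from pathlib import Path


def _extract_once(archive: Path, destination: Path) -> Path:
    """Extract ``archive`` into ``destination`` only on the first call."""
    marker = destination / (archive.name + ".extracted")
    if not marker.exists():
        shutil.unpack_archive(str(archive), extract_dir=str(destination))
        marker.touch()  # remember that this archive was already extracted
    return destination
```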
Assignee: Yannick DAYER.

https://gitlab.idiap.ch/bob/bob.bio.base/-/issues/193
Add a `--score-column` option in metrics and figures (2023-08-22, Yannick DAYER)

In order to process score files with multiple score columns, we should have a way to select a specific column that contains the scores.
``` sh
bob bio metrics /path/to/the/file --score-column "fusion_score"
```
(Until now, we needed to change both columns' headers to do that.)
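A hedged sketch of what the option could do internally, assuming the score files are CSV files with named columns (the `load_scores` helper is hypothetical, not the actual `bob bio metrics` implementation):

```python
import pandas as pd


def load_scores(path, score_column="score"):
    """Return the requested score column from a CSV score file."""
    df = pd.read_csv(path)
    if score_column not in df.columns:
        raise ValueError(
            f"Column {score_column!r} not found; available: {list(df.columns)}"
        )
    return df[score_column]
```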
Assignee: Yannick DAYER.

https://gitlab.idiap.ch/bob/nightlies/-/issues/64
Pipeline passes even when some sub-pipeline fails (2023-05-11, Yannick DAYER)

When the bob.bio.face pipeline fails (e.g. [on May the 4th](https://gitlab.idiap.ch/bob/bob.bio.face/-/pipelines/73071)), the [nightlies pipeline](https://gitlab.idiap.ch/bob/nightlies/-/pipelines/73059) does not care that it failed (it even marks it as succeeded) and continues executing.

I'm expecting the nightlies pipeline to stop (or at least show that one job failed).
This is not critical, as we receive a notification anyway at the level of the package (here bob.bio.face).

https://gitlab.idiap.ch/bob/bob.bio.base/-/issues/191
Samples lose their metadata when going through PipelineSimple (2023-04-04, Yannick DAYER)

Somewhere in the pipeline, the metadata of the references and probes (age, gender, ...) are lost. They are no longer available in the score files.
Issue encountered with `bob bio pipeline simple mobio-all arcface-insightface`.
After a first investigation:
- The metadata attributes are present when creating the `Sample` objects with the `Database`.
- The `ScoreWriter` receives `Sample` objects that are already missing those attributes.
- The `Distance` BioAlgorithm seems to transform samples into bags; more investigation may be needed here (see the debugging sketch below).
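To help locate where the attributes disappear, a small debugging sketch (assuming `bob.pipelines.Sample` and `SampleSet` accept arbitrary keyword attributes, which is how metadata is usually attached):

```python
from bob.pipelines import Sample, SampleSet

# Build a probe with known metadata, then feed it to the stage under test.
probe = SampleSet(
    [Sample(data=None, key="s1", gender="f", age=30)],
    key="set1",
    subject_id="42",
)
stage_output = probe  # replace with the actual output of the suspected stage
for attr in ("gender", "age"):
    kept = all(hasattr(s, attr) for s in stage_output.samples)
    print(f"{attr}: {'kept' if kept else 'lost'}")
```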
Assignee: Yannick DAYER.

https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/46
Dask Client configuration not available in installed package (2023-03-28, Yannick DAYER)

When using the `bob bio simple` commands, the `dask.client` entry points are not available.
Running `bob bio pipeline simple -H conf.py` outputs this in `conf.py`:
``` python
# ----------8<----------
# dask_client = single-threaded
"""Optional parameter: dask_client (--dask-client, -l) [default: single-threaded]
Dask client for the execution of the pipeline. Can be a `dask.client' entry point, a module name, or a path to a Python file which contains a variable named `dask_client'.
Registered entries are: []"""
# ----------8<----------
```
Tried with the package installed from conda beta; also tried with `pip install -e`.
Entry points in bob.bio.base and bob.bio.face are working, so I presume it's an issue with how we do it in this package (maybe a wrong name for the entry-point group?).
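A quick way to test that hypothesis from Python (standard-library `importlib.metadata`, Python 3.10+; the group name `dask.client` is the one mentioned in the help text):

```python
from importlib.metadata import entry_points

# On Python 3.10+, entry points can be filtered by group; an empty
# result confirms nothing was registered under the expected group name.
registered = entry_points(group="dask.client")
print([ep.name for ep in registered])
```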
Assignee: Yannick DAYER.

https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/96
Annotation type XYZ is not supported. (2023-03-08, Christophe ECABERT)

With recent changes to `CSVDataset`, running any baseline against the `multipie` database is silently failing with the following message:
```bash
bob.bio.face.utils@2023-02-23 13:54:36,596 -- WARNING: Annotation type ('eyes-center', 'left-profile', 'right-profile') is not supported. Input images will be fully scaled.
```
There are two reasons for this behaviour:
- The change from `list` to `tuple` for the `MultipieDatabase.annotation_type` variable ([now](https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/master/src/bob/bio/face/database/multipie.py#L114), [then](https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/c8a6495ad85105661efa915e86a7eac7b1c2b3f6/bob/bio/face/database/multipie.py#L127))
- The oversimplified checks in [bob.bio.face.utils.py](https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/master/src/bob/bio/face/utils.py), which test only for `list` whereas the value could be any `Iterable`, not only a `list` (see the sketch below).
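A hedged sketch of a more permissive check (the helper name is hypothetical; the real check lives in `bob.bio.face.utils`):

```python
from collections.abc import Iterable


def is_multi_annotation_type(annotation_type):
    """True when several annotation types are given (list, tuple, ...)."""
    # Strings are Iterable too, so they must be excluded explicitly.
    return isinstance(annotation_type, Iterable) and not isinstance(
        annotation_type, str
    )
```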
Assignee: Yannick DAYER.

https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/95
Set pytorch to run on single thread only on docker jobs (2023-02-20, Yannick DAYER)

To fix the jobs on the docker runners for the CI, I added [this line](https://gitlab.idiap.ch/bob/bob.bio.face/-/blob/d6d8e20bb73cfe4b099fedee603fad6498203d7f/src/bob/bio/face/embeddings/pytorch.py#L27) a while back.

We should:
We should:
- [x] remove this line as this limits *any* work to a single thread and is not really wanted for performance reasons.
- [x] add `OMP_NUM_THREADS=1` as a variable in the CI config, either in the `.gitlab-ci.yaml` (in `bob/dev-profile`) (if possible, only for the jobs running on docker) or in the runner configuration (if possible?). Both thread-limiting options are sketched below.
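For reference, a sketch of both thread-limiting options (assuming `torch` is installed; I am guessing the linked line applies something like the in-code variant, and the environment variable must be set before OpenMP-backed libraries start their thread pools):

```python
import os

# Option 1: the CI-level approach, as an environment variable
# (must be set before OpenMP-backed libraries are loaded).
os.environ["OMP_NUM_THREADS"] = "1"

import torch  # noqa: E402  (imported after setting the variable)

# Option 2: the in-code limit, restricting torch itself to one thread.
torch.set_num_threads(1)
```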
Assignee: André MAYORAZ.

https://gitlab.idiap.ch/bob/bob.bio.base/-/issues/190
SampleSet missing attributes 'key' (2023-04-04, Christophe ECABERT)

With the updated version of the `CSVDatabase`, the `references` and `probes` are now `SampleSet` objects. Special care is taken to make sure the required attributes are defined for the samples within them.
However, there is no check for the `SampleSet` attributes themselves, which is not directly an issue. But later down the pipeline, when using a checkpointed experiment, the `BioAlgCheckpointWrapper` relies on the `key` attribute to run (i.e. `_enroll_sample_set(...)`).
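A minimal sketch of such a guard (a hypothetical helper, not existing code):

```python
def check_sample_set(sample_set, required=("key",)):
    """Fail early instead of crashing later in BioAlgCheckpointWrapper."""
    missing = [attr for attr in required if not hasattr(sample_set, attr)]
    if missing:
        raise ValueError(f"SampleSet is missing attributes: {missing}")
```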
The `vuln` commands currently use `probe_template_id` and `bio_ref_template_id`.To know if a score is genuine or an impostor, the fields to compare are `probe_subject_id` and `bio_ref_subject_id`.
The `vuln` commands currently use `probe_template_id` and `bio_ref_template_id`.Yannick DAYERYannick DAYERhttps://gitlab.idiap.ch/bob/bob.pipelines/-/issues/45CheckpointWrapper on annotator, saving the original dataset images as well as...2023-01-27T17:11:33ZAlain KOMATYCheckpointWrapper on annotator, saving the original dataset images as well as the annotations - waste of sapce!Hello,
When choosing to checkpoint in the pipeline, the annotator folder will contain the original images of the dataset instead of the annotations (face landmarks for example). One solution is the wrap a CheckpointWrapper around the an...Hello,
When choosing to checkpoint in the pipeline, the annotator folder will contain the original images of the dataset instead of the annotations (face landmarks, for example). One solution is to wrap a CheckpointWrapper around the annotator. This will save the annotations in the annotator folder, but it will also save the original images, because now it is wrapped twice!
This problem comes from the [_wrap](https://gitlab.idiap.ch/bob/bob.pipelines/-/blob/master/src/bob/pipelines/wrappers.py#L1014) function in the [wrappers](https://gitlab.idiap.ch/bob/bob.pipelines/-/blob/master/src/bob/pipelines/wrappers.py) module.
Thanks to @cecabert, who pointed out that in this function there is no test of whether the `estimator` is already an instance of CheckpointWrapper! One possible solution could be as follows (I tested it and it works for my pipelines):
```python
def _wrap(estimator, **kwargs):
    # wrap the object and pass the kwargs
    # (`bases` is the list of wrapper classes, taken from the scope of
    # the enclosing `wrap` function)
    for w_class in bases:
        valid_params = w_class._get_param_names()
        params = {k: kwargs.pop(k) for k in valid_params if k in kwargs}
        if estimator is None:
            estimator = w_class(**params)
        # only wrap again if not already an instance of this wrapper class
        elif not isinstance(estimator, w_class):
            estimator = w_class(estimator, **params)
    return estimator, kwargs
```

Assignee: Yannick DAYER.

https://gitlab.idiap.ch/bob/bob.devtools/-/issues/112
`bdt ci check` is executed on the nightlies job (2022-12-22, Samuel GAIST)

Why do the nightlies jobs run `bdt ci check` before doing their actual work?
The way these checks are implemented is specific to bob's packages, which the nightlies are not.
It breaks both the beat and bob nightlies runs.

Assignee: André Anjos.

https://gitlab.idiap.ch/bob/bob.pipelines/-/issues/44
check_parameters_for_validity does not always return the same type (2022-12-06, Yannick DAYER)

Currently, `bob.pipelines.utils.check_parameters_for_validity` can return ["a list or tuple"](https://gitlab.idiap.ch/bob/bob.pipelines/-/blob/master/src/bob/pipelines/utils.py#L117).
It seems weird to return a list **or** a tuple, and somewhere down the line we actually expect a list (with a `remove` method).
Could you ensure that this returns a `list` in all cases (and edit the docstring to reflect that)?
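A minimal sketch of the requested normalization (the helper name is hypothetical; the real fix belongs inside `check_parameters_for_validity`):

```python
def _ensure_list(parameters):
    """Normalize the validated parameters to a plain list."""
    # Callers down the line use list-only methods such as ``remove``,
    # so never hand back a tuple.
    return list(parameters)
```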
Assignee: André MAYORAZ.

https://gitlab.idiap.ch/bob/bob.learn.em/-/issues/49
dask problem with fitting GMMMachine ... (2022-11-10, Pasra Rahimi)

Can you guys check this error? The original script can be found at `/idiap/temp/prahimi/latentplay/gmm/gmm.py`.
```bash
Loading latent data ...
Done loading.
(50000, 9216)
Working on GMMMachine with number of mixture models set to : 10
Traceback (most recent call last):
File "/somewhere/exps/latentplay/gmm/gmm.py", line 38, in <module>
machine = machine.fit(latents)
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/bob/learn/em/gmm.py", line 792, in fit
self.initialize_gaussians(X)
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/bob/learn/em/gmm.py", line 731, in initialize_gaussians
kmeans_machine = kmeans_machine.fit(data)
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/bob/learn/em/kmeans.py", line 328, in fit
self.initialize(data=X)
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/bob/learn/em/kmeans.py", line 312, in initialize
self.centroids_ = k_init(
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/dask_ml/cluster/k_means.py", line 365, in k_init
return init_scalable(X, n_clusters, random_state, max_iter, oversampling_factor)
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/dask_ml/utils.py", line 550, in wraps
results = f(*args, **kwargs)
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/dask_ml/cluster/k_means.py", line 423, in init_scalable
(cost,) = compute(evaluate_cost(X, centers))
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/dask_ml/cluster/k_means.py", line 486, in evaluate_cost
return (pairwise_distances(X, centers).min(1) ** 2).sum()
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/dask_ml/metrics/pairwise.py", line 59, in pairwise_distances
return X.map_blocks(
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/dask/array/core.py", line 2676, in map_blocks
return map_blocks(func, self, *args, **kwargs)
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/dask/array/core.py", line 873, in map_blocks
out = blockwise(
File "/somewhere/mambaforge/envs/latentplay/lib/python3.10/site-packages/dask/array/blockwise.py", line 269, in blockwise
raise ValueError(
ValueError: Dimension 1 has 2 blocks, adjust_chunks specified with 1 blocks
```
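A workaround sometimes suggested for this kind of dask-ml chunking error, as a hedged sketch (the `latents.npy` file is a stand-in for the script's own loading step; the key point is keeping the feature axis in a single chunk):

```python
import dask.array as da
import numpy as np

from bob.learn.em import GMMMachine

latents = np.load("latents.npy")  # stand-in for the script's loading step
# Chunk only along the sample axis; dask-ml's k-means initialization
# requires the feature axis (dimension 1) to be in a single block.
data = da.from_array(latents, chunks=(5000, -1))

machine = GMMMachine(n_gaussians=10)
machine = machine.fit(data)
```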
Assignee: Flavio TARSETTI.

https://gitlab.idiap.ch/bob/bob.bio.face/-/issues/90
Where went `CSVDatasetZTNorm` ? (2023-02-20, Christophe ECABERT)

With the recent `csv-dataset` refactoring, some derived classes have been left on the side. With the changes partially pushed to `bob`, a simple import like `from bob.bio.face.database import MobioDatabase` will lead to:
```
ImportError: cannot import name 'CSVDatasetZTNorm' from 'bob.bio.base.database'
```
Which is quite inconvenient.

https://gitlab.idiap.ch/bob/bob.learn.em/-/issues/48
Token for pypi deployment (2022-10-19, André MAYORAZ)

There is currently a problem with the local-pypi job that is supposed to deploy the package:
```
$ test -z "${CI_COMMIT_TAG}" && citool deregister --git-url "${CI_SERVER_URL}" --token "${PYPI_PACKAGE_REGISTRY_TOKEN}" --project "${CI_PROJECT_ID}" --name "${PACKAGE_NAME}" --version "${PACKAGE_VERSION}"
Traceback (most recent call last):
File "/scratch/builds/bob/bob.learn.em/venv/lib/python3.10/site-packages/gitlab/exceptions.py", line 333, in wrapped_f
return f(*args, **kwargs)
File "/scratch/builds/bob/bob.learn.em/venv/lib/python3.10/site-packages/gitlab/mixins.py", line 246, in list
obj = self.gitlab.http_list(path, **data)
File "/scratch/builds/bob/bob.learn.em/venv/lib/python3.10/site-packages/gitlab/client.py", line 942, in http_list
gl_list = GitlabList(self, url, query_data, get_next=False, **kwargs)
File "/scratch/builds/bob/bob.learn.em/venv/lib/python3.10/site-packages/gitlab/client.py", line 1144, in __init__
self._query(url, query_data, **self._kwargs)
File "/scratch/builds/bob/bob.learn.em/venv/lib/python3.10/site-packages/gitlab/client.py", line 1154, in _query
result = self._gl.http_request("get", url, query_data=query_data, **kwargs)
File "/scratch/builds/bob/bob.learn.em/venv/lib/python3.10/site-packages/gitlab/client.py", line 798, in http_request
raise gitlab.exceptions.GitlabHttpError(
gitlab.exceptions.GitlabHttpError: 403: 403 Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/scratch/builds/bob/bob.learn.em/venv/bin/citool", line 8, in <module>
sys.exit(main())
File "/scratch/builds/bob/bob.learn.em/venv/lib/python3.10/site-packages/citools/script.py", line 54, in main
args.func(args)
File "/scratch/builds/bob/bob.learn.em/venv/lib/python3.10/site-packages/citools/deregister.py", line 43, in _wrapper
for p in proj.packages.list():
File "/scratch/builds/bob/bob.learn.em/venv/lib/python3.10/site-packages/gitlab/exceptions.py", line 335, in wrapped_f
raise error(e.error_message, e.response_code, e.response_body) from e
gitlab.exceptions.GitlabListError: 403: 403 Forbidden
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
```
The problem is in the call to `for p in proj.packages.list():` in the `citools/deregister.py` file.
From what I investigated, the `gitlab.exceptions.GitlabListError: 403: 403 Forbidden` error could indicate that no token is provided to GitLab, and that the runner therefore cannot access the packages.
For info, `proj.packages.list()` ends up calling:
```
curl --header "PRIVATE-TOKEN: <token>" "https://gitlab.idiap.ch/api/v4/projects/1510/packages"
```
If an invalid token is given, it returns `{"message":"401 Unauthorized"}`, but if no token is given, it returns `{"message":"403 Forbidden"}`.
This would mean that `${PYPI_PACKAGE_REGISTRY_TOKEN}` is never set.
Where is it supposed to be set?
And why not use `${CI_JOB_TOKEN}`, since `deregister.py` calls GitLab API methods?
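For reference, a hedged sketch of how `python-gitlab` takes its credentials (whatever ends up in the CI variable must be passed as `private_token`; `${CI_JOB_TOKEN}` would go in as `job_token` instead):

```python
import gitlab

# With a personal/project access token (what citool passes via --token):
gl = gitlab.Gitlab("https://gitlab.idiap.ch", private_token="<token>")
# With the CI job token instead, python-gitlab expects:
#   gitlab.Gitlab("https://gitlab.idiap.ch", job_token="<CI_JOB_TOKEN>")

project = gl.projects.get(1510)
for p in project.packages.list():  # the call that raised "403 Forbidden"
    print(p.name, p.version)
```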
https://gitlab.idiap.ch/bob/bob.devtools/-/issues/111
Adding xlrd library (2022-09-29, Flavio TARSETTI)

Add `xlrd`.

Package request from @smichel.

Assignee: Flavio TARSETTI.

https://gitlab.idiap.ch/bob/bob.pad.face/-/issues/49
Test for the deep_pix_bis pipeline fails on the linux CI pipeline (2022-10-06, Yannick DAYER)

Jobs [#283925](https://gitlab.idiap.ch/bob/bob.pad.face/-/jobs/283925) and [#283926](https://gitlab.idiap.ch/bob/bob.pad.face/-/jobs/283926) failed.
The `deep_pix_bis` pipeline fails by returning a prediction score of `0.60` or more (the value changes between runs) when it is expected to be below `0.04`.
The mac_intel and mac_arm CI pipelines pass.
The test also passes in a fresh local environment.

Assignee: Yannick DAYER.

https://gitlab.idiap.ch/bob/bob.devtools/-/issues/110
Overhauling our CI and pipelines (2022-11-17, André Anjos)

I'm proposing we re-think the whole CI procedure entirely. Our CI system was made for the time when we had some C++ code to be compiled and made available to other packages (@lcolbois: this was probably another reason for the splits).
As of now, and thanks to a lot of work by predecessors, most of the packages in this group are Python-only; this is certainly true for the (biometrics) nightlies. This means we can probably benefit from this fact and improve our pipelines, at least for most packages in the bob/beat namespace.
I have a proposal in mind, but I think we need to debate this a bit more thoroughly, as I may not be thinking of all aspects and may be oversimplifying. The bulk of the proposal consists of revising our pipeline requirements and moving most of the packages to a Python-only workflow (for the most part). The proposed pipeline would be like this:
1. QA via `pre-commit`, if `.pre-commit-config.yaml` is present - only linux, should fail fast. Caching is enabled.
2. Test (via pytest) on various platforms and python versions - uses a Python docker image (not conda), respects python package pinning, retrieve beta versions from internal Gitlab package registry (if they exist), otherwise PyPI. Pip caching is enabled.
3. Generate sphinx documentation building and doctests (if `doc/` directory is present) - only linux, via Python docker image (not conda)
4. Package python code (via `python setup.py sdist`)
5. Package conda package using the sdist produced at the previous step, run tests (single platform - linux, just to cross-check)
6. Deploy sdist (unreleased beta) package on internal "group" package registry (GitLab: https://docs.gitlab.com/ee/user/packages/pypi_repository/) if not tagged or private/internal. Deploy sdist package on PyPI if tagged and public.
7. Deploy conda package on internal beta channel if not tagged, deploy on stable channel if tagged.
8. Deploy documentation and coverage information on DAV web server if success (master and stable, if required)
For non-python packages, maintain minimal CI configuration, but allow maximum flexibility via their own personalised `conda_build_config.yaml`. Everything else should match the above pipeline.
I also have some ideas on how to improve the packaging:
* Moving to `pyproject.toml` (https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html), and
* Reflect that automatically on conda packages to deduplicate package lists (https://docs.conda.io/projects/conda-build/en/stable/resources/define-metadata.html#loading-data-from-other-files)
I expect this plan should make our pipelines run much faster, while simplifying the setup and testing. Conda environment creation continues to be supported and possible. For other simpler setups, simple Python software management (e.g. via pip or poetry) would be possible.
(This would affect issue #102 for example, #104, and maybe #103.)

Assignee: André Anjos.

https://gitlab.idiap.ch/bob/bob.devtools/-/issues/109
`bdt dev create` fails to create a Python 3.8 or 3.9 environment and installs Python 3.10 instead (2022-09-15, André Anjos)

I'm not sure where the problem is, but the following doesn't seem to work anymore:
```sh
$ cd bob.extension
$ #git checkout master; git pull # just make sure you're up-to-date
$ bdt create -vv ext --python=3.8
...
$ conda activate ext
$ python -V
Python 3.10.6
```
I have the latest beta version of bdt installed (`5.3.1b0-py39_6`, arm/mac), it seems.
Package planning shows Python 3.8 going by; however, it is later dropped in favour of Python 3.10.
@flavio.tarsetti, @ydayer: Does anybody understand what is going on?