Evidence collection
v6.0.1-evidences-282.json 386518f9
Collected 9 months ago
Release notes
- !112: fix: Database now contains an expected all_samples. Corresponds to the `bob.bio.base` equivalent, allowing the use of padDatabase with bio commands (like `bob bio annotate`); see the sketch after this list.
- Be more lenient with the dependencies version pinning.
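A minimal sketch of what a `bob.bio.base`-style `all_samples` accessor amounts to, assuming a `samples(groups=...)` method; the class and method bodies below are illustrative, not the actual bob.pad.base implementation:

```python
# Illustrative sketch only (not the actual bob.pad.base API): a PAD
# database exposing all_samples(), the bob.bio.base-style accessor that
# lets generic bio commands (e.g. `bob bio annotate`) iterate over every
# sample of a protocol regardless of group.
class SketchPadDatabase:
    def __init__(self, protocol):
        self.protocol = protocol
        # Dummy data standing in for real per-group sample lists.
        self._samples = {
            "train": ["t1", "t2"],
            "dev": ["d1"],
            "eval": ["e1", "e2", "e3"],
        }

    def samples(self, groups=("train", "dev", "eval")):
        # Samples of the requested groups, concatenated.
        return [s for g in groups for s in self._samples[g]]

    def all_samples(self, groups=None):
        # bob.bio.base equivalent: every sample of the protocol in one flat
        # list, which is what annotation commands expect to consume.
        return self.samples(groups=groups or ("train", "dev", "eval"))


if __name__ == "__main__":
    db = SketchPadDatabase("my_protocol")
    assert len(db.all_samples()) == 6
```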
Evidence collection
v6.0.0-evidences-209.json 0099ddf0
Collected 1 year ago
Release notes
- !105: Resolve "Switch to new CI/CD configuration"
- !106: Autoupdate pre-commit and apply on all files
- !107: Fix the database name parameter not followed to bob.pipelines
- !108: [pyproject.toml] Changing documentation link to master/sphinx
- !109: Replace clapp by clapper.
- !110: meta [readme]: Switch the README.rst to markdown. Renames README.rst to README.md to be supported by the release script.
- !111: meta(deps): add bob as dependency in new structure. Adapt to the new structure of bob with `bob/bob` on top.
Evidence collection
v5.0.4-evidences-156.json 1249753d
Collected 2 years ago
Release notes
- Support Python 3.10
Evidence collection
v5.0.3-evidences-137.json bba21329
Collected 2 years ago
Release notes
- !104 Use pytest instead of nose.
Evidence collection
v5.0.2-evidences-127.json 18830d45
Collected 2 years ago
Release notes
- !103 finalize-scores tweak: Use a different key to aggregate samples. Since 'key' has been changed in the VideoToFrames transformer, "video_key" is used instead.
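A hedged sketch of what aggregating per-frame scores by video rather than by frame key could look like; the "video_key" field name comes from the note above, while the input structure and the mean aggregation are assumptions for illustration:

```python
from collections import defaultdict
from statistics import mean

# Illustrative only: after VideoToFrames, each frame sample carries its own
# "key", so scores have to be grouped by the originating video instead.
# The dict-based score rows and the mean aggregation are assumptions.
def finalize_scores(frame_scores):
    per_video = defaultdict(list)
    for row in frame_scores:
        per_video[row["video_key"]].append(row["score"])
    # One aggregated score per video.
    return {video: mean(scores) for video, scores in per_video.items()}


if __name__ == "__main__":
    rows = [
        {"video_key": "vid_001", "score": 0.2},
        {"video_key": "vid_001", "score": 0.4},
        {"video_key": "vid_002", "score": 0.9},
    ]
    print(finalize_scores(rows))  # {'vid_001': 0.3..., 'vid_002': 0.9}
```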
Evidence collection
v5.0.1-evidences-113.json dd3b24bb
Collected 2 years ago
Release notes
- update dependencies
Evidence collection
v5.0.0-evidences-78.json c0dcb92c
Collected 2 years ago
Release notes
- !88 Drop the old, unused database interface: Also related to bob#271
- !89 Refactor vanilla-pad script into multiple functions: Fixes #40
- !90 remove dependency on gridtk, fixes #41
- !91 [sphinx] Fixes: Close #42
- !92 deprecate bob.db.base and bob.io.image
- !93 Add pre-commit
- !95 Update the CSVDataset files format documentation: Documentation of CSV files for the database protocols is now correct. Fixes #43.
- !96 remove support for legacy databases
- !97 Updating vanilla_pad_features.rst with proper class name: The old documentation said: `from bob.pad.base.database import CSVPADDataset` followed by `database = CSVPADDataset("path/to/my_dataset", "my_protocol")`. However, CSVPADDataset does not exist in bob.pad.base.database and it should be `FileListPadDatabase` instead (a corrected snippet is sketched after this list). See https://gitlab.idiap.ch/bob/bob.pad.base/-/blob/master/bob/pad/base/database/csv_dataset.py#L35
- !98 Rename bob pad vanilla-pad to bob pad run-pipeline: remove the vanilla_pad references; adapt to changes in bob.pipelines
- !99 Update the documentation: fixes #37
- !100 Drop support for writing 4 column score files
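For reference, the corrected snippet implied by !97 would look roughly like the following; the positional arguments mirror the old documentation example and are not guaranteed to match the exact `FileListPadDatabase` signature:

```python
# Corrected import per !97: the class documented as CSVPADDataset does not
# exist; FileListPadDatabase is the one shipped in bob.pad.base.database.
# The constructor arguments below simply mirror the old documentation
# example and are an assumption, not a verified signature.
from bob.pad.base.database import FileListPadDatabase

database = FileListPadDatabase("path/to/my_dataset", "my_protocol")
```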
Release notes
- !86 Fix the pipeline writing invalid scores as `None` instead of `nan`
- !87 Switch to CSV scores for pipelines and analysis commands:
  * Saving CSV scores with `bob pad vanilla-pad --write-metadata-scores`
  * Had to write a "hack" to retrieve the headers when using distributed score files
  * Reading CSV scores in `bob pad` commands (`metrics`, `hist`, etc.)
  * Adapted the tests to use the CSV score files
  * Converted the test data to CSV
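A minimal sketch of consuming such CSV score files with metadata columns; the column names used here are assumptions for illustration, not the exact schema written by `bob pad vanilla-pad --write-metadata-scores`:

```python
import csv
import io

# Illustrative only: parse CSV scores where each row carries a score plus
# arbitrary metadata columns (the point of --write-metadata-scores).
# The column names below are assumptions, not the exact produced schema.
EXAMPLE = """\
filename,attack_type,score
sample_001.avi,,1.2
sample_002.avi,print,-0.7
"""


def load_scores(fileobj):
    reader = csv.DictReader(fileobj)  # headers carry the metadata columns
    for row in reader:
        row["score"] = float(row["score"])
        yield row


for row in load_scores(io.StringIO(EXAMPLE)):
    print(row["filename"], row["attack_type"] or "bonafide", row["score"])
```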
Release notes
- !73 Merged bob.bio: Moved `bob.bio.base`, `bob.bio.face`, `bob.bio.gmm`, and `bob.bio.video` to this package. They were respectively renamed to: `bob.bio.base_legacy`, `bob.bio.face_legacy`, `bob.bio.gmm_legacy`, `bob.bio.video_legacy`. The docs still need to be ported. Fixes #34
- !74 Added spear as part of the temporary legacy: Addressing this issue https://gitlab.idiap.ch/bob/bob.bio.spear/-/issues
- !76 Cleaning up dependencies
- !75 WIP: Port to dask pipelines
- !77 Remove deprecated code
- !80 [vanilla-pad] Improve the dask client option and delayed annotations
- !78 Dask pipelines Improvements: This MR improves the dask pipelines port.
- !81 [dask] Make vanilla-pad work properly with dask: In this MR: 1. Wrapped the pipeline with DaskWrapper, because it wasn't wrapped; I think that's why things were not working for you with SGE @ydayer: `vanilla-pad` was creating SGE jobs AND running everything locally. 2. Added the options "--dask-partition-size" and "--dask-n-workers", so the user has some freedom to set these parameters if the heuristics in place are unsatisfactory. I'll merge this one right away because of the workshop. Feel free to open issues if any come up.
- !79 new documentation incl. vanilla-pad: The goal is to update the documentation of bob.pad.base with the latest changes. ping @amohammadi @tiago.pereira
- !82 New database interface for PAD: Hi @amohammadi, @ydayer, here follows the proposal for a new DB interface for PAD. It follows the same guidelines used in `bob.bio.base`. The features implemented: 1. Uses CSV files instead of LSTs; with that, you can ship metadata. However, it uses the same file structure as before, so no stress in porting stuff. 2. The CSVPADDataset can transparently read the current LST files we have (I've created a sample loader that handles that). 3. The CSVPADDataset is able to read either files inside of a file structure or files inside of a tarball. Below is an example of how to use it, reading from a file structure and from a tarball:

```python
import os

import bob.io.base.test_utils
from bob.pad.base.database import CSVPADDataset


def run(path):
    dataset = CSVPADDataset(path, "protocol1")
    # Train
    assert len(dataset.fit_samples()) == 5
    # 2 out of 5 are bonafides
    assert sum([s.is_bonafide for s in dataset.fit_samples()]) == 2
    # DEV
    assert len(dataset.predict_samples()) == 5
    # 2 out of 5 are bonafides
    assert sum([s.is_bonafide for s in dataset.predict_samples()]) == 2
    # EVAL
    assert len(dataset.predict_samples(group="eval")) == 7
    # 3 out of 5 are bonafides
    assert sum([s.is_bonafide for s in dataset.predict_samples(group="eval")]) == 3


csv_example_dir = os.path.realpath(
    bob.io.base.test_utils.datafile(".", __name__, "data/csv_dataset")
)
csv_example_tarball = os.path.realpath(
    bob.io.base.test_utils.datafile(".", __name__, "data/csv_dataset.tar.gz")
)
run(csv_example_dir)
run(csv_example_tarball)
```
- !84 Remove vulnerability analysis commands: Moved to bob.bio.base Fixes #27
- !85 Allow to specify the pipeline decision function in vanilla_pad script: This change makes it easier to use the `vanilla_pad` script with classifiers that have different names for their decision functions (e.g. "predict_proba", "predict", etc.). This also allows passing "transform" as a decision_function in case the pipeline does not contain a classifier.
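A hedged sketch of the kind of mechanism described above, selecting a pipeline's scoring method by name and falling back to "transform" when there is no classifier; the helper name and error handling are illustrative, not the actual vanilla_pad implementation:

```python
# Illustrative sketch, not the actual vanilla_pad code: resolve the scoring
# method of a fitted pipeline by name ("decision_function", "predict_proba",
# "predict", or "transform" when the last step is not a classifier).
def get_decision_function(pipeline, name="decision_function"):
    try:
        return getattr(pipeline, name)
    except AttributeError as e:
        raise ValueError(
            f"The pipeline does not provide a {name!r} method; "
            "pass e.g. 'predict_proba', 'predict' or 'transform' instead."
        ) from e


if __name__ == "__main__":
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = [[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1]
    pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
    scores = get_decision_function(pipe, "predict_proba")(X)
    print(scores.shape)  # (4, 2)
```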
Release notes
- !71 Configure tests: Similar to bob.bio.base!183. Some classes are picked up by test runners while they are not UnitTest classes (see the sketch below). Fixes #33
- !72 Update OneClassGMM2.py: Updating joblib import.
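One common way to keep such helper classes out of pytest collection (a sketch of the general technique; whether !71 uses exactly this is an assumption):

```python
# Sketch of the usual pytest mechanism for excluding helper classes from
# test collection; not necessarily what !71 does.
class TestDataLoader:  # name matches pytest's "Test*" collection pattern
    __test__ = False  # tell the test runner this helper is not a test class

    def load(self, path):
        ...
```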