
Allowing facecrop helper to set the datatype

Merged Tiago de Freitas Pereira requested to merge fix-gabor-graph into dask-pipelines Aug 21, 2020

Hi @ydayer @lcolbois,

I managed to find out why the Gabor Graph scores from the new bob (with the pipelines) and the current one (verify.py) were slightly different.

The face crop from verify.py crops faces and casts them to float64, while the new bob does the same but casts to uint8. So far so good. However, the next step in this pipeline applies LBP as preprocessing. I won't go into the details of this algorithm (see https://en.wikipedia.org/wiki/Local_binary_patterns for more info), but this different type casting changes the LBP bit string in some corner cases, subtly altering the preprocessed image. Propagated through the whole pipeline, this changes the scores.
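To illustrate the corner case (a minimal sketch, not bob's actual LBP implementation): interpolation during face cropping produces fractional intensities, and truncating them to uint8 can flip a neighbor-vs-center comparison that LBP encodes as a bit.

```python
import numpy as np

def lbp_bit(center, neighbor):
    # One LBP comparison: the bit is 1 when the neighbor is >= the center.
    return 1 if neighbor >= center else 0

# Hypothetical cropped pixels: interpolation yields fractional values.
crop = np.array([[100.6, 100.4]], dtype=np.float64)

bit_float = lbp_bit(crop[0, 0], crop[0, 1])       # 100.4 >= 100.6 -> 0
crop_u8 = crop.astype(np.uint8)                   # both truncate to 100
bit_uint8 = lbp_bit(crop_u8[0, 0], crop_u8[0, 1]) # 100 >= 100   -> 1

print(bit_float, bit_uint8)  # 0 1
```

So the same crop yields different LBP codes depending on whether the comparison happens in float64 or after the uint8 cast, which is why the two pipelines had to use the same dtype to match.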

This MR fixes this.

Below is the comparison between the two versions.

$ bob bio metrics -v /idiap/temp/tpereira/temp/mobio-male/verify/mobio-male/gabor_graph/male/nonorm/scores-{dev,eval} /idiap/temp/tpereira/temp/mobio-male/pipelines/scores-{dev,eval} -e --legends verify,pipelines

[Min. criterion: EER ] Threshold on Development set `verify`: 5.568844e-01
=====================  =================  ====================
..                     Development        Evaluation
=====================  =================  ====================
Failure to Acquire     0.0%               0.0%
False Match Rate       8.2% (4761/57960)  12.3% (18134/147630)
False Non Match Rate   8.2% (207/2520)    19.4% (776/3990)
False Accept Rate      8.2%               12.3%
False Reject Rate      8.2%               19.4%
Half Total Error Rate  8.2%               15.9%
=====================  =================  ====================
[Min. criterion: EER ] Threshold on Development set `pipelines`: 5.568844e-01
=====================  =================  ====================
..                     Development        Evaluation
=====================  =================  ====================
Failure to Acquire     0.0%               0.0%
False Match Rate       8.2% (4761/57960)  12.3% (18134/147630)
False Non Match Rate   8.2% (207/2520)    19.4% (776/3990)
False Accept Rate      8.2%               12.3%
False Reject Rate      8.2%               19.4%
Half Total Error Rate  8.2%               15.9%
=====================  =================  ====================