.. py:currentmodule:: bob.ip.qualitymeasure

.. testsetup:: *

   from __future__ import print_function
   import math
   import os, sys
   import argparse

   import bob.io.image #remove this if possible
   import bob.io.base
   import bob.io.video
   import bob.ip.color
   import numpy as np
   from bob.ip.qualitymeasure import galbally_iqm_features as iqm
   from bob.ip.qualitymeasure import msu_iqa_features as iqa

   import bob.io.base.test_utils #remove this if possible

   import pkg_resources
   video_file = bob.io.base.test_utils.datafile('real_client001_android_SD_scene01.mp4', 'bob.ip.qualitymeasure', 'data')
   video4d = bob.io.video.reader(video_file)

=============
 User Guide
=============

You can use this Bob package to extract image-quality features for face-PAD applications.
Two sets of quality features are implemented in this package:

1. The image-quality measures proposed by Galbally et al. (TIFS 2014), and

2. The image-quality features proposed by Wen et al. (TIFS 2015).

The package includes separate modules for implementing the two feature-sets.
The module ``galbally_iqm_features`` implements the features proposed by Galbally et al., and the module ``msu_iqa_features`` implements the features proposed by Wen et al.
In each module, a single function call retrieves all the features implemented in that module.
The examples below show how to use the functions in the two modules.

Note that both feature sets are extracted from still images. However, face-PAD experiments typically process videos.
Therefore, the examples below use a video as input, but show how to extract image-quality features from a single frame.
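Since each feature vector is computed from a single frame, a full video can be handled by looping over its frames and stacking the per-frame vectors. A minimal numpy sketch, with a placeholder extractor and a placeholder array standing in for the decoded video (in practice the frames would come from ``bob.io.video.reader`` and the extractor would be one of the two functions described below):

```python
import numpy as np

def extract(frame):
    # Placeholder for iqm.compute_quality_features(frame) or
    # iqa.compute_msu_iqa_features(frame); returns a dummy 18-D vector.
    return np.zeros(18)

# Stand-in for a decoded video: 5 frames in Bob's (num_frames, 3, H, W) layout.
video = np.zeros((5, 3, 480, 720), dtype=np.uint8)

# One feature vector per frame, stacked into a (num_frames, num_features) matrix.
feature_matrix = np.vstack([extract(frame) for frame in video])
print(feature_matrix.shape)  # (5, 18)
```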

Computing Galbally's image-quality measures
-------------------------------------------
The function ``compute_quality_features()`` (in the module ``galbally_iqm_features``) computes 18 of the 25 image-quality
measures proposed by Galbally et al. in their paper, namely:
[mse, psnr, ad, sc, nk, md, lmse, nae, snrv, ramdv, mas, mams, sme, gme, gpe, ssim, vif, hlfi].
The function ``galbally_iqm_features::compute_quality_features()`` returns a tuple of 18 scalars, in the order listed above.

.. doctest::


   >>> from bob.ip.qualitymeasure import galbally_iqm_features as iqm
   >>> video4d = bob.io.video.reader(video_file) # doctest: +SKIP
   >>> rgb_frame = video4d[0]
   >>> print(rgb_frame.shape)
   (3, 480, 720)
   >>> gf_set = iqm.compute_quality_features(rgb_frame)
   >>> print(len(gf_set))
   18
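Because the returned tuple is ordered, the feature names listed above can be zipped with the values for readability. A minimal sketch, using a placeholder zero vector instead of a real output of ``compute_quality_features()``:

```python
import numpy as np

# The 18 Galbally feature names, in the order returned by
# compute_quality_features() (as listed in this guide).
GALBALLY_FEATURE_NAMES = [
    'mse', 'psnr', 'ad', 'sc', 'nk', 'md', 'lmse', 'nae', 'snrv',
    'ramdv', 'mas', 'mams', 'sme', 'gme', 'gpe', 'ssim', 'vif', 'hlfi',
]

# In real use, gf_set would be the tuple returned by the function;
# here a dummy zero vector stands in so the sketch is self-contained.
gf_set = np.zeros(18)
named_features = dict(zip(GALBALLY_FEATURE_NAMES, gf_set))
print(len(named_features))        # 18
print(named_features['psnr'])     # 0.0 (placeholder value)
```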

In the example code above, we have used a color (RGB) image as input to the function ``compute_quality_features()``.
In fact, the features proposed by Galbally et al. are computed over gray-level images.
Therefore, the function ``galbally_iqm_features::compute_quality_features()`` accepts either an RGB color image
or a gray-level image as input.
(The input should be a numpy array; RGB color images should be in the format expected by Bob_.)
When the input is 3-dimensional with the first dimension equal to 3 (as in the example above), it is treated as an
RGB color image and is first converted to a gray-level image.
When the input is 2-dimensional (say, a numpy array of shape [480, 720]), it is treated as a gray-level image, and
the RGB-to-gray conversion step is skipped.
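The RGB-to-gray conversion itself is delegated to Bob; conceptually it is a weighted luma combination of the three channels. A minimal numpy sketch, assuming the ITU-R BT.601 weights (an illustration only, not the package's actual code path):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Convert a Bob-style (3, H, W) uint8 RGB array to a (H, W) gray image.

    Uses ITU-R BT.601 luma weights as an approximation; the package
    itself relies on bob.ip.color for this step.
    """
    r, g, b = rgb.astype(np.float64)          # unpack the channel axis
    gray = 0.299 * r + 0.587 * g + 0.114 * b  # weighted luma combination
    return np.round(gray).astype(np.uint8)

rgb_frame = np.zeros((3, 480, 720), dtype=np.uint8)
gray = rgb_to_gray(rgb_frame)
print(gray.shape)  # (480, 720)
```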

Computing Wen's image-quality measures
--------------------------------------
The code below shows how to compute the image-quality features proposed by Wen et al. (Here, we refer to these features as
'MSU features'.)
These features are computed from an RGB color image. Together, the two feature types (image blur and color diversity) form
a 118-D feature vector.
The function ``compute_msu_iqa_features()`` (from the module ``msu_iqa_features``) returns a 1D numpy array of length 118.

.. doctest::

   >>> from bob.ip.qualitymeasure import msu_iqa_features as iqa
   >>> video4d = bob.io.video.reader(video_file) # doctest: +SKIP
   >>> rgb_frame = video4d[0]
   >>> msuf_set = iqa.compute_msu_iqa_features(rgb_frame)
   >>> print(len(msuf_set))
   118
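If both feature sets are wanted for the same frame, one straightforward option is to concatenate them into a single 136-D vector. A minimal sketch with placeholder zero vectors standing in for the real outputs of the two functions:

```python
import numpy as np

# Stand-ins for iqm.compute_quality_features(rgb_frame) (18-D tuple)
# and iqa.compute_msu_iqa_features(rgb_frame) (118-D numpy array).
gf_set = np.zeros(18)
msuf_set = np.zeros(118)

# Concatenate the two per-frame feature sets into one vector.
full_feature_vector = np.hstack((gf_set, msuf_set))
print(full_feature_vector.shape)  # (136,)
```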


.. _Bob: https://www.idiap.ch/software/bob/ 