Commit d1939dbd authored by Amir Mohammadi

Fix the documentation to use bob.ip.facedetect instead of opencv

parent 8a398e59
Merge request !12: Fix the documentation to use bob.ip.facedetect instead of opencv
@@ -24,14 +24,14 @@ def F(f):
 LENA = F('lena.jpg')
 LENA_BBX = [
   [214, 202, 183, 183]
-  ] #from OpenCV's cascade detector
+  ]
 MULTI = F('multi.jpg')
 MULTI_BBX = [
   [326, 20, 31, 31],
   [163, 25, 34, 34],
   [253, 42, 28, 28],
-  ] #from OpenCV's cascade detector
+  ]
 def pnpoly(point, vertices):
...
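The test module above defines a ``pnpoly`` helper (its body is truncated in this diff). For reference, a minimal sketch of the classic ray-casting point-in-polygon test that a helper with this name conventionally implements; this is an assumption about the actual body, using Bob's ``(y, x)`` coordinate order:

```python
def pnpoly(point, vertices):
    """Ray-casting point-in-polygon test (sketch, not the repo's code).

    Cast a horizontal ray from `point` and count how many polygon
    edges it crosses; an odd count means the point is inside.
    Coordinates are (y, x) pairs, matching Bob's convention.
    """
    y, x = point
    inside = False
    n = len(vertices)
    for i in range(n):
        y1, x1 = vertices[i]
        y2, x2 = vertices[(i + 1) % n]
        # does edge (y1,x1)-(y2,x2) straddle the horizontal line at y?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (0, 10), (10, 10), (10, 0)]
print(pnpoly((5, 5), square))   # True
print(pnpoly((15, 5), square))  # False
```

The tests use this kind of check to verify that located keypoints fall inside the detected bounding box.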
+bob.ip.draw
+bob.io.image
+bob.ip.facedetect
+matplotlib
@@ -53,43 +53,34 @@ The input bounding box describes the rectangle coordinates using 4 values: ``(y,
 Square bounding boxes, i.e. when ``height == width``, will give best results.
 If you don't know the bounding box coordinates of faces on the provided image, you will need to either manually annotate them or use an automatic face detector.
-OpenCV_, if compiled with Python support, provides an easy to use frontal face detector.
-The code below shall detect most frontal faces in a provided (gray-scaled) image:
+:ref:`bob.ip.facedetect` provides an easy to use frontal face detector.
+The code below shall detect most frontal faces in a provided image:
 .. doctest::
    :options: +NORMALIZE_WHITESPACE, +ELLIPSIS
    >>> import bob.io.base
    >>> import bob.io.image
-   >>> import bob.ip.color
-   >>> lena_gray = bob.ip.color.rgb_to_gray(bob.io.base.load(get_file('lena.jpg')))
-   >>> try:
-   ...   # the following lines depend on opencv API, hence commented out.
-   ...   # from cv2 import CascadeClassifier
-   ...   # cc = CascadeClassifier(get_file('haarcascade_frontalface_alt.xml'))
-   ...   # face_bbxs = cc.detectMultiScale(lena_gray, 1.3, 4, 0, (20, 20))
-   ...   face_bbxs = [[214, 202, 183, 183]] #e.g., manually
-   ... except ImportError: #if you don't have OpenCV, do it otherwise
-   ...   face_bbxs = [[214, 202, 183, 183]] #e.g., manually
-   >>> print(face_bbxs)
-   [[...]]
+   >>> import bob.ip.facedetect
+   >>> lena = bob.io.base.load(get_file('lena.jpg'))
+   >>> bounding_box, quality = bob.ip.facedetect.detect_single_face(lena)
+   >>> y, x = bounding_box.topleft
+   >>> height, width = bounding_box.size
+   >>> print(y, x, height, width)
+   (...)
 .. note::
    To enable the :py:func:`bob.io.base.load` function to load images, :ref:`bob.io.image <bob.io.image>` must be imported, see :ref:`bob.io.image`.
-The function ``detectMultiScale`` returns OpenCV_ rectangles as 2D :py:class:`numpy.ndarray`\s.
-Each row corresponds to a detected face at the input image.
-Notice the format of each bounding box differs from that of Bob_.
-Their format is ``(x, y, width, height)``.
 Once in possession of bounding boxes for the provided (gray-scaled) image, you can find the keypoints in the following way:
 .. doctest::
    :options: +NORMALIZE_WHITESPACE, +ELLIPSIS
-   >>> x, y, width, height = face_bbxs[0]
+   >>> import bob.ip.color
    >>> from bob.ip.flandmark import Flandmark
    >>> localizer = Flandmark()
+   >>> lena_gray = bob.ip.color.rgb_to_gray(lena)
    >>> keypoints = localizer.locate(lena_gray, y, x, height, width)
    >>> keypoints
    array([[...]])
...
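The documentation change above replaces OpenCV, whose ``detectMultiScale`` returns boxes as ``(x, y, width, height)``, with ``bob.ip.facedetect``, which already uses Bob's ``(y, x)`` convention. For anyone still feeding OpenCV detections into ``Flandmark.locate``, a minimal sketch of the reordering (plain Python; the helper name is hypothetical, not part of either library):

```python
def opencv_to_bob_bbox(bbox):
    """Reorder an OpenCV-style (x, y, width, height) bounding box into
    the (y, x, height, width) argument order that Flandmark.locate
    expects. Hypothetical helper for illustration only."""
    x, y, width, height = bbox
    return y, x, height, width

# the same bounding box used in the tests above
y, x, height, width = opencv_to_bob_bbox([214, 202, 183, 183])
print(y, x, height, width)  # 202 214 183 183
```

With the values reordered this way, the call ``localizer.locate(lena_gray, y, x, height, width)`` from the doctest receives its arguments in the documented order.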
@@ -45,7 +45,6 @@
 .. _matplotlib: http://matplotlib.sourceforge.net
 .. _numpy: http://numpy.scipy.org
 .. _nose: http://nose.readthedocs.org
-.. _opencv: http://opencv.org/
 .. _pil: http://www.pythonware.com/products/pil/
 .. _pillow: https://pypi.python.org/pypi/Pillow/
 .. _python: http://www.python.org
...
@@ -12,7 +12,7 @@ def get_data(f):
 lena = get_data('lena.jpg')
 lena_gray = rgb_to_gray(lena)
-x, y, width, height = [214, 202, 183, 183] #or from OpenCV
+x, y, width, height = [214, 202, 183, 183] #or from bob.ip.facedetect
 localizer = Flandmark()
 keypoints = localizer.locate(lena_gray, y, x, height, width)
...
@@ -6,11 +6,3 @@ bob.io.base
 bob.math
 bob.sp
 bob.ip.base
-# For tests
-bob.io.image
-bob.ip.color
-# For documentation generation
-bob.ip.draw
-matplotlib