bob / bob.pad.face · Issue #12
Closed
Issue created Oct 16, 2017 by Amir MOHAMMADI (@amohammadi), Owner

The methods in the FrameDifference class would be better as functions.

Most of the methods implemented in https://gitlab.idiap.ch/bob/bob.pad.face/blob/b0a14393f109e8bc15928ade60b0614e34e4b73f/bob/pad/face/preprocessor/FrameDifference.py would be better implemented as functions instead. Making them functions enables you to reuse them in other places. For example, you can see that there are several copies of check_face_size in several classes. Maintaining all these copies of the same logic is difficult and can easily lead to bugs.

As a rule of thumb, if a method does not use any class attributes, it is really a function. For example, the methods below are actually functions:

    def check_face_size(self, frame_container, annotations, min_face_size):
        """
        Return the FrameContainer containing the frames with faces of the
        size overcoming the specified threshold. The annotations for the selected
        frames are also returned.

        **Parameters:**

        ``frame_container`` : FrameContainer
            Video data stored in the FrameContainer, see ``bob.bio.video.utils.FrameContainer``
            for further details.

        ``annotations`` : :py:class:`dict`
            A dictionary containing the annotations for each frame in the video.
            Dictionary structure: ``annotations = {'1': frame1_dict, '2': frame1_dict, ...}``.
            Where ``frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)}``
            is the dictionary defining the coordinates of the face bounding box in frame N.

        ``min_face_size`` : :py:class:`int`
            The minimal size of the face in pixels.

        **Returns:**

        ``selected_frames`` : FrameContainer
            Selected frames stored in the FrameContainer.

        ``selected_annotations`` : :py:class:`dict`
            A dictionary containing the annotations for selected frames.
            Dictionary structure: ``annotations = {'1': frame1_dict, '2': frame1_dict, ...}``.
            Where ``frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)}``
            is the dictionary defining the coordinates of the face bounding box in frame N.
        """

        selected_frames = bob.bio.video.FrameContainer() # initialize the FrameContainer

        selected_annotations = {}

        selected_frame_idx = 0

        for idx in range(0, len(annotations)): # idx - frame index

            frame_annotations = annotations[str(idx)] # annotations for particular frame

            # size of current face
            face_size = np.min(np.array(frame_annotations['bottomright']) - np.array(frame_annotations['topleft']))

            if face_size >= min_face_size: # check if face size is above the threshold

                selected_frame = frame_container[idx][1] # get current frame

                selected_frames.add(selected_frame_idx, selected_frame) # add current frame to FrameContainer

                selected_annotations[str(selected_frame_idx)] = annotations[str(idx)]

                selected_frame_idx = selected_frame_idx + 1

        return selected_frames, selected_annotations
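For illustration, here is a minimal sketch of check_face_size as a free function. To keep the example self-contained, ``frames`` is a plain list of images rather than a ``bob.bio.video.FrameContainer``; the selection logic is the same as in the snippet above.

```python
import numpy as np

def check_face_size(frames, annotations, min_face_size):
    """Keep the frames whose face bounding box side is at least
    ``min_face_size`` pixels, together with their annotations.

    Sketch only: ``frames`` is a plain list here instead of a
    ``bob.bio.video.FrameContainer``.
    """
    selected_frames = []
    selected_annotations = {}
    for idx in range(len(annotations)):
        frame_annotations = annotations[str(idx)]
        # smallest side of the face bounding box
        face_size = np.min(np.array(frame_annotations['bottomright'])
                           - np.array(frame_annotations['topleft']))
        if face_size >= min_face_size:
            selected_annotations[str(len(selected_frames))] = frame_annotations
            selected_frames.append(frames[idx])
    return selected_frames, selected_annotations
```

Because the function no longer touches self, any class that needs this behaviour can simply call it.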

and

    def comp_face_bg_diff(self, frames, annotations, number_of_frames = None):
        """
        This function computes the frame differences for both facial and background
        regions. These parameters are computed for ``number_of_frames`` frames
        in the input FrameContainer.

        **Parameters:**

        ``frames`` : FrameContainer
            RGB video data stored in the FrameContainer, see ``bob.bio.video.utils.FrameContainer``
            for further details.

        ``annotations`` : :py:class:`dict`
            A dictionary containing the annotations for each frame in the video.
            Dictionary structure: ``annotations = {'1': frame1_dict, '2': frame1_dict, ...}``.
            Where ``frameN_dict = {'topleft': (row, col), 'bottomright': (row, col)}``
            is the dictionary defining the coordinates of the face bounding box in frame N.

        ``number_of_frames`` : :py:class:`int`
            The number of frames to use in processing. If ``None``, all frames of the
            input video are used. Default: ``None``.

        **Returns:**

        ``diff`` : 2D :py:class:`numpy.ndarray`
            An array of the size ``(number_of_frames - 1) x 2``.
            The first column contains frame differences of facial regions.
            The second column contains frame differences of non-facial/background regions.
        """

        # Compute the number of frames to process:
        if number_of_frames is not None:
            number_of_frames = np.min([len(frames), number_of_frames])
        else:
            number_of_frames = len(frames)

        previous = frames[0][1] # the first frame in the video

        if len(previous.shape) == 3: # if RGB convert to gray-scale
            previous = bob.ip.color.rgb_to_gray(previous)

        diff = []

        for k in range(1, number_of_frames):

            current = frames[k][1]

            if len(current.shape) == 3: # if RGB convert to gray-scale
                current = bob.ip.color.rgb_to_gray(current)

            face_diff = self.eval_face_differences(previous, current, annotations[str(k)])
            bg_diff = self.eval_background_differences(previous, current, annotations[str(k)], None)

            diff.append((face_diff, bg_diff))

            # swap buffers: current <=> previous
            tmp = previous
            previous = current
            current = tmp

        if not diff: # if list is empty

            diff = [(np.NaN, np.NaN)]

        diff = np.vstack(diff)

        return diff

Note that the second one does use self, but only to call two other methods (eval_face_differences and eval_background_differences) which should themselves have been functions.
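Since the only use of self is those two calls, a sketch of the same method as a free function could take the two evaluators as callables. Assumptions: grayscale frames as plain 2D arrays (instead of a FrameContainer), and ``eval_face_diff`` / ``eval_bg_diff`` standing in for the eval_face_differences / eval_background_differences methods.

```python
import numpy as np

def comp_face_bg_diff(frames, annotations, eval_face_diff, eval_bg_diff,
                      number_of_frames=None):
    # Sketch: ``frames`` is a list of 2D grayscale arrays here; the two
    # callables replace the ``self.eval_*`` method calls from the snippet.
    if number_of_frames is None:
        number_of_frames = len(frames)
    number_of_frames = min(len(frames), number_of_frames)
    previous = frames[0]
    diff = []
    for k in range(1, number_of_frames):
        current = frames[k]
        frame_annotations = annotations[str(k)]
        diff.append((eval_face_diff(previous, current, frame_annotations),
                     eval_bg_diff(previous, current, frame_annotations)))
        previous = current  # no swap needed; just advance the buffer
    if not diff:  # no frame pairs were processed
        diff = [(np.nan, np.nan)]
    return np.vstack(diff)
```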

Also, if a class does nothing but implement the __init__ and __call__ methods, and its initialization only stores some variables in self, that class is really a function too. You can see an example here: https://gitlab.idiap.ch/bob/bob.learn.tensorflow/blob/ee32fe3c8bf1cd0964af2d45a52aff32d3ea4202/bob/learn/tensorflow/datashuffler/Normalizer.py and how it is instead implemented as functions in https://gitlab.idiap.ch/bob/bob.learn.tensorflow/blob/882d1245783391016183aca91eb5422b8474d99a/bob/learn/tensorflow/datashuffler/Normalizer.py
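A minimal sketch of that pattern with hypothetical names, modelled on the linked Normalizer: the class only stores its configuration in __init__ and applies it in __call__, so it collapses into a plain function.

```python
# Hypothetical example: the class below only stores a parameter in
# __init__ and uses it in __call__ ...
class ScaleNormalizer:
    def __init__(self, scale):
        self.scale = scale

    def __call__(self, x):
        return x * self.scale

# ... so it is equivalent to a plain function taking the parameter directly
def scale_normalizer(x, scale):
    return x * scale
```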

To make a function take fewer arguments, you can use functools.partial: https://docs.python.org/3/library/functools.html?highlight=partial#functools.partial
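A minimal sketch with a hypothetical function: partial binds the configuration arguments once, so callers pass only the data, just as they would with a configured class instance.

```python
from functools import partial

# hypothetical free function with configuration arguments
def normalize(x, mean, std):
    return (x - mean) / std

# bind the configuration once; the result takes a single argument
unit_normalize = partial(normalize, mean=0.0, std=2.0)
```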
