Raising errors instead of emitting warnings in case of unsuccessful processing
Closes #62 (closed)
assigned to @tiago.pereira
@tiago.pereira Can you please check this or re-assign to someone else?
@mguenther I think there is another issue with this process. The framework assumes that the preprocessor, extractor, or other tools will return `None` in case of failure. But most of the time an exception is raised on failure instead, and the program exits even though `--allow-missing-files` is provided. Now that you are modifying this, could you also wrap the calls in `try`/`except` blocks, please? (A rough sketch of what I mean follows this comment.)

Another thing is that you are expecting certain behavior here in `verify.py` with this code:

- When some files fail or do not exist and `--allow-missing-files` is provided, the program should continue normally.
- When something fails and `--allow-missing-files` is not provided, an exception should be raised.

Is it possible to have test cases for those?
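A minimal sketch of the kind of wrapping I have in mind, in plain Python; `process_one` and `allow_missing_files` are placeholder names for this illustration, not the actual bob.bio.base API:

```python
import logging

logger = logging.getLogger(__name__)


def process_one(tool, data, allow_missing_files):
    """Call a preprocessor/extractor and turn unexpected exceptions into a
    'missing' result when --allow-missing-files is given."""
    try:
        result = tool(data)
    except Exception as e:
        if allow_missing_files:
            # treat a failing file like a missing one and keep going
            logger.warning("Processing failed (%s); skipping this file", e)
            return None
        # without --allow-missing-files, re-raise so the run aborts
        raise
    return result
```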
@amohammadi I understand your concern, but I think we should differentiate between known (`None`) and unknown (`Exception`) problems of the tool. Non-existing files are already handled by the framework, e.g. https://gitlab.idiap.ch/bob/bob.bio.base/blob/master/bob/bio/base/tools/preprocessor.py#L127, so these cases do not need to be handled by the tools. Note that reading original data must always succeed, as otherwise the database is corrupt and should be fixed, since only a clean database assures comparability.

I think any error that arises needs to be caught inside the tool. Otherwise there is no way to distinguish a programming bug (an `Exception`) from a known error case, i.e. when data cannot be processed (e.g. in case of `NaN` values). However, we need to document this properly. I hope that I did that already, but I would need to check the documentation. In any case, the documentation is outdated: #58 (closed)

Test cases for `--allow-missing-files` should indeed be added. I will have a look at that when I find some time next week. However, providing test cases in which we make the tool return `None` might be difficult. I might need to write (or modify) certain test tools for this, precisely for the reason I mentioned above: I have to define a "known" issue (a rough illustration follows below).

assigned to @mguenther
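To illustrate the known-vs-unknown distinction mentioned above, here is a sketch only; the class and its signature are made up for the example and do not reflect the actual tool interface:

```python
import numpy


class ExamplePreprocessor:
    """Toy tool that signals *known* failures by returning None and lets
    genuine programming bugs propagate as exceptions."""

    def __call__(self, data):
        # known, documented failure case: the data cannot be processed
        if data is None or numpy.isnan(data).any():
            return None  # the framework can then skip this file
        # anything else that goes wrong is a bug and should raise
        return data * 2.0
```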
mentioned in commit 58f3ae44