beat issues · https://gitlab.idiap.ch/groups/beat/-/issues

https://gitlab.idiap.ch/beat/beat.web/-/issues/580 · Safari issue with https://beat-eu.org and https://www.beat-eu.org/
Flavio TARSETTI · 2020-12-16

Important note: this issue targets the Safari browser for Mac users. I am unable to reproduce it via Chrome on Linux.
Logging in via https://beat-eu.org does not log you in, while logging in via https://www.beat-eu.org/ works fine.
It could be related to 2FA, but maybe not.

https://gitlab.idiap.ch/beat/beat.web/-/issues/583 · Password reset triggers internal error
Samuel GAIST · 2021-01-07 · Code cleanup

Using the password reset form with a valid email address triggers an internal server error.

https://gitlab.idiap.ch/beat/beat.web/-/issues/584 · Disable signup page access for already logged-in users
Samuel GAIST · 2021-01-08

The signup page should not be accessible when users are already logged in.

https://gitlab.idiap.ch/beat/beat_exporter/-/issues/2 · Acumos databroker creation
Samuel GAIST · 2021-01-18 · Implement databroker creation

Databrokers are data sources for the Acumos platform.
It is not mandatory to create these blocks; however, they are likely going to be needed so that people can more easily test the implementation of the exported BEAT "models".
This task tracks the work on a script that automates building a Docker image usable as a databroker.
Information is currently sparse about how a databroker should be identified and how it is supposed to work.
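One known convention, per the email exchange mentioned below, is that the broker signals exhaustion by returning an empty answer after its last frame. A minimal Python sketch of that send loop (all names are hypothetical; this is not the Acumos API):

```python
def databroker_stream(frames):
    """Illustrative databroker send loop (names are hypothetical,
    not the Acumos API): stream each data frame, then an empty
    answer so the orchestrator knows the broker has provided
    everything it has."""
    for frame in frames:
        yield frame
    yield {}  # empty answer signals "no more data"
```

The design point is simply that the end-of-stream marker is in-band, so the orchestrator needs no separate control channel to know when to stop.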
Based on an email exchange with Martin Weiss, the databroker shall notify the system that it has provided everything it has by returning an empty answer.

https://gitlab.idiap.ch/beat/beat_exporter/-/issues/1 · Acumos "model" creation
Samuel GAIST · 2021-01-18 · Implement algorithm export to acumos

AI4EU computing assets are called "models".
This task tracks the work on a tool that builds Docker images out of BEAT algorithms.
Note that not all BEAT algorithms are eligible for export to an Acumos-compatible Docker image.
Based on the current state of the platform as of 18.01.2021, here is a list of the constraints that must be taken into account:
The current Acumos orchestrator follows a single-input, single-output sequential execution model: each data frame coming from the entry databroker must pass through all the blocks of a solution, up to its final block, before the next data frame is processed.
As a consequence, no training is currently possible within the Acumos platform.
In terms of BEAT algorithms, only those matching the following criteria can therefore be used:
- Only sequential algorithms can be used, as autonomous ones pull rather than receive pushed data, which Acumos does not currently support.
- Only algorithms that write as many outputs as they get inputs, as otherwise the orchestrator cannot carry a run through to completion.
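The two criteria above could be captured by a small eligibility check. The sketch below is illustrative only: the field names are assumptions, not the actual BEAT declaration schema.

```python
def is_acumos_exportable(declaration: dict) -> bool:
    """Illustrative eligibility check for exporting a BEAT algorithm
    to an Acumos-compatible Docker image (field names are assumptions,
    not the real BEAT declaration schema)."""
    # Criterion 1: only sequential algorithms; autonomous ones pull
    # data, which Acumos does not currently support.
    if declaration.get("type") != "sequential":
        return False
    # Criterion 2: one output written per input received, so the
    # single-frame orchestrator can complete a full run.
    return declaration.get("outputs_per_input") == 1
```

Such a check would let the export tool reject ineligible algorithms up front instead of failing inside Acumos.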
These criteria may change in the future based on the features implemented in Acumos.
It has also been decided that, in the first iteration, the export tool will only be used locally; no online platform data extraction will happen.
This means that a user will first have to run a full experiment using the algorithm(s) destined for export before the tool can be used.

https://gitlab.idiap.ch/beat/beat.web/-/issues/585 · Add expiration date and alternative procedure for email accounts holding temporary URLs
Flavio TARSETTI · 2021-01-18

In specific emails targeting user accounts, temporary URLs are used to simplify a user action.
However, the expiration date of the temporary URL is not mentioned, and no alternative procedure is provided to help the user carry out the required operation manually.

https://gitlab.idiap.ch/beat/beat.core/-/issues/64 · Database schema and runtime improvements
André Anjos · 2021-01-29 · assignee: Samuel GAIST

Databases were originally designed monolithically, assuming that the environment providing the packages necessary to read their contents is fixed (and does not change), and that the location of the files used for the readout is fixed. As an afterthought, templates were introduced without a proper formalisation; as a consequence, templates with the same names may appear in various database JSON declarations without necessarily being the same. This issue supersedes #25 and gathers all changes required for a revamp in this area:
- [x] Protocol and set templates should be separated from the database view declaration to avoid repetition and to centralise "task-related" declarations. This will effectively simplify the declaration of new datasets.
- [ ] The `root_db` parameter (maybe misspelled here) needs to be externalised as a runtime parameter during the run, as is currently the case for other runtime prefixes such as algorithm caches. Currently, if one downloads the database view, this parameter needs to be changed by hand.
- [x] The environment required to run the database view that provides the data for the experiment needs to be configurable and have an entry in the database JSON declaration. This ensures that changes in the environment (Docker image) always imply new caches being generated. As of today, it can happen that hashes are not regenerated even if the environment changes completely.
- [x] A default database-environment Docker image (possibly named `beat.env.databases`) should be provided on Docker Hub. This avoids future conflicts because, when using multiple databases in one experiment, only one image can be used.
- [ ] The prototypes in `beat.core` should come configured to use this image.
- [ ] This new JSON entry must be documented in `beat/docs`, and users should be recommended to use this image.

https://gitlab.idiap.ch/beat/beat.web/-/issues/586 · Database v2 handling
Samuel GAIST · 2021-02-05

Database objects have seen a new version since beat/beat.core#64.
However, the online platform does not yet handle the protocol template objects related to version 2 of the database object.
This ticket tracks the work related to that.
Tasks to be done:
* [x] Implement an application for handling ProtocolTemplate objects
* [x] Implement installation of protocol templates
* [x] Implement handling of database v2 in the databases application
* [x] Ensure execution of experiments can continue with both v1 and v2 databases
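For illustration, the difference between v1 and v2 is that a v2 database declaration refers to a shared protocol template instead of re-declaring protocols inline. The sketch below shows roughly what such a declaration might look like; every field name here is an assumption for illustration, not the actual beat.core schema.

```python
import json

# Hypothetical shapes only: field names are illustrative, not the
# actual beat.core v2 schema.
protocol_template = {
    "name": "digit_recognition",
    "sets": [
        {"name": "train", "outputs": {"image": "system/array_2d_uint8/1"}},
    ],
}

database_v2 = {
    "schema_version": 2,
    "root_folder": "/path/to/data",
    # v2 refers to a shared protocol template instead of re-declaring
    # the protocol inline, as v1 did.
    "protocols": [{"name": "main", "template": "digit_recognition/1"}],
}

declaration = json.dumps(database_v2, indent=2)
```

The point of the template reference is that several databases can share one "task-related" declaration, which is what makes both v1 and v2 handling necessary on the platform side.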
This work will require updating the initial data provided through beat/beat.examples>.

https://gitlab.idiap.ch/beat/beat.tutorial.prefix/-/issues/4 · Missing documentation
Samuel GAIST · 2021-02-25

Documentation is missing for the following algorithms:
- linear_machine_learning_and_comparison
- facenet_project_and_comparison
- mnist_tester

A short description in the corresponding .json declaration would also be good.
There is no need for a full paper about them, but documenting what they do and their purpose will make it possible to generate the appropriate information for the AI4EU Acumos platform, as well as make them visible on the BEAT platform.

https://gitlab.idiap.ch/beat/docs/-/issues/17 · `beat editor serve` does not exist anymore
André Anjos · 2021-04-08

There is a small typo on the front installation page referenced from other guides. It states that `beat editor serve` should be available. Given the change in the editor strategy, I think it should read `beat editor start`.

https://gitlab.idiap.ch/beat/beat.editor/-/issues/282 · Editor will not start if Docker is not running
André Anjos · 2021-04-13 · assignee: Samuel GAIST

This needs fixing, or at least a better error message. Currently, the editor only reports `Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))`.

https://gitlab.idiap.ch/beat/beat.core/-/issues/107 · Error on docker client creation
Samuel GAIST · 2021-04-13

At least on macOS, when the daemon is not running, the client creation may fail and lead to an exception.
This case is not yet handled, unlike the ones that follow when looking up images.
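A generic pre-flight check of the kind this issue asks for could look like the sketch below. This is an illustration of the pattern, not the actual beat.core code, and it shells out to the Docker CLI rather than using the Python client:

```python
import subprocess


def docker_daemon_available() -> bool:
    """Return True only when a Docker daemon actually answers.

    Illustrative pattern only (not beat.core's real code): probe the
    daemon up front so callers can show a clear message instead of an
    opaque 'Error while fetching server API version' traceback.
    """
    try:
        probe = subprocess.run(
            ["docker", "info"],
            capture_output=True,
            timeout=10,
        )
    except FileNotFoundError:
        # Docker CLI not installed at all
        return False
    except subprocess.TimeoutExpired:
        # CLI present but the daemon did not answer in time
        return False
    return probe.returncode == 0
```

A caller could then report "Docker does not seem to be running" before attempting client creation, covering both this issue and beat/beat.editor#282.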
Related to beat/beat.editor#282.

https://gitlab.idiap.ch/beat/docs/-/issues/18 · Job Failed #247363
Amir MOHAMMADI · 2021-11-03 · assignee: Samuel GAIST

Job [#247363](https://gitlab.idiap.ch/beat/docs/-/jobs/247363) failed for e93323cf9e36cd9c5ef91dee850e39af7ccc2ea1:
Needs `yum_requirements.txt`.

https://gitlab.idiap.ch/beat/beat.core/-/issues/108 · Nightlies are failing for several reasons
Samuel GAIST · 2022-02-24

The nightlies are currently failing for several unrelated reasons:
- Some keys appear to have gone missing.
- Some tests cannot be run because the database environment is missing.

The first one is likely related to the use of cgroup v2, based on the names of the missing fields and the information given in the [Container Stats documentation for the Docker v1.41 API](https://docs.docker.com/engine/api/v1.41/#operation/ContainerStats).
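Assuming the missing keys are the memory-accounting fields that changed between cgroup versions (cgroup v1 reports the page cache under `cache`/`total_inactive_file`, while cgroup v2 reports `inactive_file`), a defensive reader could look like this sketch. The field handling is illustrative, not the actual beat.core fix:

```python
def container_memory_used(stats: dict) -> int:
    """Best-effort 'used memory' from a Docker ContainerStats payload,
    tolerating both cgroup v1 and v2 field layouts (illustrative
    sketch, not the actual beat.core fix)."""
    mem = stats.get("memory_stats", {})
    usage = mem.get("usage", 0)
    detail = mem.get("stats", {})
    # cgroup v1 exposes the page cache as "cache" (or
    # "total_inactive_file"); cgroup v2 exposes "inactive_file".
    cache = detail.get(
        "cache",
        detail.get("total_inactive_file", detail.get("inactive_file", 0)),
    )
    return max(usage - cache, 0)
```

Using `.get()` with fallbacks instead of direct key access is what keeps the consumer working when the daemon switches cgroup versions.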
As for the database environments, the likely reason is that all the runners and test machines had previous versions of the database images, so the tests ran successfully even though the test image pull method used later versions of them.

https://gitlab.idiap.ch/beat/beat.backend.python/-/issues/35 · BEAT components are tied to a prefix
Amir MOHAMMADI · 2022-03-03

Throughout all code and components of BEAT, a prefix is required, and this requirement makes it impossible to define and run BEAT experiments interactively.
Here is a tentative plan for refactoring the code:
1. [ ] Update BEAT component classes so that they can be created on the fly without pointing to a prefix.
2. [ ] Implement a global config object to keep track of user configuration, such as where the prefix is or what the username is. This will let users provide less information when creating objects on the fly.
3. [ ] Allow dynamic creation of experiments/toolchains by running the Python code in a kind of graph mode, similar to how graphs are constructed in Python with TensorFlow or Dask.
4. [ ] We would also need a singleton class to hold the prefix objects in memory, to avoid passing caches around.

https://gitlab.idiap.ch/beat/beat.editor/-/issues/272 · Automatic inference of synchronization channels when drawing the toolchain
Amir MOHAMMADI · 2022-03-04

Summary
When drawing several blocks in the toolchain and connecting inputs and outputs, the (synchronization) channel for the blocks is not automatically determined, even though most of the time there is only one option.
See the video below, where I have to select the channel in all blocks before getting a valid toolchain:
![synchronization_channels_not_selected_automatically](/uploads/75d3cce295e08c2515f69ace8c2810b8/synchronization_channels_not_selected_automatically.webm)
It would be beneficial if the toolchain editor would infer and select the channel for me as I edit along.
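The inference could be as simple as: when every connection feeding a block comes from the same synchronization channel, select that channel automatically. A minimal sketch, not based on the editor's actual data model:

```python
def infer_channel(incoming_channels):
    """Pick the synchronization channel automatically when it is
    unambiguous; return None when the user still has to choose.
    (Illustrative sketch, not beat.editor's actual data model.)"""
    unique = set(incoming_channels)
    if len(unique) == 1:
        return unique.pop()
    return None  # zero or several candidate channels: ask the user
```

Run on every connection change, this would cover the common single-option case while leaving genuinely ambiguous blocks for the user to resolve.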
Why is it needed?
It would make editing toolchains simpler, as I would not have to go into each block and edit them one by one.

https://gitlab.idiap.ch/beat/beat.editor/-/issues/273 · Right-click actions on the blocks in the toolchain editor
Amir MOHAMMADI · 2022-03-04

Here is my suggestion:
* Upon right-clicking on a block, provide a list of actions:
  * A `Delete` action
  * An `Edit` or `Properties` action that does the same thing as double-clicking

https://gitlab.idiap.ch/beat/beat.editor/-/issues/274 · Keep a history of selected Algorithms and datasets around in the toolchain
Amir MOHAMMADI · 2022-03-04

Summary
With the new editor, it is only possible to add blocks based on available datasets, algorithms, and analyzers, which might be a good thing according to #265.
Also, once a block is added, it is no longer associated with the algorithm or dataset it was selected from.
This makes sense because, as far as the toolchain is concerned, this association is irrelevant, and it is kept around in the experiment definition.
However, I think it makes sense to keep this association around in the toolchain as well. Doing so would let us:
- Provide hints to the user (reminding them which algorithm a block was) so that they have less to memorize when designing the toolchain.
- Automatically figure out the channel of blocks, as discussed in #272.

On the other hand, we want to merge the toolchain/experiment definitions per https://gitlab.idiap.ch/beat/beat.core/-/issues/88, so maybe this issue should be looked at once that work is done.

https://gitlab.idiap.ch/beat/beat.web/-/issues/553 · toolchain created locally does not show up correctly on beat.web
Amir MOHAMMADI · 2022-03-04

This toolchain, [1.json](/uploads/49903eb5d5e5223a6f191a8656049ba8/1.json), also available at https://www.idiap.ch/software/beat/platform/toolchains/amohammadi/livdet/1/#viewer, does not show up correctly in the web platform but looks fine in beat.editor:
![image](/uploads/dfbfc0c76547e04de4db701dacb356c9/image.png)

https://gitlab.idiap.ch/beat/beat.web/-/issues/577 · Cleanup yearly supervision command
Flavio TARSETTI · 2022-03-04 · Code cleanup

The yearly supervision command needs a rewrite to improve clarity.