# beat issues

Issue feed for https://gitlab.idiap.ch/groups/beat/-/issues

## Migrate from deprecated `$http.success` & `$http.error` to `$http.then`

*Jaden DIEFENBAUGH · 2017-08-06 · [beat.web#452](https://gitlab.idiap.ch/beat/beat.web/-/issues/452)*

While it was only officially deprecated in [v1.5.0](https://code.angularjs.org/1.5.0/docs/api/ng/service/$http#deprecation-notice), the alternative is already implemented in v1.4.5.

`.success` & `.error` don't allow chaining the way normal Promise handling (`.then`) does.
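For illustration, a hedged sketch of the migration, using a plain `Promise` as a stand-in for `$http` (the `fakeHttpGet`/`fetchFoo` names are invented; only the handler shapes mirror the real API):

```javascript
// Sketch of the migration, with a plain Promise standing in for $http.
// The deprecated form was roughly:
//   $http.get('/api/foo').success(function (data, status) { ... })
//                        .error(function (data, status) { ... });
// The .then form receives a single response object instead of unpacked
// arguments, and chains like any other Promise:
function fakeHttpGet(url) {
  // invented stand-in for $http.get(url); resolves a $http-like response
  return Promise.resolve({ data: { url: url }, status: 200 });
}

function fetchFoo() {
  return fakeHttpGet('/api/foo')
    .then(function (response) {
      // success handler: the payload now lives on response.data
      return response.data;
    })
    .then(function (data) {
      // chaining works normally, unlike .success/.error
      return data.url;
    })
    .catch(function (response) {
      // error handler replaces .error
      console.error('request failed with status', response.status);
      throw response;
    });
}
```

Because `.then` returns a new Promise, intermediate results can be transformed step by step, which is exactly what `.success`/`.error` could not do.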
## More descriptive names for Angular entities

*Jaden DIEFENBAUGH · 2017-08-06 · [beat.web#451](https://gitlab.idiap.ch/beat/beat.web/-/issues/451)*

A couple of targets off the top of my head (more to be added):
- [ ] [`theColumn` directive](https://gitlab.idiap.ch/beat/beat.web/blob/67ca8af8e123d03debc1d6b89cf4b0751ddbb4b5/beat/web/reports/static/reports/app/directives/reportItemView.js#L1117)
- [ ] [`item` directive](https://gitlab.idiap.ch/beat/beat.web/blob/67ca8af8e123d03debc1d6b89cf4b0751ddbb4b5/beat/web/reports/static/reports/app/directives/reportItemView.js#L1072)

## Refactor to not climb `$parent` in Angular

*Jaden DIEFENBAUGH · 2017-08-06 · [beat.web#450](https://gitlab.idiap.ch/beat/beat.web/-/issues/450)*

Controllers in Angular should not depend on an exact parent hierarchy. If a child controller needs information from a parent scope, that information should at the very least be passed down to the child explicitly (or, better yet, be refactored into a service). Climbing `$parent` couples the app's parts too tightly and kills modularity. For example, [the `theColumn` directive in the reports app](https://gitlab.idiap.ch/beat/beat.web/blob/67ca8af8e123d03debc1d6b89cf4b0751ddbb4b5/beat/web/reports/static/reports/app/directives/reportItemView.js#L1127-1132):
```js
var the_parent = $scope.$parent.$parent.$parent.$parent;
var report_experiments = $scope.$parent.$parent.$parent.$parent.report_experiments;
var report_experiments_alias = $scope.$parent.$parent.$parent.$parent.report_experiments_alias;
var floating_point_precision = $scope.$parent.$parent.$parent.$parent.floating_point_precision;
var report = $scope.$parent.$parent.$parent.report;
var experiment_name = $scope.$parent.item;
```
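For contrast, a minimal framework-free sketch of the service-based alternative (all names below are invented; in the real app this would be an Angular factory injected into the controllers that need it):

```javascript
// Hypothetical sketch: the state read via the $scope.$parent chains above
// could instead live in one shared service that both the reports controller
// and the `theColumn` directive receive by dependency injection. This is a
// framework-free stand-in for an Angular factory:
function reportStateFactory() {
  var state = {
    report_experiments: [],
    report_experiments_alias: {},
    floating_point_precision: 10
  };
  return {
    get: function (key) { return state[key]; },
    set: function (key, value) { state[key] = value; }
  };
}

// In Angular this would be registered once, e.g.
//   angular.module('reportApp').factory('reportState', reportStateFactory);
// and any controller could then read the shared state without touching $parent:
var reportState = reportStateFactory();
reportState.set('floating_point_precision', 4);
var precision = reportState.get('floating_point_precision'); // no $parent climbing
```

A service like this decouples the directive from its exact position in the scope hierarchy, so moving it in the template no longer breaks it.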
There's more of this elsewhere (just search for `$parent`).

## Separate frontend apps from backend API

*Jaden DIEFENBAUGH · 2017-08-06 · [beat.web#449](https://gitlab.idiap.ch/beat/beat.web/-/issues/449)*

Ideally, the code that runs in the browser (HTML/CSS/JS) wouldn't even be in the same repository as the server API (Python). For example, the reports app is currently a continuum of Angular & Django, making it hard to test and refactor. Separating the two by *not using Django templating*, and instead using Django only to serve the files, would be a step in the right direction.

## Add frontend regression tests

*Jaden DIEFENBAUGH · 2017-09-25 · [beat.web#447](https://gitlab.idiap.ch/beat/beat.web/-/issues/447)*

Practically none of the frontend is tested right now. Adding regression tests (tests designed to catch regressions/bugs introduced by changes made after the tests are added) would help with confidence in releasing new code.

## Split up huge (>1000 LOC) JavaScript files

*Jaden DIEFENBAUGH · 2019-07-04 · [beat.web#446](https://gitlab.idiap.ch/beat/beat.web/-/issues/446)*

Some JS files checked in at over 2000 LOC:
- [ ] `beat/web/toolchains/static/toolchains/js/editor.js`
- [ ] `beat/web/experiments/static/experiments/js/panels.js`
- [ ] `beat/web/toolchains/static/toolchains/js/models.js`
- [ ] `beat/web/experiments/static/experiments/js/utils.js`
- [ ] `beat/web/algorithms/static/algorithms/js/editor.js`
- [ ] `beat/web/search/static/search/js/controls.js`
- [ ] `beat/web/toolchains/static/toolchains/js/common.js`
- [ ] `beat/web/toolchains/static/toolchains/js/viewer.js`
- [ ] `beat/web/experiments/static/experiments/js/dialogs.js`

## Remove hardcoded unused URL parameter

*Jaden DIEFENBAUGH · 2019-07-04 · [beat.web#445](https://gitlab.idiap.ch/beat/beat.web/-/issues/445)*

In the [utils file](https://gitlab.idiap.ch/beat/beat.web/blob/67ca8af8e123d03debc1d6b89cf4b0751ddbb4b5/beat/web/experiments/static/experiments/js/utils.js#L1825), a request is sent to the server to fetch the list of reports the user may add selected experiments to. The URL includes a `fields` parameter that is [completely ignored on the API side](https://gitlab.idiap.ch/beat/beat.web/blob/67ca8af8e123d03debc1d6b89cf4b0751ddbb4b5/beat/web/reports/api.py#L100-109). Other URLs in that utils file might have unused parameters as well. Removing these unused parameters would make the functionality much clearer.

## Invalid default database

*Jaden DIEFENBAUGH · 2019-07-04 · [beat.web#444](https://gitlab.idiap.ch/beat/beat.web/-/issues/444)*

To reproduce:
- Generate the default new database, with development settings
- Run the example experiment `user/user/single/1/single`
- Add that experiment to a (new) report
- Add a new plot/graph item to the report
- Click the Save Report button (top-right of the reports page)
Expected:
- The server saves the report with the new plot successfully
Actual:
- The server encounters an error and returns a 500
- Django encounters an error on [line 201 of `beat/web/reports/models.py`](https://gitlab.idiap.ch/beat/beat.web/blob/67ca8af8e123d03debc1d6b89cf4b0751ddbb4b5/beat/web/reports/models.py#L201), where `value['selected_template']` is `None`, and one cannot call `split()` on `None`.

## [report] It is not possible to see tables of a report under certain constraints

*Tiago de Freitas Pereira · 2019-07-04 · [beat.web#439](https://gitlab.idiap.ch/beat/beat.web/-/issues/439)*

Hi,
It is not possible to see the table of results in this **public** report https://www.beat-eu.org/platform/reports/tpereira/btas2015_mobio_male/ when I'm signed out.
However, if I access it via this unique report id ( https://www.beat-eu.org/platform/reports/751803513/) it works.
Thanks for having a look at this.

## [backend] Smart scheduling policy

*André Anjos · 2019-07-04 · [beat.web#431](https://gitlab.idiap.ch/beat/beat.web/-/issues/431)*

This issue was migrated from the old `beat.scheduler` package. Original bug report by Laurent El-Shafey.
The scheduling policy currently implemented is naive, which means that it assigns the jobs when it can, in the order of processing.
In particular, this policy is not able to properly address the following problems:
1. Fair (and/or controlled) sharing of computing power between users
2. Prioritization of jobs. This consists of both:
* User-based prioritization
* Queuing-duration-based prioritization
3. Non-starvation of jobs when resources (computing nodes) are shared between different queues (commonly known as 'resource reservation'). A typical case is when the queue relies on several cores per slot.
Ideally, we would like to address all three of these points in a smarter scheduling policy.
All these may require the following variables:
1. User-weight
* The 'weight' of a user on a given queue (set by the administrator)
* The 'reputation' of the user (set by the administrator)
2. Prioritization
* The 'relative priority' of a job (set by the user)
* The 'queuing duration' of a job (determined by the scheduler)
In addition to these variables, we may define additional parameters (set by the administrator) which fine-tune the behavior of this policy. For instance, we may define the 'relative weight' of a queue wrt. others, to address point 3.
### Second iteration
The main difficulty is to find a 'good' way to schedule the jobs, while keeping a good trade-off between the three (often contradictory) requirements.
Following are aspects that arise when attempting to address all three requirements:
1. Fair sharing of computing power
* The computing load caused by a given user can be computed per queue or globally. Which one should be considered? Both?
Considering the load on a per-queue basis penalizes users who just want to use a specific queue. In fact, users may be tempted to 'artificially' use several queues just to get a few more computing slots.
Considering it globally leads to the problem of how to measure it, since queues may be attached to different 'amounts' of computing power (how should the number of slots, cores, maximum execution time, etc. be weighed when averaging to get the 'global' load caused by a user?).
* There are different ways to compute the 'load': instantaneously or over a fixed period of time. SGE relies on the latter. What would be the best option for our problem?
2. Prioritization of jobs
* Queuing time may be unbounded. What scale should be used to decide whether a job queued a long time ago should run: linear, square root, logarithmic, etc.?
* User-specific prioritization and queueing-time prioritization may be in complete contradiction. How do we address this and find a good trade-off between the two?
3. Non-starvation of jobs when resources (computing nodes) are shared between different queues
* A job queued for a long time will have a high priority, since there is a policy based on queueing time. However, how can we guarantee that it is going to run and not starve because resources keep being used by jobs on other queues? For this purpose, we need to implement a resource-reservation mechanism.
Overall, the difficulty is to combine all these requirements together. Could we generate a single priority value for a job, based on all these aspects, using some weighting mechanism?
Besides, each time a job is assigned, many of these values theoretically change. Should we recompute them all the time? Several times per scheduling loop? Only once (which may be a real problem for the 'fair sharing of computing power' aspect)?
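As a thought experiment only, a single priority value could combine these variables with a weighting mechanism. Everything below (field names, weights, and the logarithmic scale for queueing time) is an assumption for illustration, not the platform's actual policy:

```javascript
// Hypothetical sketch: combine user weight, user-set relative priority and
// queueing duration into one job priority value. All names, weights and the
// logarithmic scale are invented for illustration.
function jobPriority(job) {
  var userTerm = job.userWeight * job.userReputation;  // set by the administrator
  var requestTerm = job.relativePriority;              // set by the user
  // Logarithmic growth rewards long waits without letting them dominate:
  var waitTerm = Math.log(1 + job.queuedSeconds / 60); // determined by the scheduler
  return 1.0 * userTerm + 0.5 * requestTerm + 2.0 * waitTerm;
}

// A job queued for an hour outranks an otherwise identical fresh job:
var fresh = { userWeight: 1, userReputation: 1, relativePriority: 0, queuedSeconds: 0 };
var stale = { userWeight: 1, userReputation: 1, relativePriority: 0, queuedSeconds: 3600 };
```

The open questions above (recomputation frequency, per-queue vs. global load) are orthogonal to the shape of such a formula; this only shows that a single comparable number is easy to produce once the variables are defined.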
### More comments from Laurent
I've just implemented a few methods for the scheduler to compute instantaneous load information, to address point 1.
In a first iteration, we won't rely on history/cumulative loads.
Wrt. fair sharing of computing power, we still need to address the problem of the global load caused by a given user. This may be done directly in the smart scheduling policy.

## [toolchain] Editor: move the whole toolchain diagram

*Pavel KORSHUNOV · 2019-07-04 · [beat.web#415](https://gitlab.idiap.ch/beat/beat.web/-/issues/415)*

It would be great if we could select the whole toolchain diagram in the editor (by mouse or with Ctrl+A) and be able to move it around.

## [algorithms] When sharing, cannot fine-tune source-code visibility of libraries

*André Anjos · 2019-07-04 · [beat.web#400](https://gitlab.idiap.ch/beat/beat.web/-/issues/400)*

A functionality like the one for attestations/experiments would be a good enhancement.

## [experiments] Clicking 'save' button brings to the list of experiments

*Pavel KORSHUNOV · 2019-07-04 · [beat.web#390](https://gitlab.idiap.ch/beat/beat.web/-/issues/390)*

On the experiment editing page, when I click the 'Save' button, it brings me to the list of all the experiments, so if I want to continue editing the experiment, I have to click on its name in the list again and wait for it to load.
Instead, when clicking 'Save', could it just save the experiment and stay on the same editing page?
In any case, if I want to see the list of experiments, I can click on the corresponding section of the experiment's name at the top of the page.

## [ui] User deletion

*André Anjos · 2019-07-04 · [beat.web#355](https://gitlab.idiap.ch/beat/beat.web/-/issues/355)*

According to legal advice, the BEAT platform must honour the "right to be forgotten", as is now required of applications hosted in EU countries.
A user may be requested to be deleted, together with all (private) contributions. For public contributions, we have the right to keep them (fortunately). Here are the actions to be taken when a user is removed:
* Public components are kept, but ownership is transferred to an anonymous user.
* Private components are removed
* The user account details (e-mail and such) are removed
* Attestations are kept if they are published; locked attestations are deleted
* Users that have forked something from the said user are notified (?)

## [many] Gamification of the platform

*Sébastien MARCEL · 2019-07-04 · [beat.web#326](https://gitlab.idiap.ch/beat/beat.web/-/issues/326)*

This is for discussion later on.
We have been discussing the gamification of the platform to better engage users by awarding them privileges. This is related to the Activity page:
https://www.beat-eu.org/platform/user/smarcel/?tab=activity
I recommend this very nice MSc thesis "Gamification in a social system":
http://www.cs.rug.nl/~aiellom/tesi/blaauw.pdf
It describes various algorithms relying on graphs -- and provides code in the appendix -- to model reputation, with algorithms such as PageRank and others. Another algorithm, called LevelUp, presents principles to award badges and build a leaderboard.
The author is apparently a big fan of online games such as Call of Duty!
Here are also some interesting tips on gamification, in particular tips 7, 8 and 9:
https://www.td.org/Publications/Blogs/Learning-Technologies-Blog/2014/02/10-Best-Practices-for-Implementing-Gamification
## [experiments] Reproducibility chart

*André Anjos · 2019-07-04 · [beat.web#325](https://gitlab.idiap.ch/beat/beat.web/-/issues/325)*

After thinking about how reproducible an experiment in the platform really is, I think we can improve the experiment display a bit to include some sort of "reproducibility chart".
The idea behind this is to check, on a per-experiment basis, and annotate the key points that may make the experiment irreproducible. Here are some key aspects:
1. The database used by the experiment is deactivated due to the end of the license agreement between Idiap and the controller
2. The environment used by any of the blocks is not active anymore (outdated by another environment)
3. There are incompatible API changes that **may** affect reproducibility (for example, the changes pushed on September 2nd removing the ``data_index`` and ``data_index_end`` attributes). This condition must be taken with care, as a thorough conclusion would require an in-depth (human) analysis of all experiment algorithms. What we can do, though, is set up a new table listing incompatible API changes and their dates. If the experiment finished after a given change's date, we can consider it compatible; otherwise, not. We keep this table updated whenever incompatible API modifications are made.

## [search,report] Detection and displaying of results

*André Anjos · 2019-07-04 · [beat.web#313](https://gitlab.idiap.ch/beat/beat.web/-/issues/313)*

This ticket is a follow-up on a realization I had this morning while looking at some of our stored searches.
In the "settings" field, I noticed that the result fields to be displayed are prepended with their analyzer (algorithm) name, as reported by `beat.web.algorithms.Algorithm.fullname()`. This will not work if we're comparing experiments (with the same toolchain) that have two or more analyzer outputs which happen to use the same algorithm. @philip.abbet: Am I overlooking something? Please fill in if so.
To improve on this, we need to better define what we allow to be displayed (before we figure out how to properly display it) and, only then, how it is going to be displayed and saved.
As of today, we have a couple of use-cases covered:
1. The user wants to compare experiments for which there is only one analyzer output using the same algorithm
2. The user wants to compare experiments with the same toolchain, for which there are matching algorithms on each analyzer block over all experiments.
So that these two cases are correctly displayed and stored, and because in case 2 the analyzer blocks (on the toolchain) can use the same algorithm, it is not good to store search "settings" prefixing result names with the algorithm fullname; they should be prefixed with the block name instead. Right?
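A small sketch of why block-name keys avoid the collision (all names below are invented for illustration, not the platform's actual settings schema):

```javascript
// Hypothetical example: two analyzer blocks in the same toolchain run the
// same algorithm, so the algorithm fullname cannot distinguish their results.
var results = [
  { block: 'analysis_dev',  algorithm: 'user/eer_analyzer/1', value: 0.042 },
  { block: 'analysis_eval', algorithm: 'user/eer_analyzer/1', value: 0.051 }
];

function keyByBlock(entries) {
  var settings = {};
  entries.forEach(function (e) {
    settings[e.block + '.value'] = e.value; // block names are unique per toolchain
  });
  return settings;
}

// Keying by algorithm fullname instead would collide: both entries share
// 'user/eer_analyzer/1', so one value would silently overwrite the other.
```

With block-name keys, both analyzer outputs survive in the stored search settings, which is the behaviour the ticket argues for.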
## Sharing behaviour

*André Anjos · 2019-07-04 · [beat.web#306](https://gitlab.idiap.ch/beat/beat.web/-/issues/306)*

As per discussion at today's meeting, we decided to open a ticket to:
1. Define what is/should be the sharing behavior on the platform
2. Assert the tests we have are catching errors on this
3. See if there is any way we could simplify the logic, so as to make the check faster and more programmatically evident
### Current situation
As indicated by our discussion, here is what I understood:
1. The permissions are organized in levels (from the most private to the least): `Private`, `Usable`, `Shared`, `Public`.
2. If the object is `Public`, then any user can view the object. Logged in users can use them in experiments. The contents of the 4 lists of users and teams (`shared_with`, `shared_with_team`, `usable_by` and `usable_by_team`) are ignored.
3. If the object is `Private` the lists are also ignored. Only the user, when logged in, can view and use the object in question.
4. If the object is `Usable` the lists for `usable_by` and `usable_by_team` are looked up. If they are empty, then the object is **usable** by all users and all teams available in the platform. If the lists are not empty, they implement a restriction saying "only" those have the `Usable` permission. Combinations of empty/having contents on any of those lists should be supported.
5. If the object is `Shared`, the `shared_with` and `shared_with_team` lists are looked up in the same fashion as for `Usable`. The only exception here is that an object may be shared with some users/teams and usable by other users/teams. So, if the object is `Shared`, authorization for viewing should look up the `shared_with` and `shared_with_team` attributes, whereas authorization for use only should look up the `usable_by` and `usable_by_team` attributes. If these lists are empty, here is the expected behavior:
1. If `shared_with` and `shared_with_team` are empty, then the object is shared with everyone.
In this case, it should be made `Public` instead. I.e., there shouldn't be any case where
an object has a `Shared` permission with empty `shared_with*` attributes.
2. If `usable_by*` attributes are empty, then everybody on the platform will be able to use the
contribution. (Comment: so, basically, if we want to share an algorithm we have to copy-n-paste the user list from `shared_with*` to `usable_by*`?)
6. Sharing is an irreversible procedure (`+=`)
### Impossible states:
1. Object is `Shared` but `shared_with*` attributes are empty
2. Object is `Private` but `shared_with*` or `usable_by*` attributes are non-empty
3. Object is `Public` but `shared_with*` or `usable_by*` attributes are non-empty
4. Object is `Usable` but `shared_with*` attributes are non-empty
### Requirements
From the above state, I tried to extract the requirements:
1. Objects start their lifetime on the platform as `Private`. Only the author has view/use access to them.
2. It must be possible to make an object (algorithm, library, plotter and database) `Usable`, meaning users/teams in a list can use it (not view it).
3. It must be possible to make an object (algorithm, library, plotter and database) `Usable` to all users of the platform.
4. It must be possible to make an object `Shared`, meaning users/teams in a list can use **and view the said object**. Note that "sharing" with all is the same as making it `Public`.
5. If an object is `Shared`, the author may optionally decide if it is still `Usable` by other users and teams. In this case, the platform should restrict the access accordingly.
6. Sharing shall be reversible, for as long as the object in question is not being used by anyone (`deletable()` answers `True`). This is, effectively, the same as forking, deleting the existing object and renaming the new object to the old name, which should certainly be possible for as long as nobody is using it. If the object is being used, then the sharing permissions cannot be lowered, only raised. I.e., if the object is `Usable`, it can always be made `Shared` with the same or more users. If the object is `Shared`, it can always be made public. In summary, the currently implemented `+=` rule applies.
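To make the intended semantics concrete, here is a hedged, framework-free sketch of the view/use lookup from points 2-5 of the current situation (field names are invented, and the team lists are omitted for brevity):

```javascript
// Hypothetical sketch of the view/use authorization lookup. `obj` is a plain
// stand-in for a platform object; all field names are assumptions.
function canView(obj, user) {
  if (obj.sharing === 'public') return true;          // anyone can view
  if (obj.sharing === 'private') return user === obj.owner;
  if (obj.sharing === 'shared') {
    return user === obj.owner ||
      obj.shared_with.length === 0 ||                 // empty list: shared with everyone
      obj.shared_with.indexOf(user) !== -1;
  }
  return user === obj.owner;                          // 'usable' grants use, not view
}

function canUse(obj, user) {
  if (obj.sharing === 'public') return user !== null; // any logged-in user
  if (obj.sharing === 'private') return user === obj.owner;
  // 'usable' and 'shared' both consult the usable_by list; viewers can also use:
  return canView(obj, user) ||
    obj.usable_by.length === 0 ||                     // empty list: usable by all
    obj.usable_by.indexOf(user) !== -1;
}
```

Writing the rules down this way also makes the "impossible states" above easy to assert against in tests, since each one is just a combination of `sharing` and list contents that the check never consults.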
Does that look reasonable?

## Statistics output by agent don't account for the CPU time spent reading data from disk

*André Anjos · 2017-10-16 · [beat.core#43](https://gitlab.idiap.ch/beat/beat.core/-/issues/43)*

This is inconsistent with our cpulimit policy, by which we stipulate that both the agent and the user process are bound to the same CPU limit and must share the resources.
In this way, it would be best if we also accounted for the CPU usage of the agent plus the user process in the final statistics output.
One must be careful, as the agent is called in two different contexts (`beat.scheduler iodaemon` or `beat.cmdline exp run`), but the accounting must always come out right.

## [reports] Inconsistent behaviour with delete

*André Anjos · 2019-07-04 · [beat.web#283](https://gitlab.idiap.ch/beat/beat.web/-/issues/283)*

When I delete an experiment from the report by clicking a button in the GUI, it immediately deletes the experiment from the report and saves it.
If I do the same with any other button on that GUI, I need to click on the "Save report" button to get it saved. This is confusing.
We have to go one way or the other, consistently.