Integrate the structure of train, eval, and predict

Closed Saeed SARFJOO requested to merge integrate_structures into master
1 unresolved thread

Merge request reports


Activity

  • @ssarfjoo the reason this script (predict_bio) takes biofiles, database, and load_data is that it needs them to split the work into several jobs when it runs as part of a job array.

    We used to have another script called predict_generic which did what you want, but it was removed since no one used it. I suggest you bring that script back (you can rewrite it from scratch starting from predict_bio; you could also call it predict) and have it share code with the predict_bio script.

    Edited by Amir MOHAMMADI
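
    For context on why predict_bio needs the file list and database: a job array splits the input files across jobs by the scheduler-assigned task index. A minimal sketch of that splitting, assuming an SGE-style `SGE_TASK_ID` environment variable (the helper name and file list here are illustrative, not the actual predict_bio code):

    ```python
    import os

    def split_for_task(items, n_jobs, task_id):
        """Return the slice of `items` that job `task_id` (0-based) should process."""
        # Distribute items as evenly as possible across n_jobs chunks.
        per_job, remainder = divmod(len(items), n_jobs)
        start = task_id * per_job + min(task_id, remainder)
        end = start + per_job + (1 if task_id < remainder else 0)
        return items[start:end]

    # Each job in the array reads its index from the scheduler environment
    # (SGE_TASK_ID is 1-based on SGE) and processes only its share of files.
    biofiles = ["f0", "f1", "f2", "f3", "f4"]  # placeholder file list
    task_id = int(os.environ.get("SGE_TASK_ID", "1")) - 1
    my_files = split_for_task(biofiles, n_jobs=2, task_id=task_id)
    ```

    Without access to the full file list and the database inside the script, this per-job slicing cannot happen, which is why the arguments are mandatory in predict_bio.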
```diff
         checkpoint_path=checkpoint_path,
     )

-    logger.info("Saving the predictions of %d files in %s", len(generator),
-                output_dir)
+    if generator is not None:
+        logger.info("Saving the predictions of %d files in %s", len(generator),
+                    output_dir)
+    else:
+        logger.info("Saving the prediction files in %s", output_dir)

     pool = Pool()
```
  • It must be possible to manage the parallel running either at this level or at a higher level. For instance, in my case I handle it at a higher level and, just like train and eval, I want to pass predict_input_fn without any parameters.

  • I understand, but this is not what predict_bio is supposed to do. You need to add a separate script.

  • I think this is outdated
