
Autoencoder pre-training using RGB faces

Code to pre-train an autoencoder using RGB facial images from the CelebA database.

Edited by Olegs NIKISINS

Activity

  • added 1 commit

    • 8f8b627f - removed os from requirements.txt

    Compare with previous version

  • Hey,

    Did you have a look at how things are organized in this package? The idea is actually not to have everything in one place (hence the folders dataset, architectures, trainers ...). Your script is handling almost all of that!

    I can understand that it may be easier for you, but it would be nice to comply with what has been done before, for consistency and reusability purposes.

    Also, since neither your Dataset nor your architecture seems to be defined in the script, I guess you should at least provide the configuration file ...

    Please have a look at the doc and try to follow the examples provided there, thanks!

  • @heusch,

    This Merge Request is not meant for reviewing yet, it is explicitly WIP :) Sure, I will add the datasets and configs to the corresponding locations, and will let you know once it is ready for review.

    However, you are right that the code will be organized a bit differently. I have a generic training script, which is suitable for all the DNN architectures I am planning to add to the package. I think this is best from a re-usability point of view, since it doesn't require a separate trainer for each architecture.

    Please let me know if you have a different opinion. Thanks!
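
The config-driven design described in this comment can be sketched as below. This is a hedged illustration: `load_config` is a hypothetical helper, and the attribute names a trainer would read from the config (`network`, `dataset`, ...) are assumptions, not the actual code in this MR.

```python
import importlib.util


def load_config(path):
    """Load a Python configuration file as a module.

    A single generic training script can read attributes such as
    ``network`` or ``dataset`` from the returned module, so the same
    script works for any architecture added to the package.
    """
    spec = importlib.util.spec_from_file_location("train_config", path)
    config = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(config)
    return config
```

A generic trainer would then call, e.g., `config = load_config("autoencoder/net1_celeba.py")` and hand `config.network` to its training loop.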

  • added 1 commit

    • 720dd3ce - Removed standard lib packages from meta.yaml, added entry point for training script

    Compare with previous version

  • > This Merge Request is not meant for reviewing yet, it is explicitly WIP :) Sure, will add datasets and configs to corresponding locations, and will let you know once it is ready for reviewing.

    I made the comment early on just to save you unnecessary and maybe redundant work ;)

    > However, you are right that the code will be organized a bit differently. I have a generic training script, which is suitable for all the DNN architectures I am planning to add to the package. I think this is best from a re-usability point of view, since it doesn't require a separate trainer for each architecture.

    I'm not familiar with autoencoders, but in my case (and in the package) different architectures (well, in the same "family" at least) can share a single trainer. This is the case for the CNNTrainer, for instance: you can define whatever CNN you like (VGG, ResNet, LightCNN) to do classification, and the same trainer can be used as-is. This does not apply to GANs, since a conditional GAN is quite different from a traditional GAN (in the problem definition, I mean).

    I don't know if what goes for CNNs also applies to autoencoders, but if it does, it would be nice to have an AutoEncoderTrainer in trainers and the different network architectures in architectures.

    Thanks!
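
The point about a single trainer serving several architectures can be shown with a framework-agnostic toy sketch. Everything below (the `Trainer` and `LinearModel` names and their interface) is invented for illustration; it is not the package's actual CNNTrainer.

```python
class Trainer:
    """One trainer for any model in the same 'family': the trainer only
    relies on a minimal interface (here: ``update(x, y, lr)``), so
    swapping the architecture does not require writing a new trainer."""

    def __init__(self, model, lr=0.05):
        self.model = model
        self.lr = lr

    def fit(self, samples, epochs=10):
        for _ in range(epochs):
            for x, y in samples:
                self.model.update(x, y, self.lr)
        return self.model


class LinearModel:
    """Toy 'architecture' implementing the interface: predicts w * x."""

    def __init__(self):
        self.w = 0.0

    def predict(self, x):
        return self.w * x

    def update(self, x, y, lr):
        # gradient of (w*x - y)**2 with respect to w is 2*x*(w*x - y)
        self.w -= lr * 2 * x * (self.predict(x) - y)
```

Any other model exposing the same `update` method could be handed to the same `Trainer` unchanged, which is the reuse argument made above.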

  • Well, for me a single script worked for all the nets :) Sure, I will put things in the corresponding places. But in my case the trainer is not a class, just a script taking network configs as inputs. I will demonstrate how to run it when ready. Thanks!

  • added 1 commit

    • f979eb7d - Fixed the map_labels in the utils, fixing the unit tests

    Compare with previous version

  • added 1 commit

    • 06d5b2df - Added dataset class, Conv-AE model, config to train on CelebA, and train script

    Compare with previous version

  • added 1 commit

    Compare with previous version

  • added 1 commit

    • b2a8f933 - modified the init in datasets folder

    Compare with previous version

  • added 1 commit

    Compare with previous version

  • added 1 commit

    • eca9f56b - Reverted changes in data_folder related to h5py import

    Compare with previous version

  • added 1 commit

    • 181b2fff - Added the unit test for ConvAutoencoder model

    Compare with previous version

  • Hey @heusch,

    This is now ready, so I am removing the WIP flag from the MR. If you would like to test the training, you can run:

    jman submit --queue gpu \
    --name experiment_0_celeba_pretraining \
    --log-dir <path_to_save_the_results>/logs/ \
    --environment="PYTHONUNBUFFERED=1" -- \
    ./bin/pytorch-train-autoencoder-pad.py \
    /idiap/temp/onikisins/project/ODIN/experiment_data/pad_experiments_bob4/batl_db/ae_refactor_test/celeba_preprocessing/experiment_0/small_copy/ \
    <path_to_save_the_results>/ \
    -c autoencoder/net1_celeba.py \
    -cg bob.learn.pytorch.config \
    -gpu \
    -vv

    This will train an autoencoder on a small subset of CelebA faces. Thanks!
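
For readers who want to try the command above, the `-c` option points to a Python configuration file. The fragment below is only a hypothetical sketch of what such a file might define; the attribute names, values, and import paths are assumptions, and the real net1_celeba.py lives in the MR itself.

```python
# Hypothetical sketch of a training configuration in the spirit of
# net1_celeba.py; names and values are assumptions, not the MR's code.
from bob.learn.pytorch.architectures import ConvAutoencoder
from bob.learn.pytorch.datasets import DataFolder

BATCH_SIZE = 32        # images per training batch
NUM_EPOCHS = 50        # total passes over the data
LEARNING_RATE = 1e-3   # optimizer step size

# The generic training script reads these attributes from the config:
network = ConvAutoencoder()
dataset = DataFolder(data_folder="<path_to_preprocessed_celeba_faces>")
```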

  • Olegs NIKISINS unmarked as a Work In Progress

  • Guillaume HEUSCH
    Guillaume HEUSCH @heusch started a thread on an outdated change in commit 720dd3ce
69 69
70 70   # scripts should be declared using this entry:
71 71   'console_scripts' : [
72    -   'train_cnn.py = bob.learn.pytorch.scripts.train_cnn:main',
73    -   'train_dcgan.py = bob.learn.pytorch.scripts.train_dcgan:main',
74    -   'train_conditionalgan.py = bob.learn.pytorch.scripts.train_conditionalgan:main',
   72 +   'train_cnn.py = bob.learn.pytorch.scripts.train_cnn:main',
   73 +   'train_dcgan.py = bob.learn.pytorch.scripts.train_dcgan:main',
   74 +   'train_conditionalgan.py = bob.learn.pytorch.scripts.train_conditionalgan:main',
   75 +   'pytorch-train-autoencoder-pad.py = bob.pad.face.script.pytorch.pytorch_train:main',
  • Yes, will do so.

  • Apart from that, it looks alright. It's a bit different from how I did it, but this should be OK! I will check in more detail when I have some more time, since it may actually be a better way to handle data, for instance (although less generic, I guess).

    Could you also update the doc? It would be great to have this explained in plain text as well.

    Thanks!
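
As a rough illustration of the folder-based data handling being discussed, a minimal dataset class could look like the sketch below. This is a toy, invented for illustration: the MR's actual DataFolder class additionally loads and preprocesses the images.

```python
from pathlib import Path


class FolderDataset:
    """Toy folder-backed dataset: indexes data files under a root folder.

    A real dataset class would also read each file and apply transforms
    in ``__getitem__``; here we only return the path."""

    EXTENSIONS = {".png", ".jpg", ".hdf5"}

    def __init__(self, root):
        self.files = sorted(
            p for p in Path(root).rglob("*") if p.suffix in self.EXTENSIONS
        )

    def __len__(self):
        return len(self.files)

    def __getitem__(self, index):
        # A real implementation would load and transform the file here.
        return self.files[index]
```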

  • added 1 commit

    • 1876beb7 - Renamed the training script to more specific, now train_autoencoder

    Compare with previous version

  • Thanks for the review! Yes, I will create a section on AEs in the docs.

  • added 1 commit

    • 61202ef7 - Updated the documentation on AEs, trained on CelebA

    Compare with previous version

  • Hello @heusch,

    The documentation on AEs is now updated and ready to be merged :), unless you have more comments. Thanks!

  • Thanks for the documentation. Just don't forget to update the name of the training script (as mentioned above).

  • Already updated.

  • Are you sure? I just built the package (and the doc), and I still have a ./bin/pytorch-train-autoencoder-pad.py script ...

  • Ah, sorry, I guess you meant a different place. Will rename it now. Thanks!

  • 1  .. py:currentmodule:: bob.learn.pytorch
    2
    3  Convolutional autoencoder
    4  =============================
    5
    6  This section introduces a work-flow for training a convolutional autoencoder. The autoencoder discussed in this section is introduced in the following publication [NGM19]_. It is recommended to check the publication for a better understanding of the architecture of the autoencoder, as well as for potential applications of autoencoders in biometrics (face PAD in this case).
    7
    8  As an example, to train an autoencoder on facial images extracted from the CelebA database, you can use the following command:
    9
    10 .. code-block:: sh
    11
    12     ./bin/pytorch-train-autoencoder-pad.py \  # script used for autoencoder training; can be used for other networks as well
  • Also, make sure that the docstrings are right (within bob, it has been agreed to use the numpy style: https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html)

    For instance, your DataFolder class is not displayed correctly in the Python API section of the doc. You may want to have a look at other classes in the package.
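
For reference, a numpy-style docstring (the style linked above) is structured as in the sketch below; the class and its parameters are invented purely to show the format.

```python
class ExampleDataset:
    """One-line summary of the class.

    A longer description can follow, explaining what the class is for
    and how it is meant to be used.

    Attributes
    ----------
    root : str
        Path to the folder containing the data files.
    extension : str
        Extension of the data files to be indexed.
    """

    def __init__(self, root, extension=".hdf5"):
        """Init method.

        Parameters
        ----------
        root : str
            Path to the folder containing the data files.
        extension : str
            Extension of the data files to be indexed. Default: ".hdf5".
        """
        self.root = root
        self.extension = extension
```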

  • added 1 commit

    • 2da69255 - Renamed the script used for AE training, both in setup.py and docs

    Compare with previous version

  • added 1 commit

    • 18e00bbf - Updated the doc in data_folder to match the numpy format

    Compare with previous version

  • added 1 commit

    • 842c9bbd - Updated the inint in datasets folder, making the source link available in docs

    Compare with previous version

  • added 1 commit

    • 5e44dfde - Added the doc to the __init__ in data_folder, to fix the sphinx warning

    Compare with previous version

  • added 1 commit

    • 7e723289 - Renamed callable to object, sphinx warning

    Compare with previous version

  • @heusch, I addressed your comments. The script is renamed, and the class documentation is in numpy format. Thanks!

  • All good, thanks!

  • mentioned in commit c75f9631
