The VGG16 that we wrap here appends a one-hot classification layer
Today we wrap vgg16 and vgg19 directly from slim: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/python/slim/nets/vgg.py
Although this is very convenient, and we should definitely reuse code as much as possible, this implementation has an issue.
Here, https://github.com/tensorflow/tensorflow/blob/e0585bc351b19da39610cc20f6d7622b439dca4d/tensorflow/contrib/slim/python/slim/nets/vgg.py#L187, the slim authors append a one-hot classification layer inside the architecture function.
This is not very useful if we want to plug the network into our estimators.
Furthermore, in my opinion, architecture functions shouldn't carry explicit classification layers.
For instance, with this architecture as is, we can't directly use the Siamese or Triplet arrangements, since those work directly on embeddings.
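The split we are arguing for can be sketched in a few lines. This is a toy illustration only, not slim's API: every function name here (`architecture_fn`, `classification_head`, `siamese_distance`) is hypothetical, and the "layers" are stand-in arithmetic, just to show that once the architecture function stops at the embedding, both the classifier and the Siamese arrangement can share it.

```python
# Hypothetical sketch: architecture function returns an embedding only;
# the classification layer is appended as a separate step, so Siamese /
# Triplet setups can reuse the same embedding function without it.

def architecture_fn(inputs):
    """Stand-in for the VGG body up to the embedding. Not slim's vgg_16."""
    # Pretend "convolutions": just scale every input feature.
    return [2.0 * x for x in inputs]

def classification_head(embedding, num_classes):
    """The logits layer slim bakes into the architecture; here it is separate."""
    # Pretend "fully connected": collapse the embedding into each class logit.
    return [sum(embedding) + c for c in range(num_classes)]

def siamese_distance(inputs_a, inputs_b):
    """Siamese arrangement: both branches share the same embedding function."""
    emb_a = architecture_fn(inputs_a)
    emb_b = architecture_fn(inputs_b)
    return sum((a - b) ** 2 for a, b in zip(emb_a, emb_b))

# Classification reuses the embedding plus a head...
logits = classification_head(architecture_fn([1.0, 2.0]), num_classes=3)
# ...while the Siamese loss needs only the embedding, never the logits layer.
distance = siamese_distance([1.0, 2.0], [0.0, 2.0])
```

If the classification layer lived inside `architecture_fn`, as it does in the slim implementation, `siamese_distance` would have no clean way to get at the embedding.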