bob.learn.tensorflow issues
https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues (updated 2022-02-22T16:37:51Z)

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/87
Docs don't build with new sphinx version (Amir MOHAMMADI, 2022-02-22T16:37:51Z)

Job [#257923](https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/jobs/257923) failed for f420d1b322762c81b79f59fa103c4ad07713fd79:

```
bob/learn/tensorflow/losses/__init__.py:docstring of bob.learn.tensorflow.losses.center_loss.CenterLossLayer.call:11:Unexpected indentation.
```

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/86
Callback vanilla biometrics (Tiago de Freitas Pereira, 2021-02-10T07:59:14Z)

It would be nice to have a callback that triggers vanilla-biometrics.
Assignee: Tiago de Freitas Pereira

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/85
Integrate dask with ``bob keras fit`` for multi-worker strategy setup (Amir MOHAMMADI, 2020-10-08T16:21:21Z)

Dask can be used to set up a cluster for tensorflow: https://gitlab.idiap.ch/bob/bob.tf_experimental/
We should do this automatically in our train script.

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/84
Allow for strategies in ``bob keras fit`` script (Amir MOHAMMADI, 2020-10-08T16:19:49Z)

When fitting models under a distributed strategy, the model needs to be created and compiled under the strategy scope: https://www.tensorflow.org/tutorials/distribute/keras
``bob keras fit`` script should do this scoping automatically.
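The required pattern is simple: enter the scope, then build and compile. The sketch below uses stand-in objects so it runs without TensorFlow; in the real script the strategy would be e.g. `tf.distribute.MirroredStrategy()` and `model_fn` would build a `tf.keras` model. The helper name `build_under_scope` is an assumption, not the bob API.

```python
import contextlib

def build_under_scope(strategy, model_fn, **compile_kwargs):
    """Create and compile the model inside the distribution strategy scope,
    as required by https://www.tensorflow.org/tutorials/distribute/keras.
    Sketch only: this is what ``bob keras fit`` would have to do internally."""
    with strategy.scope():
        model = model_fn()
        model.compile(**compile_kwargs)
    return model

# Minimal stand-ins so the sketch is runnable without TensorFlow installed:
events = []

class DummyStrategy:
    @contextlib.contextmanager
    def scope(self):
        events.append("enter scope")
        yield
        events.append("exit scope")

class DummyModel:
    def compile(self, **kwargs):
        events.append("compile")

build_under_scope(DummyStrategy(), DummyModel, optimizer="adam")
print(events)  # ['enter scope', 'compile', 'exit scope']
```

The point the tutorial makes is exactly this ordering: both model construction and `compile` must happen between "enter scope" and "exit scope".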
https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/83
Nightlies failing (Tiago de Freitas Pereira, 2019-08-19T13:42:11Z)

As far as I can see, we have a doctest issue:
https://gitlab.idiap.ch/bob/bob.nightlies/-/jobs/170751
This is probably related to the `sphinx` major bump, which doesn't surprise me.

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/82
Issue with style transfer (Tiago de Freitas Pereira, 2019-06-25T17:53:13Z)

There's an issue with the style transfer implementation.
ping @amohammadi
Assignee: Tiago de Freitas Pereira

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/81
Dense net (Tiago de Freitas Pereira, 2019-08-16T06:14:57Z)

Hey @amohammadi,
You said that you have patched the dense net in some branch.
Do you mind opening an MR for it?
Thanks
Assignee: Amir MOHAMMADI

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/80
Keras gotchas (Amir MOHAMMADI, 2019-10-07T21:42:02Z)

Using Keras with estimators (without using `tf.keras.estimator.model_to_estimator`) is really weird. I am opening an issue here to keep track of the gotchas.
Look at this guide: https://www.tensorflow.org/beta/guide/migration_guide#using_a_custom_model_fn
which explains what you should do, but it does not cover everything.
* Keras variables do not go to variable stores. To use `tf.train.init_from_checkpoint` with Keras variables, one needs to explicitly pass the list of variables to the function. Something like this:
```python
assignment_map = {v.name.split(":")[0]: v for v in model.variables}
tf.train.init_from_checkpoint(
ckpt_dir_or_file=model_folder, assignment_map=assignment_map
)
```
* Keras layers (especially batch norm) do not update `tf.GraphKeys.UPDATE_OPS` collections. Hence you have to add those manually:
```python
# Add batch norm updates to the graph
for update_op in model.get_updates_for(inputs) + model.get_updates_for(None):
tf.add_to_collection(tf.GraphKeys.UPDATE_OPS, update_op)
```
* Keras layers' variables go to the global trainable variables (weirdly enough, given that you cannot use `init_from_checkpoint` on them). Doing something like:
```python
for layer in model.layers:
layer.trainable = False
```
will not remove those from that list. To use `tf.contrib.layers.optimize_loss` with keras layers, you have to do something like:
```python
tf.contrib.layers.optimize_loss(
...
variables=model.trainable_variables
)
```
Otherwise, you will be training all layers.
* In Keras Models, `model.variables` and `model.trainable_variables` are different. So you would handle L2 loss like this:
```python
# Add L2 losses to the graph
regularization_loss = 0.0
l2 = tf.keras.regularizers.l2(weight_decay)
for variable in model.trainable_variables:
regularization_loss += l2(variable)
tf.add_to_collection(tf.GraphKeys.REGULARIZATION_LOSSES, regularization_loss)
```
or you do something like this:
```python
# Get both the unconditional losses (the None part)
# and the input-conditional losses (the features part).
reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
```
* You have to name every layer/model explicitly, otherwise you end up with different names depending on the state of keras layers ...
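The name mangling in the first bullet's `assignment_map` (dropping the `:0` output suffix from `tf.Variable.name`, since checkpoint keys carry no such suffix) can be demonstrated with stand-in variable objects; `FakeVar` is of course hypothetical and runnable without TensorFlow:

```python
class FakeVar:
    """Stand-in for tf.Variable: real variables have names like 'dense/kernel:0'."""
    def __init__(self, name):
        self.name = name

model_variables = [FakeVar("dense/kernel:0"), FakeVar("dense/bias:0")]

# Checkpoint keys have no ':0' output suffix, so strip it before building
# the map that would be handed to tf.train.init_from_checkpoint:
assignment_map = {v.name.split(":")[0]: v for v in model_variables}
print(sorted(assignment_map))  # ['dense/bias', 'dense/kernel']
```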
https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/79
Follow-up from "A lot of new features" (Amir MOHAMMADI, 2019-04-26T08:46:47Z)

The following discussion from !75 should be addressed:
- [ ] @amohammadi started a [discussion](https://gitlab.idiap.ch/bob/bob.learn.tensorflow/merge_requests/75#note_41878):
> @tiago.pereira I don't think putting this `os.environ['KMP_DUPLICATE_LIB_OK']='True'` here is a good idea. Maybe we should update our bob-devel?

@tiago.pereira let's remove this when things are fixed upstream.
Assignee: Tiago de Freitas Pereira

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/78
Using tf.contrib.layers.optimize_loss in model_fns (estimators) (Amir MOHAMMADI, 2019-05-27T11:19:01Z)

Guys, there is a neat function in tensorflow v1 which takes care of a lot of boilerplate in estimators:
https://www.tensorflow.org/versions/r1.12/api_docs/python/tf/contrib/layers/optimize_loss
If you guys don't mind, I will add this to our estimators. It might break backward compatibility in terms of not being able to resume training from older checkpoints.
Assignee: Amir MOHAMMADI

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/77
random rotate of images is not really random (Amir MOHAMMADI, 2020-11-05T15:18:17Z)

in `bob.learn.tensorflow/bob/learn/tensorflow/dataset/__init__.py` there is:
```
if random_rotate:
image = tf.contrib.image.rotate(
image,
angles=numpy.random.randint(-5, 5),
interpolation="BILINEAR")
```
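The underlying problem can be illustrated in plain Python (stdlib only, no TensorFlow needed): the angle is drawn once, when the graph is built, rather than once per image. In TF1 terms, the fix would be to draw the angle with an in-graph op such as `tf.random_uniform`, which is evaluated at session run time.

```python
import random

random.seed(0)

# Buggy pattern: the random draw happens once, at "graph construction" time,
# so every image ends up rotated by the same angle.
fixed_angle = random.randint(-5, 5)
buggy_angles = [fixed_angle for _ in range(100)]

# Intended pattern: a fresh draw per image (what an in-graph random op,
# evaluated at run time, would give you).
fresh_angles = [random.randint(-5, 5) for _ in range(100)]

print(len(set(buggy_angles)))   # 1 -- a single angle for all 100 images
print(len(set(fresh_angles)))   # several distinct angles
```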
this random number (from numpy) is going to be evaluated once, and then all images will be rotated by that same angle.
Assignee: Amir MOHAMMADI

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/76
Logits embedding validation gives NaN loss (Amir MOHAMMADI, 2019-03-11T12:59:11Z)

According to https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits?hl=en :
```
labels: Tensor of shape [d_0, d_1, ..., d_{r-1}] (where r is rank of labels and result) and dtype int32 or int64.
Each entry in labels must be an index in [0, num_classes).
Other values will raise an exception when this op is run on CPU, and return NaN for corresponding loss and gradient rows on GPU.
```
and I am getting NaN losses on GPU and exceptions in CPU mode when using the `Logits` estimator with `embedding_validation=True`.
This happens when I run `bob tf eval` with ReplayMobile. It happens rarely, so I don't know what is going on. Here is one error that I get on CPU:
```
InvalidArgumentError (see above for traceback): Received a label value of 13 which is outside the valid range of [0, 12). Label values: 10 1
0 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 1
0 10 10 10 10 10 10 10 10 10 10 10 10 13 10 10 10 10 10 10 13 10 13 10 10 10 10 10 10 10 10 10 10 13 13 10 13 13 10 13 10 10 10 10 13 10 13 1
0 13 13 10 13 10 10 10 10 13 10 13 10 13 10 10 13 10 13 10 13 13 13 13 13 10 10 10 13 13 13 10 13 13 10 10 13 13 10 10 13 13 13 13 13 10 13 1
0 10 13 13 13 13 13 13 10 13 10 13 10 10 13 13 13 13 10 10 13 13 13 10 13 10 13 13 13 13 10 10 13 13 13 13 13 10 10 10 13 13 10 13 13 10 10 1
3 13 13 13 13 13 13 13 13 10 13 13 6 13 13 13 13 10 13 6 13 13 6 13 13 6 6 13 6 13 13 13 13 13 13 13 13 13 6 10 10 13 13 13 6 13 10 13 13 6 1
3 10 6 13 6 13 13 6 10 13 13 10 6 6 13 6 10 13 6 6 6 6 6 6 13 13 6 6 6 6 6 6 6 6 10 13 13 13 6 10 6 13 13 13 13 13 13 6 6 13 6 13 13 13 12 12
6 6 12 13 6 13 6 13 6 12 6 6 13 13 6 12 6 12 6 6 10 12 6 10 12 12 12 6 12 13 6 6 6 6 12 12 6 12 6 12 10 13 6 12 12 10 12 12 12 6 12 12 6 13
12 12 12 13 6 12 12 6 12 12 12 13 12 6 12 12 6 12 12 12 12 13 6 12 13 13 13 12 12 12 12 12 12 12 12 13 12 6 12 6 12 13 12 10 12 12 12 12 12 1
2 6 12 12 13 12 12 12 12 13 12 12 13 13 12 6 12 12 12 12 12 6 13 12 6 12 12 12 12 10 12 13 13 12 6 12 12 6 12 12 12 12 12 12 6 12 13 12 12 6
12 12 12 12 12 12 12 12 13 12 13 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 13 12 12 12 13 12 12 6 12 12 12 12 12 12 12 12 12 12 12 1
2 12 13 12 12 12 12 12 12 12 12 12 6 12 12 12 12
[[node Bio_loss/sparse_softmax_cross_entropy_loss/xentropy/xentropy (defined at deep/sr
c/bob.learn.tensorflow/bob/learn/tensorflow/loss/epsc.py:10) = SparseSoftmaxCrossEntropyWithLogits[T=DT_FLOAT, Tlabels=DT_INT32, _device="/j
ob:localhost/replica:0/task:0/device:CPU:0"](Logits/Bio/BiasAdd, IteratorGetNext:2)]]
```
and I construct my labels like this:
```python
files = database.all_files(groups=groups, flat=True)
CLIENT_IDS = sorted(set(str(f.client_id) for f in files))
CLIENT_IDS = dict(zip(CLIENT_IDS, range(len(CLIENT_IDS))))
load_data = load(load_data, context=context,
entry_point_group='bob', attribute_name='load_data')
def reader(f):
key = str(f.make_path("", "")).encode('utf-8')
label = CLIENT_IDS[str(f.client_id)]
```
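One way to catch this earlier (a sketch; `check_labels` is a hypothetical helper, not part of bob) is to validate the label range right after building `CLIENT_IDS`, since the op only fails lazily, and only reliably on CPU:

```python
def check_labels(labels, num_classes):
    """Fail fast if a label falls outside [0, num_classes), the condition
    that makes sparse_softmax_cross_entropy_with_logits raise on CPU and
    silently produce NaN loss/gradient rows on GPU."""
    bad = sorted({label for label in labels if not 0 <= label < num_classes})
    if bad:
        raise ValueError(f"labels outside [0, {num_classes}): {bad}")

check_labels([0, 5, 11], 12)        # fine
try:
    check_labels([10, 13, 6], 12)   # 13 is out of range, as in the error above
except ValueError as err:
    print(err)                      # labels outside [0, 12): [13]
```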
so I am not sure what is going on. I suspect I am hitting a corner case in https://gitlab.idiap.ch/bob/bob.learn.tensorflow/blob/c7a4d9f78adbcb9b6ec3c22a0ece375e6a271468/bob/learn/tensorflow/utils/util.py#L192
Any ideas are welcome.

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/75
Tensorflow 2 compatibility (Amir MOHAMMADI, 2020-11-05T15:17:35Z)

Tensorflow is making Keras and eager execution the center of its new API in version 2:
https://medium.com/tensorflow/standardizing-on-keras-guidance-on-high-level-apis-in-tensorflow-2-0-bad2b04c819a
While estimators are going to be supported, they do not support eager execution (they always run in graph mode).
Per [this guide](https://www.tensorflow.org/guide/eager), it's best to write code that runs in both eager and graph mode. I think we can extend our estimator classes to support execution in eager mode, i.e., we can have an eager-execution training script that runs just like `estimator.train` but in eager mode. This allows for easier debugging of our programs and lets us easily switch the same model training/evaluation/prediction to graph mode.
Any feedback is welcome.
Assignee: Amir MOHAMMADI

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/74
The VGG16 that we have here appends a one-hot-encoded layer (Tiago de Freitas Pereira, 2019-05-27T11:21:57Z)

Today we wrap `vgg16` and `vgg19` directly from slim: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/python/slim/nets/vgg.py
Although this is very convenient, and we definitely **must** reuse code as much as possible, this implementation has an issue.
Here, https://github.com/tensorflow/tensorflow/blob/e0585bc351b19da39610cc20f6d7622b439dca4d/tensorflow/contrib/slim/python/slim/nets/vgg.py#L187, the `slim` authors append a one-hot-encoded layer in the architecture function.
This is not very useful if we want to use our estimators.
Furthermore, in my opinion, architecture functions shouldn't carry explicit classification layers.
For instance, with this architecture as-is, we can't directly use the Siamese or Triplet arrangements, since those work directly with embeddings.
Assignee: Tiago de Freitas Pereira

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/73
Create a utility click command that describes the checkpoint file (Tiago de Freitas Pereira, 2019-01-25T16:24:41Z)

Basically, wrap this:
`from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file`
Assignee: Tiago de Freitas Pereira

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/72
Package release (Tiago de Freitas Pereira, 2019-08-16T06:15:43Z)

Hi guys,
Just to let you know.
I'll tag this package.
Cheers
Assignee: Tiago de Freitas Pereira

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/71
bob tf predict_bio has a bug in checkpoint loading step (Amir MOHAMMADI, 2019-01-18T09:55:59Z)

`tf.estimator.Estimator.predict` and `.evaluate` take a checkpoint parameter. This value must be a tensorflow checkpoint prefix (e.g. `/model_dir/model.ckpt-23952000`), but I wanted to point to a folder instead and have the latest checkpoint picked up from there automatically, so this script can be used in parallel with `bob tf eval`. However, it looks like there is a bug in https://gitlab.idiap.ch/bob/bob.learn.tensorflow/blob/9c068090975ab5cb13d738048017ff3b648c1bb7/bob/learn/tensorflow/script/predict_bio.py#L226 where `estimator.model_dir` is used as input to `tf.train.get_checkpoint_state` instead of `checkpoint`. This means the `--checkpoint` option has had no effect so far :(
Assignee: Amir MOHAMMADI

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/70
Tensorflow (Tiago de Freitas Pereira, 2018-11-23T14:02:50Z)

Guys, I'm launching several jobs on our GPU cluster (hundreds).
For **some** hosts I'm getting the following error once `estimator.train` is triggered.
Have you guys faced a similar issue?
I'm using tensorflow-gpu 1.8
ping @andre.anjos, @amohammadi
thanks
```
totalMemory: 11.17GiB freeMemory: 11.11GiB
2018-11-23 14:18:50.403387: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
2018-11-23 14:18:50.403643: E tensorflow/core/common_runtime/direct_session.cc:154] Internal: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
File "/remote/idiap.svm/user.active/tpereira/gitlab/bob/bob.bio.htface/bin/bob", line 33, in <module>
sys.exit(bob.extension.scripts.main_cli())
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/remote/idiap.svm/user.active/tpereira/gitlab/bob/bob.bio.htface/bob/bio/htface/script/domain_specic_units.py", line 86, in htface_train_dsu
steps=200000)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 363, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 843, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 859, in _train_model_default
saving_listeners)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1056, in _train_with_estimator_spec
log_step_count_steps=self._config.log_step_count_steps) as mon_sess:
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 405, in MonitoredTrainingSession
stop_grace_period_secs=stop_grace_period_secs)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 816, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 539, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1002, in __init__
_WrappedSession.__init__(self, self._create_session())
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1007, in _create_session
return self._sess_creator.create_session()
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 696, in create_session
self.tf_sess = self._session_creator.create_session()
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 467, in create_session
init_fn=self._scaffold.init_fn)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/training/session_manager.py", line 279, in prepare_session
config=config)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/training/session_manager.py", line 180, in _restore_checkpoint
sess = session.Session(self._target, graph=self._graph, config=config)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1560, in __init__
super(Session, self).__init__(target, graph, config=config)
File "/idiap/user/tpereira/conda/envs/bob.bio.htface/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 633, in __init__
self._session = tf_session.TF_NewSession(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: Failed to create session.
```

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/69
Follow-up from "Several changes" (Tiago de Freitas Pereira, 2018-11-02T07:57:49Z)

The following discussion from !68 should be addressed:
- [ ] @tiago.pereira started a [discussion](https://gitlab.idiap.ch/bob/bob.learn.tensorflow/merge_requests/68#note_36057): (+2 comments)
> Implement the new mechanism of moving averages in the Logits, Triplet and Siamese estimators

https://gitlab.idiap.ch/bob/bob.learn.tensorflow/-/issues/68
Follow-up from "Enable mac builds" (Amir MOHAMMADI, 2018-10-17T12:35:11Z)

The following discussion from !70 should be addressed:
- [ ] @amohammadi started a [discussion](https://gitlab.idiap.ch/bob/bob.learn.tensorflow/merge_requests/70#note_35426):
> This is not enough to enable mac builds here. You also need to change the conda recipe.

Assignee: Tiago de Freitas Pereira