Pinning torch.device to CPU for inference. We are having problems with IDIAP grid

Tiago de Freitas Pereira requested to merge cpu-pin into master

By default, I'm setting torch.device("cpu") in all our PyTorch models for inference.

Since the last major system update at Idiap, torch.cuda.is_available() returns True on "CPU" hosts in our grid. PyTorch then raises an exception once it tries to use the visible GPU, which we are not allowed to use on CPU hosts.
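A minimal sketch of the change, assuming a generic torch.nn.Module; the function name and `model` argument are illustrative, not the actual code in this MR:

```python
import torch


def load_for_inference(model: torch.nn.Module) -> torch.nn.Module:
    # Pin to CPU explicitly instead of trusting torch.cuda.is_available(),
    # which can return True on hosts where GPU use is not permitted.
    device = torch.device("cpu")
    model = model.to(device)
    model.eval()  # inference mode: disable dropout/batch-norm updates
    return model
```

The same idea applies when loading checkpoints: passing map_location="cpu" to torch.load keeps GPU-trained weights from being deserialized onto a CUDA device.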

I'm hypothesizing here; I don't remember having this issue last month.
