software / neural_filters · Commits · 38b26d13

Commit 38b26d13 authored Jun 07, 2018 by François MARELLI
Delete log_loss.py

parent 33d0db50
Showing 1 changed file with 0 additions and 108 deletions (+0, -108)
neural_filters/log_loss.py (deleted, mode 100644 → 0)
import torch
from torch.nn import MSELoss
from torch.nn import L1Loss
class LogMSELoss(MSELoss):
    r"""Creates a criterion that measures the logarithmic mean squared error between
    `n` elements in the input `x` and target `y`:

    :math:`{loss}(x, y) = \log(1/n \sum |x_i - y_i|^2 + \epsilon)`

    `x` and `y` can have arbitrary shapes with a total of `n` elements each.

    The sum operation still operates over all the elements, and divides by `n`.
    The division by `n` can be avoided if one sets the internal variable
    `size_average` to ``False``.

    To get a batch of losses, a loss per batch element, set `reduce` to
    ``False``. These losses are not averaged and are not affected by
    `size_average`.

    The epsilon is a positive float used to avoid log(0) leading to NaN.

    Args:
        size_average (bool, optional): By default, the losses are averaged
            over observations for each minibatch. However, if the field
            size_average is set to ``False``, the losses are instead summed for
            each minibatch. Only applies when reduce is ``True``. Default: ``True``
        reduce (bool, optional): By default, the losses are averaged
            over observations for each minibatch, or summed, depending on
            size_average. When reduce is ``False``, returns a loss per batch
            element instead and ignores size_average. Default: ``True``
        epsilon (float, optional): add a small positive term to the MSE before
            taking the log to avoid NaN with log(0). Default: ``0.05``

    Shape:
        - Input: :math:`(N, *)` where `*` means any number of additional
          dimensions
        - Target: :math:`(N, *)`, same shape as the input

    Examples::

        >>> loss = neural_filters.LogMSELoss()
        >>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
        >>> target = autograd.Variable(torch.randn(3, 5))
        >>> output = loss(input, target)
        >>> output.backward()
    """
    def __init__(self, size_average=True, reduce=True, epsilon=0.05):
        super().__init__(size_average, reduce)
        self.epsilon = epsilon

    def forward(self, input, target):
        # Compute the plain MSE, then take the log of (MSE + epsilon).
        loss = super().forward(input, target)
        return torch.log(loss + self.epsilon)
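Since the class only wraps MSELoss, a quick sanity check is that its output matches log(MSE + epsilon) computed by hand. A minimal sketch, assuming a PyTorch recent enough (>= 0.4) that plain tensors are differentiable; on current releases the legacy size_average/reduce arguments still work but emit a deprecation warning:

import torch

input = torch.randn(3, 5, requires_grad=True)
target = torch.randn(3, 5)

loss_fn = LogMSELoss(epsilon=0.05)
output = loss_fn(input, target)

# Hand-computed reference: log of the mean squared error plus epsilon.
reference = torch.log(((input - target) ** 2).mean() + 0.05)
assert torch.allclose(output, reference)

output.backward()  # gradient is scaled by 1 / (mse + epsilon)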
class LogL1Loss(L1Loss):
    r"""Creates a criterion that measures the logarithm of the mean absolute value of the
    element-wise difference between input `x` and target `y`:

    :math:`{loss}(x, y) = \log(1/n \sum |x_i - y_i| + \epsilon)`

    `x` and `y` can have arbitrary shapes with a total of `n` elements each.

    The sum operation still operates over all the elements, and divides by `n`.
    The division by `n` can be avoided if one sets the constructor argument
    `size_average=False`.

    The epsilon is a positive float used to avoid log(0) leading to NaN.

    Args:
        size_average (bool, optional): By default, the losses are averaged
            over observations for each minibatch. However, if the field
            size_average is set to ``False``, the losses are instead summed for
            each minibatch. Ignored when reduce is ``False``. Default: ``True``
        reduce (bool, optional): By default, the losses are averaged or summed
            for each minibatch. When reduce is ``False``, the loss function returns
            a loss per batch element instead and ignores size_average.
            Default: ``True``
        epsilon (float, optional): add a small positive term to the mean absolute
            error before taking the log to avoid NaN with log(0). Default: ``0.05``

    Shape:
        - Input: :math:`(N, *)` where `*` means any number of additional
          dimensions
        - Target: :math:`(N, *)`, same shape as the input
        - Output: scalar. If reduce is ``False``, then
          :math:`(N, *)`, same shape as the input

    Examples::

        >>> loss = neural_filters.LogL1Loss()
        >>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
        >>> target = autograd.Variable(torch.randn(3, 5))
        >>> output = loss(input, target)
        >>> output.backward()
    """
    def __init__(self, size_average=True, reduce=True, epsilon=0.05):
        super().__init__(size_average, reduce)
        self.epsilon = epsilon

    def forward(self, input, target):
        # Compute the plain L1 loss, then take the log of (L1 + epsilon).
        loss = super().forward(input, target)
        return torch.log(loss + self.epsilon)
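The size_average/reduce constructor arguments follow the pre-1.0 PyTorch API, which has since been replaced by a single `reduction` string. As an aside, the same idea could be ported to the current API roughly as sketched below; `LogL1LossModern` is a hypothetical name, not part of this repository:

import torch
from torch.nn import L1Loss

class LogL1LossModern(L1Loss):
    """log(1/n * sum |x_i - y_i| + epsilon), using the `reduction` API."""

    def __init__(self, reduction='mean', epsilon=0.05):
        super().__init__(reduction=reduction)
        self.epsilon = epsilon

    def forward(self, input, target):
        # Adding epsilon before the log keeps a perfect fit from
        # producing log(0).
        return torch.log(super().forward(input, target) + self.epsilon)

loss = LogL1LossModern()(torch.randn(3, 5, requires_grad=True), torch.randn(3, 5))
loss.backward()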