Commit 6322cf95 authored Mar 23, 2018 by Francois Marelli

LogL1Loss

parent 352aed53
Showing 1 changed file with 108 additions and 0 deletions
neural_filters/LogMSELoss.py → neural_filters/log_loss.py
import torch
from torch.nn import MSELoss
from torch.nn import L1Loss


class LogMSELoss(MSELoss):
    r"""Creates a criterion that measures the logarithmic mean squared error between
    ...
@@ -45,10 +47,62 @@ class LogMSELoss(MSELoss):
    >>> output = loss(input, target)
    >>> output.backward()
    """

    def __init__(self, size_average=True, reduce=True, epsilon=0.05):
        super().__init__(size_average, reduce)
        self.epsilon = epsilon

    def forward(self, input, target):
        loss = super().forward(input, target)
        return torch.log(loss + self.epsilon)


class LogL1Loss(L1Loss):
    r"""Creates a criterion that measures the logarithm of the mean absolute value of the
    element-wise difference between input `x` and target `y`:

    :math:`{loss}(x, y) = \log( 1/n \sum |x_i - y_i| + \epsilon )`

    `x` and `y` can have arbitrary shapes with a total of `n` elements each.

    The sum operation still operates over all the elements, and divides by `n`.
    The division by `n` can be avoided if one sets the constructor argument
    `size_average=False`.

    The epsilon is a positive float used to avoid log(0) leading to NaN.
    Args:
        size_average (bool, optional): By default, the losses are averaged
            over observations for each minibatch. However, if the field
            size_average is set to ``False``, the losses are instead summed for
            each minibatch. Ignored when reduce is ``False``. Default: ``True``
        reduce (bool, optional): By default, the losses are averaged or summed
            for each minibatch. When reduce is ``False``, the loss function returns
            a loss per batch element instead and ignores size_average.
            Default: ``True``
        epsilon (float, optional): add a small positive term to the loss before
            taking the log to avoid NaN with log(0). Default: ``0.05``
    Shape:
        - Input: :math:`(N, *)` where `*` means any number of additional
          dimensions
        - Target: :math:`(N, *)`, same shape as the input
        - Output: scalar. If reduce is ``False``, then
          :math:`(N, *)`, same shape as the input
    Examples::

        >>> loss = neural_filters.LogL1Loss()
        >>> input = autograd.Variable(torch.randn(3, 5), requires_grad=True)
        >>> target = autograd.Variable(torch.randn(3, 5))
        >>> output = loss(input, target)
        >>> output.backward()
    """

    def __init__(self, size_average=True, reduce=True, epsilon=0.05):
        super().__init__(size_average, reduce)
        self.epsilon = epsilon

    def forward(self, input, target):
        loss = super().forward(input, target)
        return torch.log(loss + self.epsilon)
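
A minimal usage sketch (not part of the commit), assuming the two classes are importable from neural_filters.log_loss and a PyTorch version (>= 0.4) where tensors take requires_grad directly; the size_average/reduce arguments used above are deprecated in later releases but still accepted. It checks that each criterion is the log of the underlying reduced loss plus epsilon:

    import torch
    from torch.nn import L1Loss, MSELoss

    from neural_filters.log_loss import LogL1Loss, LogMSELoss

    input = torch.randn(3, 5, requires_grad=True)
    target = torch.randn(3, 5)

    # Each criterion is the log of the underlying reduced loss plus epsilon.
    log_l1 = LogL1Loss(epsilon=0.05)
    log_mse = LogMSELoss(epsilon=0.05)
    assert torch.allclose(log_l1(input, target), torch.log(L1Loss()(input, target) + 0.05))
    assert torch.allclose(log_mse(input, target), torch.log(MSELoss()(input, target) + 0.05))

    # With a perfect prediction the inner loss is 0, so the result is
    # log(epsilon) rather than log(0): epsilon acts as a floor on the loss.
    assert torch.allclose(log_l1(target, target), torch.log(torch.tensor(0.05)))

    output = log_l1(input, target)
    output.backward()  # the log contributes a 1 / (loss + epsilon) factor to the gradient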