Hinge loss function gradient w.r.t. input prediction

For an assignment I have to implement both the Hinge loss function and its partial derivative. I have the Hinge loss function itself working, but I'm having a hard time understanding how to calculate its partial derivative w.r.t. the prediction input. I tried several approaches, but none worked.
Any help, hints, or suggestions will be much appreciated!
Here is the analytical expression for the Hinge loss function itself (for predictions p_i and ground-truth labels y_i in {-1, +1}):

L = (1 / N) * sum_i max(0, 1 - p_i * y_i)
And here is my Hinge loss function implementation:
import numpy as np

def hinge_forward(target_pred, target_true):
    """Compute the value of Hinge loss
    for a given prediction and the ground truth

    # Arguments
        target_pred: predictions - np.array of size `(n_objects,)`
        target_true: ground truth - np.array of size `(n_objects,)`
    # Output
        the value of Hinge loss for a given prediction
        and the ground truth (scalar)
    """
    output = np.sum(np.maximum(0, 1 - target_pred * target_true)) / target_pred.size
    return output
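For what it's worth, here is a quick numeric sanity check of the forward pass (the sample arrays below are made up purely for illustration):

```python
import numpy as np

# Made-up example: three predictions and +/-1 labels
target_pred = np.array([0.5, -2.0, 3.0])
target_true = np.array([1.0, -1.0, 1.0])

# Same computation as hinge_forward above
loss = np.sum(np.maximum(0, 1 - target_pred * target_true)) / target_pred.size

# Only the first sample violates the margin (0.5 * 1.0 = 0.5 < 1),
# so the loss is (1 - 0.5) / 3
print(loss)  # 0.16666...
```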
Now I need to calculate this gradient, i.e. the partial derivative dL/dp_i for each prediction:
This is what I tried for the Hinge loss gradient calculation:
def hinge_grad_input(target_pred, target_true):
    """Compute the partial derivative
    of Hinge loss with respect to its input

    # Arguments
        target_pred: predictions - np.array of size `(n_objects,)`
        target_true: ground truth - np.array of size `(n_objects,)`
    # Output
        the partial derivative of Hinge loss
        with respect to its input
        np.array of size `(n_objects,)`
    """
    # ----------------
    # try 1
    # ----------------
    # hinge_result = hinge_forward(target_pred, target_true)
    # if hinge_result == 0:
    #     grad_input = 0
    # else:
    #     hinge = np.maximum(0, 1 - target_pred * target_true)
    #     grad_input = np.zeros_like(hinge)
    #     grad_input[hinge > 0] = 1
    #     grad_input = np.sum(np.where(hinge > 0))
    # ----------------
    # try 2
    # ----------------
    # hinge = np.maximum(0, 1 - target_pred * target_true)
    # grad_input = np.zeros_like(hinge)
    # grad_input[hinge > 0] = 1
    # ----------------
    # try 3
    # ----------------
    hinge_result = hinge_forward(target_pred, target_true)
    if hinge_result == 0:
        grad_input = 0
    else:
        loss = np.maximum(0, 1 - target_pred * target_true)
        grad_input = np.zeros_like(loss)
        grad_input[loss > 0] = 1
        grad_input = np.sum(grad_input) * target_pred
    return grad_input
python machine-learning deep-learning loss-function
asked Nov 10 at 22:34 by Andrei, edited Nov 12 at 0:55 by desertnaut
1 Answer (accepted)
I've managed to solve this by using the np.where() function. Here is the code:
def hinge_grad_input(target_pred, target_true):
    """Compute the partial derivative
    of Hinge loss with respect to its input

    # Arguments
        target_pred: predictions - np.array of size `(n_objects,)`
        target_true: ground truth - np.array of size `(n_objects,)`
    # Output
        the partial derivative of Hinge loss
        with respect to its input
        np.array of size `(n_objects,)`
    """
    grad_input = np.where(target_pred * target_true < 1,
                          -target_true / target_pred.size, 0)
    return grad_input
Basically the gradient equals -y/N for every sample where pred * y < 1, and 0 otherwise.
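A useful way to validate a hand-derived gradient like this is a central finite-difference check (a minimal sketch; the data below is randomly generated, and in this example it happens to stay away from the non-differentiable kink at pred * true == 1):

```python
import numpy as np

def hinge_forward(target_pred, target_true):
    # Mean hinge loss over all samples
    return np.sum(np.maximum(0, 1 - target_pred * target_true)) / target_pred.size

def hinge_grad_input(target_pred, target_true):
    # Analytical gradient: -y/N where the margin is violated, 0 elsewhere
    return np.where(target_pred * target_true < 1,
                    -target_true / target_pred.size, 0)

rng = np.random.RandomState(0)
target_pred = rng.randn(5) * 2
target_true = np.sign(rng.randn(5))

analytic = hinge_grad_input(target_pred, target_true)

# Central finite differences, one coordinate at a time
eps = 1e-6
numeric = np.zeros_like(target_pred)
for i in range(target_pred.size):
    plus, minus = target_pred.copy(), target_pred.copy()
    plus[i] += eps
    minus[i] -= eps
    numeric[i] = (hinge_forward(plus, target_true)
                  - hinge_forward(minus, target_true)) / (2 * eps)

# The two gradients should agree to within floating-point noise
print(np.max(np.abs(analytic - numeric)))
```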
answered Nov 11 at 19:25 by Andrei