Hinge loss function gradient w.r.t. input prediction

For an assignment I have to implement both the Hinge loss and its partial derivative calculation functions. I got the Hinge loss function itself, but I'm having a hard time understanding how to calculate its partial derivative w.r.t. the prediction input. I tried different approaches, but none worked.



Any help, hints, suggestions will be much appreciated!



Here is the analytical expression for the Hinge loss function itself:



L(p, y) = (1/N) * sum_i max(0, 1 - p_i * y_i),    where N = n_objects and the labels y_i are in {-1, +1}



And here is my Hinge loss function implementation:



import numpy as np

def hinge_forward(target_pred, target_true):
    """Compute the value of Hinge loss
    for a given prediction and the ground truth
    # Arguments
        target_pred: predictions - np.array of size `(n_objects,)`
        target_true: ground truth - np.array of size `(n_objects,)`
    # Output
        the value of Hinge loss
        for a given prediction and the ground truth
        scalar
    """
    output = np.sum((np.maximum(0, 1 - target_pred * target_true)) / target_pred.size)

    return output
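
For reference, a quick sanity check on toy data (the values here are arbitrary, chosen for illustration, with labels assumed to be in {-1, +1}):

toy_pred = np.array([0.5, -2.0, 3.0])
toy_true = np.array([1.0, 1.0, -1.0])

# margins 1 - p*y = [0.5, 3.0, 4.0], all positive,
# so the loss is (0.5 + 3.0 + 4.0) / 3 = 2.5
print(hinge_forward(toy_pred, toy_true))  # 2.5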


Now I need to calculate this gradient:



∂L/∂p_i = -y_i / N  if p_i * y_i < 1,  else 0    (for each i = 1, ..., N)
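
For instance, with the toy data above, p*y = [0.5, -2.0, -3.0], so every sample satisfies p*y < 1 and the gradient should come out to [-1/3, -1/3, +1/3] (here N = 3). At the kink p*y = 1 the loss is not differentiable; taking 0 there is the usual subgradient convention.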



This is what I tried for the Hinge loss gradient calculation:



def hinge_grad_input(target_pred, target_true):
    """Compute the partial derivative
    of Hinge loss with respect to its input
    # Arguments
        target_pred: predictions - np.array of size `(n_objects,)`
        target_true: ground truth - np.array of size `(n_objects,)`
    # Output
        the partial derivative
        of Hinge loss with respect to its input
        np.array of size `(n_objects,)`
    """
    # ----------------
    # try 1
    # ----------------
    # hinge_result = hinge_forward(target_pred, target_true)

    # if hinge_result == 0:
    #     grad_input = 0
    # else:
    #     hinge = np.maximum(0, 1 - target_pred * target_true)
    #     grad_input = np.zeros_like(hinge)
    #     grad_input[hinge > 0] = 1
    #     grad_input = np.sum(np.where(hinge > 0))
    # ----------------
    # try 2
    # ----------------
    # hinge = np.maximum(0, 1 - target_pred * target_true)
    # grad_input = np.zeros_like(hinge)

    # grad_input[hinge > 0] = 1
    # ----------------
    # try 3
    # ----------------
    hinge_result = hinge_forward(target_pred, target_true)

    if hinge_result == 0:
        grad_input = 0
    else:
        loss = np.maximum(0, 1 - target_pred * target_true)
        grad_input = np.zeros_like(loss)
        grad_input[loss > 0] = 1
        grad_input = np.sum(grad_input) * target_pred

    return grad_input









      python machine-learning deep-learning loss-function






asked Nov 10 at 22:34 by Andrei, edited Nov 12 at 0:55 by desertnaut
          1 Answer
I've managed to solve this by using the np.where() function. Here is the code:



def hinge_grad_input(target_pred, target_true):
    """Compute the partial derivative
    of Hinge loss with respect to its input
    # Arguments
        target_pred: predictions - np.array of size `(n_objects,)`
        target_true: ground truth - np.array of size `(n_objects,)`
    # Output
        the partial derivative
        of Hinge loss with respect to its input
        np.array of size `(n_objects,)`
    """
    grad_input = np.where(target_pred * target_true < 1, -target_true / target_pred.size, 0)

    return grad_input


Basically, differentiating each term max(0, 1 - p*y) / N with respect to p gives -y/N when p*y < 1 and 0 otherwise, so the gradient equals -y/N for all the cases where target_pred * target_true < 1, otherwise 0.
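
A quick way to validate this is a numerical gradient check against hinge_forward using central differences (a minimal sketch; the test point is arbitrary and deliberately away from the kink at p*y == 1):

import numpy as np

def numeric_grad(f, x, eps=1e-6):
    # Central-difference approximation of the gradient of a scalar-valued f.
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        step = np.zeros_like(x, dtype=float)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

target_pred = np.array([0.5, -2.0, 3.0])
target_true = np.array([1.0, 1.0, -1.0])

analytic = hinge_grad_input(target_pred, target_true)
numeric = numeric_grad(lambda p: hinge_forward(p, target_true), target_pred)
print(analytic)                         # [-1/3, -1/3, 1/3]
print(np.allclose(analytic, numeric))   # True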






answered Nov 11 at 19:25 by Andrei


























