PyTorch: Loss function for binary classification


I'm fairly new to PyTorch and the neural-network world. Below is a code snippet from a binary classification task done with a simple three-layer network:



import torch
import torch.nn as nn

n_input_dim = X_train.shape[1]
n_hidden = 100  # Number of hidden nodes
n_output = 1    # Number of output nodes = 1 for a binary classifier

# Build the network
model = nn.Sequential(
    nn.Linear(n_input_dim, n_hidden),
    nn.ELU(),
    nn.Linear(n_hidden, n_output),
    nn.Sigmoid())

x_tensor = torch.from_numpy(X_train.values).float()
# tensor([[ -1.0000,  -1.0000,  -1.0000,  ..., -99.0000, -99.0000, -99.0000],
#         [ -1.0000,  -1.0000,  -1.0000,  ...,   0.1538,   5.0000,   0.1538],
#         [ -1.0000,  -1.0000,  -1.0000,  ..., -99.0000,   6.0000,   0.2381],
#         ...,
#         [ -1.0000,  -1.0000,  -1.0000,  ..., -99.0000, -99.0000, -99.0000],
#         [ -1.0000,  -1.0000,  -1.0000,  ..., -99.0000, -99.0000, -99.0000],
#         [ -1.0000,  -1.0000,  -1.0000,  ..., -99.0000, -99.0000, -99.0000]])
y_tensor = torch.from_numpy(Y_train).float()
# tensor([0., 0., 1., ..., 0., 0., 0.])

# Loss computation
loss_func = nn.BCELoss()

# Optimizer
learning_rate = 0.0001
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

train_loss = []
iters = 500
for i in range(iters):
    y_pred = model(x_tensor)
    # squeeze predictions from [N, 1] to [N] so they match the target's shape
    loss = loss_func(y_pred.squeeze(1), y_tensor)
    print("Loss in iteration:", i, loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    train_loss.append(loss.item())


What I'm not sure about in the above is that the loss is computed between y_pred, a set of probabilities produced by the model on the training data, and y_tensor, which holds the binary 0/1 labels. Is this way of computing the loss correct for a classification problem in PyTorch? Ideally, shouldn't the loss be computed between two sets of probabilities? And if this is fine, does the loss function, BCELoss here, rescale its input in some manner?

Any insights will be highly appreciated.

loss-function pytorch

asked Apr 8 at 17:11 by raul

1 Answer

You are right that cross-entropy is computed between two distributions. However, in the case of the y_tensor values, we know for sure which class each example actually belongs to; that is the ground truth. So you can think of the binary labels as (degenerate) probability distributions over the possible classes, in which case the loss function is absolutely correct and the way to go for this problem. Hope that helps.

answered Apr 8 at 17:43 by Sajid Ahmed
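
To illustrate the point in the answer, here is a minimal sketch (an editorial example, not from the original answer, using made-up probabilities p and labels y) showing that nn.BCELoss applied to binary 0/1 targets computes exactly the cross-entropy -[y*log(p) + (1-y)*log(1-p)] between each predicted probability and its ground-truth label, averaged over the batch:

import torch
import torch.nn as nn

p = torch.tensor([0.9, 0.2, 0.7])   # hypothetical predicted probabilities (sigmoid outputs)
y = torch.tensor([1.0, 0.0, 1.0])   # hypothetical binary ground-truth labels

bce = nn.BCELoss()(p, y)            # built-in binary cross-entropy
manual = -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()
print(bce.item(), manual.item())    # both print ~0.2284

As for the scaling question: nn.BCELoss does not rescale its input. It expects probabilities already in [0, 1], which in the question come from the final nn.Sigmoid layer. PyTorch also provides nn.BCEWithLogitsLoss, which fuses the sigmoid and the loss into one numerically more stable operation.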