



Why does the real-world output of my classifier have a similar label ratio to the training data?





























I trained a neural network on a balanced dataset, and it has good accuracy (~85%). But in the real world, positives appear in about 10% of cases or less. When I test the network on a set with the real-world distribution, it seems to assign more positive labels than needed, tending toward the balanced proportion of the training set.
What can be the reason for this behavior, and what should I do to solve it?
I'm using Keras and a combination of LSTM and CNN layers.
































  • Would you elaborate on the clause "But in real world positives appear in about 10%"? By positive, do you mean positive labels or something else? – Vaalizaadeh, Apr 7 at 12:17










  • @Vaalizaadeh, yes, positive labels; basically one category is less represented in the real world. – Bien, Apr 7 at 12:34






















neural-network cnn lstm training class-imbalance














asked Apr 7 at 12:10 by Bien, edited Apr 8 at 16:11 by Esmailian





















2 Answers
































What can be the reason for such behavior?




A classifier only tries to capture the feature-label relationship as accurately as it can; it does not learn, nor does it guarantee, that the ratio of predicted labels will be close to the true ratio. However, if the sampled classes (even when balanced) are good representatives of the true class regions and the classifier finds good decision boundaries, the closeness of the ratios will happen naturally. Therefore, the explanation is that the sampled negative class is not a good representative of the regions it truly occupies, meaning some regions are not sampled well, and/or the classifier is finding bad decision boundaries. Here is a visual illustration (drawn by myself):



[Illustration: balanced training samples drawn from the true class regions, with good vs. bad decision boundaries and the resulting prediction ratios]



With a good training set, the predictions resemble the real-world ratio (2:1), even though the model is trained on a balanced set (1:1). With a bad training set and/or a bad decision boundary, the predictions are completely incompatible (1:2) with the real-world ratio (2:1).




What should I do to solve it?





  1. If the problem is related to poorly representative negative samples:

    1. Use more negative samples in the training set, i.e. imbalanced training (you can counter the effect of the imbalanced samples with class weights), or

    2. Train with balanced classes but change the decision threshold (a post-training fix). That is, instead of assigning instances with output > 0.5 to the positive class, use a harder-to-pass threshold such as output > 0.8 to reduce the number of positive predictions (see the sketch after this list).

  2. If the problem is related to classifier performance, we need to come up with a better classifier, which is an open-ended endeavor.
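As a rough, minimal sketch of the two remedies above (not part of the original answer), assuming a compiled Keras binary classifier `model` with a sigmoid output and arrays `X_train`, `y_train`, `X_new`; the class-weight values and the 0.8 threshold are illustrative only:

    import numpy as np

    # Remedy 1: train on the imbalanced data, but up-weight the rare positive
    # class so the loss is not dominated by the majority class (example weights).
    class_weight = {0: 1.0, 1: 9.0}  # ~10% positives -> weight them roughly 9x
    model.fit(X_train, y_train, epochs=10, batch_size=64, class_weight=class_weight)

    # Remedy 2: keep balanced training, but raise the decision threshold at
    # prediction time instead of using the default 0.5.
    probs = model.predict(X_new).ravel()   # sigmoid outputs in [0, 1]
    preds = (probs > 0.8).astype(int)      # harder-to-pass threshold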


However, in my opinion, you should not select models based on the ratio of positive predictions. You should decide based on a metric such as macro-F1 (or any other suitable metric). Therefore, using a validation set, a model that produces more positive predictions but has a higher macro-F1 should be preferred over a model that produces fewer positives but has a lower macro-F1.
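For instance, a minimal sketch of this kind of comparison with scikit-learn, assuming two candidate Keras models `model_a` and `model_b` and a validation set `X_val`, `y_val` drawn from the real-world distribution (all names are illustrative):

    from sklearn.metrics import f1_score

    def macro_f1(model, X_val, y_val, threshold=0.5):
        # Macro-averaged F1 of a binary classifier with a sigmoid output.
        preds = (model.predict(X_val).ravel() > threshold).astype(int)
        return f1_score(y_val, preds, average="macro")

    # Prefer the model (or threshold) with the higher macro-F1,
    # regardless of how many positives each one predicts.
    score_a = macro_f1(model_a, X_val, y_val)
    score_b = macro_f1(model_b, X_val, y_val)
    best = model_a if score_a >= score_b else model_b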



EDIT:



As @BenReiniger pointed out in another post, a hidden assumption here (especially in the sketch) is that the classes are "clearly separable". This assumption becomes more justified in higher dimensions. For example, dogs and cats are clearly separable based on their images (high-dimensional) but not based on their length alone (one dimension).






answered Apr 7 at 14:10 by Esmailian, edited Apr 11 at 14:25



















  • I like that illustration! Source? – Ben Reiniger, Apr 7 at 21:24






  • @BenReiniger Thanks Ben! I drew it in Paint. – Esmailian, Apr 7 at 23:22










  • @Esmailian, I would like to say that this is something that applies to many of your illustrations. They look like something out of a write-up in a blog. You are a natural teacher :) – Simon Larsson, Apr 8 at 10:20










  • Thanks, that is really helpful! – Bien, Apr 8 at 10:47










  • @SimonLarsson Thanks man! – Esmailian, Apr 8 at 10:51































Esmailian's answer is great. Another possible solution is intercept correction; it is more commonly used in logistic regression, but the principle applies here as well.
See here or here; there's a lot of material on it online...
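For reference, a minimal sketch of what prior (intercept) correction could look like for a sigmoid-output network, assuming a trained Keras model `model`, new data `X_new`, a training positive rate of 0.5 (balanced) and an estimated real-world positive rate of 0.10; the log-odds offset below is the standard prior-correction formula, not something taken from the answer's links:

    import numpy as np

    def prior_corrected_probs(model, X, train_prior=0.5, true_prior=0.10):
        # Shift the model's outputs by the difference in log-odds between
        # the real-world class prior and the training-set class prior.
        p = np.clip(model.predict(X).ravel(), 1e-7, 1 - 1e-7)
        logits = np.log(p / (1 - p))                      # back to log-odds
        offset = (np.log(true_prior / (1 - true_prior))
                  - np.log(train_prior / (1 - train_prior)))
        return 1.0 / (1.0 + np.exp(-(logits + offset)))   # corrected probabilities

    preds = (prior_corrected_probs(model, X_new) > 0.5).astype(int)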






answered Apr 8 at 14:39 by Oren Matar






















