Why does the real-world output of my classifier have a similar label ratio to the training data?
I trained a neural network on a balanced dataset, and it has good accuracy (~85%). But in the real world, positives appear in about 10% of cases or less. When I test the network on a set with the real-world distribution, it seems to assign more positive labels than needed, tending toward the balanced proportion of the training set.
What can be the reason for this behavior, and what should I do to solve it?
I'm using Keras and a combination of LSTM and CNN layers.
neural-network cnn lstm training class-imbalance
Would you elaborate on the clause "But in real world positives appear in about 10%"? By positive, do you mean positive labels or something else?
– Vaalizaadeh, Apr 7 at 12:17
@Vaalizaadeh, yes, positive labels; basically, one category is less represented in the real world.
– Bien, Apr 7 at 12:34
asked Apr 7 at 12:10 – Bien (edited Apr 8 at 16:11 by Esmailian)
2 Answers
What can be the reason for such behavior?
A classifier only tries to capture the feature–label relationship as accurately as it can; it neither learns nor guarantees that the ratio of predicted labels will be close to the true ratio. However, if the sampled classes (even when balanced) are good representatives of the true class regions and the classifier finds good decision boundaries, the ratios will naturally end up close. Therefore, the explanation is that the sampled negative class is not a good representative of the regions it truly occupies, meaning some regions are not sampled well, and/or the classifier is finding bad decision boundaries. Here is a visual illustration (drawn by myself):
In the case of a good training set, the predictions resemble the real-world ratio (2:1), even though the model is trained on a balanced set (1:1). In the case of a bad training set and/or a bad decision boundary, the predicted ratio (1:2) is completely incompatible with the real-world ratio (2:1).
What should I do to solve it?
If the problem is related to bad negative representatives:
- Use more negative samples in the training set, i.e. imbalanced training (you can nullify the effect of the imbalanced samples with class weights), or
- Train with balanced classes but change the decision threshold (a post-training solution). That is, instead of assigning instances with `output > 0.5` to the positive class, use a harder-to-pass threshold like `output > 0.8` to decrease the number of positive predictions. (A minimal Keras sketch of both options follows this list.)
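Both options can be written in a few lines of Keras. The snippet below is only a sketch: `model`, `X_train`, `y_train`, `X_test` and the 9:1 class weights are assumed for illustration and are not part of the original answer.

```python
import numpy as np

# Option 1: train on imbalanced (real-world-like) data, but weight the classes
# so the rare positive class still contributes enough to the loss.
# Assuming ~10% positives, weight class 1 roughly by its inverse frequency.
class_weight = {0: 1.0, 1: 9.0}
model.fit(X_train, y_train,
          epochs=10,
          batch_size=32,
          class_weight=class_weight)

# Option 2: keep the balanced training set, but use a stricter decision
# threshold at prediction time so fewer instances are labelled positive.
probs = model.predict(X_test).ravel()   # sigmoid outputs in [0, 1]
y_pred = (probs >= 0.8).astype(int)     # harder to pass than the default 0.5
```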
If the problem is related to classifier performance, we need to come up with a better classifier, which is an open-ended endeavor.
However, in my opinion, you should not select models based on the ratio of positive predictions. You should decide based on a metric such as macro-F1 (or any other suitable one). Therefore, using a validation set, a model that produces more positive predictions but has a higher macro-F1 should be preferred over a model that produces fewer positives but has a lower macro-F1.
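For instance, that comparison can be scripted with scikit-learn's `f1_score`; the candidate models `model_a` and `model_b` and the 0.5 threshold below are placeholders, not part of the answer.

```python
from sklearn.metrics import f1_score

def macro_f1(model, X_val, y_val, threshold=0.5):
    """Macro-averaged F1 of a Keras binary classifier on a validation set."""
    y_pred = (model.predict(X_val).ravel() >= threshold).astype(int)
    return f1_score(y_val, y_pred, average="macro")

# Prefer the candidate with the higher macro-F1, regardless of how many
# positives it predicts.
best_model = max([model_a, model_b], key=lambda m: macro_f1(m, X_val, y_val))
```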
EDIT:
As @BenReiniger pointed out in another post, a hidden assumption here (especially in the sketch) is that the classes are "clearly separable". This assumption becomes more justified in higher dimensions. For example, dogs and cats are clearly separable based on their images (high-dimensional) but much less so based on their length (one dimension).
answered Apr 7 at 14:10, edited Apr 11 at 14:25 – Esmailian
I like that illustration! Source?
– Ben Reiniger, Apr 7 at 21:24
@BenReiniger Thanks Ben! I drew it in Paint.
– Esmailian, Apr 7 at 23:22
@Esmailian, I would like to say that this applies to many of your illustrations. They look like something out of a blog write-up. You are a natural teacher :)
– Simon Larsson, Apr 8 at 10:20
Thanks, that is really helpful!
– Bien, Apr 8 at 10:47
@SimonLarsson Thanks man!
– Esmailian, Apr 8 at 10:51
Esmailian's answer is great. Another possible solution is intercept correction; it is more commonly used in logistic regression, but the principle applies here as well. See here or here, there is a lot of material on it online...
answered Apr 8 at 14:39 – Oren Matar
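As a rough sketch of the idea (my own illustration, not from the linked material): if the model's outputs are calibrated to the balanced training prior, you can shift the predicted log-odds by the difference between the real-world and training log-odds of the positive class.

```python
import numpy as np

def prior_correct(p, train_prior=0.5, real_prior=0.10):
    """Shift predicted probabilities from the training-set prior to the real-world prior."""
    logit = np.log(p / (1.0 - p))
    offset = (np.log(real_prior / (1.0 - real_prior))
              - np.log(train_prior / (1.0 - train_prior)))
    return 1.0 / (1.0 + np.exp(-(logit + offset)))

# p = model.predict(X_test).ravel()   # probabilities from the balanced-data model
# p_corrected = prior_correct(p)      # then threshold at 0.5 as usual
```

With a training prior of 0.5 the offset is simply log(0.1/0.9) ≈ −2.2, which pushes borderline predictions back toward the negative class.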