When one model is superior in real world use?
I have an NLP neural network that I have developed with Keras for multi-label classification.
I have fit the model several times, saving the best result (by validation-accuracy score) after each set of epochs completes. All of my saved models reach 96%+ validation accuracy (according to Keras).
However, when I run these models against real-world data where I also know the outcome (effectively a second round of validation), one model in particular outperforms the rest. I can take the champion model (96.29% validation accuracy) and put it up against another model (with something like 96.18% validation accuracy), and the champion achieves 90%+ accuracy in this second round of validation while the other model, or any other model, comes nowhere near that. This one model scores at least 8 percentage points higher than all the others.
I have double-checked my methodology, and I'm nearly positive that all the models are being created with the same code and process.
Should I be concerned that this one particular model outperforms the rest? Does it indicate anything in particular about my overall methodology?
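To make the "second round of validation" concrete, here is a minimal sketch of the comparison being described. The `accuracy` function and the label arrays are illustrative assumptions; in practice the predictions would come from `model.predict()` on each saved Keras model, thresholded to label vectors.

```python
# Hypothetical sketch: comparing saved models on a second, real-world
# validation round. The label arrays below are made-up stand-ins for
# thresholded predictions from each saved Keras model.
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of exactly matching label rows (multi-label subset accuracy)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    return float(np.mean(np.all(y_true == y_pred, axis=-1)))

# Illustrative second-round data: 4 samples, 3 labels each.
y_real = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
preds_by_model = {
    "champion": np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 1, 1]]),
    "runner_up": np.array([[1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 1, 1]]),
}

# Score every saved model on the held-out real-world set and pick the best.
scores = {name: accuracy(y_real, p) for name, p in preds_by_model.items()}
best = max(scores, key=scores.get)
```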
neural-network keras nlp
asked Nov 7 '18 at 14:11 by I_Play_With_Data
1 Answer
Maybe I did not get the question, but everything looks fine: this is how you do model selection. You have several models (either the same algorithm with different parameters or different algorithms; it does not matter), and you perform cross-validation to pick the best model according to the empirical errors on the validation set. The best model wins the game and is chosen. Everything seems to be right.
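The cross-validation procedure the answer refers to can be sketched as follows. This is an illustrative NumPy implementation, not the questioner's actual pipeline; `error_fn` is a placeholder for one train-and-validate run of a model.

```python
# Sketch of k-fold cross-validation for model selection, assuming an
# `error_fn(train_idx, val_idx)` that trains on one index set and returns
# the validation error on the other (here a placeholder, not a real model).
import numpy as np

def kfold_indices(n_samples, k):
    """Yield (train_idx, val_idx) pairs for k roughly equal folds."""
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

def cross_val_error(error_fn, n_samples, k=5):
    """Mean validation error of one model over k folds."""
    errs = [error_fn(tr, va) for tr, va in kfold_indices(n_samples, k)]
    return float(np.mean(errs))
```

The model with the lowest `cross_val_error` would then be the one selected, which is the "best model wins" step described above.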
To be clear, these are all the same NN, just run at different times throughout the day. All parameters are equal, so really the only difference (I think) is the random shuffle that Keras creates for each training epoch. No other parameters or processes were changed between runs.
– I_Play_With_Data, Nov 7 '18 at 14:31
If Keras does the splitting for you, make sure it uses proper shuffling so the results stay statistically significant. If you do the splitting yourself, shuffle the data and evaluate each model n times, then look at the mean and standard deviation of the errors; that tells you which model is best. If all the models are literally the same, then you have only one model, and its empirical error is the mean over all runs. See this answer and its comment: datascience.stackexchange.com/a/40862/8878
– Kasra Manshaei, Nov 7 '18 at 14:44
It might happen that on one run the data is accidentally "too beautiful"! That's why we run several trials and look at the mean error, to be sure the results are not just chance.
– Kasra Manshaei, Nov 7 '18 at 14:45
I do shuffle the data on load, before running all my epochs.
– I_Play_With_Data, Nov 7 '18 at 14:47
Yes, and one shuffle may just happen to be well separated (if it's a classification task). In any case, you are not choosing among models, since all of them are the same. Put the mean of all the obtained errors in one basket, then try other models (e.g. an NN with a different layer architecture) and look at their errors as well. Then you can say which model is best. So far there are no "models", just one model, and a single run does not tell you anything.
– Kasra Manshaei, Nov 7 '18 at 14:49
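The "evaluate each model n times and look at mean and std" suggestion from the comments can be sketched like this. The per-run error values are made up for illustration; in practice each entry would come from one train/validate run on a freshly shuffled split.

```python
# Hedged sketch of comparing models by mean and standard deviation of the
# error over n repeated evaluations, rather than by a single validation score.
# The per-run error lists below are illustrative placeholders.
import numpy as np

def summarize_runs(errors):
    """Return (mean, std) of a list of per-run validation errors."""
    errors = np.asarray(errors, dtype=float)
    return float(errors.mean()), float(errors.std())

# Illustrative per-run errors for two models over n = 5 shuffled evaluations.
runs = {
    "model_a": [0.04, 0.05, 0.03, 0.05, 0.03],
    "model_b": [0.02, 0.09, 0.01, 0.10, 0.03],
}
stats = {name: summarize_runs(e) for name, e in runs.items()}
```

Two models can have similar mean errors while one has a much larger spread; the spread is exactly what a single lucky validation split hides.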
answered Nov 7 '18 at 14:21 by Kasra Manshaei