
Is a good shuffle random state for training data really good for the model?


I'm using Keras to train a binary classifier neural network. To shuffle the training data I am using the `shuffle` function from scikit-learn.

I observe that for some values of `shuffle_random_state` (the seed passed to `shuffle()`) the network gives really good results (~86% accuracy), while for others it does not (~75% accuracy). So I run the model with `shuffle_random_state` values 1-20 and pick the random state that gives the best accuracy for the production model.

I was wondering: is this a good approach, and is the network actually learning better with those "good" shuffle random states?
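A minimal sketch of the setup described above, using synthetic data and scikit-learn's `LogisticRegression` as a stand-in for the Keras network (both are assumptions for illustration). The only thing that changes between runs is the shuffle seed, yet the validation score moves:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import shuffle

# Synthetic stand-in for the real dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

scores = []
for seed in range(1, 21):                      # the 1-20 shuffle_random_states tried in the question
    Xs, ys = shuffle(X, y, random_state=seed)  # the seed is the only thing that varies
    split = int(0.8 * len(Xs))                 # fixed 80/20 train/validation cut
    clf = LogisticRegression(max_iter=1000).fit(Xs[:split], ys[:split])
    scores.append(clf.score(Xs[split:], ys[split:]))

# The spread between the best and worst seed reflects the split, not the model.
print(f"val accuracy: min={min(scores):.2f}, max={max(scores):.2f}")
```

The spread between `min` and `max` here is produced by the split alone, which is exactly the question: is a seed that lands on the high end telling us anything about the model?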





















  • The accuracy you are mentioning — is it on the validation split? If so, what is the accuracy on the training split? – Antonio Jurić, Feb 18 at 8:56
  • The mentioned accuracy is on the validation split. – Chirag Gupta, Feb 18 at 8:57
  • What is the accuracy on the training split in those two cases? – Antonio Jurić, Feb 18 at 8:58
  • Training loss and accuracy are almost the same in both cases, and go to 100% if training continues. The rate of increase is also almost the same in both cases (on the training data). – Chirag Gupta, Feb 18 at 9:08















machine-learning neural-network keras scikit-learn






asked Feb 18 at 6:10 by Chirag Gupta, edited Feb 18 at 6:26














1 Answer


















If this split is a train/validation split (not a hold-out test set), then you should be doing cross-validation. You will be overly optimistic about the performance of your model for this set of features and hyperparameters if you try to split the data "just right"; cross-validation gives a more accurate portrayal regardless of the split. If this is a train/test split (test being a hold-out test set), it is very bad practice, since you are informing your decision on how to make the split based on the performance of the test set.
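A hedged sketch of the cross-validation approach recommended here, again with synthetic data and `LogisticRegression` as an illustrative stand-in for the Keras model (with Keras you would wrap the network in a scikit-learn-compatible estimator, e.g. a `KerasClassifier`, and pass it in the same way):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data for illustration.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# 5-fold CV: every sample is used for validation exactly once, so no
# single "lucky" split can inflate the reported accuracy.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Reporting the mean and standard deviation across folds describes how the model generalizes, rather than how favorable one particular split happens to be.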


















answered Feb 18 at 14:53 by Wes


























