



Validation accuracy vs Testing accuracy




























I am trying to get my head straight on terminology that appears confusing. I know there are three 'splits' of data used in machine learning models:

  1. Training data - train the model.

  2. Validation data - cross-validation for model selection.

  3. Testing data - estimate the generalisation error.

Now, as far as I am aware, a separate validation set is not always needed, because one can use k-fold cross-validation instead and avoid reducing one's dataset further; the results of this are known as the validation accuracy. Then, once the best model is selected, it is tested on a 33% split of the initial dataset (which has not been used for training), and the results of that would be the testing accuracy? (The sketch below is the workflow I have in mind.)
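
For concreteness, here is a rough sketch of that workflow (I am assuming scikit-learn; the dataset, the candidate models, and the 33% split are only placeholders for what I actually use):

    # Sketch of the workflow described above (assumes scikit-learn).
    # Dataset, candidate models, and the 33% split are placeholders.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split

    X, y = load_breast_cancer(return_X_y=True)

    # Hold out 33% of the data; it is never touched during model selection.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=0)

    candidates = {
        "logreg": LogisticRegression(max_iter=5000),
        "forest": RandomForestClassifier(random_state=0),
    }

    # "Validation accuracy": mean 5-fold cross-validation score on the training part.
    val_acc = {name: cross_val_score(model, X_train, y_train, cv=5).mean()
               for name, model in candidates.items()}
    best_name = max(val_acc, key=val_acc.get)

    # "Testing accuracy": the selected model, refit on the full training part,
    # scored once on the held-out 33%.
    best_model = candidates[best_name].fit(X_train, y_train)
    print(val_acc, best_name, best_model.score(X_test, y_test))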



Is this the right way around, or is it vice versa? I am finding conflicting terminology used online. I am trying to understand why my validation error is larger than my testing error, but before looking for a solution, I would like to get my terminology correct.



Thanks.







machine-learning
















asked Apr 7 at 18:26 by BillyJo_rambler











  • Please also take a look at my answer on a similar post, which explains the key differences, especially the last part on the validation set. – Esmailian, Apr 9 at 23:07




























2 Answers
























There isn't a standard terminology in this context (and I have seen long discussions and debates on this topic), so I completely understand you, but you should get used to different terminology (and assume that terminology might not be consistent, or may change, across sources).

I would like to point out a few things:

  • I have never seen people use the expression "validation accuracy" (or dataset) to refer to the test accuracy (or dataset), but I have seen people use the term "test accuracy" (or dataset) to refer to the validation accuracy (or dataset). In other words, the test (or testing) accuracy often refers to the validation accuracy, that is, the accuracy you calculate on the dataset you do not use for training, but which you use (during the training process) for validating (or "testing") the generalisation ability of your model or for "early stopping".

  • In k-fold cross-validation, people usually only mention two datasets: training and testing (or validation).

  • k-fold cross-validation is just a way of validating the model on different subsets of the data. This can be done for several reasons. For example, if you have a small amount of data, your validation (and training) dataset is quite small, so you want a better picture of the model's generalisation ability by validating it on several subsets of the whole dataset.

  • You should likely have a dataset for testing that is separate from the validation dataset, because the validation dataset can be used for early stopping, so, in a certain way, it is dependent on the training process.

I would suggest using the following terminology:

  • Training dataset: the data used to fit the model.

  • Validation dataset: the data used to validate the generalisation ability of the model, or for early stopping, during the training process.

  • Testing dataset: the data used for purposes other than training and validating.

Note that some of these datasets might overlap, but this is almost never a good thing (if you have enough data).
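
As a minimal sketch of this terminology (assuming scikit-learn; the model, the split sizes and the stopping rule are arbitrary illustrative choices, not a recommendation):

    # Three-way split: the validation set steers training (early stopping here),
    # so it is not a substitute for the untouched test set.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X_train, y_train, test_size=0.25, random_state=0)

    model = SGDClassifier(random_state=0)
    classes = np.unique(y_train)

    best_val, since_best, patience = 0.0, 0, 5
    for epoch in range(200):
        model.partial_fit(X_train, y_train, classes=classes)  # training set: fit
        val_acc = model.score(X_val, y_val)                   # validation set: monitor
        if val_acc > best_val:
            best_val, since_best = val_acc, 0
        else:
            since_best += 1
        if since_best >= patience:                            # early stopping
            break

    print(best_val, model.score(X_test, y_test))              # test set: used once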






answered Apr 7 at 18:52 (edited Apr 7 at 23:53) by nbro












  • If the testing dataset overlaps with either of the others, it is definitely not a good thing. The test accuracy must measure performance on unseen data. If any part of training saw the data, then it isn't test data, and representing it as such is dishonest. Allowing the validation set to overlap with the training set isn't dishonest, but it probably won't accomplish its task as well. (E.g., if you're doing early stopping and your validation and training sets overlap, overfitting may occur and not be detected.) – Ray, Apr 7 at 23:44

  • @Ray I didn't say it is a good thing. Indeed, see my point "You should likely have a separate (from the validation dataset) dataset for testing...". – nbro, Apr 7 at 23:46

  • You said "If that's a 'good' thing or not, it's another question." I suspected from the rest that you understood the problems that that overlap would cause, but the problems with it should be made very clear, since contaminating your test data with training samples completely ruins its value. – Ray, Apr 7 at 23:48

  • @Ray I was referring more to the overlap between the training and validation datasets. Anyway, I think it's good that you wanted to clarify or emphasise this point. I edited my answer to emphasise it. – nbro, Apr 7 at 23:51



















@nbro's answer is complete; I will just add a couple of explanations to supplement it. In more traditional textbooks, data is often partitioned into two sets: training and test. In recent years, with more complex models and an increasing need for model selection, development (validation) sets are also used. The development/validation set should have no overlap with the test set, otherwise the reported accuracy/error evaluation is not valid.

In the modern setting, the model is trained on the training set and evaluated on the validation set to see whether it is a good fit; possibly the model is tweaked, trained again, and validated again, multiple times. When the final model is selected, the test set is used to compute the reported accuracy/error. The important thing is that the test set is only touched once.
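
A bare-bones version of that loop might look like this (scikit-learn assumed; the model and the hyperparameter values are placeholders):

    # Tweak/validate several times on the dev/validation set,
    # then score the test set exactly once at the end.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X_train, y_train, test_size=0.25, random_state=0)

    best_depth, best_val_acc = None, 0.0
    for depth in (1, 2, 4, 8, 16):                    # "tweaked and trained again"
        model = DecisionTreeClassifier(max_depth=depth, random_state=0)
        val_acc = model.fit(X_train, y_train).score(X_val, y_val)
        if val_acc > best_val_acc:
            best_depth, best_val_acc = depth, val_acc

    final_model = DecisionTreeClassifier(max_depth=best_depth, random_state=0)
    final_model.fit(X_train, y_train)
    print("test accuracy:", final_model.score(X_test, y_test))  # touched once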






answered Apr 7 at 22:22 by user3089485












