
Difference in model performance measures of train and test data sets



























I am using the CART classification technique, dividing a dataset into train and test sets. As model performance measures (MPMs) I use misclassification error, KS by rank ordering, AUC, and Gini. The problem I am facing is that the train and test MPM values are quite far apart.




  • Dataset


  • Metadata

I have tried minsplit values from 20 to 1400 and minbucket values from 5 to 100, but couldn't get the expected results. I have also tried oversampling/undersampling with the ROSE package, but without any improvement; in fact, the misclassification error increased a lot. The following code gave me the best values, but they were still not good enough.
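Rather than hand-tuning minsplit and minbucket, the cross-validation that rpart already runs (the xval = 10 folds requested in the code below) can guide the choice of cp. A minimal sketch, assuming a fitted rpart object named cartModel as in the code that follows:

```r
# The CP table lists, for each candidate cp, the number of splits,
# the resubstitution error, and the cross-validated error (xerror);
# a common rule is to prune at the cp whose xerror is near the minimum
printcp(cartModel)

# Same information as a plot; pick cp just under the elbow of the curve
plotcp(cartModel)
```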



#Loading the rpart library (needed for rpart, rpart.control, prune)
library(rpart)

#Reading Data
pdata = read.csv("PL_XSELL.csv", header = TRUE)

#Converting ACC_OP_DATE from type factor to date
pdata$ACC_OP_DATE <- as.Date(pdata$ACC_OP_DATE, format = "%d-%m-%Y")

#Partitioning the data into training and test datasets
set.seed(2000)
n = nrow(pdata)
split = sample(c(TRUE, FALSE), n, replace = TRUE, prob = c(0.70, 0.30))
ptrain = pdata[split, ]
ptest = pdata[!split, ]

#CART Model
#Taking the minsplit, minbucket values as low as possible, so that pruning
#can be done later. Higher values didn't allow any scope for pruning
r.ctrl = rpart.control(minsplit = 20, minbucket = 5, cp = 0, xval = 10)

#Calling the rpart function to build the tree
cartModel <- rpart(formula = TARGET ~ .,
                   data = ptrain[, -1], method = "class",
                   control = r.ctrl)

#Pruning the tree
cartModel <- prune(cartModel, cp = 0.00225, "CP")

#Predicting class and scores on the training set
ptrain$predict.class <- predict(cartModel, ptrain, type = "class")
ptrain$predict.score <- predict(cartModel, ptrain, type = "prob")
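The code above scores only the training set. Below is a minimal sketch of scoring the held-out test set the same way and recomputing two of the reported measures; it assumes a binary TARGET column and uses the pROC package for AUC, neither of which appears in the question:

```r
library(pROC)

# Score the test set with the same pruned tree
ptest$predict.class <- predict(cartModel, ptest, type = "class")
ptest$predict.score <- predict(cartModel, ptest, type = "prob")

# Misclassification error: share of wrongly predicted labels
mce_train <- mean(ptrain$predict.class != ptrain$TARGET)
mce_test  <- mean(ptest$predict.class  != ptest$TARGET)

# AUC from the predicted probability of the positive class
# (second column of the probability matrix); Gini = 2*AUC - 1
auc_test  <- auc(roc(ptest$TARGET, ptest$predict.score[, 2]))
gini_test <- 2 * as.numeric(auc_test) - 1
```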


Results I got:

Train data: misclassification error 0.103, AUC 0.679, KS 0.259, Gini 0.313

Test data: misclassification error 0.113, AUC 0.664, KS 0.226, Gini 0.307



Is this due to the dataset, or am I doing something wrong? I am new to data analytics. This is part of an academic project, so I need to use the CART technique only. I will post separate questions for random forests and neural networks. Kindly help.





































Tags: machine-learning, classification, r, classifier






edited Mar 25 at 14:24 by ebrahimi

asked Mar 24 at 23:45 by user2268901




















1 Answer


















Your model suffers from slight overfitting, but it doesn't seem too dramatic.

Performance on the train set is almost always better than on the test set when the split is randomly sampled (given statistically significant volumes).

You may be able to reduce the performance gap by controlling the cp parameter: try setting a higher cp when you prune the tree (e.g. 0.01), or use the maxdepth parameter, which prunes according to the depth of the tree.
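The two levers described above can be sketched as follows; the cp = 0.01 and maxdepth = 5 values are illustrative, not recommendations:

```r
# (a) prune the existing tree at a higher complexity parameter
prunedModel <- prune(cartModel, cp = 0.01)

# (b) refit with a hard cap on tree depth instead
r.ctrl2 <- rpart.control(minsplit = 20, minbucket = 5,
                         cp = 0, xval = 10, maxdepth = 5)
cartModel2 <- rpart(TARGET ~ ., data = ptrain[, -1],
                    method = "class", control = r.ctrl2)
```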



























  • I will try that, but increasing cp to prune the tree reduces KS below 0.20, which is not desirable. Also, I've heard the MPMs should differ by less than 10% between the train and test sets. What difference is acceptable to you?
    – user2268901, Mar 25 at 12:02
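The "within 10%" rule of thumb from the comment can be checked directly against the numbers reported in the question:

```r
# Relative train/test gap for each reported measure
train <- c(mce = 0.103, auc = 0.679, ks = 0.259, gini = 0.313)
test  <- c(mce = 0.113, auc = 0.664, ks = 0.226, gini = 0.307)
round(abs(test - train) / train, 3)
# KS has the largest relative gap (about 12.7%) and is the only
# measure outside the 10% rule of thumb; the others are within it
```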












answered Mar 25 at 10:39 by VD93










