
Too many inputs = overfitting?



First question: can I mix different types of inputs, for example height and age (my inputs are normalized, of course)? In general, can a neural network take a mix of different input types?

Second question: can too many different inputs cause overfitting?

I am using 120 input neurons and 20,000 training examples, and the model overfits at about 53% accuracy (which is bad)...

Thank you.







Tags: neural-network, classification, feature-engineering, overfitting, feature-construction






asked Jun 24 '18 at 23:09 by Fang 1Gao; edited Jun 25 '18 at 3:53 by ebrahimi











  • What do you mean by "overfitting at 53%"? Do you mean your test accuracy is just 53% while your training accuracy is good?
    – ab123, Jun 25 '18 at 7:11

  • @ab123 I mean that when training accuracy is around 53%, my validation accuracy is also around 53%; but when training accuracy goes above 53%, my validation accuracy drops to about 40%, and so on.
    – Fang 1Gao, Jun 25 '18 at 15:37
















2 Answers



Yes, you can mix different sorts of inputs as long as the scales of the features are similar, which is achieved by normalising the feature vectors.
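
For concreteness, here is a minimal sketch of that normalisation step with scikit-learn; the height/age values and the choice of StandardScaler are illustrative assumptions, not something stated in the question.

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    # Hypothetical feature matrix: each row is [height_cm, age_years].
    # The two raw columns live on very different scales.
    X = np.array([[170.0, 25.0],
                  [155.0, 62.0],
                  [183.0, 34.0],
                  [162.0, 47.0]])

    # Standardise each column to zero mean and unit variance so the
    # network sees comparably scaled inputs.
    X_scaled = StandardScaler().fit_transform(X)

    print(X_scaled.mean(axis=0))  # approximately [0, 0]
    print(X_scaled.std(axis=0))   # approximately [1, 1]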



I assume you mean too many features when you say "too much input".



If you mean the size (number of training examples) of the input data, that is not directly related to overfitting. Overfitting depends on model complexity: it happens when the model tries to fit the noise in the input data and hence becomes so specific that it can't generalize well to new, unseen data.



Any model that is "sufficiently" complex (e.g. one with many hidden layers, a large number of neurons in each layer, and unregularized weights) can easily converge to a very small loss on the training data (unless it converges to a sub-optimal local minimum), but will give poor accuracy on test data. On the other hand, a lack of data also often leads to overfitting, because the model has to learn from very few, less diverse examples. It's like showing a child a sample of balls containing only white and orange table-tennis balls and then asking them to identify a blue ball.
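
To make the point about unregularised weights concrete, here is a small sketch of a Keras model that keeps complexity in check with L2 weight penalties and dropout; the layer sizes, penalty strength, and binary output are arbitrary assumptions for the example rather than a recommendation.

    import tensorflow as tf
    from tensorflow.keras import layers, regularizers

    # A deliberately modest network for 120 input features.
    # L2 penalties and dropout discourage the model from fitting noise.
    model = tf.keras.Sequential([
        layers.Input(shape=(120,)),
        layers.Dense(64, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dropout(0.3),
        layers.Dense(32, activation="relu",
                     kernel_regularizer=regularizers.l2(1e-4)),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),  # assumes a binary target
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])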



Too many features can lead to overfitting because they increase model complexity. There is a greater chance of redundancy among the features, and of features that are not related to the prediction target at all.



For example, if you're predicting the quality of a tennis ball, the colour of the ball is an irrelevant feature, but the network will still learn from the training examples: since people like to play with yellow balls, those balls get used more often and don't last long, so there is a chance the network picks up a spurious link between colour and quality.







answered Jun 25 '18 at 7:45 by ab123; edited Jun 25 '18 at 7:52
























Based on my experience so far, having too many features as inputs to your NN tends to degrade performance (full disclaimer: I'm no expert, but smarter people than me have coined a term for this, the curse of dimensionality). Here is a paragraph I took from the Medium article Curse of dimensionality and feature reduction:




"The curse of dimensionality occurs because the sample density decreases exponentially with the increase of the dimensionality."




Good, now that we know that having too many features (or a feature-to-sample ratio that grows too large) is bad for model performance, what can we do to solve it?

Right now I can think of 3 ways (a rough sketch of the first two follows the list):

1. Feature Selection

2. Feature Extraction

3. Ensemble learning over different subsets of those features (yummmyyy :) )
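
A rough sketch of the first two options with scikit-learn; the synthetic data and the choice of 30 selected features / 30 components are placeholders, not tuned values.

    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_classif

    # Synthetic stand-in for a 120-feature, 20,000-sample problem.
    X, y = make_classification(n_samples=20000, n_features=120,
                               n_informative=30, random_state=0)

    # 1. Feature selection: keep the 30 features most associated with the target.
    X_selected = SelectKBest(score_func=f_classif, k=30).fit_transform(X, y)

    # 2. Feature extraction: project onto the first 30 principal components.
    X_extracted = PCA(n_components=30).fit_transform(X)

    print(X_selected.shape, X_extracted.shape)  # (20000, 30) (20000, 30)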







answered Apr 5 at 16:44 by jetychill; edited Apr 5 at 17:28 by Stephen Rauch


























