


Tuning a convolutional neural net: sample size


Related:
  • Signal classification with convolution neural network
  • Convnet training error does not decrease
  • Relation between convolution in math and CNN
  • How to use the output of GridSearch?
  • How to input & pre-process images for a Deep Convolutional Neural Network?
  • Automated tuning of Hyperparameter
  • Convolution over volume in CNNs
  • understanding the filter function in convolution neural network
  • Test Loss plateau fast in Convolutional Neural Net
  • Disadvantages of hyperparameter tuning on a random sample of dataset

















I keep reading that convolutional neural networks (CNNs) perform best with lots and lots (100k+) of data points. Is there any rule of thumb, or lower limit, on the data size during the grid-search phase?

For example, if I run a CNN on 100 data points, vary just one parameter (say, add an extra layer or increase a filter size), and get better results, can I reasonably expect better results with those parameters during the actual training phase?
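
(A minimal illustrative sketch of the kind of small-sample comparison described above — not part of the original question. The use of CIFAR-10, the layer sizes, and the two-point grid are assumptions made only to keep the example runnable.)

    # Illustration only: compare two CNN variants on a 100-point subsample
    # before committing to a full training run. CIFAR-10 and all layer/grid
    # choices are assumptions, not the asker's actual setup.
    import numpy as np
    import tensorflow as tf

    (x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
    x_train = x_train.astype("float32") / 255.0

    def build_cnn(filters, extra_layer):
        layers = [
            tf.keras.layers.Conv2D(filters, 3, activation="relu",
                                   input_shape=(32, 32, 3)),
            tf.keras.layers.MaxPooling2D(),
        ]
        if extra_layer:
            layers += [tf.keras.layers.Conv2D(filters * 2, 3, activation="relu"),
                       tf.keras.layers.MaxPooling2D()]
        layers += [tf.keras.layers.Flatten(),
                   tf.keras.layers.Dense(10, activation="softmax")]
        model = tf.keras.Sequential(layers)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Draw a small random subsample used only for the search phase.
    rng = np.random.default_rng(0)
    idx = rng.choice(len(x_train), size=100, replace=False)
    x_small, y_small = x_train[idx], y_train[idx]

    results = {}
    for filters in (16, 32):
        for extra_layer in (False, True):
            model = build_cnn(filters, extra_layer)
            hist = model.fit(x_small, y_small, epochs=10,
                             validation_split=0.3, verbose=0)
            results[(filters, extra_layer)] = hist.history["val_accuracy"][-1]

    # With only 70 training / 30 validation points, expect noisy rankings.
    print(results)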







machine-learning cnn convolution hyperparameter-tuning








asked Apr 4 '18 at 15:41









Mohammad Athar




It's not true that you need a lot of images to train a conv-net; I have trained one with only 22 images as the training set and 7 for validation and test, and it still worked.
– Aditya
Apr 5 '18 at 2:35





1 Answer






If you use pre-trained weights, you need significantly less data, because the initial layers have already learned from a large dataset and you only need to fine-tune the later ones.

What you read is not strictly true: you can train on CIFAR-10 and get 90%+ accuracy, and that dataset is nowhere near 100k+ images. It depends on the complexity of the data and how similar the classes are. If they are easily separable, you need less data; if the distinctions are harder, the model needs many more examples to figure out which features separate them.

As for the grid-search question: I would say yes, if your sample is representative of the population.
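
(A minimal sketch of the pre-trained-weights route described above, added for illustration and not from the answer's author; MobileNetV2, the 96x96 input size, and the 10-class head are assumptions.)

    # Fine-tuning on limited data: freeze a pretrained backbone and train
    # only a small new classification head.
    import tensorflow as tf

    base = tf.keras.applications.MobileNetV2(
        input_shape=(96, 96, 3), include_top=False, weights="imagenet")
    base.trainable = False  # early layers keep their ImageNet features

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),  # new task head
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # Inputs should be scaled with
    # tf.keras.applications.mobilenet_v2.preprocess_input before fitting, e.g.:
    # model.fit(preprocess_input(x_small), y_small, epochs=5, validation_split=0.2)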






        answered Nov 7 '18 at 17:04









Rahul Deora
