Which learning tasks do brains use to train themselves to see?



In computer vision it is very common to use supervised tasks, where datasets have to be manually annotated by humans. Some examples are object classification (class labels), detection (bounding boxes) and segmentation (pixel-level masks). These datasets are essentially input-output pairs, which are used to train convolutional neural networks to learn the mapping from inputs to outputs via gradient descent optimization. But animals don't need anybody to show them bounding boxes or masks on top of things in order to learn to detect objects and make sense of the visual world around them. This leads me to think that brains must be performing some sort of self-supervision to train themselves to see.



What does current research say about the learning paradigm brains use to achieve such an outstanding level of visual competence? Which tasks do brains use to train themselves to be so good at processing visual information and making sense of the visual world around them? Put another way: how does the brain manage to train its neural networks without access to manually annotated datasets like ImageNet, COCO, etc. (i.e., how does the brain generate its own training examples)? Finally, can we apply these insights in computer vision?
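As a concrete illustration of the self-supervision idea, here is a minimal sketch of a "pretext task" in PyTorch, where the model manufactures its own labels by predicting how each input image was rotated (0, 90, 180 or 270 degrees), so no human annotation is needed. The tiny network, the image sizes and the random stand-in data are illustrative assumptions, not any specific published setup.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallConvNet(nn.Module):
        """Tiny CNN that classifies which rotation was applied to a 32x32 RGB image."""
        def __init__(self, num_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(64 * 8 * 8, num_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    def make_rotation_batch(images):
        """Turn an unlabeled batch into (rotated images, rotation labels): free labels."""
        labels = torch.randint(0, 4, (images.size(0),))
        rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                               for img, k in zip(images, labels)])
        return rotated, labels

    model = SmallConvNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # One training step on a fake unlabeled batch (a stand-in for real photos).
    inputs, targets = make_rotation_batch(torch.rand(16, 3, 32, 32))
    loss = F.cross_entropy(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"pretext-task loss: {loss.item():.3f}")

The features learned this way can then be reused for downstream tasks such as detection, which is roughly how self-supervised pre-training is applied in computer vision.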










Tags: machine-learning, computer-vision






Asked Apr 5 at 3:24 by Pablo Messina; edited Apr 5 at 14:11.







Comment by D.W. (Apr 5 at 17:20): Cross-posted: psychology.stackexchange.com/q/21965/11209, ai.stackexchange.com/q/11666/1794, datascience.stackexchange.com/q/48645/8560. Please do not post the same question on multiple sites. Each community should have an honest shot at answering without anybody's time being wasted.




$begingroup$
Cross-posted: psychology.stackexchange.com/q/21965/11209, ai.stackexchange.com/q/11666/1794, datascience.stackexchange.com/q/48645/8560. Please do not post the same question on multiple sites. Each community should have an honest shot at answering without anybody's time being wasted.
$endgroup$
– D.W.
Apr 5 at 17:20










3 Answers
























Answer by Allerleirauh (answered Apr 5 at 8:19, edited Apr 5 at 8:25; score 2):

Maybe this paper will give you an overview of (and an entrance to) this topic from the biological side. It is a review of the state of the art in human brain development (and its implications for clinical treatment).



Its table of contents includes, for example:



  • Stage 1: the first year, early maturation of vision and the structure of V1 neurobiology

  • Stage 2: preschool children have high variability in V1 development (1–4 years)

  • Stage 3: experience-dependent visual development in school-aged children (5–11 years)

(V1 is the primary visual cortex.)
These stages all cover three categories of milestones: visual (e.g. contrast sensitivity, contour integration), anatomical (e.g. morphology) and neurobiological (e.g. synapses, but also a lot of genetics).



Second, maybe you could also ask this question on Bioinformatics.SE, because this connection between biological examples and their computational reproduction is one of their fields.






Answer by Pedro Henrique Monforte (answered Apr 5 at 5:26; score 1):

I think this kind of question is a better fit for the Artificial Intelligence SE, but it works here as well (I guess).

So natural neural networks had a lot of time to develop through genetic algorithms (evolution). Even the complex human eye might have started with bacteria searching for light (energy) sources using simple light-intensity sensing.

Given enough time, our brains developed, and we now have about five known regions in the visual cortex, each responsible for a kind of feature (check out Mind Field).

Also, little is known about the learning process/optimization of a natural neuron, but your question is about the data used...

Well, we cluster things by their utility for survival: we detect human faces and perform person identification really well. This is one of the most advanced features of our visual cortex, and it can be traced to our social needs, which are intrinsically related to our ability to survive. It is really important for us to identify the people who are friendly to us and those who may cause us harm.

When the task is diagnosing brain diseases from imaging, CNNs are already beating our brains.

So, summarizing my answer: fitness to the environment defines what to learn, correct predictions allow us to survive and evolve, and premature deaths keep bad genes from propagating.

Our environment provides us the labels, through reinforcement learning + genetic algorithms (a toy sketch of this follows below).

One addition: we also developed the capability of propagating our knowledge (sometimes through genetic code and sometimes by teaching others).
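As a toy sketch of evolution acting as that outer training loop (the fitness function and all parameters are made-up assumptions, not a model of any real organism): a population of random weight vectors is scored by fitness, the least fit die off, and the survivors reproduce with mutation.

    import numpy as np

    rng = np.random.default_rng(0)
    POP, DIM, GENERATIONS = 50, 10, 100
    # Stand-in for "what the environment rewards"; unknown to the learners.
    target = rng.normal(size=DIM)

    def fitness(weights):
        # Higher fitness = behaviour closer to what the environment rewards.
        return -np.sum((weights - target) ** 2)

    population = rng.normal(size=(POP, DIM))
    for generation in range(GENERATIONS):
        scores = np.array([fitness(w) for w in population])
        survivors = population[np.argsort(scores)[-POP // 2:]]   # "premature deaths"
        children = survivors + rng.normal(scale=0.1, size=survivors.shape)  # mutation
        population = np.concatenate([survivors, children])

    best = max(population, key=fitness)
    print("best fitness:", fitness(best))

No per-example labels exist anywhere in this loop; the only supervision is the survival signal, which is the point of the answer above.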






Comment by Shubham Panchal (Apr 5 at 8:08): The sense of vision is also connected with the senses of touch, hearing and smell. As a result, we get more information about the environment in which we live. Also, we have two eyes; hence, we can analyse depth in images. This depth helps us to detect objects by their shadow or proximity.


















Answer by Esmailian (answered Apr 5 at 16:07, edited Apr 5 at 18:13; score 0):

I did not find a conclusive answer to your question. I present the closest content that I found, and my personal thoughts.

The closest I got was finding these well-cited papers:

1. 1997 How the brain learns to see objects and faces in an impoverished context




    Our results support psychological theories that perception is a conjoint function of current sensory input interacting with memory and possibly attentional processes.





2. 2004 The reverse hierarchy theory of visual perceptual learning




    RHT proposes that naïve performance is based on responses at high-level cortical areas, where crude, categorical-level representations of the environment are represented. Hence initial learning stages involve understanding global aspects of the task. Subsequent practice may yield better perceptual resolution as a consequence of accessing lower-level information via the feedback connections going from high to low levels (wiki page on Perceptual learning).




These lack the comprehensiveness required to answer the question. Going through the citations, I would say there is not yet a satisfying and well-received answer to your question; such an answer would usually lead to a highly-cited paper with a catchy title!



Among projects, I came across Project Prakash, which seems interesting and related:




    The goal of Project Prakash is to bring light into the lives of curably blind children and, in so doing, illuminate some of the most fundamental scientific questions about how the brain develops and learns to see (from here).




along with an interesting (but described as controversial) TED talk that shows how well recently cured blind adults manage to detect objects, emphasizing the role of motion (which object-detection methods based on a single image lack). Here is an example of the distinct objects that they detect, which is possibly worse than artificial neural networks.






Here are my thoughts regarding "the task" (which overlap with @PedroHenriqueMonforte's nicely put answer about evolution):

A "task" has an objective, a goal. What is the goal of the brain at the highest level? To serve the gene for survival and reproduction. What if the brain (eye, heart, etc.) fails at this task? The gene will be removed from the pool.

This is meta-learning: learning to learn. A pool of learners (genes that create brains that can learn to see) is constantly struggling to survive, where better (faster) learners have a higher chance of achieving the goal. This is the main supervision. At the extreme, the gene pool can get the job done by merely guessing the initial brain weights!

The most important takeaway here is that brains have been evolving for about 450 million years. I think this alone suggests that not all visual understanding happens after birth. That is, animals are born with good architectures and initial weights to begin with, analogous to being handed a network pre-trained on the task of survival and reproduction. From this perspective, visual training based on visual input would be more like fine-tuning.
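To make that pre-training analogy concrete, here is a minimal PyTorch sketch of the fine-tuning view: the pretrained torchvision backbone stands in for what evolution provides at birth, and the short training step stands in for lifetime visual experience. The backbone choice, the 10-class task and the random batch are illustrative assumptions.

    import torch
    import torch.nn as nn
    from torchvision import models

    # "Innate" weights: downloads ImageNet-pretrained parameters on first use.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False                     # keep the "evolved" features frozen
    model.fc = nn.Linear(model.fc.in_features, 10)  # fresh head for the new task

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

    # One fine-tuning step on a fake batch (a stand-in for lifetime experience).
    images = torch.rand(8, 3, 224, 224)
    labels = torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"fine-tuning loss: {loss.item():.3f}")

Only the small new head is trained here, which mirrors the claim above: most of the "visual competence" is already in the inherited weights, and experience merely adapts it.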





