Can feedback loops occur in machine learning that cause the model to become less precise?


In discussions about ML algorithms, for instance in crime prediction, it is often claimed by non-experts that feedback loops can cause the model to become biased and give wrong results.
The claim is, roughly, that the model's predictions direct more attention to a certain type of data; when the model is retrained on the results, its predictions become skewed so that even more attention is given to the same type of data, and so on.



Is this true?



I would think that retraining the model with new data would make it more precise, regardless of how that data originated.










machine-learning

asked Mar 20 at 12:06 by Rugbrød




2 Answers






Yes, this is a real problem that manifests once the system is used by real users.

The most prominent example is the news echo chamber, accentuated by ML-based recommendation systems.

The ML algorithm sees that you like news or videos reflecting a certain point of view; you watch more of those videos, and the model becomes more convinced of your preference, so it suggests even more content with similar views.



https://en.wikipedia.org/wiki/Echo_chamber_(media)

http://theconversation.com/explainer-how-facebook-has-become-the-worlds-largest-echo-chamber-91024

https://www.theguardian.com/science/blog/2017/dec/04/echo-chambers-are-dangerous-we-must-try-to-break-free-of-our-online-bubbles

https://www.quora.com/Would-you-say-that-Quoras-generated-news-feed-suffers-from-an-echo-chamber-dilemma
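To make the loop concrete, here is a minimal sketch in Python simulating a one-user, two-topic recommender. All names and numbers are illustrative assumptions (the user mildly prefers topic A, and the model only receives feedback on items it chooses to show):

    import random

    random.seed(0)

    # Assumed ground truth: the user clicks topic A 60% of the time
    # and topic B 40% of the time when it is shown.
    true_click_rate = {"A": 0.6, "B": 0.4}

    # The model's estimated click rate per topic, learned only from
    # feedback on items it actually recommended.
    estimate = {"A": 0.5, "B": 0.5}

    for step in range(2000):
        # Always recommend the topic currently believed to be best.
        shown = max(estimate, key=estimate.get)
        clicked = random.random() < true_click_rate[shown]
        # "Retrain" on the observed click; the topic that was not shown
        # gets no feedback at all, so its estimate cannot recover.
        estimate[shown] += 0.05 * (float(clicked) - estimate[shown])

    print(estimate)
    # Once one topic's estimate pulls ahead, the other is never shown
    # again: a mild 60/40 preference hardens into 100% of recommendations.

The key point is that the next round of training data is itself a function of the current model's predictions, which is exactly the feedback loop the question asks about.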






answered Mar 20 at 12:16 by Shamit Verma











• Rugbrød (Mar 20 at 12:22): Isn't this only a problem because the user is shown the news items and views them without giving feedback on whether the prediction was true or false? The model then wrongly infers that its predictions were true.










• Shamit Verma (Mar 20 at 12:26): Users give implicit feedback by viewing some content (and ignoring the rest), and they also give explicit feedback via like/share/dislike. For example, YouTube lets you remove a suggestion and say why it was wrong.


















Yes, feedback loops can happen in much the same way in machine learning. They occur when a model's predictions affect the future labels.

Let's say we are predicting the crime rate in different neighborhoods, and one neighborhood has biased data that causes its predicted crime rate to be higher than it actually is. That prediction leads to more police presence in the neighborhood, which in turn leads to more real crime being discovered there than in areas that did not receive the extra attention. The extra discovered crime will then be present in the training data of any new model, even if the initial data error/bias is removed. The biased model has enforced its own bias and produced new data to back it up.
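As a rough illustration of why the skew can persist, here is a toy simulation under assumed numbers: two neighborhoods have identical true crime rates, patrols follow the model's predictions, and next year's labels are whatever crime the patrols discover.

    # Assumed setup: both neighborhoods share the same true crime rate,
    # but the initial training data over-reports neighborhood 0.
    TRUE_RATE = 0.10
    observed = [0.15, 0.10]   # biased labels the first model is trained on

    for year in range(5):
        total = sum(observed)
        # The "model" predicts from last year's labels, and patrols are
        # allocated in proportion to the predictions.
        patrol_share = [o / total for o in observed]
        # Discovered crime scales with patrol presence; with equal shares
        # (0.5 each) both neighborhoods would report exactly TRUE_RATE.
        observed = [TRUE_RATE * 2 * s for s in patrol_share]
        print(f"year {year}: observed = {observed[0]:.3f} vs {observed[1]:.3f}")

    # Prints 0.120 vs 0.080 every year: the original reporting error is
    # replaced after the first retraining, but the skew it created never
    # washes out, because the labels are generated by the model's own
    # patrol allocation.

Under these assumptions the loop does not even need to amplify the bias to be harmful; simply regenerating the skewed labels every year is enough to keep the model wrong indefinitely.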






answered Mar 20 at 12:37 by Simon Larsson (edited Mar 20 at 12:42)












• Rugbrød (Mar 20 at 12:45): But if you include police activity in the neighbourhood as a variable, won't that compensate for more crime being discovered?










• Simon Larsson (Mar 20 at 12:55): Probably not. The model predicts the crime rate in a neighborhood, and high police activity will probably be correlated with a higher crime rate. So adding it will likely just give future models additional signal that this is a high-crime neighborhood, when in fact it was all caused by the initial biased model.










