How to validate recommender model in healthcare?


In order to validate a recommender model, a usual approach is to create a hold-out group that receives random suggestions (similar to an A/B testing setup).
However, in healthcare applications this may not be possible, as a random suggestion can put a patient's life at risk.
Hence, what is a reasonable approach to validating the model?
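The kind of offline hold-out evaluation I mean can be sketched like this (a minimal illustration; the item names and the `precision_at_k` helper are hypothetical):

```python
# Hypothetical sketch of offline hold-out evaluation: hide some observed
# interactions, ask the model for its top-k suggestions, and measure how
# many held-out items it recovers (precision@k). Names are illustrative.

def precision_at_k(recommended, held_out, k):
    """Fraction of the top-k recommendations that appear in the held-out set."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in held_out)
    return hits / k

# Toy example: the model ranked items for one patient, and two of the
# patient's actual (held-out) items appear in its top 3.
recommended = ["item_a", "item_b", "item_c", "item_d"]
held_out = {"item_a", "item_c"}
print(precision_at_k(recommended, held_out, k=3))  # 2 of 3 hits -> 0.666...
```

This evaluates the model purely against historical data, so no patient is ever served a random suggestion.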







recommender-system data-product healthcare






edited Apr 8 at 20:47 by Brian Spiering











asked Apr 8 at 19:45 by tashuhka












  • Could you provide a little bit more detail about what sort of work you're doing? I'm assuming a lot, like that the randomness relates to group assignment and not the type of treatment itself, but there isn't much detail here.
    – Upper_Case
    Apr 8 at 21:27


























1 Answer







You should still be able to use a validation set to evaluate the model, whether or not you pursue an experimental approach. (The specifics of your model and investigation may change the details, but this answer is based on what has been posted so far.)



There is nothing wrong with A/B group assignment and testing in a medical context, with a few caveats (this list is not exhaustive):



  • The relevant clinical/medical knowledge must be in a state of
    equipoise: it is genuinely not known which approach is better.

  • Individuals should be aware that they are participating in a study
    and that they are being routed to group A or B, and they should have
    the option to decline their assignment (or, conversely, they should
    have been made aware of the experimental assignment and consented to
    participate in advance).

  • An institutional review board should evaluate your proposed
    experiment and sign off on it. This, of course, presupposes that
    you have access to such a board whose members are able to make
    those assessments.

Those can be a tall order, but you don't necessarily have to perform a prospective, double-blind experimental study in order to glean some information. A retrospective study could provide some insight as well, and your process for the validation set would be something like:



  1. Prepare your recommender model

  2. Feed your data through the model, without looking at actual outcomes

  3. Match the model's output to the actual treatment decisions to see
    whether each person's care agreed with the recommendation
    (regardless of whether anyone ever saw that recommendation)

  4. Compare the results of people that ended up going with each
    recommended approach (A vs. B), as well as those who "followed" the
    recommendations or not (Recommended-A-did-A vs. Recommended-A-did-B,
    etc.)

Retrospective studies are generally not as good as well-designed, well-executed prospective experimental studies, but they can still provide a lot of information. In situations where prospective experimentation is impossible or undesirable, the information a retrospective study provides may be the very best you can actually get.
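Under these assumptions, steps 1–4 can be sketched as follows (the field names, toy model, and data are all illustrative, not a real clinical pipeline):

```python
# Hypothetical sketch of the retrospective comparison in steps 1-4:
# run the model over historical records, then compare outcomes for
# patients whose actual treatment matched vs. diverged from the
# recommendation. All field names and data are illustrative.

from collections import defaultdict

def retrospective_comparison(records, recommend):
    """Group historical outcomes by (recommended, actually received) pair."""
    outcomes = defaultdict(list)
    for rec in records:
        suggestion = recommend(rec)         # step 2: model output only
        actual = rec["treatment_received"]  # step 3: match to reality
        outcomes[(suggestion, actual)].append(rec["outcome"])
    # step 4: average outcome per cell, e.g. Recommended-A-did-A vs -did-B
    return {pair: sum(vals) / len(vals) for pair, vals in outcomes.items()}

# Toy historical data: outcome 1 = recovered, 0 = did not.
history = [
    {"severity": 2, "treatment_received": "A", "outcome": 1},
    {"severity": 2, "treatment_received": "B", "outcome": 0},
    {"severity": 7, "treatment_received": "B", "outcome": 1},
]
toy_model = lambda rec: "A" if rec["severity"] < 5 else "B"
print(retrospective_comparison(history, toy_model))
# e.g. {('A', 'A'): 1.0, ('A', 'B'): 0.0, ('B', 'B'): 1.0}
```

Comparing the cells of that table (with appropriate caution about confounding, since treatment was not randomly assigned) is the retrospective analogue of the A/B comparison.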



















answered Apr 8 at 21:34 by Upper_Case
