How to validate a recommender model in healthcare?
To validate a recommender model, a usual approach is to create a hold-out group that receives random suggestions (similar to an A/B testing setup).
However, in healthcare applications this is not possible, because a random suggestion can put a patient's life at risk.
What, then, is a reasonable approach to validating the model?
recommender-system data-product healthcare
asked Apr 8 at 19:45 by tashuhka, edited Apr 8 at 20:47 by Brian Spiering
Could you provide a little more detail about what sort of work you're doing? I'm assuming a lot, for example that the randomness relates to group assignment and not to the type of treatment itself, but there isn't much detail here.
– Upper_Case
Apr 8 at 21:27
1 Answer
You should still be able to use a validation set to evaluate the model, whether or not you pursue an experimental approach. (Specific features of your model and investigation may change the details, but this is based on what has already been posted.)
There is nothing wrong with A/B group assignment and testing in a medical context, with a few caveats (this list is not exhaustive):
- The relevant clinical/medical knowledge must be in a state of equipoise: it is not already clear that one approach is better than another, or which is better is genuinely not known.
- Individuals should be aware that they are participating in a study and that they are being routed to group A or B, and they should have the option to decline their assignment (or, conversely, they have been made aware of the experimental assignment and have consented to participate in advance).
- An institutional review board should evaluate your proposed experiment and sign off on it. This, of course, presupposes that you have access to such a board, composed of members able to make those assessments.
Those can be a tall order, but you don't necessarily have to perform a prospective, double-blind experimental study to glean some information. A retrospective study could provide insight as well, and your process for the validation set would be something like:
- Prepare your recommender model.
- Feed your historical data through the model, without looking at the actual outcomes.
- Match the model's output to the actual outcomes to see whether or not people followed the recommendation (whether they ever saw that recommendation or not).
- Compare the results of people who ended up receiving each recommended approach (A vs. B), as well as those who "followed" the recommendations versus those who did not (Recommended-A-did-A vs. Recommended-A-did-B, etc.).
Retrospective studies are generally not as good as well-designed, well-executed prospective experimental studies, but they can still provide a lot of information. In situations where prospective experimentation is impossible or undesirable, the information a retrospective study provides may be the best you can actually get.
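As a concrete illustration of the retrospective procedure above, here is a minimal Python sketch. All names in it are assumptions for illustration, not from the original post: historical records are taken to be a pandas DataFrame with hypothetical columns features, treatment_given (the care the patient actually received), and outcome (1 = good, 0 = bad), and the recommender is taken to expose a hypothetical recommend(features) method returning "A" or "B".

```python
import pandas as pd

def retrospective_validation(records: pd.DataFrame, model) -> pd.DataFrame:
    """Score historical cases with the model, then compare outcomes for each
    (recommended, actually given) treatment pair, e.g. Recommended-A-did-A
    vs. Recommended-A-did-B. Column names and the model API are hypothetical."""
    df = records.copy()

    # Steps 1-2: feed historical data through the model, without peeking at outcomes.
    df["recommended"] = [model.recommend(f) for f in df["features"]]

    # Step 3: mark whether the care actually given matched the recommendation.
    df["followed"] = df["recommended"] == df["treatment_given"]

    # Step 4: group size and good-outcome rate for every (recommended, given) cell.
    return (
        df.groupby(["recommended", "treatment_given"])["outcome"]
          .agg(n="count", good_outcome_rate="mean")
          .reset_index()
    )
```

Note that because treatment in retrospective data was not assigned at random, differences between these cells can reflect confounding (for example, sicker patients systematically receiving one treatment) rather than the quality of the recommendations, which is part of why retrospective evidence is weaker than a prospective trial.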
answered Apr 8 at 21:34 by Upper_Case