Why do we need row sampling in random forests?
In random forests, where our estimators are decision trees, we do column (feature) sampling without replacement within an estimator, and with replacement between estimators. This is perfectly fine, since we are trying to reduce the high variance of individual decision trees.
But why do we also need row sampling?
Usually, the more data, the better a model can learn. Even if I have no computational resource limitations, why do we have to do row sampling for each estimator in a random forest classifier?
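For concreteness, here is a minimal sketch (my own illustration, assuming scikit-learn's RandomForestClassifier) of the two knobs in question: `bootstrap` toggles per-tree row sampling, and `max_features` sets how many features are considered at each split.

```python
# Minimal sketch (assumes scikit-learn): where row sampling and per-split
# feature sampling are controlled in RandomForestClassifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# bootstrap=True  -> each tree is grown on a bootstrap (row) sample
# bootstrap=False -> each tree sees every row; only feature sampling remains
for bootstrap in (True, False):
    rf = RandomForestClassifier(
        n_estimators=200,
        bootstrap=bootstrap,   # row sampling on/off
        max_features="sqrt",   # features considered at each split
        random_state=0,
    )
    score = cross_val_score(rf, X, y, cv=5).mean()
    print(f"bootstrap={bootstrap}: CV accuracy = {score:.3f}")
```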
random-forest decision-trees boosting
asked 2 days ago by InAFlash
2 Answers
I think it is a way to reduce bias. If you're training a Random Forest with 100 trees, then you will grow these trees with (potentially) 100 different training sets. You can achieve the "wisdom of the crowds" since there is a crowd formed by these training sets.
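A rough sketch of that idea (my own illustration, not part of the original answer): bag decision trees by hand, so each of the 100 trees gets its own bootstrap training set and the "crowd" votes.

```python
# Hand-rolled bagging sketch (assumes scikit-learn/NumPy): each tree is grown
# on its own bootstrap sample, and the ensemble takes a majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

trees = []
for _ in range(100):                            # 100 trees -> 100 training sets
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap row sample
    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# Majority vote of the "crowd"
votes = np.mean([t.predict(X) for t in trees], axis=0)
ensemble_pred = (votes >= 0.5).astype(int)
print("ensemble training accuracy:", (ensemble_pred == y).mean())
```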
answered 2 days ago by Matteo Felici (new contributor)
– Esmailian (2 days ago): I think the "wisdom of the crowds" analogy is problematic, since it refers to the benefit of a larger number of individuals, whereas row sampling is about the benefit of a smaller number of individuals per tree, given that the number of trees (weak learners) is fixed.
– Matteo Felici (2 days ago): Here the larger number of individuals comes from each tree having a different training set; that's what I mean by "the crowd".
First, I think your understanding of "column sampling" is incorrect. A random forest tries a random subset of features at each split; it does not sample features without replacement within an individual tree.
Random forests sample rows with replacement (bootstrap samples) to reduce the correlation between the decision trees. Think about it: if you didn't do this, then even though each split is based on only a subset of features, your trees would end up looking fairly similar (or at least more similar than if you bootstrapped). You do have somewhat higher bias, since each tree is grown on only about 63% of the unique rows, but the decrease in variance from having less correlated trees makes up for it.
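As a quick sanity check of that ~63% figure (a NumPy simulation added here for illustration): the expected fraction of distinct rows in a bootstrap sample of size n is 1 - (1 - 1/n)^n, which tends to 1 - 1/e ≈ 0.632.

```python
# Quick simulation (assumes NumPy) of the fraction of unique rows in a
# bootstrap sample; it converges to 1 - 1/e ~= 0.632 as n grows.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
fractions = [
    len(np.unique(rng.integers(0, n, size=n))) / n  # unique rows in one bootstrap
    for _ in range(50)
]
print(f"mean fraction of unique rows: {np.mean(fractions):.3f}")  # ~0.632
print(f"theoretical limit 1 - 1/e:    {1 - 1/np.e:.3f}")
```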
answered 2 days ago by astel