What does it mean to take the "average" of two decision trees by 'voting'?
I have heard, in relation to the random forest algorithm, that it fits many decision trees and takes the average of them by voting. (This is related to bagging as well.)
I understand what the average means for something like $\vec{x} = [1, 2, 3], \; \bar{x} = 2$. But I don't know what it would mean if I had two decision trees.
Could anyone please provide a simple example / explanation of this averaging process for a couple of decision trees?
machine-learning random-forest decision-trees
asked Mar 30 at 17:19 by baxx
1 Answer
I think that you are mixing together two different things: random forests for regression and random forests for classification. Regression means predicting a continuous value (a number). A random forest can construct multiple regression trees, each of which predicts a number. In that case the idea is simple to understand: the numerical predictions of the trees are averaged to give a robust estimate of the true value.
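For the regression case, "averaging" is just the arithmetic mean of the trees' numerical predictions. A minimal Python sketch, with made-up predictions from three hypothetical regression trees:

```python
# Hypothetical predictions from three regression trees for a single test point
tree_predictions = [4.2, 3.8, 4.0]

# The forest's prediction is the plain arithmetic mean of the per-tree predictions
forest_prediction = sum(tree_predictions) / len(tree_predictions)
print(forest_prediction)  # 4.0
```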
However, I think that you are asking about classification, that is, predicting a nominal value (also called categorical or factor). In this case each decision tree predicts a category, and it usually does not make sense to average categories. Instead, the decision trees "vote": one counts how many times each category was predicted and takes the category that received the most votes as the prediction. There is no averaging, only counting.
Here is a simple example.
Data
V1 V2 V3 Class
A C E X
A C F X
B C F Y
B D F Y
B D E X
Decision Tree 1 uses only feature V1:
If V1 = A, predict X, otherwise predict Y
Decision Tree 2 uses only feature V2:
If V2 = C, predict X, otherwise predict Y
Decision Tree 3 uses only feature V3:
If V3 = E, predict X, otherwise predict Y
Now we want to predict the class of a new point (A, C, F):
- Decision Tree 1 sees V1 = A and predicts Class=X
- Decision Tree 2 sees V2 = C and predicts Class=X
- Decision Tree 3 sees V3 = F and predicts Class=Y
There were two votes for X and one vote for Y, so the forest predicts X,
the class that received the majority of the votes.
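To make the vote counting concrete, here is a minimal Python sketch of the example above. The three "trees" are the hand-built one-feature rules from the answer, not fitted models:

```python
from collections import Counter

# The three hand-built "trees" from the example, each looking at one feature
def tree_1(v1, v2, v3):
    return "X" if v1 == "A" else "Y"

def tree_2(v1, v2, v3):
    return "X" if v2 == "C" else "Y"

def tree_3(v1, v2, v3):
    return "X" if v3 == "E" else "Y"

# Classify the new point (A, C, F): each tree casts one vote
new_point = ("A", "C", "F")
votes = [tree(*new_point) for tree in (tree_1, tree_2, tree_3)]
print(votes)  # ['X', 'X', 'Y']

# Majority vote: the most frequently predicted category wins
prediction = Counter(votes).most_common(1)[0][0]
print(prediction)  # 'X'
```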
answered Mar 31 at 0:56 by G5W, edited Mar 31 at 9:40 by Esmailian
Some implementations instead average the probability scores across trees; see e.g. scikit-learn.org/stable/modules/ensemble.html#random-forests
– Ben Reiniger, Apr 3 at 2:44
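As a small illustration of this comment (not the exact scikit-learn implementation): when each tree outputs class probabilities rather than a single label, the forest can average those probabilities across trees and pick the class with the highest mean probability ("soft voting"). The per-tree probabilities below are made up:

```python
import numpy as np

# Hypothetical P(X) and P(Y) from three trees for one test point
tree_probas = np.array([
    [0.9, 0.1],  # tree 1
    [0.6, 0.4],  # tree 2
    [0.3, 0.7],  # tree 3
])

classes = ["X", "Y"]
mean_proba = tree_probas.mean(axis=0)            # [0.6, 0.4]
prediction = classes[int(np.argmax(mean_proba))]
print(mean_proba, prediction)                    # [0.6 0.4] X
```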