What happens if GBM parameters (e.g., learning rate) vary as the training progresses?
In neural networks there is the idea of a "learning rate schedule," which changes the learning rate as training progresses.
This made me wonder: what would be the impact of varying a GBM's parameters as a function of the number of trees fit so far?
Take the learning rate as an example. For GBMs using the MART algorithm, the contribution of each tree is weighted by a function of the error and the learning rate: trees fit early on have a higher impact, while trees fit later have less. What if the learning rate were a function of the number of trees $N$, such as $\exp(-aN)$, where $a$ is the decay parameter of the learning rate?
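To make the proposal concrete, here is a minimal sketch of such a schedule in plain Python; the name `lr_schedule` and the decay constant are purely illustrative, not part of any library:

```python
import math

def lr_schedule(boosting_round, base_lr=0.1, a=0.01):
    """Proposed decay: the learning rate shrinks as exp(-a * N),
    where N is the number of trees fit so far."""
    return base_lr * math.exp(-a * boosting_round)

# Tree 0 contributes at rate 0.1; tree 100 at roughly 0.037.
print([round(lr_schedule(n), 4) for n in (0, 10, 100)])
```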
Other parameters could vary as well. For example, the maximum depth of each tree could start out high and then decrease as training progresses. Going beyond the tree parameters, other candidates are the subsample percentage when bagging is used, or parameters of the loss function (e.g., the Huber loss parameter $\delta$); a sketch of this follows.
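LightGBM already exposes a hook for this kind of experiment: its `reset_parameter` callback accepts, for each named parameter, either a list of per-round values or a callable of the round number. The sketch below anneals the bagging fraction; the data and the particular decay are made up for illustration, and not every parameter can be reset this way:

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = rng.random(500)
train_set = lgb.Dataset(X, label=y)

params = {"objective": "regression", "bagging_freq": 1, "bagging_fraction": 0.9}

# Shrink the bagging fraction from 0.9 toward 0.5 as rounds progress.
callbacks = [lgb.reset_parameter(
    bagging_fraction=lambda rnd: max(0.5, 0.9 - 0.004 * rnd))]

booster = lgb.train(params, train_set, num_boost_round=100, callbacks=callbacks)
```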
machine-learning xgboost supervised-learning hyperparameter-tuning gbm
asked Mar 25 at 15:35
Sam Castillo
111
1 Answer
Learning-rate decay is implemented as a callback in, e.g., XGBoost and LightGBM. (XGBoost used to allow the learning_rate parameter to be a list, but that was deprecated in favor of callbacks.) Similar scheduling of other hyperparameters should be possible in the same way.
https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.callback.reset_learning_rate
https://github.com/Microsoft/LightGBM/issues/129
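As an illustration, here is a sketch of exponential learning-rate decay via a callback; it assumes a recent XGBoost where `xgboost.callback.LearningRateScheduler` replaced the now-deprecated `reset_learning_rate`, and the synthetic data is only for demonstration:

```python
import math
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
dtrain = xgb.DMatrix(rng.random((500, 10)), label=rng.random(500))

# Decay the learning rate as exp(-a * round), per the question's proposal.
scheduler = xgb.callback.LearningRateScheduler(
    lambda rnd: 0.3 * math.exp(-0.02 * rnd))

booster = xgb.train(
    {"objective": "reg:squarederror"},
    dtrain,
    num_boost_round=100,
    callbacks=[scheduler],
)
```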
I played around with these ideas (for the learning rate and tree depth) a while back but didn't see improved performance. Still, you should try it out; if you do see significant gains, it would be great to add them as an answer here.
answered Mar 27 at 14:36
Ben Reiniger
333210