Newton's method optimization for Deep Learning
I'm reading the paper "Deep learning via Hessian-free optimization" by J. Martens, and I'm having difficulty figuring out the following statement:

In the standard Newton's method, $q_\theta(p)$ is optimized by computing the $N \times N$ matrix $B$ and then solving the system $Bp = -\nabla f(\theta)$.

(section 3 of the paper)

Is there a theorem or statement anywhere explaining why this system must be solved to optimize the local approximation? I came across another paper that references J. Martens and uses the same statement.
machine-learning optimization
asked Mar 19 at 9:30 by Aman (edited Mar 19 at 15:53 by Esmailian)
1 Answer
If you take a look at section 2, it says:

The central idea motivating Newton's method is that $f$ can be locally approximated around each $\theta$, up to 2nd order, by the quadratic: $$ f(\theta + p) \approx q_\theta(p) \equiv f(\theta) + \nabla f(\theta)^T p + \tfrac{1}{2} p^T B p \,, \quad (1) $$ where $B = H(\theta)$ is the Hessian matrix of $f$ at $\theta$. Finding a good search direction then reduces to minimizing this quadratic with respect to $p$.

To minimize, take the derivative of (1) with respect to $p$ and set it to zero: $$\nabla f(\theta) + Bp = 0,$$ which is equivalent to $Bp = -\nabla f(\theta)$.
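To make this concrete, here is a minimal numerical sketch (assuming NumPy and a hypothetical toy quadratic objective that is not from the paper): it evaluates the gradient and Hessian at a point $\theta$, solves $Bp = -\nabla f(\theta)$ for the Newton step, and checks that the resulting $p$ makes the gradient of the local model $q_\theta$ vanish.

import numpy as np

# Toy objective (an assumption for illustration, not from the paper):
# f(theta) = 0.5 * theta^T A theta - b^T theta, with A symmetric positive definite.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, -1.0])

def f(theta):
    return 0.5 * theta @ A @ theta - b @ theta

def grad_f(theta):
    return A @ theta - b

def hess_f(theta):
    return A  # constant Hessian for this quadratic toy problem

theta = np.array([5.0, -3.0])   # current iterate
g = grad_f(theta)               # nabla f(theta)
B = hess_f(theta)               # N x N Hessian matrix

# Standard Newton step: solve B p = -nabla f(theta)
p = np.linalg.solve(B, -g)

# Local quadratic model q_theta(p) from equation (1)
def q(p_):
    return f(theta) + g @ p_ + 0.5 * p_ @ B @ p_

# Sanity check: nabla q(p) = nabla f(theta) + B p is (numerically) zero,
# so p is the stationary point of the local approximation.
print("residual:", np.linalg.norm(g + B @ p))    # ~ 0
print("q at Newton step:", q(p))
print("q at zero step:  ", q(np.zeros_like(p)))  # larger than q(p)

Note that this stationary point is a minimizer of $q_\theta$ only when $B$ is positive definite (which is why the toy example uses a positive definite $A$). For large $N$, as in deep networks, forming and factorizing $B$ to solve this system directly is exactly what becomes infeasible, which is why the Hessian-free approach in the paper solves the system only approximately using iterative methods such as conjugate gradient.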
answered Mar 19 at 15:45 by oW_