A question on realizable sample complexity
I came across the following exercise, and I just can't seem to crack it:
Let $l$ be a loss function such that $l \leq 1$, let $H$ be a hypothesis class, and let $A$ be a learning algorithm. Show that:
$$m^{\text{stat,r}}_H(\epsilon) = O\!\left(m^{\text{stat,r}}_H(\epsilon/2,\ 1/2)\cdot \log(1/\epsilon) + \frac{\log(1/\epsilon)}{\epsilon^2}\right)$$
where $m^{\text{stat,r}}_H(\epsilon)$ is the minimal number $m$ such that, for any realizable distribution $D$ over training examples,
$$\mathbb{E}_{S \sim D^m}\left[\, l_D(A(S)) \,\right] \leq \epsilon,$$
and $m^{\text{stat,r}}_H(\epsilon, \delta)$ is the minimal number $m$ such that, for any realizable distribution $D$ over training examples,
$$P_{S \sim D^m}\left(\, l_D(A(S)) \geq \epsilon \,\right) \leq \delta.$$
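For intuition, here is a toy Monte Carlo sketch of the two quantities (entirely my own illustration, not part of the exercise): $H$ is the class of threshold classifiers on $[0,1]$, $D$ is uniform with labels $\mathbb{1}[x \geq 0.5]$ (so the problem is realizable), and $A$ is an ERM rule. The target threshold $0.5$, the learner, and all names are assumptions made just for this example.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_T = 0.5  # hypothetical target threshold; labels are 1[x >= TRUE_T]

def erm_threshold(x, y):
    """ERM for the threshold class: return a threshold consistent with the sample."""
    pos, neg = x[y == 1], x[y == 0]
    lo = neg.max() if neg.size else 0.0
    hi = pos.min() if pos.size else 1.0
    return (lo + hi) / 2.0  # any point between lo and hi is consistent

def true_loss(t):
    """Population 0-1 loss of threshold t under the uniform marginal on [0, 1]."""
    return abs(t - TRUE_T)

def estimate(m, eps, trials=20_000):
    """Monte Carlo estimates of E[l_D(A(S))] and P(l_D(A(S)) >= eps) over S ~ D^m."""
    losses = np.empty(trials)
    for i in range(trials):
        x = rng.uniform(0.0, 1.0, size=m)
        y = (x >= TRUE_T).astype(int)
        losses[i] = true_loss(erm_threshold(x, y))
    return losses.mean(), (losses >= eps).mean()

for m in (10, 50, 200):
    e_loss, p_bad = estimate(m, eps=0.05)
    print(f"m={m:4d}  E[L] ~ {e_loss:.4f}  P(L >= 0.05) ~ {p_bad:.4f}")
```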
Thanks a lot in advance!
machine-learning theory vc-theory pac-learning
asked Mar 4 at 12:23 by Nadav Schweiger · edited Mar 30 at 15:07 by Esmailian
1 Answer
Write $L := l_D(A(S))$ for the true loss of the hypothesis learned from a sample $S \sim D^m$. We want to prove:
If $H$ is PAC learnable, then
$$\forall \epsilon,\ \exists C,\ \forall m \geq m_2 := C\log(1/\epsilon)\,(m_1 + 1/\epsilon^2):\quad E[L] \leq \epsilon \qquad \text{(a)}$$
where $m_1 := m(\epsilon/2,\ 1/2)$.
Since $L \leq 1$, we have $E[L] \leq 1$, so the claim is trivial for $\epsilon \geq 1$. Assume $\epsilon \in (0, 1)$.
First, we find a condition equivalent to $(a)$:
$$\begin{align*}
E[L] &= \int_{L \geq \epsilon/2} L\, dP + \int_{L < \epsilon/2} L\, dP \leq \int_{L \geq \epsilon/2} dP + \int_{L < \epsilon/2} \frac{\epsilon}{2}\, dP\\
&= P(L \geq \epsilon/2) + \frac{\epsilon}{2}\, P(L < \epsilon/2)\\
&= (1 - \epsilon/2)\, P(L \geq \epsilon/2) + \epsilon/2 < \epsilon\\
&\Leftrightarrow P(L \geq \epsilon/2) < \epsilon/(2 - \epsilon)
\end{align*}$$
Therefore, if
$$\forall \epsilon,\ \forall m \geq m_3 := m(\epsilon/2,\ \epsilon/(2 - \epsilon)):\quad P(L \geq \epsilon/2) < \epsilon/(2 - \epsilon) \qquad \text{(b)}$$
holds, then
$$\forall \epsilon,\ \forall m \geq m(\epsilon) = m_3:\quad E[L] \leq \epsilon \qquad \text{(c)}$$
holds too (and vice versa).
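As a quick sanity check of the bound $E[L] \leq (1 - \epsilon/2)\,P(L \geq \epsilon/2) + \epsilon/2$, here is a tiny numeric sketch with made-up loss values in $[0, 1]$ (the Beta distribution is an arbitrary stand-in for the distribution of $l_D(A(S))$):

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.2
# made-up loss values in [0, 1], standing in for l_D(A(S)) over many draws of S
L = rng.beta(0.5, 8.0, size=200_000)

lhs = L.mean()                           # E[L]
p = (L >= eps / 2).mean()                # P(L >= eps/2)
rhs = (1 - eps / 2) * p + eps / 2        # bound derived above
print(f"E[L] = {lhs:.4f}  <=  (1 - eps/2) P(L >= eps/2) + eps/2 = {rhs:.4f}")
```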
Now we prove $(b) \Rightarrow (a)$:
Using the Fundamental Theorem of Statistical Learning for a PAC-learnable $H$ with VC dimension $d$, we have:
$$\begin{align*}
&(\epsilon/2,\ 1/2)\text{-learnable } H \text{ with } m_1 \ \Leftrightarrow\ \exists C_1 > 0,\ m_1 \geq C_1\,\frac{d + \log(2)}{\epsilon/2}\\
&\Leftrightarrow\ \log(1/\epsilon)\,(m_1 + 1/\epsilon^2) \geq \frac{\log(1/\epsilon)\,\big(C_1 d + C_1\log(2) + 1/(2\epsilon)\big)}{\epsilon/2}
\end{align*}$$
which uses $1/\epsilon > 1$ and $\log(1/\epsilon) > 0$.
Now we use the following inequality, stated without proof (it can be checked by plotting the function):
$$\forall x > 1,\ \forall d, C_1 \geq 0,\ \exists C_2 > 0:\quad f(x) = \frac{\log(x)\,\big(C_1 d + C_1\log(2) + x/2\big)}{d\log(2x) + \log(2x - 1)} \geq C_2$$
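Since the original plot is not reproduced here, the following sketch plots $f(x)$ for a few illustrative $(d, C_1)$ pairs (my own choices) and reports its minimum on the plotted grid, so the claimed lower bound can at least be eyeballed on that range:

```python
import numpy as np
import matplotlib.pyplot as plt

def f(x, d, c1):
    """f(x) = log(x)(C1*d + C1*log 2 + x/2) / (d*log(2x) + log(2x - 1)) from the inequality above."""
    return np.log(x) * (c1 * d + c1 * np.log(2) + x / 2) / (d * np.log(2 * x) + np.log(2 * x - 1))

x = np.linspace(1.001, 1e3, 200_000)     # x = 1/epsilon, so x > 1
for d, c1 in [(1, 1), (5, 1), (20, 2)]:  # illustrative values only
    y = f(x, d, c1)
    plt.plot(x, y, label=f"d={d}, C1={c1} (min on grid ~ {y.min():.3g})")
plt.xscale("log"); plt.yscale("log")
plt.xlabel("x = 1/epsilon"); plt.ylabel("f(x)")
plt.legend(); plt.show()
```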
Setting $x = 1/\epsilon$, we continue as:
$$\begin{align*}
\ldots \ &\overset{\exists C_2}{\geq}\ C_2\,\frac{d\log(2/\epsilon) + \log\big((2-\epsilon)/\epsilon\big)}{\epsilon/2} \ \overset{\exists C_3}{\geq}\ \frac{1}{C_3}\, m_3\\
&\Leftrightarrow\ (\epsilon/2,\ \epsilon/(2-\epsilon))\text{-learnable } H \text{ with } m_3
\end{align*}$$
By setting $m_2 := C_3\log(1/\epsilon)\,(m_1 + 1/\epsilon^2)$, we have $m_2 \geq m_3$, thus
$$\begin{align*}
&\forall \epsilon,\ \forall m \geq m_3:\ P(L \geq \epsilon/2) < \epsilon/(2 - \epsilon)\\
&\Rightarrow\ \forall \epsilon,\ \exists C,\ \forall m \geq m_2 := C\log(1/\epsilon)\,(m_1 + 1/\epsilon^2) \geq m_3:\ P(L \geq \epsilon/2) < \epsilon/(2 - \epsilon)\\
&\Rightarrow\ \forall \epsilon,\ \exists C,\ \forall m \geq m_2 := C\log(1/\epsilon)\,(m_1 + 1/\epsilon^2):\ E[L] < \epsilon
\end{align*}$$
The proof is complete.
answered Mar 4 at 19:05 by Esmailian (edited Mar 4 at 22:23)