Simulating a probability of 1 of 2^N with less than N random bits


Say I need to simulate the following discrete distribution:

$$
P(X = k) =
\begin{cases}
\frac{1}{2^N}, & \text{if $k = 1$} \\
1 - \frac{1}{2^N}, & \text{if $k = 0$}
\end{cases}
$$

The most obvious way is to draw $N$ random bits and check whether all of them equal $0$ (or $1$). However, information theory says

$$
\begin{align}
S & = - \sum_i P_i \log P_i \\
& = - \frac{1}{2^N} \log \frac{1}{2^N} - \left(1 - \frac{1}{2^N}\right) \log \left(1 - \frac{1}{2^N}\right) \\
& = \frac{1}{2^N} \log 2^N + \left(1 - \frac{1}{2^N}\right) \log \frac{2^N}{2^N - 1} \\
& \rightarrow 0
\end{align}
$$

So the minimum number of random bits required actually decreases as $N$ grows large. How is this possible?

Please assume that we are running on a computer where fair random bits are your only source of randomness, so you can't just toss a biased coin.
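A minimal sketch of the obvious procedure (function name is mine, not from the question), stopping as soon as the outcome is decided, already shows that the average number of bits consumed per draw is about 2, not $N$:

```python
import random

def sample_naive(N, rng=random):
    """Draw X with P(X=1) = 1/2^N by reading fair bits until a 1 shows up
    (outcome 0, decided early) or N zeros have been seen (outcome 1).
    Returns (value, bits_used)."""
    for i in range(1, N + 1):
        if rng.getrandbits(1) == 1:
            return 0, i          # some bit was 1, so not all N bits are 0
    return 1, N                  # all N bits were 0

random.seed(0)
trials = 100_000
avg = sum(sample_naive(20)[1] for _ in range(trials)) / trials
print(avg)  # close to 2: a geometric number of bits per draw, on average
```

So even the naive method uses only about 2 bits per draw on average; the entropy bound above is smaller still, which is the puzzle.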





















  • This is closely related to coding theory and Kolmogorov complexity, if you're looking for keywords to dig deeper. The technique of counting repeating runs of the same bit which D.W. mentions below comes up a lot; these lecture notes touch on it, for example: people.cs.uchicago.edu/~fortnow/papers/kaikoura.pdf
    – Brian Gordon, Mar 25 at 16:01















Tags: algorithms, information-theory, randomness, pseudo-random-generators, entropy
Asked Mar 25 at 0:18 by nalzok. Edited Mar 25 at 19:01 by Discrete lizard.

2 Answers


















Answer (score 26):

Wow, great question! Let me try to explain the resolution. It'll take three distinct steps.

The first thing to note is that the entropy focuses on the average number of bits needed per draw, not the maximum number of bits needed.

With your sampling procedure, the maximum number of random bits needed per draw is $N$ bits, but the average number of bits needed is 2 bits (the average of a geometric distribution with $p=1/2$) -- this is because there is a $1/2$ probability that you only need 1 bit (if the first bit turns out to be 1), a $1/4$ probability that you only need 2 bits (if the first two bits turn out to be 01), a $1/8$ probability that you only need 3 bits (if the first three bits turn out to be 001), and so on.

The second thing to note is that the entropy doesn't really capture the average number of bits needed for a single draw. Instead, the entropy captures the amortized number of bits needed to sample $m$ i.i.d. draws from this distribution. Suppose we need $f(m)$ bits to sample $m$ draws; then the entropy is the limit of $f(m)/m$ as $m \to \infty$.

The third thing to note is that, with this distribution, you can sample $m$ i.i.d. draws with fewer bits than are needed by repeatedly sampling one draw. Suppose you naively decided to draw one sample (taking 2 random bits on average), then draw another sample (using 2 more random bits on average), and so on, until you've repeated this $m$ times. That would require about $2m$ random bits on average.

But it turns out there's a way to sample $m$ draws using fewer than $2m$ bits. It's hard to believe, but it's true!

Let me give you the intuition. Suppose you wrote down the result of sampling $m$ draws, where $m$ is really large. Then the result could be specified as an $m$-bit string. This $m$-bit string will be mostly 0's, with a few 1's in it: in particular, on average it will have about $m/2^N$ 1's (could be more or less than that, but if $m$ is sufficiently large, usually the number will be close to that). The lengths of the gaps between the 1's are random, but will typically be somewhere vaguely in the vicinity of $2^N$ (could easily be half that or twice that or even more, but of that order of magnitude). Of course, instead of writing down the entire $m$-bit string, we could write it down more succinctly as a list of the lengths of the gaps -- that carries all the same information, in a more compressed format. How much more succinct? Well, we'll usually need about $N$ bits to represent the length of each gap; and there will be about $m/2^N$ gaps; so we'll need in total about $mN/2^N$ bits (could be a bit more, could be a bit less, but if $m$ is sufficiently large, it'll usually be close to that). That's a lot shorter than an $m$-bit string.

And if there's a way to write down the string this succinctly, perhaps it won't be too surprising that there's a way to generate the string with a number of random bits comparable to the length of the string. In particular, you randomly generate the length of each gap; this is sampling from a geometric distribution with $p=1/2^N$, and that can be done with roughly $\sim N$ random bits on average (not $2^N$). You'll need about $m/2^N$ i.i.d. draws from this geometric distribution, so you'll need in total roughly $\sim Nm/2^N$ random bits. (It could be a small constant factor larger, but not too much larger.) And notice that this is much smaller than $2m$ bits.

So, we can sample $m$ i.i.d. draws from your distribution using just $f(m) \sim Nm/2^N$ random bits (roughly). Recall that the entropy is $\lim_{m \to \infty} f(m)/m$. So this means that you should expect the entropy to be (roughly) $N/2^N$. That's off by a little bit, because the above calculation was sketchy and crude -- but hopefully it gives you some intuition for why the entropy is what it is, and why everything is consistent and reasonable.
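A rough simulation of the gap-based idea (my own sketch; for brevity the geometric gaps are drawn by floating-point inversion rather than from fair bits, which is the part discussed in the comments below):

```python
import math
import random

def sample_batch(m, N, rng=random):
    """Generate an m-bit string whose bits are i.i.d. with P(bit=1) = 1/2^N,
    by sampling the geometric gaps between successive 1's instead of
    sampling each bit separately."""
    p = 2.0 ** -N
    out = [0] * m
    pos = -1
    while True:
        u = rng.random()
        # inverse CDF of Geometric(p): number of 0's before the next 1
        gap = int(math.log1p(-u) / math.log1p(-p))
        pos += gap + 1
        if pos >= m:
            return out
        out[pos] = 1

random.seed(0)
bits = sample_batch(200_000, 3)
print(sum(bits) / len(bits))  # close to 1/2^3 = 0.125
```

Only about $m/2^N$ geometric draws are needed, so if each one can be produced from roughly $N$ fair bits, the total is roughly $Nm/2^N$ bits, matching the estimate above.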


















  • Wow, great answer! But could you elaborate on why sampling from a geometric distribution with $p=\frac{1}{2^N}$ takes $N$ bits on average? I know such a random variable would have a mean of $2^N$, so it takes on average $N$ bits to store, but I suppose this doesn't mean you can generate one with $N$ bits.
    – nalzok, Mar 25 at 5:36

  • @nalzok, a fair question! Could you perhaps ask that as a separate question? I can see how to do it, but it's a bit messy to type up right now. If you ask, perhaps someone will get to answering quicker than I can. The approach I'm thinking of is similar to arithmetic coding. Define $q_i = \Pr[X \le i]$ (where $X$ is the geometric r.v.), then generate a random number $r$ in the interval $[0,1)$, and find $i$ such that $q_i \le r < q_{i+1}$. If you write down the bits of the binary expansion of $r$ one at a time, usually after writing down $N+O(1)$ bits of $r$, $i$ will be fully determined.
    – D.W., Mar 25 at 6:03

  • So you're basically using the inverse CDF method to convert a uniformly distributed random variable to an arbitrary distribution, combined with an idea similar to binary search? I'll need to analyze the quantile function of a geometric distribution to be sure, but this hint is enough. Thanks!
    – nalzok, Mar 25 at 6:12

  • @nalzok, ahh, yes, that's a nicer way to think about it -- lovely. Thank you for suggesting that. Yup, that's what I had in mind.
    – D.W., Mar 25 at 6:14
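The inverse-CDF idea from these comments can be sketched with exact rational arithmetic: reveal fair bits of $r$ one at a time, narrowing the interval known to contain $r$, until a single CDF cell $[q_{k-1}, q_k)$ contains the whole interval. A sketch under my own naming (the linear cell search is fine for small $N$ but not for large $N$):

```python
import random
from fractions import Fraction

def geometric_from_bits(N, rng=random):
    """Sample K ~ Geometric(p = 1/2^N) -- the number of 0's before the
    first 1 -- by lazily revealing fair bits of a uniform r in [0,1).
    Returns (k, bits_read); bits_read is about N + O(1) on average."""
    q = Fraction(2**N - 1, 2**N)          # q = 1 - p
    lo, hi = Fraction(0), Fraction(1)     # r is uniform in [lo, hi)
    bits = 0
    while True:
        # find the cell index k of lo, where q_k = 1 - (1-p)^(k+1)
        t, k = q, 0                       # invariant: t = (1-p)^(k+1)
        while t >= 1 - lo:                # i.e. while q_k <= lo
            t *= q
            k += 1
        if hi <= 1 - t:                   # whole interval inside cell k
            return k, bits
        mid = (lo + hi) / 2               # reveal one more bit of r
        if rng.getrandbits(1):
            lo = mid
        else:
            hi = mid
        bits += 1

random.seed(0)
samples = [geometric_from_bits(3) for _ in range(5_000)]
print(sum(k for k, _ in samples) / len(samples))  # near (1-p)/p = 7
```

Each draw consumes only as many fair bits as are needed to pin down the cell, which is the mechanism behind the $N+O(1)$ claim in the comment.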



















Answer (score 2):

You can think of this backwards: consider the problem of binary encoding instead of generation. Suppose that you have a source that emits symbols $X \in \{A, B\}$ with $p(A)=2^{-N}$, $p(B)=1-2^{-N}$. For example, if $N=3$, we get $H(X) \approx 0.54356$. So (Shannon tells us) there is a uniquely decodable binary encoding $X \to Y$, where $Y \in \{0, 1\}$ (data bits), such that we need, on average, about $0.54356$ data bits for each original symbol $X$.
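A quick numeric check of that entropy figure, the binary entropy of $p = 2^{-3}$:

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p), in bits per symbol."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(round(binary_entropy(2 ** -3), 5))  # 0.54356
```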



(In case you are wondering how such an encoding can exist, given that we have only two source symbols, and it seems that we cannot do better than the trivial encoding $A \to 0$, $B \to 1$, with one bit per symbol, you need to understand that to approximate the Shannon bound we need to take "extensions" of the source, that is, to code sequences of inputs as a whole. See in particular arithmetic encoding.)



Once the above is clear, if we assume we have an invertible mapping $X^n \to Y^n$, and notice that, in the Shannon limit, $Y^n$ must have maximum entropy (1 bit of information per bit of data), i.e., $Y^n$ has the statistics of a fair coin, then we have a generation scheme at hand: draw $n$ random bits (here $n$ has no relation to $N$) with a fair coin, interpret them as the output $Y^n$ of the encoder, and decode $X^n$ from it. In this way, $X^n$ will have the desired probability distribution, and we need on average $H(X) < 1$ coin flips to generate each value of $X$.




















    2 Answers
    2






    active

    oldest

    votes








    2 Answers
    2






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    26












    $begingroup$

    Wow, great question! Let me try to explain the resolution. It'll take three distinct steps.



    The first thing to note is that the entropy is focused more on the average number of bits needed per draw, not the maximum number of bits needed.



    With your sampling procedure, the maximum number of random bits needed per draw is $N$ bits, but the average number of bits needed is 2 bits (the average of a geometric distribution with $p=1/2$) -- this is because there is a $1/2$ probability that you only need 1 bit (if the first bit turns out to be 1), a $1/4$ probability that you only need 2 bits (if the first two bits turn out to be 01), a $1/8$ probability that you only need 3 bits (if the first three bits turn out to be 001), and so on.



    The second thing to note is that the entropy doesn't really capture the average number of bits needed for a single draw. Instead, the entropy captures the amortized number of bits needed to sample $m$ i.i.d. draws from this distribution. Suppose we need $f(m)$ bits to sample $m$ draws; then the entropy is the limit of $f(m)/m$ as $m to infty$.



    The third thing to note is that, with this distribution, you can sample $m$ i.i.d. draws with fewer bits than needed to repeatedly sample one draw. Suppose you naively decided to draw one sample (takes 2 random bits on average), then draw another sample (using 2 more random bits on average), and so on, until you've repeated this $m$ times. That would require about $2m$ random bits on average.



    But it turns out there's a way to sample from $m$ draws using fewer than $2m$ bits. It's hard to believe, but it's true!



    Let me give you the intuition. Suppose you wrote down the result of sampling $m$ draws, where $m$ is really large. Then the result could be specified as a $m$-bit string. This $m$-bit string will be mostly 0's, with a few 1's in it: in particular, on average it will have about $m/2^N$ 1's (could be more or less than that, but if $m$ is sufficiently large, usually the number will be close to that). The length of the gaps between the 1's are random, but will typically be somewhere vaguely in the vicinity of $2^N$ (could easily be half that or twice that or even more, but of that order of magnitude). Of course, instead of writing down the entire $m$-bit string, we could write it down more succinctly by writing down a list of the lengths of the gaps -- that carries all the same information, in a more compressed format. How much more succinct? Well, we'll usually need about $N$ bits to represent the length of each gap; and there will be about $m/2^N$ gaps; so we'll need in total about $mN/2^N$ bits (could be a bit more, could be a bit less, but if $m$ is sufficiently large, it'll usually be close to that). That's a lot shorter than a $m$-bit string.



    And if there's a way to write down the string this succinctly, perhaps it won't be too surprising if that means there's a way to generate the string with a number of random bits comparable to the length of the string. In particular, you randomly generate the length of each gap; this is sampling from a geometric distribution with $p=1/2^N$, and that can be done with roughly $sim N$ random bits on average (not $2^N$). You'll need about $m/2^N$ i.i.d. draws from this geometric distribution, so you'll need in total roughly $sim Nm/2^N$ random bits. (It could be a small constant factor larger, but not too much larger.) And, notice is that this is much smaller than $2m$ bits.



    So, we can sample $m$ i.i.d. draws from your distribution, using just $f(m) sim Nm/2^N$ random bits (roughly). Recall that the entropy is $lim_m to infty f(m)/m$. So this means that you should expect the entropy to be (roughly) $N/2^N$. That's off by a little bit, because the above calculation was sketchy and crude -- but hopefully it gives you some intuition for why the entropy is what it is, and why everything is consistent and reasonable.






    share|cite|improve this answer











    $endgroup$












    • $begingroup$
      Wow, great answer! But could you elaborate on why sampling from a geometric distribution with $p=frac12^N$ takes $N$ bits on average? I know such a random variable would have a mean of $2^N$ , so it takes on average $N$ bits to store, but I suppose this doesn't mean you can generate one with $N$ bits.
      $endgroup$
      – nalzok
      Mar 25 at 5:36











    • $begingroup$
      @nalzok, A fair question! Could you perhaps ask that as a separate question? I can see how to do it, but it's a bit messy to type up right now. If you ask perhaps someone will get to answering quicker than I can. The approach I'm thinking of is similar to arithmetic coding. Define $q_i = Pr[Xle i]$ (where $X$ is the geometric r.v.), then generate a random number $r$ in the interval $[0,1)$, and find $i$ such that $q_i le r < q_i+1$. If you write down the bits of the binary expension $r$ one at a time, usually after writing down $N+O(1)$ bits of $r$, $i$ will be fully determined.
      $endgroup$
      – D.W.
      Mar 25 at 6:03






    • 1




      $begingroup$
      So you're basically using the inverse CDF method to convert a uniformly distributed random variable to an arbitrary distribution, combined with an idea similar to binary search? I'll need to analyze the quantile function of a geometric distribution to be sure, but this hint is enough. Thanks!
      $endgroup$
      – nalzok
      Mar 25 at 6:12







    • 1




      $begingroup$
      @nalzok, ahh, yes, that's a nicer way to think about it -- lovely. Thank you for suggesting that. Yup, that's what I had in mind.
      $endgroup$
      – D.W.
      Mar 25 at 6:14
















    26












    $begingroup$

    Wow, great question! Let me try to explain the resolution. It'll take three distinct steps.



    The first thing to note is that the entropy is focused more on the average number of bits needed per draw, not the maximum number of bits needed.



    With your sampling procedure, the maximum number of random bits needed per draw is $N$ bits, but the average number of bits needed is 2 bits (the average of a geometric distribution with $p=1/2$) -- this is because there is a $1/2$ probability that you only need 1 bit (if the first bit turns out to be 1), a $1/4$ probability that you only need 2 bits (if the first two bits turn out to be 01), a $1/8$ probability that you only need 3 bits (if the first three bits turn out to be 001), and so on.



    The second thing to note is that the entropy doesn't really capture the average number of bits needed for a single draw. Instead, the entropy captures the amortized number of bits needed to sample $m$ i.i.d. draws from this distribution. Suppose we need $f(m)$ bits to sample $m$ draws; then the entropy is the limit of $f(m)/m$ as $m to infty$.



    The third thing to note is that, with this distribution, you can sample $m$ i.i.d. draws with fewer bits than needed to repeatedly sample one draw. Suppose you naively decided to draw one sample (takes 2 random bits on average), then draw another sample (using 2 more random bits on average), and so on, until you've repeated this $m$ times. That would require about $2m$ random bits on average.



    But it turns out there's a way to sample from $m$ draws using fewer than $2m$ bits. It's hard to believe, but it's true!



    Let me give you the intuition. Suppose you wrote down the result of sampling $m$ draws, where $m$ is really large. Then the result could be specified as a $m$-bit string. This $m$-bit string will be mostly 0's, with a few 1's in it: in particular, on average it will have about $m/2^N$ 1's (could be more or less than that, but if $m$ is sufficiently large, usually the number will be close to that). The length of the gaps between the 1's are random, but will typically be somewhere vaguely in the vicinity of $2^N$ (could easily be half that or twice that or even more, but of that order of magnitude). Of course, instead of writing down the entire $m$-bit string, we could write it down more succinctly by writing down a list of the lengths of the gaps -- that carries all the same information, in a more compressed format. How much more succinct? Well, we'll usually need about $N$ bits to represent the length of each gap; and there will be about $m/2^N$ gaps; so we'll need in total about $mN/2^N$ bits (could be a bit more, could be a bit less, but if $m$ is sufficiently large, it'll usually be close to that). That's a lot shorter than a $m$-bit string.



    And if there's a way to write down the string this succinctly, perhaps it won't be too surprising if that means there's a way to generate the string with a number of random bits comparable to the length of the string. In particular, you randomly generate the length of each gap; this is sampling from a geometric distribution with $p=1/2^N$, and that can be done with roughly $sim N$ random bits on average (not $2^N$). You'll need about $m/2^N$ i.i.d. draws from this geometric distribution, so you'll need in total roughly $sim Nm/2^N$ random bits. (It could be a small constant factor larger, but not too much larger.) And, notice is that this is much smaller than $2m$ bits.



    So, we can sample $m$ i.i.d. draws from your distribution, using just $f(m) sim Nm/2^N$ random bits (roughly). Recall that the entropy is $lim_m to infty f(m)/m$. So this means that you should expect the entropy to be (roughly) $N/2^N$. That's off by a little bit, because the above calculation was sketchy and crude -- but hopefully it gives you some intuition for why the entropy is what it is, and why everything is consistent and reasonable.






    share|cite|improve this answer











    $endgroup$












    • $begingroup$
      Wow, great answer! But could you elaborate on why sampling from a geometric distribution with $p=frac12^N$ takes $N$ bits on average? I know such a random variable would have a mean of $2^N$ , so it takes on average $N$ bits to store, but I suppose this doesn't mean you can generate one with $N$ bits.
      $endgroup$
      – nalzok
      Mar 25 at 5:36











    • $begingroup$
      @nalzok, A fair question! Could you perhaps ask that as a separate question? I can see how to do it, but it's a bit messy to type up right now. If you ask perhaps someone will get to answering quicker than I can. The approach I'm thinking of is similar to arithmetic coding. Define $q_i = Pr[Xle i]$ (where $X$ is the geometric r.v.), then generate a random number $r$ in the interval $[0,1)$, and find $i$ such that $q_i le r < q_i+1$. If you write down the bits of the binary expension $r$ one at a time, usually after writing down $N+O(1)$ bits of $r$, $i$ will be fully determined.
      $endgroup$
      – D.W.
      Mar 25 at 6:03






    • 1




      $begingroup$
      So you're basically using the inverse CDF method to convert a uniformly distributed random variable to an arbitrary distribution, combined with an idea similar to binary search? I'll need to analyze the quantile function of a geometric distribution to be sure, but this hint is enough. Thanks!
      $endgroup$
      – nalzok
      Mar 25 at 6:12







    • 1




      $begingroup$
      @nalzok, ahh, yes, that's a nicer way to think about it -- lovely. Thank you for suggesting that. Yup, that's what I had in mind.
      $endgroup$
      – D.W.
      Mar 25 at 6:14














    26












    26








    26





    $begingroup$

    Wow, great question! Let me try to explain the resolution. It'll take three distinct steps.



    The first thing to note is that the entropy is focused more on the average number of bits needed per draw, not the maximum number of bits needed.



    With your sampling procedure, the maximum number of random bits needed per draw is $N$ bits, but the average number of bits needed is 2 bits (the average of a geometric distribution with $p=1/2$) -- this is because there is a $1/2$ probability that you only need 1 bit (if the first bit turns out to be 1), a $1/4$ probability that you only need 2 bits (if the first two bits turn out to be 01), a $1/8$ probability that you only need 3 bits (if the first three bits turn out to be 001), and so on.




    Wow, great question! Let me try to explain the resolution. It'll take three distinct steps.



    The first thing to note is that the entropy is focused more on the average number of bits needed per draw, not the maximum number of bits needed.



    With your sampling procedure, the maximum number of random bits needed per draw is $N$ bits, but the average number of bits needed is 2 bits (the average of a geometric distribution with $p=1/2$) -- this is because there is a $1/2$ probability that you only need 1 bit (if the first bit turns out to be 1), a $1/4$ probability that you only need 2 bits (if the first two bits turn out to be 01), a $1/8$ probability that you only need 3 bits (if the first three bits turn out to be 001), and so on.
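This early-stopping procedure is easy to simulate. Here is a minimal Python sketch (my own illustration, not part of the original argument) that measures the average number of fair bits consumed per draw:

```python
import random

def draw(N, rng):
    """One Bernoulli(1/2^N) draw via early stopping: flip fair bits; any 1
    among the first N flips forces outcome 0, so we can stop immediately.
    Returns (outcome, bits_used)."""
    for i in range(1, N + 1):
        if rng.getrandbits(1):   # saw a 1: outcome is 0, stop early
            return 0, i
    return 1, N                  # all N bits were 0: the rare 1/2^N outcome

rng = random.Random(0)
N = 10
results = [draw(N, rng) for _ in range(100_000)]
mean_bits = sum(b for _, b in results) / len(results)
ones = sum(x for x, _ in results)
# mean_bits comes out close to 2 even though the worst case is N bits,
# and ones / 100_000 is close to 2**-N
```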



    The second thing to note is that the entropy doesn't really capture the average number of bits needed for a single draw. Instead, the entropy captures the amortized number of bits needed to sample $m$ i.i.d. draws from this distribution. Suppose we need $f(m)$ bits to sample $m$ draws; then the entropy is the limit of $f(m)/m$ as $m \to \infty$.



    The third thing to note is that, with this distribution, you can sample $m$ i.i.d. draws with fewer bits than needed to repeatedly sample one draw. Suppose you naively decided to draw one sample (takes 2 random bits on average), then draw another sample (using 2 more random bits on average), and so on, until you've repeated this $m$ times. That would require about $2m$ random bits on average.



    But it turns out there's a way to sample $m$ draws using fewer than $2m$ bits. It's hard to believe, but it's true!



    Let me give you the intuition. Suppose you wrote down the result of sampling $m$ draws, where $m$ is really large. Then the result could be specified as an $m$-bit string. This $m$-bit string will be mostly 0's, with a few 1's in it: in particular, on average it will have about $m/2^N$ 1's (could be more or less than that, but if $m$ is sufficiently large, usually the number will be close to that). The lengths of the gaps between the 1's are random, but will typically be somewhere vaguely in the vicinity of $2^N$ (could easily be half that or twice that or even more, but of that order of magnitude). Of course, instead of writing down the entire $m$-bit string, we could write it down more succinctly by writing down a list of the lengths of the gaps -- that carries all the same information, in a more compressed format. How much more succinct? Well, we'll usually need about $N$ bits to represent the length of each gap; and there will be about $m/2^N$ gaps; so we'll need in total about $mN/2^N$ bits (could be a bit more, could be a bit less, but if $m$ is sufficiently large, it'll usually be close to that). That's a lot shorter than an $m$-bit string.



    And if there's a way to write down the string this succinctly, perhaps it won't be too surprising if that means there's a way to generate the string with a number of random bits comparable to the length of the string. In particular, you randomly generate the length of each gap; this is sampling from a geometric distribution with $p=1/2^N$, and that can be done with roughly $\sim N$ random bits on average (not $2^N$). You'll need about $m/2^N$ i.i.d. draws from this geometric distribution, so you'll need in total roughly $\sim Nm/2^N$ random bits. (It could be a small constant factor larger, but not too much larger.) And notice that this is much smaller than $2m$ bits.
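Plugging in concrete numbers makes the savings vivid. A quick back-of-envelope in Python (the choices $N=10$, $m=10^6$ are arbitrary, just for illustration):

```python
# Hypothetical parameters: N = 10 (so p = 1/1024) and m = 1,000,000 draws.
N, m = 10, 1_000_000

naive_bits = 2 * m            # drawing one sample at a time: ~2 bits each
expected_ones = m / 2 ** N    # ~977 ones in the m-bit outcome string
gap_bits = N * expected_ones  # ~N bits to record each gap length

# naive_bits is 2,000,000 while gap_bits is only about 9,766:
# the batch scheme needs orders of magnitude fewer random bits.
```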



    So, we can sample $m$ i.i.d. draws from your distribution, using just $f(m) \sim Nm/2^N$ random bits (roughly). Recall that the entropy is $\lim_{m \to \infty} f(m)/m$. So this means that you should expect the entropy to be (roughly) $N/2^N$. That's off by a little bit, because the above calculation was sketchy and crude -- but hopefully it gives you some intuition for why the entropy is what it is, and why everything is consistent and reasonable.
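As a numerical sanity check (a small snippet assuming only the standard binary-entropy formula), one can compare the exact entropy $H(2^{-N}) = -p\log_2 p - (1-p)\log_2(1-p)$ with the estimate $N/2^N$:

```python
from math import log2

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) random variable."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

ratios = []
for N in (4, 8, 16):
    p = 2.0 ** -N
    ratios.append(binary_entropy(p) / (N * p))
# each ratio sits a bit above 1 (roughly 1 + 1.44/N) and shrinks toward 1
# as N grows, confirming N/2^N is the right order of magnitude
```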















    edited Mar 25 at 12:25









    einpoklum










    answered Mar 25 at 4:17









    D.W.











    • Wow, great answer! But could you elaborate on why sampling from a geometric distribution with $p=\frac{1}{2^N}$ takes $N$ bits on average? I know such a random variable would have a mean of $2^N$, so it takes on average $N$ bits to store, but I suppose this doesn't mean you can generate one with $N$ bits.
      – nalzok
      Mar 25 at 5:36











    • @nalzok, A fair question! Could you perhaps ask that as a separate question? I can see how to do it, but it's a bit messy to type up right now. If you ask, perhaps someone will get to answering quicker than I can. The approach I'm thinking of is similar to arithmetic coding. Define $q_i = \Pr[X \le i]$ (where $X$ is the geometric r.v.), then generate a random number $r$ in the interval $[0,1)$, and find $i$ such that $q_i \le r < q_{i+1}$. If you write down the bits of the binary expansion of $r$ one at a time, usually after writing down $N+O(1)$ bits of $r$, $i$ will be fully determined.
      – D.W.
      Mar 25 at 6:03






    • 1




      So you're basically using the inverse CDF method to convert a uniformly distributed random variable to an arbitrary distribution, combined with an idea similar to binary search? I'll need to analyze the quantile function of a geometric distribution to be sure, but this hint is enough. Thanks!
      – nalzok
      Mar 25 at 6:12







    • 1




      @nalzok, ahh, yes, that's a nicer way to think about it -- lovely. Thank you for suggesting that. Yup, that's what I had in mind.
      – D.W.
      Mar 25 at 6:14
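The bit-by-bit inverse-CDF idea from these comments can be sketched in Python. This is my own illustration, not necessarily the exact construction D.W. had in mind: reveal fair bits of a uniform $r$ one at a time, maintain the dyadic interval of values still consistent with the revealed bits, and stop as soon as the whole interval falls inside one quantile cell. The closed-form quantile and the `0.999999` guard against the interval's open upper end are implementation shortcuts that are slightly approximate near cell boundaries.

```python
import random
from math import floor, log

def geom_index(u, p):
    """Quantile index i >= 1 with Pr[X <= i-1] <= u < Pr[X <= i]
    for X ~ Geometric(p), using Pr[X <= i] = 1 - (1-p)**i."""
    return floor(log(1.0 - u) / log(1.0 - p)) + 1

def sample_geometric(p, rng):
    """Sample X ~ Geometric(p) (support 1, 2, ...) by revealing fair bits
    of a uniform r in [0,1), stopping once the quantile index is pinned
    down. Returns (sample, number_of_bits_consumed)."""
    lo, width, bits = 0.0, 1.0, 0
    while True:
        bits += 1
        width /= 2.0
        if rng.getrandbits(1):          # one fair random bit per refinement
            lo += width
        i_low = geom_index(lo, p)
        i_high = geom_index(lo + 0.999999 * width, p)  # just below the top
        if i_low == i_high:             # whole interval maps to one value
            return i_low, bits

rng = random.Random(1)
N = 8
p = 2.0 ** -N
draws = [sample_geometric(p, rng) for _ in range(5000)]
mean_x = sum(x for x, _ in draws) / len(draws)
mean_bits = sum(b for _, b in draws) / len(draws)
# mean_x is close to 2^N = 256, while mean_bits stays close to N,
# nowhere near 2^N -- the point made in the comments above
```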
    2












    You can think of this backwards: consider the problem of binary encoding instead of generation. Suppose that you have a source that emits symbols $X \in \{A, B\}$ with $p(A) = 2^{-N}$, $p(B) = 1 - 2^{-N}$. For example, if $N=3$, we get $H(X) \approx 0.54356$. So (Shannon tells us) there is a uniquely decodable binary encoding $X \to Y$, where $Y \in \{0, 1\}$ (data bits), such that we need, on average, about $0.54356$ data bits for each original symbol $X$.
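The quoted value is quick to verify (a one-line application of the binary entropy formula, nothing more assumed):

```python
from math import log2

N = 3
p = 2.0 ** -N            # p(A) = 1/8
H = -p * log2(p) - (1 - p) * log2(1 - p)
# H evaluates to approximately 0.54356, the value quoted above
```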



    (In case you are wondering how such an encoding can exist, given that we have only two source symbols, and it seems that we cannot do better than the trivial encoding $A \to 0$, $B \to 1$, with one bit per symbol, you need to understand that to approximate the Shannon bound we need to take "extensions" of the source, that is, to code sequences of inputs as a whole. See in particular arithmetic coding.)



    Once the above is clear, if we assume we have an invertible mapping $X^n \to Y^n$, and notice that in the Shannon limit $Y^n$ must have maximum entropy (1 bit of information per bit of data), i.e., $Y^n$ has the statistics of a fair coin, then we have a generation scheme at hand: draw $n$ random bits (here $n$ has no relation to $N$) with a fair coin, interpret them as the output $Y^n$ of the encoder, and decode $X^n$ from it. In this way, $X^n$ will have the desired probability distribution, and we need (on average) $H(X) < 1$ coin flips to generate each value of $X$.






        edited Mar 26 at 11:41

























        answered Mar 25 at 16:05









        leonbloy