
Why Gaussian latent variable (noise) for GAN?


When I was reading about GANs, the thing I didn't understand is why people often choose the input to a GAN, $z$, to be samples from a Gaussian. Are there also potential problems associated with this choice?










Tags: deep-learning, gan, gaussian






asked Mar 16 at 22:27 by asahi kibou (a new contributor) on Data Science Stack Exchange
edited 2 days ago by Esmailian

1 Answer

Why do people often choose the input to a GAN, $z$, to be samples from a Gaussian?




Generally, for two reasons: (1) mathematical simplicity, and (2) it works well enough in practice. However, as explained below, under additional assumptions the choice of a Gaussian can be further justified.



Comparison to the uniform distribution. The Gaussian distribution is not as simple as the uniform distribution, but it is not far off either. It adds a "concentration around the mean" assumption to uniformity, which gives us the benefit of parameter regularization in practical problems.
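In code, either choice is a one-liner; here is a minimal sketch (the `sample_latent` helper and the latent dimension are hypothetical, for illustration only):

```python
import random

def sample_latent(dim, dist="gaussian"):
    """Draw one latent vector z for a GAN generator.

    dist="gaussian": z_i ~ N(0, 1), the common default.
    dist="uniform":  z_i ~ U(-1, 1), an alternative seen in some implementations.
    """
    if dist == "gaussian":
        return [random.gauss(0.0, 1.0) for _ in range(dim)]
    return [random.uniform(-1.0, 1.0) for _ in range(dim)]

z = sample_latent(100)  # e.g. a 100-dimensional Gaussian latent vector
```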



The least known. The use of a Gaussian is best justified for continuous quantities that are the least known to us, e.g. noise $\epsilon$ or latent factor $z$. "The least known" can be formalized as "the distribution that maximizes entropy for a given variance". The answer to this optimization problem is $N(\mu, \sigma^2)$ for an arbitrary mean $\mu$. Therefore, in this sense, if we assume that a quantity is the least known to us, the best choice is a Gaussian. Of course, if we acquire more knowledge about that quantity, we can do better than the "least known" assumption, as illustrated in the following examples.
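The maximum-entropy claim can be sanity-checked numerically with the closed-form differential entropies of a few common distributions, each scaled to the same variance (a quick sketch, not from the original post):

```python
import math

sigma2 = 1.0  # fix the variance for all three distributions

# Gaussian N(0, sigma^2): h = 0.5 * ln(2*pi*e*sigma^2)
h_gauss = 0.5 * math.log(2 * math.pi * math.e * sigma2)

# Uniform on [a, b]: variance (b-a)^2 / 12 = sigma^2  =>  b - a = sqrt(12*sigma^2); h = ln(b - a)
h_unif = math.log(math.sqrt(12 * sigma2))

# Laplace(0, b): variance 2*b^2 = sigma^2  =>  b = sqrt(sigma^2 / 2); h = 1 + ln(2b)
h_lap = 1 + math.log(2 * math.sqrt(sigma2 / 2))

# At fixed variance, the Gaussian has the highest differential entropy.
assert h_gauss > h_lap > h_unif
```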



This is also the answer to "why do we assume Gaussian noise in probabilistic regression or the Kalman filter?"




          Are there also potential problems associated with this?




Yes. When we assume a Gaussian, we are simplifying. If that simplification is unjustified, our model will under-perform, and at that point we should search for an alternative assumption. In practice, when we make a new assumption about the least-known quantity (based on acquired knowledge or speculation), we can extract that assumption and introduce a new Gaussian one, rather than changing the Gaussian assumption itself. Here are two examples:



1. Example in regression (noise). Suppose we have no knowledge about observation $A$ (the least known), so we assume $A \sim N(\mu, \sigma^2)$. After fitting the model, we may observe that the estimated variance $\hat{\sigma}^2$ is high. After some investigation, we may assume that $A$ is a linear function of measurement $B$, so we extract this assumption as $A = \color{blue}{b_1 B} + c + \epsilon_1$, where $\epsilon_1 \sim N(0, \sigma_1^2)$ is the new "least known". Later, we may find that the linearity assumption is also weak, since after fitting the model the observed $\hat{\epsilon}_1 = A - \hat{b}_1 B - \hat{c}$ also has a high $\hat{\sigma}_1^2$. Then we may extract a new assumption as $A = b_1 B + \color{blue}{b_2 B^2} + c + \epsilon_2$, where $\epsilon_2 \sim N(0, \sigma_2^2)$ is the new "least known", and so on.
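The first extraction step can be simulated directly: fitting only $N(\mu, \sigma^2)$ leaves all of the structure in the residual variance, while extracting the linear term collapses it. A minimal sketch (the data-generating coefficients here are made up for illustration):

```python
import random

random.seed(0)
n = 1000
B = [random.uniform(0, 10) for _ in range(n)]
A = [2.0 * b + 1.0 + random.gauss(0.0, 0.5) for b in B]  # true: A = 2B + 1 + eps

# Step 1: assume only A ~ N(mu, sigma^2); sigma^2 absorbs all the structure.
mu = sum(A) / n
var0 = sum((a - mu) ** 2 for a in A) / n

# Step 2: extract the linear assumption A = b1*B + c + eps1 (least squares).
mB = sum(B) / n
b1 = sum((b - mB) * (a - mu) for b, a in zip(B, A)) / sum((b - mB) ** 2 for b in B)
c = mu - b1 * mB
var1 = sum((a - (b1 * b + c)) ** 2 for b, a in zip(B, A)) / n

# Residual variance drops sharply once the linear structure is extracted.
assert var1 < var0 / 10
```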


2. Example in GAN (latent factor). Upon seeing unrealistic outputs from a GAN (knowledge), we may add $\color{blue}{\text{more layers}}$ between $z$ and the output (extract the assumption), in the hope that the new network (or function) with the new $z_2 \sim N(0, \sigma_2^2)$ leads to more realistic outputs, and so on.
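"Adding layers between $z$ and the output" amounts to composing the Gaussian sample with a deeper function, so that a simple latent distribution can induce a complex output distribution. A toy, untrained sketch (layer sizes and random weights are hypothetical, purely for illustration):

```python
import math
import random

random.seed(1)

def dense_tanh(x, out_dim):
    """One randomly initialized dense layer followed by a tanh nonlinearity."""
    w = [[random.gauss(0, 1 / len(x) ** 0.5) for _ in x] for _ in range(out_dim)]
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w]

z = [random.gauss(0.0, 1.0) for _ in range(8)]  # Gaussian latent, the "least known"
h = dense_tanh(z, 32)   # extra layer between z and the output ...
x = dense_tanh(h, 4)    # ... pushes the Gaussian through a more expressive map
```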


















answered Mar 17 at 10:58 by Esmailian (edited yesterday)



















