



Covariance as inner product



























Why is covariance considered an inner product if there is no projection of one vector onto another?

Right now I perceive it as just a multiplication of the $x$ part of a vector $(x_i - \bar{x})$ with the $y$ part of the same vector $(y_i - \bar{y})$, in order to understand the direction of the relationship.











Tags: statistics, data-analysis







edited Apr 8 at 16:11 by Stephen Rauch
asked Apr 8 at 15:40 by user641597




















          1 Answer






























Definition

An inner product (also known as the dot product or scalar product) can be defined on two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ as

$$ \mathbf{x} \cdot \mathbf{y} = \langle \mathbf{x}, \mathbf{y} \rangle_{\mathbb{R}^n} = \langle \mathbf{y}, \mathbf{x} \rangle_{\mathbb{R}^n} = \sum_{i=1}^{n} x_i \, y_i. $$

The inner product can be seen as the length of the projection of one vector onto the other, scaled by the other vector's length, and it is widely used as a similarity measure between two vectors.
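As a quick numerical illustration of that projection view (a minimal sketch added to this write-up, not part of the original answer; it assumes NumPy is available):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([2.0, 0.0])

# Orthogonal projection of x onto the line spanned by y.
proj = (x @ y) / (y @ y) * y

# Here x·y > 0, so the dot product equals the length of that projection
# times the length of y.
print(np.isclose(x @ y, np.linalg.norm(proj) * np.linalg.norm(y)))  # True
```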



The inner product also has the following properties:

• Commutative (symmetric)
• Distributive over vector addition
• Bilinear
• Positive-definite, i.e. $\mathbf{x} \cdot \mathbf{x} > 0$ for all $\mathbf{x} \neq \mathbf{0}$

The covariance of two random variables $X$ and $Y$ can be defined as

$$ \operatorname{Cov}(X, Y) = E\big[(X - E[X])\,(Y - E[Y])\big]. $$

Covariance has the same properties: it is commutative (symmetric), bilinear, and positive-definite, with $\operatorname{Cov}(X, X) = \operatorname{Var}(X) = 0$ only when $X$ is almost surely constant.

These properties imply that covariance is an inner product on a vector space of random variables, more specifically on the quotient space in which random variables differing only by an additive constant are identified; quotienting out the constants is what makes the positive-definiteness strict.
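To make the connection concrete, here is a small NumPy sketch (added here as an illustration, not part of the original answer): the sample covariance of two data vectors is just the Euclidean inner product of the mean-centered vectors scaled by $1/(n-1)$, and it behaves symmetrically and bilinearly as claimed.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)
z = rng.normal(size=1000)

# Sample covariance = inner product of the centered vectors, scaled by 1/(n-1).
xc, yc = x - x.mean(), y - y.mean()
print(np.isclose(xc @ yc / (len(x) - 1), np.cov(x, y)[0, 1]))   # True

# Symmetry: Cov(X, Y) == Cov(Y, X)
print(np.isclose(np.cov(x, y)[0, 1], np.cov(y, x)[0, 1]))       # True

# Bilinearity: Cov(aX + bZ, Y) == a*Cov(X, Y) + b*Cov(Z, Y)
a, b = 2.0, -3.0
lhs = np.cov(a * x + b * z, y)[0, 1]
rhs = a * np.cov(x, y)[0, 1] + b * np.cov(z, y)[0, 1]
print(np.isclose(lhs, rhs))                                     # True
```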



Association with the kernel trick

If you are familiar with Support Vector Machines, you are probably familiar with the kernel trick, where you implicitly compute the inner product of two vectors in a mapped space, called the feature space. Without performing the mapping explicitly, you can compute the inner product even in a possibly infinite-dimensional feature space.

To do this, you need a function, known as a kernel function, that evaluates this inner product without explicitly mapping the vectors.

For a function to be a valid kernel it must have the following attributes:

• It must be symmetric
• It must be positive-definite

These two conditions are necessary and sufficient for a function $\kappa(\mathbf{x}, \mathbf{y})$ to be an inner product in some vector space $\mathcal{H}$ (a standard result from reproducing kernel Hilbert space theory).

Since covariance complies with this definition, it is a kernel function and consequently an inner product on a vector space.
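As an illustration of covariance used as a kernel (again a sketch added here, not from the original answer), one can build the Gram matrix whose $(i, j)$ entry is the sample covariance between observation vectors $i$ and $j$, and check that it is symmetric and positive semi-definite, as any valid kernel matrix must be:

```python
import numpy as np

def cov_kernel(u, v):
    """Sample covariance between two observation vectors of equal length."""
    return np.dot(u - u.mean(), v - v.mean()) / (len(u) - 1)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 50))   # 5 variables, 50 observations each

# Gram matrix of the covariance kernel (this is exactly np.cov(X)).
K = np.array([[cov_kernel(X[i], X[j]) for j in range(len(X))]
              for i in range(len(X))])

print(np.allclose(K, K.T))                       # symmetric
print(np.all(np.linalg.eigvalsh(K) >= -1e-10))   # positive semi-definite
print(np.allclose(K, np.cov(X)))                 # agrees with np.cov
```

Strictly speaking, such a Gram matrix is only guaranteed to be positive semi-definite rather than strictly positive definite, which matches the quotient-space caveat above.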




















answered Apr 9 at 23:28 by Pedro Henrique Monforte



























