Optimizing vector values for maximum correlation



I'm new to ML, linear algebra, statistics, etc. so bear with me on the terminology...



I’m looking to find a vector that maximizes the correlation between 1) the pairwise relationships among the vector's dimensions (determined by subtraction) and 2) some output value produced by those relationships. Specifically, I'm using this to build a sports ranking system that takes a number of matches and their resulting scores and attempts to assign each team a value that can be used to predict future scores. In other words, the difference between any two teams' ratings should be predictive of the score differential in the next match between them.



So for example, if I have 3 teams, A, B, and C, each starting with an unknown rating:



$$
\begin{array}{cc}
A & ? \\
B & ? \\
C & ? \\
\end{array}
$$



If each team played each other team once, the left table would be used to calculate their rating differences (column team's rating minus row team's rating). The right table would contain the score differentials for the respective matchups.



$$
\begin{array}{c|ccc}
  & A & B & C \\
\hline
A & * & B - A & C - A \\
B & A - B & * & C - B \\
C & A - C & B - C & * \\
\end{array}
\Rightarrow
\begin{array}{c|ccc}
  & A & B & C \\
\hline
A & 0 & 3 & 6 \\
B & -3 & 0 & 3 \\
C & -6 & -3 & 0 \\
\end{array}
$$



Here is a possible solution that, for this example, would result in a perfect correlation between team rating differentials and score differentials.



$$
\begin{array}{cc}
A & 1 \\
B & 2 \\
C & 3 \\
\end{array}
$$



This would produce the following regression line, where $x_2$ is the column team's rating and $x_1$ is the row team's rating:



$$y = 3(x_2 - x_1)$$



It’s worth noting that what matters is the relationship among the values, not their nominal values, since this would be another possible solution:



$$
\begin{array}{cc}
A & 2 \\
B & 4 \\
C & 6 \\
\end{array}
$$



This would result in the following linear equation, which also has a correlation of 1:



$$y = \frac{3}{2}(x_2 - x_1)$$
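This scale-invariance can be checked numerically. A quick sketch (assuming NumPy; the Pearson correlation is computed with `np.corrcoef`) using the example's score differentials and both candidate rating vectors:

```python
import numpy as np

# Score differentials from the example matrix: the B-A, C-A, and C-B matchups.
score_diffs = np.array([3, 6, 3])

# Both candidate rating vectors should correlate perfectly with the scores.
for a, b, c in ([1, 2, 3], [2, 4, 6]):
    rating_diffs = np.array([b - a, c - a, c - b])
    r = np.corrcoef(rating_diffs, score_diffs)[0, 1]
    print((a, b, c), r)  # correlation is 1.0 (up to floating point) in both cases
```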



What I want is a method for determining values of A, B, and C that maximize the correlation between the pairwise differences and the resulting output values. One additional catch for the teams example: not every team will play every other team, so the resulting matrices will have missing entries (assuming that matters).



Are there any existing techniques to address this problem?










      linear-regression linear-algebra






      edited Mar 28 at 3:13 by SuperCodeBrah

      asked Mar 27 at 19:34 by SuperCodeBrah




















          1 Answer
          This is one approach you can follow:



          • Set up a linear regression system with each match as a row and one feature per team.

          • The unknown coefficient for each team's feature is the team 'strength' we are trying to determine.

          • Each feature value is 0, 1, or -1, depending on whether that team did not play in the match, was the column team, or was the row team, respectively.

          • The regression target is the score differential (column team score minus row team score) for that match.

          E.g., for the result matrix above, the system would be:



          $$
          (-1) \cdot x_1 + (1) \cdot x_2 + (0) \cdot x_3 = 3 \\
          (-1) \cdot x_1 + (0) \cdot x_2 + (1) \cdot x_3 = 6 \\
          (0) \cdot x_1 + (-1) \cdot x_2 + (1) \cdot x_3 = 3
          $$



          • One solution to the above system is $x_1 = -6,\ x_2 = -3,\ x_3 = 0$.

          • Multiple solutions are possible; they can be viewed as translations of each other (adding a constant to all team strengths).

          • If there are $n$ teams, there are only $n - 1$ linearly independent columns in the regression. (In R, one of the coefficients comes out as NA; it can be treated as 0, or dropped from the regression, which effectively makes it 0.)
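As a concrete sketch of the setup above (my own construction, not part of the original answer; the team indices and the NumPy solver are illustrative choices), the example system can be built and solved with an ordinary least-squares call, which also handles the rank deficiency noted in the last bullet by returning the minimum-norm solution:

```python
import numpy as np

teams = ["A", "B", "C"]
# (row_team_index, column_team_index, score_differential) per match,
# taken from the example result matrix in the question.
matches = [(0, 1, 3), (0, 2, 6), (1, 2, 3)]

# Build the design matrix: -1 for the row team, +1 for the column team.
X = np.zeros((len(matches), len(teams)))
y = np.zeros(len(matches))
for i, (row, col, diff) in enumerate(matches):
    X[i, row] = -1.0
    X[i, col] = 1.0
    y[i] = diff

# lstsq returns the minimum-norm least-squares solution, which pins down
# the free additive constant (the strengths come out centered at zero).
strengths, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(teams, strengths)))   # A ≈ -3, B ≈ 0, C ≈ 3
print(strengths[1] - strengths[0])   # predicted A-vs-B differential: 3.0
```

Any constant shift of the strengths is an equally valid solution, matching the translation point in the bullets.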





          answered Mar 28 at 10:47 by raghu
          • I realized that I could do gradient descent where MSE would be the sum of (m(T2 - T1) + b - Actual)^2 for all matchups, which I think is just an algebraic representation of what you’re saying? Are you saying there’s a more straightforward approach other than gradient descent? – SuperCodeBrah, Mar 28 at 13:58







          • What is 'm' in your comment? If predicted score = x_1 - x_2, and we want to minimize MSE between actual and prediction, then (in addition to gradient descent) there is an analytical solution as well: towardsdatascience.com/… – raghu, Mar 28 at 14:12










          • Yes, predicted score is a function of the m coefficient multiplied by the difference of the two team coefficients, y = m(x_1 - x_2) + b. Ultimately I'm trying to optimize m to minimize MSE for this function. I noticed that the solution you linked requires matrix inversion – will that be an issue for a matrix of roughly size [6000, 350]? Are there any solutions that are definitely stable for this type of problem? – SuperCodeBrah, Mar 29 at 0:41







          • A test using some randomly generated data (6000 matches among 350 teams) ran fine on my laptop. You can go through the link below, which compares various solution methods for linear regression (including gradient descent and matrix inversion): stats.stackexchange.com/questions/160179/… – raghu, Mar 29 at 14:10










          • This was very helpful, thank you. I was previously able to get it to work with gradient descent, but I found an SVD method here: machinelearningmastery.com/…. I've barely delved into linear algebra and don't really know Python, so it was a bit of a process, but it resulted in a better overall fit and is faster than the gradient descent method. I definitely have a lot to learn but am excited by the power of these techniques. Thank you. – SuperCodeBrah, Mar 29 at 20:29
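For readers curious about the SVD route mentioned in the comments, here is a rough sketch (my own construction, with synthetic data, at roughly the [6000, 350] scale discussed above; `np.linalg.pinv` computes the pseudo-inverse via SVD):

```python
import numpy as np

rng = np.random.default_rng(0)
n_teams, n_matches = 350, 6000

# Synthetic ground-truth strengths and random matchups.
true_strengths = rng.normal(0.0, 5.0, n_teams)
rows = rng.integers(0, n_teams, n_matches)
cols = rng.integers(0, n_teams, n_matches)
keep = rows != cols                      # drop self-matches
rows, cols = rows[keep], cols[keep]

# Design matrix: -1 for the row team, +1 for the column team.
X = np.zeros((rows.size, n_teams))
X[np.arange(rows.size), rows] = -1.0
X[np.arange(rows.size), cols] = 1.0
y = true_strengths[cols] - true_strengths[rows] + rng.normal(0.0, 1.0, rows.size)

# pinv gives the SVD-based least-squares solution without explicitly
# inverting X^T X, which sidesteps the rank-deficiency concern.
est = np.linalg.pinv(X) @ y

# Strengths are identifiable only up to an additive constant,
# so compare after centering both vectors.
r = np.corrcoef(est - est.mean(), true_strengths - true_strengths.mean())[0, 1]
print(r)  # should be close to 1
```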












          Your Answer





          StackExchange.ifUsing("editor", function ()
          return StackExchange.using("mathjaxEditing", function ()
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix)
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          );
          );
          , "mathjax-editing");

          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "557"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );













          draft saved

          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f48105%2foptimizing-vector-values-for-maximum-correlation%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown

























          1 Answer
          1






          active

          oldest

          votes








          1 Answer
          1






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          1












          $begingroup$

          This is one approach you can follow:



          • Setup a linear regression system, with each match as a row, and each feature corresponding to a team.

          • The unknown feature coefficient for each team is the team 'strength' that we try to determine.

          • The feature values will be one of 0, 1, or -1, depending on whether the team did not play, was the column team, or the row team respectively in that match.

          • The regression target will be the score differential (column team score - row team score) in that match.

          Eg: For the result matrix above, the system would be:



          $$
          (-1) * x_1 + (1) * x_2 + (0) * x_3 = 3\
          (-1) * x_1 + (0) * x_2 + (1) * x_3 = 6\
          (0) * x_1 + (0-1) * x_2 + (1) * x_3 = 3
          $$



          • One solution to the above system is: $x_1=-6; x_2=-3; x_3=0$

          • Multiple solutions are possible, which can be viewed as translations of each other (adding a constant to all team strengths).

          • If there are 'n' teams, then there are only 'n-1' linearly independent columns in the regression. (In R, one of the coefficients comes out as NA. This can be treated as 0, or dropped from the regression, effectively making it 0).





          share|improve this answer









          $endgroup$












          • $begingroup$
            I realized that I could do gradient descent where MSE would be the sum of (m(T2 - T1) + b - Actual)^2 for all matchups, which I think is just an algebraic representation of what you’re saying? Are you saying there’s a more straightforward approach other than gradient descent?
            $endgroup$
            – SuperCodeBrah
            Mar 28 at 13:58







          • 1




            $begingroup$
            What is 'm' in your comment? If predicted score = x_1 - x_2, and we want to minimize MSE between actual and prediction, then (in addition to gradient descent) there is an analytical solution as well: towardsdatascience.com/…
            $endgroup$
            – raghu
            Mar 28 at 14:12










          • $begingroup$
            Yes, predicted score is a function of the m coefficient multiplied by the difference in two team coefficients, y = m(x_1 - x_2) + b. Ultimately I'm trying to optimize m to to minimize MSE for this function. I noticed that the solution you linked requires matrix inversion - will that be an issue for a matrix of roughly size [6000, 350]? Are there any solutions that are definitely stable for this type of problem?
            $endgroup$
            – SuperCodeBrah
            Mar 29 at 0:41







          • 1




            $begingroup$
            A test using some randomly generated data (6000 matches among 350 teams) ran fine on my laptop. You can go through the link below, that compares various solution methods for linear regression (including gradient descent and matrix inversion): stats.stackexchange.com/questions/160179/…
            $endgroup$
            – raghu
            Mar 29 at 14:10










          • $begingroup$
            This was very helpful, thank you. I was previously able to get it to work with gradient descent but I found an SVD method here: machinelearningmastery.com/…. I've barely delved into linear algebra and don't really know python so it was a bit of a process, but it resulted in a better overall fit and is faster than the gradient descent method. I definitely have a lot to learn but am excited by the power of these techniques. Thank you.
            $endgroup$
            – SuperCodeBrah
            Mar 29 at 20:29
















          1












          $begingroup$

          This is one approach you can follow:



          • Setup a linear regression system, with each match as a row, and each feature corresponding to a team.

          • The unknown feature coefficient for each team is the team 'strength' that we try to determine.

          • The feature values will be one of 0, 1, or -1, depending on whether the team did not play, was the column team, or the row team respectively in that match.

          • The regression target will be the score differential (column team score - row team score) in that match.

          Eg: For the result matrix above, the system would be:



          $$
          (-1) * x_1 + (1) * x_2 + (0) * x_3 = 3\
          (-1) * x_1 + (0) * x_2 + (1) * x_3 = 6\
          (0) * x_1 + (0-1) * x_2 + (1) * x_3 = 3
          $$



          • One solution to the above system is: $x_1=-6; x_2=-3; x_3=0$

          • Multiple solutions are possible, which can be viewed as translations of each other (adding a constant to all team strengths).

          • If there are 'n' teams, then there are only 'n-1' linearly independent columns in the regression. (In R, one of the coefficients comes out as NA. This can be treated as 0, or dropped from the regression, effectively making it 0).





          share|improve this answer









          $endgroup$












          • $begingroup$
            I realized that I could do gradient descent where MSE would be the sum of (m(T2 - T1) + b - Actual)^2 for all matchups, which I think is just an algebraic representation of what you’re saying? Are you saying there’s a more straightforward approach other than gradient descent?
            $endgroup$
            – SuperCodeBrah
            Mar 28 at 13:58







          • 1




            $begingroup$
            What is 'm' in your comment? If predicted score = x_1 - x_2, and we want to minimize MSE between actual and prediction, then (in addition to gradient descent) there is an analytical solution as well: towardsdatascience.com/…
            $endgroup$
            – raghu
            Mar 28 at 14:12










          • $begingroup$
            Yes, predicted score is a function of the m coefficient multiplied by the difference in two team coefficients, y = m(x_1 - x_2) + b. Ultimately I'm trying to optimize m to to minimize MSE for this function. I noticed that the solution you linked requires matrix inversion - will that be an issue for a matrix of roughly size [6000, 350]? Are there any solutions that are definitely stable for this type of problem?
            $endgroup$
            – SuperCodeBrah
            Mar 29 at 0:41







          • 1




            $begingroup$
            A test using some randomly generated data (6000 matches among 350 teams) ran fine on my laptop. You can go through the link below, that compares various solution methods for linear regression (including gradient descent and matrix inversion): stats.stackexchange.com/questions/160179/…
            $endgroup$
            – raghu
            Mar 29 at 14:10










          • $begingroup$
            This was very helpful, thank you. I was previously able to get it to work with gradient descent but I found an SVD method here: machinelearningmastery.com/…. I've barely delved into linear algebra and don't really know python so it was a bit of a process, but it resulted in a better overall fit and is faster than the gradient descent method. I definitely have a lot to learn but am excited by the power of these techniques. Thank you.
            $endgroup$
            – SuperCodeBrah
            Mar 29 at 20:29














          1












          1








          1





          $begingroup$

          This is one approach you can follow:



          • Setup a linear regression system, with each match as a row, and each feature corresponding to a team.

          • The unknown feature coefficient for each team is the team 'strength' that we try to determine.

          • The feature values will be one of 0, 1, or -1, depending on whether the team did not play, was the column team, or the row team respectively in that match.

          • The regression target will be the score differential (column team score - row team score) in that match.

          Eg: For the result matrix above, the system would be:



          $$
          (-1) * x_1 + (1) * x_2 + (0) * x_3 = 3\
          (-1) * x_1 + (0) * x_2 + (1) * x_3 = 6\
          (0) * x_1 + (0-1) * x_2 + (1) * x_3 = 3
          $$



          • One solution to the above system is: $x_1=-6; x_2=-3; x_3=0$

          • Multiple solutions are possible, which can be viewed as translations of each other (adding a constant to all team strengths).

          • If there are 'n' teams, then there are only 'n-1' linearly independent columns in the regression. (In R, one of the coefficients comes out as NA. This can be treated as 0, or dropped from the regression, effectively making it 0).





          share|improve this answer









          $endgroup$



          This is one approach you can follow:



          • Setup a linear regression system, with each match as a row, and each feature corresponding to a team.

          • The unknown feature coefficient for each team is the team 'strength' that we try to determine.

          • The feature values will be one of 0, 1, or -1, depending on whether the team did not play, was the column team, or the row team respectively in that match.

          • The regression target will be the score differential (column team score - row team score) in that match.

          Eg: For the result matrix above, the system would be:



          $$
          (-1) * x_1 + (1) * x_2 + (0) * x_3 = 3\
          (-1) * x_1 + (0) * x_2 + (1) * x_3 = 6\
          (0) * x_1 + (0-1) * x_2 + (1) * x_3 = 3
          $$



          • One solution to the above system is: $x_1=-6; x_2=-3; x_3=0$

          • Multiple solutions are possible, which can be viewed as translations of each other (adding a constant to all team strengths).

          • If there are 'n' teams, then there are only 'n-1' linearly independent columns in the regression. (In R, one of the coefficients comes out as NA. This can be treated as 0, or dropped from the regression, effectively making it 0).






          share|improve this answer












          share|improve this answer



          share|improve this answer










          answered Mar 28 at 10:47









          raghuraghu

          45633




          45633











          • $begingroup$
            I realized that I could do gradient descent where MSE would be the sum of (m(T2 - T1) + b - Actual)^2 for all matchups, which I think is just an algebraic representation of what you’re saying? Are you saying there’s a more straightforward approach other than gradient descent?
            $endgroup$
            – SuperCodeBrah
            Mar 28 at 13:58







          • 1




            $begingroup$
            What is 'm' in your comment? If predicted score = x_1 - x_2, and we want to minimize MSE between actual and prediction, then (in addition to gradient descent) there is an analytical solution as well: towardsdatascience.com/…
            $endgroup$
            – raghu
            Mar 28 at 14:12










          • $begingroup$
            Yes, predicted score is a function of the m coefficient multiplied by the difference in two team coefficients, y = m(x_1 - x_2) + b. Ultimately I'm trying to optimize m to to minimize MSE for this function. I noticed that the solution you linked requires matrix inversion - will that be an issue for a matrix of roughly size [6000, 350]? Are there any solutions that are definitely stable for this type of problem?
            $endgroup$
            – SuperCodeBrah
            Mar 29 at 0:41







          • 1




          • A test using some randomly generated data (6000 matches among 350 teams) ran fine on my laptop. You can go through the link below, which compares various solution methods for linear regression (including gradient descent and matrix inversion): stats.stackexchange.com/questions/160179/… – raghu, Mar 29 at 14:10
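A test at that scale can be sketched as follows (simulated data, not the asker's; all parameters are assumptions). `np.linalg.lstsq` uses an SVD internally and returns the minimum-norm solution, so the rank-deficient matchup matrix needs no column dropped, and a [6000, 350] problem solves in well under a second on a laptop.

```python
import numpy as np

rng = np.random.default_rng(1)
n_teams, n_matches = 350, 6000

# Simulated "true" ratings and random pairings; drop self-matchups.
true = rng.normal(scale=5.0, size=n_teams)
a = rng.integers(0, n_teams, size=n_matches)
b = rng.integers(0, n_teams, size=n_matches)
keep = a != b
a, b = a[keep], b[keep]

X = np.zeros((a.size, n_teams))
X[np.arange(a.size), a] = 1.0
X[np.arange(a.size), b] = -1.0
y = true[a] - true[b] + rng.normal(scale=3.0, size=a.size)

# lstsq handles the rank-deficient X via SVD, returning the
# minimum-norm least-squares ratings (centered around zero).
w, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With ~17 matches per team the estimated ratings track the true ones closely (correlation well above 0.9 in this setup), despite the noise.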










          • This was very helpful, thank you. I was previously able to get it to work with gradient descent, but I found an SVD method here: machinelearningmastery.com/…. I've barely delved into linear algebra and don't really know Python, so it was a bit of a process, but it resulted in a better overall fit and is faster than the gradient descent method. I definitely have a lot to learn but am excited by the power of these techniques. Thank you. – SuperCodeBrah, Mar 29 at 20:29
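For reference, the SVD approach the comment alludes to amounts to solving least squares through the pseudoinverse: factor X = U S V', invert only the nonzero singular values, and form w = V S⁺ U' y. This is a sketch of that idea (the small system is illustrative only; `np.linalg.pinv` packages the same computation):

```python
import numpy as np

def svd_solve(X, y, tol=1e-10):
    """Minimum-norm least-squares solution via the SVD pseudoinverse.

    Stable even when X is rank-deficient: singular values below
    tol * s.max() are treated as zero instead of being inverted.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_inv = np.where(s > tol * s.max(), 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ y))

# Tiny rank-deficient matchup system: 3 teams, 3 consistent margins.
X = np.array([[1.0, -1.0,  0.0],
              [1.0,  0.0, -1.0],
              [0.0,  1.0, -1.0]])
y = np.array([2.0, 5.0, 3.0])

w = svd_solve(X, y)
print(np.allclose(X @ w, y))  # True: the margins are reproduced exactly
```

Because the minimum-norm solution is orthogonal to the null space (the all-ones vector here), the returned ratings sum to zero rather than pinning one team at 0 — a different but equally valid normalization.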
















