How does one test regression for a subspace or matrix factorization?


I've recently been reading a lot of papers and watching a lot of videos on both subspace learning and matrix factorization. One thing is still eluding me, though: how does any of this get tested?



Let's take non-negative matrix factorization (NMF), straight from Wikipedia:



$$V = WH$$



So you have a data matrix $V$, and your goal is to learn components $W$ and $H$ which, when multiplied together, give a good approximation of $V$. This can be done by minimizing, over $W$ and $H$,



$$\| V - WH \|$$
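For concreteness, here is a minimal sketch of that factorization step using scikit-learn's NMF on a made-up matrix; the shapes, the rank $k = 5$, and all variable names are my own illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy non-negative data matrix V: 100 "users" x 10 "features" (illustrative only).
rng = np.random.default_rng(0)
V = rng.random((100, 10))

# Factorize V ~ W @ H with an arbitrarily chosen rank k = 5.
model = NMF(n_components=5, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)   # shape (100, 5)
H = model.components_        # shape (5, 10)

# The reconstruction error ||V - WH|| that the factorization minimizes.
print(np.linalg.norm(V - W @ H))
```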



That seems fine so far. My problem, conceptually, is understanding what happens when we want to apply this to a problem like, say, regression.



If you wanted to minimize:



$$\| Y - (WH)B \|$$



How do you do this with a test point? I get confused here, because if we had, say, a 100-user data set with 10 features and we did a 90/10 split, the size of $WH$ would be different from the size of our test data.



Do people just plug the test data in directly at test time, in place of $WH$, and rely only on those learned weights $B$?
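One arrangement I can imagine (my own assumption, not something established above) is to keep the learned $H$ fixed, project the held-out rows onto it so they get factor representations of the right size, and only then apply the regression weights $B$. A rough sketch with made-up data:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Illustrative data: 100 users x 10 features, plus a regression target y.
rng = np.random.default_rng(1)
V = rng.random((100, 10))
y = rng.random(100)

V_train, V_test, y_train, y_test = train_test_split(V, y, test_size=0.1, random_state=0)

# Learn W and H on the training split only.
nmf = NMF(n_components=5, init="random", random_state=0, max_iter=500)
W_train = nmf.fit_transform(V_train)      # shape (90, 5)

# Fit the regression weights B on the training factors.
reg = LinearRegression().fit(W_train, y_train)

# At test time H stays fixed; transform() solves for the test rows' factors,
# so the shapes line up even though the test split is smaller.
W_test = nmf.transform(V_test)            # shape (10, 5)
print(reg.score(W_test, y_test))
```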







Tags: regression, linear-regression, matrix-factorisation, matrix






asked Apr 7 at 23:23 by Jibril
edited Apr 8 at 0:19 by Stephen Rauch











  • I think your analogy is wrong. We don't do any matrix factorization in linear regression. We try to find $W$ in $Y \approx WX$, where $Y$ and $X$ are given. In matrix factorization, $Y \approx WH$, we want to find both $W$ and $H$, and only $Y$ is given.
    – Esmailian, Apr 8 at 11:12











  • Appreciate the reply. Is there a reason, other than the problem above, why we can't do matrix factorization for linear regression? I was trying to think of use cases where you want to complete a matrix with missing data, but traditional column-wise or row-wise imputation may not make sense.
    – Jibril, Apr 8 at 11:16
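To make the matrix-completion use case in the comment above concrete, here is a purely illustrative sketch (my own toy example, not a reference implementation): fit $W$ and $H$ by gradient descent on the squared error over the observed entries only, then read predictions for the missing entries off $WH$.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 4                                   # assumed latent rank
V = rng.random((30, 12))                # toy ratings-style matrix
mask = rng.random(V.shape) < 0.7        # True where an entry is observed

W = rng.random((V.shape[0], k))
H = rng.random((k, V.shape[1]))
lr = 0.01

# Minimize the squared error over *observed* entries only, by gradient descent.
for _ in range(2000):
    R = np.where(mask, V - W @ H, 0.0)  # residuals on observed entries, zero elsewhere
    W += lr * R @ H.T
    H += lr * W.T @ R

# Unobserved entries are then "completed" by the low-rank reconstruction.
V_hat = W @ H
print(np.abs((V_hat - V)[~mask]).mean())  # error on the held-out entries
```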















