When projecting data with UMAP, should I use only the samples I need projected or the entire dataset?
I have a neural network that maps my data samples to a 64-dimensional embedding. I wish to visualize a few of these embeddings (between 30 and 600) through a 2-dimensional projection, and I plan to use umap to do that. Would providing more embeddings sampled from the dataset along with the ones I want to project help the algorithm to identify the manifold and improve the quality of projection?
machine-learning dataset dimensionality-reduction
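The two options in the question can be sketched side by side. `umap.UMAP` (from the umap-learn package) follows the scikit-learn fit/transform convention, so the pipeline shape is the same either way; in this sketch PCA stands in for `umap.UMAP` so it runs without umap-learn installed, and the data is synthetic.

```python
# Option A: project only the subset. Option B: fit on a larger sample so the
# projector sees more of the manifold, then transform just the subset.
# PCA is a stand-in here -- swap in umap.UMAP(n_components=2) if available.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 64))   # stand-in for the network's 64-d embeddings
subset = embeddings[:50]                   # the ~30-600 samples to visualize

# Option A: fit the projector on the subset alone.
proj_a = PCA(n_components=2).fit_transform(subset)

# Option B: fit on the larger sample, then place only the subset into that map.
reducer = PCA(n_components=2).fit(embeddings)
proj_b = reducer.transform(subset)

print(proj_a.shape, proj_b.shape)  # both (50, 2)
```

Option B is what the question is asking about: the extra samples influence the learned map even though only the subset is plotted.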
Let me see if I understand you correctly: you learn a 64-dimensional embedding from your data using a neural network, and you want to reduce it to a 2-dimensional space to visualize with UMAP? What prevents you from providing more embeddings sampled from the dataset? Intuitively, it sounds to me like having more data points would make a difference in the manifold (though I have no proof of it; I'd have to dig more).
– Majid Mortazavi, Jan 1 at 15:01
It's purely a matter of performance. I'd like an idea of the order of magnitude of the number of samples needed for a meaningful projection. More samples for UMAP means forwarding more samples through the network, and UMAP will train for longer. With fewer than ~50 samples I can run UMAP in my main loop without worrying much about the impact; with more than that, I'll need a secondary thread.
– Le Frite, Jan 1 at 15:50
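The split described in the comment (small batches inline, larger ones off the main loop) can be sketched with a single worker thread: submit the slow fit and collect the result when it is needed, so the main loop stays responsive. The `compute_projection` helper is illustrative, and PCA again stands in for `umap.UMAP`.

```python
# Run the potentially slow projection in a background thread instead of the
# main loop. concurrent.futures handles the thread lifecycle and result handoff.
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from sklearn.decomposition import PCA  # stand-in for umap.UMAP


def compute_projection(samples: np.ndarray) -> np.ndarray:
    """Potentially slow step: 2-d projection of the embeddings."""
    return PCA(n_components=2).fit_transform(samples)


executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(
    compute_projection, np.random.default_rng(1).normal(size=(600, 64))
)

# ... main loop keeps running; future.done() can be polled each iteration ...
projection = future.result()  # blocks only when the result is finally needed
print(projection.shape)       # (600, 2)
executor.shutdown()
```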
asked Jan 1 at 13:24 by Le Frite
1 Answer
Yes, more data will improve the quality of the embedding UMAP can produce. While UMAP is somewhat robust/stable under subsampling, in general you will get significantly better results with more data. It is also worth noting that most UMAP implementations are not designed for very small datasets (they make optimization choices that assume a reasonable dataset size). In practice it is probably best not to use UMAP with fewer than 100 or so data samples.
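One way to test this advice on your own embeddings is to score how well each projection preserves local neighborhoods, e.g. with scikit-learn's `trustworthiness` (1.0 means neighborhoods are fully preserved). A minimal sketch on synthetic data, with PCA standing in for `umap.UMAP`:

```python
# Compare a projector fit on the subset alone against one fit on all the data,
# scoring how faithfully each maps the subset's local neighborhoods.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 64))
subset = data[:60]

small_map = PCA(n_components=2).fit_transform(subset)        # subset-only fit
big_map = PCA(n_components=2).fit(data).transform(subset)    # fit on everything

small_score = trustworthiness(subset, small_map, n_neighbors=5)
big_score = trustworthiness(subset, big_map, n_neighbors=5)
print(small_score, big_score)  # both in [0, 1]; higher is better
```

Running the same comparison with `umap.UMAP` on real embeddings would indicate how many samples are enough in a given case.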
What I've found since, by experimenting, is that the projection of a very few samples shares the same ordering as the projection of those same samples with a few hundred additional samples fed to UMAP for training. That is, while the distances between the projected points differ between the two projections, the same clusters appear. For clustering alone, adding more samples does not seem to influence the end result much.
– Le Frite, Mar 23 at 14:44
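The observation above (same clusters, different distances) can be checked quantitatively: cluster each projection and compare the labelings with the adjusted Rand index, which ignores absolute positions and scales. A sketch under the same stand-in assumption (PCA for `umap.UMAP`, synthetic two-blob data):

```python
# If two projections induce the same clustering, the adjusted Rand index of
# their cluster labels is 1.0, regardless of where the clusters land.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Two well-separated blobs in 64-d, so any sensible projection keeps them apart.
data = np.vstack([rng.normal(0, 1, (200, 64)), rng.normal(8, 1, (200, 64))])
subset = data[::8]                                   # 50 samples to visualize

small_map = PCA(n_components=2).fit_transform(subset)
big_map = PCA(n_components=2).fit(data).transform(subset)

labels_small = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(small_map)
labels_big = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(big_map)

ari = adjusted_rand_score(labels_small, labels_big)
print(ari)  # 1.0 would mean the two projections cluster identically
```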
answered Mar 23 at 0:57 by Leland McInnes