Building a large distance matrix
I am trying to build a distance matrix for around 600,000 locations for which I have latitudes and longitudes. I want to use this distance matrix for agglomerative clustering. Since this is a large set of locations, calculating the distance matrix is an extremely heavy operation. Is there any way to optimize this process, keeping in mind that I am going to use the matrix for clustering later? Below is the code I am using.
from scipy.spatial.distance import pdist
import time

start = time.time()
# locations is a DataFrame holding the coordinates;
# dist is a custom distance function that I wrote
y = pdist(locations[['Latitude', 'Longitude']].values, metric=dist)
end = time.time()
print(end - start)
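For a sense of scale: with a Python callable as metric, pdist evaluates dist once per pair and returns a condensed float64 array with n*(n-1)/2 entries. A quick back-of-the-envelope check of what that means for n = 600,000:

n = 600_000
pairs = n * (n - 1) // 2                                 # pairwise distances pdist must produce
print(f"{pairs:,} calls to the Python-level dist()")     # 179,999,700,000 calls
print(f"{pairs * 8 / 1e12:.2f} TB of float64 output")    # ~1.44 TB just for the result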
Tags: python, clustering
asked Mar 21 at 5:49 by Karthik Katragadda (new contributor), edited Mar 21 at 6:33
1 Answer
Have you considered that the following steps will be even worse?
The standard algorithm for hierarchical clustering scales as O(n³) in runtime and needs the full O(n²) distance matrix. You simply don't want to use it on large data; it does not scale.
Do a simple experiment yourself: measure the time to compute the distances (and to do the clustering) for n = 1000, 2000, 4000, 8000, 16000, 32000, then estimate how long the entire data set would take, assuming you had enough memory (a timing sketch follows below). You will see that it is not feasible to use this algorithm on data this big.
You should rather reconsider your approach.
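A minimal sketch of that experiment, reusing the locations DataFrame and custom dist function from the question (the choice of average linkage here is arbitrary):

import time
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

for n in [1000, 2000, 4000, 8000, 16000, 32000]:
    sample = locations[['Latitude', 'Longitude']].values[:n]
    t0 = time.time()
    d = pdist(sample, metric=dist)          # O(n^2) distance computations
    t1 = time.time()
    linkage(d, method='average')            # agglomerative clustering on the condensed matrix
    t2 = time.time()
    print(f"n={n:6d}  distances: {t1 - t0:8.2f}s  clustering: {t2 - t1:8.2f}s")

Comparing the ratios of successive timings shows how quickly the cost grows before you commit to the full 600,000 points.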
answered Mar 21 at 6:59 by Anony-Mousse, edited Mar 21 at 16:37
"You just don't want to use it on large data." But how do I do it on large data sets? There must be some way. Can you suggest any other approach? I am a bit new to this.
– Karthik Katragadda, Mar 21 at 7:24
Karthik: compute how much memory you would need. You can try to implement this yourself by storing the distances on disk, as this will be several TB. But trust me, do the experiment I suggested first and estimate how long it will take you. Are you willing to wait weeks for the result?
– Anony-Mousse, Mar 21 at 16:14
If the experiment shows your runtime increases by a factor of 4 with each doubling of the size, going from 32k to 600k means you'll need about 350x as long. With the expected O(n³) increase, it will take about 6600x as long. Then you can estimate whether it's worth trying. Maybe add another factor of 10x for working on disk instead of in memory. You'll need about 1.341 TB of disk space to store the matrix, and as much working space. That is doable, but you'll need to read this matrix many, many times, so even with an SSD this will take several days just for the I/O.
– Anony-Mousse, Mar 21 at 16:31
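For reference, the arithmetic behind those factors, as a small sketch (the 8-byte entry size is an assumption):

n_small, n_full = 32_000, 600_000
ratio = n_full / n_small                            # 18.75
print(f"O(n^2) scaling: ~{ratio**2:.0f}x as long")  # ~352x
print(f"O(n^3) scaling: ~{ratio**3:.0f}x as long")  # ~6592x

pairs = n_full * (n_full - 1) // 2
print(f"~{pairs * 8 / 2**40:.2f} TiB to store the condensed matrix")  # ~1.31 TiB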