K-means: What are some good ways to choose an efficient set of initial centroids?

When the centroids are initialized at random, different runs of K-means produce different total SSEs, and this initialization is crucial to the algorithm's performance.
What are some effective approaches to this problem? Recent approaches are especially appreciated.










Tags: data-mining clustering k-means






edited May 2 '15 at 13:40 by aeroNotAuto
asked Apr 30 '15 at 13:42 by ngub05




















4 Answers






An approach that yields more consistent results is K-means++. This approach acknowledges that there is probably a better choice of initial centroid locations than simple random assignment. Specifically, K-means tends to perform better when the centroids are seeded in such a way that they are not clumped together in space.

In short, the method is as follows:

1. Choose one of your data points at random as the initial centroid.

2. Calculate $D(x)$, the distance between your initial centroid and every other data point $x$.

3. Choose your next centroid from the remaining data points with probability proportional to $D(x)^2$.

4. Repeat until all centroids have been assigned.

Note: $D(x)$ should be updated as more centroids are added; it should be the distance between a data point and its nearest centroid.

You may also be interested in reading the paper that proposes the method (Arthur and Vassilvitskii, 2007, "k-means++: The Advantages of Careful Seeding") and describes its overall expected performance.
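The steps above can be sketched in a few lines of plain NumPy. This is a minimal illustration of the seeding procedure only (the function name and signature are my own, not from any library); after seeding you would run ordinary K-means from these centroids:

```python
import numpy as np

def kmeans_pp_init(X, k, seed=0):
    """Pick k initial centroids from the rows of X using k-means++ seeding."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Step 1: first centroid chosen uniformly at random
    centroids = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # D(x)^2: squared distance from each point to its nearest chosen centroid
        diffs = X[:, None, :] - np.array(centroids)[None, :, :]
        d2 = np.min((diffs ** 2).sum(axis=-1), axis=1)
        # Steps 2-3: sample the next centroid with probability proportional to D(x)^2
        centroids.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centroids)
```

Because already-chosen points have $D(x) = 0$, they can never be drawn again, so the k centroids are always distinct data points.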







answered May 1 '15 at 14:06 by Ryan J. Smith





















I may be misunderstanding your question, but usually k-means chooses the centroids randomly for you, given the number of clusters you set (i.e., k). Choosing the number k tends to be a subjective exercise. A good place to start is an Elbow/Scree plot, described here:

http://en.wikipedia.org/wiki/Determining_the_number_of_clusters_in_a_data_set#The_Elbow_Method
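One way to draw such an elbow plot with scikit-learn is to fit K-means for a range of k and plot the total within-cluster SSE (exposed as inertia_) against k; the "elbow" is where the curve stops dropping sharply. The data here is synthetic, just for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Hypothetical data: three well-separated 1-D-ish blobs in 2-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in (0, 5, 10)])

ks = range(1, 10)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in ks]

plt.plot(list(ks), inertias, "o-")
plt.xlabel("k")
plt.ylabel("Total within-cluster SSE (inertia)")
plt.show()
```

With data like this the curve should bend visibly at k = 3, the true number of blobs.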







answered Apr 30 '15 at 18:26 by Jake C.











• I think the question is about centroid initialization, the options for which are 'k-means++', 'random', or an ndarray on the documentation page scikit-learn.org/stable/modules/generated/… – Itachi, Mar 25 at 17:21
















The usual approach to this problem is to re-run your K-means algorithm several times, with different random initializations of the centroids, and to keep the best solution. You can do that by evaluating the results on your training data or by means of cross-validation.

There are many other ways to initialize the centroids, but none of them will perform best for every single problem. You could evaluate these approaches, together with random initialization, on your particular problem.
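In scikit-learn this restart strategy is built in: the n_init parameter of KMeans runs that many random initializations and keeps the fit with the lowest inertia (total SSE). A manual version on synthetic data, just to make the idea explicit:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, size=(40, 2)) for c in (0, 6)])

# Manual restarts: fit once per seed, keep the model with the lowest total SSE
best = None
for seed in range(10):
    km = KMeans(n_clusters=2, init="random", n_init=1, random_state=seed).fit(X)
    if best is None or km.inertia_ < best.inertia_:
        best = km

# Equivalent built-in form: KMeans(n_clusters=2, init="random", n_init=10).fit(X)
print(best.inertia_)
```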







answered May 1 '15 at 12:54 by Pablo Suau





















I agree with the Elbow/Scree plot; I found it more intuitively sensible than a random seed. Here is example code to try it (it sweeps k for a k-nearest-neighbors classifier and plots accuracy against k; X_train, X_test, y_train, y_test are assumed to be already defined):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier

Ks = 30
mean_acc = np.zeros(Ks - 1)
std_acc = np.zeros(Ks - 1)

for n in range(1, Ks):
    # Train model and predict
    kNN_model = KNeighborsClassifier(n_neighbors=n).fit(X_train, y_train)
    yhat = kNN_model.predict(X_test)
    mean_acc[n - 1] = np.mean(yhat == y_test)
    std_acc[n - 1] = np.std(yhat == y_test) / np.sqrt(yhat.shape[0])

plt.plot(range(1, Ks), mean_acc, 'g')
plt.fill_between(range(1, Ks), mean_acc - std_acc, mean_acc + std_acc, alpha=0.10)
plt.legend(('Accuracy', '+/- 1 std'))
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors (K)')
plt.tight_layout()
plt.show()

print("The best accuracy was with", mean_acc.max(), "with k =", mean_acc.argmax() + 1)






answered Oct 24 '18 at 21:46 by Web Ster


























