Are support vector machines and logistic regression equivalent if data is linearly separable?


I understand that SVMs separate data by drawing a hyperplane with the biggest margin, but doesn't logistic regression do the same thing if the data is linearly separable?

svm logistic-regression






asked 2 days ago by guest, edited 2 days ago by Esmailian
3 Answers
Very closely, but not exactly. This post by Georgios Drakos goes through the question mathematically and visually. Here is an image from the post:



[image: linear SVM vs logistic regression]



which compares the soft-margin SVM and logistic regression (LR).
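As a quick empirical check (my own sketch, not from the linked post; it assumes scikit-learn and NumPy, and the blob dataset and C values are illustrative), you can fit both models on separable data and compare the hyperplanes:

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Two linearly separable Gaussian blobs centered at (-2, -2) and (+2, +2).
    X = np.vstack([rng.normal(loc=-2.0, size=(50, 2)),
                   rng.normal(loc=+2.0, size=(50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    svm = LinearSVC(C=1.0).fit(X, y)
    lr = LogisticRegression(C=1.0).fit(X, y)

    # Compare the orientations of the two separating hyperplanes.
    w_svm = svm.coef_.ravel() / np.linalg.norm(svm.coef_)
    w_lr = lr.coef_.ravel() / np.linalg.norm(lr.coef_)
    print("cosine similarity of boundary normals:", w_svm @ w_lr)
    print("SVM accuracy:", svm.score(X, y), "| LR accuracy:", lr.score(X, y))

On a run like this, both models typically reach 100% training accuracy and the boundary normals come out nearly parallel, yet not identical, which is the "very closely, but not exactly" above.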






answered 2 days ago by Esmailian

If the data are completely separable during training, validation, and testing, then yes, the two algorithms will perform equivalently. Both will find an optimal decision boundary to separate the data.



They're not the only algorithms that will draw such a boundary. A linear discriminant will also perform equivalently, as will many other algorithms. There isn't anything special about SVC and LR; they and the others are all trying to find a way to divide the data that makes the most sense (i.e., minimal error).



Completely separable data are rarely found in the wild, though. It's true that if the data are separable, you could potentially just define a decision boundary yourself (e.g., if x > 5 then y = 1, else y = 0) and achieve perfect performance without the hassle of optimization. As soon as the data aren't completely separable, you'll find differences in how the algorithms separate them. You're correct that SVC attempts to find the biggest margin between the data; when separation isn't perfect, it uses a loss function (usually hinge loss) to guide the hyperplane toward minimizing the number of misclassifications. LR works off of probabilities, which is a little different but usually performs similarly.
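A small numerical illustration of that last point (my own sketch, not from the original answer): hinge loss and log loss penalize the same signed margin m = y * f(x) differently, and this is exactly where the two methods diverge once the data stop being separable.

    import numpy as np

    # Signed margins m = y * f(x) for a few hypothetical points,
    # with labels y in {-1, +1} and decision function f(x).
    margins = np.array([-2.0, -0.5, 0.0, 0.5, 1.0, 2.0])

    hinge = np.maximum(0.0, 1.0 - margins)   # SVM: exactly zero once m >= 1
    logloss = np.log1p(np.exp(-margins))     # LR: positive for every m

    for m, h, l in zip(margins, hinge, logloss):
        print(f"margin {m:+.1f}   hinge {h:.3f}   log {l:.3f}")

Hinge loss is exactly zero for points beyond the margin, so well-classified points stop influencing the SVM hyperplane; log loss never reaches zero, so every point keeps a (small) pull on the logistic regression fit.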






answered 2 days ago by Alex L

SVM is a kernel-based method: at its core the classification is binary, and the kernel must obey Mercer's condition (it must act as a dot product in some feature space). There are also various SVM kernels, such as 'linear', 'poly', and 'rbf' (radial basis function).



Generally, real-world datasets are not linearly separable, and sometimes even a change of basis or dimensionality reduction cannot make them so.
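For a concrete (hypothetical) example of the kernel point, here is a scikit-learn sketch on concentric circles, a dataset no hyperplane in the original space can separate; the dataset and parameters are my own illustrative choices:

    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    # Concentric circles: not linearly separable in the original 2-D space.
    X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

    for kernel in ("linear", "poly", "rbf"):
        clf = SVC(kernel=kernel).fit(X, y)
        print(f"{kernel:>6} kernel training accuracy: {clf.score(X, y):.2f}")

The linear kernel stays near chance, while the RBF kernel implicitly maps the points into a space where the circles become separable.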



In neural networks, applying logistic regression at the last layer is more popular than using an SVM.






answered 2 days ago by abunickabhi








