Are support vector machines and logistic regression equivalent if data is linearly separable?
I understand that SVMs separate data by drawing a hyperplane with the biggest margin, but doesn't logistic regression do the same thing if the data are linearly separable?
svm logistic-regression

asked 2 days ago by guest (new contributor) · edited 2 days ago by Esmailian
3 Answers
Very closely, but not exactly. This post by Georgios Drakos works through the question mathematically and visually. Here is an image from the post comparing soft-margin SVM and logistic regression (LR):

[image from the linked post: soft-margin SVM vs. logistic regression decision boundaries]

answered 2 days ago by Esmailian
If the data are completely separable during training, validation, and testing, then yes, the two algorithms will perform equivalently: both will find a decision boundary that separates the data.

They're not the only algorithms that will draw such a boundary. A linear discriminant will also perform equivalently, as will many other algorithms. There is nothing special about SVC and LR; they and others are all trying to divide the data in the way that makes the most sense (i.e., with minimal error).

Completely separable data are rarely found in the wild, though. It's true that if the data are separable, you could potentially just define a decision boundary yourself (e.g., if x > 5 then y = 1, else y = 0) and achieve perfect performance without the hassle of optimization. As soon as the data aren't completely separable, you'll find differences in how the algorithms separate them. You're correct that SVC attempts to find the biggest margin between the classes, and when perfect separation isn't possible, it uses a loss function (usually hinge loss) to guide the hyperplane toward minimizing the number of misclassifications. LR works off of probabilities, which is a little different but usually performs similarly. A quick comparison is sketched below.

answered 2 days ago by Alex L
SVM is a kernel-based method: at its core the classification is binary, and the kernel must satisfy Mercer's condition, i.e. behave like a valid dot product. There are also various SVM kernels, such as the 'linear', 'poly', and 'radial basis function' (RBF) kernels.

Generally, real-world datasets are not linearly separable, and sometimes even a change of basis or dimensionality reduction cannot map them into a linearly separable space.

In a neural network, applying logistic regression at the last layer is more popular than using an SVM.

answered 2 days ago by abunickabhi