Implementing SVM from scratch?



I am trying to implement an RBF-kernel SVM from scratch as practice for my upcoming interviews, using cvxopt to solve the dual optimization problem. However, when I compute the accuracy and compare it against sklearn's SVM on the same data, there is an extremely large discrepancy, and I have not been able to isolate the problem. The code is posted below; if anyone could point out what I am doing wrong, or suggest an alternative approach, I would greatly appreciate it.
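For context, `cvxopt.solvers.qp(P, q, G, h, A, b)` solves the standard form

$$\min_{\alpha} \ \tfrac{1}{2}\alpha^T P \alpha + q^T \alpha \quad \text{s.t.} \quad G\alpha \preceq h,\ A\alpha = b,$$

and the code below maps the soft-margin SVM dual onto it with $P_{ij} = y_i y_j K(x_i, x_j)$, $q = -\mathbf{1}$, $A = y^T$, $b = 0$, and the box constraint $0 \le \alpha_i \le C$ split across $G$ and $h$.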



from sklearn.datasets import load_iris
from sklearn.preprocessing import normalize
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
import numpy as np
import cvxopt

def rbf_kernel(gamma, **kwargs):
    def f(x1, x2):
        distance = np.linalg.norm(x1 - x2) ** 2
        return np.exp(-gamma * distance)
    return f

class SupportVectorMachine(object):
    def __init__(self, C=1, kernel=rbf_kernel, power=4, gamma=None, coef=4):
        self.C = C
        self.kernel = kernel
        self.power = power
        self.gamma = gamma
        self.coef = coef
        self.lagr_multipliers = None
        self.support_vectors = None
        self.support_vector_labels = None
        self.intercept = None

    def fit(self, X, y):
        n_samples, n_features = np.shape(X)

        # Set gamma to 1/n_features by default
        if not self.gamma:
            self.gamma = 1 / n_features

        # Initialize kernel method with parameters
        self.kernel = self.kernel(
            power=self.power,
            gamma=self.gamma,
            coef=self.coef)

        # Calculate the kernel matrix
        kernel_matrix = np.zeros((n_samples, n_samples))
        for i in range(n_samples):
            for j in range(n_samples):
                kernel_matrix[i, j] = self.kernel(X[i], X[j])

        # Define the quadratic optimization problem
        P = cvxopt.matrix(np.outer(y, y) * kernel_matrix, tc='d')
        q = cvxopt.matrix(np.ones(n_samples) * -1)
        A = cvxopt.matrix(y, (1, n_samples), tc='d')
        b = cvxopt.matrix(0, tc='d')

        if not self.C:  # no C given: hard-margin case, only alpha >= 0
            G = cvxopt.matrix(np.identity(n_samples) * -1)
            h = cvxopt.matrix(np.zeros(n_samples))
        else:           # soft margin: 0 <= alpha <= C
            G_max = np.identity(n_samples) * -1
            G_min = np.identity(n_samples)
            G = cvxopt.matrix(np.vstack((G_max, G_min)))
            h_max = cvxopt.matrix(np.zeros(n_samples))
            h_min = cvxopt.matrix(np.ones(n_samples) * self.C)
            h = cvxopt.matrix(np.vstack((h_max, h_min)))

        # Solve the quadratic optimization problem using cvxopt
        minimization = cvxopt.solvers.qp(P, q, G, h, A, b)

        # Lagrange multipliers
        lagr_mult = np.ravel(minimization['x'])

        # Extract support vectors:
        # get indexes of the non-zero Lagrange multipliers
        idx = lagr_mult > 1e-11
        # get the corresponding Lagrange multipliers
        self.lagr_multipliers = lagr_mult[idx]
        # get the samples that will act as support vectors
        self.support_vectors = X[idx]
        # get the corresponding labels
        self.support_vector_labels = y[idx]

        # Calculate the intercept with the first support vector
        self.intercept = self.support_vector_labels[0]
        for i in range(len(self.lagr_multipliers)):
            self.intercept -= self.lagr_multipliers[i] * self.support_vector_labels[
                i] * self.kernel(self.support_vectors[i], self.support_vectors[0])

    def predict(self, X):
        y_pred = []
        # Iterate through the samples and make predictions
        for sample in X:
            prediction = 0
            # Determine the label of the sample from the support vectors
            for i in range(len(self.lagr_multipliers)):
                prediction += self.lagr_multipliers[i] * self.support_vector_labels[
                    i] * self.kernel(self.support_vectors[i], sample)
            prediction += self.intercept
            y_pred.append(np.sign(prediction))
        return np.array(y_pred)


def main():
    print("-- SVM Classifier --")

    data = load_iris()

    # previous error:
    # X = normalize(data.data)
    # y = data.target

    # corrected version: keep only classes 1 and 2 and relabel them as -1/+1
    X = normalize(data.data[data.target != 0])
    y = data.target[data.target != 0]
    y[y == 1] = -1
    y[y == 2] = 1

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)
    clf = SupportVectorMachine(kernel=rbf_kernel, gamma=1)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred)
    print("Accuracy (scratch):", accuracy)

    clf_sklearn = SVC(gamma='auto')
    clf_sklearn.fit(X_train, y_train)
    y_pred2 = clf_sklearn.predict(X_test)
    accuracy = accuracy_score(y_test, y_pred2)
    print("Accuracy (sklearn):", accuracy)

if __name__ == "__main__":
    main()


RESULTS:
Accuracy (scratch): 0.31666666666666665
Accuracy (sklearn): 1.0



(The sklearn imports needed to run this are included at the top of the code.)










Tags: svm

asked Mar 25 at 4:02 by user70145; edited Mar 25 at 19:05

  • Could you please post the hyperparameters after you fitted the model, both from sklearn and your own model? – Pedro Henrique Monforte, Mar 25 at 18:51
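A quick way to print the fitted hyperparameters the comment asks about, assuming the `clf` and `clf_sklearn` objects from `main()` above (a sketch, not part of the original post):

# Scratch model: the relevant settings live directly on the instance
# (here gamma=1 was passed explicitly; a None gamma would be resolved
# to 1/n_features inside fit()).
print("scratch:", {"C": clf.C, "gamma": clf.gamma})

# sklearn: get_params() returns the constructor arguments; note that
# gamma='auto' is resolved to 1/n_features internally at fit time.
print("sklearn:", clf_sklearn.get_params())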
















1 Answer


















This is actually correct code; nothing is wrong with it per se.

However, note that it implements a binary SVM, i.e. one-versus-one: it can only separate two classes. It is not meant for more than two classes, which is why you would get a lower accuracy on a multi-class problem.
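If you do want to handle more than two classes with the class above, a one-vs-rest wrapper over several binary SVMs is one common approach. A minimal sketch (`OneVsRestSVM` is a hypothetical helper, not part of the original code, and it assumes the `SupportVectorMachine` class from the question):

import numpy as np

class OneVsRestSVM(object):
    """Hypothetical wrapper: one binary SupportVectorMachine per class."""

    def __init__(self, **svm_params):
        self.svm_params = svm_params
        self.classifiers = {}

    def fit(self, X, y):
        for cls in np.unique(y):
            # Relabel: current class -> +1, everything else -> -1
            y_bin = np.where(y == cls, 1.0, -1.0)
            clf = SupportVectorMachine(**self.svm_params)
            clf.fit(X, y_bin)
            self.classifiers[cls] = clf
        return self

    def predict(self, X):
        # Choose the class whose binary SVM gives the largest decision value.
        classes = list(self.classifiers)
        scores = np.array([[self._decision(self.classifiers[c], x)
                            for c in classes] for x in X])
        return np.array(classes)[np.argmax(scores, axis=1)]

    @staticmethod
    def _decision(clf, sample):
        # Same sum as SupportVectorMachine.predict, but without np.sign,
        # so the raw margins are comparable across classifiers.
        val = clf.intercept
        for i in range(len(clf.lagr_multipliers)):
            val += clf.lagr_multipliers[i] * clf.support_vector_labels[
                i] * clf.kernel(clf.support_vectors[i], sample)
        return val

Usage would mirror the question's `main()`, e.g. `OneVsRestSVM(kernel=rbf_kernel, gamma=1).fit(X_train, y_train).predict(X_test)`, with the original integer labels left as-is.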






answered Mar 25 at 18:42 by user70145


























