Logistic regression gradient descent classifier - more iterations lead to worse accuracy
I'm running a binary classification logistic regression. When I run gradient descent for 100 iterations I get ~90% prediction accuracy (the cost function is still decreasing steadily and has not converged yet). When I run the same algorithm for 1000, 10000, and 100000 iterations I get an identical ~85% accuracy, with the cost function having converged.
When I run the classification using the built-in logistic regression function in R, I get exactly the same ~85% accuracy as in my 1000, 10000, and 100000-iteration runs.
I'm not sure whether this should be impossible (meaning my code must contain an error) or whether there is something I'm not understanding about gradient descent for classification.
Any help would be appreciated, thanks.
Note: the quoted model accuracy is computed on the same training data used to fit the coefficients.
Edit: I used the glm() function combined with predict() to check my results.
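For reference, the built-in check I mean is roughly the following (a minimal sketch, assuming the same mtcars hp -> vs setup as in my code below):
# Baseline via R's built-in logistic regression
fit <- glm(vs ~ hp, data = mtcars, family = binomial)
glm_probs <- predict(fit, type = "response")  # fitted probabilities on the training data
glm_preds <- ifelse(glm_probs >= 0.5, 1, 0)
mean(glm_preds == mtcars$vs)                  # training accuracy of the glm() fit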
Here is the code I am using for my own implementation:
# Logistic classification -----------------------------------------
# Features and targets
predictors <- matrix(mtcars[, 'hp'])
outcome <- mtcars[, 'vs']
# Min-max feature scaling (only one feature this time)
for (i in 1:ncol(predictors)) {
  predictors[, i] <- (predictors[, i] - min(predictors[, i])) /
    (max(predictors[, i]) - min(predictors[, i]))
}
# Design matrix with an intercept column, and targets
X <- cbind(rep(1, nrow(predictors)), predictors)
y <- outcome
# Parameters
Thetas <- rep(0, ncol(X))
iterations <- 10000  # varying
Alpha <- 0.1
m <- length(y)
# Track updates
cost_history <- numeric(iterations)
Theta_history <- vector("list", iterations)
# Sigmoid activation
g <- function(x) 1 / (1 + exp(-x))
# Batch gradient descent on the cross-entropy cost
for (i in 1:iterations) {
  cost <- -1 / m * (t(y) %*% log(g(X %*% Thetas)) +
                    t(1 - y) %*% log(1 - g(X %*% Thetas)))
  cost_history[i] <- cost
  Theta_history[[i]] <- Thetas
  # Note: the usual 1/m factor is folded into the step here,
  # so the effective learning rate is Alpha * m
  Thetas <- Thetas - Alpha * (t(X) %*% (g(X %*% Thetas) - y))
}
# Inspect convergence
plot(cost_history)
# Predicted probabilities and class labels on the training data
probs <- g(X %*% Thetas)
preds <- ifelse(probs >= 0.5, 1, 0)
# Do predictions match the training labels?
pred_match <- as.numeric(preds[, 1] == y)
# Model (training) accuracy
mean(pred_match)
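To see where along the descent the ~90% figure occurs, the stored Theta_history can be replayed to compute training accuracy at every iteration (a sketch, reusing X, y, g, and Theta_history from the code above):
# Training accuracy at each stored parameter vector
acc_history <- sapply(Theta_history, function(th) {
  p <- ifelse(g(X %*% th) >= 0.5, 1, 0)
  mean(p[, 1] == y)
})
plot(acc_history, type = "l", xlab = "iteration", ylab = "training accuracy")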
Tags: classification, gradient-descent
asked Mar 20 at 13:43 by Lokipovnov (new contributor); edited Mar 20 at 14:13
Did you check the validation scores? Have you implemented logistic regression on your own (it would be helpful if you could provide your code), or are you using a package? – MachineLearner, Mar 20 at 13:56