

My model accuracy doesn't change after first epoch




I've created a model to predict housing prices in LA. What should be a simple regression problem is giving me a headache, because the loss is just too big and my accuracy won't change.



I've already tried normalizing, changing the architecture (fewer layers and hidden units), adding dropout, and changing the loss function, batch size, and number of epochs, but my accuracy is still only 0.0022.



import tensorflow as tf

input_shape = X_train_2[0].shape

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=input_shape),
    tf.keras.layers.Dense(units=300, activation=tf.nn.relu),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(units=300, activation=tf.nn.relu),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(units=1, kernel_initializer='lecun_normal', activation='linear')
])

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=32)
model.summary()
model.evaluate(X_test_, y_test)


Training Log



Epoch 1/5 32444/32444 [==============================] - 1s 38us/sample - loss: 90230324650039.5469 - acc: 0.0012
Epoch 2/5 32444/32444 [==============================] - 1s 28us/sample - loss: 90230315396180.2031 - acc: 0.0022
Epoch 3/5 32444/32444 [==============================] - 1s 27us/sample - loss: 90230293267377.3438 - acc: 0.0022
Epoch 4/5 32444/32444 [==============================] - 1s 27us/sample - loss: 90230260607518.6250 - acc: 0.0022
Epoch 5/5 32444/32444 [==============================] - 1s 28us/sample - loss: 90230216684525.4375 - acc: 0.0022
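For scale intuition (using hypothetical prices, not the asker's data): MSE is quadratic in the target's units, so raw dollar prices on the order of 1e6 yield losses in the 1e10+ range even for a reasonable predictor. A quick numpy sketch:

```python
import numpy as np

# Hypothetical house prices in raw dollars (illustrative only)
prices = np.array([500_000.0, 1_200_000.0, 750_000.0, 2_000_000.0])

# Even predictions that are only 10% off give enormous squared errors
preds = prices * 0.9
mse = np.mean((prices - preds) ** 2)
print(f"MSE on raw prices: {mse:.3e}")   # on the order of 1e10

# The same predictions on log-scaled targets give a small, interpretable loss
log_mse = np.mean((np.log(prices) - np.log(preds)) ** 2)
print(f"MSE on log prices: {log_mse:.3e}")
```

This is why huge loss values alone do not necessarily mean the model is broken: they can simply reflect the units of the target.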




















  • "Your loss seems very large, but it is decreasing. Among the list of things you tried, I didn't see you try adjusting the learning rate. Could it be that you need to increase it?" – Chris Moorhead, Apr 9 at 1:34
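The effect this comment describes can be seen even in a toy gradient-descent loop (pure numpy, hypothetical data, not the asker's model): with a tiny learning rate the loss barely moves between epochs, while a better-chosen one makes clear progress.

```python
import numpy as np

# Toy 1-D least-squares problem: fit y = w * x with true w = 3
x = np.linspace(0.0, 1.0, 50)
y = 3.0 * x

def train(lr, steps=5, w=0.0):
    """Plain gradient descent on MSE; returns the final loss."""
    for _ in range(steps):
        grad = 2.0 * np.mean(x * (w * x - y))  # d/dw of mean((w*x - y)^2)
        w -= lr * grad
    return np.mean((w * x - y) ** 2)

loss_tiny = train(lr=1e-4)  # loss barely changes across steps
loss_good = train(lr=0.5)   # loss drops substantially
print(loss_tiny, loss_good)
```

In Keras the analogue would be passing an explicit learning rate to the optimizer (e.g. `tf.keras.optimizers.Adam(learning_rate=...)`) instead of the `'adam'` string default.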










  • "Can you add a sample input? Are the inputs normalized/scaled? If they are not and you have houses priced around 1e6, the network will produce a huge loss like that simply because the numbers themselves are big. Also, if the targets are scaled to something like (0, 1), it can't hurt to add a sigmoid in the last layer." – Pavel Savine, Apr 9 at 1:39











  • "Simply change the metric to MSE (mean squared error); accuracy is not a metric we should be using for regression." – thanatoz, Apr 9 at 4:50










  • "I normalized my inputs, but all my predictions are now 0.0." – Biel Borba, Apr 9 at 13:48
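Regarding the all-zero predictions after normalizing: if the targets are standardized for training, predictions come back in standardized units and must be mapped back to dollars; and a network stuck at 0.0 in standardized units is just predicting the mean price. A sketch of the usual round-trip, with a hypothetical `y_train`:

```python
import numpy as np

# Hypothetical raw price targets (illustrative only)
y_train = np.array([500_000.0, 1_200_000.0, 750_000.0, 2_000_000.0])

# Standardize targets for training
y_mean, y_std = y_train.mean(), y_train.std()
y_scaled = (y_train - y_mean) / y_std

# Suppose the model predicts in scaled units; map back to dollars
pred_scaled = y_scaled                     # perfect model, for illustration
pred_dollars = pred_scaled * y_std + y_mean
assert np.allclose(pred_dollars, y_train)  # round-trip recovers raw prices

# A model that outputs 0.0 in scaled units is predicting the mean price
print(0.0 * y_std + y_mean)
```

The same inverse transform is what `sklearn.preprocessing.StandardScaler.inverse_transform` does if a scaler object is used instead of manual arithmetic.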















neural-network tensorflow overfitting






asked Apr 9 at 0:35









Biel Borba

1 Answer






























Your metric is accuracy, but you are working on a regression problem, so accuracy doesn't make sense here. Use a regression metric instead:

metrics=['mean_squared_error']

– MaximeKan, answered Apr 9 at 1:48
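The answer's point can be checked directly: for continuous targets, a classification-style accuracy (fraction of exact matches) is essentially always ~0, while MSE/MAE behave as real quality measures. A numpy sketch with hypothetical values (note that the valid Keras metric strings are 'mean_squared_error'/'mse' and 'mean_absolute_error'/'mae', not 'mean squared error' with spaces):

```python
import numpy as np

y_true = np.array([1.00, 2.50, 3.75, 5.20])
y_pred = np.array([1.10, 2.40, 3.80, 5.00])  # close, but never exactly equal

# Classification-style accuracy: fraction of exact matches -> 0.0 for floats
accuracy = np.mean(y_true == y_pred)

# Regression metrics actually reflect how close the predictions are
mse = np.mean((y_true - y_pred) ** 2)
mae = np.mean(np.abs(y_true - y_pred))

print(accuracy, mse, mae)  # accuracy is 0.0 even though predictions are good
```

This is consistent with the training log above: the accuracy of ~0.002 is noise from near-coincidental matches, not a signal about model quality.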





