Keras Classifier returns similar output for all Predictions


I trained the model to a training accuracy of 1.000 and a validation accuracy of 0.9565. Unfortunately, whenever I feed an image into the model I get the same output regardless of the input. Am I doing something wrong during prediction, or during training? W and A are my class labels.



My folder structure for the image generators is as follows:



images/
    a/
        a001.jpg.png ...
    w/
        w002.jpg.png ...



import cv2
from keras.preprocessing.image import ImageDataGenerator, img_to_array

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2)

test_datagen = ImageDataGenerator(rescale=1./255)

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(150, 150, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))

model.add(Conv2D(32, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))

model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2), padding='same'))

model.add(Flatten())  # converts the 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])

batch_size = 64

# this generator reads pictures found in subfolders of the train directory
# and indefinitely generates batches of augmented image data
train_generator = train_datagen.flow_from_directory(
    r'C:\Users\Zahid\Desktop\Dataset\train',  # target directory (raw string so backslashes are not escapes)
    target_size=(150, 150),                   # all images will be resized to 150x150
    batch_size=batch_size,
    color_mode='rgb',
    class_mode='binary')                      # binary labels, since binary_crossentropy is used

# a similar generator, for validation data
validation_generator = test_datagen.flow_from_directory(
    r'C:\Users\Zahid\Desktop\Dataset\val',
    target_size=(150, 150),
    batch_size=batch_size,
    color_mode='rgb',
    class_mode='binary')

model.fit_generator(
    train_generator,
    steps_per_epoch=2000 // batch_size,
    epochs=50,
    validation_data=validation_generator,
    validation_steps=800 // batch_size)
model.save_weights('first_try.h5')

img = cv2.imread(r"C:\Users\Zahid\Desktop\Data\TrainingData\images\a\img_0201.jpg.png")
resized_image = cv2.resize(img, (150, 150))   # 'img', not the undefined name 'image'
x = img_to_array(resized_image)
x = x.reshape((1,) + x.shape)
x = x / 255
print(x.shape)
scores_train = model.predict(x)
print(scores_train)
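One thing worth checking in the prediction snippet above (the thread does not confirm this as the cause): cv2.imread returns images in BGR channel order, while flow_from_directory loads them as RGB via PIL, so a manually loaded image is not preprocessed the same way as the training data. A minimal sketch of a prediction path that matches the training pipeline:

import cv2
from keras.preprocessing.image import img_to_array

# load, convert BGR -> RGB to match the generator's channel order, then resize and rescale
img = cv2.imread(r"C:\Users\Zahid\Desktop\Data\TrainingData\images\a\img_0201.jpg.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (150, 150))
x = img_to_array(img) / 255.0
x = x.reshape((1,) + x.shape)     # add the batch dimension: (1, 150, 150, 3)
print(model.predict(x))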









Tags: neural-network, keras, dataset






asked Apr 6 at 13:32 by Zahid Ahmed











  • SoK (Apr 6 at 13:40): What is the number 1 in your last Dense layer? Do you have only one class? And why do you use a sigmoid function in the output layer? Try using a softmax function instead.

  • Zahid Ahmed (Apr 6 at 13:43): @honas.cs I have two classes, as mentioned in the question, and I followed a Keras example to train this model. As shown in my folder structure, I have separated the classes into two separate folders and trained on them.
















2 Answers






























You are using sigmoid as the activation function in the last layer, so the model outputs a single probability: if it is above 50% the image is assigned to the "W" class, and if it is below 50% it is assigned to the "A" class.
If you can print the output probability for several different images and share them, it will help us understand the problem.

answered Apr 6 at 15:32 by Swapnil Pote









  • Zahid Ahmed (Apr 6 at 16:28): The output probability remains the same regardless of the image, at 3.2287784e-15.
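A minimal sketch of what this answer asks for (printing the output probability for several images), assuming the model and generators defined in the question:

# flow_from_directory assigns class indices alphabetically; confirm the mapping first
print(train_generator.class_indices)        # expected: {'a': 0, 'w': 1}

# predict on one validation batch so preprocessing matches training exactly
x_batch, y_batch = next(validation_generator)
probs = model.predict(x_batch)[:, 0]

for p, true_label in zip(probs, y_batch):
    predicted = 'w' if p > 0.5 else 'a'     # sigmoid output is p(class 1) = p('w')
    print("p(w)=%.4f  predicted=%s  true=%d" % (p, predicted, int(true_label)))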






























The issue was fixed by changing the final Dense layer to 2 units, thereby specifying two classes, and switching the sigmoid activation to a softmax function.

answered Apr 7 at 18:01 by Zahid Ahmed
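A minimal sketch of that change (the answer itself does not show code; switching to a two-unit softmax head also means the labels and loss have to become categorical):

model.add(Dense(2))                    # one unit per class instead of Dense(1)
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',   # pairs with softmax / one-hot labels
              optimizer='rmsprop',
              metrics=['accuracy'])

# the generators then need class_mode='categorical' instead of 'binary'
train_generator = train_datagen.flow_from_directory(
    r'C:\Users\Zahid\Desktop\Dataset\train',
    target_size=(150, 150),
    batch_size=batch_size,
    color_mode='rgb',
    class_mode='categorical')

# model.predict(x) now returns two probabilities summing to 1, in alphabetical
# class order: [p('a'), p('w')]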








