Precision decreases with each epoch in CNN


I've built the following CNN to classify a binary data set (something like cats vs. dogs):



self.opt = SGD(lr=0.0001)

self.model.add(Conv2D(filters=16, kernel_size=3, input_shape=(100, 100, 3), padding='same'))
self.model.add(BatchNormalization())
self.model.add(Activation(self.activation))
self.model.add(MaxPooling2D(pool_size=2))

self.model.add(Conv2D(filters=32, kernel_size=3, activation=self.activation, padding='same'))
self.model.add(MaxPooling2D(pool_size=2))

self.model.add(Conv2D(filters=64, kernel_size=3, activation=self.activation, padding='same'))
self.model.add(MaxPooling2D(pool_size=2))

self.model.add(Conv2D(filters=128, kernel_size=3, activation=self.activation, padding='same'))
self.model.add(MaxPooling2D(pool_size=2))

self.model.add(Dropout(0.5))
self.model.add(Flatten())
self.model.add(Dense(150))
self.model.add(Activation(self.activation))
self.model.add(Dropout(0.5))
self.model.add(BatchNormalization())
self.model.add(Dense(1, activation='sigmoid'))
self.model.summary()
self.model.compile(loss='binary_crossentropy',
                   optimizer=self.opt,
                   metrics=[self.precision])


Sadly, training is going very badly: precision decreases with each epoch:



[Figure: training precision curve, decreasing over the epochs]



What are common causes of this behaviour?



Is the most likely cause that my training data is imbalanced toward one class? Right now I have 25% class 1 and 75% class 2.
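(Editor's aside: class imbalance like this can indeed distort a precision metric. One common mitigation is to pass per-class weights to `model.fit(..., class_weight=...)`. A minimal sketch of computing "balanced" weights; the helper name `balanced_class_weights` is illustrative, not part of the original code:)

```python
from collections import Counter

def balanced_class_weights(labels):
    """Weights inversely proportional to class frequency,
    following the common 'balanced' heuristic: n / (k * count(c))."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# 25% class 0, 75% class 1, matching the ratio in the question
y = [0] * 25 + [1] * 75
weights = balanced_class_weights(y)
print(weights)  # minority class is up-weighted: {0: 2.0, 1: 0.666...}
```

These weights could then be passed as `model.fit(x_train, y_train, class_weight=weights)` so that errors on the minority class contribute more to the loss.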



How much does the picture size influence performance? All pictures are currently 100x100. Would increasing the size, let's say to 400x400, make the CNN more capable of detecting the features?



Additional code:



Loading the Images:



def convert_image_to_array(files, relpath):
    images_as_array = []
    len_files = len(files)
    i = 0
    print("---ConvImg2Arr---")
    print("---STARTING---")
    for file in files:
        # Convert to a NumPy array and scale to [0, 1]
        images_as_array.append(img_to_array(load_img(relpath + file, target_size=(soll_img_shape, soll_img_shape))) / 255)
        if i == int(len_files * 0.2):
            print("20% done")
        if i == int(len_files * 0.5):
            print("50% done")
        if i == int(len_files * 0.8):
            print("80% done")
        i += 1
    print("---DONE---")
    return images_as_array

from sklearn.model_selection import train_test_split
from keras.preprocessing import image

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

cnn = nn.NeuralNetwork()
cnn.compile()

##### Load images to train, test and validate
x_train = np.array(convert_image_to_array(X_train, "images/processed/"))
x_test = np.array(convert_image_to_array(X_test, "images/processed/"))
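(Editor's aside: with a 25/75 class ratio it may also be worth stratifying the split above, so train and test sets keep the same class proportions; `train_test_split` supports this via its `stratify` parameter. A small self-contained sketch with toy data standing in for the real features:)

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy labels with the 25/75 ratio from the question; X is a placeholder
y = np.array([0] * 25 + [1] * 75)
X = np.arange(len(y)).reshape(-1, 1)

# stratify=y preserves the class ratio in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

print(np.bincount(y_test))  # 5 of class 0 and 15 of class 1, i.e. still 25/75
```

Without `stratify`, a small test set can end up with a noticeably different class ratio than the training set, which makes metrics harder to compare across runs.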


And here is the precision metric:



def precision(self, y_true, y_pred):
    '''Calculates the precision, a metric for multi-label classification of
    how many selected items are relevant.
    '''
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision
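(Editor's aside: as a sanity check, the same formula — true positives over all predicted positives, with predictions rounded at 0.5 — can be verified outside the Keras graph on a handful of predictions. This plain-Python equivalent is only illustrative; note also that Keras computes such metrics per batch and averages them, so batch-level values can look noisy:)

```python
def precision_plain(y_true, y_pred, eps=1e-7):
    """Precision = true positives / all predicted positives,
    with probabilities rounded at a 0.5 threshold (as K.round does)."""
    rounded = [round(p) for p in y_pred]
    true_pos = sum(t * r for t, r in zip(y_true, rounded))
    pred_pos = sum(rounded)
    return true_pos / (pred_pos + eps)

y_true = [1, 0, 1, 1]
y_pred = [0.9, 0.8, 0.3, 0.7]   # rounds to [1, 1, 0, 1]
print(precision_plain(y_true, y_pred))  # 2 true positives / 3 predicted -> ~0.667
```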
Tags: machine-learning, keras, cnn
asked Mar 18 at 6:20 by Phil (edited 2 days ago)