

What is the problem with this architecture? I am getting a "negative dimensions" error. I want to avoid dense layers and dropout.



























model = Sequential()
model.add(Conv2D(64, (1, 1), activation='relu', input_shape=(64, 64, 3)))
model.add(Conv2D(64, (5, 5), strides=(2, 2), padding='same'))
model.add(MaxPooling2D(pool_size=3, strides=(2, 2)))
model.add(Conv2D(64, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(2, 2), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='valid'))

model.add(AveragePooling2D(pool_size=2, strides=(2, 2), padding='valid'))
model.add(Conv2D(1, 1, 200))
model.add(Flatten())
model.add(Activation('softmax'))









      machine-learning neural-network deep-learning






asked Mar 22 at 12:28 by Vipul Gaurav, edited Mar 22 at 12:31
1 Answer







You are using too many layers and running out of spatial resolution.



Most of your convolutional layers use "valid" padding, meaning the convolution is applied only to actual "pixels", with no zero-padding, so the spatial dimensions of the output are smaller than those of the input.
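As a sanity check, the per-dimension output size of a convolution or pooling layer can be computed by hand. The following is a minimal sketch of the standard arithmetic (the helper name `conv_out` is just illustrative, not a Keras API):

```python
def conv_out(size, kernel, stride, padding):
    """Spatial output size along one dimension for a conv/pool layer."""
    if padding == 'same':
        return -(-size // stride)           # ceil(size / stride)
    return (size - kernel) // stride + 1    # 'valid': no zero-padding

# Tracing the spatial size through the first layers of the model:
size = conv_out(64, 1, 1, 'valid')    # 64 -> 64  (1x1 conv)
size = conv_out(size, 5, 2, 'same')   # 64 -> 32  (5x5 conv, stride 2, 'same')
size = conv_out(size, 3, 2, 'valid')  # 32 -> 15  (3x3 max pool, stride 2)
size = conv_out(size, 3, 1, 'valid')  # 15 -> 13  (each 3x3 'valid' conv removes 2 pixels)
```

Every 3x3 'valid' convolution shaves 2 pixels off each spatial dimension, which is why the feature map shrinks so quickly.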



I've marked where it happens in your script:



model = Sequential()
model.add(Conv2D(64, (1, 1), activation='relu', input_shape=(64, 64, 3)))
model.add(Conv2D(64, (5, 5), strides=(2, 2), padding='same'))
model.add(MaxPooling2D(pool_size=3, strides=(2, 2)))
model.add(Conv2D(64, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(2, 2), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(1, 1), activation='relu', padding='valid'))

model.summary()  # This is where it happens - the output of this layer has shape (1, 1, 128)

model.add(Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='valid'))

model.add(AveragePooling2D(pool_size=2, strides=(2, 2), padding='valid'))
model.add(Conv2D(1, 1, 200))
model.add(Flatten())
model.add(Activation('softmax'))


You can use Keras's `summary` method to inspect your model. The output of the script above, up to the `model.summary()` call, is:



          _________________________________________________________________
          Layer (type) Output Shape Param #
          =================================================================
          conv2d_1 (Conv2D) (None, 64, 64, 64) 256
          _________________________________________________________________
          conv2d_2 (Conv2D) (None, 32, 32, 64) 102464
          _________________________________________________________________
          max_pooling2d_1 (MaxPooling2 (None, 15, 15, 64) 0
          _________________________________________________________________
          conv2d_3 (Conv2D) (None, 15, 15, 64) 4160
          _________________________________________________________________
          conv2d_4 (Conv2D) (None, 13, 13, 64) 36928
          _________________________________________________________________
          conv2d_5 (Conv2D) (None, 13, 13, 64) 4160
          _________________________________________________________________
          conv2d_6 (Conv2D) (None, 11, 11, 64) 36928
          _________________________________________________________________
          conv2d_7 (Conv2D) (None, 11, 11, 64) 4160
          _________________________________________________________________
          conv2d_8 (Conv2D) (None, 9, 9, 64) 36928
          _________________________________________________________________
          conv2d_9 (Conv2D) (None, 5, 5, 128) 8320
          _________________________________________________________________
          conv2d_10 (Conv2D) (None, 3, 3, 128) 147584
          _________________________________________________________________
          conv2d_11 (Conv2D) (None, 3, 3, 128) 16512
          _________________________________________________________________
          conv2d_12 (Conv2D) (None, 1, 1, 128) 147584
          _________________________________________________________________
          conv2d_13 (Conv2D) (None, 1, 1, 128) 16512
          =================================================================
          Total params: 562,496
          Trainable params: 562,496
          Non-trainable params: 0
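Continuing the arithmetic one step further makes the failure concrete: the next layer is a 3x3 'valid' convolution applied to the 1x1 feature map of `conv2d_13`, so its output size would be negative. This is plain 'valid'-padding arithmetic, nothing Keras-specific:

```python
# 'valid' convolution output size: (input - kernel) // stride + 1
out = (1 - 3) // 1 + 1
print(out)  # -1: the negative dimension reported in the error
```

Using `padding='same'` in the deeper 3x3 layers, or downsampling less aggressively early on, would keep the spatial dimensions positive.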





                _________________________________________________________________
                conv2d_11 (Conv2D) (None, 3, 3, 128) 16512
                _________________________________________________________________
                conv2d_12 (Conv2D) (None, 1, 1, 128) 147584
                _________________________________________________________________
                conv2d_13 (Conv2D) (None, 1, 1, 128) 16512
                =================================================================
                Total params: 562,496
                Trainable params: 562,496
                Non-trainable params: 0





                share|improve this answer









                $endgroup$



You are using too many 'valid'-padded layers, so the network runs out of spatial dimensions.



Most of your convolutional layers use "valid" padding, meaning the convolution is applied only to actual "pixels", with no zero-padding at the borders; as a result, the spatial dimensions of the output are smaller than those of the input. Once the feature map has shrunk to 1×1, the next 3×3 "valid" convolution cannot fit inside it, which is what produces the negative-dimension error.
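The shrinkage is easy to verify by hand. Here is a minimal sketch (plain Python, independent of Keras, and my own illustration rather than part of the original answer) that traces the spatial size through the layers from the question, using the standard output-size formulas: floor((n − k) / s) + 1 for 'valid' padding and ceil(n / s) for 'same':

```python
import math

def conv_out(size, kernel, stride=1, padding='valid'):
    """Spatial output size of a Conv2D/MaxPooling2D layer (Keras convention)."""
    if padding == 'same':
        return math.ceil(size / stride)
    # 'valid': no padding, the kernel must fit entirely inside the input
    return (size - kernel) // stride + 1

# (kernel, stride, padding) for every layer up to the point marked in the script
layers = [
    (1, 1, 'valid'),  # Conv2D(64, (1, 1))
    (5, 2, 'same'),   # Conv2D(64, (5, 5), strides=(2, 2), padding='same')
    (3, 2, 'valid'),  # MaxPooling2D(pool_size=3, strides=(2, 2))
    (1, 1, 'valid'), (3, 1, 'valid'),
    (1, 1, 'valid'), (3, 1, 'valid'),
    (1, 1, 'valid'), (3, 1, 'valid'),
    (1, 2, 'valid'),  # Conv2D(128, (1, 1), strides=(2, 2))
    (3, 1, 'valid'), (1, 1, 'valid'),
    (3, 1, 'valid'), (1, 1, 'valid'),
]

size = 64
for kernel, stride, padding in layers:
    size = conv_out(size, kernel, stride, padding)

print(size)               # 1 -> the feature map is already 1x1 here
print(conv_out(size, 3))  # -1 -> the next 3x3 'valid' conv triggers the error
```

The printed sizes match the `model.summary()` output below: the map collapses to 1×1 at `conv2d_12`, and the first 3×3 "valid" convolution added after that point is what raises the error.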



                I've marked down where it happens in your script:



model = Sequential()
model.add(Conv2D(64, (1, 1), activation='relu', input_shape=(64, 64, 3)))
model.add(Conv2D(64, (5, 5), strides=(2, 2), padding='same'))
model.add(MaxPooling2D(pool_size=3, strides=(2, 2)))
model.add(Conv2D(64, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(64, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(2, 2), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (1, 1), strides=(1, 1), activation='relu', padding='valid'))

model.summary()  # This is where it happens - the output of this layer has shape (1, 1, 128)

model.add(Conv2D(128, (3, 3), strides=(1, 1), activation='relu', padding='valid'))  # 3x3 kernel on a 1x1 input -> negative dimensions
model.add(Conv2D(128, (1, 1), strides=(1, 1), activation='relu', padding='valid'))
model.add(Conv2D(128, (3, 3), strides=(1, 1), padding='valid'))

model.add(AveragePooling2D(pool_size=2, strides=(2, 2), padding='valid'))
model.add(Conv2D(1, 1, 200))
model.add(Flatten())
model.add(Activation('softmax'))


You can use Keras' summary() method to investigate your model. For example, the output from the script above is:



                _________________________________________________________________
                Layer (type) Output Shape Param #
                =================================================================
                conv2d_1 (Conv2D) (None, 64, 64, 64) 256
                _________________________________________________________________
                conv2d_2 (Conv2D) (None, 32, 32, 64) 102464
                _________________________________________________________________
                max_pooling2d_1 (MaxPooling2 (None, 15, 15, 64) 0
                _________________________________________________________________
                conv2d_3 (Conv2D) (None, 15, 15, 64) 4160
                _________________________________________________________________
                conv2d_4 (Conv2D) (None, 13, 13, 64) 36928
                _________________________________________________________________
                conv2d_5 (Conv2D) (None, 13, 13, 64) 4160
                _________________________________________________________________
                conv2d_6 (Conv2D) (None, 11, 11, 64) 36928
                _________________________________________________________________
                conv2d_7 (Conv2D) (None, 11, 11, 64) 4160
                _________________________________________________________________
                conv2d_8 (Conv2D) (None, 9, 9, 64) 36928
                _________________________________________________________________
                conv2d_9 (Conv2D) (None, 5, 5, 128) 8320
                _________________________________________________________________
                conv2d_10 (Conv2D) (None, 3, 3, 128) 147584
                _________________________________________________________________
                conv2d_11 (Conv2D) (None, 3, 3, 128) 16512
                _________________________________________________________________
                conv2d_12 (Conv2D) (None, 1, 1, 128) 147584
                _________________________________________________________________
                conv2d_13 (Conv2D) (None, 1, 1, 128) 16512
                =================================================================
                Total params: 562,496
                Trainable params: 562,496
                Non-trainable params: 0
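Since the question rules out dense layers and dropout, one possible fix (my suggestion, not part of the original answer) is to give the 3×3 convolutions padding='same', so that only the deliberately strided layers and the pooling downsample the feature map. Tracing the shapes with the same arithmetic as before shows the map never collapses:

```python
import math

def conv_out(size, kernel, stride=1, padding='valid'):
    """Spatial output size of a Conv2D/MaxPooling2D layer (Keras convention)."""
    if padding == 'same':
        return math.ceil(size / stride)
    return (size - kernel) // stride + 1

# Roughly the same stack as the question, but every 3x3 conv now uses 'same'
# padding, so only the strided stem, the max-pool, and the strided 1x1 conv
# reduce the spatial size.
layers = (
    [(1, 1, 'valid'), (5, 2, 'same'), (3, 2, 'valid')]   # stem + maxpool
    + [(3, 1, 'same'), (1, 1, 'valid')] * 3              # 64-filter block
    + [(1, 2, 'valid')]                                  # strided 1x1 conv
    + [(3, 1, 'same'), (1, 1, 'valid')] * 4              # 128-filter block
)

size = 64
for kernel, stride, padding in layers:
    size = conv_out(size, kernel, stride, padding)
    assert size > 0, "a layer would produce a negative dimension"

print(size)  # 8 -> still room for the final AveragePooling2D
```

The alternative is simply to use fewer "valid" layers, or a larger input, so that the spatial dimensions survive to the end of the stack.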






                answered Mar 22 at 13:10









Mark.F

