CNN output shape explanation




I have the following sequential model:



model = models.Sequential()
model.add(Reshape([1] + in_shp, input_shape=in_shp))
model.add(ZeroPadding2D((0, 2)))
model.add(Conv2D(256, (1, 3), padding='valid', activation='relu', name='conv1',
                 data_format='channels_first', kernel_initializer='glorot_uniform'))
model.add(Dropout(dr))
model.add(ZeroPadding2D((0, 2)))
model.add(Conv2D(80, (2, 3), padding='valid', activation='relu', name='conv2',
                 data_format='channels_first', kernel_initializer='glorot_uniform'))
model.add(Dropout(dr))
model.add(Flatten())
model.add(Dense(256, activation='relu', kernel_initializer='he_normal', name='dense1'))
model.add(Dropout(dr))
model.add(Dense(len(classes), kernel_initializer='he_normal', name='dense2'))
model.add(Activation('softmax'))
model.add(Reshape([len(classes)]))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.summary()


and I got the following summary:



_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
reshape_1 (Reshape) (None, 1, 2, 128) 0
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 1, 6, 128) 0
_________________________________________________________________
conv1 (Conv2D) (None, 256, 6, 126) 1024
_________________________________________________________________
dropout_1 (Dropout) (None, 256, 6, 126) 0
_________________________________________________________________
zero_padding2d_2 (ZeroPaddin (None, 256, 10, 126) 0
_________________________________________________________________
conv2 (Conv2D) (None, 80, 9, 124) 122960
_________________________________________________________________
dropout_2 (Dropout) (None, 80, 9, 124) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 89280) 0
_________________________________________________________________
dense1 (Dense) (None, 256) 22855936
_________________________________________________________________
dropout_3 (Dropout) (None, 256) 0
_________________________________________________________________
dense2 (Dense) (None, 8) 2056
_________________________________________________________________
activation_1 (Activation) (None, 8) 0
_________________________________________________________________
reshape_2 (Reshape) (None, 8) 0
=================================================================
Total params: 22,981,976
Trainable params: 22,981,976
Non-trainable params: 0
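The Param # column in the summary can be cross-checked by hand with the usual parameter-count formulas (the variable names below are just illustrative):

```python
# Param counts: conv = out_ch * in_ch * kh * kw + out_ch (bias),
#               dense = in_features * out_features + out_features (bias).
conv1  = 256 * (1 * 1 * 3) + 256       # 1024
conv2  = 80 * (256 * 2 * 3) + 80       # 122960
dense1 = 89280 * 256 + 256             # 22855936
dense2 = 256 * 8 + 8                   # 2056
print(conv1 + conv2 + dense1 + dense2)  # 22981976, matching the summary total
```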


The model works fine, but I want to understand something about the conv1 layer: why has the width been reduced from 128 to 126? I am really confused; shouldn't it be the same value as in the previous layer?



The same thing happens for the conv2 layer: the height and width decrease from (10, 126) to (9, 124).



Could someone explain why?










  • I guess you have valid convolution. If you want that to be 128, set the convolution to same. – Vaalizaadeh, Mar 29 at 12:43

  • @Media: after you pointed me to the padding parameter (same or valid), I dug in a little and understood why the first conv layer drops from 128 to 126, but it does not really make sense to me that the second layer drops from 126 to 124, or from 10 to 9. – A.SDR, Mar 29 at 12:55

  • They are valid convolutions too; the same applies to those layers. – Vaalizaadeh, Mar 29 at 13:07

  • +1 to Media: when windows are cut off by the input (image?) edges, the number of windows is smaller than the width of the input. But you also appear to be zero-padding before the conv layers, and those layers appear to pad the wrong dimensions; try specifying data_format in the padding layers too, or just skip them in favor of padding inside the conv layers. – Ben Reiniger, Mar 29 at 13:43

  • I drew a small example and now I understand. Thank you. – A.SDR, Mar 29 at 14:11















Tags: machine-learning, neural-network, deep-learning, cnn, convolution






asked Mar 29 at 12:28 by A.SDR; edited Mar 29 at 18:17 by Vaalizaadeh








2 Answers


















In the convolution layer, the filter (in your case (1, 3) for conv1 and (2, 3) for conv2) is applied to the input to produce the output (feature map); the filter is slid to the right and down by a parameter called the stride (not defined in your case, so the default of 1 is used). With padding='valid' the output dimensions shrink, but if you change it to padding='same' the output dimensions will match the input, because of zero padding (i.e. padding the image borders with zeros).
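As a minimal sketch of the usual size arithmetic for a valid convolution (the function name here is illustrative, not a Keras API), with input length n, kernel length k, and stride s:

```python
# Output length along one axis of a 'valid' convolution (no padding):
# out = floor((n - k) / s) + 1
def conv_out_len(n, k, s=1):
    return (n - k) // s + 1

# The shapes from the question's model.summary():
print(conv_out_len(128, 3))  # conv1 width:  128 -> 126
print(conv_out_len(10, 2))   # conv2 height:  10 -> 9
print(conv_out_len(126, 3))  # conv2 width:  126 -> 124
```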






It is because of the kind of convolution you've used: a valid convolution. If you want the output width to stay 128, set the padding to same. Note that this applies to the deeper layers as well; each of them can use either kind of convolution.
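A minimal sketch contrasting the two padding modes (stride 1; out_len is an illustrative helper, not a Keras API):

```python
import math

# 'valid' keeps only full windows; 'same' zero-pads so that, with stride s,
# the output length is ceil(n / s) -- i.e. unchanged when s == 1.
def out_len(n, k, s=1, padding="valid"):
    if padding == "valid":
        return (n - k) // s + 1
    if padding == "same":
        return math.ceil(n / s)
    raise ValueError(padding)

print(out_len(128, 3, padding="valid"))  # 126: the width shrinks
print(out_len(128, 3, padding="same"))   # 128: the width is preserved
```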






– SoK (answered Mar 29 at 18:43, edited Mar 30 at 6:51)





















– Vaalizaadeh (answered Mar 29 at 18:16, edited Mar 29 at 18:50 by Esmailian)


























