Does convolution kernel size affect the number of channels?
I am going through a blog post on Dilated Residual Networks. Under the heading "2. Multi-scale Context Aggregation", the author writes:

    The last one is the 1×1 convolutions for mapping the number of
    channels to be the same as the input one. Therefore, the input and the
    output have the same number of channels. And it can be inserted into
    different kinds of convolutional neural networks.

I thought we decide the number of channels of the next layer ourselves, and the kernels are initialized randomly; the kernel shape (1×1, 3×3, etc.) is also our choice. So what did the author mean by "1×1 convolutions for mapping the number of channels to be the same as the input one", when even a 2×2 convolutional kernel leaves the number of channels unchanged?

Tags: deep-learning, cnn, kernel

asked Mar 20 at 12:00 by InAFlash
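
For concreteness, here is a minimal sketch of the situation described in the question (PyTorch is an assumption; neither the question nor the blog post names a framework). Kernel size and output channel count are independent arguments when a convolutional layer is constructed:

    import torch
    import torch.nn as nn

    x = torch.randn(1, 64, 32, 32)  # a batch of one 64-channel feature map

    # The kernel size and the number of output channels are chosen
    # independently: a 2x2 kernel can keep the channel count or change it,
    # depending only on the out_channels we request.
    keep   = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=2)
    change = nn.Conv2d(in_channels=64, out_channels=32, kernel_size=2)

    print(keep(x).shape)    # torch.Size([1, 64, 31, 31])
    print(change(x).shape)  # torch.Size([1, 32, 31, 31])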

1 Answer

Normally, the output of a convolutional neural network is flattened into a single column vector after the convolutions and then perhaps processed by a dense layer. In this model, the $1\times 1$ convolution is used as the output layer. It has $C$ channels like every other layer, but it is not dilated. Hence, you can use this layer as the input to other convolutional neural networks.

The kernel size will not influence the number of channels. Imagine an RGB image with $4\times 4$ pixels. If we apply a $2\times 2$ convolution with a $2\times 2$ stride, we get an output of dimension $3\times 2\times 2$ (without padding), so the channels do not change. If we have $K$ such filters, we get $K$ outputs of size $3\times 2\times 2$. If the kernel size is $4\times 4$ with a stride of $2\times 2$, we get a $3\times 1\times 1$ output (without padding) for each of the $K$ filters. The kernel size only influences how large the receptive field of the convolution is; hence, it only influences how the layer scales the width and height dimensions (using RGB images as an example).
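
A quick way to check the shape arithmetic above is to run it (a minimal sketch, again assuming PyTorch; note that with the standard cross-channel convolution each filter spans all input channels, so the channel count stays at 3 here only because we explicitly request 3 filters):

    import torch
    import torch.nn as nn

    rgb = torch.randn(1, 3, 4, 4)  # one RGB image, 4x4 pixels

    # 2x2 kernel, 2x2 stride, no padding: spatial dims 4x4 -> 2x2.
    conv_a = nn.Conv2d(3, 3, kernel_size=2, stride=2)
    print(conv_a(rgb).shape)   # torch.Size([1, 3, 2, 2])

    # 4x4 kernel, 2x2 stride, no padding: spatial dims 4x4 -> 1x1.
    conv_b = nn.Conv2d(3, 3, kernel_size=4, stride=2)
    print(conv_b(rgb).shape)   # torch.Size([1, 3, 1, 1])

    # With K = 5 filters the output has 5 channels: the kernel size
    # changed the spatial dimensions, never the channel dimension.
    conv_k = nn.Conv2d(3, 5, kernel_size=2, stride=2)
    print(conv_k(rgb).shape)   # torch.Size([1, 5, 2, 2])

    # Flattening (discussed below) collapses channels and spatial
    # dimensions into a single vector per sample.
    print(conv_a(rgb).flatten(1).shape)  # torch.Size([1, 12])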



If you flatten the output of a layer, you always reduce it to a one-dimensional vector. The dilated convolutional neural network has no layer that flattens the input.

answered Mar 20 at 12:18, edited Mar 20 at 12:51 by MachineLearner

• I think it does not have 3 channels. The number of channels increases as you go deeper, and it is $C$ in the last layers. Am I wrong? – InAFlash, Mar 20 at 12:21

• @InAFlash: For the basic network the layer always stays at $C$ channels. I assumed RGB images, hence $C = 3$; I have corrected this. You could also take any other layer of the basic network and feed it into another network. – MachineLearner, Mar 20 at 12:24

• If you look at the large network, it does not. And my question was how kernel size is related to the number of channels. Thanks. – InAFlash, Mar 20 at 12:27

• @InAFlash I reached the same conclusion. A 1×1 kernel can give $C$ channels, $2C$ channels, any number of channels, just like a 3×3 kernel, which can give $C$ channels, $32C$ channels, any number. So the statement "1×1 because of the same channels" seems unjustified in the article. – Esmailian, Mar 20 at 12:44

• @InAFlash: The author just wants to emphasize that the last layer is not flattened, so that the output can be used by other networks. The author could have been more precise in explaining his/her point. – MachineLearner, Mar 20 at 12:46
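
To make the point of the last two comments concrete, one more short sketch (same PyTorch assumption as above): a 1×1 convolution maps $C$ input channels to however many output channels it is configured with, so "same number of channels as the input" is a design choice, not a property of the 1×1 kernel:

    import torch
    import torch.nn as nn

    C = 64
    x = torch.randn(1, C, 16, 16)

    # A 1x1 convolution is a per-pixel linear map across channels;
    # it preserves the channel count only if configured that way.
    same  = nn.Conv2d(C, C, kernel_size=1)      # C -> C   (the article's use)
    wider = nn.Conv2d(C, 2 * C, kernel_size=1)  # C -> 2C  (equally valid)

    print(same(x).shape)   # torch.Size([1, 64, 16, 16])
    print(wider(x).shape)  # torch.Size([1, 128, 16, 16])

What the 1×1 kernel does guarantee is that the spatial dimensions pass through unchanged without any padding, which is what makes such a layer easy to insert into other networks, as the answer notes.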