
Gradient computation in neural networks


I am working on understanding gradient computation in neural networks, but there is an issue with my computation. Let the weights between the input $X$ and the hidden layer be $W_{ij}$, and the weights between the hidden layer and the output be $W_{jk}$. Let $g_1$ be the activation function applied between the input and the hidden layer, and let the activation between the hidden layer and the output be linear. Then the gradient of the error with respect to $W_{ij}$ comes out to be $W_{jk} \cdot g_1'(W_{ij} X) \cdot X$. Now suppose we have 4 inputs, 3 hidden nodes, and 1 output. This gradient comes out with shape 1x4, but it should be 3x4 in order to update the weights according to the gradient descent rule.
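
For concreteness, here is a minimal NumPy sketch of the shape bookkeeping described above. The variable names, the choice of tanh for $g_1$, the squared-error loss, and the column-vector convention are assumptions for illustration only, not part of the original setup. Keeping the hidden-layer error term as a column vector and taking its outer product with the input gives a 3x4 gradient matching $W_{ij}$:

```python
import numpy as np

# Shapes for the example in the question: 4 inputs, 3 hidden nodes, 1 output.
# Assumed conventions (not from the original post): column vectors, a single
# training example, squared-error loss, tanh as g1, linear output activation.
rng = np.random.default_rng(0)

X = rng.standard_normal((4, 1))      # input, shape (4, 1)
W_ij = rng.standard_normal((3, 4))   # input -> hidden weights, shape (3, 4)
W_jk = rng.standard_normal((1, 3))   # hidden -> output weights, shape (1, 3)
y = rng.standard_normal((1, 1))      # target, shape (1, 1)

# Forward pass
z = W_ij @ X                         # hidden pre-activation, shape (3, 1)
h = np.tanh(z)                       # hidden activation g1(z), shape (3, 1)
y_hat = W_jk @ h                     # linear output, shape (1, 1)

# Backward pass for E = 0.5 * (y_hat - y)**2
delta_out = y_hat - y                                # dE/dy_hat, shape (1, 1)
delta_hidden = (W_jk.T @ delta_out) * (1 - h ** 2)   # shape (3, 1)

grad_W_jk = delta_out @ h.T          # shape (1, 3), matches W_jk
grad_W_ij = delta_hidden @ X.T       # outer product, shape (3, 4), matches W_ij

print(grad_W_ij.shape)               # (3, 4)
```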










      neural-network gradient-descent






edited Mar 20 at 16:56
shaifali Gupta

asked Mar 20 at 14:34
shaifali Gupta
769













