
Calculating saliency maps for text classification


I'm following the text classification with movie reviews TensorFlow tutorial, and I wanted to extend the project by identifying, for a given input, which words influenced the classification the most.



I understand this is called a saliency map, but I'm having trouble calculating it. I believe I need the gradients of the output with respect to the input. I tried to implement code similar to the code in this answer, to no avail. A complicating issue is that the model uses an embedding layer, through which the gradient cannot propagate back to the integer word indices, so I think one needs to calculate the gradients with respect to the output of the embedding layer instead.
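(To make this concrete: writing $f$ for the model's output score and $e_i$ for the embedding vector at word position $i$, one common definition of the per-word saliency is $s_i = \lVert \partial f / \partial e_i \rVert$.)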



It's probably wrong for all sorts of reasons, but this is the closest I've gotten with the Python code:



# Create the saliency function: d(model output) / d(embedding-layer output)
input_tensors = [model.layers[1].input, keras.backend.learning_phase()]
model_input = model.layers[1].input
model_output = model.output
gradients = model.optimizer.get_gradients(model_output, model_input)
compute_gradients = keras.backend.function(inputs=input_tensors, outputs=gradients)

# Word encoding: look up the embedding vectors for one example by hand
idx = 0  # calculate the saliency for the first training example
embeddings = model.layers[0].get_weights()[0]
embedded_training_data = embeddings[train_data[idx]]
# Add a batch dimension and pass learning_phase=0 (inference mode)
matrix = compute_gradients([embedded_training_data.reshape((1,) + embedded_training_data.shape), 0])


But the final matrix is the same row repeated, and I'm not sure how to interpret it. Any help would be greatly appreciated. Thankfully, since this extends a tutorial, there is a complete working example of the code!
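For readers attempting the same thing, here is a minimal sketch of the gradient computation described above: take the gradient with respect to the embedding layer's output while still feeding the integer-encoded input, so the embedding lookup never has to be differentiated. It assumes graph-mode TF 1.x and the tutorial's model and train_data; the helper name saliency_fn is made up for illustration.

import numpy as np
from tensorflow import keras

# Assumes model.layers[0] is the Embedding layer, as in the tutorial
emb_out = model.layers[0].output                       # (batch, seq_len, emb_dim)
grads = keras.backend.gradients(model.output, emb_out)[0]
saliency_fn = keras.backend.function([model.input], [grads])

grad_vals = saliency_fn([train_data[0:1]])[0]          # (1, seq_len, emb_dim)
word_saliency = np.linalg.norm(grad_vals[0], axis=-1)  # one score per word position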










machine-learning python deep-learning tensorflow






asked Feb 8 at 17:36 by Marc Jones





1 Answer






I've been working on this for the last few days, and I think I've answered my own question. Calculating individual word saliency is not possible with the model structure as-is, because of the model's GlobalAveragePooling layer: it averages the embedding matrix over the 'word' dimension, removing the ability to distinguish the effect of an individual word on the classification. This is the code I used to convince myself of what was happening, left here in the hope that the next soul to try this finds it.



import numpy as np
import tensorflow as tf
from tensorflow import keras

outputTensor = model.output

# Gradient of the output w.r.t. the embedding-layer output (one vector per word)
embeddingTensor = model.layers[1].input
gradientsEmbedding = tf.gradients(outputTensor, embeddingTensor)

# Gradient of the output w.r.t. the pooled (averaged) representation
globalAverageTensor = model.layers[2].input
gradientsAverage = tf.gradients(outputTensor, globalAverageTensor)

idx = 1

sess = keras.backend.get_session()

embedding = sess.run(embeddingTensor, feed_dict={model.input: train_data[(idx-1):idx, :]})
globalAverage = sess.run(model.layers[2].input, feed_dict={model.input: train_data[(idx-1):idx, :]})

# The pooled vector is exactly the mean of the embeddings over the word dimension
print(np.mean(embedding, 1))
print(globalAverage)

gradientMatrixEmbedding = sess.run(gradientsEmbedding, feed_dict={embeddingTensor: embedding})
gradientMatrixAverage = sess.run(gradientsAverage, feed_dict={globalAverageTensor: globalAverage})

# Every word position receives an identical gradient row, so per-word saliency is lost
print(np.sum(gradientMatrixEmbedding, 2))
print(gradientMatrixAverage)
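To see why the saliency rows come out identical: the pooling layer computes a mean over the word axis, and the gradient of a mean distributes uniformly across its inputs. A standalone toy sketch of just that effect (hypothetical shapes and head weights, TF 1.x graph mode, not the tutorial's model):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=(1, 5, 3))    # (batch, words, embedding_dim)
pooled = tf.reduce_mean(x, axis=1)                 # what the pooling layer does
score = tf.reduce_sum(pooled * [[1.0, 2.0, 3.0]])  # stand-in for the classifier head
grad = tf.gradients(score, x)[0]

with tf.Session() as sess:
    g = sess.run(grad, feed_dict={x: np.random.rand(1, 5, 3)})
    print(g)  # every word position gets the same row: the head weights / 5

Whatever values x takes, every word position receives the same gradient row, which matches the repeated rows observed in the question.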





answered Feb 12 at 15:35, last edited Feb 13 at 12:16, by Marc Jones