What does SpatialDropout1D() do to the output of Embedding() in Keras?
My Keras model looks like this:
```python
from keras.layers import (Input, Embedding, SpatialDropout1D, Bidirectional,
                          LSTM, GlobalMaxPooling1D, Dense, concatenate)

inp = Input(shape=(maxlen, ))
x = Embedding(max_features, embed_size, weights=[embedding_matrix], trainable=False)(inp)
x = SpatialDropout1D(dropout)(x)
x = Bidirectional(LSTM(num_filters, return_sequences=True))(x)
max_pool = GlobalMaxPooling1D()(x)
x = concatenate([x_h, max_pool])  # x_h is defined elsewhere in the original model
outp = Dense(6, activation="sigmoid")(x)
```
According to the documentation, the output shape of SpatialDropout1D equals its input shape, i.e. (samples, timesteps, channels).
Questions
- What does SpatialDropout1D() actually do to the output of Embedding()? I know the output of Embedding() has dimensions (batch_size, steps, features).
- Does SpatialDropout1D() just randomly replace some values of each word's embedding with 0?
- How is SpatialDropout1D() different from Dropout() in Keras?
deep-learning keras tensorflow lstm dropout
Intuitively: this version performs the same function as Dropout; however, it drops entire 1D feature maps instead of individual elements.
– Aditya
Sep 20 '18 at 4:00
What is a 1D feature map in the context of Embedding()?
– GeorgeOfTheRF
Sep 20 '18 at 4:02
stackoverflow.com/questions/50393666/…
– Aditya
Sep 20 '18 at 4:37
Dropping the columns themselves is what I can conclude; I'd need to play around with them, e.g. visualise the embeddings, to confirm it.
– Aditya
Sep 20 '18 at 8:49
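The comment thread can be made concrete with a small NumPy sketch of what SpatialDropout1D does at training time (my own illustration, not the Keras source; the helper name `spatial_dropout_1d` is invented). The keep/drop mask has shape (batch, 1, channels), so each embedding dimension is kept or zeroed for all timesteps at once:

```python
import numpy as np

def spatial_dropout_1d(x, rate, rng):
    """Sketch of SpatialDropout1D at training time: one keep/drop
    decision per channel, broadcast over the timestep axis, with the
    usual inverted-dropout rescaling of the surviving values."""
    batch, steps, channels = x.shape
    keep = (rng.random((batch, 1, channels)) >= rate).astype(x.dtype)
    return x * keep / (1.0 - rate)

rng = np.random.default_rng(0)
x = np.ones((1, 4, 3), dtype=np.float32)   # (batch, timesteps, channels)
y = spatial_dropout_1d(x, 0.5, rng)
# each channel "column" of y[0] is either all zeros or all 2.0
print(y[0])
```

Every column of `y[0]` is constant over the 4 timesteps: a dropped embedding dimension disappears for the whole sequence, which matches the "dropping the cols" behaviour described above.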
asked Sep 20 '18 at 3:55 by GeorgeOfTheRF
1 Answer
Basically, it zeros out entire channels (i.e. whole embedding dimensions) instead of individual values.
E.g. take [[1, 1, 1], [2, 4, 5]]: 2 timesteps, each carrying 3 channels. SpatialDropout1D zeros an entire channel, i.e. the same attribute of every timestep is set to 0, giving something like [[1, 1, 0], [2, 4, 0]]. The number of such drop patterns over the 3 channels is 3C0 + 3C1 + 3C2 + 3C3 = 8.
The intuition is that, just as adjacent pixels in an image are correlated, adjacent values along a sequence are correlated, so hiding individual elements does not regularize much; hiding an entire feature map does make a difference (see the linked reference material).
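To contrast this with plain Dropout() (the asker's third question), here is a NumPy sketch of my own (not the Keras implementation): plain Dropout draws an independent mask entry per element, while SpatialDropout1D draws one entry per channel and broadcasts it across timesteps.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.ones((1, 5, 4), dtype=np.float32)  # (batch, timesteps, channels)
rate = 0.5

# Plain Dropout: an independent keep/drop decision for every element
elem_mask = (rng.random(x.shape) >= rate).astype(x.dtype)
plain = x * elem_mask / (1.0 - rate)

# SpatialDropout1D: one decision per channel, shared by all timesteps
chan_mask = (rng.random((1, 1, x.shape[2])) >= rate).astype(x.dtype)
spatial = x * chan_mask / (1.0 - rate)

# `spatial` has whole zero columns; `plain` typically zeros scattered elements
```

The spatial variant is equivalent to passing `noise_shape=(batch, 1, channels)` to ordinary dropout, which is how the Keras docs describe SpatialDropout1D's mask shape.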
answered Mar 25 at 16:50 by Itachi