

Why do we share parameters between two different inputs in the embeddings layer?


I have noticed that some deep learning networks with two inputs use a single embedding layer, sharing its parameters between the two different inputs.



As an example, in Keras (import added; vocab_size and embed_size are assumed to be defined elsewhere):

from keras.layers import Input, Embedding

input_target = Input((1,))
input_context = Input((1,))
embedding = Embedding(vocab_size, embed_size, input_length=1, name='embedding')
target = embedding(input_target)    # both inputs call the same layer object,
context = embedding(input_context)  # so they share one weight matrix


Why is it done this way?



To be clear, the alternative is to give each input its own embedding layer before moving on to the RNN or CNN layers.
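A minimal NumPy sketch may make the difference between the two options concrete (the sizes and word ids below are hypothetical, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_size = 10, 4

# Shared case: one weight matrix serves both inputs.
shared_W = rng.normal(size=(vocab_size, embed_size))
target_vec = shared_W[3]   # lookup for target word id 3
context_vec = shared_W[7]  # lookup for context word id 7

# Separate case: each input gets its own matrix,
# i.e. twice the number of embedding parameters.
W_target = rng.normal(size=(vocab_size, embed_size))
W_context = rng.normal(size=(vocab_size, embed_size))
```

In the shared case the same word id always maps to the same vector regardless of which input it arrives through; in the separate case the two lookups are unrelated.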










  • It depends on the use case. Sometimes you share parameters to reduce the parameter count, or because all inputs need to be embedded in the same way.
    – Andreas Look
    2 days ago










  • @Andreas Look, could you give an example?
    – Ghanem
    2 days ago










  • E.g., you embed two images in a low-dimensional space where distance is interpretable and then calculate their similarity, as in Siamese networks.
    – Andreas Look
    2 days ago
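To illustrate the Siamese-style use from the comment: because both inputs pass through the same embedding matrix, their vectors live in one common space, so a similarity measure such as cosine is directly meaningful. A small NumPy sketch (the matrix and ids are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, embed_size = 10, 4
shared_W = rng.normal(size=(vocab_size, embed_size))

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Comparable because both ids index the SAME matrix; with two
# independently trained matrices this comparison would be meaningless.
sim = cosine_similarity(shared_W[2], shared_W[5])
```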















deep-learning keras word-embeddings embeddings






asked 2 days ago by Ghanem on Data Science Stack Exchange










