
Why do we share parameters between two different inputs in the embeddings layer?


I noticed that some deep learning networks with two inputs use a single embedding layer, so that the parameters are shared between the two inputs.



As an example, in Keras:



from keras.layers import Input, Embedding

# vocab_size and embed_size are assumed to be defined elsewhere
input_target = Input((1,))
input_context = Input((1,))
# one Embedding layer, applied to both inputs, so its weights are shared
embedding = Embedding(vocab_size, embed_size, input_length=1, name='embedding')
target = embedding(input_target)
context = embedding(input_context)


Why is it done this way?



To make everything clear, the other case would be a separate embedding layer for each input before moving on to the RNN or CNN layers; a sketch of that case follows below.
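For contrast, here is a minimal sketch of that second case, reusing the same (assumed) vocab_size and embed_size: each input gets its own Embedding layer, so the two weight matrices are learned independently and the embedding parameter count doubles. The layer names target_embedding and context_embedding are purely illustrative.

from keras.layers import Input, Embedding

# vocab_size and embed_size assumed defined, as in the snippet above
input_target = Input((1,))
input_context = Input((1,))

# two separate Embedding layers: no weight sharing between the two inputs
target_embedding = Embedding(vocab_size, embed_size, input_length=1, name='target_embedding')
context_embedding = Embedding(vocab_size, embed_size, input_length=1, name='context_embedding')

target = target_embedding(input_target)
context = context_embedding(input_context)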










deep-learning keras word-embeddings embeddings






asked 2 days ago









Ghanem
1186











  • It depends on the use case. Sometimes parameters are shared in order to reduce the parameter count, or because all inputs need to be embedded in the same way.
    – Andreas Look
    2 days ago










  • @Andreas Look, could you give an example?
    – Ghanem
    2 days ago










  • E.g. you embed two images in a low-dimensional space where distance is interpretable and then compute their similarity afterwards, as in Siamese networks (see the sketch after these comments).
    – Andreas Look
    2 days ago
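A minimal sketch of the kind of setup the last comment describes, in Keras (the sizes and the cosine-similarity head are illustrative assumptions, not taken from the original post): one shared Embedding layer maps both inputs into the same space, which keeps the embedding parameter count at roughly vocab_size * embed_size instead of twice that, and makes distances between the two embeddings directly comparable.

from keras.layers import Input, Embedding, Flatten, Dot
from keras.models import Model

vocab_size, embed_size = 10000, 128  # illustrative sizes

input_a = Input((1,))
input_b = Input((1,))

# one shared Embedding layer: both inputs are embedded in the same space
shared_embedding = Embedding(vocab_size, embed_size, input_length=1)

a = Flatten()(shared_embedding(input_a))
b = Flatten()(shared_embedding(input_b))

# cosine similarity between the two embeddings, as in a Siamese-style model
similarity = Dot(axes=1, normalize=True)([a, b])

model = Model(inputs=[input_a, input_b], outputs=similarity)
model.summary()  # lists a single embedding weight matrix used by both branches

With two separate Embedding layers instead, the same summary would list two weight matrices, and the distance between target and context vectors would no longer live in a single shared space.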















