What is the vector value of the [CLS] and [SEP] tokens in BERT?

"consumers choosing to rely" vs. "consumers to choose to rely"

Can an undergraduate be advised by a professor who is very far away?

Does a dangling wire really electrocute me if I'm standing in water?

What is the grammatical structure of "Il est de formation classique"?

What are the motivations for publishing new editions of an existing textbook, beyond new discoveries in a field?

How to check whether the reindex working or not in Magento?

Right tool to dig six foot holes?

When should I buy a clipper card after flying to Oakland?

How can I define good in a religion that claims no moral authority?

What do hard-Brexiteers want with respect to the Irish border?

Is it ok to offer lower paid work as a trial period before negotiating for a full-time job?

Why couldn't they take pictures of a closer black hole?

What to expect from an e-bike service?

Output the Arecibo Message

What information about me do stores get via my credit card?

How can I connect public and private node through a reverse SSH tunnel?

writing variables above the numbers in tikz picture

How come people say “Would of”?

Loose spokes after only a few rides

How can I refresh a custom data tab in the contact summary?

Is bread bad for ducks?

Can a flute soloist sit?

Cooking pasta in a water boiler

Keeping a retro style to sci-fi spaceships?



In BERT, the separator and start-of-sentence positions are replaced with the special tokens [SEP] and [CLS]. What are their corresponding values in the embedding_matrix? Are they zero vectors?
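
A minimal way to inspect those rows of the embedding matrix, assuming the Hugging Face transformers implementation (an assumption; the post does not say which BERT implementation is in use):

    # Minimal sketch, assuming the Hugging Face `transformers` library
    # (an assumption: the post does not name a specific implementation).
    import torch
    from transformers import BertModel, BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    # Input-embedding matrix: one learned row per vocabulary entry,
    # shape (30522, 768) for bert-base-uncased.
    embedding_matrix = model.get_input_embeddings().weight

    cls_vec = embedding_matrix[tokenizer.cls_token_id]  # [CLS], id 101
    sep_vec = embedding_matrix[tokenizer.sep_token_id]  # [SEP], id 102

    # Both rows are learned during pre-training, so neither is a zero vector.
    print(torch.count_nonzero(cls_vec).item(), torch.count_nonzero(sep_vec).item())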



I want to replace proper nouns such as names, buildings, and locations with a similar approach. How should I go about masking them?










      lstm word-embeddings






      asked Feb 27 at 9:30









Itachi

1 Answer






I think replacing proper nouns with an aggregated proper-noun vector should do the trick.



Basically, words like Barcelona, Spain, India, and other locations will have high similarity to a shared bias vector. We can find that vector by stacking their embeddings into a matrix and taking the standard deviation along the column axis: dimensions with a low standard deviation can be retained, and the rest can be set to 0.



E.g., Delhi can be replaced with [2, 3, 4, 0, 0, 0, ...], where [2, 3, 4, ...] are the attributes it shares with other locations.
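
A rough sketch of how that could be implemented, assuming pre-trained word vectors are available as a dict mapping token to NumPy array (the names word_vectors, proper_nouns, and keep_ratio are hypothetical; the answer does not specify a library or data source):

    import numpy as np

    def aggregate_proper_noun_vector(word_vectors, proper_nouns, keep_ratio=0.25):
        """Keep only the dimensions the given proper nouns share (low column-wise
        standard deviation) and zero out the rest, as described above."""
        matrix = np.stack([word_vectors[w] for w in proper_nouns])  # (n_words, dim)
        col_std = matrix.std(axis=0)                                # spread per dimension
        keep = col_std <= np.quantile(col_std, keep_ratio)          # most stable dimensions
        mean_vec = matrix.mean(axis=0)
        return np.where(keep, mean_vec, 0.0)                        # e.g. [2, 3, 4, 0, 0, ...]

    # Usage (hypothetical): replace the embedding of "Delhi" with the aggregate.
    # word_vectors["Delhi"] = aggregate_proper_noun_vector(
    #     word_vectors, ["Barcelona", "Spain", "India", "Delhi"])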






                answered Feb 28 at 9:10









Itachi
