



Will a Count vectorizer ever perform (slightly) better than tf-idf?




For a binary classification task, I have a small dataset of about 1,000 texts (~590 positive and ~401 negative instances). With a training set of 800 and a test set of 200, I get (slightly) better accuracy with a count vectorizer than with TF-IDF.



Additionally, the count vectorizer picks out the relevant "words" when training the model, while TF-IDF does not. Even the confusion matrix for the count vectorizer shows marginally better numbers than TF-IDF's.



TF-IDF confusion matrix:
[[ 80  11]
 [  6 103]]

BoW confusion matrix:
[[ 81  10]
 [  6 103]]
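For reference, a sketch of how such a BoW-vs-TF-IDF comparison can be produced (scikit-learn assumed; the corpus below is a toy placeholder, not the original data):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# Toy placeholder corpus -- stands in for the ~1000 labelled texts.
texts = ["good great fine", "bad awful poor", "great nice good",
         "poor bad terrible", "fine nice great", "awful terrible bad"] * 20
labels = [1, 0, 1, 0, 1, 0] * 20

X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.2, random_state=0)

results = {}
for name, vec in [("BoW", CountVectorizer()), ("TF-IDF", TfidfVectorizer())]:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_tr), y_tr)   # fit vocabulary on the train split only
    pred = clf.predict(vec.transform(X_te))  # reuse that vocabulary on the test split
    results[name] = confusion_matrix(y_te, pred)
    print(name, results[name], sep="\n")
```

Note that each vectorizer is fitted on the training split only and then reused to transform the test split, matching the 800/200 setup described above.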


I haven't tried cross-validation yet, but it came as a shock that the count vectorizer performed a bit better than TF-IDF. Is it because my dataset is too small, or because I haven't applied any dimensionality reduction to limit the number of words each vectorizer takes into account? What am I doing wrong?
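Cross-validation would give a less noisy comparison than a single 800/200 split. A minimal sketch with scikit-learn (the corpus here is a synthetic placeholder, not the original data):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder corpus; swap in the real texts and labels.
texts = ["good great fine", "bad awful poor",
         "nice good fine", "awful poor bad"] * 25
labels = [1, 0, 1, 0] * 25

for name, vec in [("BoW", CountVectorizer()), ("TF-IDF", TfidfVectorizer())]:
    pipe = make_pipeline(vec, LogisticRegression(max_iter=1000))
    # The pipeline refits the vectorizer inside each fold, avoiding leakage.
    scores = cross_val_score(pipe, texts, labels, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Wrapping the vectorizer in a pipeline matters: it ensures the vocabulary (and idf statistics) are learned from each training fold alone, rather than from the full dataset.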



I'm sorry if this is a naive question, but I'm really new to ML.










classification nlp tfidf


asked Apr 10 at 12:49 by ftTomAndJerry
1 Answer



















I would say 1,000 documents is too few to draw any conclusion about the vectorization technique, and an increase of one true positive wouldn't matter either. As the vocabulary grows, TfidfVectorizer becomes better able to distinguish rare words from commonly occurring ones, while CountVectorizer still gives equal weight to all words, which is undesirable. So TfidfVectorizer should give better performance than CountVectorizer as the vocabulary size increases.







• This remains to be tested. I do not have a large annotated corpus of the text to test this at the moment. – ftTomAndJerry, Apr 15 at 8:18











answered Apr 10 at 13:36 by karthikeyan mg