



Will a Count vectorizer ever perform (slightly) better than tf-idf?


For a binary classification task, I have a small dataset of about 1000 texts (~590 positive and ~401 negative instances). With a training set of 800 and a test set of 200, I get (slightly) better accuracy with a count vectorizer than with tf-idf.



Additionally, the count vectorizer picks out the relevant "words" when training the model, while tf-idf does not. Even the confusion matrix for the count vectorizer shows marginally better numbers than tf-idf's.



TFIDF confusion matrix
[[ 80  11]
 [  6 103]]
BoW confusion matrix
[[ 81  10]
 [  6 103]]
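For reference, the accuracies can be read directly off these matrices (a quick sanity check, assuming the diagonal holds the correctly classified instances):

```python
# Accuracy from the two confusion matrices above: diagonal / total.
tfidf_cm = [[80, 11], [6, 103]]
bow_cm = [[81, 10], [6, 103]]

def accuracy(cm):
    correct = cm[0][0] + cm[1][1]        # correctly classified instances
    total = sum(sum(row) for row in cm)  # all 200 test instances
    return correct / total

print(f"TF-IDF: {accuracy(tfidf_cm):.3f}")  # 0.915
print(f"BoW:    {accuracy(bow_cm):.3f}")    # 0.920
```

So the gap between the two is a single test instance, 0.5% of the test set.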


I haven't tried cross-validation yet, but it came as a shock to me that the count vectorizer performed a bit better than tf-idf. Is it because my dataset is too small, or because I haven't used any dimensionality reduction to limit the number of words considered by the two classifiers? What am I doing wrong?
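For what it's worth, here is a minimal sketch of how that cross-validated comparison could be set up with scikit-learn. The corpus below is a toy placeholder for the real 1000 texts, and LogisticRegression is just one reasonable choice of classifier, not necessarily the one used above:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder corpus; substitute the real texts and binary labels.
texts = ["good movie", "bad film", "great plot", "terrible acting"] * 50
labels = [1, 0, 1, 0] * 50

results = {}
for name, vec in [("count", CountVectorizer()), ("tfidf", TfidfVectorizer())]:
    # Pipeline ensures the vectorizer is fit only on each CV training fold.
    pipe = make_pipeline(vec, LogisticRegression(max_iter=1000))
    scores = cross_val_score(pipe, texts, labels, cv=5)  # 5-fold cross-validation
    results[name] = scores.mean()
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```

Averaging over folds like this gives a much more stable estimate than a single 800/200 split.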



Sorry if this is a naive question, but I am really new to ML.










Tags: classification, nlp, tfidf






asked Apr 10 at 12:49 by ftTomAndJerry




















1 Answer


















I would say 1000 documents is too few to draw any conclusion about the vectorization technique, and an increase of 1 in true positives doesn't matter either. As the vocabulary grows, TfidfVectorizer becomes better able to distinguish rare words from commonly occurring ones, while CountVectorizer keeps giving equal weight to all words, which is undesirable. So TfidfVectorizer should give you better performance than CountVectorizer as the size of the vocabulary increases.
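To make that weighting difference concrete, here is a small illustration with scikit-learn (a sketch on a toy corpus; the point is the relative weights, not the absolute numbers):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# "the" occurs in every document; "excellent" only in the first.
docs = ["the movie was excellent", "the movie was awful", "the movie was fine"]

count_vec = CountVectorizer().fit(docs)
tfidf_vec = TfidfVectorizer().fit(docs)
vocab = count_vec.vocabulary_  # same default tokenizer, so same vocabulary for both

X_count = count_vec.transform(docs).toarray()
X_tfidf = tfidf_vec.transform(docs).toarray()

# Raw counts weight the ubiquitous "the" and the rare "excellent" identically...
print(X_count[0][vocab["the"]], X_count[0][vocab["excellent"]])   # 1 1
# ...while tf-idf down-weights "the" relative to "excellent".
print(X_tfidf[0][vocab["the"]] < X_tfidf[0][vocab["excellent"]])  # True
```

With a small vocabulary this distinction barely affects the classifier; as the vocabulary grows, it increasingly separates informative words from filler.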






• This remains to be tested. I do not have a large annotated corpus of text to test this at the moment. – ftTomAndJerry, Apr 15 at 8:18











answered Apr 10 at 13:36 by karthikeyan mg










