

CNN accuracy and loss don't change over epochs for sentiment analysis
I am performing binary text classification, labeling texts as Good [1] or Bad [0]. The texts are preprocessed and converted to vectors using Google's pretrained Word2Vec embeddings, and a CNN architecture is trained on top of them. I have roughly 13,000 Bad [0] texts and 5,450 Good [1] texts for training (roughly a 70:30 split).
The trouble started when I realized I did not have enough compute power (a 2 GB GPU). As a compromise, I used 100-dimensional word embeddings from Word2Vec instead of the full 300. After some hyperparameter tuning of the CNN, I was able to obtain 30-35% precision, which I was happy with.
Months later, I got an 8 GB GPU on the server, switched to the 300-dimensional Word2Vec embeddings, and retrained. I expected better results; instead, the loss and accuracy do not change at all across epochs, and the model predicts every text as Bad [0].
Can you please help me identify the problem? Am I missing something here?
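Not part of the original setup, but given the roughly 70:30 class imbalance described above and the collapse to the majority class, one standard check is to weight the minority class more heavily during training. A minimal sketch (the commented `model.fit` call is hypothetical and assumes a Keras-style API):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Roughly 13,000 Bad [0] vs 5,450 Good [1] training texts, as above.
y_train = np.array([0] * 13000 + [1] * 5450)

# 'balanced' weights are inversely proportional to class frequency,
# so the minority class [1] gets the larger weight.
weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1]),
                               y=y_train)
class_weight = {0: weights[0], 1: weights[1]}
print(class_weight)

# A Keras-style model (hypothetical here) would then be trained with:
# model.fit(X_train, y_train, epochs=10, class_weight=class_weight)
```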



EDIT:



I would like to add some clarifications.
On the Linux server with a GTX 1070 Ti (8 GB) GPU, I ran three experiments, in this order: a) 300-dimensional word embeddings, b) 100-dimensional word embeddings, and c) 105-dimensional word embeddings.
For a) and c), the accuracy and loss do not change at all. For b), the results are exactly the same as those from my local GPU (an Nvidia 750 Ti, 2 GB). In short, the 100-dimensional setup works fine on the server.



Since 300-dimensional word vectors do not fit on my local GPU, I repeated experiment c) with 105-dimensional vectors locally to check whether the code itself was at fault, and surprisingly it gives around 30% precision, much like the earlier results.



I am having a hard time pinning down the issue on the server GPU: it works fine with 100-dimensional word vectors but fails to produce proper predictions for the other embedding sizes.
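A low-tech sanity check that is independent of the GPU (a sketch; `embedding_matrix` here is a random stand-in for the matrix actually built from the Word2Vec vectors) is to verify the embedding matrix's shape and confirm it contains no NaN or Inf values on both machines, since a single bad row is enough to freeze training:

```python
import numpy as np

# Stand-in for the matrix built from the Word2Vec vectors:
# rows = vocabulary size, cols = embedding dimension (100, 105 or 300).
embedding_matrix = np.random.normal(size=(20000, 300)).astype("float32")

# Checks worth running on both machines before blaming the GPU.
assert embedding_matrix.ndim == 2
assert embedding_matrix.shape[1] in (100, 105, 300)
assert not np.isnan(embedding_matrix).any(), "NaNs poison every gradient"
assert not np.isinf(embedding_matrix).any()
print(embedding_matrix.shape, embedding_matrix.dtype)
```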



I am attaching some results which should make this clearer:



1.) Trained with 100d vectors on the local (2 GB) GPU

             precision    recall  f1-score   support
        0.0       0.70      0.83      0.76      1973
        1.0     **0.31**    0.17      0.22       850
avg / total       0.58      0.63      0.60      2823



2.) Trained with 100d vectors on the server (8 GB) GPU

             precision    recall  f1-score   support
        0.0       0.70      0.77      0.73      1973
        1.0     **0.30**    0.23      0.26       850
avg / total       0.58      0.61      0.59      2823



3.) Identical result for both 105d and 300d vectors trained on the server GPU

             precision    recall  f1-score   support
        0.0       0.70      1.00      0.82      1973
        1.0     **0.00**    0.00      0.00       850
avg / total       0.49      0.70      0.58      2823



4.) Trained with 105d vectors on the local (2 GB) GPU

             precision    recall  f1-score   support
        0.0       0.70      0.83      0.76      1973
        1.0     **0.30**    0.18      0.23       850
avg / total       0.57      0.64      0.61      2823
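For what it's worth, the numbers in result 3.) are exactly what a classifier that predicts the majority class for every sample produces, which confirms the collapse. A quick reproduction (assuming the tables came from scikit-learn's `classification_report`):

```python
import numpy as np
from sklearn.metrics import classification_report

# Same test split as in the tables: 1973 Bad [0] and 850 Good [1] texts.
y_true = np.array([0] * 1973 + [1] * 850)
y_pred = np.zeros_like(y_true)   # every text predicted as Bad [0]

# Class 0.0 gets precision 1973/2823 ≈ 0.70, recall 1.00, f1 ≈ 0.82,
# and class 1.0 gets all zeros -- matching result 3.) above.
print(classification_report(y_true, y_pred, zero_division=0))
```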





















Tags: cnn, word2vec, accuracy, sentiment-analysis, gpu






edited Apr 11 at 8:34 by Amy
asked Apr 10 at 10:16 by Amy



















