Not able to interpret decision tree when using class_weights

I'm working with an imbalanced dataset and am using a decision tree (scikit-learn) to build a model. To illustrate my problem I've used the iris dataset.


When I set class_weight=None, I understand how the tree assigns the probability scores returned by predict_proba.

When I set class_weight='balanced', I know it uses the target values to calculate the class weights, but I can't work out how the tree assigns the probability scores.
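For reference, the 'balanced' heuristic computes each class weight as n_samples / (n_classes * count_c). A minimal sketch in plain NumPy, mirroring what scikit-learn's compute_class_weight('balanced', ...) does:

```python
import numpy as np

def balanced_weights(y):
    # 'balanced' heuristic: w_c = n_samples / (n_classes * count_c)
    classes, counts = np.unique(y, return_counts=True)
    return dict(zip(classes, len(y) / (len(classes) * counts)))

# On the full iris target (50 samples per class) every weight is 1.0;
# on an imbalanced vector the rarer class gets the larger weight:
y = np.array([0] * 10 + [1] * 30)
print(balanced_weights(y))  # class 0 -> 2.0, class 1 -> ~0.667
```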



import sklearn.datasets as datasets
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

from io import StringIO  # sklearn.externals.six was removed from recent scikit-learn
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus

iris = datasets.load_iris()
df = pd.DataFrame(iris.data, columns=iris.feature_names)
y = iris.target

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.33, random_state=1)

# class_weight=None
dtree = DecisionTreeClassifier(max_depth=2)
dtree.fit(X_train, y_train)

dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data, filled=True, rounded=True,
                special_characters=True, feature_names=X_train.columns)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())  # I use jupyter-notebook for visualizing the image



[figure: tree when class_weight=None]



# printing unique probabilities in each class
probas = dtree.predict_proba(X_train)
print(np.unique(probas[:,0]))
print(np.unique(probas[:,1]))
print(np.unique(probas[:,2]))

# ratio for calculating probabilities
print(0/33, 0/34, 33/33)
print(0/33, 1/34, 30/33)
print(0/33, 3/33, 33/34)


The probabilities assigned by the tree match the ratios I computed by hand (read off the tree image).
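These ratios can also be checked programmatically: with class_weight=None, predict_proba is just each leaf's raw class counts normalized to sum to 1. A self-contained sketch on a small synthetic dataset (rather than the exact split above):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

X = np.array([[0.0], [0.1], [0.2], [1.0], [1.1], [1.2], [1.3]])
y = np.array([0, 0, 0, 1, 1, 1, 0])

clf = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X, y)

# Recompute each sample's probabilities from the raw class counts in its leaf
leaves = clf.apply(X)                  # leaf id of each training sample
manual = np.zeros((len(X), 2))
for leaf in np.unique(leaves):
    mask = leaves == leaf
    counts = np.bincount(y[mask], minlength=2)
    manual[mask] = counts / counts.sum()

assert np.allclose(manual, clf.predict_proba(X))
```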



When I use class_weight='balanced', I get the tree below.



# class_weight='balanced'
dtree_balanced = DecisionTreeClassifier(max_depth=2, class_weight='balanced')
dtree_balanced.fit(X_train, y_train)

dot_data = StringIO()
export_graphviz(dtree_balanced, out_file=dot_data, filled=True, rounded=True,
                special_characters=True, feature_names=X_train.columns)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())


[figure: tree when class_weight='balanced']



I print the unique probabilities with the code below:



probas = dtree_balanced.predict_proba(X_train)
print(np.unique(probas[:,0]))
print(np.unique(probas[:,1]))
print(np.unique(probas[:,2]))


I'm not able to understand (i.e. come up with a formula for) how the tree assigns these probabilities.










Tags: machine-learning, python, scikit-learn, predictive-modeling, decision-trees






asked Mar 25 at 9:58 by rahul




















1 Answer






























We should consider two points. First, class_weight='balanced' does not change the actual number of samples in a class; only the class weight $w_{c_i}$ changes. Second, the [un-normalized] probability of class $c_i$ in each node is calculated as

$w_{c_i} \times$ (number of samples from $c_i$ in that node / size of $c_i$)

For example, in balanced mode, the [un-normalized] probability of $c_3$ in the green leaf is

$33.\bar{3}\% \times (3/36) \approx 2.778\%$

compared to $36\% \times (3/36) = 3\%$ in unbalanced mode.

The (normalized) probability in balanced mode would be:

$100 \times 2.778/(2.778+32.258)\% = 7.9289\%$

Remark. The word "probability" is not applicable to an isolated node, except for the root node. What is computed above is the un-normalized version of the probability used to classify a data point inside a leaf; the normalization is not required for the comparison. The notion is, however, applicable to the aggregate of the nodes at the same level together with the leaves from upper levels (i.e. the set of all samples).
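The formula above can be sanity-checked in code: with class_weight='balanced', the weighted count of class $c$ in a leaf is $w_c$ times the number of class-$c$ samples in that leaf, and predict_proba is those weighted counts normalized per leaf. A sketch on a deliberately imbalanced slice of iris, so the weights differ from 1:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils.class_weight import compute_class_weight

X, y = load_iris(return_X_y=True)
X, y = X[:120], y[:120]          # 50 / 50 / 20 samples per class -> imbalanced

clf = DecisionTreeClassifier(max_depth=2, class_weight='balanced',
                             random_state=0).fit(X, y)

# w_c = n_samples / (n_classes * n_c): here [0.8, 0.8, 2.0]
w = compute_class_weight('balanced', classes=np.unique(y), y=y)

# Per leaf: weighted counts w_c * count_c, normalized to sum to 1,
# should reproduce predict_proba exactly.
leaves = clf.apply(X)
manual = np.zeros((len(X), 3))
for leaf in np.unique(leaves):
    mask = leaves == leaf
    weighted = w * np.bincount(y[mask], minlength=3)
    manual[mask] = weighted / weighted.sum()

assert np.allclose(manual, clf.predict_proba(X))
```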






• rahul (Mar 25 at 17:58): I don't think the value arrays in the nodes are probabilities; probabilities should sum up to 1, but these don't. Probabilities can be obtained with probas = dtree_balanced.predict_proba(X_train).

• Esmailian (Mar 25 at 18:13): @rahul Thanks. I've added a remark.

• rahul (Mar 25 at 19:35): Thanks for updating the answer. The un-normalized vs. normalized explanation helped me understand it better.












answered Mar 25 at 14:09 by Esmailian, edited Mar 25 at 18:40










