How to perform (modified) t-test for multiple variables and multiple models on Python (Machine Learning)
I have created and analyzed around 16 machine learning models using WEKA. I now have a CSV file with the models' metrics (percent_correct, F-measure, recall, precision, etc.). I am trying to conduct a (modified) Student's t-test on these models. Following THIS link, I am able to run a test comparing ONE variable between TWO models, but I want to perform t-tests with MULTIPLE variables across MULTIPLE models at once.
As mentioned, I can currently only run the test with one variable (say, F-measure) between two models (say, a decision table and a neural net).
Here is the code for that. I am performing a Kolmogorov-Smirnov test (a modified t-test):
from matplotlib import pyplot
from pandas import read_csv, DataFrame
from scipy.stats import ks_2samp

results = DataFrame()
results['A'] = read_csv('LMT (f-measure).csv', header=None).values[:, 0]
results['B'] = read_csv('LWL (f-measure).csv', header=None).values[:, 0]

print(results.describe())
results.boxplot()
pyplot.show()
results.hist()
pyplot.show()

value, pvalue = ks_2samp(results['A'], results['B'])
alpha = 0.05
print(value, pvalue)
if pvalue > alpha:
    print('Samples are likely drawn from the same distributions (fail to reject H0)')
else:
    print('Samples are likely drawn from different distributions (reject H0)')
Any ideas?
machine-learning python pandas statistics visualization
I'm having trouble imagining any scenario where this would be a good idea: t-tests are useful and meaningful only under a very specific set of statistical assumptions and interpretations, and this doesn't sound like one of them. I think you have an X-Y problem. Perhaps you could explain what you are trying to accomplish, so that someone can suggest what sort of procedure to try instead?
– BrianH
Apr 4 at 0:24

I separate ML into two stages: building models and analyzing them. I am in the analysis stage. Having built 16 different models, I want to see which ones are best. One approach is simply to compare the raw metrics output by the program; for instance, I could pick the "best" model as the one with the highest Matthews correlation coefficient. However, I don't know whether the differences are statistically significant (that is what tests like the t-test are for). I want to run these tests, but more efficiently: thus my question.
– Shounak Ray
Apr 4 at 1:21

Found a great solution! Check out my answer!
– Shounak Ray
Apr 4 at 6:24
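[Editor's note] With 16 models, the pairwise comparisons discussed in the comments above multiply quickly, and running many tests at one alpha inflates the false-positive rate. A minimal sketch of a Bonferroni-corrected set of pairwise KS tests, using synthetic per-fold scores (the model names and score distributions here are hypothetical, not from the question's data):

```python
# Pairwise KS tests across several models with a Bonferroni correction,
# so that many simultaneous comparisons stay interpretable.
from itertools import combinations

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical per-fold F-measure samples for three models (30 folds each)
scores = {
    "LMT": rng.normal(0.80, 0.02, 30),
    "LWL": rng.normal(0.78, 0.02, 30),
    "DecisionTable": rng.normal(0.70, 0.02, 30),
}

pairs = list(combinations(scores, 2))
alpha = 0.05
corrected_alpha = alpha / len(pairs)  # Bonferroni: divide by number of tests

for a, b in pairs:
    stat, p = ks_2samp(scores[a], scores[b])
    verdict = "different" if p < corrected_alpha else "indistinguishable"
    print(f"{a} vs {b}: D={stat:.3f}, p={p:.4f} -> {verdict}")
```

Bonferroni is the bluntest correction; Holm or FDR procedures reject more often at the same family-wise error rate, if more power is needed.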
asked Apr 3 at 20:46 by Shounak Ray, edited Apr 4 at 8:00
1 Answer
This is a simple solution to my question. It only deals with two models and two variables, but you could easily keep lists of the classifier names and the metrics you want to analyze. For my purposes, I just change the values of COI, ROI_1, and ROI_2.
NOTE: This solution is also generalizable. Just change the values of COI, ROI_1, and ROI_2 and load any chosen dataset in df = pandas.read_csv("FILENAME.csv", ...). If you want another visualization, just change the pyplot settings near the end.
The key was building a new DataFrame from the original one with the .loc["SOMESTRING"] accessor, which keeps only the rows whose index matches the given label and removes all others.
Remember, however, to pass index_col=0 when you read the file (or set the DataFrame's index some other way). Without this, your row labels will just be integer indexes from 0 to MAX_INDEX.
# Written: April 4, 2019
import pandas                     # for data handling
from matplotlib import pyplot     # for visualizations
from scipy.stats import ks_2samp  # for the 2-sample Kolmogorov-Smirnov test
import os                         # for deleting temporary CSV files

# Drop every column whose name does not contain stringOfInterest
def removeColumns(frame, typeArray, stringOfInterest):
    for name in typeArray:
        if name.find(stringOfInterest) == -1:
            frame.drop(name, axis=1, inplace=True)

# Get the whole DataFrame
df = pandas.read_csv("ExperimentResultsCondensed.csv", index_col=0)
dfCopy = df.copy()  # .copy() gives an independent frame; plain assignment would alias df

# Specified metric (column of interest) and models (rows of interest)
COI = "Area_under_PRC"
ROI_1 = "weka.classifiers.meta.AdaBoostM1[DecisionTable]"
ROI_2 = "weka.classifiers.meta.AdaBoostM1[DecisionStump]"

# Lists of headers and rows in the DataFrame
headers = list(df.columns)
rows = list(df.index)

# Keep only the rows for the two models of interest
df1 = dfCopy.loc[ROI_1]
df2 = dfCopy.loc[ROI_2]

# Remove irrelevant columns
removeColumns(df1, headers, COI)
removeColumns(df2, headers, COI)

# Write intermediate CSV files
df1.to_csv(ROI_1 + "-" + COI + ".csv", index=False)
df2.to_csv(ROI_2 + "-" + COI + ".csv", index=False)

results = pandas.DataFrame()

# Read the CSV files back; any metric/measure works here, Area_under_PRC is the example
results[ROI_1] = pandas.read_csv(ROI_1 + "-" + COI + ".csv", header=None).values[:, 0]
results[ROI_2] = pandas.read_csv(ROI_2 + "-" + COI + ".csv", header=None).values[:, 0]

# Kolmogorov-Smirnov test, since the datasets are non-Gaussian and independent
# with unequal variances
value, pvalue = ks_2samp(results[ROI_1], results[ROI_2])

# Corresponding confidence level: 95%
alpha = 0.05

# Output the results ('\033[1m' switches the terminal to bold text)
print('\n')
print('\033[1m' + '>>> TEST STATISTIC: ')
print(value)
print(">>> P-VALUE: ")
print(pvalue)
if pvalue > alpha:
    print('\t>> Samples are likely drawn from the same distributions (fail to reject H0 - NOT SIGNIFICANT)')
else:
    print('\t>> Samples are likely drawn from different distributions (reject H0 - SIGNIFICANT)')

# Plot density distributions
df1.plot.density()
pyplot.xlabel(COI + " Values")
pyplot.ylabel("Density")
pyplot.title(COI + " Density Distribution of " + ROI_1)
pyplot.show()

df2.plot.density()
pyplot.xlabel(COI + " Values")
pyplot.ylabel("Density")
pyplot.title(COI + " Density Distribution of " + ROI_2)
pyplot.show()

# Delete the temporary files
os.remove(ROI_1 + "-" + COI + ".csv")
os.remove(ROI_2 + "-" + COI + ".csv")
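[Editor's note] The detour through temporary CSV files is not required: the same comparison can be driven straight from the indexed DataFrame, and looping over lists of models and metrics generalizes it to the "multiple variables, multiple models" case. A minimal sketch, using synthetic data since the question's CSV is not available (the model and metric names below are hypothetical placeholders):

```python
# Run ks_2samp for every metric over every pair of models, reading the
# samples directly from a model-indexed DataFrame (no intermediate CSVs).
from itertools import combinations

import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
models = ["AdaBoostM1[DecisionTable]", "AdaBoostM1[DecisionStump]"]
metrics = ["F_measure", "Area_under_PRC"]

# One row per (model, run), one column per metric: 30 runs per model.
frames = [
    pd.DataFrame(rng.normal(0.75, 0.05, size=(30, len(metrics))),
                 index=[m] * 30, columns=metrics)
    for m in models
]
df = pd.concat(frames)

results = {}
for metric in metrics:
    for a, b in combinations(models, 2):
        # .loc with a repeated index label returns all matching rows
        stat, p = ks_2samp(df.loc[a, metric], df.loc[b, metric])
        results[(metric, a, b)] = p
        print(f"{metric}: {a} vs {b} -> D={stat:.3f}, p={p:.4f}")
```

Each (metric, model pair) gets its own p-value; with many pairs, apply a multiple-comparison correction before declaring any difference significant.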
$begingroup$
This is a simple solution to my question. It only deals with two models and two variables, but you could easily have lists with the names of the classifiers and the metrics you want to analyze. For my purposes, I just change the values of COI
, ROI_1
, and ROI_2
respectively.
NOTE: This solution is also generalizable.
How? Just change the values of COI
, ROI_1
, and ROI_2
and load any chosen dataset in df = pandas.read_csv("FILENAME.csv, ...)
. If you want another visualization, just change the pyplot
settings near the end.
The key was assigning a new DataFrame
to the original DataFrame
and implementing the .loc["SOMESTRING"]
method. It removes all the rows in the data, EXCEPT for the one specified as a parameter.
Remember, however, to include index_col=0
when you read the file OR use some other method to set the index of the DataFrame
. Without doing this, your row
values will just be indexes, from 0 to MAX_INDEX
.
# Written: April 4, 2019
import pandas # for visualizations
from matplotlib import pyplot # for visualizations
from scipy.stats import ks_2samp # for 2-sample Kolmogorov-Smirnov test
import os # for deleting CSV files
# Functions which isolates DataFrame
def removeColumns(DataFrame, typeArray, stringOfInterest):
for i in range(0, len(typeArray)):
if typeArray[i].find(stringOfInterest) != -1:
continue
else:
DataFrame.drop(typeArray[i], axis = 1, inplace = True)
# Get the whole DataFrame
df = pandas.read_csv("ExperimentResultsCondensed.csv", index_col=0)
dfCopy = df
# Specified metrics and models for comparison
COI = "Area_under_PRC"
ROI_1 = "weka.classifiers.meta.AdaBoostM1[DecisionTable]"
ROI_2 = "weka.classifiers.meta.AdaBoostM1[DecisionStump]"
# Lists of header and row in dataFrame
# `rows` may act strangely
headers = list(df.dtypes.index)
rows = list(df.index)
# remove irrelevant rows
df1 = dfCopy.loc[ROI_1]
df2 = dfCopy.loc[ROI_2]
# remove irrelevant columns
removeColumns(df1, headers, COI)
removeColumns(df2, headers, COI)
# Make CSV files
df1.to_csv(str(ROI_1 + "-" + COI + ".csv"), index=False)
df2.to_csv(str(ROI_2 + "-" + COI) + ".csv", index=False)
results = pandas.DataFrame()
# Read CSV files
# The CSV files can be of any netric/measure, F-measure is used as an example
results[ROI_1] = pandas.read_csv(str(ROI_1 + "-" + COI + ".csv"), header=None).values[:, 0]
results[ROI_2] = pandas.read_csv(str(ROI_2 + "-" + COI + ".csv"), header=None).values[:, 0]
# Kolmogorov-Smirnov test since we have Non-Gaussian, independent, distinctive variance datasets
# Test configurations
value, pvalue = ks_2samp(results[ROI_1], results[ROI_2])
# Corresponding confidence level: 95%
alpha = 0.05
# Output the results
print('n')
print('33[1m' + '>>>TEST STATISTIC: ')
print(value)
print(">>>P-VALUE: ")
print(pvalue)
if pvalue > alpha:
print('t>>Samples are likely drawn from the same distributions (fail to reject H0 - NOT SIGNIFICANT)')
else:
print('t>>Samples are likely drawn from different distributions (reject H0 - SIGNIFICANT)')
# Plot files
df1.plot.density()
pyplot.xlabel(str(COI + " Values"))
pyplot.ylabel(str("Density"))
pyplot.title(str(COI + " Density Distribution of " + ROI_1))
pyplot.show()
df2.plot.density()
pyplot.xlabel(str(COI + " Values"))
pyplot.ylabel(str("Density"))
pyplot.title(str(COI + " Density Distribution of " + ROI_2))
pyplot.show()
# Delete Files
os.remove(str(ROI_1 + "-" + COI + ".csv"))
os.remove(str(ROI_2 + "-" + COI + ".csv"))
$endgroup$
add a comment |
$begingroup$
This is a simple solution to my question. It only deals with two models and two variables, but you could easily have lists with the names of the classifiers and the metrics you want to analyze. For my purposes, I just change the values of COI
, ROI_1
, and ROI_2
respectively.
NOTE: This solution is also generalizable.
How? Just change the values of COI
, ROI_1
, and ROI_2
and load any chosen dataset in df = pandas.read_csv("FILENAME.csv, ...)
. If you want another visualization, just change the pyplot
settings near the end.
The key was assigning a new DataFrame
to the original DataFrame
and implementing the .loc["SOMESTRING"]
method. It removes all the rows in the data, EXCEPT for the one specified as a parameter.
Remember, however, to include index_col=0
when you read the file OR use some other method to set the index of the DataFrame
. Without doing this, your row
values will just be indexes, from 0 to MAX_INDEX
.
# Written: April 4, 2019
import pandas # for visualizations
from matplotlib import pyplot # for visualizations
from scipy.stats import ks_2samp # for 2-sample Kolmogorov-Smirnov test
import os # for deleting CSV files
# Functions which isolates DataFrame
def removeColumns(DataFrame, typeArray, stringOfInterest):
for i in range(0, len(typeArray)):
if typeArray[i].find(stringOfInterest) != -1:
continue
else:
DataFrame.drop(typeArray[i], axis = 1, inplace = True)
# Get the whole DataFrame
df = pandas.read_csv("ExperimentResultsCondensed.csv", index_col=0)
dfCopy = df
# Specified metrics and models for comparison
COI = "Area_under_PRC"
ROI_1 = "weka.classifiers.meta.AdaBoostM1[DecisionTable]"
ROI_2 = "weka.classifiers.meta.AdaBoostM1[DecisionStump]"
# Lists of header and row in dataFrame
# `rows` may act strangely
headers = list(df.dtypes.index)
rows = list(df.index)
# remove irrelevant rows
df1 = dfCopy.loc[ROI_1]
df2 = dfCopy.loc[ROI_2]
# remove irrelevant columns
removeColumns(df1, headers, COI)
removeColumns(df2, headers, COI)
# Make CSV files
df1.to_csv(str(ROI_1 + "-" + COI + ".csv"), index=False)
df2.to_csv(str(ROI_2 + "-" + COI) + ".csv", index=False)
results = pandas.DataFrame()
# Read CSV files
# The CSV files can be of any netric/measure, F-measure is used as an example
results[ROI_1] = pandas.read_csv(str(ROI_1 + "-" + COI + ".csv"), header=None).values[:, 0]
results[ROI_2] = pandas.read_csv(str(ROI_2 + "-" + COI + ".csv"), header=None).values[:, 0]
# Kolmogorov-Smirnov test since we have Non-Gaussian, independent, distinctive variance datasets
# Test configurations
value, pvalue = ks_2samp(results[ROI_1], results[ROI_2])
# Corresponding confidence level: 95%
alpha = 0.05
# Output the results
print('n')
print('33[1m' + '>>>TEST STATISTIC: ')
print(value)
print(">>>P-VALUE: ")
print(pvalue)
if pvalue > alpha:
print('t>>Samples are likely drawn from the same distributions (fail to reject H0 - NOT SIGNIFICANT)')
else:
print('t>>Samples are likely drawn from different distributions (reject H0 - SIGNIFICANT)')
# Plot files
df1.plot.density()
pyplot.xlabel(str(COI + " Values"))
pyplot.ylabel(str("Density"))
pyplot.title(str(COI + " Density Distribution of " + ROI_1))
pyplot.show()
df2.plot.density()
pyplot.xlabel(str(COI + " Values"))
pyplot.ylabel(str("Density"))
pyplot.title(str(COI + " Density Distribution of " + ROI_2))
pyplot.show()
# Delete Files
os.remove(str(ROI_1 + "-" + COI + ".csv"))
os.remove(str(ROI_2 + "-" + COI + ".csv"))
$endgroup$
add a comment |
$begingroup$
This is a simple solution to my question. It only deals with two models and two variables, but you could easily have lists with the names of the classifiers and the metrics you want to analyze. For my purposes, I just change the values of COI
, ROI_1
, and ROI_2
respectively.
NOTE: This solution is also generalizable.
How? Just change the values of COI
, ROI_1
, and ROI_2
and load any chosen dataset in df = pandas.read_csv("FILENAME.csv, ...)
. If you want another visualization, just change the pyplot
settings near the end.
The key was assigning a new DataFrame
to the original DataFrame
and implementing the .loc["SOMESTRING"]
method. It removes all the rows in the data, EXCEPT for the one specified as a parameter.
Remember, however, to include index_col=0
when you read the file OR use some other method to set the index of the DataFrame
. Without doing this, your row
values will just be indexes, from 0 to MAX_INDEX
.
# Written: April 4, 2019
import pandas # for visualizations
from matplotlib import pyplot # for visualizations
from scipy.stats import ks_2samp # for 2-sample Kolmogorov-Smirnov test
import os # for deleting CSV files
# Functions which isolates DataFrame
def removeColumns(DataFrame, typeArray, stringOfInterest):
for i in range(0, len(typeArray)):
if typeArray[i].find(stringOfInterest) != -1:
continue
else:
DataFrame.drop(typeArray[i], axis = 1, inplace = True)
# Get the whole DataFrame
df = pandas.read_csv("ExperimentResultsCondensed.csv", index_col=0)
dfCopy = df
# Specified metrics and models for comparison
COI = "Area_under_PRC"
ROI_1 = "weka.classifiers.meta.AdaBoostM1[DecisionTable]"
ROI_2 = "weka.classifiers.meta.AdaBoostM1[DecisionStump]"
# Lists of header and row in dataFrame
# `rows` may act strangely
headers = list(df.dtypes.index)
rows = list(df.index)
# remove irrelevant rows
df1 = dfCopy.loc[ROI_1]
df2 = dfCopy.loc[ROI_2]
# remove irrelevant columns
removeColumns(df1, headers, COI)
removeColumns(df2, headers, COI)
# Make CSV files
df1.to_csv(str(ROI_1 + "-" + COI + ".csv"), index=False)
df2.to_csv(str(ROI_2 + "-" + COI) + ".csv", index=False)
results = pandas.DataFrame()
# Read CSV files
# The CSV files can be of any netric/measure, F-measure is used as an example
results[ROI_1] = pandas.read_csv(str(ROI_1 + "-" + COI + ".csv"), header=None).values[:, 0]
results[ROI_2] = pandas.read_csv(str(ROI_2 + "-" + COI + ".csv"), header=None).values[:, 0]
# Kolmogorov-Smirnov test since we have Non-Gaussian, independent, distinctive variance datasets
# Test configurations
value, pvalue = ks_2samp(results[ROI_1], results[ROI_2])
# Corresponding confidence level: 95%
alpha = 0.05
# Output the results
print('n')
print('33[1m' + '>>>TEST STATISTIC: ')
print(value)
print(">>>P-VALUE: ")
print(pvalue)
if pvalue > alpha:
print('t>>Samples are likely drawn from the same distributions (fail to reject H0 - NOT SIGNIFICANT)')
else:
print('t>>Samples are likely drawn from different distributions (reject H0 - SIGNIFICANT)')
# Plot files
df1.plot.density()
pyplot.xlabel(str(COI + " Values"))
pyplot.ylabel(str("Density"))
pyplot.title(str(COI + " Density Distribution of " + ROI_1))
pyplot.show()
df2.plot.density()
pyplot.xlabel(str(COI + " Values"))
pyplot.ylabel(str("Density"))
pyplot.title(str(COI + " Density Distribution of " + ROI_2))
pyplot.show()
# Delete Files
os.remove(str(ROI_1 + "-" + COI + ".csv"))
os.remove(str(ROI_2 + "-" + COI + ".csv"))
$endgroup$
This is a simple solution to my question. It only deals with two models and two variables, but you could easily have lists with the names of the classifiers and the metrics you want to analyze. For my purposes, I just change the values of COI
, ROI_1
, and ROI_2
respectively.
NOTE: This solution is also generalizable.
How? Just change the values of COI
, ROI_1
, and ROI_2
and load any chosen dataset in df = pandas.read_csv("FILENAME.csv, ...)
. If you want another visualization, just change the pyplot
settings near the end.
The key was assigning a new DataFrame
to the original DataFrame
and implementing the .loc["SOMESTRING"]
method. It removes all the rows in the data, EXCEPT for the one specified as a parameter.
Remember, however, to include index_col=0
when you read the file OR use some other method to set the index of the DataFrame
. Without doing this, your row
values will just be indexes, from 0 to MAX_INDEX
.
# Written: April 4, 2019
import pandas # for visualizations
from matplotlib import pyplot # for visualizations
from scipy.stats import ks_2samp # for 2-sample Kolmogorov-Smirnov test
import os # for deleting CSV files
# Function which drops every column whose name does not contain stringOfInterest
def removeColumns(DataFrame, typeArray, stringOfInterest):
    for column in typeArray:
        if stringOfInterest not in column:
            DataFrame.drop(column, axis=1, inplace=True)
# Get the whole DataFrame
df = pandas.read_csv("ExperimentResultsCondensed.csv", index_col=0)
dfCopy = df.copy()  # a real copy; plain assignment would only alias the original
# Specified metrics and models for comparison
COI = "Area_under_PRC"
ROI_1 = "weka.classifiers.meta.AdaBoostM1[DecisionTable]"
ROI_2 = "weka.classifiers.meta.AdaBoostM1[DecisionStump]"
# Lists of header and row in dataFrame
# `rows` may act strangely
headers = list(df.dtypes.index)
rows = list(df.index)
# keep only the rows for each model of interest (.copy() avoids mutating a view)
df1 = dfCopy.loc[ROI_1].copy()
df2 = dfCopy.loc[ROI_2].copy()
# remove irrelevant columns
removeColumns(df1, headers, COI)
removeColumns(df2, headers, COI)
# Make CSV files
df1.to_csv(str(ROI_1 + "-" + COI + ".csv"), index=False)
df2.to_csv(str(ROI_2 + "-" + COI + ".csv"), index=False)
results = pandas.DataFrame()
# Read CSV files
# The CSV files can hold any metric/measure; F-measure is used as an example
results[ROI_1] = pandas.read_csv(str(ROI_1 + "-" + COI + ".csv"), header=None).values[:, 0]
results[ROI_2] = pandas.read_csv(str(ROI_2 + "-" + COI + ".csv"), header=None).values[:, 0]
# Two-sample Kolmogorov-Smirnov test: the samples are non-Gaussian, independent, and have unequal variances
# Test configurations
value, pvalue = ks_2samp(results[ROI_1], results[ROI_2])
# Corresponding confidence level: 95%
alpha = 0.05
# Output the results
print('\n')
print('\033[1m' + '>>>TEST STATISTIC: ')
print(value)
print(">>>P-VALUE: ")
print(pvalue)
if pvalue > alpha:
    print('\t>>Samples are likely drawn from the same distribution (fail to reject H0 - NOT SIGNIFICANT)')
else:
    print('\t>>Samples are likely drawn from different distributions (reject H0 - SIGNIFICANT)')
# Plot files
df1.plot.density()
pyplot.xlabel(str(COI + " Values"))
pyplot.ylabel(str("Density"))
pyplot.title(str(COI + " Density Distribution of " + ROI_1))
pyplot.show()
df2.plot.density()
pyplot.xlabel(str(COI + " Values"))
pyplot.ylabel(str("Density"))
pyplot.title(str(COI + " Density Distribution of " + ROI_2))
pyplot.show()
# Delete Files
os.remove(str(ROI_1 + "-" + COI + ".csv"))
os.remove(str(ROI_2 + "-" + COI + ".csv"))
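As noted above, this generalizes to many models and metrics. A hypothetical sketch (the model names, the third "RandomForest" column, and the randomly generated per-fold scores are all invented for illustration) that runs ks_2samp over every pair of models for each metric:

```python
from itertools import combinations

import numpy
import pandas
from scipy.stats import ks_2samp

# Hypothetical per-fold results: one column per model, one row per fold.
# In practice these columns would come from the per-model CSV files built above.
rng = numpy.random.default_rng(0)
metrics = {
    "Area_under_PRC": pandas.DataFrame({
        "AdaBoost[DecisionTable]": rng.normal(0.90, 0.02, 30),
        "AdaBoost[DecisionStump]": rng.normal(0.82, 0.03, 30),
        "RandomForest": rng.normal(0.91, 0.02, 30),
    }),
}

alpha = 0.05
for metric, scores in metrics.items():
    # every unordered pair of models, for this metric
    for model_a, model_b in combinations(scores.columns, 2):
        stat, pvalue = ks_2samp(scores[model_a], scores[model_b])
        verdict = "SIGNIFICANT" if pvalue <= alpha else "not significant"
        print(f"{metric}: {model_a} vs {model_b}: p={pvalue:.4f} ({verdict})")
```

Adding a metric or model is then just another entry in the dict or another column, with no duplicated test code.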
edited Apr 4 at 7:58
answered Apr 4 at 6:24
Shounak Ray
62
Thanks for contributing an answer to Data Science Stack Exchange!
I'm having trouble imagining any scenario where this would be a good idea - t-tests are useful and meaningful for a very specific set of statistical assumptions and interpretations, and this doesn't sound like one of them. I think you have an X-Y problem - perhaps you could explain what you want to accomplish, so that someone can suggest what sort of procedure to try instead?
– BrianH
Apr 4 at 0:24
I separate ML into two stages: building models and analyzing them. I am in the analysis stage. Having built 16 different models, I want to see which ones are the best. One approach is simply to look at the raw metrics the program outputs and compare them across models. For instance, I could pick the "best" model as the one with the highest Matthews correlation (as an example). However, I don't know whether the differences are statistically significant (that's why we have tests like the t-test). I want to run these tests more efficiently: thus my question.
– Shounak Ray
Apr 4 at 1:21
Found a great solution! Check out my answer!
– Shounak Ray
Apr 4 at 6:24