SVM SMOTE fit_resample() function runs forever with no result
Problem
fit_resample(X, y) is taking too long to complete for 2 million rows.
Dataset specifications
I have a labeled dataset of network features, where X (features) and Y (labels) have shapes (2M, 24) and (2M, 11) respectively. That is, there are over 2 million rows in the dataset, 24 features, and 11 different classes/labels. Both X and Y are numpy arrays of float dtype.
Motivation for using SVM SMOTE
Due to class imbalance, I figured SVM SMOTE is a good technique to balance the dataset and thereby get better classification.
Testing with smaller sub-datasets
To test the performance of my classifier, I started small: I made smaller sub-datasets out of the big 2-million-row dataset and timed the following code:
%%time
sm = SVMSMOTE(random_state=42)
X_res, Y_res = sm.fit_resample(X, Y)
The 1st sub-dataset contains only 7.5k rows; the cell took about 800 ms to run.
The 2nd sub-dataset contains 115k rows; the cell took 20 min to execute.
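For reference, one way to carve out such sub-datasets is a stratified split, which shrinks the row count while keeping the 11 class proportions intact. A minimal sketch — the arrays below are random stand-ins for the real X and Y, and note that if the labels are one-hot encoded with shape (n, 11), imbalanced-learn expects them collapsed to a 1-D class vector first:

```python
# Sketch: carve a stratified 7.5k-row sub-dataset out of a larger array.
# X and y below are random stand-ins for the real 2M-row data.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(42)
X = rng.rand(50_000, 24)                 # stand-in features
y = rng.randint(0, 11, size=50_000)      # stand-in labels (11 classes)
# If Y is one-hot encoded with shape (n, 11), collapse it first:
# y = Y.argmax(axis=1)  # imbalanced-learn wants a 1-D label vector

X_small, _, y_small, _ = train_test_split(
    X, y, train_size=7_500, stratify=y, random_state=42
)
print(X_small.shape, y_small.shape)  # (7500, 24) (7500,)
```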
Solution attempts
My system crashes after running continuously for more than 48 hours, having run out of memory. I've tried a few ideas:
1. Splitting the work across multiple CPU cores using %%px — no improvement in execution time.
2. Using NVIDIA GPUs — same as above, which is understandable, since the _smote.py library functions aren't written with CUDA parallelism in mind.
I'm pretty frustrated by the lack of results and a warm PC. What should I do?
python preprocessing numpy sampling smote
Try a linear SVM. It's less complex. Also, reducing your data set will help. – Jon, Apr 6 at 0:58
asked Apr 5 at 19:20 by venom8914
1 Answer
This is expected and is not related to SMOTE sampling.
The computational complexity of non-linear SVM is on the order of $O(n^2)$ to $O(n^3)$, where $n$ is the number of samples. This means that if it takes 0.8 seconds for 7.5K data points, it should take 3 to 48 minutes for 115K, $$\left[(115/7.5)^2 \times 0.8\ \mathrm{s},\ (115/7.5)^3 \times 0.8\ \mathrm{s}\right] \approx [3\ \mathrm{min},\ 48\ \mathrm{min}],$$ and from 16 hours to 175 days (11 days for $O(n^{2.5})$) for 2M data points.
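The extrapolation can be sanity-checked numerically; a small sketch, taking the 0.8 s / 7.5K-row baseline from the question:

```python
# Extrapolate the measured 0.8 s on 7.5K rows to larger n under
# O(n^2), O(n^2.5) and O(n^3) scaling.
base_n, base_t = 7_500, 0.8  # rows, seconds (from the question)

def scaled_time(n, exponent):
    """Predicted runtime in seconds if cost grows as n**exponent."""
    return base_t * (n / base_n) ** exponent

print(f"115K, O(n^3):  {scaled_time(115_000, 3) / 60:.0f} min")    # ~48 min
print(f"2M,   O(n^2):  {scaled_time(2_000_000, 2) / 3600:.0f} h")  # ~16 h
print(f"2M,   O(n^3):  {scaled_time(2_000_000, 3) / 86400:.0f} d") # ~176 d
```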
You should keep sample sizes on the order of 100K or less. It is also worthwhile to track the accuracy (or any other score) as a function of sample size, e.g. at 1K, 10K, 50K, and 100K samples. It is possible that SVM accuracy stops improving well before 100K, in which case there is not much to lose by limiting the sample size.
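That learning-curve check might look like the following sketch, where synthetic 11-class data and a fast LinearSVC stand in for the real arrays and classifier:

```python
# Sketch: score a classifier on growing stratified subsamples to see
# where accuracy plateaus. Synthetic 11-class data stands in for the
# real dataset; LinearSVC is just a fast placeholder model.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=10_000, n_features=24,
                           n_informative=12, n_classes=11,
                           random_state=42)

for n in (1_000, 3_000, 8_000):
    X_sub, _, y_sub, _ = train_test_split(X, y, train_size=n,
                                          stratify=y, random_state=42)
    score = cross_val_score(LinearSVC(max_iter=5_000),
                            X_sub, y_sub, cv=3).mean()
    print(f"n={n:>5}: mean CV accuracy = {score:.3f}")
```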
answered Apr 5 at 19:51, edited Apr 8 at 23:46, by Esmailian