Knowing when a GAN is overfitting (sequence classification study)



I have long, sparse 1-D binary vectors (3,000 digits, made up of 0s and 1s) that I am trying to classify. I have previously implemented a simple CNN (in Keras) that classifies them with relative success.



data:
label, sequence
0,0000000000010000000......
....
1,00000000000000000001........
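(For context, here is a minimal sketch of how data in this format can be loaded into arrays shaped for a 1-D Keras CNN; the file path and helper name are just illustrative, not my exact code:)

import numpy as np

# Illustrative loader for the "label,sequence" format shown above.
# Assumes every sequence is exactly seq_len binary digits long.
def load_sequences(path, seq_len=3000):
    labels, seqs = [], []
    with open(path) as f:
        for line in f:
            label, seq = line.strip().split(',')
            labels.append(int(label))
            seqs.append([float(c) for c in seq])
    X = np.array(seqs).reshape(-1, seq_len, 1)  # (N, 3000, 1) for Conv1D layers
    y = np.array(labels)
    return X, y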


I am now trying to create a GAN that generates a sequence emblematic of the sequences that carry a '1' label.



I have attempted to adapt the typical treatment of 2-D tensors (i.e. images) to my 1-D data.



So far I have managed to get working code that creates 'fake' vectors that look similar to the original ones. My problem is that I don't know how to judge what a sufficient number of epochs is. I was advised that performance on test data might be a good indicator, so, as a separate task alongside the normal GAN training procedure, I also assess the generator's performance on test data at each epoch.
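Concretely, the per-epoch check looks roughly like this (a simplified sketch, not my exact code; the latent_dim value and the 0.5 decision threshold are assumptions):

import numpy as np

def evaluate_epoch(generator, discriminator, X_test, latent_dim=100):
    # Generate as many fakes as there are held-out real sequences.
    noise = np.random.normal(size=(len(X_test), latent_dim))
    X_fake = generator.predict(noise)
    X = np.concatenate([X_test, X_fake])
    y = np.concatenate([np.ones(len(X_test)), np.zeros(len(X_fake))])  # 1 = real
    preds = (discriminator.predict(X).ravel() > 0.5).astype(int)
    return (preds == y).mean()  # accuracy near 0.5 means fakes are indistinguishable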



[Figure 1: training diagnostics and example generated sequences]



In the upper-left panel above I have plotted the loss profiles of the generator and discriminator; in the upper-right panel, the accuracy achieved at each epoch when validating on the test data; and the bottom three panels show example sequences output at different stages of training (indicated by arrows).



The code I use is very similar to that in the following notebook: https://github.com/osh/KerasGAN/blob/master/MNIST_CNN_GAN.ipynb
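In outline, my setup mirrors that notebook but with Conv1D instead of Conv2D layers. A rough sketch of the idea, not my exact architecture (layer sizes and the latent dimension are assumptions):

from keras.models import Sequential
from keras.layers import Dense, Reshape, Conv1D, Flatten, UpSampling1D

latent_dim = 100  # assumed noise-vector size

generator = Sequential([
    Dense(750 * 8, activation='relu', input_dim=latent_dim),
    Reshape((750, 8)),
    UpSampling1D(2),
    Conv1D(8, 5, padding='same', activation='relu'),
    UpSampling1D(2),
    Conv1D(1, 5, padding='same', activation='sigmoid'),  # output shape (3000, 1)
])

discriminator = Sequential([
    Conv1D(16, 5, strides=2, padding='same', activation='relu',
           input_shape=(3000, 1)),
    Conv1D(32, 5, strides=2, padding='same', activation='relu'),
    Flatten(),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(loss='binary_crossentropy', optimizer='adam')

# Stacked GAN: freeze the discriminator while the generator trains to fool it.
discriminator.trainable = False
gan = Sequential([generator, discriminator])
gan.compile(loss='binary_crossentropy', optimizer='adam')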



(If the actual code is needed please comment below and I will include it).



Below is another example output (from a different task):



[Figure 2: the generator collapsing to all-zero sequences]



Here the generator seems to be overfitting and generating all-zero sequences, which is undesirable. As an attempt to mitigate this, I trained the discriminator to reject all-zero sequences before freezing its weights and initiating the GAN:



import numpy as np
from keras.models import load_model

discriminator = load_model('..discriminator..')
# 100 all-zero sequences labelled 0 ("fake") so the discriminator learns to reject them
X_filler, y_filler = np.zeros((100, 3000, 1)), np.zeros(100)
discriminator.fit(X_filler, y_filler, epochs=3, batch_size=100)
discriminator.trainable = False
# ...initiate GAN training...
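A variant I have considered, sketched below under the same shape assumptions (hypothetical helper, and it assumes the discriminator is still updated during the GAN loop, as in the linked notebook): mix a few all-zero vectors into every discriminator batch rather than only pretraining once, so the rejection is never unlearned.

import numpy as np

def train_discriminator_step(discriminator, X_real, X_fake, n_zero=8):
    # All-zero "filler" negatives appended to each discriminator update.
    X_zero = np.zeros((n_zero,) + X_real.shape[1:])
    X = np.concatenate([X_real, X_fake, X_zero])
    y = np.concatenate([np.ones(len(X_real)),
                        np.zeros(len(X_fake) + n_zero)])  # fakes and zeros = 0
    return discriminator.train_on_batch(X, y)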


Even after doing this, the problem shown above persists.



I cannot find a good recommendation on when to stop training. The following source suggests:




Stop criteria: the number of Generator’s failures (failed attempts to
fool the Discriminator) is [almost] equal to the errors of
Discriminator to distinguish artificially generated samples from real
samples.




https://www.quora.com/What-is-the-stop-criteria-of-generative-adversarial-nets-Can-I-treat-it-as-a-multi-objective-problem
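If I understand it correctly, that criterion could be tracked per epoch with something like the following (my own sketch of the quoted idea; the tolerance, the 0.5 threshold, and the way the two discriminator error rates are averaged are all arbitrary assumptions):

def should_stop(d_out_fake, d_out_real, tol=0.05):
    # d_out_fake / d_out_real: NumPy arrays of discriminator outputs in [0, 1]
    # (1 = judged real). The generator "fails" when a fake is caught as fake.
    gen_failures = (d_out_fake < 0.5).mean()
    disc_errors = ((d_out_fake >= 0.5).mean() +    # fakes passed as real
                   (d_out_real < 0.5).mean()) / 2  # reals rejected as fake
    return abs(gen_failures - disc_errors) < tol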



But I am not sure how to tell when this is happening based on the test-accuracy graph shown above.



Overall my questions are:



1. How can I tell, based on the test validation performed at each epoch, when to stop training the GAN?



2. How do I know if my generator is overfitting to the training data, and what can I do to mitigate this?










Tags: keras, optimization, overfitting, gan






asked Apr 5 at 7:48 by which_command
Comment (Esmailian, Apr 5 at 9:44): Welcome to the site! The observation that the GAN produces all 0s is not over-fitting, it is under-fitting. If the GAN's generations were too similar to the training sequences, that would mean over-fitting.















