Transfer learning no improvement in loss
I am doing transfer learning with a pre-trained model on my own dataset.
I am loading and setting up the model like this:
# Imports for this snippet; train_data_dir, train_datagen, adam, accuracy,
# lr_rate, Schedule and output_dir are defined elsewhere in the original script.
from keras.applications.resnet50 import ResNet50
from keras.models import Model
from keras.layers import Flatten, Dropout, Dense
from keras.callbacks import LearningRateScheduler, ModelCheckpoint

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(224, 224),
    batch_size=32,
    subset='training')            # set as training data

validation_generator = train_datagen.flow_from_directory(
    train_data_dir,               # same directory as training data
    target_size=(224, 224),
    batch_size=32,
    subset='validation')          # set as validation data

# Base network without the classification top; load my own pre-trained weights
model = ResNet50(include_top=False, weights=None, input_shape=(224, 224, 3))
model.load_weights("a trained model weights on 224x224")
model.layers.pop()

# Freeze the whole base
for layer in model.layers:
    layer.trainable = False

# New classification head
x = model.layers[-1].output
x = Flatten(name='flatten')(x)
x = Dropout(0.2)(x)
x = Dense(1024, activation='relu')(x)
predictions = Dense(101, activation='softmax', name='pred_age')(x)

top_model = Model(inputs=model.input, outputs=predictions)
top_model.compile(loss='categorical_crossentropy',
                  optimizer=adam,
                  metrics=[accuracy])   # the training log below reports a custom age_mae metric

EPOCHS = 100
BATCH_SIZE = 32
STEPS_PER_EPOCH = 4424 // BATCH_SIZE
VALIDATION_STEPS = 466 // BATCH_SIZE

callbacks = [LearningRateScheduler(schedule=Schedule(EPOCHS, initial_lr=lr_rate)),
             ModelCheckpoint(str(output_dir) + "/weights.{epoch:03d}-{val_loss:.3f}-{val_age_mae:.3f}.hdf5",
                             monitor="val_age_mae",
                             verbose=1,
                             save_best_only=False,
                             mode="min")]

hist = top_model.fit_generator(generator=train_generator,        # the generators defined above
                               epochs=EPOCHS,
                               steps_per_epoch=STEPS_PER_EPOCH,
                               validation_data=validation_generator,
                               validation_steps=VALIDATION_STEPS,
                               verbose=1,
                               callbacks=callbacks)
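For comparison, the same frozen-base / new-head pattern can also be written without the layers.pop() step by attaching the head directly to the base model's output. This is only a minimal sketch, not the exact setup above: ImageNet weights and a GlobalAveragePooling2D head instead of Flatten are assumptions chosen for illustration.

# Hedged sketch: frozen ResNet50 base with a fresh classification head.
# Assumptions for illustration only: ImageNet weights, global average pooling,
# 101 output classes as in the question.
from keras.applications.resnet50 import ResNet50
from keras.layers import GlobalAveragePooling2D, Dropout, Dense
from keras.models import Model

base = ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False                          # freeze the convolutional base

x = GlobalAveragePooling2D(name='gap')(base.output)  # (None, 2048) instead of (None, 100352)
x = Dropout(0.2)(x)
x = Dense(1024, activation='relu')(x)
out = Dense(101, activation='softmax', name='pred_age')(x)

head_model = Model(inputs=base.input, outputs=out)
head_model.compile(loss='categorical_crossentropy',
                   optimizer='adam',
                   metrics=['accuracy'])
head_model.summary()

Pooling before the dense layer is only a design alternative here; it shrinks the first Dense weight matrix from the roughly 51 million parameters visible in the summary below to about one or two million, which tends to make the new head easier to train.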
The end of the model summary and the first few epochs of training output look like this:

activation_49 (Activation) (None, 7, 7, 2048) 0 add_16[0][0]
__________________________________________________________________________________________________
flatten (Flatten) (None, 100352) 0 activation_49[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 512) 51380736 flatten[0][0]
__________________________________________________________________________________________________
pred_age (Dense) (None, 101) 51813 dense_1[0][0]
==================================================================================================
Total params: 75,020,261
Trainable params: 51,432,549
Non-trainable params: 23,587,712
__________________________________________________________________________________________________
Epoch 1/100
140/140 [==============================] - 1033s 7s/step - loss: 14.5776 - age_mae: 12.2994 - val_loss: 15.6144 - val_age_mae: 24.8527
Epoch 00001: val_age_mae improved from inf to 24.85268, saving model to /Users/aez/Desktop/AgeEstimation/yu4u/age_estimation/fine_tune_models/2_Finetune2//2-finetune-weights.001-15.614-24.853.hdf5
Epoch 2/100
140/140 [==============================] - 969s 7s/step - loss: 14.7104 - age_mae: 11.2545 - val_loss: 15.6462 - val_age_mae: 25.1104
Epoch 00002: val_age_mae did not improve from 24.85268
Epoch 3/100
140/140 [==============================] - 769s 5s/step - loss: 14.6159 - age_mae: 13.5181 - val_loss: 15.7551 - val_age_mae: 29.4640
Epoch 00003: val_age_mae did not improve from 24.85268
Epoch 4/100
140/140 [==============================] - 815s 6s/step - loss: 14.6509 - age_mae: 13.0087 - val_loss: 15.9366 - val_age_mae: 18.3581
Epoch 00004: val_age_mae improved from 24.85268 to 18.35811, saving model to /Users/aez/Desktop/AgeEstimation/yu4u/age_estimation/fine_tune_models/2_Finetune2//2-finetune-weights.004-15.937-18.358.hdf5
Epoch 5/100
140/140 [==============================] - 1059s 8s/step - loss: 14.3882 - age_mae: 11.8039 - val_loss: 15.6825 - val_age_mae: 24.6937
Epoch 00005: val_age_mae did not improve from 18.35811
Epoch 6/100
140/140 [==============================] - 1052s 8s/step - loss: 14.4496 - age_mae: 13.6652 - val_loss: 15.4278 - val_age_mae: 24.5045
Epoch 00006: val_age_mae did not improve from 18.35811
I have already run this a couple of times, and after epoch 4 the loss does not improve anymore.
I get the following loss graph:
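For reference, a curve like the one described can be drawn from the History object returned by fit_generator. This is a minimal sketch, assuming the hist variable from the code above and that age_mae was compiled as a metric (the key names may differ in the actual run).

# Hedged sketch: plot training vs. validation curves from the Keras History.
# Assumes hist = top_model.fit_generator(...) as in the question's code.
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 4))

plt.subplot(1, 2, 1)
plt.plot(hist.history['loss'], label='train loss')
plt.plot(hist.history['val_loss'], label='val loss')
plt.xlabel('epoch')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(hist.history['age_mae'], label='train age MAE')
plt.plot(hist.history['val_age_mae'], label='val age MAE')
plt.xlabel('epoch')
plt.legend()

plt.tight_layout()
plt.show()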
Tags: python, neural-network, keras, cnn, transfer-learning
asked Apr 2 at 21:00 by TheJokerAEZ, last edited Apr 2 at 21:06