multi-output regression problem with tensorflow
number of features: 12, each feature in (-15, 15)
number of targets: 6, each target in (0, 360)
number of examples: 262144
My normalization: I scaled the features so that they are between 0 and 1, and the targets so that they are between 1 and 10.
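The min-max scaling described above can be sketched as follows (a minimal illustration with hypothetical random arrays matching the stated ranges, not the actual dataset):

```python
import numpy as np

# Hypothetical data with the stated shapes and ranges.
rng = np.random.default_rng(0)
features = rng.uniform(-15, 15, size=(1000, 12))  # 12 features in (-15, 15)
targets = rng.uniform(0, 360, size=(1000, 6))     # 6 targets in (0, 360)

# Scale features to [0, 1].
features_norm = (features - (-15)) / (15 - (-15))

# Scale targets to [1, 10].
targets_norm = 1 + (targets - 0) / (360 - 0) * (10 - 1)
```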
This is the model that I am using:

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(6, activation='linear')
])
model.compile(optimizer="rmsprop", loss='mean_squared_error', metrics=['accuracy'])
model.fit(training_x, training_y, epochs=10, batch_size=100)
This is the best result that I have got (training):
235929/235929 [==============================] - 8s 33us/step - loss: 8.9393e-04 - acc: 0.6436
testing:
loss: 0.00427692719418488
acc: 0.033187106618348276
I get almost 0% accuracy on the test set. What model or setup would solve this problem?
machine-learning tensorflow regression linear-regression
Just try not normalizing the targets, and use another metric, like mean absolute error ("mae").
– ignatius, Apr 1 at 12:58
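The comment's suggestion can be sketched as a minimal variant of the question's compile step (hypothetical stand-in data with the question's shapes; `training_x`/`training_y` here are random placeholders, not the real dataset):

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data with the question's shapes.
training_x = np.random.uniform(-15, 15, size=(256, 12)).astype("float32")
training_y = np.random.uniform(0, 360, size=(256, 6)).astype("float32")  # raw, un-normalized targets

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(12,)),
    tf.keras.layers.Dense(6, activation="linear"),
])
# Use MAE as the metric instead of accuracy, as the comment suggests.
model.compile(optimizer="rmsprop", loss="mean_squared_error", metrics=["mae"])
history = model.fit(training_x, training_y, epochs=1, batch_size=100, verbose=0)
```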
asked Apr 1 at 12:55 by Kasra
1 Answer
Accuracy is a metric for classification, not regression:

$$\text{Accuracy} = \frac{\text{Correct classifications}}{\text{Number of classifications}}$$

So when you use accuracy for regression, only the predictions where actual_label == predicted_label holds exactly are counted as correct. That happens very rarely in regression, which is why your accuracy ends up close to zero.

Instead, you should use something like mean absolute error (MAE) or mean squared error (MSE) as validation metrics for regression.
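The point above can be sketched with a tiny hypothetical example (arrays made up for illustration): exact-match accuracy on continuous predictions is essentially always zero, while MAE stays informative.

```python
import numpy as np

y_true = np.array([1.00, 2.00, 3.00, 4.00])
y_pred = np.array([1.01, 1.98, 3.05, 3.90])  # close, but never exactly equal

accuracy = np.mean(y_true == y_pred)    # exact-match "accuracy": 0.0 here
mae = np.mean(np.abs(y_true - y_pred))  # mean absolute error: about 0.045

# accuracy is 0.0 even though every prediction is close;
# mae gives a meaningful measure of the average error.
```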
235929/235929 [==============================] - 13s 54us/step - loss: 1.0431 - mean_absolute_error: 0.5507
– Kasra, Apr 1 at 13:25
You want mean absolute error to be as close to zero as possible. It is basically the average error, in label units, that your model makes. 0.5507 might be a good score depending on the size of your labels.
– Simon Larsson, Apr 1 at 13:29
But it is important to note that MAE will be affected by any normalization you perform on your labels, so be sure to take that into consideration.
– Simon Larsson, Apr 1 at 13:30
On the test set, I got an MAE of 1.4721086428362338. Is that acceptable? How can I improve performance?
– Kasra, Apr 1 at 13:31
It depends on the size of your labels. If, for example, you are predicting house prices (big numbers) and are on average off by only 1.4721086428362338 dollars, then your results are great. You can run print(np.mean(training_y)) to get a sense of the size of your labels.
– Simon Larsson, Apr 1 at 13:34
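The sanity check suggested in the comment above can be sketched like this (hypothetical labels drawn on the question's normalized [1, 10] target scale; the idea is to compare MAE against the typical label magnitude):

```python
import numpy as np

# Hypothetical labels on the normalized [1, 10] target scale used in the question.
training_y = np.random.default_rng(1).uniform(1, 10, size=(1000, 6))
mae = 1.4721  # test-set MAE reported above (on the same normalized scale)

label_scale = np.mean(np.abs(training_y))  # roughly 5.5 for uniform labels in [1, 10]
relative_error = mae / label_scale         # fraction of the typical label magnitude
```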
answered Apr 1 at 12:58 (edited 2 days ago) – Simon Larsson
Thanks for contributing an answer to Data Science Stack Exchange!