Can we use ReLU activation function as the output layer's non-linearity?
I have trained a model with a linear activation function for the last dense layer, but I have a constraint that forbids negative values: the target is a continuous positive value.
Can I use ReLU as the activation of the output layer? I am hesitant to try it, since ReLU is usually presented as a rectifier for hidden layers. I'm using Keras.
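For reference, a minimal sketch of the kind of model I mean (the input dimension, layer sizes, and optimizer are placeholders, not my actual architecture):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical regression model with a linear output layer, as described above.
model = keras.Sequential([
    keras.Input(shape=(10,)),             # placeholder input dimension
    layers.Dense(64, activation="relu"),  # hidden layers use ReLU as usual
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="linear"), # current output: unbounded, can go negative
])
model.compile(optimizer="adam", loss="mse")

# Toy data with a positive target, just to make the sketch runnable.
X = np.random.rand(256, 10).astype("float32")
y = np.abs(np.random.randn(256, 1)).astype("float32")
model.fit(X, y, epochs=2, verbose=0)
```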
Tags: machine-learning, neural-network, deep-learning, keras, activation-function
asked by bacloud14, edited by Media
1 Answer
Yes, you can. For regression tasks, the customary output activation is the identity (linear) function: it is differentiable and does not restrict the range of the output, so the network can produce any real value from its inputs. Bounded functions such as tanh or sigmoid are avoided as the last-layer activation in regression because their limited range cannot cover all the values the target may take. Non-linearities in the hidden layers are there to carve out non-linear boundaries; the output activation should simply match the range of the target. Since your target is constrained to be non-negative, ReLU is a good fit for the output layer: it behaves like the identity for positive pre-activations and clips negative ones to zero, so every prediction satisfies the constraint.
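As a concrete sketch (hypothetical layer sizes and input shape; the only substantive change from a standard linear-output regression model is the activation of the final Dense layer):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(10,)),             # placeholder input dimension
    layers.Dense(64, activation="relu"),  # hidden layers unchanged
    layers.Dense(64, activation="relu"),
    # ReLU on the output layer clips negative pre-activations to zero,
    # so every prediction is guaranteed to be non-negative.
    layers.Dense(1, activation="relu"),
])
model.compile(optimizer="adam", loss="mse")
```

Training with a standard regression loss such as mean squared error works as before; the only difference is that predictions can no longer be negative.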
answered by Media

Comment from bacloud14: Wonderful, even the validation loss is getting lower. I hope it will work well on a ton of unseen data.