



Can we use ReLU activation function as the output layer's non-linearity?




I have trained a model with a linear activation function in the last dense layer, but the target is a continuous positive value, so negative predictions are not allowed.



Can I use ReLU as the activation of the output layer? I am hesitant to try it, since it is usually used as a rectifier in hidden layers. I'm using Keras.
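For concreteness, the current setup looks roughly like the sketch below; the input dimension and hidden width are placeholders, not the real architecture:

    # Minimal sketch of the model described above: the last dense layer
    # uses a linear activation, so predictions can be negative even
    # though the target never is. Sizes (10, 64) are placeholders.
    from tensorflow import keras
    from tensorflow.keras import layers

    model = keras.Sequential([
        keras.Input(shape=(10,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")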


























      machine-learning neural-network deep-learning keras activation-function






asked yesterday by bacloud14 · edited yesterday by Media




















1 Answer



















Yes, you can. For regression tasks it is customary to use a linear activation in the output layer because it is differentiable and does not restrict the output range, so the network can produce any real value its inputs require. Bounded activations such as tanh or sigmoid are avoided in the last layer of regression models because their limited range cannot reach all of the required target values. Non-linearities in the hidden layers exist to create non-linear decision boundaries; the output activation only needs to cover the range of the target. In your case, where the target is constrained to be non-negative, ReLU is a good choice for the output layer: it maps every prediction into $[0, \infty)$.
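As a minimal Keras illustration of this (the input dimension, layer widths, and dummy data below are assumptions for the sketch, not taken from the question), the only change is the activation of the final Dense layer:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Regression model whose predictions are forced to be non-negative
    # simply by using ReLU as the output activation.
    model = keras.Sequential([
        keras.Input(shape=(10,)),             # placeholder input dimension
        layers.Dense(64, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="relu"),   # output lies in [0, inf)
    ])
    model.compile(optimizer="adam", loss="mse")

    # Dummy non-negative targets, only to show the shapes involved.
    X = np.random.rand(256, 10).astype("float32")
    y = np.abs(np.random.randn(256, 1)).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)

    print(model.predict(X[:3]))  # every prediction is >= 0

Every prediction from such a model is guaranteed to be non-negative, which is exactly the constraint on the target.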






answered yesterday by Media








Comment: Wonderful, even the validation loss is getting lower. I hope it will work well on a ton of unseen data. – bacloud14, yesterday























