
Is reward accumulated during a play iteration when performing SARSA?





I've been having trouble getting my DQN to converge to a good solution for Snake. Regardless of the reward functions I've tried, the snake ends up going around in circles indefinitely. I haven't tried encouraging more exploration yet, because I'm confused about how to properly assign reward.



Currently, I am using a 2D Gaussian to assign reward, peaked at the food so that $f(x_{\text{food}}, y_{\text{food}}) = 1$. Terminal states, like hitting a wall or the snake's own body, give a reward of $-1$.



My reason for using the Gaussian is that rewards in this game are relatively sparse, and it lets me keep rewards clipped to $[-1, 1]$ in a meaningful way.
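To make this concrete, here is a minimal sketch of the reward scheme I'm describing; the function name and the width `sigma` are placeholders of my own, not taken from any paper:

```python
import numpy as np

def reward(head, food, dead, sigma=3.0):
    """Gaussian reward peaked at the food cell; -1 on terminal states."""
    if dead:  # hit a wall or the snake's own body
        return -1.0
    dx, dy = head[0] - food[0], head[1] - food[1]
    # f(x_food, y_food) = 1 at the food cell, decaying smoothly with
    # distance, so every reward naturally stays within [-1, 1].
    return float(np.exp(-(dx ** 2 + dy ** 2) / (2.0 * sigma ** 2)))
```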



I have two questions.



  1. Is this an appropriate way to define the reward function?

  2. Currently during training, I do not accumulate the reward within a play iteration, so the reward for each transition is independent of the reward values before it. Right now I am storing $\big[S_1, A_1, R_1, S_2, A_2, R_2, S_3\big]$. I've looked at other code where people accumulate the reward, like $\big[S_1, A_1, R_1, S_2, A_2, R_2 = R_1 + r, S_3\big]$, where $r$ is given by the reward function (see the sketch after this list). The thing is, I can't find a paper that explains why you would do this. So my question is: which is the appropriate way to assign reward?
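To spell out the two storage schemes in code (the buffer and function names are my own, for illustration only):

```python
from collections import deque

replay_buffer = deque(maxlen=100_000)

# Scheme A (what I do now): each transition stores only its own reward r_t,
# independent of earlier rewards in the episode.
def store_independent(s, a, r, s_next, done):
    replay_buffer.append((s, a, r, s_next, done))

# Scheme B (what I've seen in other code): the stored reward is the running
# total accumulated since the episode started, R_t = R_{t-1} + r_t.
episode_return = 0.0

def store_accumulated(s, a, r, s_next, done):
    global episode_return
    episode_return += r
    replay_buffer.append((s, a, episode_return, s_next, done))
    if done:
        episode_return = 0.0  # reset before the next play iteration
```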









machine-learning deep-learning reinforcement-learning q-learning dqn






asked Mar 29 at 1:44 by DevarakondaV



















