Implementation of actor-critic model for MountainCar
I'm trying to build a model for the Mountain Car game, following this actor-critic code: https://github.com/nikhilbarhate99/Actor-Critic
(However, that example uses a discrete action space, while my problem is continuous, and the environment in that GitHub code is not MountainCar.)
So I want to use an actor-critic model to make a player for the famous Mountain Car game. All the environment code is here: https://github.com/nbrosson/Actor-critic-MountainCar/ Everything about the environment works fine; the only file I have to worry about is agent.py (after the code below I've also added a short sketch of the kind of continuous policy head I'm aiming for):
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Normal

"""
Contains the definition of the agent that will run in an
environment.
"""


class ActorCritic(nn.Module):
    def __init__(self):
        super(ActorCritic, self).__init__()
        self.affine = nn.Linear(2, 32)
        self.action_layer = nn.Linear(32, 2)
        self.value_layer = nn.Linear(32, 1)

        self.logprobs = []
        self.state_values = []
        self.rewards = []
        self.actions = []

    def forward(self, observation):
        # Convert tuple into tensor
        observation_as_list = []
        observation_as_list.append(observation[0])
        observation_as_list.append(observation[1])
        observation_as_list = np.asarray(observation_as_list)
        observation_as_list = observation_as_list.reshape(1, 2)
        observation = observation_as_list

        state = torch.from_numpy(observation).float()
        state = F.relu(self.affine(state))
        state_value = self.value_layer(state)

        action_parameters = F.tanh(self.action_layer(state))
        action_distribution = Normal(action_parameters[0][0], action_parameters[0][1])
        action = action_distribution.sample()  # torch.Tensor; action

        self.logprobs.append(action_distribution.log_prob(action) + 1e-6)
        self.state_values.append(state_value)

        return action.item()  # float element

    def calculateLoss(self, gamma=0.99):
        # calculating discounted rewards:
        rewards = []
        dis_reward = 0
        for reward in self.rewards[::-1]:
            dis_reward = reward + gamma * dis_reward
            rewards.insert(0, dis_reward)

        # normalizing the rewards:
        rewards = torch.tensor(rewards)
        rewards = (rewards - rewards.mean()) / (rewards.std())

        loss = 0
        for logprob, value, reward in zip(self.logprobs, self.state_values, rewards):
            advantage = reward - value.item()
            action_loss = -logprob * advantage
            value_loss = F.smooth_l1_loss(value, reward)
            loss += (action_loss + value_loss)
        return loss

    def clearMemory(self):
        del self.logprobs[:]
        del self.state_values[:]
        del self.rewards[:]


class RandomAgent():
    def __init__(self):
        """Init a new agent.
        """
        # self.theta = np.zeros((3, 2))
        # self.state = RandomAgent.reset(self, [-20, 20])
        self.count_episodes = -1
        self.max_position = -0.4
        self.epsilon = 0.9
        self.gamma = 0.99
        self.running_rewards = 0
        self.policy = ActorCritic()
        self.optimizer = optim.Adam(self.policy.parameters(), lr=0.01, betas=(0.9, 0.999))
        self.check_new_episode = 1
        self.count_iter = 0

    def reset(self, x_range):
        """Reset the state of the agent for the start of a new game.

        Parameters of the environment do not change, but your initial
        location is randomized.

        x_range = [xmin, xmax] contains the range of possible values for x
        range for vx is always [-20, 20]
        """
        self.epsilon = (self.epsilon * 0.99)
        self.count_episodes += 1
        return (np.random.uniform(x_range[0], x_range[1]), np.random.uniform(-20, 20))

    def act(self, observation):
        """Acts given an observation of the environment.

        Takes as argument an observation of the current state, and
        returns the chosen action.
        observation = (x, vx)
        """
        # observation_as_list = []
        # observation_as_list.append(observation[0])
        # observation_as_list.append(observation[1])
        # observation_as_list = np.asarray(observation_as_list)
        # observation_as_list = observation_as_list.reshape(1, 2)
        # observation = observation_as_list

        if np.random.rand(1) < self.epsilon:
            return np.random.uniform(-1, 1)
        else:
            action = self.policy(observation)
            return action

    def reward(self, observation, action, reward):
        """Receive a reward for performing given action on
        given observation.

        This is where your agent can learn.
        """
        self.count_iter += 1
        self.policy.rewards.append(reward)
        self.running_rewards += reward
        if self.count_iter == 100:
            # We want first to update the critic agent:
            self.optimizer.zero_grad()
            self.loss = self.policy.calculateLoss(self.gamma)
            self.loss.backward()
            self.optimizer.step()
            self.policy.clearMemory()
            self.count_iter = 0


Agent = RandomAgent
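For reference, here is roughly the kind of continuous Gaussian policy head I understood I should end up with. This is just a sketch to show what I'm aiming for; PolicyHead and its layer names are made up for illustration and are not part of my agent.py:

# Illustrative sketch only (not my actual agent.py): a Gaussian policy head
# for a continuous action space, with the std kept strictly positive.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal


class PolicyHead(nn.Module):
    def __init__(self, state_dim=2, hidden_size=32):
        super().__init__()
        self.hidden = nn.Linear(state_dim, hidden_size)
        self.mean_layer = nn.Linear(hidden_size, 1)  # mean of the action
        self.std_layer = nn.Linear(hidden_size, 1)   # unconstrained std parameter

    def forward(self, state):
        h = F.relu(self.hidden(state))
        mean = torch.tanh(self.mean_layer(h))        # action mean in [-1, 1]
        std = F.softplus(self.std_layer(h)) + 1e-5   # std forced to be > 0
        return Normal(mean, std)


# usage example:
# dist = PolicyHead()(torch.tensor([[-0.5, 0.0]]))
# action = dist.sample()

In my agent.py above, both parameters of the Normal currently come from the same tanh layer, so this sketch is only what I understand the usual pattern to be.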
However, my model does not give good results; it doesn't even improve after 200 episodes.
Any idea what is wrong with my code? Any suggestions?
Thanks a lot!
python reinforcement-learning pytorch actor-critic
asked Apr 5 at 20:42 by nolw38