Backpropagation implementation help
I'm trying to implement Nokland's Direct Feedback Alignment in Python following his paper.
Here's my implementation so far:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.utils.extmath import softmax
from scipy.special import expit

def f_sigmoid(x):
    return expit(x)

def df_sigmoid(x):
    return f_sigmoid(x) * (1 - f_sigmoid(x))

f_activation = f_sigmoid
df_activation = df_sigmoid

class NeuralNet(object):
    def __init__(self, num_input, num_hidden, num_output):
        # Using Lillicrap's initialization between -0.5 and 0.5
        self.W1 = np.random.uniform(-0.5, 0.5)
        self.W2 = np.random.uniform(-0.5, 0.5)
        self.W3 = np.random.uniform(-0.5, 0.5)
        self.B1 = np.random.uniform(-0.5, 0.5, self.W1.size)
        self.B2 = np.random.uniform(-0.5, 0.5, self.W2.size)

    def forward(self, X):  # HOW DO I INITIALIZE b1, b2, b3?
        a1 = np.matmul(X, self.W1) + self.b1
        h1 = f_activation(a1)
        a2 = np.matmul(h1, self.W2) + self.b2
        h2 = f_activation(a2)
        a_y = np.matmul(h2, self.W3) + self.b3
        y_hat = softmax(a_y)
        return y_hat, h2, h1, a1, a2

    def loss(self, predicted, target):
        pass

    def backpropagation(self, X, y, lr):
        h3, h2, h1, a1, a2 = self.forward(X)
        y_hat = h3
        e = y_hat - y

        # Backpropagation
        d2 = df_activation(a2) * np.matmul(e, self.W3.T)
        d1 = df_activation(a1) * np.matmul(d2, self.W2.T)

        # Feedback Alignment (no W anymore!)
        d2 = df_activation(a2) * np.matmul(e, self.B2)
        d1 = df_activation(a1) * np.matmul(d2, self.B1)

        # Direct Feedback Alignment
        d2 = df_activation(a2) * np.matmul(e, self.B2)
        d1 = df_activation(a1) * np.matmul(e, self.B1)

        # Indirect Feedback Alignment
        d1 = df_activation(a1) * np.matmul(e, self.B1)
        d2 = df_activation(a2) * np.matmul(d1, self.W2)

        # Weights update, same for all methods
        self.W3 = self.W3 - lr * np.matmul(h2.T, e)
        self.W2 = self.W2 - lr * np.matmul(h1.T, d2)
        self.W1 = self.W1 - lr * np.matmul(X.reshape((-1, 1)), d1)

        # Biases update, same for all methods
        self.b3 = self.b3 - lr * e
        self.b2 = self.b2 - lr * d2
        self.b1 = self.b1 - lr * d1
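A note on the four variants above: as written, each block overwrites d1 and d2, so only the last variant computed actually affects the weight update. One way to make the choice explicit is to select a single variant per call. This is just a sketch, not the asker's or Nokland's reference code; `feedback_deltas` and its `variant` flag are hypothetical names, and the shapes assume DFA-style feedback matrices (note that plain FA would instead need B1 of shape (num_hidden, num_hidden)).

```python
import numpy as np

# Hypothetical helper: compute the hidden-layer deltas for ONE chosen
# credit-assignment variant, instead of computing all four in sequence.
def feedback_deltas(e, a1, a2, W2, W3, B1, B2, variant="dfa"):
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    df = lambda x: sig(x) * (1.0 - sig(x))   # sigmoid derivative
    if variant == "bp":      # true backprop: transposed forward weights
        d2 = df(a2) * (e @ W3.T)
        d1 = df(a1) * (d2 @ W2.T)
    elif variant == "dfa":   # direct FA: error sent straight to each layer
        d2 = df(a2) * (e @ B2)
        d1 = df(a1) * (e @ B1)
    elif variant == "ifa":   # indirect FA: error to layer 1, forward via W2
        d1 = df(a1) * (e @ B1)
        d2 = df(a2) * (d1 @ W2)
    else:                    # plain FA: B1 must then be (hidden, hidden)
        d2 = df(a2) * (e @ B2)
        d1 = df(a1) * (d2 @ B1)
    return d1, d2
```

With a batch of one sample, e has shape (1, num_output) and both deltas come back as (1, num_hidden), so the same update code can follow regardless of the variant chosen.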
I don't know how to initialize b1, b2, b3, which are used in the forward function. I'm also not sure about the shapes of W and B in __init__ (np.random.uniform without a size argument returns a single scalar). Do you have any suggestions?
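One plausible way to shape these parameters, sketched with assumed layer sizes (this is not Nokland's reference code, just a shape-consistent example): give np.random.uniform an explicit size so the W's are matrices, make the DFA feedback matrices map the output error to each hidden layer's width, and start the biases at zero with one entry per unit.

```python
import numpy as np

# Assumed layer sizes for illustration (e.g. MNIST-like).
num_input, num_hidden, num_output = 784, 100, 10
rng = np.random.default_rng(0)

# Forward weights: the shape argument makes these matrices
# (np.random.uniform with no size returns a single float).
W1 = rng.uniform(-0.5, 0.5, (num_input, num_hidden))
W2 = rng.uniform(-0.5, 0.5, (num_hidden, num_hidden))
W3 = rng.uniform(-0.5, 0.5, (num_hidden, num_output))

# Fixed random feedback matrices. For DFA, both carry the output
# error (length num_output) straight to a hidden layer's width.
B1 = rng.uniform(-0.5, 0.5, (num_output, num_hidden))
B2 = rng.uniform(-0.5, 0.5, (num_output, num_hidden))

# Biases: one entry per unit of the layer they feed;
# zeros are a common, safe starting point.
b1 = np.zeros(num_hidden)
b2 = np.zeros(num_hidden)
b3 = np.zeros(num_output)

# Quick shape check with one sample and a stand-in error vector.
x = rng.uniform(size=num_input)
a1 = x @ W1 + b1              # -> (num_hidden,)
e = rng.uniform(size=num_output)
d1 = e @ B1                   # -> (num_hidden,), the DFA feedback path
```

With these shapes the forward pass and the DFA delta computation are dimensionally consistent; the B matrices stay fixed after initialization, which is the point of feedback alignment.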
neural-network backpropagation
asked Apr 9 at 21:43 by Ford1892
Thanks for contributing an answer to Data Science Stack Exchange!