Python Library for Neural Networks (no Tensors)
I’m having an absolute nightmare with Keras and TensorFlow, and I think it’s time to attempt a different approach with a different library (plan C is to build the network from scratch).
My neural network needs n regression outputs, let’s say 3, after a couple of densely connected layers. The input does not matter; I feed in a row of 0’s.
The input is irrelevant because my fitness function is a custom one:
loss = 100 - get_accuracy(a, b, c)
where a, b, and c are the three numerical outputs given by the network at that step. The network thus uses the inverse of accuracy as its loss, in effect maximising accuracy.
Is there a Python library that will let me implement this with ease? Keras was perfect up until I realised that the Python code within a custom loss is executed only once per compile, not once per step.
(I know there are better approaches to maximising this function; I want to do this to compare against those approaches.)
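For reference, the per-step behaviour the question asks for is straightforward to get outside of a compiled graph. Below is a minimal from-scratch sketch (the question’s "plan C") in NumPy. Since `100 - get_accuracy(a, b, c)` treats accuracy as a black box, the sketch uses gradient-free hill climbing rather than backpropagation; `get_accuracy` here is a hypothetical stand-in, not the question author’s actual function.

```python
import numpy as np

rng = np.random.default_rng(0)

def get_accuracy(a, b, c):
    # Hypothetical stand-in for the question's black-box accuracy;
    # it peaks (at 100) when (a, b, c) == (1.0, 2.0, 3.0).
    return 100.0 - (a - 1.0) ** 2 - (b - 2.0) ** 2 - (c - 3.0) ** 2

def init_params(n_in=4, n_hidden=8, n_out=3):
    # A couple of densely connected layers, as in the question.
    return [rng.normal(0.0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0.0, 0.1, (n_hidden, n_out)), np.zeros(n_out)]

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

x = np.zeros((1, 4))                      # the question's "row of 0's"
params = init_params()
best_loss = 100.0 - get_accuracy(*forward(params, x)[0])

for step in range(2000):
    # Perturb all weights; keep the candidate only if the loss improves.
    candidate = [p + rng.normal(0.0, 0.05, p.shape) for p in params]
    a, b, c = forward(candidate, x)[0]
    loss = 100.0 - get_accuracy(a, b, c)  # plain Python, evaluated every step
    if loss < best_loss:
        params, best_loss = candidate, loss
```

Because the loss is evaluated as ordinary Python on every step, any side effects or non-tensor logic inside it behave as expected, which is exactly what a compiled Keras loss does not guarantee.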
machine-learning python neural-network regression
"I realised the python code within a custom loss is only executed once per compile and not once per step." -- this should not happen; the loss function is called after each batch (to update the weights).
– Shamit Verma, Mar 19 at 13:41
If I have a global variable of 100 and the loss function decrements it, it always ends up as 99, no matter how many epochs it runs for.
– Jordan Bird, Mar 19 at 20:22
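The 99-after-many-epochs observation is consistent with graph tracing: when Keras compiles the model, a custom loss written in Python is executed once to record the tensor operations, and subsequent steps replay the recorded graph, so Python side effects such as decrementing a global run only during that single trace. A deliberately simplified, pure-Python model of this behaviour:

```python
counter = {"value": 100}

def traced(fn):
    # Toy model of graph tracing (as in tf.function / Keras compile): the
    # Python body runs ONCE to "record" the computation, and later calls
    # replay the recorded result instead of re-executing the Python code.
    recorded = {}
    def wrapper(*args):
        if "result" not in recorded:          # trace on the first call only
            recorded["result"] = fn(*args)    # side effects happen here, once
        return recorded["result"]
    return wrapper

@traced
def loss_fn(y_true, y_pred):
    counter["value"] -= 1                     # runs only at trace time
    return (y_true - y_pred) ** 2

for _ in range(10):                           # ten "training steps"
    loss_fn(1.0, 0.5)

print(counter["value"])                       # 99, not 90
```

In real Keras the trace records symbolic tensor operations rather than a constant, but the side-effect behaviour is the same; compiling the model with `run_eagerly=True` is one way to force the Python body to run on every step, at a substantial speed cost.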
asked Mar 19 at 12:04 by Jordan Bird on Data Science Stack Exchange