



Python Library for Neural Networks (no Tensors)


I’m having an absolute nightmare with Keras and TensorFlow, and I think it’s time to try a different approach with a different library (plan C is to build the network from scratch).

My neural network needs n regression outputs, let’s say 3, after a couple of densely connected layers. The input does not matter; I feed in a row of 0’s.

The reason it doesn’t matter is that my fitness function is a custom one:

loss = 100 - get_accuracy(a, b, c)

where a, b and c are the three numerical outputs produced by the network at that point. The network therefore minimises 100 minus the accuracy, which in effect maximises the accuracy.

Is there a Python library that will let me implement this easily? Keras was perfect until I realised that the Python code inside a custom loss is only executed once per compile, not once per training step.

(I know there are better approaches to maximising this function; I want to do this to compare against those approaches.)
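One direction this points to is a define-by-run library such as PyTorch, where the training loop is ordinary Python and the loss code runs on every step. The following is only a minimal sketch, not part of the original question: accuracy_like is a hypothetical stand-in for get_accuracy, and gradient descent only works if that function is built from differentiable torch operations; a black-box, non-differentiable accuracy would instead call for a gradient-free optimiser.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def accuracy_like(a, b, c):
    # Hypothetical placeholder for get_accuracy, built from differentiable ops
    # so that backpropagation works; returns a scalar roughly in [0, 100].
    return 100.0 * torch.sigmoid(a + b - c)

x = torch.zeros(1, 5)  # the question feeds a row of zeros as input

for step in range(1000):
    a, b, c = model(x).squeeze(0)          # three regression outputs
    loss = 100.0 - accuracy_like(a, b, c)  # loss = 100 - accuracy, as in the post
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                       # this Python body runs on every step

Because the loop is plain Python, side effects such as printing or decrementing a counter behave as expected on every step, which is exactly the behaviour missing from a compiled Keras loss.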







Tags: machine-learning, python, neural-network, regression






asked Mar 19 at 12:04 by Jordan Bird












  • "I realised the python code within a custom loss is only executed once per compile and not once per step." -- this should not happen. The function is called after each batch (for updating the weights).
    – Shamit Verma, Mar 19 at 13:41

  • If I have a global variable of 100 and the loss function decrements it, it always ends as 99 no matter how many epochs it runs for.
    – Jordan Bird, Mar 19 at 20:22
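The behaviour described in these comments is a consequence of graph building: in graph-based TensorFlow/Keras (the TF 1.x static graph, and the tf.function-compiled training step that model.fit() uses in TF 2.x), the Python body of a custom loss runs only while the graph is built or traced, while the tensor operations it returns run on every batch. The snippet below is a minimal sketch of that effect, assuming TensorFlow 2.x with the default compiled training step; it is not code from the thread.

import numpy as np
import tensorflow as tf

calls = 0  # Python-side counter, analogous to the "global variable of 100"

def counting_loss(y_true, y_pred):
    global calls
    calls += 1                      # Python side effect: runs only at trace time
    tf.print("tensor ops running")  # graph op: executes on every batch
    return tf.reduce_mean(tf.square(y_true - y_pred))

model = tf.keras.Sequential([tf.keras.layers.Dense(3)])
model.compile(optimizer="adam", loss=counting_loss)

x = np.zeros((64, 5), dtype="float32")
y = np.zeros((64, 3), dtype="float32")
model.fit(x, y, epochs=3, batch_size=8, verbose=0)

print("Python body executed", calls, "time(s)")  # a handful of traces, not once per batch

In TensorFlow 2.x, passing run_eagerly=True to model.compile makes the Python body execute on every batch, at a performance cost.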















