How can I increase CUDA load on a TensorFlow deep learning task after reducing batch size to fit the GPU?






I am running this TensorFlow task for Swahili-to-English translation on an NVIDIA GeForce 1060 GPU with 6 GB of VRAM:
https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb



I have to reduce the batch size to keep the example code from running out of memory on the GPU. By trial and error, I find the largest batch size (down to a minimum of 1) at which the code still runs. I notice that as I reduce the batch size, the CUDA core load reported by the Windows Task Manager GPU view goes down as well.
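For concreteness, this is roughly how I search for that batch size. It is a minimal sketch: run_one_training_step is a hypothetical wrapper standing in for the notebook's model construction plus a single training step, not a function the notebook actually defines.

    import tensorflow as tf

    def find_max_batch_size(run_one_training_step,
                            candidates=(64, 32, 16, 8, 4, 2, 1)):
        """Return the largest candidate batch size that fits in GPU memory."""
        for bs in candidates:
            try:
                run_one_training_step(bs)  # build the model, run one step
                return bs                  # first size that fits is the answer
            except tf.errors.ResourceExhaustedError:
                # GPU ran out of memory: drop cached graph state, try smaller.
                tf.keras.backend.clear_session()
        return None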



The application described in the link above builds a complex TensorFlow network. I don't know whether TensorFlow places one copy of the network on the GPU or multiple copies in order to load the GPU.



If it can create multiple copies, is there a TensorFlow switch for that? I don't think memory speed should be the bottleneck in feeding the GPU. That is, I should be able to trade off batch size (or the number of jobs resident on the GPU) against the number of compute networks on the GPU.
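The closest switch I have found is tf.distribute.MirroredStrategy (tf.contrib.distribute in older TF 1.x builds), but as far as I can tell it creates one replica per device, so on a single GPU it would still be one copy. A sketch, with a hypothetical toy model standing in for the notebook's encoder/decoder:

    import tensorflow as tf

    # MirroredStrategy replicates the network, but one replica per GPU,
    # so it raises parallelism across devices rather than within one.
    strategy = tf.distribute.MirroredStrategy()
    print('Replicas in sync:', strategy.num_replicas_in_sync)  # 1 on my GPU

    with strategy.scope():
        # Hypothetical stand-in for the notebook's encoder/decoder models.
        model = tf.keras.Sequential([
            tf.keras.layers.Embedding(10000, 256),
            tf.keras.layers.GRU(1024),
            tf.keras.layers.Dense(10000),
        ])
        model.compile(optimizer='adam',
                      loss='sparse_categorical_crossentropy')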



Is there an easy way in TensorFlow to assess the size, in CUDA cores, of a compute network?
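The nearest proxy I know how to compute is the parameter count (I have not found anything in TensorFlow that reports per-network CUDA core occupancy). Assuming the notebook's encoder and decoder tf.keras.Model objects have already been built by a forward pass:

    # encoder and decoder are the notebook's tf.keras.Model subclasses;
    # count_params() is valid once the variables have been created.
    total_params = encoder.count_params() + decoder.count_params()
    print('Total parameters:', total_params)

    # Rough lower bound on weight memory at float32 (4 bytes per parameter):
    print('Approx. weight memory: %.1f MB' % (total_params * 4 / 2**20))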



What factors can I tune to maximize CUDA load on a small GPU running a large deep learning task?
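One setting I am already aware of, noted so answers can go beyond it: in TF 1.x (which the linked contrib/eager notebook uses), TensorFlow reserves almost all GPU memory up front by default, so the memory figure in Task Manager does not necessarily reflect what the model needs. On-demand allocation makes that measurement meaningful:

    import tensorflow as tf

    # TF 1.x eager mode, as in the linked notebook: allocate GPU memory
    # on demand instead of reserving nearly all of it at startup.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    tf.enable_eager_execution(config=config)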










machine-learning tensorflow machine-translation






asked Mar 25 at 2:42









Lars Ericson

  • What is the memory usage on the GPU? – Shamit Verma, Mar 25 at 4:22















