



Using CPU after training on GPU


I am using tensorflow-gpu 1.10.0 and keras-gpu 2.2.4 with an NVIDIA GTX 765M (2 GB) GPU on Windows 8.1 64-bit with 16 GB of RAM.

I can train a network on 560x560 px images with batch_size=1, but after training is over, when I try to test/predict I get the following error:

ResourceExhaustedError: OOM when allocating tensor with shape[20,16,560,560] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: conv2d_2/convolution = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](activation_1/Relu, conv2d_2/kernel/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

I suppose it is a memory issue.

So my question is: is it possible to first use the GPU for training and then switch to the CPU to predict some results, all within one Jupyter notebook?

And can we free up GPU memory from inside a script on Windows?

I found these two topics, but as far as I can tell they have to be applied at the beginning of the script; what I want is to switch after training.

https://github.com/keras-team/keras/issues/4613

Switching Keras backend Tensorflow to GPU

Any help would be appreciated. Thanks.
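For scale, the single activation tensor named in the OOM message already accounts for a large slice of a 2 GB card on its own, before counting weights, the cuDNN workspace, and the other layers' activations. A quick back-of-the-envelope check, assuming the reported "float" type means float32 (4 bytes per element):

```python
# Footprint of the tensor the allocator failed on:
# shape [20, 16, 560, 560], assumed float32 (4 bytes per element).
elements = 20 * 16 * 560 * 560
size_mb = elements * 4 / 1024**2
print(f"{size_mb:.0f} MB")  # ~383 MB for this one tensor alone
```

Note also that the leading dimension is 20, not 1, which suggests the prediction call is not actually running with a batch of one.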
















keras tensorflow gpu

edited Mar 19 at 10:19 by HFulcher
asked Mar 19 at 10:14 by John
1 Answer












For the prediction step, are you using the same batch size as in training? Batched prediction should bring memory usage down.

answered Mar 19 at 13:25 by Shamit Verma
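The batching the answer refers to can be illustrated without Keras: `model.predict(x, batch_size=...)` walks the input in fixed-size chunks so that only one chunk's activations need to be resident at a time. A sketch of that chunking logic (an illustration, not Keras's actual implementation):

```python
# Split n_samples indices into consecutive batches of at most batch_size,
# mirroring how batched prediction bounds peak activation memory.
def batch_ranges(n_samples, batch_size):
    for start in range(0, n_samples, batch_size):
        yield start, min(start + batch_size, n_samples)

# e.g. 5 test images predicted 2 at a time:
print(list(batch_ranges(5, 2)))  # [(0, 2), (2, 4), (4, 5)]
```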












• Yes, I use batch_size=1. Actually, that is another problem: the GPU has 2 GB of RAM, but I can only feed the network with batch_size=1 (560x560 px). It should be able to handle more than that, shouldn't it?
  – John, Mar 19 at 14:32

• In that case, save the model to a file and use another script for prediction (that is, if batch_size=1 is already used for prediction). Maybe some code/Keras/TF issue is preventing GPU memory from being freed.
  – Shamit Verma, Mar 20 at 3:58

• Yeah, I will do it like that. Thanks.
  – John, 2 days ago
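The two-script workflow suggested in the comments works because CUDA_VISIBLE_DEVICES is read when the CUDA runtime initializes, which is also why flipping it mid-notebook after training tends to have no effect: TensorFlow has already claimed the GPU. Launching prediction as a fresh process with the GPU hidden sidesteps that. A minimal sketch, where `predict.py` is a hypothetical script that would load the saved model with `keras.models.load_model` and call `predict`:

```python
import os
import subprocess
import sys

# Hide the GPU from the child process only; the parent keeps its devices.
# CUDA_VISIBLE_DEVICES="-1" must be in the environment before TensorFlow
# initializes in the child, forcing it onto the CPU.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="-1")

# Stand-in for `python predict.py`: just show that the child sees the flag.
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
    env=env, capture_output=True, text=True,
)
print(child.stdout.strip())  # -1
```

In the training notebook you would call something like `model.save('model.h5')` first and then launch the prediction script this way; within a single TF 1.x process, `keras.backend.clear_session()` releases graph state, but (as the linked Keras issue discusses) allocated GPU memory is generally not returned to the OS until the process exits.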









