Using CPU after training in GPU
I am using tensorflow-gpu 1.10.0 and keras-gpu 2.2.4 with an NVIDIA GTX 765M (2 GB) GPU, on Windows 8.1 64-bit with 16 GB RAM.
I can train a network on 560x560-pixel images with batch_size=1, but after training is over, when I try to test/predict, I get the following error:
ResourceExhaustedError: OOM when allocating tensor with shape[20,16,560,560] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: conv2d_2/convolution = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](activation_1/Relu, conv2d_2/kernel/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
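A quick size check on the tensor named in the error helps explain the OOM: its leading dimension is 20, which suggests the predict call is processing 20 images at once rather than batch_size=1 (the numbers below are taken directly from the message above):

```python
# Size of the tensor in the OOM message: shape [20, 16, 560, 560], float32.
n_bytes = 20 * 16 * 560 * 560 * 4    # 4 bytes per float32 element
print(n_bytes)                       # 401408000 bytes
print(round(n_bytes / 2**20, 1))     # 382.8 (MiB)
```

A single ~383 MiB activation tensor, alongside the weights and every other layer's activations, can easily exhaust a 2 GB card.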
I suppose it's a memory issue.
So my question is: is it possible to first use the GPU for training and then switch to the CPU for prediction, within one Jupyter notebook?
Can we free up GPU memory on Windows from inside a script?
I found these two topics, but I think they have to be applied at the beginning of the script, whereas what I want is to switch after training:
https://github.com/keras-team/keras/issues/4613
Switching Keras backend Tensorflow to GPU
Any help will be appreciated, thanks.
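For context, the standard way to force TensorFlow onto the CPU is to hide the GPU via the CUDA_VISIBLE_DEVICES environment variable, but the variable is only read when TensorFlow first initialises, which is exactly why switching devices mid-notebook is awkward. A minimal sketch, assuming TensorFlow has not yet been imported:

```python
import os

# Hide all CUDA devices so TensorFlow falls back to the CPU.
# This must run BEFORE TensorFlow is imported for the first time;
# in a notebook that has already trained on the GPU it has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf   # imported after this point, TF sees no GPU
print(os.environ["CUDA_VISIBLE_DEVICES"])   # -1
```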
keras tensorflow gpu
edited Mar 19 at 10:19 by HFulcher
asked Mar 19 at 10:14 by John
1 Answer
For the prediction step, are you using the same batch size as in training? Batched prediction should bring down memory usage.
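The idea can be sketched without any framework: predicting batch by batch bounds peak memory by the batch size instead of the dataset size. In Keras the equivalent is `model.predict(x, batch_size=1)`; the `predict_in_batches` helper and the toy "model" below are illustrative, not Keras API:

```python
import numpy as np

def predict_in_batches(predict_fn, inputs, batch_size=1):
    """Run predict_fn one batch at a time and concatenate the results,
    so peak memory scales with batch_size, not with len(inputs)."""
    outputs = []
    for start in range(0, len(inputs), batch_size):
        outputs.append(predict_fn(inputs[start:start + batch_size]))
    return np.concatenate(outputs, axis=0)

# Toy "model" standing in for model.predict: doubles its input.
x = np.arange(6, dtype=np.float32).reshape(6, 1)
y = predict_in_batches(lambda batch: 2 * batch, x, batch_size=2)
print(y.ravel())   # [ 0.  2.  4.  6.  8. 10.]
```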
Yes, I use batch_size=1. Actually, that is another problem: the GPU has 2 GB of RAM, but I can only feed the network with batch_size=1 (560x560 pixels). I think it should be able to handle more than that, shouldn't it?
– John
Mar 19 at 14:32
In that case, save the model to a file and use another script for prediction, so that batch_size=1 is actually used at prediction time. Maybe some code/Keras/TF issue is preventing GPU memory from being freed.
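The separation can be demonstrated without Keras: launching the prediction step in a fresh process with the GPU hidden guarantees a clean, CPU-only TensorFlow regardless of what the training process did. The prediction script here is hypothetical; in practice it would call `keras.models.load_model` on the saved file. This sketch only shows the process/environment isolation:

```python
import os
import subprocess
import sys

# Environment for the child process: all CUDA devices hidden.
env = dict(os.environ, CUDA_VISIBLE_DEVICES="-1")

# Stand-in for "python predict.py model.h5": the child merely reports
# what it sees; a real script would load the model and predict on CPU.
child = "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"
result = subprocess.run([sys.executable, "-c", child],
                        env=env, capture_output=True, text=True)
print(result.stdout.strip())   # -1
```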
– Shamit Verma
Mar 20 at 3:58
Yeah, I will do it like that. Thanks.
– John
2 days ago
John is a new contributor. Be nice, and check out our Code of Conduct.
answered Mar 19 at 13:25 by Shamit Verma
Thanks for contributing an answer to Data Science Stack Exchange!