



Why do I get an OOM error although my model is not that large?




I am a newbie to GPU-based training and deep learning models. I am running a cDCGAN (conditional DCGAN) in TensorFlow on my two Nvidia GTX 1080 GPUs. My dataset consists of around 32,0000 images of size 64*64 with 2350 class labels. If I set the batch size to 32 or larger, I get an OOM error like the one below, so I am using a batch size of 10 for now.



tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor with shape[32,64,64,2351] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: discriminator/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](_arg_Placeholder_0_0/_41, _arg_Placeholder_3_0_3/_43, discriminator/concat/axis)]]
Caused by op 'discriminator/concat', defined at:
File "cdcgan.py", line 221, in <module>
D_real, D_real_logits = discriminator(x, y_fill, isTrain)
File "cdcgan.py", line 48, in discriminator
cat1 = tf.concat([x, y_fill], 3)


The training is very slow, which I understand is down to the batch size (correct me if I am wrong). If I run watch -n 1 nvidia-smi, I get the following output.



[screenshot: nvidia-smi output for both GPUs]



GPU:0 is mainly used: its Volatile GPU-Util reads around 0%-65%, whereas GPU:1 stays at 0%-3% at most. GPU:0 is always in performance state P2, whereas GPU:1 is mostly in P8 and sometimes P2. I have the following questions.



1) Why am I getting an OOM error at the larger batch size even though my dataset and model are not that big?



2) How can I utilize both GPUs equally in TensorFlow so that training is faster? (From the error above, it looks like GPU:0 fills up immediately whereas GPU:1 is barely utilized; that is just my understanding.)



The model details are as follows.



Generator:



I have 4 layers (fully connected, UpSampling2D-conv2d, UpSampling2D-conv2d, conv2d).

W1 has shape [X+Y, 16*16*128], i.e. (2450, 32768); w2 is [3, 3, 128, 64], w3 is [3, 3, 64, 32], and w4 is [3, 3, 32, 1].



Discriminator:

It has five layers (conv2d, conv2d, conv2d, conv2d, fully connected).

w1 is [5, 5, X+Y, 64], i.e. (5, 5, 2351, 64); w2 is [3, 3, 64, 64], w3 [3, 3, 64, 128], w4 [2, 2, 128, 256], and the fully connected layer [16*16*256, 1].



Session configuration

I am also letting TensorFlow allocate GPU memory on demand (rather than reserving it all up front) via



gpu_options = tf.GPUOptions(allow_growth=True)
session = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options))
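(For illustration, and as an aside beyond my original setup: the same ConfigProto mechanism also accepts a per-process memory cap and device-placement logging. Both are standard TF1 options; the values here are only an example.)

gpu_options = tf.GPUOptions(allow_growth=True,
                            per_process_gpu_memory_fraction=0.9)  # leave ~10% headroom per GPU
config = tf.ConfigProto(gpu_options=gpu_options,
                        log_device_placement=True)  # print which device runs each op
session = tf.InteractiveSession(config=config)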









tensorflow gan gpu

asked yesterday, edited yesterday
Ammar Ul Hassan


1 Answer




















1) Why am I getting an OOM error at the larger batch size even though my dataset and model are not that big?




Yes, the batch size is most likely the reason.
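A quick back-of-the-envelope check (a sketch; it assumes float32, i.e. 4 bytes per element) using the shape from the error message shows why. The 2351 channels come from stacking the one-hot label maps onto the image in tf.concat([x, y_fill], 3), so that single concatenated tensor already needs over 1 GiB at batch size 32, and TensorFlow keeps many such activations plus their gradients alive at once on an 8 GB GTX 1080:

batch, h, w, c = 32, 64, 64, 2351       # tensor shape from the OOM message
bytes_needed = batch * h * w * c * 4    # float32 = 4 bytes per element
print(bytes_needed / 1024**3)           # ~1.15 GiB for this one tensor alone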



Another reason is that you are not using the second GPU at all; if both GPUs split the batch (and hence the computation), you could use larger batches.




2) How can I utilize both GPUs equally in TensorFlow so that training is faster? (From the error above, it looks like GPU:0 fills up immediately whereas GPU:1 is barely utilized; that is just my understanding.)




By default, TensorFlow occupies the memory of all available GPUs (that is why you see it in nvidia-smi: process 34589 holds memory on both GPUs), but unless you explicitly write multi-GPU code, it will compute on only one of them.



Here are the official TF docs on using multiple GPUs: https://www.tensorflow.org/guide/using_gpu#using_multiple_gpus



And here is a tutorial with more multi-GPU examples: https://jhui.github.io/2017/03/07/TensorFlow-GPU/
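As a minimal sketch of the in-graph replication those links describe (it borrows discriminator, x, y_fill and isTrain from your question; the variable-scope name and the batch-splitting choice are illustrative assumptions, not your actual code), you could split each batch across the two GPUs and share weights between the towers:

def discriminator_tower(x_half, y_half, reuse):
    # Build the same discriminator graph on each GPU; reuse=True shares weights.
    with tf.variable_scope('d_towers', reuse=reuse):
        return discriminator(x_half, y_half, isTrain)

x0, x1 = tf.split(x, 2, axis=0)          # halve the batch along the batch axis
y0, y1 = tf.split(y_fill, 2, axis=0)

with tf.device('/gpu:0'):
    d0, d0_logits = discriminator_tower(x0, y0, reuse=False)
with tf.device('/gpu:1'):
    d1, d1_logits = discriminator_tower(x1, y1, reuse=True)

The loss is then computed from both halves, and the optimizer places each gradient op next to the forward ops it differentiates, so both GPUs do useful work; this mirrors the tower pattern in the linked docs.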






answered yesterday, edited 23 hours ago
Antonio Jurić












• Thank you for this detailed explanation; I will definitely set up multiple GPUs manually. Just one last thing: what do you mean by "some extra preprocessing/postprocessing in the graph and so on ..."? Could you give me a brief overview? – Ammar Ul Hassan, 23 hours ago










• Since that is a little out of scope here, I have updated the answer and removed that part. The "batch size" reason is enough here. :) – Antonio Jurić, 23 hours ago









