keep_dims is deprecated, use keepdims instead



I cloned the repository:



import os

!git clone https://www.github.com/matterport/Mask_RCNN.git
os.chdir('Mask_RCNN')


And I've got an error. Which version of Keras should I have?



WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py:1154: calling reduce_max (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py:1188: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py:1290: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead


Furthermore:



totalMemory: 5.94GiB freeMemory: 5.44GiB
2019-04-03 22:37:38.374934: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0, 1
2019-04-03 22:37:40.343417: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-03 22:37:40.344366: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0 1
2019-04-03 22:37:40.344373: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N N
2019-04-03 22:37:40.344377: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 1: N N
2019-04-03 22:37:40.345556: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11435 MB memory) -> physical GPU (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:02:00.0, compute capability: 5.2)
2019-04-03 22:37:40.450785: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 5220 MB memory) -> physical GPU (device: 1, name: GeForce GTX TITAN Black, pci bus id: 0000:01:00.0, compute capability: 3.5)
2019-04-03 22:37:42.518519: W tensorflow/core/framework/allocator.cc:108] Allocation of 51380224 exceeds 10% of system memory.
2019-04-03 22:37:42.601229: W tensorflow/core/framework/allocator.cc:108] Allocation of 51380224 exceeds 10% of system memory.
2019-04-03 22:37:51.648032: W tensorflow/core/framework/allocator.cc:108] Allocation of 51380224 exceeds 10% of system memory.
2019-04-03 22:37:51.678817: W tensorflow/core/framework/allocator.cc:108] Allocation of 51380224 exceeds 10% of system memory.
2019-04-03 22:37:51.706928: W tensorflow/core/framework/allocator.cc:108] Allocation of 51380224 exceeds 10% of system memory.
[I 22:37:55.611 NotebookApp] Starting buffering for fa2cd5ca-20f3-4472-b6ca-6821e2f56118:02508f46d629494ab46babe6d7611656









Tags: python, neural-network, deep-learning, keras, tensorflow






asked Apr 3 at 19:53 by Badum
edited Apr 3 at 21:06 by Vaalizaadeh
1 Answer

It's not an error; it's a warning telling you that the code was written against an older version of TensorFlow, and some arguments of the functions it calls are deprecated and will be removed in a future release of the library. The code is still fine to use, but you can also check which version the repository expects, create a virtual environment, and install that specific version of the library.
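Concretely, the warnings refer to the keep_dims argument of TensorFlow's reduction ops, which newer releases spell keepdims. Below is a minimal sketch of the rename plus a version check; it assumes a TF 1.x environment (which the Matterport code was written for), and the exact versions to pin should come from the repository's requirements file:

import tensorflow as tf

print("TensorFlow:", tf.__version__)  # Mask_RCNN targets the TF 1.x line

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])

# Deprecated spelling, the one triggering the warnings in the question:
#   s = tf.reduce_sum(x, axis=1, keep_dims=True)
# Current spelling; keeps the reduced axis, so the result has shape (2, 1):
s = tf.reduce_sum(x, axis=1, keepdims=True)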



You can also run your cell twice; the warning is only printed once, so you won't see it again.
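If you would rather silence the warnings than rerun the cell, the TF 1.x logging API can raise the log threshold; a minimal sketch (note that this hides all TensorFlow warnings, not only this one):

import tensorflow as tf

# TF 1.x API: only messages at ERROR level or above are printed,
# so the keep_dims deprecation warnings are suppressed.
tf.logging.set_verbosity(tf.logging.ERROR)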






answered Apr 3 at 20:56, edited Apr 3 at 21:06 by Vaalizaadeh
• Ok. Furthermore, my GPU is very slow. I used to use a GTX 1050, and training one epoch took about 1-1.5 h. – Badum, Apr 3 at 21:25

• Now I use two GPUs (TITAN X + TITAN Black, 18 GB total), but the training time is similar to the GTX 1050 (4 GB). – Badum, Apr 3 at 21:26

• You have to find your bottleneck. It can be somewhere between disk and memory, or between memory and GPU memory. – Vaalizaadeh, Apr 3 at 21:33

• How should I do it? – Badum, Apr 4 at 9:30

• It is customary in DL tasks to load as much data into memory as possible and feed a subset of it to your GPU, because the GPU has a limited amount of memory. The first part can be implemented simply using generators (see the sketch below). – Vaalizaadeh, Apr 5 at 17:22
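To illustrate that last comment, here is a minimal sketch of a Keras 2.x data generator built on keras.utils.Sequence; it feeds the GPU one batch at a time, so the full dataset never has to fit in GPU memory. The load_image helper and the file paths are hypothetical placeholders, not part of the Mask_RCNN code:

import numpy as np
from keras.utils import Sequence

def load_image(path):
    # Hypothetical loader; in practice use e.g. skimage.io.imread or PIL.
    return np.zeros((128, 128, 3), dtype=np.float32)

class ImageBatchGenerator(Sequence):
    """Yields (images, labels) batches; Keras requests one batch at a time."""

    def __init__(self, image_paths, labels, batch_size=8):
        self.image_paths = image_paths
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.image_paths) / self.batch_size))

    def __getitem__(self, idx):
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        batch_x = np.stack([load_image(p) for p in self.image_paths[lo:hi]])
        batch_y = np.asarray(self.labels[lo:hi])
        return batch_x, batch_y

# Usage with a compiled Keras 2.x model (paths and labels are assumed):
# model.fit_generator(ImageBatchGenerator(paths, labels), epochs=10)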










