Is the PNY NVIDIA Quadro RTX 4000 a good GPU for Machine Learning on Linux?
As a web developer, I am growing increasingly interested in data science and machine learning, enough that I have decided to build a lab at home.
I have discovered the Quadro RTX 4000 and am wondering how well it would run ML frameworks on Ubuntu Linux. Are the correct drivers available on Linux so that this card can take advantage of ML frameworks?
LINUX X64 (AMD64/EM64T) DISPLAY DRIVER
This is the only driver I could find, but it is labelled a "Display Driver", so I am not sure whether it enables ML frameworks to use this GPU for acceleration. Will it work with Intel-based processors?
Any guidance would be greatly appreciated.
gpu linux
asked Apr 5 at 21:30 by crayden, edited Apr 5 at 21:34
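A minimal sanity check along these lines (my own sketch, not part of the original question): once the NVIDIA driver plus CUDA/cuDNN are installed, GPU builds of TensorFlow 2.x and PyTorch can report whether they actually see the card. The package choices and versions here are assumptions.

```python
# Sanity check that the GPU is visible to common ML frameworks.
# Assumes the NVIDIA driver, CUDA and cuDNN are installed, together with
# GPU-enabled builds of TensorFlow 2.x and PyTorch.

import tensorflow as tf
import torch

# TensorFlow lists the physical GPUs it can use for acceleration.
print("TensorFlow sees:", tf.config.list_physical_devices("GPU"))

# PyTorch reports whether CUDA is usable and which device it found.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```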
I've used the 2000 version and the main issue is that it does not have enough memory: 5 GB is not adequate for DL tasks. If you can afford it, buy a 2080, which is excellent. I don't know the memory of the 4000, but the 2000's memory is very limiting and you cannot train big models on it. The GPU itself is reasonably powerful, though.
– Vaalizaadeh, Apr 5 at 21:33
I should also mention that PNY cards do not have a good cooling system. Keep that in mind.
– Vaalizaadeh, Apr 5 at 21:37
Thanks for your feedback @Media. Would you be able to recommend a card that would work well for getting up and running with ML/deep learning?
– crayden, Apr 5 at 21:38
I think the 2080 Ti is the best at the moment due to its power and the new tensor cores introduced for DL/ML tasks. It is also far cheaper than the Titan.
– Vaalizaadeh, Apr 5 at 21:40
I noticed you are referring to the older 2000/5 GB version of the Quadro. The new Quadro RTX line is based on the Turing architecture and includes dedicated tensor cores for acceleration. Shouldn't that make a huge difference between the 2000 version you have used and the new RTX/Turing-based cards?
– crayden, Apr 5 at 21:45
1 Answer
You seem to be looking at the latest Quadro RTX 4000, which has a CUDA compute capability of 7.5; you can find the complete list for all Nvidia GPUs here.
While that score is impressive (the same as the RTX 2080 Ti), the main drawback is the 8 GB of memory. This is definitely enough to get started with ML/DL and will allow you to do many things. However, memory is often the thing that will slow you down and limit your models.
The reason is that a large model requires a large number of parameters. Take a look at the table of pretrained models included in Keras, which lists the number of parameters each model requires.
The issue is that the more parameters you have, the more memory you need, and so the smaller the batch size you can use during training. There are many arguments for larger vs. smaller batch sizes, but having less memory will force you to stick to smaller batch sizes when using large models.
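To make that concrete, here is a rough back-of-the-envelope sketch (my own illustration, using the parameter counts published in the Keras documentation and assuming 32-bit floats). Activations, gradients and optimizer state typically multiply these numbers several times over during training.

```python
# Rough weight-memory estimate for a few well-known Keras models,
# assuming float32 parameters (4 bytes each). Training needs several
# times this once activations, gradients and optimizer state are added.

keras_models = {
    "MobileNet": 4_253_864,
    "ResNet50": 25_636_712,
    "VGG16": 138_357_544,
}

BYTES_PER_PARAM = 4  # float32

for name, n_params in keras_models.items():
    mib = n_params * BYTES_PER_PARAM / 2**20
    print(f"{name}: {n_params:,} parameters, about {mib:.0f} MiB of weights")
```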
It seems from Nvidia's marketing that the Quadro product line is aimed more at creative professionals (film/image editing, etc.), whereas the GeForce line is for gaming and AI. This suggests that the Quadro is not necessarily optimised for fast computation.
– n1k31t4, answered Apr 5 at 22:57
Nvidia's marketing makes me believe this card is better than GeForce for AI. From the product page: "Deep learning frameworks such as Caffe2, MXNet, CNTK, TensorFlow, and others deliver dramatically faster training times and higher multi-node training performance. GPU accelerated libraries such as cuDNN, cuBLAS, and TensorRT deliver higher performance for both deep learning inference and High Performance Computing (HPC) applications." I thought GeForce was optimized for gaming, in contrast.
– crayden, Apr 5 at 23:08
Perhaps they are starting to push in that direction - they have the same text everywhere, but GeForce has generally been the product line to go for. You care about the number of CUDA cores, the amount of memory, and the transfer rate of that memory. Then just find the best combination of those factors that your budget allows.
– n1k31t4, Apr 5 at 23:20
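As a quick way to inspect those factors on whatever card is installed, here is a sketch assuming a PyTorch build with CUDA support; note that PyTorch exposes streaming multiprocessors rather than CUDA cores directly.

```python
import torch

# Print the properties mentioned above for the first CUDA device, if any.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("Name:", props.name)
    print("Total memory (GiB):", round(props.total_memory / 2**30, 1))
    # Cores per SM depend on the architecture (64 FP32 cores per SM on Turing),
    # so CUDA cores ~= multiprocessor count * cores-per-SM.
    print("Streaming multiprocessors:", props.multi_processor_count)
else:
    print("No CUDA device visible to PyTorch.")
```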
What is an adequate amount of memory and CUDA cores?
– crayden, Apr 5 at 23:24
How long is a piece of string? ;) If you want to work with images/videos, the more the better. Working with text can be less memory intensive, and something like stock market data is not memory hungry. If you get the Quadro, an RTX or a Titan, it is likely that the human will be the slowest link. Just don't work with a CPU and you'll be fine.
– n1k31t4, Apr 5 at 23:32