
waloshin

macrumors 68040
Original poster
Oct 9, 2008
3,339
173
I heard the newer Nvidia GPUs like the GTX 760 are horrible for computing, even though the 760 has 1,152 CUDA cores. Something about the new architecture is good for gaming but not for computing. Is this true?

Thanks
 

ChristianVirtual

macrumors 601
May 10, 2010
4,122
282
Japan
I use two 780s and one 660 Ti for folding, and they do a good job there. The major influence on performance is which algorithm your computation needs, plus how well the driver supports it.

What would you compute ?
 

waloshin

macrumors 68040
Original poster
Oct 9, 2008
3,339
173
I would be doing some SETI@home and maybe some video encoding that supports CUDA.
 

WMD

macrumors regular
Jun 12, 2013
175
7
Florida, USA
I wouldn't say they're "horrible," but there was a considerable drop-off after the 500 series. Apparently they did this to save power. The 700 series might be a little better than the 600, but I haven't found benchmarks comparing them in compute. Notably, I'd like to see a comparison between the 560 and 760. Do you have the card already?

In any case, for compute purposes, AMD video cards always seem to do quite a bit better.
 

waloshin

macrumors 68040
Original poster
Oct 9, 2008
3,339
173

I do have the GTX 760.
 

WMD

macrumors regular
Jun 12, 2013
175
7
Florida, USA
Cool...tell you what, do you mind running a quick benchmark for me? :)

Download this... http://www.distributed.net/Download_clients The one you want is x86/CUDA-3.1 for 32-bit Windows. (Unless, of course, you don't have Windows, in which case there are a few CUDA versions further down.) Once unzipped, bring up the command prompt, go to the folder you unzipped to, and run:

dnetc -bench

It runs for a few minutes, testing several configurations, then outputs an "optimal" result in keys per second. What do you get? My GTX 560 averages around 333 million per second.
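
If it helps, the whole thing at the command prompt is just the following (C:\dnetc is only a placeholder here; substitute whatever folder you actually unzipped the client to):

cd C:\dnetc
dnetc -bench

The keys-per-second figure it reports at the end is the number to compare.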
 

waloshin

macrumors 68040
Original poster
Oct 9, 2008
3,339
173

Will do after work. Thanks.
 

waloshin

macrumors 68040
Original poster
Oct 9, 2008
3,339
173

Looks to be around 12 million a second.

If that is the case, it is pretty pathetic that a $100 video card outperforms a new $300 video card.

[Attachment: keys.png]
 

WMD

macrumors regular
Jun 12, 2013
175
7
Florida, USA
OK, so around 460 million per second vs. 333 million on my 560. Not bad! :) Especially considering that the 660 regressed to around 300 in a benchmark I saw.

Sadly, these numbers pale in comparison to ATI/AMD numbers; a 5770 gets close to a billion per second, and a 7970 gets over 3 billion. Of course, it may be different depending on the workload, but I've never seen nVidia beat AMD in distributed computing tasks at the same price point. Basically, if you're buying a video card to do distributed projects, either exclusively or as a major job the computer will be doing, go AMD. I still like nVidia better for gaming, though, and I wouldn't bother selling a 760 just due to its compute performance (which, as dnetc suggests, is pretty good as far as nVidia goes).
 

waloshin

macrumors 68040
Original poster
Oct 9, 2008
3,339
173

Yeah, I am not getting rid of the 760; in comparison, a 290X costs around $100 more than a 760.
 