
/GPGPU/



File: 1475046410051.jpg (261KB, 1256x600px)
I do heavy parallel computing on my graphics card (not mining sheqels, though) and I need to upgrade. A couple of questions:
Are there any reliable, un-shilled OpenCL benchmarks for the current consumer cards? I've found that AIDA64 (proprietary) measures GPGPU performance, but I haven't come across a comparison table. This (https://community.amd.com/thread/198100) is the only somewhat modern comparison I've found.
If I used multiple cards, I wouldn't need SLI or Crossfire, right? That's only for gaymen and doesn't affect OpenCL performance?
I've also heard that GeForce FLOPS for anything other than single precision are artificially throttled to promote Quadros / Teslas.
Heavily considering buying two or three RX 480s right now, but I may also wait for NV's 10xx Ti and RX 4xx Vega.

pic unrelated
>>
Just find a mining GPU comparison list? I don't see why it wouldn't apply.
>>
>>57001306

>Are there some reliable, un-shilled OpenCL benchmarks for the current consumer cards?
Mostly, no. Most major reviewers have biases, and most non-major reviewers only do GAYMEN benchmarks.
>If I used multiple cards, then I wouldn't need SLI or Crossfire, right
Depends on how your application addresses and accesses the cards. In 90% of cases, no; OpenCL just sees each GPU as its own device (rough sketch at the end of this post). But setting up Crossfire/SLI is minimal extra effort anyway.
>I've also heard that GeForce FLOPS for anything other than single precision is artificially throttled to promote Quadros / Teslas.
Sorta correct. Modern Tesla and Quadro cards carry unique benefits these days, stuff like ECC memory in Quadros and fast half-precision (FP16) / INT8 paths in Teslas. Titan XP is still top dawg for raw processing on a single die tho.
>Heavily considering buying two or three 480RX' right now but I may also wait for NV's 10xx Ti and RX 4xx Vega.

If you can legitimately grab 2-3 4GB RX 480s for a great price, do it. Amazing cost/performance for raw compute.
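
Rough sketch of what I mean by addressing the cards yourself (minimal, OpenCL 1.2 style, assuming the headers and an ICD loader are installed; link with -lOpenCL). Each card just shows up as its own device, no SLI/Crossfire involved:

/* Each GPU is its own cl_device_id; you queue work to each one yourself. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU, 8,
                           devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("GPU %u on platform %u: %s\n", d, p, name);

            /* One context + command queue per card; split your work
             * (e.g. half the buffer to each) and enqueue independently. */
            cl_int err;
            cl_context ctx = clCreateContext(NULL, 1, &devices[d], NULL, NULL, &err);
            cl_command_queue q = clCreateCommandQueue(ctx, devices[d], 0, &err);
            /* ... clEnqueueNDRangeKernel(q, ...) with this card's share ... */
            clReleaseCommandQueue(q);
            clReleaseContext(ctx);
        }
    }
    return 0;
}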
>>
>>57001306
> using windows as a workstation
Just throw it in the trash if you're not using xenons or ECC RAM
>>
>>57001551
thanks a lot for the advice
>Sorta correct. Modern Tesla and Quadro cards carry unique benefits these days. Stuff like ECC Memory in Quadros and half-precision in Teslas (8-bit Int8).
Unfortunately, they're out of my budget right now, even though they sound great.

>If you can legitimately grab 2-3 4GB RX480s for a great price, do it.
I've seen one 10% off at my local vendor, which swayed my decision. Also, is the housefire meme more than just a meme? Is there any danger if I run three of those continuously for a couple of days?
>>
>>57001306
>I've also heard that GeForce FLOPS for anything other than single precision is artificially throttled to promote Quadros / Teslas.
This is the case for literally every consumer graphics card since the HD 7900 series and its derivatives. Even most workstation cards have poor double precision performance these days so they can shill compute cards. If you want good double precision performance, you have to buy FirePro W8100s or W9100s.
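
If you want to sanity-check a card before buying, a minimal sketch (OpenCL 1.2, link with -lOpenCL). Note it only tells you whether FP64 exists at all; the gimped FP64:FP32 ratio isn't exposed through clGetDeviceInfo, so you still have to read the specs or micro-benchmark it yourself:

#include <stdio.h>
#include <string.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* Does the device advertise the double-precision extension at all? */
    char exts[4096] = {0};
    clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, sizeof(exts), exts, NULL);

    /* Non-zero FP config means FP64 is supported (says nothing about speed). */
    cl_device_fp_config fp64 = 0;
    clGetDeviceInfo(device, CL_DEVICE_DOUBLE_FP_CONFIG, sizeof(fp64), &fp64, NULL);

    printf("cl_khr_fp64 advertised: %s\n",
           strstr(exts, "cl_khr_fp64") ? "yes" : "no");
    printf("FP64 supported at all:  %s\n", fp64 != 0 ? "yes" : "no");
    return 0;
}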
>>
>>57001592
Why do you always misspell Xeon?
>>
you don't want a graphics card per se, you want a coprocessor card. you'll get better bang for your buck with Intel coprocessors (many-core PCIe cards like the Xeon Phi) and Teslas than with GPUs.
>>
>tfw want to do some computer vision projects which means Nvidia is my only option since so many libraries use CUDA
>>
>>57002012
>tfw machine learning cucks locked themselves into CUDA and are now getting jewed to death by the chink jew running Jewvidia
>>
What kind of fucking GPU needs 14 fucking chokes

FOURTEEN CHOKES, each one can probably handle some 40-50W, WHY 14?!
>>
>>57002171
afaik ZOTAC are known for putting too many power phases on their custom PCBs
>>
>>57002266
That looks like an ASUS to me though.
>>
>>57002171
And the funny thing is, if one of them shits itself, the entire card no longer works.

Why not just use a few high-quality ones that can take more amps and lower the chance of something going wrong?
>>
>>57001884

>I've seen one being 10% off at my local vendor which swayed my decision. Also, is the housefire meme more than just a meme? Is there any danger to it if I run three of those continuously for a couple of days?
I've got an XFX RX480 Black Edition. Zero issues outside of the length. Just don't expect to be able to OC a reference cooler model to the moon and back.
>>
File: dgx.png (227KB, 722x976px)
170 teraflops of compute (half precision).

http://www.nvidia.com/object/deep-learning-system.html

8-way P100s. that's more powerful than a 16-way Titan (Pascal) SLI.

this thing is dope.
>>
>>57002433

>6400 watt psu

wat
>>
>>57002445
It's redundant, shithead.
>>
>>57002445
8x 300W GPUs with heavy network systems and all sorts; there's probably 2x redundancy on the power delivery too.

You should also balk at the price for this kind of shit.
>>
>>57002529

no it fucking isn't, that's drawing from all of them.
>>
>>57002445
Redundancies are important in servers.
>>
>>57002547

you can't get performance like this anywhere else in such a compact and convenient package. it's 129,000 dollars; you'd pay at least 200,000 for this kind of power, without the convenience, if you tried cobbling together one fucking hundred fucking seventy fucking tera fucking flops of raw god damned cuda fucking compute.
>>
File: gtx570blown1.jpg (2MB, 1936x2592px)
>>57002171
>>57002316
So they can use more of the cheaper chokes, sell the card for a much higher price, and have it fail faster due to lower-quality components so you buy another one in a year.
>>
>>57002601
Dayum son how did you blow two mosfets and a cap at the same time?
>>
>>57002556
>3200W TDP
So where's the other half going jackass?
>>
>>57002676
Nowhere. PSUs are most efficient at around 50-60% load, so a 6400W supply behind a ~3200W TDP sits right at peak efficiency, keeps power draw lower, and also acts as a backup.
>>
>>57002556
So it's redundant.
>>
>>57002676

probably to the vast electrical apparatus that prevents you from sucking every cock in america.
>>
>>57002433
>dual E5-2698 v4

that's 40 cores / 80 threads!
>>
>>57002806

i bet microsoft edge is literally still slow on the Nvidia dgx1 129,000 dollar 170 teraflop supercomputer server unit with 8 way P100s.
>>
>>57002806

and 512gb ram.
>>
>>57001306
http://www.phoronix.com/scan.php?page=news_item&px=October-2016-11-GPU-CL
>>
>>57001493
that depends on how the GPU handles the algorithm

An AMD card and an Nvidia card could have the same clock speed/RAM, but one could still be slower than the other
>>
>>57002158
As much as I hate it, having used both CUDA and OpenCL, I can see why. CUDA is much nicer.
>>
>>57003051
Khronos went full retard by forcing a separation of OpenCL and C/C++ code instead of making OpenCL an extension of C/C++.

I wonder what kind of neckbeard asshole decided that loading OpenCL code as a text file and making the GPU driver compile it at runtime was a good idea (roughly the pattern sketched below).
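
Minimal sketch of that pattern (OpenCL 1.2 style, assuming headers and an ICD loader; link with -lOpenCL): the kernel lives in a string or a loaded text file, and the driver compiles it when your program runs, so build errors only show up at runtime:

#include <stdio.h>
#include <CL/cl.h>

/* Kernel source shipped as a plain string, not compiled with the host code. */
static const char *src =
    "__kernel void scale(__global float *x, float a) {\n"
    "    size_t i = get_global_id(0);\n"
    "    x[i] *= a;\n"
    "}\n";

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

    /* The driver's compiler runs here, at runtime. */
    if (clBuildProgram(prog, 1, &device, "", NULL, NULL) != CL_SUCCESS) {
        char log[8192] = {0};
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG,
                              sizeof(log), log, NULL);
        fprintf(stderr, "build failed:\n%s\n", log);
        return 1;
    }

    cl_kernel k = clCreateKernel(prog, "scale", &err);
    printf("kernel compiled and created at runtime\n");

    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return 0;
}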
>>
>>57002960
peculiar how the 1070 outperforms the 1080 in every single LuxMark test. leads me to believe that there's something wrong with that benchmark's code
>>
>>57002433
>The NVIDIA DGX-1 is available for purchase in select countries and is priced at $129,000*. DGX-1 service and support at additional cost.

Well... It's only around 8 years of saving up money...
>>
>>57002879
>tfw you fell in the 1024 GIB
>>
>>57003159
You could forge your parents' signatures and sell their house to a reverse mortgage company without them knowing.
>>
File: geforce experience.webm (1MB, 1280x720px)
>>57002635
>Dayum son how did you blow two mosfets and a cap at the same time?

It's what they call the Geforce experience.
>>
>>57003211
top kek
though, AyyMD housefire ain't better.
>>
>>57002867
>i bet microsoft edge is literally still slow on the Nvidia dgx1 129,000 dollar 170 teraflop supercomputer server unit with 8 way P100s.

It may be, but it's still faster than Firefox and Chrome.
>>
>>57003159

get a business loan and then make money with it.
>>
>>57003242

i don't know about firefox, but at least once or twice an hour in edge, i wind up with the letters i type taking several seconds to enter the field. that's... ridiculous. i have dual 8-core xeons @ 3.3GHz, 64GB of ram. it's unacceptable. edge is worthless. i gave it a chance. i really really did.
>>
>>57002158
>jewed
>being antisemitic
>>>/pol/
>>
>>57003321

>implying jews aren't greedy

poorly memed
>>
>>57003345
All humans are greedy faggot
>>
File: 1474202588045.jpg (216KB, 605x763px)
>>57003321
hi newfag
back to /r/eddit
>>
>>57003377
>hi newfag
>implying you have to be antisemitic or you're new
>>
>>57003368

yeah but jews are notably more greedy.
>>
>>57003404
no, but then you would've known that shilling and jews are commonly associated on this board, just like on 4chan as a whole