https://developer.nvidia.com/cuda-toolkit/whatsnew
>CUDA 9 is the fastest software platform for GPU-accelerated applications. It has been built for Volta GPUs and provides faster GPU-accelerated libraries, improvements to the programming model, compiler and developer tools. With CUDA 9 you can speed up your applications while making them more scalable and robust.
>Download the CUDA 9 Release Candidate (RC) today to try the latest release.
THANK YOU BASED NVIDIA
>>61728860
But muh games
>>61728860
why would someone use CUDA and bind themselves to NVIDIA instead of using OpenCL and staying flexible for hardware replacements?
what if AMD releases better hardware, but you cucked yourself into using CUDA?
>>61729132
>what if AMD releases better hardware
about as likely as the sun rising in the west
>>61729146
Go be underage somewhere else.
Can I build something that targets CUDA 9, and then still run it on an older GPU?
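(Generally yes, if you build a fat binary targeting the older card's compute capability, and embed PTX so even-newer GPUs can JIT it. A minimal sketch, assuming a Maxwell-era sm_52 card and a hypothetical file `vec_add.cu`; new Volta-only features obviously still won't run on old hardware:)

```cuda
// vec_add.cu - builds with the CUDA 9 toolchain but still runs on older
// GPUs if you target their compute capability, e.g.:
//   nvcc -gencode arch=compute_52,code=sm_52 \
//        -gencode arch=compute_52,code=compute_52 vec_add.cu -o vec_add
// The second -gencode embeds PTX, so future GPUs can JIT-compile it too.
#include <cstdio>

__global__ void vec_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));  // unified memory
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }
    vec_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```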
>>61728860
Teslas are fucking beasts. If I had an unlimited budget I would build a quad SLI Tesla PC just to trigger the pajeets that are infecting this board.
>>61728860
>>61729146
>>61729279
Hahaha, you're all faggots. Meme man and his meme machines with their big dies are about to get BTFO by cheaper and 30x more efficient ASICs.
https://www.semiwiki.com/forum/content/6936-ai-asics-exposed.html
https://www.wired.com/2017/04/building-ai-chip-saved-google-building-dozen-new-data-centers/
>>61729146
Hope you're ready for a solar eclipse of your asshole.
>>61726203
>>61729132
amd already has a HIP tool (hipify) that translates cuda shit code into portable HIP C++ that runs on both amd and nvidia
>>61728860
>this triggers the amdrone
>>61730341
>this causes the retard that didn't bother to read the thread to shitpost like a mindless pile of trash vaguely formed into the shape of a human being
>>61729380
>>61729466
>>61729612
Too bad AMD's OpenCL drivers are garbage and literally worse than Nvidia's. Explain to me how a fucking 980ti pushed a fury x's shit in at folding@home
https://www.computerbase.de/2015-06/amd-radeon-r9-fury-x-test/8/#diagramm-folding-at-home-fahbench
>>61730359
>amdrones shitting up all threads
color me surprised
>>61730426
>proof NVidia is about to get spanked is now shitting up a thread
Desperation mode engaged.
http://www.phoronix.com/image-viewer.php?id=2016&image=amd_catalyst_curse_lrg
AYYMD, NOT EVEN ONCE
>>61730458
>2015
You're not even trying anymore.
>>61730394
You won't see an answer on /r/amd I mean /g/.
>>61730394
>>61730621
>2015
Again, not trying. At all.
https://www.blendernation.com/2017/04/12/blender-cycles-opencl-now-par-cuda/
http://www.anandtech.com/show/11278/amd-radeon-rx-580-rx-570-review/14
>>61729132
Nvidia supports x86 and IBM POWER8, and (with a slightly older CUDA toolkit) ARM. amdgpu-pro, which is needed for OpenCL (and Vulkan), only works on x86 Linux, and has already dropped support for some graphics cards released not too long ago.
From what I've used of it, Nvidia CUDA is a pretty nice API to use, and it gives you C++11 device code support, with C++14 support as of CUDA 9 (the new version discussed in the OP). It also lets you use intrinsic functions for fast computation, as well as low-level PTX assembly.
There also seems to be a lot of software and libraries around built for CUDA which does help.
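A small sketch of what that looks like in practice (hedged: `__expf` is the fast-math intrinsic version of `expf`, and the `asm()` line is inline PTX; the kernel itself is made up for illustration):

```cuda
// C++11 device code, an intrinsic, and inline PTX in one kernel.
__global__ void demo(float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    // Fast (reduced-precision) hardware intrinsic instead of expf():
    float e = __expf(static_cast<float>(i));
    // Inline PTX: count leading zeros of the thread index.
    unsigned clz;
    asm("clz.b32 %0, %1;" : "=r"(clz) : "r"(static_cast<unsigned>(i)));
    out[i] = e + static_cast<float>(clz);
}
```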
>>61730878
It's mostly libraries though.
CUDA has a LOT of them.
Especially for ML.
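e.g. the BLAS one alone saves you from writing your own GEMM. A minimal sketch, assuming the cuBLAS library that ships with the toolkit (link with `-lcublas`); the 2x2 matrices are just for illustration:

```cuda
#include <cstdio>
#include <cublas_v2.h>

int main() {
    const int n = 2;                    // 2x2 matrices for brevity
    float ha[] = {1, 2, 3, 4};          // column-major, as cuBLAS expects
    float hb[] = {5, 6, 7, 8};
    float hc[4] = {0};
    float *a, *b, *c;
    cudaMalloc(&a, sizeof(ha));
    cudaMalloc(&b, sizeof(hb));
    cudaMalloc(&c, sizeof(hc));
    cudaMemcpy(a, ha, sizeof(ha), cudaMemcpyHostToDevice);
    cudaMemcpy(b, hb, sizeof(hb), cudaMemcpyHostToDevice);

    cublasHandle_t h;
    cublasCreate(&h);
    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C, all n x n, leading dimension n
    cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, a, n, b, n, &beta, c, n);
    cudaMemcpy(hc, c, sizeof(hc), cudaMemcpyDeviceToHost);
    printf("%g %g %g %g\n", hc[0], hc[1], hc[2], hc[3]);

    cublasDestroy(h);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```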
>when you're forced to buy nvidia because all machine learning / deep learning libraries use CUDA and cudnn
Why must I suffer
>>61731392
ROCm will be mature in a year or two.
Your suffering is only temporary.
>>61730712
What point are you trying to prove here? TFLOPS and power usage for a 580 are on par with and greater than the 1070's, respectively. And the 580 gets its shit wrecked in all of those benchmarks. It doesn't even come close.
>>61729380
huh, chips made for ANNs, why the fuck not?
Training runs forever, so I'd really want that now that I don't have legal cluster access anymore
>>61728860
how does it feel that asics are literally faster on AI than nvidia shit?
Stop. STAHP. AMD's already dead ;_;
>>61729380
Nvidia has TPU-like tensor cores built into their compute chips. AFAIK TPUs are only useful for a few operations in CNNs; they don't work with RNNs and such, so GPUs are still superior
>>61733029
Not sure you can say that when Teslas have 815mm^2 dies that make them cost a small fortune each.
>>61729132
Because OpenCL is seriously limiting and barely supported anymore, while CUDA gives you low-level control.