Hello /g/uys, I'm wondering if gpu clock rate really matters?
I'm building a new rig, and all that's left to get is a GPU. I'm going to be using a 900p (1600x900) monitor, and I heard that 2GB is good for that res, but I'm unsure if I should also consider core/memory clocks. Whenever people review GPUs, they tend to focus on everything but the clock rates...
>>55966746
i don't think they matter, op. gpu clock rate isn't as important as, say, cpu clock rate. but to be safe, i would go with something > 600mhz
>>55966746
When it comes to GPU performance, the number of shader cores matters most, followed by memory bandwidth (GB/s) and buffer size (VRAM).
This doesn't mean, however, that an AMD card with higher specs on paper will perform better than an Nvidia card at a similar price point.
You can only compare shader core counts 1:1 between GPUs of the same architecture (GCN 1.0/1.1/1.2/1.3, Maxwell/Pascal, etc.).
Maxwell 2 shader cores are roughly 30-40% faster in DX11 operations than GCN shader cores on average, so in theory you could scale the raw GFLOPS numbers by that factor to compare the two uArchs for DX11.
Clock rate on its own only matters when comparing the same GPU against itself.
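To put rough numbers on that, here's a quick Python sketch. The 1.35 efficiency factor is just the midpoint of that 30-40% claim, and the example specs are ballpark R9 380 / GTX 960 numbers, so treat it as illustrative, not gospel:
[code]
# Rough sketch: comparing theoretical DX11 throughput across uArchs.
# The efficiency factors are assumptions based on the ~30-40% figure
# above (Maxwell 2 normalized against GCN), not measured values.

def theoretical_gflops(shader_cores, clock_mhz):
    # cores * clock (MHz) * 2 ops/cycle (an FMA counts as two FLOPs),
    # divided by 1000 to convert MFLOPS to GFLOPS
    return shader_cores * clock_mhz * 2 / 1000.0

# per-uArch "effective DX11 work per FLOP", GCN as the baseline
DX11_EFFICIENCY = {
    "GCN": 1.0,
    "Maxwell2": 1.35,  # assumed midpoint of the 30-40% claim
}

def effective_dx11_gflops(shader_cores, clock_mhz, uarch):
    return theoretical_gflops(shader_cores, clock_mhz) * DX11_EFFICIENCY[uarch]

# illustrative comparison (specs roughly match an R9 380 and a GTX 960)
print(effective_dx11_gflops(1792, 970, "GCN"))        # ~3476
print(effective_dx11_gflops(1024, 1178, "Maxwell2"))  # ~3257
[/code]
Point being: fewer cores at a higher clock can land in the same ballpark once you account for how much work each core actually gets done.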
>>55966746
Unless the difference is significant (1000 vs. 1500MHz), it doesn't matter. Overclocking nets you about 100-300MHz on average, which usually isn't worth the increase in heat unless you have custom cooling.
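For scale, here's the best case in Python, assuming performance scales linearly with core clock (it doesn't quite, since memory bandwidth etc. stay put):
[code]
# Best-case uplift: performance scales at most linearly with core clock.
def max_uplift_percent(base_mhz, oc_mhz):
    return (oc_mhz - base_mhz) / base_mhz * 100

print(max_uplift_percent(1000, 1200))  # a +200MHz OC on a 1000MHz card -> 20.0% at most
[/code]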
>>55966746
I'll bite
>clock rate really matters
yes
because FLOPS are often calculated as
>number of streaming processors or cuda cores
>times the gpu clock
>times two, because each core can complete two calculations (one fused multiply-add) per cycle
for example a kepler based titan black, which has 2880 cores and a base clock of 889MHz (980MHz boost), should theoretically have 2880 x 889MHz x 2 ≈ 5121 GFLOPS of calculation power
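plugged into a quick python snippet (the 2 ops/cycle is the FMA assumption above; specs are the titan black numbers from this post):
[code]
# Minimal sketch of the FLOPS formula above.
def theoretical_gflops(cores, clock_mhz, ops_per_cycle=2):
    # cores * MHz * ops/cycle gives MFLOPS; divide by 1000 for GFLOPS
    return cores * clock_mhz * ops_per_cycle / 1000.0

print(theoretical_gflops(2880, 889))  # ~5121 GFLOPS at base clock
print(theoretical_gflops(2880, 980))  # ~5645 GFLOPS at boost clock
[/code]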