>none of them have APU's
We probably won't see any APU based Steam Machines until the Zen APUs get released with onboard GPUs that use the new AMDGPU driver instead of Catalyst.
Or until AMD realizes that they just need to drop all support of Catalyst and put all their funding into RadeonSI and Mesa. Hell, maybe they could fund a Mesa driver that just sits on top of Vulkan and an open source Vulkan driver.
what? i'm not a fan of APUs, but i was just looking at the A10-7850K in BF4: ran at 1080p on medium settings @ 60fps I think. not sure how they'll run in games that strain your CPU, but they can hold their own.
also, as for not using AMD, i believe the Vulkan API is heavily optimized for Haswell(?) or maybe just Intel in general. Something about 4400 - 4600 HD graphics working real good with Source 2 games or some shit? idk
It's not optimized for Intel graphics in particular. It's just that their reference implementation was done on Intel. Considering that Vulkan is in many ways just Mantle 2.0, it should work perfectly well on AMD hardware.
Kaveri-based APUs do a fine job in games playing at 1080p on medium settings. This is console-level performance. Moreover, a large number of people doing the steambox thing are going to be streaming from their gaming PCs, which can be done on a fucking toaster.
APUs are currently bottlenecked by most RAM. The CPU has to send instructions to the GPU by storing the instructions on the RAM, then having the GPU retrieve those instructions from the RAM. That process is slowed considerably by all but high end DDR3 RAM.
AMD needs to get their shit together and get some DDR4 support for their APUs, and then you'll see them gain traction as a budget gaming option
It's not really a budget gaming option when DDR4 costs a shit ton. It's really going to be good for mobile platforms though; a mobile APU with some DDR4 RAM is going to be pretty good for gaming at a low wattage.
DDR4 prices are dropping steadily. And even with the extra cost of DDR4 RAM, you would still probably be able to build a system cheaper than something with a discrete low end GPU. DDR4 isn't *that* much extra.
I'd need to see some benchmarks before believing anything, because with an APU it's always going to be a battle of GPU bottleneck vs. memory bottleneck, and currently we're on memory.
Basically there's going to be an upper limit where more memory speed isn't going to help, but it will separate the A8 and A10 a little more than they are currently.
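For anyone who wants to sanity-check the memory-bottleneck argument, here's a rough sketch of the peak-bandwidth math. The configs and the dual-channel, 64-bit-per-channel assumptions are just illustrative, not benchmarks:

```python
# Rough peak-bandwidth math for the system RAM an APU's iGPU has to
# share. Assumes a 64-bit (8-byte) bus per channel, dual channel;
# real sustained bandwidth is lower than this theoretical peak.

def ddr_bandwidth_gbs(mt_per_s: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s for a DDR memory config."""
    return mt_per_s * bus_bytes * channels / 1000

print(ddr_bandwidth_gbs(1600))  # DDR3-1600 dual channel: 25.6 GB/s
print(ddr_bandwidth_gbs(2400))  # DDR3-2400 dual channel: 38.4 GB/s
```

Even the fast DDR3 config is a fraction of what a midrange discrete card gets from GDDR5, which is the whole bottleneck argument in one number.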
Are you actually running Linux?
I run nvidia on Ubuntu every day. Last time I tried AMD on Linux, I couldn't even get X.org to start - even in VESA mode. Tried both 'stable' and 'beta' AMD drivers. Uninstallation of the AMD shit when I went back to nvidia was a bitch too.
Only a couple of Steam Machines could qualify as "low budget"; most are ridiculously overpriced. Again, almost all of them are also just rebadged Windows machines. You shouldn't expect to see an APU in a Steam Machine when the OEM in question doesn't offer a comparable Windows system powered by an APU either.
The GPU has direct access to the system memory. What you are very poorly describing there is how parallel compute works, and it couldn't be more irrelevant here, since the process is still hilariously faster than a CPU sending data to a discrete GPU.
Fast DDR3 with delta color compression provides adequate bandwidth for 8 CUs. After Carrizo, AMD will have HBM-equipped APUs. There is no need for DDR4 to serve as VRAM.
This entire thread is full of idiots.
HBM isn't on-die memory, it's on-package.
HBM1 modules are 1GB.
HBM2 modules are planned to be 4GB and 8GB with a data rate twice as high.
Every GPU going forward will have delta color compression.
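To put numbers on why HBM fixes the bandwidth problem, here's a back-of-envelope sketch. The 1024-bit-per-stack bus and the per-pin rates are nominal figures, assumed for illustration:

```python
# Back-of-envelope HBM bandwidth: each stack has a 1024-bit interface.
# HBM1 runs ~1 Gbit/s per pin; HBM2 doubles the per-pin data rate.

def hbm_stack_bandwidth_gbs(gbit_per_pin: float, bus_width_bits: int = 1024) -> float:
    """Theoretical bandwidth in GB/s for one HBM stack."""
    return gbit_per_pin * bus_width_bits / 8  # bits -> bytes

print(hbm_stack_bandwidth_gbs(1.0))  # HBM1: 128.0 GB/s per stack
print(hbm_stack_bandwidth_gbs(2.0))  # HBM2: 256.0 GB/s per stack
```

A single HBM1 stack already outruns any dual-channel DDR3 or DDR4 config by several times, which is why DDR4-as-VRAM is beside the point.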
Yeah, he gets a lot of stuff wrong. He's great when it comes to networking and sysadmin stuff, but that's all he really knows.
Last video they released he claimed that the Nvidia Shield console had a GTX 960 inside it, then he claimed that Valve was responsible for the Vulkan API.
I have a cheap AM1 build for network shit. Thermal sensors are 100% fucked and there is this weird error. Other than that, HD video plays back fine from what I've tested, although this is pretty much used entirely over PuTTY. I may be from the USA, but I sure as shit know it's not 4 degrees Celsius where the server is stored, let alone the CPU temp.

Uhhuh. NMI received for unknown reason 30 on CPU 3.
[743619.444311] Do you have a strange power saving mode enabled?
[743619.444315] Dazed and confused, but trying to continue

temp1: +4.0°C
Yeah, seems like they could have fixed that sensor shit in the last... what, 6 or 8 years that it's been a problem?
I think there are ways around it though. If you can get your kernel to recognize whatever chipset your motherboard uses for its onboard temp sensors, that might help.
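If you want to try that, a rough sequence on a Debian-ish box looks like this (the package name and module names are guesses for typical hardware; sensors-detect will report what your board actually uses):

```shell
# Hedged sketch: getting the kernel to expose onboard temp sensors.
# Package name assumes a Debian/Ubuntu system; module names are examples.
sudo apt-get install lm-sensors
sudo sensors-detect        # probes for sensor chips; answer the prompts
sudo modprobe k10temp      # AMD CPU temperature driver (family 10h+)
sudo modprobe nct6775      # example Super I/O driver for board sensors
sensors                    # print whatever readings the kernel now sees
```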
Take your bait and choke on it.
The 960 is 1024 shaders, and they're clocked much higher than the IGP in Tegra X1 can reach.
That much is obvious.
All AMD temp sensors work like this; the program you're using for monitoring is misreading them.
They have one real sensor for package temp, then internally a bit of math is done to calculate remaining thermal margin based on power consumption. Google it if you want to read more about it.
Get what you pay for, I guess. Honestly, I thought AMD was great back in the K6 and Athlon days. ATI has always been shit for drivers. What they really need is an even shittier competitor like Cyrix or S3 to make them look good.
I think you misunderstood my post... 256 shaders in the Shield console, 1024 shader 'cores' in the GTX 960. Therefore the Shield console is 'one quarter' of a GTX 960. How is that bait?
What is this supposed to prove other than the fact that AMD has poor Linux support and the people reverse engineering their shit haven't a clue how to actually read it? You do understand what is being debated here do you not?
The TL;DR is that the AM1 system in question isn't misreporting the value it's polling.
That CPU is accurately idling at 4 on the thermal margin scale.
If you want to poll an actual temperature the "CPU" value is what should be used.
But if you understand how the thermal margin scale actually works it is far more accurate than anything else. You can tell how much thermal headroom any given chip has at any given time, since the scale is ultimately adjusted to the chip's real tjmax. It is more accurate even than a physical temp sensor since the logic itself is monitoring its own current leakage and power consumption in real time.
It has been like this for nearly a decade.
Intel's core temp is adjusted to their tjmax as well, and still gives a human-readable temperature value by default.
The 'CPU' value you mentioned depends on your motherboard manufacturer's specific implementation, and on a thermistor sitting relatively far from the CPU, and it sucks to have to rely on that.
The individual core readings in an Intel processor can be inaccurate. You can have one sensor read a core at 90C under load while all others are at 60C. This might lead one to believe that there is an issue with how the heat sink is seated, or that there may be a curve in the IHS preventing effective cooling of one part of the die. Temp sensors are one thing that can pass validation even if they aren't fully functional, because testing doesn't thermally stress the chips for extended periods to check the values output.
AMD's approach makes more sense: it's simpler and more reliable. One of the few things they get right.
>One of the few things they get right.
It commonly reads *below ambient* temperatures by default, even in common Windows utilities because AMD can't be bothered to document how sensor readings relate to real temps on a given CPU model. If that's an example of something they got right, I'd hate to see what they got wrong.
Look at the #7 FAQ on Core Temp's website to see how useless AMDs documentation is.
Because that value isn't a temperature, as has already been stated plainly.
It is a scale of thermal margin. If it is currently reading 10, and 70 is tjmax, then you know you have 60 units of thermal margin before hitting tjmax. If the chip throttles and drops voltage at 60, and you idle at 20, then you know you have 40 units of thermal margin before the chip throttles.
It's incredibly simple, accurate, and reliable.
Their only flaw is in not explicitly stating these points on the scale for every given processor line.
I have no problem comprehending this while I'm totally shitfaced drunk. It's something that only enthusiasts need to know for overclocking purposes, and it takes all of 3 minutes to understand after googling it.
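The arithmetic really is that simple. A minimal sketch, with the tjmax and throttle points as made-up example values (they differ per chip):

```python
# Thermal-margin arithmetic from the posts above. The 70/60 unit points
# are example values only; real tjmax/throttle points vary per chip.

def headroom_to_tjmax(reading: float, tjmax: float = 70.0) -> float:
    """Margin units left before hitting tjmax."""
    return tjmax - reading

def headroom_to_throttle(reading: float, throttle_point: float = 60.0) -> float:
    """Margin units left before the chip throttles and drops voltage."""
    return throttle_point - reading

print(headroom_to_tjmax(10.0))     # 60.0 units of margin before tjmax
print(headroom_to_throttle(20.0))  # 40.0 units before throttling
```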
I understand that the value isn't a temperature after reading all the convoluted descriptions from unofficial sources, but every system utility I've seen puts a C or an F after it. It's unintuitive, it's not friendly to the user, and it's an oversimplification. It's hiding potentially important details behind the curtain of 'you don't need to know that'.
It shows the laziness and low standards of AMD that they won't even bother putting out a datasheet for each CPU model that shows offsets for the onboard thermal sensor.
Forget APUs: not a single Steam Machine features a dedicated AMD card.
I thought AMD said they were going to work with Valve to optimise drivers. Looks like they dropped the ball. Hard.
The fuck are you doing? That isn't a system of equations because there's no right-hand side. It's six EXPRESSIONS, not equations, so there's nothing to solve.
You could simplify, but there's no way to solve them. And if you want to simplify, you did it wrong, because the first line is already in its simplest form.
Simplified (correctly) would be:
A U T I S M = AUTISM (already in simplest form)
U U T I S M = (U^2)TISM
T T T I S M = (T^3)ISM
I I I I S M = (I^4)SM
S S S S M = (S^4)M
M M M M M = (M^5)
And I think there are typos in the last two lines.
because font is shit and double spaces between "I"s don't work, it looks like shit
let me try with code and single spaces everywhere:
A U T I S M
U U T I S M
T T T I S M
I I I I S M
S S S S S M
M M M M M M
With this post, my autism exceeded any levels possible in this reality
Only way to become more autistic than me is to make this in 3D
Anyway, back to the topic of Steam Machines.
Is there any good DIY Steam Machine guide where the final product takes up less space than a dishwasher?
Nvidia has to develop for Linux, since for their Tesla and mobile chips almost 100% of deployments run some sort of *nix.
You don't think people are building CUDA supercomputers that run wangblows, do you?