explain the difference between processing 4K video vs 4K games
what exactly is at play during both instances
Even an abomination of a GPU like an R7 360 could handle 4K video
One is watching a video and the other is a virgin.
>>58658946
gamerfags on suicide watch
>>58658979
I bet you bought a 1080 to "work" didn't you, Alec
>>58658946
KEK'D
>>58658939
In extremely simplified terms, 4K video playback is just the ability to read the 4K file, decode it quickly enough, and draw it to the screen, whereas 4K gaming requires shitloads of calculations to figure out how things will look. Basically the video is precomputed, whereas the game needs to calculate everything you see on the fly.
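The precomputed-vs-on-the-fly difference fits in a few lines of Python. Everything here (frame contents, function names) is a made-up toy; real decoders and renderers are vastly more involved:

```python
# Toy illustration: video playback just looks up precomputed pixels,
# while a game has to recompute every pixel from live state each frame.

WIDTH, HEIGHT = 8, 4  # tiny "screen" so the example stays readable

# "Video": every frame's pixels were computed ahead of time (by the encoder).
video_frames = [
    [[(t, x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
    for t in range(3)
]

def play_frame(t):
    # Playback is just a lookup (plus decoding, omitted here).
    return video_frames[t]

def render_frame(t, player_x):
    # A "game" must compute each pixel from game state (player position,
    # time, etc.) every single frame.
    return [[(t, (x + player_x) % WIDTH, y) for x in range(WIDTH)]
            for y in range(HEIGHT)]

# The video frame is fixed; the game frame changes with game state.
assert play_frame(1) == play_frame(1)
assert render_frame(1, player_x=0) != render_frame(1, player_x=3)
```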
Man this board really went to shit.
>>58659040
In particular, in 3D games the video card calculates the color of each pixel individually, based on the position of the camera and the contents of the 3D scene.
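Here's a minimal sketch of that per-pixel idea in Python: one sphere, one light, camera at the origin. Everything is a toy stand-in; a real GPU runs this kind of math across thousands of shader cores in parallel, not in a Python loop:

```python
import math

# Per-pixel shading sketch: for each pixel, cast a ray from the camera and
# compute a brightness from the scene (one sphere, one light).

def shade(px, py, width=16, height=16):
    # Map the pixel to a ray direction through the image plane.
    dx = (px + 0.5) / width - 0.5
    dy = (py + 0.5) / height - 0.5
    dz = 1.0
    n = math.sqrt(dx*dx + dy*dy + dz*dz)
    dx, dy, dz = dx/n, dy/n, dz/n

    # Scene: sphere at (0, 0, 3) with radius 1; camera at the origin.
    cx, cy, cz, r = 0.0, 0.0, 3.0, 1.0
    b = dx*cx + dy*cy + dz*cz                      # ray . center
    disc = b*b - (cx*cx + cy*cy + cz*cz - r*r)
    if disc < 0:
        return 0.0                                 # ray misses: background
    t = b - math.sqrt(disc)                        # nearest hit distance
    hx, hy, hz = dx*t, dy*t, dz*t                  # hit point
    nx, ny, nz = (hx-cx)/r, (hy-cy)/r, (hz-cz)/r   # surface normal
    # Lambert shading against a light shining along -z.
    lx, ly, lz = 0.0, 0.0, -1.0
    return max(0.0, nx*lx + ny*ly + nz*lz)

# The center pixel hits the sphere head-on; a corner pixel misses it.
assert shade(8, 8) > 0.9
assert shade(0, 0) == 0.0
```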
>>58659040
is the decoding specifically done on the GPU or is CPU / RAM / HDD/SSD caching also taken into account
>>58659006
I bought a 1070 to train neural networks on it. Of course, I do play games also.
>>58659056
Decoding can be done on either the GPU or the CPU (and the other is mostly idle). The HDD most often has no effect because its read speed is almost always way above what you need to read all the data in time, and getting more RAM does not help with decoding either.
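Quick back-of-the-envelope check on the HDD claim. The figures below are ballpark assumptions (a high-bitrate 4K stream, a slow mechanical drive), not measurements:

```python
# Sanity check: even a high-bitrate 4K stream needs only a small fraction
# of a mechanical HDD's sequential read speed.
# Both figures are ballpark assumptions for illustration.

video_bitrate_mbps = 50           # ~50 Mbit/s, high-quality 4K stream
hdd_read_mb_per_s = 100           # ~100 MB/s sequential, slow mechanical HDD

video_mb_per_s = video_bitrate_mbps / 8    # megabits -> megabytes
headroom = hdd_read_mb_per_s / video_mb_per_s

assert video_mb_per_s == 6.25
assert headroom == 16.0           # the drive reads 16x faster than needed
```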
>>58658939
What takes more work:
watching a 2-hour movie in a theater, or making that movie over the course of years?
It's the same thing with videos and games.
>>58659075
what about the new h265 which apparently is more intense than h264
isn't that against the idea of technology being better and easier to use?
just like DX12 allows lower-end cards to run games better
why would anyone use h265 if it's harder to actually view
>>58659154
>what about the new h265 which apparently is more intense than h264
I have very little knowledge about the internal workings of h265, sorry.
>isn't that against the idea of technology being better and easier to use?
>why would anyone use h265 if it's harder to actually view
"Better" in this case means achieving the same quality at a lower bitrate.
>just like DX12 allows lower-end cards to run games better
come on now
>>58658939
With 4K video you have a file that tells you the color of each pixel (ignoring the inner workings of compression).
With 4K video games you have files that tell you the vertices of every object in the game, you have texture files, and you have coordinates that tell you how each texture is applied to a given model. Then you also have lighting, reflections, fog/dust, camera angle, and field of view. Add to that game logic such as player movement, artificial intelligence, and any number of world-world or player-world interactions. All of this has to be calculated to obtain the color of each pixel every time you want to render a new frame.
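To get a feel for the scale of that job, here's the raw pixel arithmetic for 4K at 60 fps. This ignores overdraw and multiple shading passes, which make the real number worse:

```python
# Scale of the job: how many per-pixel results a GPU has to produce each
# second for 4K gaming at 60 fps.

width, height, fps = 3840, 2160, 60

pixels_per_frame = width * height
pixels_per_second = pixels_per_frame * fps

assert pixels_per_frame == 8_294_400        # ~8.3 million pixels per frame
assert pixels_per_second == 497_664_000     # ~half a billion shaded pixels/s
```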
>>58659154
DX12 isn't some magic thing that makes games run better; it's just a low-level application programming interface, or API for short.
Basically, the API adds what's called a "layer of abstraction" to ease the interaction between programmers and the actual hardware. Think of the difference between pushing a button to send a signal that a computer interprets as an absolute function, like turning the volume up, and doing the same by hand by swapping a resistor in the circuit to change the amount of gain. Essentially, an API lets developers implement certain functions without the hassle of accounting for which parts of the system do what, and how each thing can affect the others.
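Here's a toy sketch of that abstraction idea in Python. Every class and method name below is invented for illustration (real drivers are nothing this simple): the caller asks for `set_volume(7)` and never learns how the backend actually does it:

```python
# Toy "abstraction layer": one call, per-device details hidden inside.
# Both backends are invented stand-ins for real hardware drivers.

class FakeAmpChip:
    def __init__(self):
        self.gain_register = 0
    def write_register(self, value):
        self.gain_register = value

class FakeUsbDac:
    def __init__(self):
        self.attenuation_db = 60
    def send_command(self, level):
        self.attenuation_db = 60 - level * 6

class VolumeApi:
    """The abstraction layer: the caller never touches the hardware."""
    def __init__(self, device):
        self.device = device
    def set_volume(self, level):            # level: 0..10
        if isinstance(self.device, FakeAmpChip):
            self.device.write_register(level * 25)
        elif isinstance(self.device, FakeUsbDac):
            self.device.send_command(level)

amp, dac = FakeAmpChip(), FakeUsbDac()
VolumeApi(amp).set_volume(7)   # same call...
VolumeApi(dac).set_volume(7)   # ...completely different hardware operations
assert amp.gain_register == 175
assert dac.attenuation_db == 18
```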
Up until recently, features added to the DirectX API streamlined the process of implementing them for developers, and also provided a hard-set standard for hardware manufacturers, ensuring their way of doing things would be appropriate before they invested in a technology that simply wouldn't work with the API.
Where things have changed is in just how flexible and capable GPUs have become. Early "video accelerators" did little more than draw pixels, referencing textures, on top of geometry computed by the CPU, plus some basic post-processing.
Today's GPUs can perform certain computations more effectively than CPUs, and are versatile enough to adapt to many different methods of feature implementation. We've passed the point where hard-set rules are convenient, to the point where optimization largely revolves around working AROUND the API. So what DX12 literally does is less. Thus more responsibility rests on developers to make their engine work with specific architectures.
The other large benefit is multicore scaling. Previous implementations were somewhat botched and far from efficient. DX12 and Vulkan are built from the ground up to better utilize more cores.
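Rough sketch of the multicore idea in Python: each "core" records its own command list, and the lists are submitted together at the end. `record_commands` is a made-up stand-in for real command-buffer recording; actual engines do this with native threads against the graphics API:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy model of DX12/Vulkan-style multithreaded command recording:
# instead of one thread recording all draw commands, each worker records
# its own command list, then the lists are submitted in order.

def record_commands(chunk):
    # Pretend each draw call becomes one encoded command.
    return [f"draw(object_{i})" for i in chunk]

objects = list(range(8))
chunks = [objects[0:4], objects[4:8]]   # one chunk per "core"

with ThreadPoolExecutor(max_workers=2) as pool:
    command_lists = list(pool.map(record_commands, chunks))

# pool.map preserves order, so the final command stream is deterministic
# even though recording happened concurrently.
submitted = [cmd for cl in command_lists for cmd in cl]
assert submitted[0] == "draw(object_0)"
assert len(submitted) == 8
```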
Tl;dr shitty devs will still make shitty games.
>>58659616
Here's an even more dumbed down explanation.
Rendering a video is like coloring in one of those children's paint-by-numbers books, where each number corresponds to a certain color. All the hardware, be it the CPU or the GPU, is doing is filling in the colors as it is told.
Rendering a game is like directing a play. You have a set of assets (textures, object meshes, character models, audio, etc.) and a script that tells the hardware what these assets are, where and how to use them, how they interact, and so on.
And this play has to be highly dynamic and adaptable, and be performed as quickly and seamlessly as possible.