Why do 10-bit, 12-bit etc. color palettes exist? If 8-bit is too low for HDR, for example, why not just jump directly to 16-bit? It might use a bit more space, but storage is cheap, and it might even improve performance: 16-bit is a standard value size (2 bytes), so it's less complicated to fetch from memory, and it's more future-proof than 10-bit etc., which might just have to be increased again later. Or am I just misunderstanding something?
bits are exponential so 10bit colour is actually 64x more than 8bit colour. anything past 13 is impossible for non-quantum computers to render
>>55651830
>anything past 13 is impossible for non-quantum computers to render
you're not getting this one past me without a (Citation needed).
>>55651872
this is the most commonly cited study on quantum colour depth by Prof. H. Zeng:
sutlib2.sut.ac.th/sut_contents/H95009/DATA/5637_2.PDF
I actually did my dissertation on the problem so feel free to ask questions.
>he doesn't watch his anime in 16 bit
>>55651937
Nice dead link.
Because anything past 10-bit video is a placebo and offers extremely little gain in compression efficiency.
The jump from 8-bit to 10-bit color means you go from 2^24 ≈ 16.8 million possible colors to 2^30 ≈ 1.07 billion possible colors. Anything more than that is placebo to human eyes.
The only reason some groups use 10-bit video encoders is to reduce color banding across all bitrates and compress video ~10-20% more than 8-bit encoders would.
12-bit video encoders offer no visual improvements across all bitrates and improve compression efficiency by like 2% at most.
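Quick back-of-the-envelope for the color counts in that post (just the raw arithmetic, nothing encoder-specific):

```python
# Total representable RGB colors at a given per-channel bit depth:
# each of the three channels gets 2**bits levels.
def colours(bits_per_channel: int) -> int:
    return (2 ** bits_per_channel) ** 3

print(colours(8))                  # 16777216   (~16.8 million)
print(colours(10))                 # 1073741824 (~1.07 billion)
print(colours(10) // colours(8))   # 64 -- 10-bit has 64x the values of 8-bit
```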
>>55652186
works for me sempai? maybe try this?
sutlib2.sut.ac.th/sut_contents/H95009/DATA/5637_2.PDF
>>55652211
Seems that all of sut.ac.th is down for me...
>>55652189
this image is dumb
the top line has intentional huge jumps, like from RGB (5,5,5) to (13,13,13); the banding in the middle comes from the fact that it uses plateaus and discrete jumps > 1, and the RGB values aren't even equal, so it goes like (100,100,100), (104,100,100), (104,105,105)
the second gray line uses smaller steps and somehow ends up looking smoother (while still using 8-bit color depth, on an 8-bit monitor), and even then the differences between neighboring values are not always 1, so it doesn't even use the entire 8-bit range
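The plateaus-vs-steps point is easy to reproduce. A toy Python sketch (hypothetical ramps, not the actual image): both ramps fit in 8-bit values, but it's the large discrete jumps, not the bit depth itself, that read as banding.

```python
width = 256

# Ramp with plateaus and jumps > 1, like the top line described above:
# every run of 8 pixels shares one value, then the value jumps by 8.
coarse = [min(255, (x // 8) * 8) for x in range(width)]

# Ramp that steps by exactly 1, using the full 8-bit range.
smooth = list(range(width))

def max_step(ramp):
    """Largest difference between neighboring pixel values."""
    return max(abs(b - a) for a, b in zip(ramp, ramp[1:]))

print(max_step(coarse))  # 8 -> visible bands
print(max_step(smooth))  # 1 -> smooth gradient
```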
>>55652295
>>55652211
http://webcache.googleusercontent.com/search?q=cache:_mo4kPTv5d8J:sutlib2.sut.ac.th/sut_contents/H95009/DATA/5637_2.PDF+&cd=1&hl=pt-BR&ct=clnk&gl=br&client=firefox-b
>>55651965
>Megui
>>55652991
Yes?
>>55651759
Question. Why does the right picture (10-bit video) look fine when rendered on normal monitors that use 8-bit output?
I understand that it is 8bit per component on the monitor so 24bit total, but that should apply to the video on the left picture too, should it not? Meaning that an 8bit video should be capable of reproducing the picture on the right, regardless of whether it is 8-bit or 10-bit.
As long as the 8-bit video uses 4:4:4 pixels instead of 4:2:0 or 4:2:2 or whatever h264 does.
Or is the idea that 4:2:0 equivalent with 10-bit components can equal 4:4:4 with 8-bit components, but produce lower filesizes since 4:2:0 is inherently more suitable for compression?
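For reference, the sample-count arithmetic behind those subsampling schemes, per 2x2 pixel block (a sketch; real encoders store transformed coefficients, not raw samples, but the ratios are what matter):

```python
# Luma (Y) and chroma (Cb, Cr) samples stored per 2x2 pixel block.
schemes = {
    "4:4:4": (4, 4, 4),   # full chroma resolution
    "4:2:2": (4, 2, 2),   # chroma halved horizontally
    "4:2:0": (4, 1, 1),   # chroma halved both ways (what h264 typically uses)
}
for name, (y, cb, cr) in schemes.items():
    total = y + cb + cr
    print(f"{name}: {total} samples per 2x2 block ({total / 12:.0%} of 4:4:4)")
```

So 4:2:0 stores half the raw samples of 4:4:4 before compression even starts, which is why it's the cheaper starting point.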
>>55651759
The source is still 8 bits per channel. When you compress a large video file at 10 bits there are fewer rounding errors and less banding.
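A rough Python sketch of the rounding-error claim (toy uniform quantizer, not what any real encoder does internally): quantizing the same smooth 0..1 ramp to 8-bit vs 10-bit levels, the worst-case rounding error shrinks by 4x, which is where the reduced banding comes from.

```python
def quantise_error(value: float, bits: int) -> float:
    """Distance between a value in [0, 1] and its nearest n-bit level."""
    levels = (1 << bits) - 1
    return abs(round(value * levels) / levels - value)

ramp = [i / 1000 for i in range(1001)]
print(max(quantise_error(v, 8) for v in ramp))   # ~ half an 8-bit step (~1/510)
print(max(quantise_error(v, 10) for v in ramp))  # ~ half a 10-bit step (~1/2046)
```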