
/mpv/ - The Open Source and Cross Platform Video Player


Thread replies: 330
Thread images: 38

File: 432424.png (72KB, 586x314px)
Last thread >>61511211

Installation:
https://mpv.io/installation/

Wiki:
https://github.com/mpv-player/mpv/wiki

Manual:
https://mpv.io/manual/stable/

User Scripts (including OpenGL shaders):
https://github.com/mpv-player/mpv/wiki/User-Scripts

Migrating from MPC-HC?
https://github.com/dragons4life/MPC-HC-config-for-MPV/blob/master/input.conf

input.conf:
https://github.com/mpv-player/mpv/blob/master/etc/input.conf


Vulkan (Linux only for now):
https://github.com/atomnuker/mpv

Test Vulkan and post logs if it gives you any kind of problem.

For better playback quality paste this in your mpv.conf file:
profile=opengl-hq
cscale=ewa_lanczos
scale=ewa_lanczossharp

Interpolation (smooth motion):
interpolation
video-sync=display-resample
tscale=oversample

Check your settings for compatibility errors by running mpv from the command line, or by adding

log-file=log.txt

to your config. Search the log for anything "dumb" or for [e].
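Putting the above together, a minimal mpv.conf sketch (everything here is taken from the options listed above; tweak to taste):

# quality preset and scalers
profile=opengl-hq
cscale=ewa_lanczos
scale=ewa_lanczossharp

# interpolation (smooth motion)
interpolation
video-sync=display-resample
tscale=oversample

# write a log you can search for "dumb" or [e]
log-file=log.txt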

REMINDER >>61479064
>>
hai
>>
>>61526939
stop posting this shit with
cscale=ewa_lanczos
, literally no point as opposed to
cscale=ewa_lanczossharp
>>
File: 1477103092689.jpg (87KB, 649x815px)
>>61526974
ohayou
>>
>>61527010
stop posting this shit with
cscale=ewa_lanczossharp
, literally no point as opposed to
cscale=ewa_lanczos
>>
>>61526915
I think trying to turn mpv into an entire webm studio edition with thousands of options is futile; you can't address every possible use case. At some point you might as well go with a dedicated tool like https://github.com/Kagami/boram, which has a much better UI than a script could ever have.

Also the better script is not the one which has the most features.
>>
File: Screenshot_20170723-164326.png (320KB, 2560x1440px)
>>61527043
retard
>>
First for ElegantMonkey is a cool guy.
>>
Will opengl-dr fix the problem with display resample where you have to render a frame on every screen refresh?
>>
>>61527119
Second for him being a cooler guy than I-hate-free-software anon.
>>
>>61527105
Dumb phoneposter.
>>
>>61527105
Scroll up. It says that about scale, it doesn't say what cscale you should use.
>>
>>61527120
WAIT™ FOR VULKAN®
>>
>>61527157
why would you use a different cscale than scale
>>
>>61527202
if you want blurry chroma
>>
What is causing uneven/jerky panning?

I just compared the same anime scene with a panning camera in MPC and mpv.
It still feels kind of uneven with mpv.
I do have
>interpolation
>video-sync=display-resample
>tscale=oversample

Still feels like the video runs at slightly uneven pace.
>>
>>61527010
It's much faster and you won't see the difference anyway
>>
>>61527105
What >>61527157 said + haasn and Argon use ewa_lanczos. Some also use ewa_lanczossoft. The chroma layer doesn't benefit from lanczossharp nearly as much as luma does.
>>
>>61527225
>much faster
still trivial for even weak GPUs
>>
>>61527224
Try other tscale values such as mitchell. It is noticeably smoother, at the cost of being blurrier.
>>
>>61527053
this, I think the right interface for encode scripts like these is to implicitly use what mpv was using (e.g. for audio/subtitles), to avoid UI clutter. At most, I would provide different “presets”, for example a “4chan preset” (forces audio off and target filesize + VP8), and a “quality” preset (uses x264 + CRF + matroska + opus) or something.
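As a rough illustration (not taken from any of these scripts — the file names, timestamps and numbers below are made up), those two presets could map to something like:

# hypothetical "4chan" preset: VP8, no audio, aiming at ~3 MB for a 10 s cut
# target bitrate ~ 3 MB x 8 / 10 s ~ 2.4 Mbit/s
ffmpeg -ss 00:01:00 -t 10 -i input.mkv -c:v libvpx -b:v 2400k -an out.webm

# hypothetical "quality" preset: x264 + CRF + Opus in Matroska
ffmpeg -ss 00:01:00 -t 10 -i input.mkv -c:v libx264 -crf 18 -c:a libopus out.mkv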
>>
>>61527224
Could be that your system is failing to play at 60 fps to begin with. Double check with: https://github.com/haasn/interpolation-samples
>>
>>61527105
>>61527202
scale (luma) layer is a detail layer. cscale (chroma) is a lower resolution “color” image layered on top of luma. You won't get any benefit using ewa_lanczossharp IMO compared to ewa_lanczos and would just waste resources on it. Chances are you won't even notice any difference with "regular" lanczos or bicubic.
>>
>>61527366
The human eye can't even see above spline36.
>>
>>61527282
>Could be that your system is failing to play at 60 fps
Is there a way to see how fast it goes?
>>61527263
>mitchell
Looks like absolute fucking ass with anime.
Triangle too.
>>
>>61527053
I'm not trying to address every possible use case, but I'm able to do this, so I do it, for my entertainment and gain. Dedicated tools will obviously beat scripts, but with these you can forget about them until you have that scene you want to save because you just saw it. Doing it in-player is very handy, and if you feel like cropping the thing or targeting a size, bam.

I didn't try to claim anything about which is better. I was inspired by Zehkul's, and if I can help Monkey with things I've solved/had a thought about, everyone wins.
A plethora of features will make a script unwieldy, which I decided to combat with the menu system for options - hide them away until needed. Previously all the CRF and scale and 2-pass keybinds and statuses were always on display (in the "advanced" mode).
The segment editor I agree on, that's just silly. But silly is fun!

>>61527269
Presets/profiles are a good idea. Nontrivial to allow the user to trivially configure them, but hmm...
>>
>>61527409
>Looks like absolute fucking ass with anime.
But that's wrong, it looks great. Enjoy your jerkiness I guess.
>>
>>61527409
Try catmull_rom. I find both mitchell and triangle to be too blurry.
>>
>>61527409
>Is there a way to see how fast it goes?
Yes, stats.lua

But it could still be the case that your compositor doesn't play frames 1:1 for some reason
>>
File: 1491599944583.jpg (58KB, 600x583px)
Need to make logos for all these different webm scripts!
>>
>>61527366
You can see an obvious difference on some pathological clips, for example the (in)famous ndkqmf

Just because it doesn't make a difference most of the time doesn't mean it's always irrelevant
>>
>>61527224
>uneven/jerky panning?

Ok.
I fixed it.
I removed two lines:
>video-sync=display-resample
>deband-iterations=2
Either one of these was causing this.

Now it's perfect.
>>
>>61527470
The what ndkqmf?
>>
>>61527366
>Chances are you wont even notice any difference with "regular" lanczos and bicubic.
if anything the sharpness might exacerbate artifacts in the chroma
http://screenshotcomparison.com/comparison/216649
>>
>>61527510
>Now it's perfect.
And also doesn't work. Lul.
>>
>>61527510
>removed video-sync=display-resample
that disables interpolation dumbass lmao
>>
Is this the best non-placebo config?
# Video #
profile=opengl-hq
opengl-backend=dxinterop
hwdec=no
scale=ewa_lanczos
dscale=ewa_lanczos
cscale=ewa_lanczos
tscale=oversample
video-sync=display-resample
interpolation=yes
dither-depth=8
temporal-dither
>>
>>61527105
>>61527157
>>61527202
>>61527208
>>61527366
>>61527242

The sharpness we perceive doesn't come from the chroma layer, so sharpness is a less important criterion here.
Sharp scalers also produce more artifacts, and while we make that trade-off for scale (getting sharpness and artifacts), it's not worth it for cscale. The added sharpness of a sharper algorithm will not be noticeable when used for cscale, but the added artifacts will be.
>>
>>61527539
It doesn't?
>interpolation=yes
>tscale=oversample

These lines are still in place and it seems to be interpolated.
>>
>>61527589
Read the manual, dude
>This requires setting the --video-sync option to one of the display- modes, or it will be silently disabled.
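In other words, both halves are needed in mpv.conf (a minimal sketch using only options already mentioned in this thread):

interpolation=yes
video-sync=display-resample
tscale=oversample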
>>
File: Screenshot_20170723-171658.png (237KB, 2560x1440px)
>>61527589
>>
File: laughing whores.webm (2MB, 1280x720px)
moved to a proper git repo, and added options menu.
https://github.com/ElegantMonkey/mpv-webm

>>61527119
thanks

>>61527459
>failed_/g/_project.png
>>
>>61527624
Stop phone posting
>>
>>61527510
Probably getting frame drops because of deband. Do you have shit hardware? You should really check stats.lua
>>
File: 56806287.png (19KB, 180x197px)
>>61527627
>>
>>61527627
Now add it to:
https://github.com/mpv-player/mpv/wiki/User-Scripts
>>
>>61527636
>deband
>frame drops
Lol no. The culprit is always video-sync=display-resample and interpolation.
>>
>>61527635
stop being wrong
>>
>>61527665
Keep in mind that you need a frame time of 16 ms or lower on a 60 Hz monitor for correct interpolation.
>>
>>61527636
Actually you might be right.
I put back the
>video-sync=display-resample
And it seems to be smooth (or I grew so tired I don't even know anymore).

Perhaps the double debanding was causing frame drops.

My GPU is an R9 380X.
>>
>>61527706
How many Hz is your monitor?
>>
>>61527573
explain why opengl-hq uses spline36 for both scale and cscale
>>
>>61527706
just check stats.lua fagit
>>
>>61527719
60
It's also 4K, so the upscaler, debander, deringer and whatever else are doing quite some work.
>>
>>61527568
I'd remove the dscale line, the default (mitchell) is better. Set scale to ewa_lanczossharp. Also remove the dither-depth line.
>>
>>61527781
>scale to ewa_lanczossharp
placebo
>>
>>61527750
You need a frame time of 16 ms or lower for the interpolation to work
>>
>>61527791
I can clearly see a difference. Are you the anon i was talking to?
>>
>>61527812
>I can clearly see a difference
Placebo
>>
>>61527791
Butthurt Pentium 3
>>
File: really makes you think. placebo.png (50KB, 696x699px)
>>61527827
>>
File: 57196604.jpg (10KB, 266x239px)
>>61527824
>Placebo
>>
>>61527835
>Doesn't use his resources
Butthurt retard.
>>
>>61527808
>16 ms or lower for the interpolation to work
>>61527687
>16 ms or lower on a 60hz monitor for a correct interpolation
Source?
>>
>>61527872
1000 ms in a second / 60 Hz = 16.6 ms
mpv will deactivate the interpolation if your frame time is greater than 16.6 ms
>>
>>61527922
>mpv will deactivate the interpolation if your frametime is greater than 16.6ms
Source?
>>
>>61527945
Test it yourself, the interpolation will deactivate.
Test it with some retarded shader like nnedi3 with 128 neurons.
>>
>>61527872
The computer sends a new image to the monitor in sync with its refresh rate, so every 16.6 ms for 60 Hz. It won't work if you go above 16.6 ms in frame times since it won't be able to draw frames in time.
>>
>>61527995
>it wont be able to draw frames in time
mpv caches frames
>>
>>61527515
Sorry, meant caqbqk

https://github.com/haasn/cms/blob/master/caqbqk.mkv?raw=true
>>
>>61528005
That's not how it works.
>>
>>61527733
There is nothing to explain.
Out of the non-EWA scalers (the ones running fast enough even on old hardware), spline36 has the best sharpness for the least artifacting (lanczos is worse).
If you want fewer artifacts, you have to give up sharpness as well, and quite severely.
In the "upper EWA" regions the differences are smaller. It's diminishing returns all the way.
>>
>>61528048
It does, though. See "redrawn" frames @ stats.lua. You will notice the numbers are considerably lower.
>>
>>61527872
>>61527945
>Finally, --video-sync=display-* currently comes with one important drawback: Due to OpenGL's rather severe limitations when it comes to timing, the only way to reliably figure out when vsyncs happen is to actually draw a frame on every vsync. The consequence of this is that, even for 24 Hz video, you need to draw frames at 60 Hz even if they are the same frame over and over again - thus increasing power usage by a factor of 2x-3x in such a case.
>you need to keep your frame timings according to your monitor refresh rate for a correct interpolation.
Example:
1000ms/60hz=16.6ms
1000ms/144hz=6.94ms

https://github.com/mpv-player/mpv/wiki/Display-synchronization
>>
>>61528163
Try it yourself, do >>61527970 and in stats.lua you'll see tscale isn't running.
>>
Is it possible to make MPV check sub directories for subtitles?
>>
>>61528266
Actually found something.
--sub-file-paths=<path-list>
>>
>>61528266
Yes, you can do
sub-file-paths=dirname
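For example (a sketch — the directory names are placeholders; relative paths are looked up next to the video file, and the list separator is ':' on Unix-likes, ';' on Windows):

sub-file-paths=sub:subs:subtitles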
>>
No NGU no buy.
>>
after an hour of testing on https://github.com/haasn/interpolation-samples and twitch/dreamhackcs 720p60 (before you whinge, different people watch different stuff) i wrote this piece of shit:
profile=opengl-hq
opengl-backend=angle
scale=ewa_lanczossharp
cscale=bilinear
dscale=mitchell
tscale=gaussian
sigmoid-upscaling=no
video-sync=display-resample
interpolation
no-audio-display
no-taskbar-progress
screenshot-directory=~~/
>>
>>61528263
I said it caches frames. It really does. You can see them in stats
>>
>>61528445
gaussian...? So the most blurry piece of shit you could find.
First you create sharp images with ewa_lanczossharp and then you blur the shit out of it.
Great.
>>
>>61528448
It will automatically deactivate the interpolation if the frame time surpasses 16.6 ms; go ask in #mpv if you don't believe me.
>>
>>61528445
If you have a 60 Hz monitor then interpolation didn't work for you with that twitch stream. It disables itself if the fps matches the display Hz. Bilinear is shit. At least use the one included in opengl-hq.
>>
>>61528468
what is the point of interpolation if your intent is not to make lower-framerate content smooth? oversample, linear, even mitchell introduce unacceptable judder. keep in mind that i'm not an anime watcher, this is what looks best to me in haasn's big buck bunny samples
>>
>>61528530
i tested each option using something similar to this in input.conf:
y cycle-values cscale bilinear spline36 ewa_lanczos ewa_lanczossharp

i could tell the difference obviously in scale, dscale and tscale but i could not tell the difference at all for cscale, so i thought i might as well use the fastest scaler.
i don't care about the twitch stream interpolation. in fact i might disable interpolation altogether in those circumstances
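for reference, the same trick extends to the other scalers too (a sketch; the keys are arbitrary, pick ones that are free in your input.conf):

Y cycle-values scale spline36 ewa_lanczos ewa_lanczossharp
U cycle-values dscale mitchell catmull_rom spline36
I cycle-values tscale oversample linear mitchell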
>>
>>61528445
>twitch
recommend not testing on any twitch samples, they can't into encoding at all. Expect A-V drift and glitched frames
>>
>>61528567
>>>61528468
>what is the point of interpolation if your intent is not to make lower-framerate content smooth? oversample, linear, even mitchell introduce unacceptable judder. keep in mind that i'm not an anime watcher, this is what looks best to me in haasn's big buck bunny samples
The fuck did I just read. Interpolation in mpv is designed to remove judder caused by the video's fps not matching (or not being an even multiple of) the monitor's refresh rate. If you're playing 30 or 60 fps content on a 60 Hz display you won't get any judder. Different story for 23/24 fps content.
>>
# Video
vo=opengl
opengl-backend=dxinterop
opengl-hwdec-interop=cuda
hwdec=no
video-sync=display-resample

profile=opengl-hq
scale=ewa_lanczossoft
cscale=ewa_lanczos
deband-grain=0
deband-iterations=2
deband-range=12
deband-threshold=48

How is it?
>>
poozoor
>>
So, bicubic is the best cscale?
>>
>>61528657
You're thinking of madVR, not mpv
>>
>>61528670
Do you want hwdec on or off? I don't get it. You got scale and cscale backwards. Remove everything related to deband.
>>
>>61528761
Are you trolling or retarded?
>>
>>61528788
Neither? Are you?
>>
>>61528729
>https://diff.pics/pm5yEJ6SneiQ/1
Only if you like blur.
>>
>>61528494
The heck? Who cares? I never said anything about it deactivating itself.
Someone said it caches frames, you said no and I told you yes it does, check with stats.
>>
>>61528799
>interpolation
>Reduce stuttering caused by mismatches in the video fps and display refresh rate (also known as judder).

Retard.
>>
>>61528640
twitch streams are what i tend to watch with mpv, so i'm fine using that as my benchmark. i'll agree with you that the quality is usually garbage, and sadly the only time you can try 1080p60 is during the dota 2 international which is next month.
>>61528657
this is simply not what I experienced. it is judder whether you want to believe it or not.
>>
>>61528901
Yes but the way you said it made it seem like mpv interpolation would do nothing for a 30 Hz video on a 60 Hz display, which is not really the case
>>
>>61528990
>made it seem like mpv interpolation would do nothing for a 30 Hz video on a 60 Hz display
I meant that it is unnecessary in such a scenario. I still don't know why it only disables itself if the fps matches the display refresh rate.
>>
>>61529108
Because mpv's interpolation mechanism is a generalization of madVR's... for madVR's smoothmotion it makes no sense to have it on for a clean multiple, sure, but not all tscales are no-ops for integer scaling
>>
>>61529122
>Because mpv's interpolation mechanism is a generalization of madVR's... for madVR's smoothmotion it makes no sense to have it on for a clean multiple, sure, but not all tscales are no-ops for integer scaling
You mean that this mechanism makes sense only for tscale=oversample? But why would you use any interpolation if there is no 3:2 pulldown or any other irregular frame pattern detected? Or did I misunderstand you?
>tfw probably called haasn a retard
Sorry.
>>
>>61529122
Nah, madVR is not an audio enhancer, it only affects video.
Don't compare them anymore, this will hurt everyone.
>>
>>61529227
>You mean that this mechanism makes sense only for tscale oversample?
Exactly. Well, even then it doesn't _really_ make sense. I mean, what do you hope to gain by disabling interpolation? tscale=oversample is not much heavier than the alternative (blitting the frame directly). It would be a small optimization at best. You still need to render within the 16 ms time window due to OpenGL limitations.

I guess what _could_ be done though is allowing a certain amount of mistimed frames for repeats. That would allow you to use heavier scalers (longer than 16.6 ms) without causing DS to shit itself. Shouldn't be that difficult to implement either. But my stance is that if you need more than 16.6 ms to render you're doing something wrong anyway (what would you do for actual 60 fps clips?)

>But why would you use any interpolation if there is no 3/2 pulldown or any other irregular frame pattern detected?
60 fps still looks smoother than 30 fps. 30 fps gives me a headache..
>>
>>61529270
>60 fps still looks smoother than 30 fps. 30 fps gives me a headache..
Didn't you recently say that you'd switched back to oversample?
>>
>>61527568
>>61528670
>turning hwdec off when decoding is identical
>>
Is there any point in using ontop to get some kind of FSE mode? I use Win7 and FSE in madVR fixed most of the OS's annoyances.
>>
>>61529387
I did, and 30 fps still gives me a headache.. But at least it's better than the ringing effects you get from mitchell etc.

I just don't really enjoy media at all anymore
>>
>>61529714
Can you look at >>61529496 please. Also. What TV do you have?
>>
>>61529768
all TVs are shit

>>61529496
I don't think mpv does FSE at all
>>
>>61529787
>all TVs are shit
So you're not haasn?
>>
>>61529787
It does, but it's picky about when it gets enabled. I think it only works with opengl-backend=win and opengl-es=no (with ontop and fullscreen enabled of course)
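A sketch of that combination in mpv.conf, going purely off this report (not off the manual):

opengl-backend=win
opengl-es=no
ontop
fullscreen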
>>
The WEBM script rules. Should be linked to the official MPV repo.
>>
>>61529843
I cant tell which of you fagits is haasn so i cant take him serious. Probably you.
>I think it only works with opengl-backend=win and opengl-es=no (with ontop and fullscreen enabled of course)
Doesn't work with dx-interop?
>>
>>61529843
Are you sure? I asked rossy and he said mpv doesn't do FSE

but maybe windows does some auto-guessing magic or something
>>
>>61529843
>>61529866
>so i cant take him serious
Meant to say "can".
>>
>>61529881
>Makes the player window stay on top of other windows.
>On Windows, if combined with fullscreen mode, this causes mpv to be treated as exclusive fullscreen window that bypasses the Desktop Window Manager.
Who do i trust?
>>
Guys... I have a problem and I came here to see if you could give me a hand... Subtitles aren't showing properly. They look OK in editing programs and in other players, but in mpv there are a couple of lines that are shown in 3 rows, even though they are shown in just 2 rows everywhere else. They are somewhat long lines, but it is only in mpv that they get split. Is there something I can do to make them show properly? Is this a known thing?
>>
>>61529916
Post screenshot.
>>
>>61529912
This anon is right >>61529843
>>
>>61529866
I can't seem to get it to work with dxinterop, no, but it does work with win, at least on Win7. Dunno about newer versions of Windows.
>>
>>61529866
>I cant tell which of you fagits is haasn
haasn is taking a break from mpv
>t.haasn
>>
>>61529948
>>61529947
ontop by itself is definitely working for me with dxinterop, but I'm not sure if it makes any difference when I go fullscreen.
>>
>>61529974
this anon is lying t.hanna
>>
>>61529986
01:44 <rossy> with WGL, the graphics driver uses arbitrary heuristics to decide whether to enter FSE
01:44 <rossy> and --ontop sometimes makes it more likely to decide to
01:45 <rossy> angle and dxinterop will never enter FSE on their own
01:45 <rossy> we'd have to explicitly call SetFullscreenState() on their swapchains
>>
>>61529999
nice quads and i'm certainly still working on mpv

t. nand chan ( ≖‿≖)
>>
>>61530027
Can you explain for a brainlet, please? I just tried fullscreen with ontop and I couldn't switch between windows with alt+tab like I usually do, so I guess it works? I'm on AMD by the way.
>>
>>61529974
>>61529999
>>61530038
You guys, stop pretending to be me! It's creepy!
>>
>>61530078
I guess so?

If it works it works
>>
>>61530144
I'm not sure. Maybe it just literally stays on top without entering FSE mode? Can I check for FSE somewhere in the log?
>>
>>61530158
What is enabling FSE mode supposed to do? Is there some benefit? I don't get it.
>>
File: mpv.jpg (72KB, 683x368px)
>>61529946
I'm really sorry for being a spic, but here you are...
>>
>>61530190
I honestly don't know, but I always used FSE with madVR since apparently Win7 has shit desktop composition.
>>
File: MPC-HC.jpg (165KB, 1366x726px)
>>61530278
>>
>>61530287
Well if you don't know what difference it would make, how would you know what difference to test for?
>>
>>61530278
>>61530310
Try making your mpv window as wide as your MPC-HC window. Does that help?
>>
>>61530278
>>61530310
are you using a smaller window size for mpv?
>>
>>61530310
>>61530278
It seems like the fonts aren't loading properly; the problem is on your end though.
>>
>>61530369
wow, tell me that hideous karaoke isn't animated
>>
>>61530332
>>61530334
>>61530369
No, I tested the width and size of both windows and the lines stay the same. They also look fine in subtitle editors...
It could be the fonts, idk. Should I check that in my player or in the sub files/muxing process?
>>
File: 1460093428567.jpg (28KB, 250x250px)
>>61530412
It is
>>
>>61530441
then why isn't the mpv screenshot fullscreen like the MPC-HC one? r u avin a laff?
>>
>>61530484
No, man, I was just in a hurry to make the screenshots, if you need it, give me a min and I'll give you a fullscreen mpv shot.
>>
>>61530441
run mpv from the command line or post a log and it will tell you if fonts aren't loading
>>
File: mpv-shot0001.jpg (260KB, 1920x1080px)
>>61530524
There...
>>
>>61530524
sure, include your full desktop including the taskbar.
>>
>>61530562
see
>>61530567
>>
Are they serious?
https://blog.xamarin.com/installing-visual-studio-2017-made-easy/
>>
File: ontop vs default.jpg (223KB, 1507x338px)
>>61530317
ontop on the left vs default. I noticed it has zero redraws compared to default. Is the VSync ratio affected by them? Is that good or bad?
>>
>>61530581
Wrong thread?
>>
>>61530622
Bump. Im curious!
>>
- no safe fullscreen exclusive mode, 33% less in rendering time with madVR
- no fast cnn upscalers, yes you should care
- no safe smoothmotion, not working with mesa and angle for me
- no way to detect black bars in realtime, yes it's useful
- no fancy features for projector owners, yes projectors are great
>>
>>61530541
I ran it from the command line; where should it be telling me about the fonts? I don't see anything about it. Only the video, audio, and sub track info showed up...
>>
>>61530622
Then it seems like it's working? No redraws for 60 fps is a good sign
>>
>>61531161
Posting about it on 4chan isn't going to get any of those features implemented. If you care, make an issue. If you don't, stop whining.
>>
>>61531224
What about the VSync ratio graph? Is it a case of the fuller it is, the better?
>>
>>61531247
The vsync ratio graph is meaningless lol. I don't know why it's there
>>
>#mpv-devel
>hanna+atomnuker vs wm4
>>
>>61531382
w-what's happening?
>>
>>61531382
What's going on?
Post chat logs
>>
>>61531382
>hanna+atomnuker
weren't they butting heads a few weeks ago?
>>
>>61531535
No, not really, just devs things.
>>
Insults are part of devs' jobs today!
>>
>>61531572
I just want everyone to get along!
>>
>>61531579
I just want wm4 to move his ass and implement ravu!
>>
I just want wm4 to kick both and stop them from ruining mpv even more.
>>
>>61531609
>haasn
>ruining mpv
FUCK YOU!
>>
>>61531609
How are they ruining mpv?
>>
Is this chromish extension working for you?
https://chrome.google.com/webstore/detail/watch-with-mpv/gbgfakmgjoejbcffelendicfedkegllf
>>
>>61531382
I think you mean
hanna+wm4 vs atomnuker
>>
>>61531382
Do not bully wm4 you bakas. Well, maybe a little bit... so we can finally get RAVU and FSRCNN!
>>
post logs or gtfo
>>
>>61531715
wm4 is paid to maintain mpv, it's fine to bully him. but the people who aren't might get scared off.
>>
What's haasn's comment style?
Sometimes he begins a sentence with a capital letter, but not always. Sometimes there are dots everywhere to finish a sentence, but not always. Is that normal?
>>
>>61531783
It's his impersonator. I'm confused today too.
>>
>>61531783
I mean I've mindlessly posted in like 4 different styles in this very thread
>>
>>61531732
some gems

03:11 <wm4> atomnuker: that lock is not unnecessary, are you out of your mind?
03:11 <wm4> I suspect you don't know how locking and multithreading work
03:12 <atomnuker> I do, but I don't see why its there, could you explain to me what thread conflict this lock resolves?
03:12 <wm4> atomnuker: the code even fucking tells you explicitly what fields it protects
...
03:27 <wm4> man how idiotic
03:27 <wm4> #I guess I'll just ignore atomnuker and his bullshit
03:27 <atomnuker> what have I done now?
03:28 <wm4> <atomnuker> just disable threadsafe callbacks and remove the lock if you want something even simpler
03:28 <wm4> you were the one who claimed that enabling threadsafe callbacks made a significant difference
03:28 <wm4> or are you claiming that my dr_lock reduces performance significantly?
...
03:47 <hanna> we have bigger things to worry about
03:47 <hanna> so can we merge the code already or does atomnuker want to bikeshed some more
03:47 <wm4> I've been claiming from the start that it doesn't matter for performance
...
03:35 <hanna> also it seems like my earlier description was right and atomnuker's comment was wrong
03:35 <hanna> based on that documentation
03:36 <atomnuker> I definitely saw corruption if I didn't lock anywhere and didn't enable threadsafe callbacks
03:36 <atomnuker> so I don't think there's a lock
03:37 <hanna> could test it
03:37 * hanna does
03:42 <hanna> testing it
03:42 <hanna> commented out the dr_lock code and added an assert(p->dr_is_locked == false); p->dr_is_locked = true; in its stead
03:42 <hanna> doesn't crash
03:42 <hanna> so it seems like the documentation is right
>>
what kind of config should I use for my x230? Should I just use the one in the OP?
>>
>>61531835
Might be a little too much for an iGPU, try:
profile=opengl-hq

If it lags:
profile=opengl-hq
deband=no
>>
File: 29086917.jpg (15KB, 351x351px)
>>61531823
>>
Can we has error diffusion dithering pls
>>
>>61531937
Prove its usefulness.
>>
>>61531823
wm4 is so salty
>>
>>61531823
poor atomnuker
>>
>>61531959
.....
>>
Should i use opengl-backend=win or dxinterop? Win7, AMD GPU. Angle doesn't work.
>>
>>61532004
dxinterop!
>>
>>61531937
bjin really wants to try hard to implement it but I strongly suspect he will get nowhere; and I also don't plan on exposing the dithering kernel to user shaders unless he can actually prove it A) is as fast as fruit and B) improves the quality noticeably

Error diffusion is literally the definition of placebo. Unless you're dithering to 1-2 bit depth, it's completely impossible to see with the naked eye
>>
>>61532042
Sir, answer >>61532004 very much please.
>>61532032
Thank you too.
>>
>madshi
>Reclock works by resampling audio, so that's not something madVR can do. However, there will soon be a new feature that may make Reclock like algorithms not needed, anymore.
Hope he will find a way, so we can finally fix the display-resample/OpenGL limitation mess by stealing his code again!
>>
>>61532090
But it's closed source
>>
File: 1495139693990.jpg (309KB, 1745x1080px)
>>61532090
>stealing his code again!
>stealing encrypted code
Those mpv devs really are hackers.
>>
>>61532108
There are some nasty tools available to do this kind of thing. ^^' However it's sometimes a fucking pain!
>>
>>61532128
But he protected his code! It's impossible!
>>
>>61532138
See >>61479064
>>
>>61532128
Wouldn't madshi know, since he can look at mpv's source code? I doubt it's true since he hasn't mentioned anything.
I'm sure one of his fanboys would have bitched about it to him, even though they wouldn't really know unless they saw his source code.
>>
File: 39653309.jpg (76KB, 418x462px)
hanna, answer me >>61532077
>>
>wm4 - hanna: get a better editor
Mouahaha, just reminded me haasn is a n/vimfag.
>>
https://github.com/mpv-player/mpv/commit/64d56114ed9258efe2e864315d7130bb58a03d52
>>
>>61532195
Its happening!
>>
>>61532195
I'm a retarded winfag, what does this mean?
Better performance?
>>
>>61532213
it means memeVR is finished.
>>
>>61532195
>add direct rendering support
So no more backends?
>>
>>61532224
Hasn't he already implemented direct rendering?
>>
>>61532248
madvr uses directx so I assume it has always been direct.
>>
>>61532285
:D
>>
>>61532004
Use whatever works better for you? Check stats.lua if in doubt
>>
>>61532388
Are they equal in features?
>>
>>61532213
Read the commit message and manpage addition?
>>
>>61532418
Yes.
>>
>>61532418
Should be pretty much identical, since both use the native OpenGL backend.

The difference is just whether you manage the window and swapchain via GL or via DirectX.
>>
>>61532470
>GL or via DirectX
Which is preferable in your opinion?
>>
>>61532487
I don't know?? I'm not a windows user, I can just guess in the dark here

but my blind stab in the dark would be to guess that directX is better because microsoft's GL windowing APIs apparently suck(?)
>>
File: subtitles.png (2MB, 1302x683px)
Why are my subtitles fucked?
>>
>>61532658
Post log?
>>
File: subtitles2.png (759KB, 805x619px)
>>61532690
https://pastebin.com/bs7VRKM6
>>
>>61531823
i get the feeling wm4 has some kind of autism
>>
Time to vote!
https://github.com/mpv-player/mpv/pull/4595#issuecomment-317310181
>>
>>61532802
Does it happen with --no-config? What about with one of the different backends? Does it happen with a different --vo? (e.g. direct3d)
>>
>>61532881
Hmm actually I think it's a problem with the font rendering itself, not the texture upload. So scratch all of that.

Does it happen with all subtitles or just specific ones? Can you upload it so I can try reproducing on my machine?

It might be a font/libass bug on your machine
>>
>>61532881
>>61532906
desu with --no-config it doesn't happen
>>
>>61532926
Well there you go, now figure out which option triggers it.

Hint: Start with --hwdec, then with the --sub-* options
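Roughly like this, re-adding options from your config one at a time on top of --no-config until the bug reappears (the file name is a placeholder):

mpv --no-config video.mkv
mpv --no-config --hwdec=auto video.mkv
mpv --no-config --hwdec=auto --opengl-pbo=yes video.mkv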
>>
>>61532933
It seems like
opengl-pbo=yes
causes it
>>
>>61532840
nsa meme
>>
>>61532991
???
>>
>>61533012
the nsa are using placebo memes to sneak buttnets into mpv
>>
>>61532987
Fascinating. (So those are actually separate textures, I assume)

Can you also test v0.25 to see if that one doesn't have the bug?
>>
>>61533067
Just tested v0.25 from https://mpv.srsfckn.biz/ (2017-04-23) with opengl-pbo and the bug doesn't happen
>>
       --vd-lavc-dr=<yes|no>
Enable direct rendering (default: no). If this is set to yes, the video will be decoded directly to GPU video memory (or staging buffers).
This can speed up video upload, and may help with large resolutions or slow hardware. This works only with the following VOs:

· opengl: requires at least OpenGL 4.4.

(In particular, this can't be made work with opengl-cb.)

Using video filters of any kind that write to the image data (or output newly allocated frames) will silently disable the DR code path.

There are some corner cases that will result in undefined behavior (crashes and other strange behavior) if this option is enabled. These are
pending towards being fixed properly at a later point.

Should I use it? what are the downsides?
>>
>>61533229
What exactly counts as video filters there? Like that opengl-shader thing, or built-in stuff like deband?
>>
Bjin is an animefag!
https://github.com/bjin/mpv-prescalers/commit/416de62a0a660d1337f98b86d79fdcbfc2e7d445
>>
>>61533290
I'm guessing it's only the vf option that you can use to do stuff like crop/rotate/flip the video?
>>
>>61533151
Sounds like glBufferSubData is broken on your driver
>>
>>61533323
based AMD
>>
>>61533299
He knows what's important.
>>
>>61533229
Error parsing option vd-lavc-dr (option not found)


epic
>>
File: 1474559648136.jpg (394KB, 1024x768px)
>>61533299
>animes
>>
>>61533229
Sure, enable it. There should be no downsides unless it doesn't work.

Best benchmark it though. The comment only refers to actual --vf's that output a new image.
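To try it and check whether the DR path is actually taken, something like this should do (a sketch; the log lines are the ones quoted elsewhere in this thread):

mpv -v --vd-lavc-dr=yes video.mkv

Then look for "DR enabled" (or "Allocating new DR image" / "DR failed") in the verbose output.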
>>
>>61533346
it only got added in the last 24 hours; you need to be on the latest git version
>>
>>61533346
You know that option hasn't made it into any build yet, right?
Or did you just compile master?
>>
>>61533363
Does it even log to the console telling you it disabled itself in case you use a filter or something?
>>
>>61533399
See >>61533376 and >>61533375 retard
>>
>>61533399
Apparently not, no
>>
>>61533447
Nvm, it does. Check for: `DR enabled:` in the verbose logs
>>
Now that the dust has settled, why can't mpv downmix audio channels on windows
>>
>>61533507
What do you mean, can't?

There are at least 3 or so ways that it can downmix audio channels, including on windows
>>
>>61533507
audio-channels=stereo
Read the manual next time retard.
>>
>>61533522
Thanks for providing free tech support. The “why can't mpv do <X>” thing really works well at getting insecure mpvtards to tell you how to do something :^)
>>
>>61533548
You can ask in a non-retard way and we'll also help you.
>>
>>61533519
>>61533522
>audio-channels=mono in mpv.conf

linux:
>check stats.lua
>Channels: 1

windows:
>check stats.lua
>Channels: 2

mpv is a MESS
>>
>>61533564
stats.lua doesn't display the downmix, it will display the audio as it is, weak shit tard.
>>
>>61530562
This is giving me a headache. I tried the "--no-embeddedfonts" option and it fixed it for this specific set of files, but other files would randomly load or not load their fonts... I just can't understand why and how this happens.
>>
>>61533554
False. Try asking for a summary of changes/features during the past year without a "Well I suppose mpv is dead nobody does shit" as an incentive. Bait is known to work.
>>
>>61533607
I've always helped when asked in a non-retarded way.
>>
>>61533597
Why would it show Channels: 1 in linux then retard?

with audio-channels=mono for a 6 channel movie
 (+) Video --vid=1 (*) (h264 1920x1080 23.976fps)
(+) Audio --aid=1 --alang=eng (*) (eac3 6ch 48000Hz)
Subs --sid=1 --slang=eng (subrip)



Sad!
>>
Is direct rendering linux only?
>>
>>61533646
>Why would it show Channels: 1 in linux then retard?
It doesn't

Also you're checking the wrong log line but you're a retard anyway so it's a lost cause
>>
>>61533646
look where it says AO: mono 1ch ya fuckin dingus
>>
File: 1473617061922.png (548KB, 667x433px) Image search: [Google]
1473617061922.png
548KB, 667x433px
>>61533646
>>
>>61533672
It literally does, anyone can test it
>>
>>61533646
anon...
>>
>>61533665
i've tried it on windows and while the logs report that it's enabled, performance appears to be no different
>>
Is there any way to force the subtitle track that has all the text instead of just sign translations? I don't ever watch dubs.
>>
File: cant downmix.png (59KB, 979x680px)
>>61533691
damn... would you look at that STEREO 2ch float
>>
>>61533755
i really appreciate that all the insults are visible in this screencap, anon
>>
>>61533750
https://mpv.io/manual/stable/#options
alang=jpn
slang=eng
>>
>old ewa_lanczos
>1680 ms
>compute shaders ewa_lanczos
>1600 ms
hmmm...
>>
>>61533784
that won't work if both sub tracks are tagged as english and the signs track is default
hell, in some dual audio rips the english subs are tagged as japanese and only the signs track is tagged as english, because i guess people who listen to dubs have fucking brain damage
best to just avoid that shit entirely
>>
>>61533826
Or he could use mkvtoolnix to delete/disable the english track.
>>
>>61533755
now do it again with --audio-channels=mono
>>
>>61533856
you can see the audio channel parameter right after the first "Exiting"
>>
>>61533841
yeah but then I have to have dupes on my harddrive since I'm seeding all my library.
>>
>>61533878
try --no-config --audio-channels=mono --volume-max=1000 --volume=1000
>>
File: downmix2.png (54KB, 989x724px)
>>61533893
>>
Good night bump
>>
What's a good image to test debanding?
>>
>>61534626
Any horrible subs rip.
>>
How well does motion-based interpolation work?
Is it better than SVP?

Is this how they make those high quality 60 fps animu webms? (Anything I was getting with SVP before was corrupted.)
>>
>>61530287
win7 always uses the same desktop composition rate for primary and secondary monitors, even if the gpu output refresh rate differs. which means playback on secondary monitors will often not be smooth. using fse disables desktop composition, so that's why it's extra useful in win7. win8+ has a much better desktop composition implementation.
>>
>>61535701
>Is it better than SVP?
No
>Is this how they make those high quality 60 fps
It will interpolate the frame rate of the video to match the refresh rate of your screen.
>>
File: memeshaders.png (46KB, 507x643px)
Are they meme?
>>
>>61535845
Use nnedi3 only to upscale low quality sources or dvd/vhs/ld.
Use krigbilateral in BDs; combine with ssim and ewa_lanczos.
For downscaling use ssimdownscaler+mitchell, see https://diff.pics/hfXh77QRc5st/1
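Loading those hooks would look roughly like this in mpv.conf (a sketch — the shader path is an assumption, point it at wherever you saved the .glsl file, and check the manual for the exact list syntax if you want to load several at once):

profile=opengl-hq
cscale=ewa_lanczos
dscale=mitchell
opengl-shaders="~~/shaders/KrigBilateral.glsl"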
>>
File: 2017-07-24_02-22-15.png (26KB, 739x295px)
Seems DR isn't working for me, not sure why.
[ffmpeg/video] h264: Reinit context to 1920x1088, pix_fmt: yuv420p
[vd] DR parameter change to 1920x1090 yuv420p align=32
[vd] Allocating new DR image...
[vd] ...failed..
[vd] DR failed - disabling.
[vd] Decoder format: 1920x1080 yuv420p auto/auto/auto/auto CL=mpeg2/4/h264
[vd] Using container aspect ratio.
[vf] Video filter chain:
[vf] [in] 1920x1080 yuv420p bt.709/bt.709/bt.1886/limited SP=1.000000 CL=mpeg2/4/h264
[vf] [out] 1920x1080 yuv420p bt.709/bt.709/bt.1886/limited SP=1.000000 CL=mpeg2/4/h264


This is with
mpv -v --no-config --vd-lavc-dr=yes
, so nothing in my config doing it.
GPU is RX 480, on Windows 7.

It might be because of the
requires at least OpenGL 4.4
part.
My card obviously supports OpenGL 4.4 (see screenshot), but it looks like mpv might not be using it?

With angle it's using "OpenGL ES 3.0", on win and dxinterop, it's using "desktop OpenGL 3.0"

angle
[vo/opengl] Initializing OpenGL backend 'angle'
[vo/opengl] Using Direct3D 11 feature level 11_0
[vo/opengl] EGL_VERSION=1.4 (ANGLE 2.1.0.7c5f52682ae2)
[vo/opengl] EGL_VENDOR=Google Inc. (adapter LUID: 000000000000c6e3)
[vo/opengl] EGL_CLIENT_APIS=OpenGL_ES
[vo/opengl] Trying to create GLES 3.x context.
[vo/opengl] Using DXGI 1.2+
[vo/opengl] Using flip-model presentation
[vo/opengl] GL_VERSION='OpenGL ES 3.0 (ANGLE 2.1.0.7c5f52682ae2)'


win
[vo/opengl] Initializing OpenGL backend 'win'
[vo/opengl] GL_VERSION='3.0.13476 Compatibility Profile Context 22.19.171.1024'
[vo/opengl] Detected desktop OpenGL 3.0.


dxinterop
[vo/opengl] Initializing OpenGL backend 'dxinterop'
[vo/opengl] GL_VERSION='3.0.13476 Compatibility Profile Context 22.19.171.1024'
[vo/opengl] Detected desktop OpenGL 3.0.
>>
>>61535926
>Seems DR isn't working for me, not sure why.
Don't know if you're trolling or not but see >>61533375 and >>61533376
>>
>>61535947
I'm using git master I compiled myself, it's obviously in the code since it's outputting errors to the log.
>>
>>61527010
I think it frees up resources and increases the likelihood that your frames will come in on time. Kind of reasonable, since I'm not sure it's even possible to tell the difference with chroma (feel free to post screenshots showing otherwise).

That said, OP should be using the pastebin another anon made a couple threads ago.
https://pastebin.com/0v2aW6Rc
>>
>>61535926
Create an issue in github?
>>
>>61536113
all the mpv devs read these threads anyways
>>
>>61533299
Am I understanding correctly? RAVU is a neural-net-based scaler trained on anime (not anime-style artwork as in waifu2x)?

I wonder what his training data set was (inb4 all my anime looks like Dragon Ball Super with it).
>>
>>61536121
Only haasn and he doesn't care about windows
>>
>>61533755
>>61534101
I heard that if there's a parsing error in the configuration file then mpv will just ignore it and run with defaults.
>>
>>61536220
RAVU is far from stable and needs a lot of work to compete with nnedi3 or NGU. I check every single hour from my RSS reader (I am not a retard) whether haasn is working on FSRCNN. It's still as slow as waifu2x, but I am sure he will finally find a way to make it perfect for realtime.
>>
>>61536340
Is that you bjin?
>>
File: 143981.jpg (151KB, 750x600px)
>https://github.com/mpv-player/mpv/pull/4616
>So this is pretty much ready for merging.
>Anybody have objections before we do so?
>>
File: 141603.png (362KB, 1200x855px)
>>61536396
I like this picture too! And I vote for the single/all-in-one file for user shaders, even if wm4 thinks it's retarded or ridiculous.
>>
File: 1477660896195.png (58KB, 512x512px)
>>61536396
>>
>>61536432
I don't have such massive hands and a stupid face. ;)
>>
>>61535832
>It will interpolate the frame rate of the video to match the refresh rate of your screen.
How does oversample avoid blurring?

Also why doesn't interpolation work right with, for example, a video playing at 50 fps and a screen refresh of 60 Hz? In my testing interpolation only really works with videos below half the refresh rate (30 fps or lower on a 60 Hz monitor).
>>
>>61536340
Well my question had more to do with the fact that waifu2x isn't actually trained on anime still-frames; instead it's trained on high quality anime-style art (e.g. fan art, doujin stuff, promo art, etc.).

Anime (and animation in general) has all sorts of techniques that you never see in art. To list a few
>Squash and stretch (also called smear) - an animation technique where a character is distorted in order to make the motion "read" better and convey information like elasticity and weight. See: https://www.youtube.com/watch?v=haa7n3UGyDc
>Multiples/afterimages - An animation technique where multiples of some aspect of a character are drawn at the same time in a greatly distorted way in order to convey very high speed motion. See: https://www.youtube.com/watch?v=X2X8Me3mInc
>Staging - A big animation and film technique/concept that deals with guiding the viewer's eye in order to create certain effects or to make it easier to "read" a certain concept/situation. It's a big technique so I'm really only talking about a small part of it; particularly, in animation you will often see poorly detailed background characters and objects (drawn that way so as not to draw your eye). This is sometimes called creating a frame within a frame (in film: mise-en-scène). This is not a very good explanation, but the point is that you wouldn't typically see poorly drawn, low detail characters (e.g. mitten hands, dot eyes, no mouth, etc.) in high quality anime art, but you would in anime itself.

I'm wondering if Ravu is trained on anime art or if it's trained on actual anime. I believe the latter would give better performance on anime.

If I had more time and hard drive space then maybe I would set out on training such a neural net myself but currently I cannot justify it. So I'd very much be interested if someone else is giving this a try.
>>
>>61536565
>How does oversample avoid blurring?
It only really touches the "made up" frames, it doesn't blur or alter the original frames. Some other temporal scalers do I think.
I don't know the answer to the other question, probably because of math. Not the dude you're replying to by the way.
>>
Is a cross-platform vo_vulkan renderer planned soon?
>>
File: 00165021.jpg (57KB, 419x500px)
>>61533299
I love bjin even more now!
>>
>>61536582
most of these techniques seem to apply to motion, which doesn't really affect a single frame image upscaler much.

personally, while i think that ravu is an interesting concept, i think fsrcnn has a higher chance of competing with ngu in terms of quality. maybe ravu will be faster, but i think fsrcnn should end up looking better.

all that said, i don't think training on anime vs filmed content will make such a dramatic difference. waifu2x doesn't look dramatically different with photo vs anime style art models, either. rather subtle differences, imho, except maybe when testing with very extreme images.

these neural networks are not big enough to actually learn complex things like anime "objects" or stuff. they just split the image into edges and non-edges and try to restore the edges as well as possible while leaving non-edge areas mostly untouched. because of that the type of content trained with imho doesn't make such a big difference. either the neural networks learns how to restore edges well or not.
>>
>>61535763
Thanks for the info, anon.
>>
>>61536582
Me again,

I forgot to mention. These techniques typically only apply to 2D animation. CG based 3D animation (even with cel shading and other techniques to approximate an anime style) often looks weird, too rigid, or too busy (difficult to "read") because it lacks these techniques. Here is a somewhat recent 3D CG anime series.
https://www.youtube.com/watch?v=nF48tDDgJeA

There are a few groups out there really pushing the forefront of 3D rendering for 2D anime style but it's still not as far along as one would like it to be. Arc System Works DOES use tons of 2D animation techniques but the way they do it requires a lot of work.
>animations are hand-edited frame by frame
>character models themselves are actually distorted during movement
>there are separate (invisible) models used for computing lighting (also hand-edited)
>each character has their own light source (not a global light source)
>framerate is deliberately very low to mimic anime (actions are conveyed through animation techniques)
>takes many months just to animate a character
>currently (probably) not suitable for creating anime series
>etc...
More info on Arc System Works' technique:
https://www.youtube.com/watch?v=yhGjCzxJV3E
Their current project that you're probably aware of:
https://www.youtube.com/watch?v=xdJcJ_zjB5E

Someone going through the trouble of training a network on anime should therefore only stick to 2D anime (though probably okay to use anime that combines 2D characters with CG backgrounds like Shingeki No Kyojin).

>>61536798
>single frame
I beg to differ: http://animationsmears.tumblr.com/

I don't think there is likely to be much difference between (irl) photo and (irl) film, except maybe for scenes that use weird, extreme color temperatures to achieve an effect.

My concern is more that semi-transparent multiples and purposely noisy smears may be distorted by being cleaned away. Also that purposely blurry background shit may be sharpened in a subtly distracting way.
>>
>>61536565
>Also why doesn't interpolation work right with, for example, a video playing at 50 fps and a screen refresh of 60 Hz? In my testing interpolation only really works with videos below half the refresh rate (30 fps or lower on a 60 Hz monitor).
By default it works with everything that doesn't match your display rate. I think it works even with a 59 fps video on a 60 Hz display.
>>
>>61536889
I just enabled interpolation and display-resample. No other config. There's still judder on a 50 fps video.
>>
File: anime lanczos ngu - Copy.png (2MB, 898x867px)
>>61536582
>waifu2x isn't actually trained on anime still-frames; instead it's trained on high quality anime-style art
Same shit with NGU, I think. That's why it sometimes makes amazing pictures of anime art and not nearly as good upscales of REAL anime. Just look at this. You can never achieve something like that with real anime.
>>
>>61536909
Why do you say this to me?
>>
>>61536942
Cuz you said it works with everything that doesn't match the display rate but I can't see it working. Unless this is just the way it is
>>
>>61536927
Actually never mind this, I think I used a JPEG as the source image in this test, and ewa_lanczos scalers suck at upscaling JPGs (lossy). NGU doesn't have such problems with lossy formats though.
>>
>>61536951
You can check with stats.lua. If it works there is going to be a line "interpolation". If not then not.
>>
>>61536840
>single frame
what i meant to say was that the *upscaling algorithms* (like ewa something, or ravu or fsrcnn or ngu or ...) always only look at one frame at a time. they don't take past or future frames into account. so how something is "animated" doesn't matter to such upscaling algorithms. motion effects don't matter because the algos always only look at one frame at a time.

>>61536927
i think the difference is not due to the training images. waifu2x and ngu are very good at "reverting" a downscale. same should be true with fsrcnn. so these algos work great if you take a high quality image, downscale it, and then use these algos to upscale them again.

however, with real world material the situation is somewhat different. if real world material were downscaled and then losslessly compressed you'd get the same fantastic upscaling results with waifu2x etc. but real world material is first downscaled and then afterwards compressed in a lossy way, which means lots of compression artifacts are added, some of them visible to the naked eye, some not. even the "invisible" compression artifacts matter because they make life more difficult for waifu2x etc. i believe this is the main reason why waifu2x etc are not as impressive for real world video material as they are for artificial png -> downscale -> upscale tests.
>>
File: 56249369.png (200KB, 387x466px)
>So this is pretty much ready for merging. Anybody have objections before we do so?
>>
>>61536927
fugg how old is she again?
would make many babies (if under 18)
>>
File: daffy point.jpg (126KB, 720x540px)
>>61537020
>what i meant to say was that the *upscaling algorithms* (like ewa something, or ravu or fsrcnn or ngu or ...) always only look at one frame at a time. they don't take past or future frames into account. so how something is "animated" doesn't matter to such upscaling algorithms.
I was never implying otherwise. There are some interesting image segmentation deep nets that are trained to predict motion from an image but that has nothing to do with this either.
>motion effects don't matter because the algos always only look at one frame at a time.
I'm not talking about motion itself or looking at several frames. I'm talking about individual frames that animators purposely drew "badly" (with shit that resembles artifacts) in order to convey a certain effect in the final animation, like pic related.

>even the "invisible" compression artifacts matter because they make life more difficult for waifu2x
This. Here's a neat demonstration of how "invisible" alterations can fool a neural net taken to the extreme.
>>
>>61537245
>This. Here's a neat demonstration of how "invisible" alterations can fool a neural net taken to the extreme.
Forgot to post link.
http://karpathy.github.io/2015/03/30/breaking-convnets/
>>
>>61537245
>http://karpathy.github.io/2015/03/30/breaking-convnets/
yes, but the links you posted are all about image classification. these neural networks try to learn everything about specific objects. e.g. which attributes make a cat a cat? in other words, these neural networks are very complex and try to be very clever. image upscaling neural networks don't learn things like that. they just look at edges, nothing else, without even trying to understand which kind of object an edge might describe. so the chance of being "hoodwinked" is much higher in image classification than in image upscaling.

the situation would be different if we were talking about trying to guess texture detail while upscaling. there are some neural networks which try to do that (with partially great results and partially dramatically bad results; summed up: unusable imho).

"simple" edge adaptive neural networks like waifu2x or fsrcnn can't be fooled too much. the worst that can happen if that they reproduce a certain edge less optimally than you'd expect, or that they "connect" or "disconnect" some edges which a human would not (dis)connect when intelligently interpreting the low res image. but such errors should be much much much less extreme than the type of errors image classification neural networks can sometimes make...
>>
File: 25349553.png (77KB, 500x500px)
>>61537474
>which attributes make a cat a cat
It's fluffy, cute and I want to hug it!
>>
>>61537544
haha, good one! i somehow doubt that's the way a neural network "thinks", though... ;)
>>
>>61536909
It's entirely possible the judder is actually in the source video (ie: duplicate frames), interpolation won't fix this.

You can check this by finding an area of video with noticeable judder and frame stepping through it, to see if any of the frames are duplicated.
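For reference, the default frame-stepping bindings (assuming you haven't rebound them in input.conf):

# advance one frame while paused
. frame-step
# step back one frame
, frame-back-step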
>>
>>61537544
don't forget the kitty ears. Little kitty ears make anything cuter.
>>
>>61538199
and the big eyes. thinking of shrek 2 here...
>>
>>61538465
begging...
>>
File: 49602154.jpg (49KB, 438x440px)
>>61538465
>>
>>61533702
And I just fucking did when making that post, it says “Channels: 2” but reports “AO: mono 1ch”. Just fucking give it a rest
>>
>>61533755
Read the documentation for --audio-channels. --audio-channels makes mpv just tell windows that it wants to output as 1ch mono. The fact that it doesn't means that windows decided it doesn't support 1ch output.

It works on Linux because Linux isn't retarded as fuck and will do what the program tells it to. Try --af=format=channels=1
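So, as a sketch of both approaches in mpv.conf (the second forces the downmix inside mpv's own filter chain instead of asking the OS for a mono/stereo device):

# let the AO handle it
audio-channels=stereo
# or force it in the filter chain, regardless of what the AO claims to support
af=format=channels=1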
>>
>>61535701
What motion based interpolation?
>>
>>61535701
https://github.com/mpv-player/mpv/wiki/Display-synchronization

tl;dr: SVP actually tries to create new frames to increase the framerate whereas the smooth-motion interpolation mentioned in the OP only tries to make the current framerate play back at a more even looking pace by interpolating frames that don't jive with your screen's refresh rate.