What are the pros and cons of having Vsync?
People always say that without VSync you get higher FPS, but what's the use of that if your screen can't display half of them anyway?
Tearing, tearing everywhere. In some games you can't tell and won't care unless you're extremely sensitive to it.
Buttery smooth 60 fps until you hit 59 fps, and then it drops down to 30 fps. Buttery smooth 30 fps until you hit 29 fps, and then it does 15 fps.
This is why they made adaptive sync, aka FreeSync and Nvidia's proprietary G-Sync.
Basically, within a given range, the monitor can sync to the current frame rate. If your GPU puts out 42 fps, the monitor can sync to 42 Hz. Only works within the supported range.
Newer tech also pushes frame doubling (low framerate compensation) to make fps below that range look smoother.
The standard approach for rendering a game is using two image buffers, the front and back buffer. The front buffer contains a finished image; the back buffer is used for rendering the current image. Once the game finishes rendering a frame, it swaps the buffers.
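That swap loop can be sketched in a few lines. Everything here is illustrative (a tiny list-of-lists "screen", made-up function names), not a real graphics API, but it shows the core idea: the game only ever draws into the back buffer, and a finished frame becomes visible by swapping references.

```python
# Minimal sketch of double buffering; names and sizes are illustrative.
WIDTH, HEIGHT = 4, 3  # tiny stand-in for a framebuffer

def blank():
    return [[0] * WIDTH for _ in range(HEIGHT)]

front_buffer = blank()  # finished image, scanned out to the monitor
back_buffer = blank()   # work in progress, the game draws here

def render_frame(buf, frame_number):
    # Stand-in for actual rendering: just fill the buffer with a frame id.
    for row in buf:
        for x in range(WIDTH):
            row[x] = frame_number

def swap_buffers():
    global front_buffer, back_buffer
    front_buffer, back_buffer = back_buffer, front_buffer

for frame in range(1, 4):
    render_frame(back_buffer, frame)  # draw into the back buffer
    swap_buffers()                    # finished frame becomes visible
    # the monitor only ever reads from front_buffer

print(front_buffer[0][0])  # → 3 (the last finished frame)
```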
Your graphics card constantly streams pixels from the current front buffer to your monitor. It will do so at your monitor's refresh rate, from top left to bottom right, with a short break (vblank) before starting over. At 60 Hz, one iteration takes about 16.7ms.
Enabling VSync simply makes the game wait for the next vblank before swapping buffers. This introduces additional input lag, but you avoid the tears that inevitably appear when buffers are swapped outside the vblank period.
For example, let's assume you have a 60 Hz monitor and your computer is fast enough to run the game at 300 FPS. With (double-buffered) VSync enabled, the game renders a frame in 3.3ms, waits 13.4ms for VSync, and then the finished pixels wait between 0 and 16.7ms, depending on where they are in the image, before being sent to the monitor.
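The arithmetic from that example is easy to check. This is just back-of-the-envelope worst-case math for the 60 Hz / 300 FPS scenario above, not a measurement:

```python
# Worst-case latency for a pixel, 60 Hz monitor, game running at 300 FPS.
refresh_hz = 60
fps = 300

scanout_ms = 1000 / refresh_hz  # ~16.7 ms per monitor refresh
render_ms = 1000 / fps          # ~3.3 ms to render one frame

# Double-buffered VSync: render, then wait for vblank, then the last
# pixel waits nearly a full scanout before it's transmitted.
vsync_worst_ms = render_ms + (scanout_ms - render_ms) + scanout_ms

# No VSync: the front buffer is replaced every render_ms, so no pixel
# sits around longer than one frame time before being scanned out.
no_vsync_worst_ms = render_ms + render_ms

print(f"VSync worst case:    {vsync_worst_ms:.1f} ms")    # ~33.3 ms
print(f"no-VSync worst case: {no_vsync_worst_ms:.1f} ms") # ~6.7 ms
```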
Without VSync, the game renders a frame in 3.3ms, swaps buffers immediately, and starts rendering the next frame without any waiting. Since the front buffer is updated every 3.3ms, no pixel has to wait longer than that before transmission, because it would otherwise be replaced by the next frame.
So even if your monitor is only 60 Hz, you can minimize input latency just by pumping out as many FPS as possible.
That is basically a frame limiter. Instead of waiting for vblank before swapping buffers like VSync does, it swaps immediately and then waits long enough afterwards to meet the frame time target. So effectively you still get tearing, and it's more noticeable than without any fps limit, but there's less input lag than with VSync. The main point of that feature is to reduce power consumption.
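A frame limiter of that kind can be sketched as "swap first, sleep off the rest of the budget." The render/swap stand-in below is hypothetical (a fixed 3ms sleep instead of real GPU work), but the timing structure is the point: the swap happens immediately, and only then does the loop idle to hit the target frame time.

```python
# Sketch of a simple frame limiter: swap immediately, then sleep out
# whatever is left of the frame budget. Not a real engine API.
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame

def render_and_swap():
    time.sleep(0.003)  # pretend rendering + swapping takes ~3 ms

start = time.perf_counter()
for _ in range(5):
    frame_start = time.perf_counter()
    render_and_swap()  # swap right away -> tearing is still possible
    elapsed = time.perf_counter() - frame_start
    if elapsed < FRAME_BUDGET:
        # Idle out the remainder of the budget instead of rendering
        # more frames; this is where the power saving comes from.
        time.sleep(FRAME_BUDGET - elapsed)
total = time.perf_counter() - start
print(f"5 frames took {total * 1000:.0f} ms")  # roughly 5 * 16.7 ≈ 83 ms
```

Note the contrast with VSync: the wait comes *after* the swap, so the new frame starts transmitting mid-scanout (hence the tear), but the image on screen is always the freshest one available.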