Probably a dumb question but:

With the upcoming GTA 6, which people speculate might only run at 30 FPS, I was wondering why there isn’t a setting on current-gen consoles to turn on motion smoothing.

For example, my 10-year-old TV has a motion smoothing setting that works perfectly fine, even though it probably has less processing power than someone’s toaster.

It seems like this is being integrated in some cases on NVIDIA and AMD cards, through DLSS Frame Generation and AMD Fluid Motion Frames, but those are only compatible with a limited set of games.

But I wonder why this can’t be something that’s globally integrated into modern tech, so that we don’t have to play anything under 60 FPS anymore in 2025? I honestly couldn’t play something at 30 FPS since it’s so straining and hard to see things properly.

  • jarfil@beehaw.org · 15 hours ago

    Motion smoothing means that instead of showing:

    • Frame 1
    • 33ms rendering
    • Frame 2

    …you would get:

    • Frame 1
    • 33ms rendering
    • #ms interpolating Frames 1 and 2
    • Interpolated Frame 1.5
    • 16ms wait
    • Frame 2

    It might be fine for non-interactive stuff where you can get all the frames in advance, like cutscenes. For anything interactive though, it just increases latency while adding imprecise partial frames; there’s a rough sketch of both timelines at the end of this comment.

    It will never turn 30fps into true 60fps like:

    • Frame 1
    • 16ms rendering
    • Frame 2
    • 16ms rendering
    • Frame 3
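
    A minimal sketch of those two 30fps timelines in Python, assuming a 33 ms render time, a 3 ms interpolation cost and the 16 ms hold-back from above; all numbers are illustrative guesses, not measurements:

```python
# Sketch: when each frame appears on screen, native 30fps vs "smoothed" 30fps.
# RENDER_MS / INTERP_MS / HOLD_MS are assumed values, not real measurements.
RENDER_MS = 33   # time to render one real frame (~30 fps)
INTERP_MS = 3    # cost of building the in-between frame
HOLD_MS   = 16   # the "16ms wait" before the real frame is shown

def native_timeline(n):
    """Native 30fps: each real frame is shown as soon as it finishes rendering."""
    return [(f"Frame {i + 1}", i * RENDER_MS) for i in range(n)]

def smoothed_timeline(n):
    """Frame N+1 must finish rendering before Frame N.5 can be interpolated,
    so every real frame after the first is shown later than in the native
    case. That extra delay is the added input latency."""
    events = [("Frame 1", 0)]
    for i in range(1, n):
        rendered = i * RENDER_MS               # Frame i+1 finishes rendering here
        interp_shown = rendered + INTERP_MS    # Frame i.5 goes on screen first
        real_shown = interp_shown + HOLD_MS    # the real frame waits its turn
        events.append((f"Frame {i}.5 (interpolated)", interp_shown))
        events.append((f"Frame {i + 1}", real_shown))
    return events

for label, t in native_timeline(3):
    print(f"native   {t:3d} ms  {label}")
for label, t in smoothed_timeline(3):
    print(f"smoothed {t:3d} ms  {label}")
```

    With these numbers, every real frame after the first shows up about 19 ms later than it would without smoothing, even though twice as many frames hit the screen.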
  • MentalEdge@sopuli.xyz · 2 days ago

    Because it introduces latency.

    Higher framerates only partly improve the experience by looking better; they also make the game feel more responsive, because what you input is reflected in-game that fraction of a second sooner.

    Increasing the framerate while incurring higher latency might look nicer to an onlooker, but it generally feels a lot worse to actually play.
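
    A back-of-the-envelope worst case in Python, assuming input is only sampled when a frame starts rendering and ignoring display and OS overhead; the 33/16/19 ms figures are illustrative, not measured:

```python
# Worst case: the input arrives just after a frame started rendering, so it
# waits one full frame, then the frame containing it renders, then (with
# frame generation) the real frame is held back behind the generated one.
def worst_case_latency_ms(frame_time_ms, hold_back_ms=0):
    wait_for_next_frame = frame_time_ms   # missed the frame currently rendering
    render = frame_time_ms                # the frame that reflects the input
    return wait_for_next_frame + render + hold_back_ms

print("native 30 fps      :", worst_case_latency_ms(33), "ms")      # ~66 ms
print("native 60 fps      :", worst_case_latency_ms(16), "ms")      # ~32 ms
print("smoothed 30->60 fps:", worst_case_latency_ms(33, 19), "ms")  # ~85 ms
```

    The smoothed case shows twice as many frames as plain 30 fps but responds even later, which is exactly the “looks nicer, feels worse” trade-off.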

  • narc0tic_bird@lemm.ee · 2 days ago

    Input latency, for one: the next real frame is delayed so that the interpolated frame can be inserted before it.

    And image quality. The generated frame is, as I said, interpolated. Whether that’s done with a classical algorithm or with machine learning, it’s not even close to being accurate (at this point in time).
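
    A toy Python example of why a generated in-between frame is only a guess; it uses the crudest possible “interpolation” (averaging two frames), which is far simpler than what DLSS or Fluid Motion Frames actually do, but it shows the basic problem:

```python
import numpy as np

# One bright pixel moves from x=2 to x=6 between two real frames.
frame1 = np.zeros(8); frame1[2] = 1.0
frame2 = np.zeros(8); frame2[6] = 1.0

# Crudest interpolation: average the two frames.
blended = 0.5 * frame1 + 0.5 * frame2

# What the true middle frame would look like: the object at x=4.
true_middle = np.zeros(8); true_middle[4] = 1.0

print("blended:", blended)       # two half-bright ghosts at x=2 and x=6
print("real   :", true_middle)   # one object at x=4
```

    Real frame generation estimates motion (or uses a trained model) instead of blending blindly, but it is still guessing where things moved between frames, which is why artifacts tend to show up around fast motion and UI elements.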