• ocassionallyaduck@lemmy.world · 12 days ago

    This is a hilariously bad take for anything that isn’t VR. Async warping causes frame smearing on detail that is really noticeable when the screens aren’t so close that your peripheral blind spots make up for it.

    It’s an excellent tool in the toolbox, but to pretend that async reprojection “solved” this kind of means you don’t understand the problem itself…

    Edit: Also, the LTT video is very cool as a proof of concept, but it absolutely demonstrates my point regarding smearing. There are also many, MANY cases where a clean frame with legible information would be preferable to a lower-latency smeared frame.

    • MentalEdge@sopuli.xyz · 11 days ago

      Thank you for being rude.

      I’m not pretending it solves anything other than the job of increasing the perceived responsiveness of a game.

      There are a variety of potential ways to fill in the missing peripheral data, or even occluded data, other than simply stretching the edge of the image. Some of which very much overlap with what DLSS and frame generation are doing.

      My core argument is simply that it is superior to frame generation. If you’re gonna throw in fake frames, reprojection beats interpolation.

      Frame generation is completely unfit for purpose, because while it may spit out more frames, it makes games feel LESS responsive, not more.

      ASW does the opposite. Both are “hacky” and “fake” but one is clearly superior in terms of the perceived experience.

      One lets me feel like the game is running faster, the other makes the game look like it runs faster, while making it feel slower.

      This solution by Intel is better, essentially because it works more like ASW than other implementations of frame generation.

      • ocassionallyaduck@lemmy.world · 11 days ago

        Frame reprojection lacks motion data. It’s in the name: it is reprojecting the last frame. Frame generation uses the interval between real frames, feeds in motion vector data, and estimates movement.

        If I am trying to follow a ball going across the screen, not moving my mouse, reprojection is flat-out worse, because it is reprojecting the last frame, where nothing moved. The sequence is Frame 1, Frame 1RP, then Frame 2, and Frame 1 and Frame 1RP would have the ball in the exact same place. If I move my viewpoint, the perspective will feel correct, the viewport edges will blur, and the reprojection will map to the new perspective, which feels better for head tracking in VR. But in terms of information delivery there is no new data, not even a guess: it’s still the same frame, just placed at a different point in space, until the next real frame comes in.
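
        As a toy illustration (a minimal Python sketch with made-up numbers and hypothetical function names, not any engine’s actual reprojection code), camera-only reprojection can slide the whole image to match new input, but it cannot give the ball a new position between real frames:

        ```python
        # Hypothetical toy model: a "frame" is just the camera yaw it was rendered
        # with plus the ball's position on screen. Reprojection only knows the new
        # camera pose; it has no new simulation or render data to work with.
        def reproject(last_frame, new_camera_yaw):
            dx = new_camera_yaw - last_frame["camera_yaw"]
            # The whole image slides with the camera; the ball does not move
            # within the scene, because nothing has been re-simulated or re-rendered.
            return {"camera_yaw": new_camera_yaw, "ball_x": last_frame["ball_x"] - dx}

        frame1 = {"camera_yaw": 0.0, "ball_x": 100.0}      # real render at 0 ms
        frame1_rp = reproject(frame1, new_camera_yaw=0.0)  # mouse untouched -> identical image
        frame2 = {"camera_yaw": 0.0, "ball_x": 140.0}      # next real render at 10 ms

        print(frame1["ball_x"], frame1_rp["ball_x"], frame2["ball_x"])  # 100.0 100.0 140.0
        ```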

        With frame generation, if I am watching this ball again, it now looks more like Frame 1 (real), Frame 1G (estimate), Frame 2 (real). Now Frame 1 and Frame 1G have different data, and 1G is built on vector data between frames. It’s not 100% accurate, but it’s an educated guess at where the ball is going between Frame 1 and Frame 2. If I move my viewpoint, it doesn’t feel as responsive as reprojection, but the gained fake middle frame helps with tracking motion in action.
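
        In the same toy terms (again a hypothetical sketch, not the internals of DLSS or FSR frame generation), the generated middle frame does move the ball, because it advances it along a motion vector supplied by the renderer:

        ```python
        # Hypothetical toy model: the engine supplies a motion vector for the ball
        # (pixels per real frame); the generated frame advances the ball along it.
        def generate_midframe(frame1, ball_motion_vector, t=0.5):
            return {"ball_x": frame1["ball_x"] + ball_motion_vector * t}

        frame1 = {"ball_x": 100.0}                                 # real
        ball_motion_vector = 40.0                                  # px per frame, from the vector buffer
        frame1_g = generate_midframe(frame1, ball_motion_vector)   # estimated in-between frame
        frame2 = {"ball_x": 140.0}                                 # real

        print(frame1["ball_x"], frame1_g["ball_x"], frame2["ball_x"])  # 100.0 120.0 140.0
        ```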

        The real answer is to use frame generation with low-latency configurations, and also enable reprojection in the game engine if possible. Then you have the best of both worlds. For VR, the headset is the viewport, so it’s handled at a driver level. But for games, the viewport is a detached virtual camera, so the gamedev has to expose this and set up reprojection, or Nvidia and AMD need to build some kind of DLSS/FSR-like hook for devs to utilize.
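
        Very roughly, a combined loop could look something like this (purely a hypothetical sketch of the idea, not any vendor’s API): object motion comes from motion vectors, and a final camera-only reprojection is applied with the very latest input right before the frame is shown.

        ```python
        # Hypothetical combined pipeline: frame generation moves objects along
        # motion vectors, then a late camera-only reprojection uses the newest
        # input sample to cut perceived input latency.
        def build_frame(ball_x, ball_vec, t):
            # Frame-generation step: advance the ball along its motion vector.
            return {"ball_x": ball_x + ball_vec * t}

        def reproject_to_latest_input(frame, rendered_yaw, latest_yaw):
            # Reprojection step: slide the whole image by the newest camera delta.
            dx = latest_yaw - rendered_yaw
            return {"ball_x": frame["ball_x"] - dx}

        real_ball_x, ball_vec = 100.0, 40.0  # state at the last real render
        rendered_yaw = 0.0                   # camera pose used for that render
        latest_yaw = 2.0                     # pose sampled just before scan-out

        generated = build_frame(real_ball_x, ball_vec, t=0.5)  # object-motion guess
        displayed = reproject_to_latest_input(generated, rendered_yaw, latest_yaw)
        print(displayed["ball_x"])  # 118.0: the ball advanced AND the view matches the newest input
        ```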

        But if you could do both at once, that would be very cool. You would get the most responsive feel in terms of lag between input and action on screen, while also getting motion updates faster than a full render pass. So yes, Intel’s solution is a step in that direction. But ASW is not in itself a solution, especially for high-motion scenes with lots of heavy graphics. There is a reason the demo engine in the LTT video was extremely basic. If you overloaded it with particle effects and heavy rendering like you see in high-end titles, the smearing from reprojection would look awful without rules and bounding on it.

        • MentalEdge@sopuli.xyz · 11 days ago

          The reprojected frame with the ball in the same spot is still more up to date than a generated frame using interpolation.

          With reprojection, every other frame is showing where the ball actually is.

          It essentially displays the game world at the framerate at which it is actually being generated, with as little latency as possible.

          I vastly prefer this. Together with the reduced perceived input latency, this makes motion tracking FAR easier than with frame generation.

          With current frame generation, every frame is showing where the ball was two or three frames ago. You never see where it is right now. Because of this, in fast-paced action, hand-eye coordination is slower, more likely to overshoot, etc.
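
          To put made-up numbers on that (a toy timeline, assuming interpolation-style frame generation that has to hold each real frame back until the next one exists):

          ```python
          # Toy timeline: real frames finish every 10 ms.
          # Interpolation cannot show frame N until frame N+1 has been rendered,
          # so everything on screen is roughly one real-frame interval old.
          # Reprojection shows each real frame the moment it is ready.
          real_render_times = [0, 10, 20, 30]  # ms at which real frames finish

          interp_display = [(n, t + 10) for n, t in enumerate(real_render_times[:-1])]
          reproj_display = [(n, t) for n, t in enumerate(real_render_times)]

          print(interp_display)  # [(0, 10), (1, 20), (2, 30)] -> real frames shown ~10 ms stale
          print(reproj_display)  # [(0, 0), (1, 10), (2, 20), (3, 30)] -> shown as soon as rendered
          ```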

          And a further-developed reprojection absolutely could account for such things.

          • ocassionallyaduck@lemmy.world · 10 days ago

            Your understanding of frame generation is incorrect.

            Again, let’s use an absurdly low FPS and a big frame window as an example: 10ms between frames.

            If your frame window is 10ms, you have Frame 1 at 0ms and Frame 2 at 10ms. Frame generation is not just interpolation. That is what your new TV does when you activate motion smoothing and soap-opera mode. That is not what framegen is, at all.

            In frame generation, the frame generation engine (driver or program) stores a motion vector array. This determines the trend line of how pixels are likely to change. In our example, let’s say the motion vectors for the ball indicate large motion in a diagonal direction, while the overall frame indicates little or no motion because the user isn’t swinging the camera wildly. Frame generation then uses Frame 1 to make an estimate of a Frame 1.5, and the ball does actually move in the image thanks to the motion vector analysis. The ball moves independently of the scene itself, not just with changes in the user’s camera, so the user can see the ball itself moving against the background.
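
            A toy version of that vector-driven estimate (hypothetical names and numbers, not how any particular driver actually stores its buffers):

            ```python
            # Hypothetical per-region motion vector buffer: the ball has a large
            # diagonal vector, the background almost none (camera nearly still).
            motion_vectors = {
                "ball":       (4.0, -3.0),  # px of motion per real frame (diagonal)
                "background": (0.1,  0.0),  # camera essentially not moving
            }
            positions_frame1 = {"ball": (100.0, 200.0), "background": (0.0, 0.0)}

            def estimate_frame(positions, vectors, t=0.5):
                # Advance every region along its own vector by half a frame interval.
                return {k: (x + vectors[k][0] * t, y + vectors[k][1] * t)
                        for k, (x, y) in positions.items()}

            frame1_5 = estimate_frame(positions_frame1, motion_vectors)
            print(frame1_5["ball"])  # (102.0, 198.5): the ball moves against the (nearly static) background
            ```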

            So, in Frame 1.5, the ball you are seeing, as well as the scene, have actually moved. Now the user can see this motion, and let’s say they didn’t notice it in Frame 1. This means Frame 1.5 is a chance for them to react! And their inputs go through sooner, reducing true latency by allowing them to react to in-game stimuli faster. Yes, even if the frame is “faked”.

            In reprojection, at Frame 1.5RP, again, crucially, there is no new scene data. Reprojection is not using motion vectors; it is using the camera and geometry only. If the user isn’t moving the POV at all, for example, then the reprojection just puts the frame where it already was, and the user waits the full 10ms before the ball appears to move. Even if the camera is moving, reprojection only adjusts the scene angle relative to the camera; the ball is not going to move within the overall scene. Again, consider the ball flying left while the user walks left. The reprojection cannot move the ball left. If anything, if the reprojection is applied to the existing scene geometry, the opposite would occur, and the ball may even appear to move right or slow down due to parallax.

            Reprojection takes old frame data and moves it like flat cards in 3D space, so the ball stays in the same position within the scene until Frame 2. It can only be affected by the camera motion that drives the reprojection, not by other rendering data, and what the user sees of the ball wouldn’t change until 10ms later. Only the overall flat scene can be reprojected, so tilting or swinging the camera can feel instantly responsive. But until the next render pass, the real motion data, delivered either via motion vectors or via Frame 2, never reaches them in a reprojection at 1.5.
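
            The walking-left example with toy numbers (hypothetical, and simplified to a single depth value for the ball, whereas real reprojection warps per pixel using a depth buffer):

            ```python
            # Hypothetical single-object, depth-aware reprojection. The warp only
            # knows the camera moved; it has no idea the ball is flying left.
            def reproject_x(screen_x, depth, camera_dx, focal=500.0):
                # Camera translating left makes scene points appear to shift right,
                # and nearer points (smaller depth) shift more: simple parallax.
                return screen_x - focal * camera_dx / depth

            ball_screen_x = 100.0  # where Frame 1 actually rendered the ball
            ball_depth    = 5.0    # metres from the camera (made up)
            camera_dx     = -0.05  # player strafed 5 cm to the LEFT since Frame 1

            print(reproject_x(ball_screen_x, ball_depth, camera_dx))  # 105.0: the ball appears to drift RIGHT
            ```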

            So again, your understanding of current frame gen is wildly incorrect. And what you are describing as reprojection getting better is essentially adding reprojection to framegen: use motion vectors to render the new portion of the frame, and use the reprojection to adjust the overall POV based on camera input. Which, again, works well. Adding reprojection and framegen together is not a bad idea, and reprojection is great for reducing perceived latency (which is why it is essential for avoiding motion sickness in VR). These are two techniques solving different forms of latency issues; combined they offer far more.

            • MentalEdge@sopuli.xyz · 10 days ago

              So the article above is straight up wrong? All frame generation is already extrapolation, not interpolation?

              I had to look it up because I could have sworn that reprojection can and does use motion vectors to do more than just update the perspective.

              AND IT DOES.

              You’re talking about what VR does as the last step of EVERY rendered frame, which is an extremely simple reprojection (what Oculus called ATW) to get the frame closer to what it would have been had it been rendered instantly (which it obviously can’t be). This also seems to be the extent to which the Unity demo showcased by LTT took it.

              What Oculus called ASW, asynchronous spacewarp, absolutely can and does update the position of the ball, which is why it can be and is used to entirely replace rendering every other frame.

              Valve’s version of it is a lot simpler and closer to just ATW, and does not use motion vectors when compensating for lost frames. Unlike ASW, their solution was never meant to be used constantly, for every other frame, to enable VR on lesser hardware.