Motion Blur Tutorial

Originally posted on 21/04/2011

What is motion blur?

Motion pictures are made up of a series of still images displayed in quick succession. These images are captured by briefly opening a shutter to expose a piece of film/electronic sensor to light (via a lens system), then closing the shutter and advancing the film/saving the data. Motion blur occurs when an object in the scene (or the camera itself) moves while the shutter is open during the exposure, causing the resulting image to streak along the direction of motion. It is an artifact which the image-viewing populace has grown so used to that its absence is conspicuous; adding it to a simulated image greatly enhances realism.

Later we'll look at a screen space technique for simulating motion blur caused only by movement of the camera. Approaches to object motion blur are a little more complicated and worth a separate tutorial. First, though, let's examine a 'perfect' (full camera and object motion blur) solution which is very simple but not really efficient enough for realtime use.

Perfect solution

This is a naive approach which has the benefit of producing completely realistic full motion blur, incorporating both the camera movement and movement of the objects in the scene relative to the camera. The technique works like this: for each frame, render the scene multiple times at different temporal offsets, then blend together the results.

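A rough sketch of one frame's worth of work, using the legacy OpenGL accumulation buffer (renderScene and frameTime are hypothetical placeholders for your own rendering and timing code):

   const int nSamples = 16;             // sub-frames per displayed frame
   const float exposure = 1.0f / 60.0f; // shutter-open time in seconds

   glClear(GL_ACCUM_BUFFER_BIT);
   for (int i = 0; i < nSamples; ++i) {
   // render the scene at a time offset within the exposure interval:
      renderScene(frameTime + exposure * float(i) / float(nSamples));

   // accumulate 1/n of this sub-frame:
      glAccum(GL_ACCUM, 1.0f / float(nSamples));
   }

// write the averaged image back to the color buffer:
   glAccum(GL_RETURN, 1.0f);
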
This technique is actually described in the red book (chapter 10). Unfortunately it requires rendering at samples times the display framerate, which is either impossible or impractical for most realtime applications. And don't think about just reusing the previous samples frames - this will give you trippy trails (and nausea) but definitely not motion blur. So how do we go about doing it quick n' cheap?

Screen space to the rescue!

The idea is simple: each rendered pixel represents a point in the scene at the current frame. If we know where it was in the previous frame, we can apply a blur along a vector between the two points in screen space. This vector represents the size and direction of the motion of that point between the previous frame and the current one, hence we can use it to approximate the motion of a point during the intervening time, directly analogous to a single exposure in the real world.

The crux of this method is calculating a previous screen space position for each pixel. Since we're only going to implement motion blur caused by motion of the camera, this is very simple: each frame, store the camera's model-view-projection matrix so that in the next frame we'll have access to it. Since this is all done on the CPU the details will vary; I'll just assume that you can supply the following to the fragment shader: the previous model-view-projection matrix and the inverse of the current model-view matrix.
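
As a minimal CPU-side sketch using GLM (the function and variable names here are assumptions, not part of any particular framework), it might look like this:

   #include <glm/glm.hpp>
   #include <glm/gtc/type_ptr.hpp>
   // plus your GL loader of choice (glew, glad, etc.)

   glm::mat4 prevModelViewProj; // persists between frames

   void setMotionBlurUniforms(const glm::mat4& modelView,
                              const glm::mat4& projection, GLuint program) {
   // upload last frame's model-view-projection and this frame's inverse model-view:
      glm::mat4 inverseModelView = glm::inverse(modelView);
      glUniformMatrix4fv(glGetUniformLocation(program, "uPrevModelViewProj"),
                         1, GL_FALSE, glm::value_ptr(prevModelViewProj));
      glUniformMatrix4fv(glGetUniformLocation(program, "uInverseModelViewMat"),
                         1, GL_FALSE, glm::value_ptr(inverseModelView));

   // store for next frame:
      prevModelViewProj = projection * modelView;
   }

The key point is simply that the matrix uploaded as uPrevModelViewProj is the one computed during the previous frame.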

Computing the blur vector

In order to compute the blur vector we take the following steps within our fragment shader:
  1. Get the pixel's current view space position. There are a number of equally good methods for extracting this from an existing depth buffer; see Matt Pettineo's blog for a good overview. In the example shader I use a per-pixel ray to the far plane, multiplied by a per-pixel linear depth.
  2. From this, compute the pixel's current world space position using the inverse of the current model-view matrix.
  3. From this, compute the pixel's previous normalized device coordinates using the previous model-view-projection matrix and a perspective divide.
  4. Scale and bias the result to get texture coordinates.
  5. Our blur vector is the coordinates we just calculated minus the current pixel's texture coordinates.
The eagle-eyed reader may have already spotted that this can be optimized, but for now we'll do it long-hand for the purposes of clarity. Here's the fragment program:
   uniform sampler2D uTexLinearDepth;

   uniform mat4 uInverseModelViewMat; // inverse model->view
   uniform mat4 uPrevModelViewProj; // previous model->view->projection

   noperspective in vec2 vTexcoord;
   noperspective in vec3 vViewRay; // for extracting current view space position
 
   void main() {
   // get current world space position:
      vec4 current = vec4(vViewRay * texture(uTexLinearDepth, vTexcoord).r, 1.0);
      current = uInverseModelViewMat * current; // view space -> world space

   // get previous screen space position:
      vec4 previous = uPrevModelViewProj * current;
      previous.xyz /= previous.w;
      previous.xy = previous.xy * 0.5 + 0.5;

      vec2 blurVec = previous.xy - vTexcoord;
   }

Using the blur vector

So what do we do with this blur vector? We might try stepping along the vector for n samples, starting at previous.xy and ending at vTexcoord. However, this produces ugly discontinuities in the effect.

To fix this we can center the blur vector on vTexcoord, thereby blurring across these velocity boundaries.
Here's the rest of the fragment program (uTexInput is the texture we're blurring):
// number of samples; 8 is a reasonable minimum (see Optimization below):
   const int nSamples = 8;

// perform blur, starting with the centre sample:
   vec4 result = texture(uTexInput, vTexcoord);
   for (int i = 1; i < nSamples; ++i) {
   // get offset in range [-0.5, 0.5]:
      vec2 offset = blurVec * (float(i) / float(nSamples - 1) - 0.5);

   // sample & add to result:
      result += texture(uTexInput, vTexcoord + offset);
   }

   result /= float(nSamples); // write this to the shader's output

A sly problem

There is a potential issue around framerate: if it is very high our blur will be barely visible as the amount of motion between frames will be small, hence blurVec will be short. If the framerate is very low our blur will be exaggerated, as the amount of motion between frames will be high, hence blurVec will be long.

While this is physically realistic (higher fps = shorter exposure, lower fps = longer exposure) it might not be aesthetically desirable. This is especially true for variable-framerate games which need to maintain playability as the framerate drops without the entire image becoming a smear. At the other end of the problem, for displays with high refresh rates (or vsync disabled) the blur lengths end up being so short that the result will be pretty much unnoticeable. What we want in these situations is for each frame to look as though it was rendered at a particular framerate (which we'll call the 'target framerate') regardless of the actual framerate.

The solution is to scale blurVec according to the current actual fps; if the framerate goes up we increase the blur length, if it goes down we decrease the blur length. When I say "goes up" or "goes down" I mean "changes relative to the target framerate." This scale factor is easily calculated:

   mblurScale = currentFps / targetFps

So if our target fps is 60 but the actual fps is 30, we halve our blur length. Remember that this is not physically realistic - we're fiddling the result in order to compensate for a variable framerate.
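
In code this could look like the following sketch (deltaSeconds, targetFps and uMblurScale are hypothetical names for your frame timer, chosen target and a new uniform):

   // once per frame on the CPU:
   float currentFps = 1.0f / deltaSeconds;
   float mblurScale = currentFps / targetFps;
   glUniform1f(glGetUniformLocation(program, "uMblurScale"), mblurScale);

In the fragment shader, blurVec is then simply multiplied by uMblurScale before the samples are taken.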

Optimization

The simplest way to improve the performance of this method is to reduce the number of blur samples. I've found it looks okay down to about 8 samples; below that, 'banding' artifacts start to become apparent.

As I hinted before, computing the blur vector can be streamlined. Notice that, in the first part of the fragment shader, we did two matrix multiplications:
// get current world space position:
   vec4 current = vec4(vViewRay * texture(uTexLinearDepth, vTexcoord).r, 1.0);
   current = uInverseModelViewMat * current;
 
// get previous screen space position:
   vec4 previous = uPrevModelViewProj * current;
   previous.xyz /= previous.w;
   previous.xy = previous.xy * 0.5 + 0.5;
These can be combined into a single transformation by constructing a current-to-previous matrix:

mat4 currentToPrevious = uPrevModelViewProj * uInverseModelViewMat;

If we do this on the CPU we only have to do a single matrix multiplication per fragment in the shader. Also, this reduces the amount of data we upload to the GPU (always a good thing). The relevant part of the fragment program now looks like this:
   vec3 current = vViewRay * texture(uTexLinearDepth, vTexcoord).r; // current view space position
   vec4 previous = uCurrentToPreviousMat * vec4(current, 1.0); // view space -> previous clip space
   previous.xyz /= previous.w;
   previous.xy = previous.xy * 0.5 + 0.5;
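
On the CPU side this might look like the following (a sketch reusing the GLM setup and prevModelViewProj from the earlier snippet; uCurrentToPreviousMat matches the uniform above):

   // combine the two transforms once per frame on the CPU:
   glm::mat4 currentToPrevious = prevModelViewProj * glm::inverse(modelView);
   glUniformMatrix4fv(glGetUniformLocation(program, "uCurrentToPreviousMat"),
                      1, GL_FALSE, glm::value_ptr(currentToPrevious));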

Conclusion

Even this limited form of motion blur makes a big improvement to the appearance of a rendered scene; moving around looks generally smoother and more realistic. At lower framerates (~30fps) the effect produces a filmic appearance, hiding some of the temporal aliasing that makes rendering (and stop-motion animation) 'look fake'.

If that wasn't enough, head over to the object motion blur tutorial, otherwise have some links:

"Stupid OpenGL Shader Tricks" Simon Green, NVIDIA

"Motion Blur as a Post Processing Effect" Gilberto Rosado, GPU Gems 3

