Post Processing Motion Blur

2014/12/21 19:52

I had a quick attempt at implementing a basic motion blur last weekend.

This was the first method I tried: a post processing effect that blurs the whole scene based on camera movement. The link covers most of it quite well, but to summarise it briefly: it requires a shader at the post processing stage with uniforms for the inverse of the current Model View Projection matrix (MVP) and the previous MVP (not inverted), as well as textures for the depth and colour components of the normally rendered scene.

It works by taking the clip space X and Y coordinates (in the range -1.0 to 1.0) and the depth (0 to 1.0) as an approximation of the rendered scene in clip space. Multiplying that by the inverted MVP of the scene gives us an approximation of the scene in world space. We can then multiply the approximated world space position by the previous MVP to get an approximation of the previous frame in clip space, which will be in a similar range to the approximation of the current frame's clip space coordinates. Subtracting one from the other gives us the difference in movement of the camera. We can then use this to sample the colour texture from the normally rendered scene along the direction of movement, giving a basic impression of things being blurred when they move. You can see the result here. Below is the GLSL version of the shader given in the first link.

uniform sampler2D diffuse_texture;
uniform sampler2D depth_texture;
 
uniform mat4 InvMVP; //inverse of the current MVP
uniform mat4 pMVP;   //previous frame's MVP (not inverted)
 
in vec2 vryTexCoord;
out vec4 outFragColour;
 
void main()
{
    vec2 texCoord = vryTexCoord.st;
    //1: reconstruct the clip space position from the depth buffer
    float zOverW = texture2D(depth_texture, texCoord).z;
    vec4 H = vec4((texCoord.x*2.0) - 1.0, (texCoord.y*2.0) - 1.0, zOverW, 1.0);
    vec4 D = InvMVP * H;
    vec4 worldPos = D/D.w;
 
    //2
    // Current viewport position 
    vec4 currentPos = H; 
    // Use the world position, and transform by the previous view- 
    // projection matrix. 
    vec4 previousPos = pMVP * worldPos; 
    // Convert to nonhomogeneous points [-1,1] by dividing by w. 
    previousPos /= previousPos.w; 
     
    // Use this frame's position and last frame's to compute the pixel 
    // velocity. The 0.05 scales the strength of the blur. 
    vec2 velocity = ((currentPos - previousPos)*0.05).xy;
     
    // Average a few samples along the velocity vector. 
    vec4 colour = texture2D(diffuse_texture, texCoord);
    texCoord += velocity;
 
    for(int i = 1; i < 4; ++i, texCoord += velocity)
    {
        colour += texture2D(diffuse_texture, texCoord);
    }
    outFragColour = colour/4.0;
}
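As a sanity check on the matrix maths, the same reconstruction can be run on the CPU. Below is a small sketch in Python/NumPy (not part of the original post, and the function name is my own): it treats the current MVP as the identity and the previous MVP as a small camera translation, then confirms the velocity comes out as expected.

```python
import numpy as np

def velocity_from_depth(tex_coord, depth, inv_mvp, prev_mvp, scale=0.05):
    """Mirror of the shader: rebuild the clip space position from the
    depth buffer, project it through the previous frame's MVP and
    return the screen space velocity."""
    # clip space position H, as in the shader (x,y in -1..1, z = depth)
    h = np.array([tex_coord[0]*2.0 - 1.0, tex_coord[1]*2.0 - 1.0, depth, 1.0])
    # approximate world space position
    d = inv_mvp @ h
    world_pos = d / d[3]
    # the clip space position this fragment had last frame
    prev = prev_mvp @ world_pos
    prev = prev / prev[3]
    # difference between the two clip space positions, scaled down
    return (h[:2] - prev[:2]) * scale

# current MVP = identity, so its inverse is also the identity
inv_mvp = np.eye(4)
# previous MVP: camera placed so everything sat 0.2 further right in x
prev_mvp = np.eye(4)
prev_mvp[0, 3] = 0.2

print(velocity_from_depth((0.5, 0.5), 0.5, inv_mvp, prev_mvp))
# the fragment moved -0.2 in x, scaled by 0.05 -> [-0.01  0.  ]
```

With identical current and previous matrices the velocity is zero, which is why a static camera produces no blur.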

I then went on to try out velocity maps, as shown at the top of this post. The difference between this method and the previous one is that the blur can be applied per object rather than to the screen as a whole. It is still mainly a post processing technique, but we must add an extra Frame Buffer Object (FBO) attachment to store the relative speed of each object rendered to the screen, as shown below.

//VERTEX SHADER for first pass, writing to a FBO with
//2 colour components and 1 depth
//not complete but should give you the basic idea
 
uniform mat4 modelViewProjectionMtx; //current MVP
uniform mat4 previousModelViewProjectionMtx; //MVP from previous frame
 
in vec4 inVertex; //positional vertex attribute
in vec4 inColour; //colour vertex attribute
 
out vec4 pos; //to fragment (current clip space position)
out vec4 previousPos; //to fragment (previous clip space position)
out vec4 colour; //to fragment (colour of fragment)
 
void main()
{
    //current clip space position
    pos = modelViewProjectionMtx * inVertex;
    //previous clip space position
    previousPos = previousModelViewProjectionMtx * inVertex;
    colour = inColour;
    //set the position
    gl_Position = pos; 
}

//FRAGMENT SHADER for first pass, writing to a FBO with
//2 colour components and 1 depth
//not complete but should give you the basic idea
 
out vec4 outFragData[MAX_MRT]; //output from fragment shader
in vec4 pos;
in vec4 previousPos;
in vec4 colour;
 
void main()
{
    //get the X and Y positions in clip space
    //since it's a post processing effect we will only be working in the
    //X and Y of screen space for now
    vec2 currentPos = pos.xy/pos.w; //X and Y components in normalised device coordinates
    vec2 prePos = previousPos.xy/previousPos.w; //X and Y components in normalised device coordinates
 
    //write out the visual data
    outFragData[0] = colour; //set colour
    //write out the velocity data (current position - previous position)
    //we need to write the data out into the range 0->1.0 and convert back to -1.0->1.0 during the post
    //processing stage
    outFragData[1] = vec4(((currentPos.x - prePos.x) + 1.0)*0.5, ((currentPos.y - prePos.y) + 1.0)*0.5, 0.0, 0.0);
}
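The packing into the colour attachment and the unpacking in the post processing pass are easy to get the wrong way round, so here is the round trip written out in Python (a sketch of the maths, not from the original post), assuming the velocity components stay within -1.0 to 1.0:

```python
def encode_velocity(v):
    """Pack a velocity component from -1..1 into the 0..1 range a
    colour attachment can store (as outFragData[1] does above)."""
    return (v + 1.0) * 0.5

def decode_velocity(stored):
    """Unpack a stored component back to -1..1."""
    return (stored - 0.5) * 2.0

print(decode_velocity(encode_velocity(-0.25)))  # -0.25
print(encode_velocity(0.0))  # 0.5 -> a static pixel stores mid grey
```

Note that the post processing shader below also negates the decoded value so the sampling loop walks back along the direction of motion.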
Then, in the post processing fragment shader, we can do something like this when rendering to a full screen quad:

uniform sampler2D diffuse_texture; //colour FBO attachment
uniform sampler2D velocity_texture; //velocity map FBO attachment
uniform sampler2D depth_texture; //depth FBO attachment
 
in vec2 vryTexCoord;
out vec4 outFragColour;
 
void main()
{
    vec2 texCoord = vryTexCoord.st;
    vec2 velocity = texture2D(velocity_texture, texCoord).xy;
    velocity = -(velocity - 0.5)*2.0; //remap to the range -1.0 -> 1.0
    if(dot(velocity, velocity) == 0.0) //in case this area of the velocity map is static
    {
        outFragColour = texture2D(diffuse_texture, texCoord);
    }
    else //otherwise do something similar to the first example
    {
        velocity *= 0.02; //scales the strength of the blur
        vec4 colour = texture2D(diffuse_texture, texCoord);
        texCoord += velocity;
 
        for(int i = 1; i < 8; ++i, texCoord += velocity)
        {
            colour += texture2D(diffuse_texture, texCoord);
        }
        outFragColour = colour/8.0;
    }
}
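To see what the sampling loop is actually doing, here is the same 8-tap average applied to a 1-D strip of pixels in Python/NumPy (illustrative only and not from the original post; it uses clamped nearest-neighbour lookups in place of texture filtering):

```python
import numpy as np

def blur_strip(strip, tex_coord, velocity, taps=8):
    """Average `taps` samples of a 1-D image, stepping `velocity`
    (in normalised 0..1 coordinates) between samples, as the
    shader's for loop does."""
    total = 0.0
    for i in range(taps):
        # nearest-neighbour lookup, clamped to the edge of the strip
        idx = int(round((tex_coord + i*velocity) * (len(strip) - 1)))
        idx = min(max(idx, 0), len(strip) - 1)
        total += strip[idx]
    return total / taps

# a hard edge: black on the left, white on the right
strip = np.array([0.0]*8 + [1.0]*8)
print(blur_strip(strip, 0.45, 0.0))   # no motion: just the pixel's own value
print(blur_strip(strip, 0.45, 0.02))  # motion drags samples across the edge, giving a grey
```

Where the velocity is zero every tap hits the same pixel, so static areas come out unchanged; where it is non-zero the taps straddle the edge and the average smears it out.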

This is the simplest of examples; to get a nice looking effect it will probably take quite some time tweaking the values.