Last modified on 11 October 2012, at 08:53

GLSL Programming/Unity/Mirrors

“Toilet of Venus”, ca. 1644-48 by Diego Rodríguez de Silva y Velázquez.

This tutorial covers the rendering of virtual images of objects in plane mirrors.

It is based on blending as described in Section “Transparency” and requires some understanding of Section “Vertex Transformations”.

Virtual Images in Plane Mirrors

The image we see in a plane mirror is called a “virtual image” because it is the same as the image of the real scene except that all positions are mirrored at the plane of the mirror; thus, we don't see the real scene but a “virtual” image of it.

This transformation of a real object to a virtual object can be computed by transforming each position from world space into the local coordinate system of the mirror, negating the y coordinate (assuming that the mirror plane is spanned by the x and z axes), and transforming the resulting position back to world space. This suggests a very straightforward approach to rendering virtual images of game objects: use another shader pass with a vertex shader that mirrors every vertex and normal vector, and a fragment shader that mirrors the positions of light sources before computing the shading. (In fact, the light sources at their original positions might also be taken into account because they represent light that is reflected by the mirror before reaching the real object.) There isn't anything wrong with this approach except that it is very limited: no other objects may be behind the mirror plane (not even partially), and the space behind the mirror plane must only be visible through the mirror. This is fine for mirrors on the walls of a box that contains the whole scene if all the geometry outside the box can be removed. However, it works neither for mirrors with objects behind them (as in the painting by Velázquez) nor for semitransparent mirrors, for example glass windows.
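To make the mirroring concrete, here is a minimal sketch (in plain Python rather than shader code) of the reflection of a single world-space point. The plane is given by an arbitrary point on it and its unit normal; the normal plays the role of the mirror's local y axis, so this is equivalent to the transform-negate-transform-back recipe above:

```python
def reflect_point(p, plane_point, plane_normal):
    """Mirror a world-space point across a plane.

    Equivalent to transforming p into the mirror's local frame
    (whose y axis is plane_normal), negating y, and transforming back.
    plane_normal is assumed to be of unit length.
    """
    # signed distance of p from the plane along the normal
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, plane_normal))
    # move the point twice that distance against the normal
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, plane_normal))

# A point 3 units above a mirror in the x-z plane ends up 3 units below it.
print(reflect_point((1.0, 3.0, 2.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
# → (1.0, -3.0, 2.0)
```
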

Placing the Virtual Objects

It turns out that implementing a more general solution is not straightforward in the free version of Unity because neither rendering to textures (which would allow us to render the scene from a virtual camera position behind the mirror) nor stencil buffers (which would allow us to restrict rendering to the region of the mirror) is available there.

I came up with the following solution: First, every game object that might appear in the mirror has to have a virtual “Doppelgänger”, i.e. a copy that follows all the movements of the real game object but with positions mirrored at the mirror plane. Each of these virtual objects needs a script that sets its position and orientation according to the corresponding real object and the mirror plane, which are specified by public variables:

@script ExecuteInEditMode()
 
var objectBeforeMirror : GameObject;
var mirrorPlane : GameObject;
 
function Update () 
{
   if (null != mirrorPlane) 
   {
      renderer.sharedMaterial.SetMatrix("_WorldToMirror", 
         mirrorPlane.renderer.worldToLocalMatrix);
      if (null != objectBeforeMirror) 
      {
         transform.position = objectBeforeMirror.transform.position;
         transform.rotation = objectBeforeMirror.transform.rotation;
         transform.localScale = 
            -objectBeforeMirror.transform.localScale; 
         transform.RotateAround(objectBeforeMirror.transform.position, 
            mirrorPlane.transform.TransformDirection(
            Vector3(0.0, 1.0, 0.0)), 180.0);
 
         var positionInMirrorSpace : Vector3 = 
            mirrorPlane.transform.InverseTransformPoint(
            objectBeforeMirror.transform.position);
         positionInMirrorSpace.y = -positionInMirrorSpace.y;
         transform.position = mirrorPlane.transform.TransformPoint(
            positionInMirrorSpace);
      }
   }
}

The origin of the local coordinate system (objectBeforeMirror.transform.position) is transformed as described above; i.e., it is transformed to the local coordinate system of the mirror with mirrorPlane.transform.InverseTransformPoint(), then the y coordinate is negated, and then the result is transformed back to world space with mirrorPlane.transform.TransformPoint(). The orientation, however, is a bit more difficult to specify in JavaScript: we have to reflect all coordinates (transform.localScale = -objectBeforeMirror.transform.localScale) and rotate the virtual object by 180° around the surface normal vector of the mirror (Vector3(0.0, 1.0, 0.0) transformed to world coordinates). This does the trick because a rotation by 180° corresponds to a reflection of the two axes orthogonal to the rotation axis. Thus, this rotation undoes the previous reflection for two axes, and we are left with the one reflection in the direction of the rotation axis, which was chosen to be the normal of the mirror.
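The claim that the scale negation followed by the 180° rotation amounts to a single planar reflection can be verified numerically. This is just a sanity check in plain Python (not Unity code), using 3×3 matrices for the special case of a mirror normal (0, 1, 0):

```python
# Check that negating all axes (scale = -1) followed by a rotation of
# 180 degrees around the mirror normal equals a single planar reflection.
# The mirror normal here is n = (0, 1, 0), i.e. a mirror in the x-z plane.

def matmul(a, b):
    """Multiply two 3x3 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

point_reflection = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]   # scale by -1
rot_180_about_y  = [[-1, 0, 0], [0,  1, 0], [0, 0, -1]]   # 180° around n
reflection_in_y  = [[ 1, 0, 0], [0, -1, 0], [0, 0,  1]]   # reflect y only

assert matmul(rot_180_about_y, point_reflection) == reflection_in_y
```
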

Of course, the virtual objects should always follow the real object, i.e. they shouldn't collide with other objects nor be influenced by physics in any other way. Using this script on all virtual objects is already sufficient for the case mentioned above: no real objects behind the mirror plane and no other way to see the space behind the mirror plane except through the mirror. In other cases we have to render the mirror in order to occlude the real objects behind it.

Rendering the Mirror

Now things become a bit tricky. Let's list what we want to achieve:

  • Real objects behind the mirror should be occluded by the mirror.
  • The mirror should be occluded by the virtual objects (which are actually behind it).
  • Real objects in front of the mirror should occlude the mirror and any virtual objects.
  • Virtual objects should only be visible in the mirror, not outside of it.

If we could restrict rendering to an arbitrary part of the screen (e.g. with a stencil buffer), this would be easy: render all geometry including an opaque mirror; then restrict rendering to the visible parts of the mirror (i.e. not the parts that are occluded by other real objects); clear the depth buffer in these visible parts of the mirror; and render all virtual objects. It would be straightforward if we had a stencil buffer.

Since we don't have a stencil buffer, we use the alpha component (a.k.a. opacity or A component) of the framebuffer as a substitute (similar to the technique used in Section “Translucent Bodies”). In the first pass of the shader for the mirror, all pixels in the visible part of the mirror (i.e. the part that is not occluded by real objects in front of it) will be marked by an alpha component of 0, while pixels in the rest of the screen should have an alpha component of 1. The first problem is that we have to make sure that the rest of the screen has an alpha component of 1, i.e. all background shaders and object shaders should set alpha to 1. For example, Unity's skyboxes don't set alpha to 1; thus, we have to modify and replace all those shaders that don't set alpha to 1. Let's assume that we can do that. Then the first pass of the shader for the mirror is:

      // 1st pass: mark mirror with alpha = 0
      Pass { 
         GLSLPROGRAM
 
         #ifdef VERTEX
 
         void main()
         { 
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
         }
 
         #endif
 
         #ifdef FRAGMENT
 
         void main()
         {
            gl_FragColor = vec4(1.0, 0.0, 0.0, 0.0); 
               // this color should never be visible, 
               // only alpha is important
         }
 
         #endif
 
         ENDGLSL
      }

How does this help us to limit the rendering to the pixels with alpha equal to 0? It doesn't. However, it does help us to restrict any changes of colors in the framebuffer by using a clever blend equation (see Section “Transparency”):

Blend OneMinusDstAlpha DstAlpha

We can think of the blend equation as:

vec4 result = vec4(1.0 - pixel_color.a) * gl_FragColor + vec4(pixel_color.a) * pixel_color;

where pixel_color is the color of a pixel in the framebuffer. Let's see what the expression is for pixel_color.a equal to 1 (i.e. outside of the visible part of the mirror):

vec4(1.0 - 1.0) * gl_FragColor + vec4(1.0) * pixel_color == pixel_color

Thus, if pixel_color.a is equal to 1, the blending equation makes sure that we don't change the pixel color in the framebuffer. What happens if pixel_color.a is equal to 0 (i.e. inside the visible part of the mirror)?

vec4(1.0 - 0.0) * gl_FragColor + vec4(0.0) * pixel_color == gl_FragColor

In this case, the pixel color of the framebuffer will be set to the fragment color that was set in the fragment shader. Thus, using this blend equation, our fragment shader will only change the color of pixels with an alpha component of 0. Note that the alpha component in gl_FragColor should also be 0 such that the pixels are still marked as part of the visible region of the mirror.
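The behavior of this blend equation can be simulated outside the GPU. The following Python sketch applies `Blend OneMinusDstAlpha DstAlpha` per RGBA component and confirms the two cases discussed above:

```python
def blend(frag, pixel):
    """Blend OneMinusDstAlpha DstAlpha, applied per RGBA component.

    frag is the fragment color (gl_FragColor), pixel is the color
    already in the framebuffer; both are (r, g, b, a) tuples.
    """
    a = pixel[3]  # destination (framebuffer) alpha
    return tuple((1.0 - a) * f + a * p for f, p in zip(frag, pixel))

frag = (0.2, 0.4, 0.6, 0.0)        # fragment color; alpha kept at 0

outside = (0.9, 0.9, 0.9, 1.0)     # framebuffer alpha 1: pixel unchanged
assert blend(frag, outside) == outside

inside = (0.1, 0.1, 0.1, 0.0)      # framebuffer alpha 0: pixel overwritten
assert blend(frag, inside) == frag
```
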

That was the first pass. The second pass has to clear the depth buffer before we start to render the virtual objects so that we can use the normal depth test to compute occlusions (see Section “Per-Fragment Operations”). Actually, it doesn't matter whether we clear the depth buffer only for the pixels in the visible part of the mirror or for all pixels of the screen, because we won't change the colors of any pixels with alpha equal to 1 anyway. In fact, this is very fortunate because (without a stencil test) we cannot limit the clearing of the depth buffer to the visible part of the mirror. Instead, we clear the depth buffer for the whole mirror by transforming its vertices to the far clipping plane, i.e. the maximum depth.

As explained in Section “Vertex Transformations”, the output of the vertex shader in gl_Position is divided automatically by the fourth coordinate gl_Position.w to compute normalized device coordinates between -1 and +1. In fact, a z coordinate of +1 represents the maximum depth; thus, this is what we are aiming for. However, because of that automatic (perspective) division by gl_Position.w, we have to set gl_Position.z to gl_Position.w in order to get a normalized device coordinate of +1. Here is the second pass of the mirror shader:

      // 2nd pass: set depth to far plane such that 
      // we can use the normal depth test for the reflected geometry
      Pass { 
         ZTest Always
         Blend OneMinusDstAlpha DstAlpha
 
         GLSLPROGRAM
 
         uniform vec4 _Color; 
            // user-specified background color in the mirror
 
         #ifdef VERTEX
 
         void main()
         {                                         
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
            gl_Position.z = gl_Position.w;
               // the perspective division will divide gl_Position.z 
               // by gl_Position.w; thus, the depth is 1.0, 
               // which represents the far clipping plane
         }
 
         #endif
 
         #ifdef FRAGMENT
 
         void main()
         {
            gl_FragColor = vec4(_Color.rgb, 0.0); 
               // set alpha to 0.0 and 
               // the color to the user-specified background color
         }
 
         #endif
 
         ENDGLSL
      }

The ZTest is set to Always in order to deactivate it. This is necessary because our vertices are actually behind the mirror (in order to reset the depth buffer); thus, the fragments would fail a normal depth test. We use the blend equation which was discussed above to set the user-specified background color of the mirror. (If there is a skybox in your scene, you would have to compute the mirrored view direction and look up the environment map here; see Section “Skyboxes”.)
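The effect of setting gl_Position.z to gl_Position.w can be checked by doing the perspective division by hand: whatever the clip-space w of a vertex is, the normalized device depth ends up at +1, the far plane. A one-line sketch:

```python
def clip_to_ndc_depth(z_clip, w_clip):
    """Perspective division: normalized device depth in [-1, +1]."""
    return z_clip / w_clip

# Setting gl_Position.z = gl_Position.w forces the depth to +1
# (the far clipping plane) regardless of the vertex's actual distance.
for w in (0.5, 1.0, 10.0, 123.4):
    assert clip_to_ndc_depth(w, w) == 1.0
```
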

That completes the two passes of the mirror shader. Here is the complete shader code, which uses the render queue "Transparent+10" to make sure that the mirror is rendered after all real objects (including transparent objects) have been rendered:

Shader "GLSL shader for mirrors" {
   Properties {
      _Color ("Mirror's Color", Color) = (1, 1, 1, 1) 
   } 
   SubShader {
      Tags { "Queue" = "Transparent+10" } 
         // draw after all other geometry has been drawn 
         // because we mess with the depth buffer
 
      // 1st pass: mark mirror with alpha = 0
      Pass { 
         GLSLPROGRAM
 
         #ifdef VERTEX
 
         void main()
         { 
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
         }
 
         #endif
 
         #ifdef FRAGMENT
 
         void main()
         {
            gl_FragColor = vec4(1.0, 0.0, 0.0, 0.0); 
               // this color should never be visible, 
               // only alpha is important
         }
 
         #endif
 
         ENDGLSL
      }
 
      // 2nd pass: set depth to far plane such that 
      // we can use the normal depth test for the reflected geometry
      Pass { 
         ZTest Always
         Blend OneMinusDstAlpha DstAlpha
 
         GLSLPROGRAM
 
         uniform vec4 _Color; 
            // user-specified background color in the mirror
 
         #ifdef VERTEX
 
         void main()
         {                                         
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
            gl_Position.z = gl_Position.w;
               // the perspective division will divide gl_Position.z 
               // by gl_Position.w; thus, the depth is 1.0, 
               // which represents the far clipping plane
         }
 
         #endif
 
         #ifdef FRAGMENT
 
         void main()
         {
            gl_FragColor = vec4(_Color.rgb, 0.0); 
               // set alpha to 0.0 and 
               // the color to the user-specified background color
         }
 
         #endif
 
         ENDGLSL
      }
   }
}


A water lily in Sheffield Park.

Rendering the Virtual Objects

Once we have cleared the depth buffer and marked the visible part of the mirror by setting the alpha component to 0, we can use the blend equation

Blend OneMinusDstAlpha DstAlpha

to render the virtual objects. Or can we? There is another situation in which we shouldn't render virtual objects: when they come out of the mirror! This can actually happen when real objects intersect the reflecting surface; water lilies and floating objects are examples. We can avoid the rasterization of fragments of virtual objects that are outside the mirror by discarding them with the discard instruction (see Section “Cutaways”) if their y coordinate in the local coordinate system of the mirror is positive. To this end, the vertex shader has to compute the vertex position in the local coordinate system of the mirror; the shader therefore requires the corresponding transformation matrix, which we have fortunately already set in the script above. The complete shader code for the virtual objects is then:

Shader "GLSL shader for virtual objects in mirrors" {
   Properties {
      _Color ("Virtual Object's Color", Color) = (1, 1, 1, 1) 
   } 
   SubShader {
      Tags { "Queue" = "Transparent+20" } 
         // render after mirror has been rendered
 
      Pass { 
         Blend OneMinusDstAlpha DstAlpha 
            // when the framebuffer has alpha = 1, keep its color
            // only write color where the framebuffer has alpha = 0
 
         GLSLPROGRAM
 
         // User-specified uniforms
         uniform vec4 _Color;
         uniform mat4 _WorldToMirror; // set by a script
 
         // The following built-in uniform  
         // is also defined in "UnityCG.glslinc", 
         // i.e. one could #include "UnityCG.glslinc" 
         uniform mat4 _Object2World; // model matrix
 
         // Varying
         varying vec4 positionInMirror;
 
         #ifdef VERTEX
 
         void main()
         { 
            positionInMirror = 
               _WorldToMirror * (_Object2World * gl_Vertex);
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
         }
 
         #endif
 
         #ifdef FRAGMENT
 
         void main()
         {
            if (positionInMirror.y > 0.0) 
               // reflection comes out of mirror?
            {
               discard; // don't rasterize it
            }
            gl_FragColor = vec4(_Color.rgb, 0.0); // set alpha to 0.0 
         }
 
         #endif
 
         ENDGLSL
      }
   }
}

Note that the line

Tags { "Queue" = "Transparent+20" }

makes sure that the virtual objects are rendered after the mirror, which uses "Transparent+10". In this shader, the virtual objects are rasterized with a uniform, user-specified color in order to keep the shader as short as possible. In a complete solution, the shader would compute the lighting and texturing with the mirrored normal vector and mirrored positions of light sources. However, this is straightforward and very much dependent on the particular shaders that are employed for the real objects.
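For illustration, the discard condition can be emulated on the CPU. The matrix below is a made-up worldToLocalMatrix for an axis-aligned mirror lying at height y = 2, so mirror-space y is simply world-space y minus 2; a real mirror's matrix would in general also contain rotation and scale:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

# Hypothetical worldToLocalMatrix of an axis-aligned mirror at y = 2:
# it just shifts world positions down by 2.
world_to_mirror = [[1, 0, 0,  0],
                   [0, 1, 0, -2],
                   [0, 0, 1,  0],
                   [0, 0, 0,  1]]

def should_discard(world_pos):
    """Discard fragments whose mirror-space y is positive,
    i.e. reflections that would poke out of the mirror."""
    return mat_vec(world_to_mirror, world_pos)[1] > 0.0

assert should_discard([0.0, 3.0, 0.0, 1.0])       # above the mirror plane
assert not should_discard([0.0, 1.0, 0.0, 1.0])   # below the plane: keep it
```
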

Limitations

There are several limitations of this approach which we haven't addressed. For example:

  • multiple mirror planes (virtual objects of one mirror might appear in another mirror)
  • multiple reflections in mirrors
  • semitransparent virtual objects
  • semitransparent mirrors
  • reflection of light in mirrors
  • uneven mirrors (e.g. with a normal map)
  • uneven mirrors in the free version of Unity
  • etc.

Summary

Congratulations, you have reached the end of this tutorial! We have looked at:

  • How to render mirrors with a stencil buffer.
  • How to render mirrors without a stencil buffer.

Further Reading

If you still want to know more

  • about using the stencil buffer to render mirrors, you could read Section 9.3.1 of the SIGGRAPH '98 Course “Advanced Graphics Programming Techniques Using OpenGL” organized by Tom McReynolds, which is available online.



Unless stated otherwise, all example source code on this page is granted to the public domain.