GLSL Programming/Unity/Screen Overlays

This tutorial covers screen overlays, which are also known as “GUI Textures” in Unity.

Title screen of a movie from 1934.

It is the first tutorial of a series of tutorials on non-standard vertex transformations, which deviate from the standard vertex transformations that are described in Section “Vertex Transformations”. This particular tutorial uses texturing as described in Section “Textured Spheres” and blending as described in Section “Transparency”.

Unity's GUI Textures

There are many applications for screen overlays (i.e. GUI textures in Unity's terminology), e.g. titles as in the image to the left, but also other GUI (graphical user interface) elements such as buttons or status information. The common feature of these elements is that they should always appear on top of the scene and never be occluded by any other objects. Neither should these elements be affected by any of the camera movements. Thus, the vertex transformation should go directly from object space to screen space. Unity's GUI textures allow us to render this kind of element by rendering a texture image at a specified position on the screen. This tutorial tries to reproduce the functionality of GUI textures with the help of shaders. Usually, you would still use GUI textures instead of such a shader; however, the shader allows for a lot more flexibility since you can adapt it in any way you want while GUI textures only offer a limited set of possibilities. (For example, you could change the shader such that the GPU spends less time on rasterizing the triangles that are occluded by an opaque GUI texture.)

Simulating GUI Textures with a GLSL Shader

The position of one of Unity's GUI textures is specified by an X and a Y coordinate of the lower, left corner of the rendered rectangle in pixels, with (0, 0) at the center of the screen, and a Width and Height of the rendered rectangle in pixels. To simulate GUI textures, we use similar shader properties:

   Properties {
      _MainTex ("Texture", Rect) = "white" {}
      _Color ("Color", Color) = (1.0, 1.0, 1.0, 1.0)
      _X ("X", Float) = 0.0
      _Y ("Y", Float) = 0.0
      _Width ("Width", Float) = 128
      _Height ("Height", Float) = 128
   }

and the corresponding uniforms

         uniform sampler2D _MainTex;
         uniform vec4 _Color;
         uniform float _X;
         uniform float _Y;
         uniform float _Width;
         uniform float _Height;

For the actual object, we could use a mesh that consists of just two triangles that form a rectangle. However, we can also just use the default cube object, since back-face culling (and culling of triangles that degenerate to edges) makes sure that only two triangles of the cube are rasterized. The corners of the default cube object have coordinates between (-0.5, -0.5, -0.5) and (0.5, 0.5, 0.5) in object space, i.e., the lower, left corner of the rectangle is at (-0.5, -0.5) and the upper, right corner is at (0.5, 0.5). To transform these coordinates to the user-specified coordinates in screen space, we first transform them to raster positions in pixels, where (0, 0) is at the lower, left corner of the screen:

        uniform vec4 _ScreenParams; // x = width; y = height; 
           // z = 1 + 1.0/width; w = 1 + 1.0/height
        ...
        #ifdef VERTEX

        void main()
        {    
            vec2 rasterPosition = vec2(
               _X + _ScreenParams.x / 2.0 
               + _Width * (gl_Vertex.x + 0.5),
               _Y + _ScreenParams.y / 2.0 
               + _Height * (gl_Vertex.y + 0.5));
            ...

This transformation maps the lower, left corner of the front face of our cube (x and y coordinates of -0.5 in object space) to the raster position vec2(_X + _ScreenParams.x / 2.0, _Y + _ScreenParams.y / 2.0), where _ScreenParams.x is the screen width in pixels and _ScreenParams.y is the screen height in pixels. The upper, right corner (x and y coordinates of +0.5) is mapped to vec2(_X + _ScreenParams.x / 2.0 + _Width, _Y + _ScreenParams.y / 2.0 + _Height). Raster positions are convenient and, in fact, they are often used in OpenGL; however, they are not quite what we need here.
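The mapping from object-space corners to raster positions can be checked outside the shader with a quick Python sketch. The values for _X, _Y, _Width, _Height, and the screen size below are example numbers chosen for illustration, not part of the shader:

```python
# Object-space x/y of the cube's corners are -0.5 or 0.5; the vertex
# shader maps them to raster positions in pixels. Example values:
# _X = 20, _Y = 10, _Width = _Height = 128, screen = 1024 x 768.
def raster_position(vx, vy, x=20.0, y=10.0, width=128.0, height=128.0,
                    screen_w=1024.0, screen_h=768.0):
    # same arithmetic as in the vertex shader
    rx = x + screen_w / 2.0 + width * (vx + 0.5)
    ry = y + screen_h / 2.0 + height * (vy + 0.5)
    return (rx, ry)

# lower, left corner: (_X + screen_w/2, _Y + screen_h/2)
print(raster_position(-0.5, -0.5))  # (532.0, 394.0)
# upper, right corner: additionally offset by _Width and _Height
print(raster_position(0.5, 0.5))    # (660.0, 522.0)
```

This confirms that the rectangle spans exactly _Width by _Height pixels, starting at the user-specified offset from the screen center.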

The output of the vertex shader in gl_Position is in the so-called “clip space,” as discussed in Section “Vertex Transformations”. The GPU transforms these coordinates to normalized device coordinates between -1 and +1 by dividing them by the fourth coordinate gl_Position.w in the perspective division. If we set this fourth coordinate to 1.0, this division doesn't change anything; thus, we can think of the first three coordinates of gl_Position as normalized device coordinates, where (-1, -1, -1) specifies the lower, left corner of the screen on the near plane and (1, 1, -1) specifies the upper, right corner on the near plane. (We should use the near plane to make sure that the rectangle is in front of everything else.) In order to specify any screen position in gl_Position, we have to specify it in this coordinate system. Fortunately, transforming a raster position to normalized device coordinates is not too difficult:

            gl_Position = vec4(
               2.0 * rasterPosition.x / _ScreenParams.x - 1.0,
               2.0 * rasterPosition.y / _ScreenParams.y - 1.0,
               -1.0, // near plane 
               1.0 // all coordinates are divided by this coordinate
               );

As you can easily check, this transforms the raster position vec2(0, 0) to the normalized device coordinates (-1, -1) and the raster position vec2(_ScreenParams.x, _ScreenParams.y) to (1, 1), which is exactly what we need.
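This check can be sketched the same way; the 1024×768 screen size is again only an example value:

```python
# Raster position in pixels -> normalized device coordinates,
# mirroring the gl_Position assignment in the vertex shader.
def ndc(raster_x, raster_y, screen_w=1024.0, screen_h=768.0):
    return (2.0 * raster_x / screen_w - 1.0,
            2.0 * raster_y / screen_h - 1.0,
            -1.0,  # z on the near plane
            1.0)   # w = 1.0: the perspective division has no effect

print(ndc(0.0, 0.0))       # (-1.0, -1.0, -1.0, 1.0): lower, left corner
print(ndc(1024.0, 768.0))  # (1.0, 1.0, -1.0, 1.0): upper, right corner
```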

This is all we need for the vertex transformation from object space to screen space. However, we still need to compute appropriate texture coordinates in order to look up the texture image at the correct position. Texture coordinates should be between 0 and 1, which is actually easy to compute from the vertex coordinates between -0.5 and +0.5 in object space:

            textureCoords = 
               vec4(gl_Vertex.x + 0.5, gl_Vertex.y + 0.5, 0.0, 0.0);
               // for a cube, gl_Vertex.x and gl_Vertex.y 
               // are -0.5 or 0.5
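The shift by 0.5 is the entire mapping; sketched the same way as before:

```python
# Object-space x/y in [-0.5, 0.5] -> texture coordinates in [0, 1].
def tex_coord(vx, vy):
    return (vx + 0.5, vy + 0.5)

print(tex_coord(-0.5, -0.5))  # (0.0, 0.0): lower, left corner of the texture
print(tex_coord(0.5, 0.5))    # (1.0, 1.0): upper, right corner
```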

With the varying variable textureCoords, we can then use a simple fragment program to look up the color in the texture image and modulate it with the user-specified color _Color:

         #ifdef FRAGMENT

         void main()
         {
            gl_FragColor = 
               _Color * texture2D (_MainTex, vec2(textureCoords));
         }

         #endif

That's it.

Complete Shader Code

If we put all the pieces together, we get the following shader, which uses the Overlay queue to render the object after everything else, and uses alpha blending (see Section “Transparency”) to allow for transparent textures. It also deactivates the depth test to make sure that the texture is never occluded:

Shader "GLSL shader for screen overlays" {
   Properties {
      _MainTex ("Texture", Rect) = "white" {}
      _Color ("Color", Color) = (1.0, 1.0, 1.0, 1.0)
      _X ("X", Float) = 0.0
      _Y ("Y", Float) = 0.0
      _Width ("Width", Float) = 128
      _Height ("Height", Float) = 128
   }
   SubShader {
      Tags { "Queue" = "Overlay" } // render after everything else

      Pass {
         Blend SrcAlpha OneMinusSrcAlpha // use alpha blending
         ZTest Always // deactivate depth test

         GLSLPROGRAM

         // User-specified uniforms
         uniform sampler2D _MainTex;
         uniform vec4 _Color;
         uniform float _X;
         uniform float _Y;
         uniform float _Width;
         uniform float _Height;

         // The following built-in uniforms 
         // are also defined in "UnityCG.glslinc", 
         // i.e. one could #include "UnityCG.glslinc" 
         uniform vec4 _ScreenParams; // x = width; y = height; 
            // z = 1 + 1.0/width; w = 1 + 1.0/height

         // Varyings
         varying vec4 textureCoords;

         #ifdef VERTEX


         void main()
         {
            vec2 rasterPosition = vec2(
               _X + _ScreenParams.x / 2.0 
               + _Width * (gl_Vertex.x + 0.5),
               _Y + _ScreenParams.y / 2.0 
               + _Height * (gl_Vertex.y + 0.5));
            gl_Position = vec4(
               2.0 * rasterPosition.x / _ScreenParams.x - 1.0,
               2.0 * rasterPosition.y / _ScreenParams.y - 1.0,
               -1.0, // near plane is -1.0
               1.0);

            textureCoords =
               vec4(gl_Vertex.x + 0.5, gl_Vertex.y + 0.5, 0.0, 0.0);
               // for a cube, gl_Vertex.x and gl_Vertex.y 
               // are -0.5 or 0.5
         }

         #endif

         #ifdef FRAGMENT

         void main()
         {
            gl_FragColor = 
               _Color * texture2D (_MainTex, vec2(textureCoords));
         }

         #endif

         ENDGLSL
      }
   }
}

When you use this shader for a cube object, the texture image can appear and disappear depending on the orientation of the camera. This is due to clipping by Unity, which doesn't render objects that are completely outside of the region of the scene that is visible in the camera (the view frustum). This clipping is based on the conventional transformation of game objects, which doesn't make sense for our shader. In order to deactivate this clipping, we can simply make the cube object a child of the camera (by dragging it over the camera in the Hierarchy View). If the cube object is then placed in front of the camera, it will always stay in the same relative position, and thus it won't be clipped by Unity. (At least not in the game view.)

Changes for Opaque Screen Overlays

Many changes to the shader are conceivable, e.g. a different blend mode, or a different depth (instead of the near plane) so that a few objects of the 3D scene appear in front of the overlay. Here we will only look at opaque overlays.

An opaque screen overlay will occlude triangles of the scene. If the GPU were aware of this occlusion, it wouldn't have to rasterize the occluded triangles (e.g. by using deferred rendering or early depth tests). To give the GPU a chance to apply these optimizations, we have to render the screen overlay first, by setting

Tags { "Queue" = "Background" }

Also, we should avoid blending by removing the Blend instruction. With these changes, opaque screen overlays are likely to improve performance rather than cost rasterization performance.

Summary

Congratulations, you have reached the end of another tutorial. We have seen:

  • How to simulate GUI textures with a GLSL shader.
  • How to modify the shader for opaque screen overlays.

Further Reading

If you still want to know more about the techniques used in this tutorial, you should read Section “Vertex Transformations”, Section “Textured Spheres”, and Section “Transparency”, which this tutorial builds on.



Unless stated otherwise, all example source code on this page is granted to the public domain.