Cg Programming/Unity/Minimal Image Effect

This tutorial covers the basic steps to create a minimal image effect in Unity for image post-processing of camera views. If you are not familiar with texturing, you should read Section “Textured Spheres” first.

Post-processing effect applied to a video image.

Image Post-Processing in Unity

After a virtual camera has rendered an image, it is often useful to apply some image post-processing to the image. There are artistic reasons for this (e.g., achieving a certain visual style) but there are also technical reasons (e.g., it is often more efficient to implement dynamic ambient occlusion or depth-of-field in the image post-processing instead of implementing these effects as part of the rendering).

In Unity, each image post-processing step is called an "image effect." Each image effect consists of a C# script and a shader file that specifies a fragment shader, which computes the pixels of the output image. In order to apply the shader, we also have to create a material as explained below.

Creating the Shader

Creating a Cg shader for an image effect is not complicated: In the Project Window, click on Create and choose Shader > Image Effect Shader. A new file named “NewImageEffectShader” should appear in the Project Window. Double-click it to open it (or right-click and choose Open). A text editor with the default shader in Cg should appear.

The following shader is a bit more useful than the default shader. You can copy and paste it into the shader file:

Shader "tintImageEffectShader"
{
   Properties
   {
      _MainTex ("Source", 2D) = "white" {}
      _Color ("Tint", Color) = (1,1,1,1)
   }
   SubShader
   {
      Cull Off 
      ZWrite Off 
      ZTest Always

      Pass
      {
         CGPROGRAM
         #pragma vertex vertexShader
         #pragma fragment fragmentShader
			
         #include "UnityCG.cginc"

         struct vertexInput
         {
            float4 vertex : POSITION;
            float2 texcoord : TEXCOORD0;
         };

         struct vertexOutput
         {
            float2 texcoord : TEXCOORD0;
            float4 position : SV_POSITION;
         };

         vertexOutput vertexShader(vertexInput i)
         {
            vertexOutput o;
            o.position = mul(UNITY_MATRIX_MVP, i.vertex);
            o.texcoord = i.texcoord;
            return o;
         }
			
         sampler2D _MainTex;
         float4 _MainTex_ST;
         float4 _Color;

         float4 fragmentShader(vertexOutput i) : COLOR
         {
            float4 color = tex2D(_MainTex, 
               UnityStereoScreenSpaceUVAdjust(
               i.texcoord, _MainTex_ST));		
            return color * _Color;
         }
         ENDCG
      }
   }
   Fallback Off
}

The shader has two properties: _Color is a color that this shader uses to tint the color of all pixels. _MainTex is a render texture that contains either the camera view that was rendered by the camera or the output of the previous image effect. A render texture object can be used like a 2D texture for texturing, but cameras can also render into it as if it were a framebuffer. Render textures are ideal for image effects because a camera (or the previous image effect) can render an image into one, and the image can then be fed into the next image effect as if it were a texture.

The shader deactivates face culling (with Cull Off) and depth testing (with ZTest Always) in order to make sure that the whole image is processed. It also deactivates writing to the depth buffer (with ZWrite Off) in order not to change the depth buffer.

The vertex shader vertexShader() applies the standard transformations to the vertex positions and passes through the texture coordinates. For image effects, the vertices are usually the corners of the camera viewport. The texture coordinates specify the positions of these corners in the texture coordinate space (from 0 to 1).

The fragment shader fragmentShader() is then called for each pixel of the output image. It can use the interpolated texture coordinates to access the pixels of the input image _MainTex. For some target platforms (in particular for stereo rendering in virtual reality) additional transformations of the texture coordinates are necessary; these are handled by Unity's UnityStereoScreenSpaceUVAdjust() function.

This fragment shader just reads the color of the corresponding pixel in the input image and multiplies it with _Color to tint it. Fragment shaders of more advanced image effects read the colors of multiple pixels at various positions in the input image and combine them in complex ways to compute the color of each output pixel.
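To illustrate, here is a sketch of a fragment shader that averages each pixel with its four direct neighbors (a very simple blur). It assumes that a variable _MainTex_TexelSize is declared next to _MainTex; Unity automatically fills such a variable with the size of one texel of the corresponding texture. For brevity, this sketch ignores the stereo UV adjustment:

   sampler2D _MainTex;
   float4 _MainTex_TexelSize; // x = 1/width, y = 1/height (set by Unity)

   float4 fragmentShader(vertexOutput i) : COLOR
   {
      // offsets to the horizontally and vertically adjacent pixels
      float2 dx = float2(_MainTex_TexelSize.x, 0.0);
      float2 dy = float2(0.0, _MainTex_TexelSize.y);

      // average the pixel with its four direct neighbors
      float4 sum = tex2D(_MainTex, i.texcoord)
         + tex2D(_MainTex, i.texcoord + dx)
         + tex2D(_MainTex, i.texcoord - dx)
         + tex2D(_MainTex, i.texcoord + dy)
         + tex2D(_MainTex, i.texcoord - dy);
      return sum / 5.0;
   }

Note that each output pixel requires five texture lookups here; more sophisticated blurs usually split the work into a horizontal and a vertical pass to keep the number of lookups manageable.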

Creating a Material and Attaching the Shader

One way of creating a material for this shader is to click Create > Material in the Project Window. You can then drag & drop the shader over the new material. When you select the new material, the preview in the Inspector Window should show the effect of the shader; if it doesn't, you might have to click on the grey bar at the bottom of the Inspector Window to make it appear. If there is no preview or the sphere is bright magenta, an error message should be displayed at the bottom of the Unity window and in the Console window (which you can open from the main menu with Window > General > Console). In this case you have to fix the error in the shader file.

Applying the Shader to the Camera View

The last step is to apply the material and its shader to the image of the camera view. This requires a small script that has to be attached to the camera; here is an example in C#, which should be saved as "tintImageEffectScript.cs":

using System;
using UnityEngine;

[RequireComponent(typeof(Camera))]
[ExecuteInEditMode]

public class tintImageEffectScript : MonoBehaviour {

   public Material material;
   
   void Start() 
   {
      // deactivate the image effect if the material 
      // or its shader is missing or unsupported
      if (material == null || material.shader == null || 
         !material.shader.isSupported)
      {
         enabled = false;
         return;
      }
   }

   void OnRenderImage(RenderTexture source, RenderTexture destination)
   {
      // apply the material's shader to the source image and 
      // write the result to the destination render texture
      Graphics.Blit(source, destination, material);
   }
}

This script has a public variable material which has to be set to the material that you have created. The Start() function just checks whether everything is in order; if not, it deactivates the script and thereby the image effect. The real work is performed in the OnRenderImage() function. Unity calls this function for each image effect with the input image (as a render texture) in the source variable and expects the output image in the destination variable (which is also a render texture). The standard way of applying the material and its shader to the input image in source is the call Graphics.Blit(source, destination, material), which rasterizes all pixels in the destination render texture using the shader in material with the source as input texture _MainTex.

The scripts of many image effects are a lot more complex because they compute multiple intermediate images (sometimes of different dimensions) instead of just using one call to Graphics.Blit(). In these cases, multiple shaders are used by a single image effect. This is often implemented with a single shader file with multiple passes. The shader of a specific pass is then used with a call like Graphics.Blit(source, destination, material, pass) where pass is the index of the pass that is used.
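As a sketch of such a multi-pass setup (hypothetical; it assumes that material uses a shader with at least two passes), OnRenderImage() could render the first pass into a temporary render texture and the second pass into the destination:

   void OnRenderImage(RenderTexture source, RenderTexture destination)
   {
      // intermediate image with the same dimensions as the source
      RenderTexture temp = RenderTexture.GetTemporary(
         source.width, source.height);
      Graphics.Blit(source, temp, material, 0);       // first pass
      Graphics.Blit(temp, destination, material, 1);  // second pass
      RenderTexture.ReleaseTemporary(temp);
   }

RenderTexture.GetTemporary() and ReleaseTemporary() are used here instead of allocating a new render texture every frame because Unity pools temporary render textures, which is considerably cheaper.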

Summary

Congratulations, you have learned the basics about image effects in Unity. A few of the things you have seen are:

  • How to create a shader for an image effect.
  • How to create a material for such a shader.
  • How to create a C# script for an image effect.

Further reading

If you still want to know more


Unless stated otherwise, all example source code on this page is granted to the public domain.