Cg Programming/Unity/Projection of Bumpy Surfaces
This tutorial covers (single-step) parallax mapping.
It is based on and extends Section “Lighting of Bumpy Surfaces”. Note that this tutorial is meant to teach you how this technique works. If you want to actually use parallax mapping in Unity, you should use a built-in shader that supports it.
Improving Normal Mapping
The normal mapping technique presented in Section “Lighting of Bumpy Surfaces” only changes the lighting of a flat surface to create the illusion of bumps and dents. If one looks straight onto a surface (i.e. in the direction of the surface normal vector), this works very well. However, if one looks onto a surface from some other angle (as in the image to the left), the bumps should also stick out of the surface while the dents should recede into the surface. Of course, this could be achieved by geometrically modeling bumps and dents; however, this would require processing many more vertices. On the other hand, single-step parallax mapping is a very efficient technique similar to normal mapping, which doesn't require additional triangles but can still move virtual bumps by several pixels to make them stick out of a flat surface. However, the technique is limited to bumps and dents of small heights and requires some fine-tuning for best results.
Parallax Mapping Explained
Parallax mapping was proposed in 2001 by Tomomichi Kaneko et al. in their paper “Detailed shape representation with parallax mapping” (ICAT 2001). The basic idea is to offset the texture coordinates that are used for the texturing of the surface (in particular normal mapping). If this offset of texture coordinates is computed appropriately, it is possible to move parts of the texture (e.g. bumps) as if they were sticking out of the surface.
The illustration to the left shows the view vector V in the direction to the viewer and the surface normal vector N at the point of the surface that is rasterized in a fragment shader. Parallax mapping proceeds in three steps:
- Lookup of the height $h$ at the rasterized point in a height map, which is depicted by the wavy line on top of the straight line at the bottom in the illustration.
- Computation of the intersection of the viewing ray in direction of V with a surface at height $h$ parallel to the rendered surface. The distance $o$ is the distance between the rasterized surface point moved by $h$ in the direction of N and this intersection point. If these two points are projected onto the rendered surface, $o$ is also the distance between the rasterized point and a new point on the surface (marked by a cross in the illustration). This new surface point is a better approximation to the point that is actually visible for the view ray in direction V if the surface was displaced by the height map.
- Transformation of the offset $o$ into texture coordinate space in order to compute an offset of texture coordinates for all following texture lookups.
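In code, these three steps amount to only a few shader instructions. The following is a minimal Cg sketch (not the tutorial's actual code), assuming the height h has already been read from the height map and the view direction V has already been transformed into local surface coordinates with the $z$ axis along N; _SomeTexture and texCoords are placeholder names:

// step 1 (assumed done): h was looked up in the height map
// step 2: offset via similar triangles (derived below)
float2 offset = h * V.xy / V.z;
// step 3: apply the offset to all following texture lookups
float4 color = tex2D(_SomeTexture, texCoords + offset);

The actual implementation below additionally clamps the offset and takes Unity's texture tiling and offset parameters into account.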
For the computation of $o$ we require the height $h$ of the height map at the rasterized point, which is implemented in the example by a texture lookup in the A component of the texture property _ParallaxMap, which should be a gray-scale image representing heights as discussed in Section “Lighting of Bumpy Surfaces”. We also require the view direction V in the local surface coordinate system formed by the normal vector ($z$ axis), the tangent vector ($x$ axis), and the binormal vector ($y$ axis), which was also introduced in Section “Lighting of Bumpy Surfaces”. To this end we compute a transformation from local surface coordinates to object space with:

$$M_{\text{surface}\rightarrow\text{object}} = \begin{pmatrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{pmatrix}$$
where T, B and N are given in object coordinates. (In Section “Lighting of Bumpy Surfaces” we had a similar matrix but with vectors in world coordinates.)
We compute the view direction V in object space (as the difference between the rasterized position and the camera position transformed from world space to object space) and then we transform it to the local surface space with the matrix $M_{\text{object}\rightarrow\text{surface}}$, which can be computed as:

$$M_{\text{object}\rightarrow\text{surface}} = M_{\text{surface}\rightarrow\text{object}}^{-1} = M_{\text{surface}\rightarrow\text{object}}^{T}$$
This is possible because T, B and N are orthogonal and normalized. (Actually, the situation is a bit more complicated because we won't normalize these vectors but use their length for another transformation; see below.) Thus, in order to transform V from object space to the local surface space, we have to multiply it with the transposed matrix $M_{\text{surface}\rightarrow\text{object}}^{T}$. This is actually good, because in Cg it is easier to construct the transposed matrix, as T, B and N are the row vectors of the transposed matrix.
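In Cg this construction is a one-liner because the float3x3 constructor takes row vectors. A small sketch with placeholder names T, B, N and vObject (the tutorial's shader below uses input.tangent.xyz, binormal, input.normal and viewDirInObjectCoords instead):

// listing T, B, N as rows directly yields the transposed matrix
float3x3 surface2objectT = float3x3(T, B, N);
// multiplying with the transpose transforms vObject
// from object space to local surface space
float3 vSurface = mul(surface2objectT, vObject);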
Once we have the view direction V in the local surface coordinate system with the $z$ axis in the direction of the normal vector N, we can compute the offsets $o_x$ (in $x$ direction) and $o_y$ (in $y$ direction) by using similar triangles (compare with the illustration):

$$\frac{o_x}{h} = \frac{V_x}{V_z} \quad\text{and}\quad \frac{o_y}{h} = \frac{V_y}{V_z}$$

Thus:

$$o_x = h \, \frac{V_x}{V_z} \quad\text{and}\quad o_y = h \, \frac{V_y}{V_z}$$
Note that it is not necessary to normalize V because we use only ratios of its components, which are not affected by the normalization.
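As a quick sanity check (with made-up numbers): for a height $h = 0.1$ and a view direction $V = (0.5, 0, 1)$ in local surface coordinates, the offsets are $o_x = 0.1 \cdot 0.5 / 1 = 0.05$ and $o_y = 0$; scaling V by any factor changes $V_x$, $V_y$ and $V_z$ by the same factor and therefore leaves the offsets unchanged.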
Finally, we have to transform $o_x$ and $o_y$ into texture space. This would be quite difficult if Unity didn't help us: the tangent attribute tangent is actually appropriately scaled and has a fourth component tangent.w for scaling the binormal vector such that the transformation of the view direction V scales $o_x$ and $o_y$ appropriately to have them in texture coordinate space without further computations.
Implementation
The implementation shares most of the code with Section “Lighting of Bumpy Surfaces”. In particular, the same scaling of the binormal vector with the fourth component of the tangent attribute is used in order to take the mapping of the offsets from local surface space to texture space into account:
float3 binormal = cross(input.normal, input.tangent.xyz)
   * input.tangent.w;
We have to add an output parameter for the view vector V in the local surface coordinate system (with the scaling of axes to take the mapping to texture space into account). This parameter is called viewDirInScaledSurfaceCoords. It is computed by transforming the view vector in object coordinates (viewDirInObjectCoords) with the matrix $M_{\text{object}\rightarrow\text{surface}}$ (localSurface2ScaledObjectT) as explained above:
float3 viewDirInObjectCoords = mul(
   modelMatrixInverse, float4(_WorldSpaceCameraPos, 1.0)).xyz
   - input.vertex.xyz;
float3x3 localSurface2ScaledObjectT =
   float3x3(input.tangent.xyz, binormal, input.normal);
   // vectors are orthogonal
output.viewDirInScaledSurfaceCoords =
   mul(localSurface2ScaledObjectT, viewDirInObjectCoords);
   // we multiply with the transpose to multiply with
   // the "inverse" (apart from the scaling)
The rest of the vertex shader is the same as for normal mapping (see Section “Lighting of Bumpy Surfaces”), except that the view direction in world coordinates is computed in the vertex shader instead of the fragment shader; this is necessary to keep the number of arithmetic operations in the fragment shader small enough for some GPUs.
In the fragment shader, we first query the height map for the height of the rasterized point. This height is specified by the A component of the texture _ParallaxMap. The values between 0 and 1 are transformed to the range -_Parallax/2 to +_Parallax/2 with the shader property _Parallax in order to offer some user control over the strength of the effect (and to be compatible with the fallback shader):
float height = _Parallax
   * (-0.5 + tex2D(_ParallaxMap, _ParallaxMap_ST.xy
   * input.tex.xy + _ParallaxMap_ST.zw).x);
The offsets $o_x$ and $o_y$ are then computed as described above. However, we also clamp each offset to a user-specified interval between -_MaxTexCoordOffset and +_MaxTexCoordOffset in order to make sure that the offset stays within reasonable bounds. (If the height map consists of more or less flat plateaus of constant height with smooth transitions between these plateaus, _MaxTexCoordOffset should be smaller than the thickness of these transition regions; otherwise the sample point might be in a different plateau with a different height, which would mean that the approximation of the intersection point is arbitrarily bad.) The code is:
float2 texCoordOffsets =
   clamp(height * input.viewDirInScaledSurfaceCoords.xy
   / input.viewDirInScaledSurfaceCoords.z,
   -_MaxTexCoordOffset, +_MaxTexCoordOffset);
In the following code, we have to apply the offsets to the texture coordinates in all texture lookups; i.e., we have to replace float2(input.tex) (or equivalently input.tex.xy) by (input.tex.xy + texCoordOffsets), e.g.:
float4 encodedNormal = tex2D(_BumpMap,
   _BumpMap_ST.xy * (input.tex.xy + texCoordOffsets)
   + _BumpMap_ST.zw);
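The same replacement would apply to any further texture lookup. For instance, if the shader also sampled a diffuse color texture (say, a hypothetical property _MainTex, which this tutorial's shader doesn't include), the lookup would be offset in the same way:

// hypothetical example: _MainTex is not part of this shader
float4 diffuseColor = tex2D(_MainTex,
   _MainTex_ST.xy * (input.tex.xy + texCoordOffsets)
   + _MainTex_ST.zw);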
The rest of the fragment shader code is just as it was in Section “Lighting of Bumpy Surfaces”.
Complete Shader Code
As discussed in the previous section, most of this code is taken from Section “Lighting of Bumpy Surfaces”. Note that if you want to use the code on a mobile device with OpenGL ES, make sure to change the decoding of the normal map as described in that tutorial.
The part about parallax mapping is actually only a few lines. Most of the names of the shader properties were chosen according to the fallback shader; the user interface labels are much more descriptive.
Shader "Cg parallax mapping" {
Properties {
_BumpMap ("Normal Map", 2D) = "bump" {}
_ParallaxMap ("Heightmap (in A)", 2D) = "black" {}
_Parallax ("Max Height", Float) = 0.01
_MaxTexCoordOffset ("Max Texture Coordinate Offset", Float) =
0.01
_Color ("Diffuse Material Color", Color) = (1,1,1,1)
_SpecColor ("Specular Material Color", Color) = (1,1,1,1)
_Shininess ("Shininess", Float) = 10
}
CGINCLUDE // common code for all passes of all subshaders
#include "UnityCG.cginc"
uniform float4 _LightColor0;
// color of light source (from "Lighting.cginc")
// User-specified properties
uniform sampler2D _BumpMap;
uniform float4 _BumpMap_ST;
uniform sampler2D _ParallaxMap;
uniform float4 _ParallaxMap_ST;
uniform float _Parallax;
uniform float _MaxTexCoordOffset;
uniform float4 _Color;
uniform float4 _SpecColor;
uniform float _Shininess;
struct vertexInput {
float4 vertex : POSITION;
float4 texcoord : TEXCOORD0;
float3 normal : NORMAL;
float4 tangent : TANGENT;
};
struct vertexOutput {
float4 pos : SV_POSITION;
float4 posWorld : TEXCOORD0;
// position of the vertex (and fragment) in world space
float4 tex : TEXCOORD1;
float3 tangentWorld : TEXCOORD2;
float3 normalWorld : TEXCOORD3;
float3 binormalWorld : TEXCOORD4;
float3 viewDirWorld : TEXCOORD5;
float3 viewDirInScaledSurfaceCoords : TEXCOORD6;
};
vertexOutput vert(vertexInput input)
{
vertexOutput output;
float4x4 modelMatrix = unity_ObjectToWorld;
float4x4 modelMatrixInverse = unity_WorldToObject;
output.tangentWorld = normalize(
mul(modelMatrix, float4(input.tangent.xyz, 0.0)).xyz);
output.normalWorld = normalize(
mul(float4(input.normal, 0.0), modelMatrixInverse).xyz);
output.binormalWorld = normalize(
cross(output.normalWorld, output.tangentWorld)
* input.tangent.w); // tangent.w is specific to Unity
float3 binormal = cross(input.normal, input.tangent.xyz)
* input.tangent.w;
// appropriately scaled tangent and binormal
// to map distances from object space to texture space
float3 viewDirInObjectCoords = mul(
modelMatrixInverse, float4(_WorldSpaceCameraPos, 1.0)).xyz
- input.vertex.xyz;
float3x3 localSurface2ScaledObjectT =
float3x3(input.tangent.xyz, binormal, input.normal);
// vectors are orthogonal
output.viewDirInScaledSurfaceCoords =
mul(localSurface2ScaledObjectT, viewDirInObjectCoords);
// we multiply with the transpose to multiply with
// the "inverse" (apart from the scaling)
output.posWorld = mul(modelMatrix, input.vertex);
output.viewDirWorld = normalize(
_WorldSpaceCameraPos - output.posWorld.xyz);
output.tex = input.texcoord;
output.pos = UnityObjectToClipPos(input.vertex);
return output;
}
// fragment shader with ambient lighting
float4 fragWithAmbient(vertexOutput input) : COLOR
{
// parallax mapping: compute height and
// find offset in texture coordinates
// for the intersection of the view ray
// with the surface at this height
float height = _Parallax
* (-0.5 + tex2D(_ParallaxMap, _ParallaxMap_ST.xy
* input.tex.xy + _ParallaxMap_ST.zw).x);
float2 texCoordOffsets =
clamp(height * input.viewDirInScaledSurfaceCoords.xy
/ input.viewDirInScaledSurfaceCoords.z,
-_MaxTexCoordOffset, +_MaxTexCoordOffset);
// normal mapping: lookup and decode normal from bump map
// in principle we have to normalize tangentWorld,
// binormalWorld, and normalWorld again; however, the
// potential problems are small since we use this
// matrix only to compute "normalDirection",
// which we normalize anyways
float4 encodedNormal = tex2D(_BumpMap,
_BumpMap_ST.xy * (input.tex.xy + texCoordOffsets)
+ _BumpMap_ST.zw);
float3 localCoords = float3(2.0 * encodedNormal.a - 1.0,
2.0 * encodedNormal.g - 1.0, 0.0);
localCoords.z = sqrt(1.0 - dot(localCoords, localCoords));
// approximation without sqrt: localCoords.z =
// 1.0 - 0.5 * dot(localCoords, localCoords);
float3x3 local2WorldTranspose = float3x3(
input.tangentWorld,
input.binormalWorld,
input.normalWorld);
float3 normalDirection =
normalize(mul(localCoords, local2WorldTranspose));
float3 lightDirection;
float attenuation;
if (0.0 == _WorldSpaceLightPos0.w) // directional light?
{
attenuation = 1.0; // no attenuation
lightDirection = normalize(_WorldSpaceLightPos0.xyz);
}
else // point or spot light
{
float3 vertexToLightSource =
_WorldSpaceLightPos0.xyz - input.posWorld.xyz;
float distance = length(vertexToLightSource);
attenuation = 1.0 / distance; // linear attenuation
lightDirection = normalize(vertexToLightSource);
}
float3 ambientLighting =
UNITY_LIGHTMODEL_AMBIENT.rgb * _Color.rgb;
float3 diffuseReflection =
attenuation * _LightColor0.rgb * _Color.rgb
* max(0.0, dot(normalDirection, lightDirection));
float3 specularReflection;
if (dot(normalDirection, lightDirection) < 0.0)
// light source on the wrong side?
{
specularReflection = float3(0.0, 0.0, 0.0);
// no specular reflection
}
else // light source on the right side
{
specularReflection = attenuation * _LightColor0.rgb
* _SpecColor.rgb * pow(max(0.0, dot(
reflect(-lightDirection, normalDirection),
input.viewDirWorld)), _Shininess);
}
return float4(ambientLighting + diffuseReflection
+ specularReflection, 1.0);
}
      // fragment shader for pass 2 without ambient lighting
      float4 fragWithoutAmbient(vertexOutput input) : COLOR
      {
         // parallax mapping: compute height and
         // find offset in texture coordinates
         // for the intersection of the view ray
         // with the surface at this height
         float height = _Parallax
            * (-0.5 + tex2D(_ParallaxMap, _ParallaxMap_ST.xy
            * input.tex.xy + _ParallaxMap_ST.zw).x);
         float2 texCoordOffsets =
            clamp(height * input.viewDirInScaledSurfaceCoords.xy
            / input.viewDirInScaledSurfaceCoords.z,
            -_MaxTexCoordOffset, +_MaxTexCoordOffset);

         // normal mapping: lookup and decode normal from bump map
         // in principle we have to normalize tangentWorld,
         // binormalWorld, and normalWorld again; however, the
         // potential problems are small since we use this
         // matrix only to compute "normalDirection",
         // which we normalize anyways
         float4 encodedNormal = tex2D(_BumpMap,
            _BumpMap_ST.xy * (input.tex.xy + texCoordOffsets)
            + _BumpMap_ST.zw);
         float3 localCoords = float3(2.0 * encodedNormal.a - 1.0,
            2.0 * encodedNormal.g - 1.0, 0.0);
         localCoords.z = sqrt(1.0 - dot(localCoords, localCoords));
            // approximation without sqrt: localCoords.z =
            // 1.0 - 0.5 * dot(localCoords, localCoords);
         float3x3 local2WorldTranspose = float3x3(
            input.tangentWorld,
            input.binormalWorld,
            input.normalWorld);
         float3 normalDirection =
            normalize(mul(localCoords, local2WorldTranspose));

         float3 lightDirection;
         float attenuation;

         if (0.0 == _WorldSpaceLightPos0.w) // directional light?
         {
            attenuation = 1.0; // no attenuation
            lightDirection = normalize(_WorldSpaceLightPos0.xyz);
         }
         else // point or spot light
         {
            float3 vertexToLightSource =
               _WorldSpaceLightPos0.xyz - input.posWorld.xyz;
            float distance = length(vertexToLightSource);
            attenuation = 1.0 / distance; // linear attenuation
            lightDirection = normalize(vertexToLightSource);
         }

         float3 diffuseReflection =
            attenuation * _LightColor0.rgb * _Color.rgb
            * max(0.0, dot(normalDirection, lightDirection));

         float3 specularReflection;
         if (dot(normalDirection, lightDirection) < 0.0)
            // light source on the wrong side?
         {
            specularReflection = float3(0.0, 0.0, 0.0);
               // no specular reflection
         }
         else // light source on the right side
         {
            specularReflection = attenuation * _LightColor0.rgb
               * _SpecColor.rgb * pow(max(0.0, dot(
               reflect(-lightDirection, normalDirection),
               input.viewDirWorld)), _Shininess);
         }

         return float4(diffuseReflection + specularReflection,
            1.0);
      }
   ENDCG

   SubShader {
      Pass {
         Tags { "LightMode" = "ForwardBase" }
            // pass for ambient light and first light source
         CGPROGRAM
            #pragma vertex vert
            #pragma fragment fragWithAmbient
            // the functions are defined in the CGINCLUDE part
         ENDCG
      }

      Pass {
         Tags { "LightMode" = "ForwardAdd" }
            // pass for additional light sources
         Blend One One // additive blending
         CGPROGRAM
            #pragma vertex vert
            #pragma fragment fragWithoutAmbient
            // the functions are defined in the CGINCLUDE part
         ENDCG
      }
   }
}
Summary
Congratulations! If you actually understand the whole shader, you have come a long way. In fact, the shader includes lots of concepts (transformations between coordinate systems, the Phong reflection model, normal mapping, parallax mapping, ...). More specifically, we have seen:
- How parallax mapping improves upon normal mapping.
- How parallax mapping is described mathematically.
- How parallax mapping is implemented.
Further reading
If you still want to know more
- about details of the shader code, you should read Section “Lighting of Bumpy Surfaces”.
- about parallax mapping, you could read the original publication by Tomomichi Kaneko et al.: “Detailed shape representation with parallax mapping”, ICAT 2001, pages 205–208, which is available online.