OpenGL Programming/OpenGL ES Overview

What is OpenGL ES 2.0?

OpenGL for Embedded Systems (OpenGL ES) is a subset of the OpenGL 3D graphics API, designed for embedded devices such as mobile phones, PDAs, and video game consoles. Notable platforms supporting OpenGL ES 2.0 include the iPhone 3GS and later, Android 2.2 and later, and WebGL. Desktop graphics card drivers typically do not expose the OpenGL ES API directly; however, as of 2010, some graphics card manufacturers have introduced ES support in their desktop drivers [1].

Differences To Other OpenGL Versions

To those familiar with other OpenGL versions, and as a note of warning when browsing the web for OpenGL ES 2.0 information: the OpenGL ES 2.0 API is quite different from both OpenGL <= 3.0 and OpenGL ES 1.x.

  • No fixed pipeline support
    • This means there is no built-in support for lighting, fog, multitexturing, or vertex transformations (translation, rotation, etc.). These features must be implemented as custom shaders. For standard usage, however, the required shaders are quite simple and largely copy-and-paste boilerplate.
  • Vertex handling only using Vertex Arrays/Vertex Buffers
    • Immediate Mode (glBegin/glEnd) and Display Lists are not supported (see the sketch below).
  • Fewer helper functions
    • For example, glFrustum(), glTranslate(), and glRotate() do not exist.

These design decisions result in a much smaller API, but they also require more in-depth knowledge of the rendering process and more effort (lines of possibly boilerplate code) to set up the rendering pipeline. In some sense, OpenGL ES 2.0 (released 2007) was ahead of its time: for desktop OpenGL, it was not until OpenGL 3.1 (released 2009) that legacy functionality was dropped from the core specification.
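
As an illustration of the vertex handling difference, here is the same triangle drawn with desktop immediate mode and with an OpenGL ES 2.0 Vertex Array (a sketch; position_attribute is an assumed attribute index, obtained beforehand with glGetAttribLocation()):

  /* Desktop OpenGL immediate mode -- NOT available in OpenGL ES 2.0:
   *   glBegin(GL_TRIANGLES);
   *   glVertex2f(-1.0f, -1.0f);
   *   glVertex2f( 1.0f, -1.0f);
   *   glVertex2f( 0.0f,  1.0f);
   *   glEnd();
   */

  /* OpenGL ES 2.0: the same triangle via a client-side Vertex Array. */
  const GLfloat vertices[] = {
      -1.0f, -1.0f,
       1.0f, -1.0f,
       0.0f,  1.0f,
  };
  glEnableVertexAttribArray(position_attribute);
  glVertexAttribPointer(position_attribute, 2, GL_FLOAT, GL_FALSE, 0, vertices);
  glDrawArrays(GL_TRIANGLES, 0, 3);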

OpenGL ES 2.0 Pipeline Structure

The following is a rough overview of the OpenGL ES 2.0 pipeline. We will discuss the individual stages in more detail further below. For very detailed explanations, see the OpenGL ES 2.0 specification.

  1. Vertex Shader
    • Inputs: Attributes (vertex position and other per-vertex attributes such as texture positions through Vertex Arrays/Vertex Buffers), Samplers (Textures), Uniforms (Constants)
    • Outputs: gl_Position (in Clip Coordinates), gl_FrontFacing (auto-generated), gl_PointSize (for Point Sprites), user-defined Varyings (to Fragment Shader)
  2. Primitive Assembly
    • Triangles/Lines/Point Sprites
    1. Clipping
    2. Perspective Division (results in Device Coordinates)
    3. Viewport Transformation (results in Window Coordinates)
  3. Rasterization
    1. Culling
    2. Depth Offset
    3. Varying Interpolation
    4. Fragment Shader
      • Inputs: gl_FragCoord, gl_FrontFacing, gl_PointCoord, Samplers (Textures), Uniforms (Constants), interpolated Varyings (from Vertex Shader)
      • Output: gl_FragColor
  4. Fragment Operations
    1. Scissor Test
    2. Stencil Test
    3. Depth Test
    4. Blending
    5. Dithering
  5. Rendering Target
    • Drawable provided by window system (direct rendering to screen)
    • Framebuffer and attached Renderbuffers (acting as Color, Depth, or Stencil buffer) or attached Texture buffers

About Shaders In General

Shaders are small programs compiled to run on the GPU. Their syntax is similar to C, but many restrictions apply. Inputs to a shader are called Attributes (per-vertex inputs to the Vertex Shader) and Uniforms (constants shared by all vertices/fragments). User-defined outputs, passed from the Vertex Shader to the Fragment Shader, are called Varyings.

Setting up Vertex and Fragment Shaders includes passing the shaders (as strings containing GLSL) to the OpenGL ES API, compiling both shaders, linking them (during linking, the correspondence of input/output Varyings is checked), and binding buffers and values to the Attributes and Uniforms.
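
A minimal sketch of this setup in C (assuming <GLES2/gl2.h> is included and vertex_source/fragment_source contain the GLSL code; the status checks are abbreviated):

  static GLuint compile_shader(GLenum type, const char *source)
  {
      GLuint shader = glCreateShader(type);
      glShaderSource(shader, 1, &source, NULL); /* hand the GLSL string to the API */
      glCompileShader(shader);
      /* in real code, check GL_COMPILE_STATUS via glGetShaderiv() here */
      return shader;
  }

  static GLuint create_program(const char *vertex_source, const char *fragment_source)
  {
      GLuint program = glCreateProgram();
      glAttachShader(program, compile_shader(GL_VERTEX_SHADER, vertex_source));
      glAttachShader(program, compile_shader(GL_FRAGMENT_SHADER, fragment_source));
      glLinkProgram(program); /* Varying correspondence is checked during linking */
      /* in real code, check GL_LINK_STATUS via glGetProgramiv() here */
      return program;
  }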

Vertex Shader

The Vertex Shader is called once for each input vertex. Its main task is to provide vertex positions for the following stages of the pipeline. Additionally, it can calculate further attributes to be used as input for the Fragment Shader later. The most basic shader just takes vertex positions as input and assigns the input data directly to the built-in gl_Position output. Typically, the shader multiplies the vertex position by the modelview projection matrix (passed as a Uniform) to allow translation and rotation of input geometry as well as perspective projection, possibly passes texture coordinates through, and calculates lighting parameters, as sketched below.
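
A minimal sketch of such a Vertex Shader, written as a C string as it would be passed to the API; the names mvp_matrix, a_position, a_texcoord, and v_texcoord are illustrative assumptions:

  static const char *vertex_source =
      "uniform mat4 mvp_matrix;                   \n" /* modelview projection matrix */
      "attribute vec4 a_position;                 \n" /* per-vertex position */
      "attribute vec2 a_texcoord;                 \n" /* per-vertex texture coordinate */
      "varying vec2 v_texcoord;                   \n" /* passed on to the Fragment Shader */
      "void main() {                              \n"
      "    v_texcoord = a_texcoord;               \n"
      "    gl_Position = mvp_matrix * a_position; \n"
      "}                                          \n";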

Additional user outputs typically include:

  • Texture coordinates. These may be just passed through from input attributes for simple texturing but also might get generated or processed for implementing reflective surfaces and environment mapping or other effects such as dynamic texturing.
  • Fog factor. For a fog effect, the distance of the primitive from the eye can be calculated in the vertex shader. The fragment shader can later fade out the fragment based on this value.
  • Lighting parameters. Based on light source positions (passed as Uniform constants) and vertex normals (needed as additional per-vertex input), lighting parameters can be generated for the fragment shader.

Note that texture access via Samplers in vertex shaders is optional in OpenGL ES 2.0 and might not be supported on some devices.

Primitive Assembly

In the Primitive Assembly stage, several coordinate transformations are done:

  • Clipping. Primitives lying outside the viewing volume are discarded, and primitives lying partially outside the view will be clipped. Varying outputs of the vertex shader get clipped, too.
  • Perspective Division. The x, y, and z components of gl_Position are normalized to [-1.0, 1.0] by division by the fourth component, w. The result is normalized Device Coordinates (see the sketch after this list).
  • Viewport Transformation. Coordinates are transformed to window coordinates by means of a linear transformation using the parameters set by glViewport() and glDepthRangef().
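
For illustration, a sketch of this fixed-function math in C (the vec4 type is a stand-in for gl_Position in Clip Coordinates; x, y, w, h are the glViewport() parameters and n, f those of glDepthRangef()):

  typedef struct { float x, y, z, w; } vec4;

  void to_window_coords(vec4 clip, float x, float y, float w, float h,
                        float n, float f, float out[3])
  {
      /* Perspective Division: Clip Coordinates -> normalized Device Coordinates */
      float xd = clip.x / clip.w;
      float yd = clip.y / clip.w;
      float zd = clip.z / clip.w;
      /* Viewport Transformation: Device Coordinates -> Window Coordinates */
      out[0] = x + (xd + 1.0f) * 0.5f * w;
      out[1] = y + (yd + 1.0f) * 0.5f * h;
      out[2] = n + (zd + 1.0f) * 0.5f * (f - n);
  }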

Rasterization

Rasterization is the process of converting a primitive into a two-dimensional image, i.e., calculating the set of fragments (pixels) covered by each primitive. For polygon rasterization, this includes the following steps.

  • Culling. Polygons viewed from the back can be discarded if enabled, using glFrontFace() and glCullFace() (see the sketch after this list).
  • Depth Offset. A depth offset can be applied to polygon coordinates using glPolygonOffset(). This can prevent Z-fighting between polygons that lie in the same plane.
  • Varying Interpolation. Vertex shader Varying outputs and depth are interpolated when prepared as input for the Fragment Shader.
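
A typical state setup for culling and depth offset might look as follows (a sketch; the parameter values are illustrative):

  glFrontFace(GL_CCW);              /* counter-clockwise polygons are front-facing */
  glEnable(GL_CULL_FACE);
  glCullFace(GL_BACK);              /* discard polygons viewed from the back */
  glEnable(GL_POLYGON_OFFSET_FILL);
  glPolygonOffset(1.0f, 1.0f);      /* push coplanar polygons apart in depth */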

Fragment Shader

The Fragment Shader is called once for each fragment (pixel) of a rasterized primitive. Its main task is to provide a color value for each output fragment. The most basic Fragment Shader just assigns a constant value to its gl_FragColor output. Typically, the Fragment Shader performs a texture lookup and implements lighting based on the lighting parameters the Vertex Shader computed previously.
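
A minimal sketch of such a texturing Fragment Shader, again as a C string; u_texture and v_texcoord are assumptions, and v_texcoord must match the Varying written by the Vertex Shader:

  static const char *fragment_source =
      "precision mediump float;                            \n" /* mandatory in ES 2.0 */
      "uniform sampler2D u_texture;                        \n" /* texture Sampler */
      "varying vec2 v_texcoord;                            \n" /* interpolated by the rasterizer */
      "void main() {                                       \n"
      "    gl_FragColor = texture2D(u_texture, v_texcoord);\n"
      "}                                                   \n";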

Fragment Operations

  • Scissor testing. If enabled using glEnable(GL_SCISSOR_TEST), only pixels in a specified rectangular region are drawn. Configure using glScissor().
  • Stencil buffer testing. If enabled using glEnable(GL_STENCIL_TEST), pixels may be updated only when passing a test against the stencil buffer. Configure using glStencil*().
  • Depth buffer testing. If enabled using glEnable(GL_DEPTH_TEST), pixels are only drawn if passing the depth buffer test, implementing hidden-surface removal. Configure using glDepthFunc(). A depth buffer needs to be available.
  • Blending. If enabled using glEnable(GL_BLEND), pixels output by the Fragment Shader may be blended with pixel values already present in the output buffer. Blending is configured using glBlend*() (a typical setup is sketched after this list).
  • Dithering. If enabled using glEnable(GL_DITHER), dithering may be used to increase the perceived color depth. No further control of the dithering process is possible.
  • Antialiasing. Using glEnable(GL_SAMPLE_COVERAGE), glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE), and glSampleCoverage(), simple antialiasing may be configured.
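
A typical configuration of some of these operations might look as follows (a sketch with illustrative parameter values):

  glEnable(GL_SCISSOR_TEST);
  glScissor(0, 0, 256, 256);        /* restrict drawing to a 256x256 region */
  glEnable(GL_DEPTH_TEST);
  glDepthFunc(GL_LESS);             /* keep fragments closer to the eye */
  glEnable(GL_BLEND);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* standard alpha blending */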

Rendering Target

There are several possible targets for the generated pixels.

  • Drawable provided by window system (direct rendering to screen)
  • Framebuffer and attached Renderbuffers

Framebuffer objects have three attachment points: a Color attachment (either a Renderbuffer or a Texture), a Depth attachment, and a Stencil attachment (both Renderbuffers). Note that Color Renderbuffers cannot be used as texture sources.

  • glGenRenderbuffers(), glGenFramebuffers(), glBindRenderbuffer(), glRenderbufferStorage(), glBindFramebuffer(), glFramebufferRenderbuffer(), glFramebufferTexture*(), glCheckFramebufferStatus(), glDeleteRenderbuffers(), glDeleteFramebuffers()
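
A sketch of a render-to-texture setup using these calls; width, height, and color_texture (an already created texture object) are assumptions:

  GLuint framebuffer, depth_renderbuffer;
  glGenFramebuffers(1, &framebuffer);
  glGenRenderbuffers(1, &depth_renderbuffer);

  /* allocate storage for the depth Renderbuffer */
  glBindRenderbuffer(GL_RENDERBUFFER, depth_renderbuffer);
  glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

  /* attach a Texture as color buffer and the Renderbuffer as depth buffer */
  glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                         GL_TEXTURE_2D, color_texture, 0);
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                            GL_RENDERBUFFER, depth_renderbuffer);

  if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
      /* the framebuffer is incomplete; handle the error */
  }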

Examples

TODO - see [2] etc. for code examples
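
In the meantime, here is a minimal per-frame drawing sketch, assuming a program and vertex data set up as in the snippets above:

  glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  glUseProgram(program);
  glDrawArrays(GL_TRIANGLES, 0, 3);
  /* ... followed by eglSwapBuffers() or the platform's equivalent */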

How to...

Configure Perspective, Translate And Rotate Objects

Applying the modelview and projection transformations, as well as translation/rotation of world objects, has to be done in the Vertex Shader. You should use a utility library for these matrix calculations. The underlying math is described, for example, in the documentation of the desktop OpenGL helper functions that are omitted in OpenGL ES 2.0, such as glTranslate(), glRotate(), glFrustum(), and gluLookAt().
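
As an illustration of what such a utility library computes, a sketch of a glTranslate()-style helper (OpenGL matrices are column-major, so the translation occupies elements 12-14):

  void matrix_translation(float m[16], float x, float y, float z)
  {
      int i;
      for (i = 0; i < 16; i++)          /* start from the identity matrix */
          m[i] = (i % 5 == 0) ? 1.0f : 0.0f;
      m[12] = x;                        /* fourth column: translation */
      m[13] = y;
      m[14] = z;
  }

The resulting matrix would then be uploaded to the Vertex Shader's matrix Uniform with glUniformMatrix4fv().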

Further Reading
