# Introduction

In real life, cameras that use lenses cannot, in principle, get everything they see in focus at once. Only objects near a certain distance from the camera (the focus distance) appear sharp; this range is called the depth of field. Objects closer to the camera or further away appear blurred, and how blurred they are depends on the shape and size of the camera's aperture. The GPU normally renders everything perfectly sharp, but there are various techniques to simulate the depth-of-field effect. The most accurate one uses the accumulation buffer, and it is also very easy to implement: we render the scene multiple times with slightly different MVP matrices, simulating light rays passing through different parts of the lens aperture. The shape of the aperture, formed by the lens diaphragm, influences the way out-of-focus objects are blurred. This is called bokeh.
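The sampling idea can be sketched independently of OpenGL: pick n points evenly spaced on a circle of the aperture's radius, render once from each offset, and average the results. A minimal sketch of the sampling step (the `Offset` struct and `aperture_samples` function are illustrative helpers, not part of the tutorial's code):

```cpp
#include <cmath>
#include <vector>

// One 2D offset within the lens aperture; the scene is rendered once per
// offset, and averaging the renders blurs everything not at the focus distance.
struct Offset { float x, y; };

// Place n samples evenly on a circle of radius `aperture`, approximating a
// circular lens diaphragm (which determines the bokeh shape).
std::vector<Offset> aperture_samples(int n, float aperture) {
    const float two_pi = 6.28318530717958647692f;
    std::vector<Offset> samples;
    for (int i = 0; i < n; i++) {
        float angle = i * two_pi / n;
        samples.push_back({aperture * cosf(angle), aperture * sinf(angle)});
    }
    return samples;
}
```

Each sample lies exactly at distance `aperture` from the center, so all simulated rays pass through the rim of the same circular aperture.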

# Simulating depth of field using the accumulation buffer

Suppose we start with the following code that sets up the model-view-projection matrix, and then renders a frame:

```cpp
glm::mat4 modelview = glm::lookAt(eye, object, up);
glm::mat4 projection = glm::perspective(...);
glm::mat4 mvp = projection * modelview;
glUniformMatrix4fv(uniform_mvp, 1, GL_FALSE, glm::value_ptr(mvp));
draw_scene();
glutSwapBuffers();
```

Here `eye` is a vector containing the position of the eye or camera, `object` is the position of the object we want to have in the center and in focus, and `up` is a vector describing which way is up. To simulate a circular aperture, we move the camera around a circle in the plane perpendicular to the direction we are looking in. We can easily obtain two vectors spanning that plane using cross products.

If we just moved the camera around without changing the direction it is looking in, we would simply blur most of the scene. The trick is that we keep looking directly at the object we want centered and in focus. This is easy to do: we pass that object's coordinates as the second parameter of `glm::lookAt()`. The object we are looking at then stays exactly in the center of the screen, never moving, so it is not blurred. The rest is blurred depending on its depth relative to the center object.

```cpp
int n = 10;             // number of simulated light rays
float aperture = 0.05f; // radius of the lens aperture

glm::mat4 projection = glm::perspective(...);
glm::vec3 right = glm::normalize(glm::cross(object - eye, up));
glm::vec3 p_up = glm::normalize(glm::cross(right, object - eye));

for (int i = 0; i < n; i++) {
    // i-th sample point on the aperture circle
    glm::vec3 bokeh = right * cosf(i * 2 * M_PI / n) + p_up * sinf(i * 2 * M_PI / n);
    glm::mat4 modelview = glm::lookAt(eye + aperture * bokeh, object, p_up);
    glm::mat4 mvp = projection * modelview;
    glUniformMatrix4fv(uniform_mvp, 1, GL_FALSE, glm::value_ptr(mvp));
    draw_scene();
    // load the first render, then accumulate the rest, each weighted 1/n
    glAccum(i ? GL_ACCUM : GL_LOAD, 1.0 / n);
}
glAccum(GL_RETURN, 1);
glutSwapBuffers();
```
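Note that `glAccum()` only works if the OpenGL context was actually created with an accumulation buffer. How this is requested depends on the windowing library; with (free)glut, for example, it is part of the display-mode flags (this setup line is an assumption, since the tutorial does not show window creation):

```cpp
// Request an accumulation buffer alongside the usual color and depth buffers
glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH | GLUT_ACCUM);
```

Without the `GLUT_ACCUM` flag, the `glAccum()` calls above would have no buffer to operate on and the output would be undefined or black.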

# Exercises

- Apply this technique to any of the previous tutorials that use `glm::lookAt()`.
- Change the values of `n` and `aperture`.
- Most camera diaphragms are not truly circular, but polygonal. Try simulating a square or hexagonal diaphragm.
- Can you combine this technique efficiently with anti-aliasing and/or motion blur?