
Light Field Camera


Overview

As the paper by Ng et al. demonstrates, capturing multiple images over a plane orthogonal to the optical axis makes it possible to achieve complex effects, such as refocusing and simulated aperture changes, using very simple operations like shifting and averaging. The goal of this project is to reproduce some of these effects using real light field data.

In this project, we use sample datasets from the Stanford Light Field Archive, each of which comprises 289 views captured on a 17x17 grid.
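As a rough illustration of how such a dataset can be organized, the sketch below loads the 289 views into a single 5-D array. It assumes the grid row and column are the first two numbers embedded in each filename, which should be checked against the actual naming scheme of the archive; load_lightfield is a hypothetical helper, not project code:

```python
import re
from pathlib import Path

import numpy as np
from skimage.io import imread

def load_lightfield(directory, rows=17, cols=17):
    """Read all PNG views in `directory` into a (rows, cols, H, W, 3) array.

    Assumes each filename embeds the grid row and column as its first
    two numbers (e.g. out_03_12_....png); adjust the parsing if the
    dataset uses a different convention.
    """
    lf = None
    for path in sorted(Path(directory).glob('*.png')):
        r, c = map(int, re.findall(r'\d+', path.name)[:2])
        img = imread(path)
        if lf is None:
            lf = np.zeros((rows, cols) + img.shape, dtype=img.dtype)
        lf[r, c] = img
    return lf
```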

1) Depth Refocusing

Objects far from the camera do not change position significantly when the camera moves around with the direction of the optical axis held fixed, whereas nearby objects shift substantially across images. Averaging all the images in the grid without any shifting therefore produces an image that is sharp for the far-away objects but blurry for the nearby ones. Likewise, shifting the images appropriately before averaging allows one to focus on objects at different depths.

In this part of the project, we implement this idea to generate multiple images focused at different depths, as sketched below. To get the best effect, we average over all 289 grid images.
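A minimal sketch of the shift-and-average operation, assuming the (17, 17, H, W, 3) array produced above; refocus is a hypothetical helper, and the sign and scale of the alpha parameter depend on the dataset's geometry (alpha = 0 leaves the views unshifted, so distant objects stay sharp):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(lf, alpha):
    """Shift each view by alpha times its offset from the grid center, then average.

    lf:    (rows, cols, H, W, 3) array of sub-aperture views.
    alpha: scalar controlling the focal depth; larger magnitudes move
           the plane of focus toward nearer objects.
    """
    rows, cols = lf.shape[:2]
    cr, cc = (rows - 1) / 2, (cols - 1) / 2            # grid center, (8, 8) for 17x17
    acc = np.zeros(lf.shape[2:], dtype=np.float64)
    for r in range(rows):
        for c in range(cols):
            dy, dx = alpha * (r - cr), alpha * (c - cc)  # shift proportional to grid offset
            acc += nd_shift(lf[r, c].astype(np.float64), (dy, dx, 0),
                            order=1, mode='nearest')     # bilinear sub-pixel shift
    return (acc / (rows * cols)).astype(np.uint8)
```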

Below are averages of the chess dataset refocused at five different depths:

The same sequence as a GIF:

One more example:

2) Aperture Adjustment

Averaging a large number of images sampled over the grid perpendicular to the optical axis mimics a camera with a much larger aperture, while averaging fewer images mimics a smaller one. In this part, we average only the views within a given radius of the grid center while keeping the focus point fixed; varying the radius corresponds to varying the aperture, as sketched below.
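A minimal sketch under the same assumptions as above: only views whose grid position lies within a given radius of the center are averaged (vary_aperture is a hypothetical helper; passing the same alpha as in refocusing keeps the focus point fixed while the radius varies):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def vary_aperture(lf, radius, alpha=0.0):
    """Average only views within `radius` grid units of the grid center.

    A small radius uses few views and mimics a small aperture (deep
    depth of field); the full 17x17 grid mimics a wide-open aperture.
    """
    rows, cols = lf.shape[:2]
    cr, cc = (rows - 1) / 2, (cols - 1) / 2
    acc = np.zeros(lf.shape[2:], dtype=np.float64)
    n = 0
    for r in range(rows):
        for c in range(cols):
            if (r - cr) ** 2 + (c - cc) ** 2 > radius ** 2:
                continue                                  # outside the synthetic aperture
            dy, dx = alpha * (r - cr), alpha * (c - cc)
            acc += nd_shift(lf[r, c].astype(np.float64), (dy, dx, 0),
                            order=1, mode='nearest')
            n += 1
    return (acc / n).astype(np.uint8)
```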

Below are averages of the chess dataset computed over five different radii:

Here are the results: