MIT research: Faster computer graphics

June 13, 2011

CAMBRIDGE, Mass. -- Photographs of moving objects are almost always a little blurry. To make their work look as much like conventional film as possible, game and movie animators try to reproduce this blur. But producing blurry images is actually more computationally complex than producing perfectly sharp ones.

In August, at this year's Siggraph conference -- the premier computer-graphics conference -- researchers from the Computer Graphics Group at MIT's Computer Science and Artificial Intelligence Laboratory will present a pair of papers that describe new techniques for computing blur much more efficiently. The result could be more convincing video games and frames of digital video that take minutes rather than hours to render.

The image sensor in a digital camera, and even the film in a conventional camera, can be thought of as a grid of color detectors, each detector corresponding to one pixel in the final image. If the objects being photographed are stationary, then during a single exposure, each detector registers the color of just one point on an object's surface. But if the objects are moving, light from different points on an object, and even from different objects, will strike a single detector. The detector effectively averages the colors of all the points, and the result is blur.
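The averaging the paragraph describes can be sketched in a few lines. This is an illustrative toy, not code from the papers: `scene_color_at` is a hypothetical one-dimensional "scene" with a bright object sliding across a dark background, and `exposed_pixel` averages what a detector at position `x` sees over the exposure.

```python
def scene_color_at(x, t):
    """Hypothetical scene: a bright (1.0) object of width 1 moving right
    at speed 2 over a dark (0.0) background. Returns the brightness seen
    at horizontal position x at time t."""
    object_left = 2.0 * t
    object_right = object_left + 1.0
    return 1.0 if object_left <= x < object_right else 0.0

def exposed_pixel(x, exposure=1.0, samples=100):
    """Average the color at x over the exposure, as a sensor detector does."""
    total = sum(scene_color_at(x, (i + 0.5) / samples * exposure)
                for i in range(samples))
    return total / samples
```

A pixel the object passes through for only part of the exposure ends up an intermediate gray; that partial average is exactly the blur.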

Digitally rendering a frame of video is a computationally intensive process with several discrete stages. First, the computer has to determine how the objects in the scene are moving. Second, it has to calculate how rays of light from an imagined light source would reflect off the objects. Finally, it determines which rays of light would actually reach an imagined lens. If the objects in the video are moving slowly enough, the computer has to go through that process only once per frame. If the objects are moving rapidly, however, it may have to repeat the whole process dozens or even hundreds of times for a single frame, averaging the results to capture the blur.
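The three stages, and why blur multiplies the work, can be sketched schematically. Everything here is a hypothetical stand-in (the real stages are vastly more expensive); the point is only the structure: the full animate/shade/project pipeline runs once per temporal sample, and the samples are averaged.

```python
def animate(scene, t):
    # Stand-in stage 1: an "object" is just a position moving at unit speed.
    return scene["start"] + t

def shade(position):
    # Stand-in stage 2: brightness falls off with distance from a light at 0.
    return 1.0 / (1.0 + abs(position))

def project_to_lens(shaded):
    # Stand-in stage 3: pass the shaded value straight to the lens.
    return shaded

def render_frame(scene, exposure, temporal_samples):
    """Run the whole pipeline once per temporal sample and average.
    With fast motion, temporal_samples may be in the hundreds."""
    total = 0.0
    for i in range(temporal_samples):
        t = (i + 0.5) / temporal_samples * exposure
        total += project_to_lens(shade(animate(scene, t)))
    return total / temporal_samples
```

Even a modest number of temporal samples converges on the blurred result, but each one repeats all three stages, which is where the cost comes from.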


Given how difficult blurring is to calculate, you might think that animators would simply ignore it. But that leads to surprisingly unconvincing video. "The motion doesn't look fluid at all," says Jaakko Lehtinen, who worked on both projects as a postdoc in the Computer Graphics Group and is now a senior research scientist with graphics-chip manufacturer Nvidia.

To get a sense of what motion without blur looks like, Lehtinen says, consider the type of clay animation familiar from old movies or Christmas specials such as "Rudolph the Red-Nosed Reindeer." "This doesn't have motion blur, because the scene is actually stationary when you take the picture," Lehtinen says. "It just looks choppy. The motion doesn't look natural."

The MIT researchers took two different approaches to simplifying the computation of blur, corresponding to two different stages in the graphics-rendering pipeline. Graduate student Jonathan Ragan-Kelley is the lead author on one of the Siggraph papers, joined by associate professor Frédo Durand, who leads the Computer Graphics Group; Lehtinen; graduate student Jiawen Chen; and Michael Doggett of Lund University in Sweden.

In that paper, the researchers make the simplifying assumption that the way in which light reflects off a moving object doesn't change over the course of a single frame. For each pixel in the final image, their algorithm still averages the colors of multiple points on objects' surfaces, but it calculates those colors only once. The researchers found a way to represent the relationship between the color calculations and the shapes of the associated objects as entries in a table. For each pixel in the final image, the algorithm simply looks up the corresponding values in the table. That drastically simplifies the calculation but has little effect on the final image.

Turning the tables

The second of the Computer Graphics Group's Siggraph papers, led by Lehtinen and also featuring Durand, Chen and two of Lehtinen's Nvidia colleagues, reduces the computational burden of determining which rays of light would reach an imagined lens. To produce convincing motion blur, digital animators might ordinarily consider the contributions that more than 100 discrete points on the surfaces of moving objects make to the color value of a single pixel. Lehtinen and his colleagues' algorithm instead looks at a smaller number of points -- maybe 16 or so -- and makes an educated guess about the color values of the points in between. The result: A frame of digital video that would ordinarily take about an hour to render might instead take about 10 minutes.
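The trade the second paper makes, as described here, can be sketched as follows: instead of averaging ~100 true samples per pixel, evaluate only a handful (say 16) and reconstruct the values in between. The piecewise-linear interpolation below is a deliberately crude stand-in for the paper's actual reconstruction, and `true_color` is a hypothetical expensive color function.

```python
def true_color(t):
    # Hypothetical expensive color at time t; varies smoothly over the exposure.
    return t * t

def brute_force_average(n=100):
    """Reference: average ~100 true samples, as a renderer ordinarily might."""
    return sum(true_color((i + 0.5) / n) for i in range(n)) / n

def interpolate(t, ts, cs):
    """Piecewise-linear guess at the color at time t from sparse samples,
    clamped at the ends of the sampled range."""
    if t <= ts[0]:
        return cs[0]
    if t >= ts[-1]:
        return cs[-1]
    for k in range(len(ts) - 1):
        if ts[k] <= t <= ts[k + 1]:
            w = (t - ts[k]) / (ts[k + 1] - ts[k])
            return cs[k] * (1 - w) + cs[k + 1] * w

def sparse_average(n_sparse=16, n_estimates=100):
    """Evaluate true_color only n_sparse times; estimate the rest."""
    ts = [(i + 0.5) / n_sparse for i in range(n_sparse)]
    cs = [true_color(t) for t in ts]
    return sum(interpolate((j + 0.5) / n_estimates, ts, cs)
               for j in range(n_estimates)) / n_estimates
```

When the color varies smoothly, 16 true evaluations plus interpolation lands very close to the 100-sample average at a fraction of the cost, which is the shape of the speedup the article describes.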

In fact, both techniques apply not only to motion blur but also to the type of blur that occurs in, say, the background of an image when the camera is focused on an object in the foreground. That, too, is something that animators seek to reproduce. "Where the director and the cinematographer choose to focus the lens, it directs your attention when you're looking at the picture in subtle ways," Lehtinen says. If an animated film has no such lapses in focus, "there's just something wrong with it," Lehtinen says. "It doesn't look like a movie." Indeed, Lehtinen says, even though the paper has yet to be presented, several major special-effects companies have already contacted the researchers about the work.

Massachusetts Institute of Technology
