Mapping with math

December 02, 2002

HANOVER, N.H. - In an unexpected meeting of the minds, two Dartmouth professors from disparate fields have come together to solve a problem: how to make accurate models of remote landscapes from photographs.

Arjun Heimsath, Assistant Professor of Earth Sciences, and Hany Farid, Assistant Professor of Computer Science, have found a way to create three-dimensional models of remote regions using only two-dimensional digital photographs. Once built, these models make it easier for researchers to predict landslides, erosion rates and other geomorphic events.

"It started after I got back from one of my trips to Nepal," says Heimsath. "I wasn't able to survey the area I wanted because it was so hard to get to on foot. I'd seen Hany's work, and I wondered if he could create the models I needed from photographs."

Usually, global positioning systems, satellite technology and other intensive surveying techniques are used to create digital elevation models, or DEMs. These methods can be expensive, time-consuming, or physically impossible to carry out in some parts of the world, and the equipment can be cumbersome, explains Heimsath. Farid, whose research focuses on image processing and computer vision, immediately realized he could help.

"We sketched out the idea on a napkin over lunch," says Farid. "I asked Arjun to take some photographs on his next trip, and we tested our theory within about three weeks. It didn't really work at first, but it worked well enough to keep going."

Their collaboration resulted in a paper in the November 2002 issue of Mathematical Geology that describes a new method of obtaining DEMs without walking through poison oak, navigating rough or unstable terrain, or hauling around big, expensive and delicate equipment.

"With our method, you breeze in with a digital camera, and with relative ease, you get the DEM," says Heimsath.

On any single photograph there is not enough information to calculate the DEM, explains Farid. But with at least three images of the same region, taken from slightly different vantage points, you can capture all the necessary data. Once the images are in the computer, the researcher has to manually pick corresponding spots on each picture, identifying the same shrub, the same boulder, and so on.

"After you pick somewhere between 50 and 100 points, the mathematical algorithm takes over and automatically estimates the elevation map," says Farid.

Farid explains that much of the math they utilized was developed for other applications. What he and Heimsath added were constraints unique to the geometry of the Earth's surface. These constraints help to better condition, or fine-tune, the mathematical algorithms.

"One of the strikingly elegant aspects of our method is that you've got the pictures," says Heimsath, "so you know what your output is supposed to look like. If you run this model and you get something that doesn't look like the picture, then you know you've done something wrong."

The algorithms are not without limitations, however, and the researchers caution that their methodology hasn't been rigorously field tested yet. One limitation is the type of landscape being modeled. Ideally, the ground surface shouldn't be covered in vegetation; for the calculations to work, the photos have to clearly show the ground. Also, when taking the photos, the researcher needs a good vantage point some distance away from the terrain being mapped.

"It's no good to be looking at the area you want to map from below. It's better if you are on a hillside adjacent to the area, across the valley or on a nearby ridge," says Heimsath.

Both researchers agree that it was a fun collaborative project.

"What was nice about the work, and what's representative of Dartmouth, is that I'm taking tools from the mathematics and computer vision community," says Farid, "and applying them to a real-world problem that Arjun works on. It was just a good fit and a natural partnership. The fact that we live next door to each other helped maintain the momentum."

From the original lunch in the café to publication took about eight months. Farid and Heimsath say it's probably the quickest project they've ever worked on. The next step is to move from theory to real-life application.

Two of their students, Deane Somerville, from Sherborn, Mass., and Layne Moffett, from Tulsa, Okla., both Dartmouth undergraduates from the Class of '05, plan to travel to New Zealand in January to test the theory. The students will go to areas that have already been surveyed by conventional methods and take digital photos to see how the results of the new methodology compare with what's known. If they don't match, the students can immediately return to the field and take more pictures for further tests.

In addition to publishing their paper in Mathematical Geology, Farid and Heimsath will present their research at the American Geophysical Union's annual meeting in December.

Farid's research is funded by the National Science Foundation and an Alfred P. Sloan Fellowship. Heimsath is also supported by the National Science Foundation.

Dartmouth College
