UPNA researcher develops less complex and more accurate 3-D reconstruction algorithms

February 15, 2013

In his PhD thesis, Leonardo de Maeztu-Reinares proposes 3D reconstruction algorithms that are on a par with the best available techniques in terms of results, yet can be executed more rapidly on a computer. His work is based on stereoscopic vision, a technique for obtaining three-dimensional images which, to produce accurate results, normally demands a heavy computational load and considerable algorithmic complexity. His work has been published in international journals such as IEEE Transactions on Pattern Analysis and Machine Intelligence.

"The basic principle behind stereoscopy is the use of two or more cameras that can simultaneously pick up the same scene from different positions, similar to what human eyes do," explains this researcher. That way, two or more images are captured for each instant in time and are compared with each other to work out how far the objects are from the cameras, and that way capture the depth lacking in a classical photograph, which is two-dimensional. "This availability of two-dimensional perspectives enables the three-dimensional reconstruction of the scene to be made by using algorithms that seek matching points between images.

Thanks to the continual increase in the computing power of computers, they are routinely used for surveillance, measuring and inspection tasks that until recently were the preserve of human beings. With just one camera or video camera, today's PCs can carry out surveillance of large crowds, inspect parts at the end of a production line, or record the licence plates of vehicles passing through a particular zone. The problem is that a single camera limits the capacity of these systems, because it yields only a two-dimensional representation of a three-dimensional world. To overcome this, three-dimensional sensors exist; one of the most widely used, owing to its similarity to the human visual system, consists of two cameras that record the same scene from two slightly separated viewpoints.

"In the 1970s and 1980s," says Leonardo de Maeztu, "many scientists thought about imitating the way in which our brains obtain three-dimensional reconstructions using images supplied by each eye. However, the differences between our brains and computers soon became apparent. This is why solutions adapted to the way computers work became gradually more important."

Novel research

This researcher's PhD thesis has concentrated on developing 3D reconstruction algorithms that a computer can execute rapidly. Besides proposing new solutions that are more straightforward than existing ones, he has worked intensively with recently released computers whose new types of processors allow complex tasks to be executed quickly. "One of the algorithms proposed," he explains, "yields better results than previous algorithms of the same class and, what is more, offers a very interesting competitive advantage: it can be implemented in real time using a standard graphics card. Although these algorithms require a great deal of computing power, if the full potential of current graphics processors is exploited they can be executed in real time, in other words, processing as many images per second as the corresponding camera captures."
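To give a sense of the class of methods involved (a minimal, generic sketch of local, window-based stereo matching, not the algorithm proposed in the thesis), the Python fragment below computes a disparity map for a rectified grayscale image pair by comparing small windows with the sum of absolute differences. Because each pixel's disparity is computed independently of the others, this kind of local method maps naturally onto the many parallel cores of a graphics processor, which is what makes real-time execution feasible. The window size and disparity range are illustrative assumptions.

    # Naive local (block-matching) stereo sketch using the sum of absolute
    # differences -- the generic family of local methods, not the thesis's algorithm.
    import numpy as np

    def block_matching_disparity(left, right, max_disparity=64, window=5):
        """Return an integer disparity map for a rectified grayscale image pair."""
        half = window // 2
        left = left.astype(np.int32)
        right = right.astype(np.int32)
        rows, cols = left.shape
        disparity = np.zeros((rows, cols), dtype=np.int32)
        for y in range(half, rows - half):
            for x in range(half + max_disparity, cols - half):
                # Reference window in the left image.
                patch = left[y - half:y + half + 1, x - half:x + half + 1]
                best_cost, best_d = np.inf, 0
                # Slide the window along the same row of the right image.
                for d in range(max_disparity):
                    candidate = right[y - half:y + half + 1,
                                      x - d - half:x - d + half + 1]
                    cost = np.abs(patch - candidate).sum()
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                disparity[y, x] = best_d
        return disparity

    # Usage (left_img and right_img assumed to be rectified grayscale numpy arrays):
    # disp = block_matching_disparity(left_img, right_img)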

As this is a new line of research, he did intensive documentation work during the early years of the thesis. He also spent four months in Bologna (Italy) supervised by Prof Stefano Mattoccia, a leading researcher in the field of stereo vision, with whom he continued to collaborate until completing his thesis. De Maeztu was awarded top honours with an international distinction for his PhD thesis, entitled "Towards accurate and real-time local stereo correspondence algorithms: computational efficiency and massively parallel architectures." As a result of it, eight articles and papers have been published in international journals and conference proceedings, notably a publication in the most important journal in the field, IEEE Transactions on Pattern Analysis and Machine Intelligence, and a paper at one of the two most important conferences in the field, the IEEE International Conference on Computer Vision.
-end-


Elhuyar Fundazioa
