New system combines smartphone videos to create 4D visualizations

July 01, 2020

PITTSBURGH--Researchers at Carnegie Mellon University have demonstrated that they can combine iPhone videos shot "in the wild" by separate cameras to create 4D visualizations that allow viewers to watch action from various angles, or even erase people or objects that temporarily block sight lines.

Imagine a visualization of a wedding reception, where dancers can be seen from as many angles as there were cameras, and the tipsy guest who walked in front of the bridal party is nowhere to be seen.

The videos can be shot independently from a variety of vantage points, as might occur at a wedding or birthday celebration, said Aayush Bansal, a Ph.D. student in CMU's Robotics Institute. It also is possible to record actors in one setting and then insert them into another, he added.

"We are only limited by the number of cameras," Bansal said, with no upper limit on how many video feeds can be used.

Bansal and his colleagues presented their 4D visualization method last month at the Conference on Computer Vision and Pattern Recognition (CVPR), which was held virtually.

"Virtualized reality" is nothing new, but in the past it has been restricted to studio setups, such as CMU's Panoptic Studio, which boasts more than 500 video cameras embedded in its geodesic walls. Fusing visual information of real-world scenes shot from multiple, independent, handheld cameras into a single comprehensive model that can reconstruct a dynamic 3D scene simply hasn't been possible.

Bansal and his colleagues worked around that limitation by using convolutional neural nets (CNNs), a type of deep learning program that has proven adept at analyzing visual data. They found that scene-specific CNNs could be used to compose different parts of the scene.
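To give a flavor of what a scene-specific network compositing parts of a scene might look like, here is a minimal, purely illustrative PyTorch sketch. It is not the team's actual architecture; the class name, layer choices and training setup are all assumptions made for illustration.

```python
# Illustrative sketch only: a toy scene-specific CNN that blends a rough
# foreground rendering with a background layer for one target viewpoint.
# The CVPR 2020 system is far more involved; names and layers here are
# assumptions, not the authors' method.
import torch
import torch.nn as nn

class SceneCompositor(nn.Module):
    """Predicts refined RGB plus a blending mask from stacked
    foreground/background renderings of the same viewpoint."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 4, 3, padding=1),  # 3 RGB channels + 1 mask logit
        )

    def forward(self, fg, bg):
        # fg, bg: (N, 3, H, W) rough renderings of the target viewpoint
        out = self.net(torch.cat([fg, bg], dim=1))
        rgb, mask = out[:, :3], torch.sigmoid(out[:, 3:4])
        # Blending through a learned mask is one way such a network could
        # drop a foreground occluder while keeping the background intact.
        return mask * rgb + (1.0 - mask) * bg

# One per-scene training step against a held-out real frame (L1 loss).
model = SceneCompositor()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
fg, bg, target = (torch.rand(1, 3, 128, 128) for _ in range(3))
opt.zero_grad()
loss = (model(fg, bg) - target).abs().mean()
loss.backward()
opt.step()
```

Training a fresh network of this kind per scene, against frames the cameras actually captured, is what "scene-specific" means here; nothing about the weights is expected to transfer to a different event.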

The CMU researchers demonstrated their method using up to 15 iPhones to capture a variety of scenes -- dances, martial arts demonstrations and even flamingos at the National Aviary in Pittsburgh.

"The point of using iPhones was to show that anyone can use this system," Bansal said. "The world is our studio."

The method also unlocks a host of potential applications in the movie industry and consumer devices, particularly as the popularity of virtual reality headsets continues to grow.

Though the method doesn't necessarily capture scenes in full 3D detail, the system can limit playback angles so incompletely reconstructed areas are not visible and the illusion of 3D imagery is not shattered.
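One simple way such a restriction could be realized is to clamp the virtual camera to the arc actually covered by the capture cameras. The release does not detail the system's visibility handling, so the following sketch is only a hedged illustration with hypothetical names.

```python
# Hedged sketch: restrict playback to well-covered viewpoints by clamping
# the virtual camera's azimuth to the arc spanned by the real cameras.
# The function name and margin heuristic are illustrative assumptions, not
# the authors' method; wraparound (a full 360-degree ring) is ignored.

def clamp_azimuth(requested_deg, camera_azimuths_deg, margin_deg=5.0):
    """Keep the requested viewpoint inside the covered arc, minus a margin."""
    lo = min(camera_azimuths_deg) + margin_deg
    hi = max(camera_azimuths_deg) - margin_deg
    return max(lo, min(hi, requested_deg))

# Example: 15 handheld phones spread over a 120-degree arc around a scene.
cams = [i * (120.0 / 14.0) for i in range(15)]
print(clamp_azimuth(150.0, cams))  # a request outside coverage -> 115.0
```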
In addition to Bansal, the research team included Robotics Institute faculty members Yaser Sheikh, Deva Ramanan and Srinivasa Narasimhan. The team also included Minh Vo, a former Ph.D. student who now works at Facebook Reality Lab. The National Science Foundation, Office of Naval Research and Qualcomm supported this research.

Video: https://www.youtube.com/watch?v=quovnDPwL1k&feature=youtu.be

Carnegie Mellon University

-end-
