Google's new system captures character lighting for virtually any environment

November 18, 2019

Even novice photographers and videographers shooting on handheld devices often consider their subject's lighting. Lighting is critical in filmmaking, gaming, and virtual and augmented reality; it can make or break a scene and the performers in it. Yet replicating realistic character lighting remains a difficult challenge in computer graphics and computer vision.

While significant progress has been made on volumetric capture systems that focus on 3D geometric reconstruction with high-resolution textures, such as methods for achieving realistic shapes and textures of the human face, much less work has been done to recover the photometric properties needed for relighting characters. Results from such systems lack fine detail, and the subject's shading is prebaked into the texture.

Computer scientists at Google are revolutionizing this area of volumetric capture technology with a novel, comprehensive system that can, for the first time, capture full-body reflectance of 3D human performances and seamlessly blend them into the real world through AR or into digital scenes in films, games, and more. Google will present the new system, called The Relightables, at ACM SIGGRAPH Asia, held Nov. 17 to 20 in Brisbane, Australia. SIGGRAPH Asia, now in its 12th year, attracts the most respected technical and creative people from around the world in computer graphics, animation, interactivity, gaming, and emerging technologies.

There have been major advances in what the industry calls 3D capture systems. Through these sophisticated systems, viewers have watched digital characters come to life on the big screen, for instance in blockbusters such as Avatar and the Avengers series.

Indeed, volumetric capture technology has reached a high level of quality, but many of these reconstructions still lack true photorealism. In particular, despite using high-end studio setups with green screens, these systems still struggle to capture high-frequency detail on humans, and they recover only a fixed illumination condition. That makes them unsuitable for photorealistic rendering of actors or performers in arbitrary scenes under different lighting conditions.
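To see concretely why baked-in shading blocks relighting, consider a toy calculation (a minimal sketch with made-up numbers, not the paper's pipeline): a texture captured under fixed studio lighting already contains that lighting, so compositing it into a new scene applies illumination twice.

```python
# Toy illustration (hypothetical numbers, not the paper's pipeline) of why
# shading "baked" into a captured texture prevents correct relighting.

albedo = 0.6          # true surface reflectance (what a relightable capture recovers)
studio_shading = 0.9  # illumination at capture time, baked into the texture
scene_shading = 0.3   # illumination of the new target environment

baked_texture = albedo * studio_shading        # what a fixed-light capture stores

# Naive compositing multiplies the new scene lighting onto the baked texture,
# so the studio lighting is applied twice:
naive = baked_texture * scene_shading          # 0.6 * 0.9 * 0.3 = 0.162

# A relightable capture separates out the albedo and applies only the new light:
correct = albedo * scene_shading               # 0.6 * 0.3 = 0.18

print(f"naive composite: {naive:.3f}  vs  correct relight: {correct:.3f}")
```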

Google's Relightables system makes it possible to customize lighting on characters in real time or relight them in any given scene or environment.

The researchers demonstrate this on subjects recorded inside a custom geodesic sphere, also called a Light Stage capture system, outfitted with 331 custom color LED lights, an array of high-resolution cameras, and a set of custom high-resolution depth sensors. The Relightables system captures about 65 GB per second of raw data from nearly 100 cameras, and its computational framework makes it possible to process data effectively at this scale. A video demonstration of the project can be seen here: https://youtu.be/anBRroZWfzI
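For a rough sense of that scale, here is a back-of-the-envelope calculation from the figures quoted above (the even per-camera split is an assumption for illustration, not a published spec):

```python
# Scale check using the numbers quoted above: ~65 GB/s of raw data
# from nearly 100 cameras. The uniform per-camera split is an assumption.

total_rate_gb_s = 65
num_cameras = 100

per_camera_mb_s = total_rate_gb_s * 1000 / num_cameras  # ~650 MB/s per camera
one_minute_tb = total_rate_gb_s * 60 / 1000             # ~3.9 TB per minute

print(f"~{per_camera_mb_s:.0f} MB/s per camera")
print(f"~{one_minute_tb:.1f} TB of raw data per minute of capture")
```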

The system captures reflectance information on a person; the way lighting interacts with skin is a major factor in how realistic digital humans appear. Previous attempts either used flat lighting or required computer-generated characters. Not only does the system capture a person's reflectance, it records while the person moves freely within the capture volume. As a result, the animation can be relit in arbitrary environments.
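To make the idea concrete, the sketch below relights a surface from captured reflectance and normals under a single Lambertian directional light. This is a deliberately minimal illustration with made-up data, not the authors' method, which recovers far richer reflectance and works on full video.

```python
import numpy as np

# Minimal Lambertian relighting sketch (illustrative only): once per-pixel
# reflectance (albedo) and surface normals are captured, the subject can be
# re-shaded under any new directional light.

h, w = 4, 4
albedo = np.full((h, w, 3), [0.8, 0.6, 0.5])  # captured RGB reflectance (made up)
normals = np.zeros((h, w, 3))
normals[..., 2] = 1.0                         # flat surface facing the camera

def relight(albedo, normals, light_dir, light_rgb):
    """Shade captured reflectance under a new directional light."""
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)                    # unit light direction
    ndotl = np.clip(normals @ l, 0.0, None)   # Lambertian cosine term
    return albedo * ndotl[..., None] * light_rgb

# Relight the same capture under a warm key light from above-right:
warm = relight(albedo, normals, light_dir=[0.3, 0.5, 1.0],
               light_rgb=np.array([1.0, 0.9, 0.7]))
print(warm[0, 0])  # one relit pixel's RGB
```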

Historically, cameras record people from a single viewpoint under a single lighting condition. The new system, the researchers note, lets users record someone and then view them from any viewpoint and under any lighting condition, removing the need for a green screen to create special effects and allowing far more flexible lighting.

The interplay of space, light, and shadow between a performer and their environment plays a critical role in creating a sense of presence. Beyond simply 'cutting and pasting' a 3D video capture, the system makes it possible to record someone and then seamlessly place them into new environments, whether their own space for an AR experience or the world of a VR, film, or game experience.

At SIGGRAPH Asia, The Relightables team will present the components of their system, from capture to processing to display, with video demos of each stage. They will walk attendees through the ins and outs of building The Relightables, describing the major challenges they tackled in the work and showcasing some cool applications and renderings.
-end-
The Google researchers behind The Relightables include: Kaiwen Guo, Peter Lincoln, Philip Davidson, Jay Busch, Xueming Yu, Matt Whalen, Geoff Harvey, Sergio Orts-Escolano, Rohit Pandey, Jason Dourgarian, Danhang Tang, Anastasia Tkach, Adarsh Kowdle, Emily Cooper, Mingsong Dou, Sean Fanello, Graham Fyffe, Christoph Rhemann, Jonathan Taylor, Paul Debevec, and Shahram Izadi. The researchers' paper can be accessed at https://dl.acm.org/citation.cfm?id=3356571.


Association for Computing Machinery
