Cameras that can learn

October 13, 2020

Intelligent cameras could be one step closer thanks to a research collaboration between the Universities of Bristol and Manchester, which has developed cameras that can learn and understand what they are seeing.

Roboticists and artificial intelligence (AI) researchers know there is a problem in how current systems sense and process the world. They still combine sensors, such as digital cameras designed for recording images, with computing devices such as graphics processing units (GPUs) designed to accelerate graphics for video games.

This means AI systems perceive the world only after recording and transmitting visual information between sensors and processors. But much of what can be seen is irrelevant to the task at hand, such as the detail of leaves on roadside trees as an autonomous car passes by. At the moment, however, all this information is captured by the sensor in meticulous detail and transmitted onward, clogging the system with irrelevant data, consuming power and taking processing time. A different approach is necessary to enable efficient vision for intelligent machines.

Two papers from the Bristol and Manchester collaboration have shown how sensing and learning can be combined to create novel cameras for AI systems.

Walterio Mayol-Cuevas, Professor in Robotics, Computer Vision and Mobile Systems at the University of Bristol and principal investigator (PI), commented: "To create efficient perceptual systems we need to push the boundaries beyond the ways we have been following so far.

"We can borrow inspiration from the way natural systems process the visual world - we do not perceive everything - our eyes and our brains work together to make sense of the world and in some cases, the eyes themselves do processing to help the brain reduce what is not relevant."

This is demonstrated by the frog's eye, which has detectors that spot fly-like objects directly at the point where images are sensed.

The papers, one led by Dr Laurie Bose and the other by Yanan Liu at Bristol, have revealed two refinements towards this goal: implementing Convolutional Neural Networks (CNNs), a form of AI algorithm for enabling visual understanding, directly on the image plane. The CNNs the team has developed can classify frames thousands of times per second, without ever having to record these images or send them down the processing pipeline. The researchers demonstrated classification of handwritten numbers, hand gestures and even plankton.
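To make the idea concrete, the structure of such a small classifier can be sketched conventionally in NumPy. This is a toy illustration only: the weights are random and untrained, the layer sizes are invented for the example, and nothing here reflects the actual on-sensor implementation in the papers. The point is the interface, a raw frame goes in and only a class label comes out.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, k):
    """'Valid' 2D correlation of an image with a kernel."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + kh, x:x + kw] * k)
    return out

def classify(frame, kernels, weights):
    """One conv layer + ReLU + global average pooling + linear readout:
    the kind of small CNN that can be run per frame, emitting only a label."""
    feats = np.array([np.maximum(conv2d(frame, k), 0).mean() for k in kernels])
    scores = weights @ feats          # one score per class
    return int(np.argmax(scores))     # a class label, not an image

frame = rng.random((16, 16))              # a 'sensed' frame; never stored
kernels = rng.standard_normal((4, 3, 3))  # four illustrative 3x3 filters
weights = rng.standard_normal((10, 4))    # e.g. ten digit classes
label = classify(frame, kernels, weights)
```

Because only `label` leaves the function, the full-resolution frame need never be recorded or transmitted, which is the efficiency and privacy argument the researchers make.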

The research suggests a future with intelligent dedicated AI cameras - visual systems that can simply send high-level information to the rest of the system, such as the type of object or event taking place in front of the camera. This approach would make systems far more efficient and secure as no images need be recorded.

The work has been made possible thanks to the SCAMP architecture developed by Piotr Dudek, Professor of Circuits and Systems and PI from the University of Manchester, and his team. SCAMP is a camera-processor chip that the team describes as a Pixel Processor Array (PPA). A PPA has a processor embedded in each and every pixel, and these processors can communicate with one another to compute in truly parallel form. This makes the architecture ideal for CNNs and other vision algorithms.
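The style of computation a PPA supports can be sketched with whole-array operations: every pixel processor performs the same step at once, and data moves only between neighbouring pixels. The sketch below, which assumes nothing about SCAMP's actual instruction set, expresses a 3x3 convolution as nine shift-and-accumulate passes over the whole array (here `np.roll` stands in for neighbour-to-neighbour communication, with wraparound at the borders).

```python
import numpy as np

def shift(img, dy, dx):
    """Move the whole array one step: every 'pixel processor' copies
    a value from a neighbour (np.roll wraps at the borders)."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def ppa_conv3x3(img, kernel):
    """A 3x3 convolution expressed as nine whole-array shift-and-accumulate
    steps, the massively parallel data movement a PPA is built for."""
    acc = np.zeros_like(img, dtype=float)
    for ky in range(3):
        for kx in range(3):
            # acc[y, x] += kernel[ky, kx] * img[y + ky - 1, x + kx - 1]
            acc += kernel[ky, kx] * shift(img, 1 - ky, 1 - kx)
    return acc

frame = np.random.rand(8, 8)
edge = np.array([[0, -1, 0],
                 [-1, 4, -1],
                 [0, -1, 0]], dtype=float)  # an illustrative edge filter
out = ppa_conv3x3(frame, edge)
```

Each of the nine passes costs the same regardless of image size, since all pixels work simultaneously; that is why per-pixel processing suits convolutional layers so well.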

Professor Dudek said: "Integration of sensing, processing and memory at the pixel level is not only enabling high-performance, low-latency systems, but also promises low-power, highly efficient hardware.

"SCAMP devices can be implemented with footprints similar to current camera sensors, but with the ability to have a general-purpose massively parallel processor right at the point of image capture."

Dr Tom Richardson, Senior Lecturer in Flight Mechanics at the University of Bristol and a member of the project, has been integrating the SCAMP architecture with lightweight drones.

He explained: "What is so exciting about these cameras is not only the newly emerging machine learning capability, but the speed at which they run and the lightweight configuration.

"They are absolutely ideal for high speed, highly agile aerial platforms that can literally learn on the fly!"

The research, funded by the Engineering and Physical Sciences Research Council (EPSRC), has shown that it is important to question the assumptions made when AI systems are designed, and that things often taken for granted, such as cameras, can and should be improved towards the goal of more efficient intelligent machines.
Papers

'Fully embedding fast convolutional networks on pixel processor arrays' by Laurie Bose, Jianing Chen, Stephen J. Carey, Piotr Dudek and Walterio Mayol-Cuevas presented at the European Conference on Computer Vision (ECCV) 2020

'High-speed Light-weight CNN Inference via strided convolutions on a pixel processor array' by Yanan Liu, Laurie Bose, Jianing Chen, Stephen J. Carey, Piotr Dudek and Walterio Mayol-Cuevas presented at the British Machine Vision Conference (BMVC) 2020

University of Bristol
