Better autonomous 'reasoning' at tricky intersections

November 04, 2019

MIT and Toyota researchers have designed a new model to help autonomous vehicles determine when it's safe to merge into traffic at intersections with obstructed views.

Navigating intersections can be dangerous for driverless cars and humans alike. In 2016, roughly 23 percent of fatal and 32 percent of nonfatal U.S. traffic accidents occurred at intersections, according to a 2018 Department of Transportation study. Automated systems that help driverless cars and human drivers steer through intersections can require direct visibility of the objects they must avoid. When their line of sight is blocked by nearby buildings or other obstructions, these systems can fail.

The researchers developed a model that instead uses its own uncertainty to estimate the risk of potential collisions or other traffic disruptions at such intersections. It weighs several critical factors, including all nearby visual obstructions, sensor noise and errors, the speed of other cars, and even the attentiveness of other drivers. Based on the measured risk, the system may advise the car to stop, pull into traffic, or nudge forward to gather more data.

"When you approach an intersection there is potential danger for collision. Cameras and other sensors require line of sight. If there are occlusions, they don't have enough visibility to assess whether it's likely that something is coming," says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. "In this work, we use a predictive-control model that's more robust to uncertainty, to help vehicles safely navigate these challenging road situations."

The researchers tested the system in more than 100 trials of remote-controlled cars turning left at a busy, obstructed intersection in a mock city, with other cars constantly driving through the cross street. Experiments involved fully autonomous cars and cars driven by humans but assisted by the system. In both configurations, the system successfully helped the cars avoid collisions 70 to 100 percent of the time, depending on various factors. Other similar models implemented in the same remote-controlled cars sometimes couldn't complete a single trial run without a collision.

Joining Rus on the paper are: first author Stephen G. McGill, Guy Rosman, and Luke Fletcher of the Toyota Research Institute (TRI); graduate students Teddy Ort and Brandon Araki, researcher Alyssa Pierson, and postdoc Igor Gilitschenski, all of CSAIL; Sertac Karaman, an MIT associate professor of aeronautics and astronautics; and John J. Leonard, the Samuel C. Collins Professor of Mechanical and Ocean Engineering of MIT and a TRI technical advisor.

Modeling road segments

The model is specifically designed for road junctions in which there is no stoplight and a car must yield before maneuvering into traffic on the cross street, such as taking a left turn across multiple lanes or entering a roundabout. In their work, the researchers split the road into small segments, which lets the model determine whether any given segment is occupied and estimate a conditional risk of collision for each.
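To make that discretization concrete, here is a minimal sketch in Python. The RoadSegment fields, the five-meter default segment length, and the neutral 0.5 starting probability are illustrative assumptions, not details taken from the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RoadSegment:
    start_m: float       # distance along the cross street, in meters
    length_m: float      # segment length, in meters
    visible: bool        # can the car's sensors currently see this segment?
    p_occupied: float    # estimated probability a vehicle occupies it

def discretize_cross_street(total_length_m: float,
                            seg_len_m: float = 5.0) -> List[RoadSegment]:
    """Split the cross street into equal segments, initially unseen and
    with a neutral occupancy estimate."""
    n = int(total_length_m // seg_len_m)
    return [RoadSegment(i * seg_len_m, seg_len_m, False, 0.5)
            for i in range(n)]
```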

Autonomous cars are equipped with sensors that measure the speed of other cars on the road. When a sensor clocks a passing car traveling into a visible segment, the model uses that speed to predict the car's progression through all other segments. A probabilistic "Bayesian network" also considers uncertainties -- such as noisy sensors or unpredictable speed changes -- to determine the likelihood that each segment is occupied by a passing car.
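As a rough illustration of that predict step, the sketch below pushes a single measured vehicle forward through downstream segments under Gaussian speed noise. The function name, noise model, and parameters are assumptions for illustration; the paper's actual Bayesian network is richer than this single-Gaussian stand-in.

```python
import math

def p_segment_occupied(seg_start_m: float, seg_len_m: float,
                       x0_m: float, v_mps: float,
                       sigma_v_mps: float, t_s: float) -> float:
    """Probability that a car first observed at position x0_m, moving at
    v_mps with Gaussian speed noise sigma_v_mps, lies inside the segment
    [seg_start_m, seg_start_m + seg_len_m] after t_s seconds. A crude
    stand-in for the paper's Bayesian network."""
    mu = x0_m + v_mps * t_s               # predicted position
    sigma = max(sigma_v_mps * t_s, 1e-6)  # position spread grows with time
    def cdf(x: float) -> float:           # Gaussian CDF at x
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return cdf(seg_start_m + seg_len_m) - cdf(seg_start_m)
```

Under this toy model, for example, a car clocked at 10 m/s at position 0 with 1 m/s speed noise has roughly a 49 percent chance of occupying the 20-to-25-meter segment two seconds later.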

Because of nearby occlusions, however, this single measurement may not suffice. If a sensor can never see a designated road segment, the model conservatively treats that segment as likely to be occupied, which raises the estimated risk of collision if the car simply pulls out fast into traffic from its current position. This encourages the car to nudge forward to get a better view of all occluded segments. As the car does so, the model lowers its uncertainty and, in turn, the risk, as in the sketch below.
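Continuing the illustrative RoadSegment sketch above, one way to encode that conservatism is to give every unseen segment a pessimistic occupancy prior; the 0.9 value is an assumed placeholder, not a number from the paper.

```python
OCCLUDED_PRIOR = 0.9  # assumption: treat unseen segments as likely occupied

def apply_occlusion_prior(segments):
    """Give every segment the sensors cannot see a pessimistic occupancy
    prior. As the car nudges forward, more segments become visible and
    measured estimates replace the prior, shrinking the total risk."""
    for seg in segments:
        if not seg.visible:
            seg.p_occupied = OCCLUDED_PRIOR
```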

But even if the model does everything correctly, there's still human error, so the model also estimates the awareness of other drivers. "These days, drivers may be texting or otherwise distracted, so the amount of time it takes to react may be a lot longer," McGill says. "We model that conditional risk, as well."

That depends on computing the probability that an oncoming driver has seen the autonomous car pulling into the intersection. To do so, the model counts the number of segments a traveling car has passed through before reaching the intersection: the more segments it occupied on its approach, the higher the likelihood its driver has spotted the autonomous car, and the lower the risk of collision.
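A toy version of that awareness estimate might look like the following; the exponential form and the rate constant k are invented for illustration rather than taken from the paper.

```python
import math

def p_driver_aware(segments_traversed: int, k: float = 0.5) -> float:
    """Toy mapping: the more segments a crossing car occupied on its
    approach, the more likely its driver has seen the autonomous car.
    The exponential form and rate k are illustrative assumptions."""
    return 1.0 - math.exp(-k * segments_traversed)
```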

The model sums all risk estimates from traffic speed, occlusions, noisy sensors, and driver awareness. It also considers how long it will take the autonomous car to steer a preplanned path through the intersection, as well as all safe stopping spots for crossing traffic. This produces a total risk estimate.
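Conceptually, the combination step might resemble the sketch below. The multiplicative weighting and the crossing-time term are illustrative stand-ins; the paper derives its risk metric probabilistically rather than with ad hoc weights like these.

```python
def total_risk(p_occupied_by_segment, p_aware: float,
               crossing_time_s: float, w_unaware: float = 2.0) -> float:
    """Fold per-segment occupancy, driver inattention, and exposure time
    into a single scalar. Purely illustrative weighting: the paper derives
    its risk metric probabilistically, not with ad hoc weights."""
    seg_risk = sum(p_occupied_by_segment)      # occupancy along the planned path
    inattention = w_unaware * (1.0 - p_aware)  # penalty if the driver hasn't seen us
    return crossing_time_s * seg_risk * (1.0 + inattention)
```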

That risk estimate gets updated continuously for wherever the car is located at the intersection. In the presence of multiple occlusions, for instance, it'll nudge forward, little by little, to reduce uncertainty. When the risk estimate is low enough, the model tells the car to drive through the intersection without stopping. Lingering in the middle of the intersection for too long, the researchers found, also increases risk of a collision.
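That continuous update loop can be summarized as a simple threshold policy, sketched here with placeholder thresholds rather than the paper's calibrated values.

```python
GO_THRESHOLD = 0.05     # placeholder, not the paper's calibrated value
NUDGE_THRESHOLD = 0.25  # placeholder, not the paper's calibrated value

def decide(risk: float) -> str:
    """Map the continuously updated risk estimate to an action."""
    if risk < GO_THRESHOLD:
        return "go"     # drive through the intersection without stopping
    if risk < NUDGE_THRESHOLD:
        return "nudge"  # creep forward to expose more occluded segments
    return "stop"       # hold position until the estimate improves
```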

Assistance and intervention

Running the model on the remote-controlled cars in real time indicates that it's efficient and fast enough to deploy into full-scale autonomous test cars in the near future, the researchers say. (Many other models are too computationally heavy to run on those cars.) The model would still need far more rigorous testing, however, before it could be deployed in production vehicles.

The model would serve as a supplemental risk metric that an autonomous vehicle system can use to better reason about driving through intersections safely. The model could also potentially be implemented in certain "advanced driver-assistive systems" (ADAS), where humans maintain shared control of the vehicle.

Next, the researchers aim to include other challenging risk factors in the model, such as the presence of pedestrians in and around the road junction.
Related links

Paper: "Probabilistic Risk Metrics for Navigating Occluded Intersections." http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8779655

Helping autonomous vehicles see around corners http://news.mit.edu/2019/helping-autonomous-vehicles-see-around-corners-1028

Bringing human-like reasoning to driverless car navigation http://news.mit.edu/2019/human-reasoning-ai-driverless-car-navigation-0523

Making driverless cars change lanes more like human drivers do http://news.mit.edu/2018/driverless-cars-change-lanes-like-human-drivers-0523

Massachusetts Institute of Technology
