U Mich researchers teaching self-driving cars to predict pedestrian movement; Bio-LSTM

University of Michigan researchers are teaching self-driving cars to recognize and predict pedestrian movements with greater precision than current technologies. The approach relies on a novel objective function that incorporates the periodicity of human walking (gait), the mirror symmetry of the human body, and the change of ground reaction forces in a human gait cycle.

A paper on the work, supported by a grant from Ford Motor Company, is published in IEEE Robotics and Automation Letters.

Data collected by vehicles through cameras, LiDAR and GPS allow
the researchers to capture video snippets of humans in motion and
then recreate them in 3D computer simulation. With that, they’ve
created a biomechanically inspired recurrent neural network
(Bio-LSTM) that catalogs human movements.

With it, they can predict poses and future locations for one or several pedestrians up to about 50 yards from the vehicle. That's roughly the scale of a city intersection.
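To make the idea of frame-by-frame pose prediction concrete, here is a toy sketch in Python. It is not the paper's Bio-LSTM; a constant-velocity extrapolation stands in for the learned recurrent step, and the joint count and array shapes are assumptions for illustration only.

```python
import numpy as np

# Hypothetical stand-in for recurrent pose prediction: estimate the next
# 3D pose (J joints x 3 coordinates) from a short history of observed poses.
# A constant-velocity step is the simplest baseline a trained LSTM improves on.

J = 17  # assumed number of skeleton joints (e.g., a COCO-style skeleton)

def predict_next_pose(history: np.ndarray) -> np.ndarray:
    """history: (T, J, 3) array of past 3D poses; returns a (J, 3) prediction."""
    velocity = history[-1] - history[-2]   # per-joint displacement per frame
    return history[-1] + velocity          # constant-velocity extrapolation

# usage: two observed frames of a pedestrian walking along the x axis
hist = np.zeros((2, J, 3))
hist[1, :, 0] = 0.5                        # every joint moved 0.5 m in x
pred = predict_next_pose(hist)             # each joint extrapolated to x = 1.0
```

A learned model replaces the velocity rule with a recurrent cell conditioned on the full history, but the input/output contract is the same: past poses in, next pose out.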

Prior work in this area has typically only looked at still images. It wasn't really concerned with how people move in three dimensions. But if these vehicles are going to operate and interact in the real world, we need to make sure our predictions of where a pedestrian is going don't coincide with where the vehicle is going next.

—Ram Vasudevan, U-M assistant professor of mechanical
engineering

Equipping vehicles with the necessary predictive power requires
the network to dive into the minutiae of human movement: the pace
of a human’s gait (periodicity), the mirror symmetry of limbs,
and the way in which foot placement affects stability during
walking.
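One way such biomechanical terms can enter a network is as penalties in the training loss. The sketch below is an illustrative assumption, not the paper's actual objective: it combines a plain pose error with a mirror-symmetry penalty that discourages predictions where paired left/right limb segments have different lengths.

```python
import numpy as np

def bio_inspired_loss(pred, target, left_idx, right_idx, w_sym=0.1):
    """Illustrative pose loss with a mirror-symmetry penalty.

    pred, target: (J, 3) arrays of 3D joint positions.
    left_idx, right_idx: (parent, child) joint index pairs for a
    left limb segment and its right-side mirror.
    """
    # data term: mean Euclidean error over all joints
    pose_err = np.mean(np.linalg.norm(pred - target, axis=1))

    # symmetry term: mirrored limb segments should be equally long
    left_len = np.linalg.norm(pred[left_idx[0]] - pred[left_idx[1]])
    right_len = np.linalg.norm(pred[right_idx[0]] - pred[right_idx[1]])
    sym_err = abs(left_len - right_len)

    return pose_err + w_sym * sym_err
```

Gait periodicity and ground-reaction effects could be encoded as further additive terms in the same spirit.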

Much of the machine learning used to bring autonomous technology to its current level has dealt with two-dimensional images: still photos. A computer shown several million photos of a stop sign will eventually come to recognize stop signs in the real world and in real time.

By utilizing video clips that run for several seconds, the U-M
system can study the first half of the snippet to make its
predictions, and then verify the accuracy with the second half.
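That train-on-the-first-half, score-on-the-second-half scheme is easy to express in code. This helper is a hypothetical sketch of the idea, not code from the study:

```python
def split_clip(frames):
    """Split a pose sequence down the middle: the first half conditions
    the predictor, the second half is held out as ground truth for
    scoring the predictions."""
    mid = len(frames) // 2
    return frames[:mid], frames[mid:]

# usage: a 6-frame snippet yields 3 conditioning frames and 3 held-out frames
observed, held_out = split_clip(list(range(6)))
```

The model never sees the held-out half during prediction, so comparing its forecasts against those frames gives an honest accuracy measure.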

Now, we're training the system to recognize motion and make predictions of not just one single thing (whether it's a stop sign or not) but where that pedestrian's body will be at the next step and the next and the next.

—Matthew Johnson-Roberson, associate professor in U-M’s
Department of Naval Architecture and Marine Engineering

To explain the kind of extrapolations the neural network can
make, Vasudevan describes a common sight.

If a pedestrian is playing with their phone, you know they're distracted. Their pose and where they're looking are telling you a lot about their level of attentiveness. It's also telling you a lot about what they're capable of doing next.

—Ram Vasudevan

The results have shown that this new system improves upon a
driverless vehicle’s capacity to recognize what’s most likely
to happen next.

The median translation error of our prediction was approximately
10 cm after one second and less than 80 cm after six seconds. All
other comparison methods were up to 7 meters off. We’re better at
figuring out where a person is going to be.

—Matthew Johnson-Roberson
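The metric quoted above, median translation error, measures how far the predicted pedestrian position lands from the true one. A minimal sketch of how it is computed (the function name and array layout are assumptions):

```python
import numpy as np

def median_translation_error(pred_centers, true_centers):
    """pred_centers, true_centers: (N, 3) arrays of predicted vs.
    ground-truth pedestrian positions in meters.
    Returns the median Euclidean distance between the two."""
    errors = np.linalg.norm(pred_centers - true_centers, axis=1)
    return float(np.median(errors))
```

The median, unlike the mean, is robust to a few badly mispredicted pedestrians, which is why it is a common summary statistic for this kind of evaluation.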

To rein in the number of options for predicting the next movement, the researchers applied the physical constraints of the human body, such as the inability to fly or the fastest possible speed on foot.
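Constraints like these can be enforced by clamping a raw prediction to physically plausible values. The snippet below is an illustrative sketch, not the paper's method; the speed cap and frame interval are assumed round numbers.

```python
import numpy as np

MAX_SPEED = 9.0   # m/s, a rough upper bound on human sprinting (assumed)
DT = 0.1          # seconds between frames (assumed)

def apply_physical_constraints(prev_pos, pred_pos):
    """Clamp a predicted pedestrian position so it stays physically
    plausible: no moving faster than MAX_SPEED between frames, and no
    sinking below the ground plane (z = 0)."""
    step = pred_pos - prev_pos
    dist = np.linalg.norm(step)
    max_step = MAX_SPEED * DT
    if dist > max_step:
        step = step * (max_step / dist)   # cap the per-frame displacement
    out = prev_pos + step
    out[2] = max(out[2], 0.0)             # pedestrians cannot go underground
    return out
```

Pruning impossible outcomes this way shrinks the search space the network must reason over, which is exactly the role the paper's biomechanical priors play.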

To create the dataset used to train U-M’s neural network,
researchers parked a vehicle with Level 4 autonomous features at
several Ann Arbor intersections. With the car’s cameras and LiDAR
facing the intersection, the vehicle could record multiple days of
data at a time.

Researchers bolstered that real-world, "in the wild" data with traditional pose datasets captured in a lab. The result is a system that will raise the bar for what driverless vehicles are capable of.

Resources

  • Xiaoxiao Du, Ram Vasudevan, Matthew Johnson-Roberson (2019). "Bio-LSTM: A Biomechanically Inspired Recurrent Neural Network for 3D Pedestrian Pose and Gait Prediction." IEEE Robotics and Automation Letters. doi: 10.1109/LRA.2019.2895266
