
UMass researchers aim to make self-driving cars safer

  • University of Massachusetts Professor Shlomo Zilberstein in his office at the Computer Science Lab, Tuesday, July 19.

  • Timothy Wright, a postdoctoral researcher at the University of Massachusetts, takes the wheel in the Human Performance Lab at the College of Engineering, Tuesday, July 19. He is studying the transfer of vehicle control from autonomous to driver with Professor Shlomo Zilberstein. Gazette Staff/JERREY ROBERTS

  • University of Massachusetts Professor Shlomo Zilberstein talks with Timothy Wright, inside car, at the Human Performance Lab in the College of Engineering. They are researching how to make semi-autonomous systems, such as self-driving cars, safer. Gazette Staff/JERREY ROBERTS

  • Timothy Wright opens a computer program in the Human Performance Lab at the College of Engineering. He is studying the transfer of vehicle control from autonomous to driver with Professor Shlomo Zilberstein.

  • Prof. Shlomo Zilberstein is studying the transfer of vehicle control from autonomous to driver in the University of Massachusetts Human Performance Lab at the College of Engineering. He is shown in the lab Tuesday, July 19.



For the Gazette
Monday, August 01, 2016

AMHERST — Researchers at the University of Massachusetts Amherst are studying how to make semi-autonomous systems, such as self-driving cars, safer for people to use.

Shlomo Zilberstein, a professor in the College of Information and Computer Sciences, and two of his graduate assistants, Kyle Wray and Luis Pineda, are researching how to transfer control “quickly, safely and smoothly back and forth” between the system and the person operating it.

“The real trend in artificial intelligence is to build systems that can collaborate with people,” Zilberstein said.

So far, their research, which was funded in part by the National Science Foundation, has been theoretical.

But this fall, Zilberstein and his team will recruit undergraduate and graduate students to simulate operating self-driving cars in a lab on campus. During those tests, researchers will try to develop plans for people to take over the systems when conditions are not safe for driverless operation.

“Semi-autonomous systems can do a lot of things by themselves, but once in a while they need human intervention,” Zilberstein said. “It could be a human supervising them, approving what they’re doing, or in the case of driving, taking over control. It could be a whole range of things.”

First fatal crash

Zilberstein’s research comes as attention is focused on the development and testing of self-driving cars, shortly after the first fatal crash involving one of the vehicles.

On May 7, a driver was killed in Florida when his Tesla Model S car, which featured an autopilot mode still under testing, crashed into an 18-wheel truck. According to Tesla’s website, “Neither the autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”

The National Highway Traffic Safety Administration is investigating Tesla vehicles equipped with that autopilot mode.

A goal of the UMass research is to aid the development of safer semi-autonomous systems, Zilberstein said. He and his two graduate students presented a research paper, titled “Hierarchical Approach to Transfer of Control in Semi-Autonomous Systems,” at the International Joint Conference on Artificial Intelligence in July in New York City.

“People have certain abilities and in many ways are far more advanced than computers. The best computers we have in artificial intelligence don’t have the common sense of a 3-year-old,” Zilberstein said. “What we have learned is it’s easier to program the expertise of a world expert in a very narrow domain than build systems that can do just common-sense things, but in an unrestricted world.”

Pineda said these systems have “great potential to help society, but there are still many situations in which current technology is not yet sufficient for them to be completely autonomous.”

He noted that the simulator lab experiments are important to validate their model with actual human drivers.

“Our research wants to create a model that autonomous systems can use to reason about their limitations, anticipate when a person will be more suited to do a task, and hand control to them in a way that maintains the safety of both the person and the system,” he said.
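The kind of reasoning Pineda describes can be illustrated with a toy sketch. The function name, inputs and threshold below are hypothetical, chosen only to show the idea of an agent that anticipates its own limits; they are not the team’s actual model.

```python
# Toy illustration (hypothetical names and numbers): before each road
# segment, the agent compares its estimated competence against a safety
# threshold and decides who should be in control.

def choose_controller(agent_confidence, human_ready, safety_threshold=0.95):
    """Return 'agent', 'human', or 'safe_stop' for the next segment.

    agent_confidence: agent's estimated probability of handling the
                      segment safely (e.g. from map coverage, weather).
    human_ready:      whether the person could take over in time.
    """
    if agent_confidence >= safety_threshold:
        return "agent"        # well-mapped road, good conditions
    if human_ready:
        return "human"        # agent anticipates its limits and hands off
    return "safe_stop"        # nobody capable: maintain a safe state

print(choose_controller(0.99, human_ready=False))  # agent
print(choose_controller(0.60, human_ready=True))   # human
print(choose_controller(0.60, human_ready=False))  # safe_stop
```

The point of the sketch is the third branch: the system never simply continues when neither controller is judged capable.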

Autonomous vehicles may be equipped to drive on well-mapped roads, but have more difficulty on unmapped side roads, making complete autonomy a challenging problem, Wray said.

“Our semi-autonomous systems model enables the agent to reason more broadly about additional relevant entities that might assist the agent in completing its objectives,” he said. “Semi-autonomous systems can rely on humans or other agents to compensate during scenarios in which the agent is not as capable. This greatly enhances the scope of problems that can be solved and implemented.”

More than 20 years

Zilberstein has been working in the field of artificial intelligence for more than 20 years, including in an area that focuses on automated reasoning and planning: how to build systems, robots or other artifacts that can be assigned a goal or a task and determine how to accomplish it based on appropriate knowledge, reasoning and planning ability, he said.

“So in order to accomplish that, we sometimes need to have a human in the loop. I started to get interested in that area,” he said. “What we are learning is that we can leverage what people can do best and what computers can do best right now.”

With self-driving cars, drivers in theory could have their hands off the wheel, read emails or a book or watch a movie while the system takes control of keeping them safe on the road, Zilberstein said. In this particular research, Zilberstein and his team are exploring situations in which the system needs to hand over control of the car to the operator on the fly and under time constraints.

“Suppose a person is in a self-driving car and suddenly they need to take over control. How much time do they need? What kind of notifications do they need and what if they don’t respond in a timely way or don’t respond at all to the request to take over control?” he asked. “What if the car is not authorized to drive in the rain and all of a sudden it starts to rain and the person is typing an email and lost in that activity. What happens then?”

Another goal of the research is to develop what Zilberstein calls “planning techniques” that pair a primary plan with an alternative plan, because the transfer of control cannot yet be relied on to succeed 100 percent of the time.

“If the transfer of control is not successful, the car will have to maintain some safe state and we need to make sure it is never entering a point where it cannot maintain safe state and nobody is in charge,” he said.
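The plan-plus-alternative idea Zilberstein describes can be sketched in a few lines. The timings and the fallback action below are hypothetical illustrations, not values from the research.

```python
# Toy sketch (hypothetical timings): request a takeover, give the driver
# a fixed window to respond, and execute an alternative plan (maintain a
# safe state) if the request goes unanswered, so that the car never
# reaches a point where nobody is in charge.

def transfer_control(driver_responds_after, response_window=8.0):
    """Simulate one transfer-of-control attempt.

    driver_responds_after: seconds until the driver acknowledges the
                           takeover request, or None if they never do.
    response_window:       seconds the car waits before falling back.
    """
    if driver_responds_after is not None and driver_responds_after <= response_window:
        return "driver in control"          # primary plan succeeded
    # Primary plan failed: run the alternative plan instead.
    return "safe state: slow down and pull over"

print(transfer_control(3.0))    # attentive driver takes over in time
print(transfer_control(None))   # no response: fallback plan runs
```

The alternative plan here is deliberately conservative; the research question is precisely how much time and what kind of notification a real driver needs before this fallback must fire.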

One significant challenge, Zilberstein said, is the more autonomy that is introduced, the more chance there is for people to be less engaged, which could lead to their not taking control in situations where that is required.

Zilberstein likens the concern to when people first moved from driving a stick shift to an automatic transmission.

“The more autonomy you introduce, the more likely the person is to ignore the request and rely on the car. The more people get used to not driving manually, the less competent they become,” he said. “You see how people over time could become less responsive. They could develop a false sense of confidence. If your car is doing all the work for you, you’re no longer a good driver.”

‘Unpredictability’

Another issue with self-driving cars is the “general unpredictability of human behavior,” he said.

For example, if a person is in a self-driving car and involved in another activity, there is a brief period in which they need to adjust from their activity to really focus on driving, he said.

“There is a period of time, and that is what we are measuring, of how long it takes for people to respond appropriately. But it depends on what you’re doing, like listening to music versus being immersed in email and how busy the road is,” he said.

In the Human Performance Laboratory at UMass, Zilberstein’s test subjects will participate using simulators, which create a virtual environment that is projected in a panoramic view using several screens. The simulator gives the sense they are driving in a car, including seeing the view from side and rearview mirrors, and is “very close to realistic,” he said.

“If you think about the real world, no matter how much training you do, there’s always going to be some exception. There’s the unknowns,” Zilberstein said. “People can deal with those situations not always perfectly, but using common sense. Cars can only handle things they were trained to handle.”