Francis X Govers
Bell Helicopter Textron, USA
Title: The risk of artificial intelligence: Evaluation of risk and safety in learning capable unmanned systems
Abstract
Unmanned vehicles, drones, self-driving cars, and other advanced autonomous vehicles are being announced on an almost daily basis. Uber is working on flying taxis, every car company has a self-driving car in the works, and drones are the hottest Christmas toy for people of all ages. Inside these autonomous vehicles are systems based on advanced artificial intelligence, including artificial neural networks (ANN), machine learning-based systems, probabilistic reasoning, and Monte Carlo models that support complex decision making. One of the common concerns about autonomous vehicles, be they flying or driving, is safety. Safety testing is usually based on deterministic behavior: the aircraft, car, or boat, when faced with a similar situation, behaves the same way every time. But what happens when the vehicle is learning from its environment, just as we humans do? Then it may behave differently each time based on experience. How, then, do we predict and evaluate in advance how safe an autonomous system might be? This paper presents two complementary approaches to this problem. One is a stochastic model for predicting how an autonomous system might behave as it learns over time, providing a range of behavioral responses to be used as a risk assessment tool. The other is a set of methods and standards for writing test procedures for such vehicles.
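
The abstract does not spell out the stochastic model itself. As a purely illustrative sketch (the toy agent, rewards, thresholds, and function names below are invented for this example, not taken from the paper), the following Python program shows the general idea of Monte Carlo risk assessment for a learning system: repeat the entire learning process many times and report the spread of unsafe-behavior rates, rather than a single deterministic pass/fail result.

# Hypothetical illustration, not the author's model: estimate the *range* of
# behaviors a learning agent might exhibit after a given amount of experience.
# A toy agent learns by trial and error whether to brake for obstacles; because
# learning is stochastic, each simulated "life" ends with a different policy,
# so the result is a distribution of unsafe-action rates, not a single number.
import random
import statistics

def simulate_learning_agent(training_episodes: int, rng: random.Random) -> float:
    """Train one toy agent and return its post-training unsafe-action rate."""
    q_brake = 0.0         # learned value of braking
    q_no_brake = 0.0      # learned value of ignoring an obstacle
    epsilon = 0.2         # exploration rate during learning
    alpha = 0.1           # learning rate

    for _ in range(training_episodes):
        obstacle = rng.random() < 0.5          # an obstacle appears half the time
        explore = rng.random() < epsilon
        brake = (q_brake >= q_no_brake) if not explore else rng.random() < 0.5
        # Invented rewards: braking for an obstacle is safe (+1), ignoring it is a
        # crash (-10), braking needlessly costs a little (-0.1), cruising is fine (+0.5).
        if obstacle:
            reward = 1.0 if brake else -10.0
        else:
            reward = -0.1 if brake else 0.5
        if brake:
            q_brake += alpha * (reward - q_brake)
        else:
            q_no_brake += alpha * (reward - q_no_brake)

    # Evaluate the learned (greedy) policy: how often does it fail to brake for an obstacle?
    unsafe = 0
    trials = 1000
    for _ in range(trials):
        obstacle = rng.random() < 0.5
        brake = q_brake >= q_no_brake
        if obstacle and not brake:
            unsafe += 1
    return unsafe / trials

def monte_carlo_risk(training_episodes: int, runs: int = 500, seed: int = 0) -> dict:
    """Repeat the whole learning process many times to get a range of outcomes."""
    rng = random.Random(seed)
    rates = sorted(simulate_learning_agent(training_episodes, rng) for _ in range(runs))
    return {
        "median_unsafe_rate": statistics.median(rates),
        "p95_unsafe_rate": rates[int(0.95 * len(rates)) - 1],
        "worst_observed": rates[-1],
    }

if __name__ == "__main__":
    for episodes in (10, 100, 1000):
        print(episodes, "training episodes ->", monte_carlo_risk(episodes))

Because each simulated training run ends with a slightly different learned policy, the output is a distribution (median, 95th percentile, worst observed case) that typically narrows as experience accumulates. This range-of-behaviors view, rather than a single deterministic test result, is the kind of output the abstract proposes to use as a risk assessment tool.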