Microsoft Research
Published 6 October 2021, 1:46
In this talk I will present my current and future work towards enabling safe real-world autonomy. My core focus is to enable efficient and safe decision-making in complex autonomous systems, while reasoning about uncertainty in real-world environments, including those involving human interactions.
First I will discuss safety for complex systems in simple environments. Traditional methods for generating safety analyses and safe controllers struggle to handle realistic, complex models of autonomous systems, and therefore fall back on simplistic, less accurate models. I have developed scalable techniques for theoretically sound safety guarantees that can reduce computation by orders of magnitude for high-dimensional systems, resulting in better safety analyses and paving the way for safety in real-world autonomy.
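To make the idea of a safety analysis and a safe controller concrete, here is a minimal sketch on a hypothetical toy system, not the speaker's method: a 1D vehicle approaching an obstacle, where a state (position, velocity) is certified safe if worst-case braking stops it in time, and a least-restrictive filter overrides the nominal command only when safety would otherwise be lost. All parameters are assumptions for illustration.

```python
# Toy safety analysis (illustrative only): a state is safe if full braking
# stops the vehicle before the obstacle; the set of such states is the toy
# analogue of the safe sets the abstract refers to.

A_MAX = 3.0        # maximum braking deceleration (m/s^2), hypothetical
X_OBSTACLE = 50.0  # obstacle position (m), hypothetical

def is_safe(position: float, velocity: float) -> bool:
    """Return True if worst-case braking stops the vehicle before the obstacle."""
    if velocity <= 0.0:
        return position < X_OBSTACLE          # stopped or moving away
    stopping_distance = velocity ** 2 / (2.0 * A_MAX)
    return position + stopping_distance < X_OBSTACLE

def safe_controller(position: float, velocity: float, desired_accel: float) -> float:
    """Least-restrictive filter: pass the desired command through while the
    successor state remains provably safe; otherwise brake fully."""
    dt = 0.1
    next_v = velocity + desired_accel * dt
    next_x = position + velocity * dt
    return desired_accel if is_safe(next_x, next_v) else -A_MAX

if __name__ == "__main__":
    print(is_safe(10.0, 12.0))                # True: 24 m stopping distance fits before 50 m
    print(is_safe(40.0, 12.0))                # False: braking cannot avoid the obstacle
    print(safe_controller(40.0, 12.0, 1.0))   # -3.0: the safety filter overrides the command
```

This toy set has a closed-form description; for realistic high-dimensional dynamics, computing such sets is exactly where the computational burden the abstract mentions arises.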
Next I will turn to complex environments. Safety analyses depend on pre-defined assumptions that will often be wrong in practice, as real-world systems inevitably encounter incomplete knowledge of the environment and other agents. Reasoning efficiently and safely in unstructured environments is an area where humans excel compared to current autonomous systems. Inspired by this, I have used models of human decision-making from cognitive science to develop algorithms that allow autonomous systems to navigate quickly and safely, adapt to new information, and reason over the uncertainty inherent in predicting humans and other agents. Combining these techniques brings us closer to the goal of safe real-world autonomy.
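As one hedged illustration of reasoning over uncertainty about other agents (a generic sketch, not the speaker's algorithm), the snippet below maintains a Bayesian belief over which goal a human is walking toward, using a Boltzmann-rational ("noisily rational") model of human decision-making common in cognitive science. The goal positions and rationality coefficient are assumed for the example.

```python
# Minimal Bayesian goal inference with a Boltzmann-rational human model.
# The belief over goals is updated from each observed step of the human.

import math

GOALS = {"door": (10.0, 0.0), "desk": (0.0, 10.0)}   # hypothetical goal positions
BETA = 2.0                                            # assumed rationality coefficient

def step_cost(pos, step, goal):
    """Extra distance-to-goal incurred by taking `step` from `pos`."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    new_pos = (pos[0] + step[0], pos[1] + step[1])
    return dist(new_pos, goal) - dist(pos, goal)

def update_belief(belief, pos, observed_step):
    """One Bayesian update: P(goal | step) ∝ P(step | goal) * P(goal),
    with P(step | goal) ∝ exp(-BETA * cost of the step under that goal)."""
    posterior = {}
    for goal_name, goal_pos in GOALS.items():
        likelihood = math.exp(-BETA * step_cost(pos, observed_step, goal_pos))
        posterior[goal_name] = likelihood * belief[goal_name]
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

if __name__ == "__main__":
    belief = {"door": 0.5, "desk": 0.5}                      # uniform prior
    belief = update_belief(belief, (0.0, 0.0), (1.0, 0.0))   # human steps toward the door
    print(belief)   # belief shifts toward "door" (~0.89); a planner can hedge over both goals
```

The point of the sketch is only that a planner which keeps this full posterior, rather than committing to the most likely goal, can stay both efficient and safe when its predictions about people turn out to be wrong.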