EECS Colloquium

Wednesday, October 16, 2019

306 Soda Hall (HP Auditorium)
4:00 – 5:00 pm

Stuart Russell

Professor of Computer Science
Berkeley EECS

Photo: CS Professor Stuart Russell (Noah Berger, 2011)


It is reasonable to expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? While some in the mainstream AI community dismiss the issue, I will argue instead that a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. I will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help. This uncertainty causes machine and human behavior to be inextricably (and game-theoretically) linked, while opening up many new avenues for research.
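To make the idea of "explicit uncertainty concerning the true objectives" concrete, here is a minimal toy sketch (not from the talk, and much simpler than the assistance-game framing it gestures at): an assistant holds a probability distribution over candidate human objectives, updates it by Bayesian inference from an observed human choice, and then acts to maximize expected utility under that belief. All names here (`candidate_objectives`, the actions, the `rationality` parameter) are illustrative assumptions.

```python
import math

# Hypothetical example: two candidate objectives, each scoring three actions.
candidate_objectives = {
    "likes_coffee": {"make_coffee": 1.0, "make_tea": 0.0, "do_nothing": 0.1},
    "likes_tea":    {"make_coffee": 0.0, "make_tea": 1.0, "do_nothing": 0.1},
}

# Uniform prior: the assistant does not presume to know the human's goal.
belief = {name: 0.5 for name in candidate_objectives}

def observe_human_choice(belief, chosen_action, rationality=2.0):
    """Bayesian update: a noisily rational human is more likely to choose
    actions with higher utility under their true objective (Boltzmann model)."""
    posterior = {}
    for name, utils in candidate_objectives.items():
        z = sum(math.exp(rationality * u) for u in utils.values())
        likelihood = math.exp(rationality * utils[chosen_action]) / z
        posterior[name] = belief[name] * likelihood
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

def best_action(belief):
    """Act to maximize *expected* utility under the current belief,
    rather than optimizing any single fixed objective."""
    actions = next(iter(candidate_objectives.values())).keys()
    return max(actions, key=lambda a: sum(
        belief[name] * utils[a]
        for name, utils in candidate_objectives.items()))

# Watching the human make tea shifts the belief toward "likes_tea",
# which in turn changes what the assistant does next.
belief = observe_human_choice(belief, "make_tea")
print(best_action(belief))  # → make_tea
```

The key design point the sketch illustrates is that the human's behavior is evidence about the objective, so the machine's optimal policy depends on what it expects to learn from the human; the full game-theoretic treatment is developed in the book.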

The ideas in this talk are described in more detail in a new book, “Human Compatible: AI and the Problem of Control” (Viking/Penguin, October 8, 2019).


Stuart Russell received his B.A. in physics from Oxford and his Ph.D. in computer science from Stanford. In addition to his role as a Professor of Computer Science at Berkeley, he is the Smith-Zadeh Professor in Engineering at Berkeley, an Adjunct Professor of Neurological Surgery at UC San Francisco, and Vice-Chair of the World Economic Forum's Council on AI and Robotics. He served as chair of the EECS department from 2008 to 2010. His research covers a wide range of topics in artificial intelligence, including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity. He has written four books: "The Use of Knowledge in Analogy and Induction"; "Do the Right Thing: Studies in Limited Rationality" (with Eric Wefald); "Artificial Intelligence: A Modern Approach" (with Peter Norvig); and "Human Compatible: Artificial Intelligence and the Problem of Control," which was published earlier this month. Within the past year, he has been named an Honorary Fellow of Wadham College, Oxford, and has won the AAAI Feigenbaum Prize and a Carnegie Fellowship.

Video of Presentation