Anca Dragan named Head of AI Safety and Alignment at Google DeepMind


EECS Associate Professor Anca Dragan was named Head of AI Safety and Alignment at Google DeepMind.

Google DeepMind’s AI Safety and Alignment organization, founded in February, is responsible for developing new safeguards for Gemini models and aligning forthcoming models with human goals and values.

“Google DeepMind is fairly unique in that capability and safety go hand-in-hand and have to advance together,” said Dragan. “We absolutely need to have partners in both industry and academia working together, leveraging insights from each other and collaborating in order to get this done.”

Dragan’s research aims to enable AI agents (from robots to cars to LLMs to recommender systems) to work with, around, and in support of people. Her lab, InterACT, focuses on algorithms for human-AI and human-robot interaction, and on AI alignment: getting AI agents to do what people actually want.