News

Using machine-learning to reinvent cybersecurity two ways: Song and Popa

EECS Prof. and alumna Dawn Song (Ph.D. '02, advisor: Doug Tygar) and Assistant Prof. Raluca Ada Popa are featured in the cover story of the Spring 2020 issue of the Berkeley Engineer, titled "Reinventing Cybersecurity."  Faced with the challenge of protecting users' personal data while recognizing that sharing access to that data "has fueled the modern-day economy" and supports scientific research, Song has proposed a paradigm of "controlled use" and an open-source approach built on a new set of principles based on game theory.  Her lab is creating a platform that applies cryptographic techniques to both machine-learning models and hardware solutions, allowing users to keep their data safe while also making it accessible.  Popa's work focuses on keeping data encrypted in cloud computing environments, even while machine-learning algorithms compute on it, rather than just surrounding the data with firewalls.  "Sharing without showing" allows sensitive data to be made available for collaboration without decryption.  The approach is made practical by a machine-learning training system that is orders of magnitude faster than other approaches: "So instead of training a model in three months, it takes us under three hours."

Pieter Abbeel and Sergey Levine: teaching computers to teach themselves

EECS Prof. Pieter Abbeel and Assistant Prof. Sergey Levine both appear in a New York Times article titled "Computers Already Learn From Us. But Can They Teach Themselves?", which describes the work of scientists who "are exploring approaches that would help machines develop their own sort of common sense."  Abbeel, who runs the Berkeley Robot Learning Lab, uses reinforcement-learning systems that compete against themselves to learn faster, a method called self-play.  Levine, who runs the Robotic AI & Learning Lab, is using a form of self-supervised learning in which robots explore their environment to build a base of knowledge.

Researchers develop novel way to shrink light to detect ultra-tiny substances

EE Associate Prof. Boubacar Kanté and his graduate student Junhee Park have been profiled in a Berkeley Engineering article titled "Researchers develop novel way to shrink light to detect ultra-tiny substances."  They are part of a team of researchers who have created light-based technology that can detect biological substances with a molecular mass more than two orders of magnitude smaller than previously possible.  Their device, which shrinks light while exploiting mathematical singularities known as exceptional points (EPs), could lead to the development of ultra-sensitive devices that can quickly detect pathogens in human blood and considerably reduce the time needed for patients to get results from blood tests. Their work was published in Nature Physics last week. “Our goal is to overcome the fundamental limitations of optical devices and uncover new physical principles that can enable what was previously thought impossible or very challenging,” Kanté said.

Keeping classified information secret in a world of quantum computing

Computer Science and Global Studies double major Jake Tibbetts has published an article in the Bulletin of the Atomic Scientists titled "Keeping classified information secret in a world of quantum computing."  Tibbetts, who is a research assistant at the LBNL Center for Global Security Research and a member of the Berkeley Nuclear Policy Working Group, argues that instead of worrying about winning the quantum supremacy race against China, U.S. policymakers and scholars should shift their focus to a more urgent national security problem: how to maintain the long-term security of secret information secured by existing cryptographic protections, which will fail against an attack by a future quantum computer.  Possible avenues include deploying honeypots to misdirect and waste the resources of entities attempting to steal classified information; reducing the deployment time for new encryption schemes; and triaging cryptographic updates to systems that communicate and store sensitive and classified information.

New nonvolatile memory cells shrink circuits and speed searches

The work of Prof. Sayeef Salahuddin and grad student Ava Tan is featured in an IEEE Spectrum article titled "New Nonvolatile Memories Shrink Circuits That Search Fast."  Salahuddin, a ferroelectric device pioneer, has been developing a new kind of content-addressable memory cell that could speed searches and enable in-memory computing.  The new nonvolatile memory, which is smaller and potentially much denser than other experimental designs, relies on ferroelectric field-effect transistors (FeFETs), which store data as an electric polarization within the transistor.

Two EECS papers win 2019 ACM SIGPLAN Distinguished Paper Awards

Two papers co-authored by Berkeley EECS researchers won two of the five ACM SIGPLAN Distinguished Paper Awards presented at the 2019 Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), the top programming language conference, which was held in October as part of the ACM SIGPLAN conference on Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH).  "Duet: An Expressive Higher-Order Language and Linear Type System for Statically Enforcing Differential Privacy" was co-authored by Prof. Dawn Song (Ph.D. '02, advisor: Doug Tygar), graduate student Lun Wang, undergraduate researcher Pranav Gaddamadugu, and alumni Neel Somani (CS B.A. '19), Nikhil Sharma (EECS B.S. '18/M.S. '19), and Alex Shan (CS B.A. '18), along with researchers in Vermont and Utah.  "Aroma: Code Recommendation via Structural Code Search" was co-authored by Prof. Koushik Sen, along with authors at Facebook and UC Irvine.

"Oracle-Guided Component-Based Program Synthesis" wins 2020 ICSE Most Influential Paper Award

The paper "Oracle-Guided Component-Based Program Synthesis," co-authored by alumnus Susmit Jha (M.S./Ph.D. '11), Sumit Gulwani (Ph.D. '05, advisor: George Necula), EECS Prof. Sanjit A. Seshia, and Ashish Tiwari--and part of Susmit Jha's Ph.D. dissertation advised by Sanjit Seshia--will receive the 2020 Most Influential Paper Award by the ACM/IEEE International Conference on Software Engineering (ICSE). ICSE is the premier conference on software engineering and this award recognizes the paper judged to have had the most influence on the theory or practice of software engineering during the 10 years since its original publication. The citation says, in part, that the paper: "...has made a significant impact in Software Engineering and beyond, inspiring subsequent work not only on program synthesis and learning, but also on automated program repair, controller synthesis, and interpretable artificial intelligence."

Using deep learning to expertly detect hemorrhages in brain scans

A computer algorithm co-developed by Vision Group alumnus Weicheng Kuo (Ph.D. '19), postdoc Christian Häne, their advisor Prof. Jitendra Malik, and researchers at UCSF bested two out of four expert radiologists at finding tiny brain hemorrhages in head scans, an advance that one day may help doctors treat patients with traumatic brain injuries, strokes, and aneurysms.  The algorithm found some small abnormalities that the experts missed, noted their location within the brain, and classified them according to subtype.  The researchers used a type of deep learning known as a fully convolutional neural network (FCN) to train the algorithm on a relatively small number of images that were packed with data.  Each small abnormality was manually delineated at the pixel level.  The richness of this data — along with other steps that prevented the model from misinterpreting random variations, or “noise,” as meaningful — created an extremely accurate algorithm.

How to Stop Superhuman A.I. Before It Stops Us

EECS Prof. Stuart Russell has penned a New York Times Op-Ed titled "How to Stop Superhuman A.I. Before It Stops Us," in which he explains why we need to design artificial intelligence that is beneficial, not just smart.  "Instead of building machines that exist to achieve their objectives," he writes, we need to build "machines that have our objectives as their only guiding principle..."  This will make them "necessarily uncertain about what these objectives are, because they are in us — all eight billion of us, in all our glorious variety, and in generations yet unborn — not in the machines."  Russell has just published a book titled "Human Compatible: Artificial Intelligence and the Problem of Control" (Viking, October 8, 2019).

Sergey Levine, Francis Bach, and Pieter Abbeel are top 3 most prolific NeurIPS 2019 authors

Two EECS faculty members and one alumnus have the most papers accepted to the upcoming Thirty-third Conference on Neural Information Processing Systems (NeurIPS 2019), one of the most popular and influential AI conferences in the world.  CS Prof. Sergey Levine took the top spot with 12 papers, alumnus Francis Bach (Ph.D. '05, advisor: Michael Jordan) was the second most prolific contributor with 10 papers, and Prof. Pieter Abbeel placed third with nine.  Only about one in five of the 6,743 papers submitted to the conference this year were accepted.  Registration to be one of the 8,000 attendees at last year's NeurIPS (formerly NIPS) conference sold out in 12 minutes.  A lottery has been implemented for this year's conference, which will take place in December.