News

PerfFuzz wins ISSTA18 Distinguished Paper Award

"PerfFuzz: Automatically Generating Pathological Inputs," written by graduate students Caroline Lemieux and Rohan Padhye, and Profs. Koushik Sen and Dawn Song, will receive a Distinguished Paper Award from the ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA) 2018 in Amsterdam in July.  PerfFuzz is a method to automatically generate inputs for software programs via feedback-directed mutational fuzzing.  These inputs exercise pathological behavior across program locations, without any domain knowledge.   The authors found that PerfFuzz outperforms prior work by generating inputs that exercise the most-hit program branch 5x to 69x times more, and result in 1.9x to 24.7x longer total execution paths.

Aviad Rubinstein wins 2017 ACM Doctoral Dissertation Award

CS alumnus Aviad Rubinstein (Ph.D. '17, advisor: Christos Papadimitriou) is the recipient of the Association for Computing Machinery (ACM) 2017 Doctoral Dissertation Award for his dissertation “Hardness of Approximation Between P and NP.” In his thesis, Rubinstein established the intractability of the approximate Nash equilibrium problem and several other important problems between P and NP-completeness—an enduring problem in theoretical computer science. His work was featured in a Quanta Magazine article titled "In Game Theory, No Clear Path to Equilibrium" in July. After graduating, Rubinstein became a Rabin Postdoc at Harvard and will join Stanford as an Assistant Professor in the fall.

Editing brain activity with holography

The research of Associate Prof. Laura Waller is highlighted in a Berkeley News article titled "Editing brain activity with holography." Waller is co-author of a paper published in the journal Nature Neuroscience that describes a holographic brain modulator that can activate up to 50 neurons at once in a three-dimensional chunk of brain containing several thousand neurons, and repeat that up to 300 times a second with different sets of 50 neurons. The goal is to read neural activity constantly and decide, based on that activity, which sets of neurons to activate to simulate the pattern and rhythm of an actual brain response, so as to replace lost sensations after peripheral nerve damage, for example, or control a prosthetic limb. “The major advance is the ability to control neurons precisely in space and time,” said Waller's postdoc Nicolas Pégard, a first author of the paper. “In other words, to shoot the very specific sets of neurons you want to activate and do it at the characteristic scale and the speed at which they normally work.”

A feasible way for devices to send data with light

Researchers, including Prof. Vladimir Stojanović, have developed a method to fabricate silicon chips that can communicate with light and are no more expensive than current chip technology. Stojanović initially led the project, which developed a new microchip technology capable of optically transferring data that could solve a severe bottleneck in current devices by speeding data transfer and reducing energy consumption by orders of magnitude. He and his collaborators, including Milos Popović at Boston University and Rajeev Ram at MIT, recently published a paper in Nature in which they present a manufacturing solution: introducing a set of new material layers in the photonic processing portion of a bulk silicon chip. They demonstrate that this change allows optical communication with no impact on the electronics.

HäirIÖ: Human Hair as Interactive Material

CS Prof. Eric Paulos and his graduate students in the Hybrid Ecologies Lab, Sarah Sterman, Molly Nicholas, and Christine Dierk, have created a prototype of a wearable color- and shape-changing braid called HäirIÖ. The hair extension is built from a custom circuit, an Arduino Nano, an Adafruit Bluetooth board, shape memory alloy, and thermochromic pigments. The Bluetooth chip allows devices such as phones and laptops to communicate with the hair, causing it to change shape and color, as well as respond when the hair is touched. Their paper "Human Hair as Interactive Material" was presented at the ACM International Conference on Tangible, Embedded and Embodied Interaction (TEI) last week. They have posted a how-to guide and instructional videos that include comprehensive hardware, software, and electronics documentation, as well as information about the design process. "Hair is a unique and little-explored material for new wearable technologies," the guide says. "Its long history of cultural and individual expression make it a fruitful site for novel interactions."

Making computer animation more agile, acrobatic — and realistic

Graduate student Xue Bin “Jason” Peng (advisors Pieter Abbeel and Sergey Levine) has made a major advance in realistic computer animation using deep reinforcement learning to recreate natural motions, even for acrobatic feats like break dancing and martial arts. The simulated characters can also respond naturally to changes in the environment, such as recovering from tripping or being pelted by projectiles.  “We developed more capable agents that behave in a natural manner,” Peng said. “If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulation and what is real. We’re moving toward a virtual stuntman.”  Peng will present his paper at the 2018 SIGGRAPH conference in August.

Atomically thin light emitting device opens the possibility for ‘invisible’ displays

Prof. Ali Javey, postdoc Der-Hsien Lien, and graduate students Matin Amani and Sujay Desai have built a bright light-emitting device that is millimeters wide and fully transparent when turned off. The light-emitting material in the device is a monolayer semiconductor just three atoms thick. It opens the door to invisible displays on walls and windows (displays that would be bright when turned on but see-through when turned off) and to futuristic applications such as light-emitting tattoos. “The materials are so thin and flexible that the device can be made transparent and can conform to curved surfaces,” said Lien. Their research was published in the journal Nature Communications on March 26.

A step forward in Stephen Derenzo's search for dark matter

Prof. Stephen Derenzo is quoted in an article for Australia’s Particle about a new material for a proposed detector of weakly interacting massive particles (WIMPs). Derenzo is the lead author of a study published March 20 in the Journal of Applied Physics about a crystal called gallium arsenide (GaAs) that features added concentrations, or “dopants,” of silicon and boron. This material possesses a scintillation property: it lights up in particle interactions that knock away electrons. According to Derenzo, who is a senior physicist in the Molecular Biophysics and Integrated Bioimaging Division at Berkeley Lab, the new ultrasensitive detector technology could scan for dark matter signals at energies thousands of times lower than those measurable by more conventional WIMP detectors. “It’s a privilege to be working on such an important problem in physics, but the celebration will have to wait until clear signals are seen,” he says. “It’s possible that dark matter particles are even lighter than what we can see with GaAs, and their discovery will have to wait for even more sensitive experiments.”

John Kubiatowicz and Group's (Circa 2000) Paper Named Most Influential at ASPLOS 2018

At the ASPLOS conference in late March, John Kubiatowicz and his group from 2000 were celebrated for their paper "OceanStore: An Architecture for Global-Scale Persistent Storage," which was named the Most Influential Paper at ASPLOS 2018. The authors receiving the award included David Bindel, Yan Chen, Steven Czerwinski, Patrick Eaton, Dennis Geels, Ramakrishna Gummadi, Sean Rhea, Hakim Weatherspoon, Chris Wells, and Ben Zhao, as well as Kubiatowicz, a long-time Berkeley CS faculty member. The paper was originally published in the Proceedings of the Ninth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS IX).

AI training may leak secrets to canny thieves

A paper released on arXiv last week by a team of researchers including Prof. Dawn Song and Ph.D. student Nicholas Carlini (B.A. CS/Math '13) reveals just how vulnerable deep learning is to information leakage. The researchers labelled the problem “unintended memorization” and explained that it can be exploited if miscreants gain access to the model’s code and apply a variety of search algorithms. That's not an unrealistic scenario, considering that the code for many models is available online, and it means that text messages, location histories, emails, or medical data can be leaked. The team doesn't “really know why neural networks memorize these secrets right now,” Carlini says. “At least in part, it is a direct response to the fact that we train neural networks by repeatedly showing them the same training inputs over and over and asking them to remember these facts.” The best way to avoid the problem altogether is to never feed secrets in as training data. But if that is unavoidable, developers will have to apply differentially private learning mechanisms to bolster security, Carlini concluded.
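
The kind of leakage the paper describes can be sketched with a toy example. The Python snippet below is illustrative only: the bigram "model," the fake PIN, and every name in it are invented for this sketch and are not taken from the paper's code or experiments. It shows how a model that has memorized a secret from its training text lets an attacker recover that secret simply by scoring candidate strings and ranking them.

    # Illustrative sketch of "unintended memorization": a toy character-level model
    # is trained on text containing a fake secret, and an attacker who can query
    # the model's likelihoods searches over candidate secrets and ranks them.
    from collections import defaultdict
    import math

    TRAIN_TEXT = "the weather is nice today. my pin is 7431. see you tomorrow. " * 50

    def train_bigram(text):
        """Count character bigrams as a stand-in for a trained language model."""
        counts = defaultdict(lambda: defaultdict(int))
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
        return counts

    def log_likelihood(model, text):
        """Add-one-smoothed log probability of text under the bigram counts."""
        total = 0.0
        for a, b in zip(text, text[1:]):
            row = model[a]
            denom = sum(row.values()) + 256
            total += math.log((row[b] + 1) / denom)
        return total

    model = train_bigram(TRAIN_TEXT)

    # Attacker: score every candidate completion of "my pin is ____" and rank them.
    candidates = [f"my pin is {i:04d}." for i in range(10000)]
    ranked = sorted(candidates, key=lambda c: log_likelihood(model, c), reverse=True)
    print(ranked[0])                        # the memorized secret ranks first
    print(ranked.index("my pin is 7431."))  # rank 0: the secret is fully exposed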