News

HäirIÖ: Human Hair as Interactive Material

CS Prof. Eric Paulos and his graduate students in the Hybrid Ecologies Lab, Sarah Sterman, Molly Nicholas, and Christine Dierk, have created a prototype of a wearable color- and shape-changing braid called HäirIÖ. The hair extension is built from a custom circuit, an Arduino Nano, an Adafruit Bluetooth board, shape memory alloy, and thermochromic pigments. The Bluetooth chip lets devices such as phones and laptops communicate with the hair, triggering changes in its shape and color; the braid can also respond when touched. Their paper "Human Hair as Interactive Material" was presented at the ACM International Conference on Tangible, Embedded and Embodied Interaction (TEI) last week. They have posted a how-to guide and Instructables videos with comprehensive hardware, software, and electronics documentation, as well as information about the design process. "Hair is a unique and little-explored material for new wearable technologies," the guide says. "Its long history of cultural and individual expression make it a fruitful site for novel interactions."
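The braid is driven from a host device over Bluetooth. As a rough illustration of that host-side control (the actual HäirIÖ firmware and command protocol are described in the team's how-to guide; the opcodes and packet layout below are invented for this sketch), a host might encode actuation commands as small byte packets before sending them over the Bluetooth link:

```python
def encode_command(action: str, level: int) -> bytes:
    """Encode a hypothetical two-byte command packet: one opcode byte
    plus one intensity byte. Illustrative only; the real HäirIÖ
    firmware defines its own protocol."""
    opcodes = {"curl": 0x01, "straighten": 0x02, "color": 0x03}
    if action not in opcodes:
        raise ValueError(f"unknown action: {action}")
    if not 0 <= level <= 255:
        raise ValueError("level must fit in one byte")
    return bytes([opcodes[action], level])

# A host app would write such packets to the braid's Bluetooth serial
# connection, e.g. encode_command("curl", 128) -> b"\x01\x80".
```

In a design like this, keeping the packet format trivial matters because the receiving end is a resource-constrained Arduino Nano parsing bytes as they arrive.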

Michael Laskey talks DART in Robohub podcast

EECS graduate student Michael Laskey (advisor: Ken Goldberg) is interviewed by Audrow Nash for a Robohub podcast titled "DART: Noise injection for robust imitation learning." Laskey works in the AUTOLAB, where he develops new algorithms for deep learning of robust robot control policies and examines how to reliably apply recent deep learning advances to scalable robot learning in challenging, unstructured environments. In the podcast, he discusses how DART relates to previous imitation learning methods, how the approach has been used for folding bed sheets, and the importance of robotics leveraging theory from other disciplines.
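At a high level, DART injects noise into the supervisor's actions while demonstrations are being collected, so the learner sees states slightly off the expert's nominal trajectory while still being labeled with the expert's noise-free actions; that makes plain behavior cloning more robust to compounding errors. A minimal one-dimensional sketch of the idea (toy linear dynamics and expert controller, invented here, not the AUTOLAB implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def supervisor(state):
    # Hypothetical expert: a fixed stabilizing linear controller.
    return -2.0 * state

def collect(noise_std, steps=200):
    """DART-style data collection: execute noisy expert actions so the
    dataset covers off-trajectory states, but label each state with the
    noise-free expert action."""
    states, actions = [], []
    s = 1.0
    for _ in range(steps):
        a = supervisor(s)
        states.append(s)
        actions.append(a)  # clean label
        s = s + 0.1 * (a + rng.normal(0.0, noise_std))  # noisy execution
    return np.array(states), np.array(actions)

X, y = collect(noise_std=0.5)
# Behavior cloning: least-squares fit of a linear policy to the labels.
k = float(np.dot(X, y) / np.dot(X, X))
```

Because the labels are always the clean expert actions, the cloned gain `k` recovers the expert controller, while the injected noise ensures the training distribution includes the perturbed states the policy will actually visit at test time.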

Allan Jabri named 2018 Soros Fellow

CS graduate student Allan Jabri has been named a 2018 Paul & Daisy Soros Fellow. Soros Fellowships are awarded to outstanding immigrants and children of immigrants from across the globe who are pursuing graduate school in the United States. Recipients are chosen for their potential to make significant contributions to US society, culture, or their academic fields, and will receive up to $90K in funding over two years. Jabri was born in Australia to parents from China and Lebanon and was raised in the US. He received his B.S. from Princeton, where his thesis focused on probabilistic methods for egocentric scene understanding, and he worked as a research engineer at Facebook AI Research in New York before joining Berkeley AI Research (BAIR). He is interested in problems related to self-supervised learning, continual learning, intrinsic motivation, and embodied cognition. His long-term goal is to build learning algorithms that allow machines to autonomously acquire visual and sensorimotor common sense. During his time at Berkeley, he also hopes to mentor students, contribute to open source code projects, and develop a more interdisciplinary perspective on AI.

Stephen Tu wins Google Fellowship

EE graduate student Stephen Tu (advisor: Ben Recht) has been awarded a 2018 Google Fellowship.  Google Fellowships are presented to exemplary PhD students in computer science and related areas to acknowledge contributions to their chosen fields and provide funding for their education and research. Tu's current research interests "lie somewhere in the intersection of machine learning and optimization" although he previously worked on multicore databases and encrypted query processing.  Tu graduated with a CS B.A./ME B.S. from Berkeley in 2011 before earning an EECS S.M. from MIT in 2014.

Making computer animation more agile, acrobatic — and realistic

Graduate student Xue Bin “Jason” Peng (advisors: Pieter Abbeel and Sergey Levine) has made a major advance in realistic computer animation, using deep reinforcement learning to recreate natural motions, even for acrobatic feats like break dancing and martial arts. The simulated characters can also respond naturally to changes in the environment, such as recovering from tripping or being pelted by projectiles. “We developed more capable agents that behave in a natural manner,” Peng said. “If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulation and what is real. We’re moving toward a virtual stuntman.” Peng will present his paper at the 2018 SIGGRAPH conference in August.
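The core idea behind this style of motion imitation is a reinforcement-learning reward that pays the simulated character for tracking a reference motion-capture clip frame by frame. A toy version of such a tracking reward (the weight and the squared-error form here are illustrative stand-ins, not the paper's exact formulation):

```python
import numpy as np

def imitation_reward(pose, ref_pose, weight=2.0):
    """Exponentiated tracking error: returns 1.0 when the character's
    joint angles match the reference clip exactly, decaying toward 0
    as the pose drifts. The weight controls how sharply reward falls
    off with error."""
    err = float(np.sum((np.asarray(pose) - np.asarray(ref_pose)) ** 2))
    return float(np.exp(-weight * err))
```

Training a policy to maximize a reward of this shape, summed over the clip, is what pushes the controller toward motions that look like the recorded human data while still letting physics (and perturbations like thrown projectiles) act on the character.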

Atomically thin light emitting device opens the possibility for ‘invisible’ displays

Prof. Ali Javey, postdoc Der-Hsien Lien, and graduate students Matin Amani and Sujay Desai have built a bright light-emitting device that is millimeters wide and fully transparent when turned off. The light-emitting material in the device is a monolayer semiconductor just three atoms thick. It opens the door to invisible displays on walls and windows (bright when turned on, see-through when turned off) and to futuristic applications such as light-emitting tattoos. “The materials are so thin and flexible that the device can be made transparent and can conform to curved surfaces,” said Lien. Their research was published in the journal Nature Communications on March 26.

Ashokavardhanan, Jung, and McConnell named KPCB Engineering Fellows

Undergraduate students Ganeshkumar Ashokavardhanan (EECS + Business M.E.T.), Naomi Jung (CS BA), and Louie McConnell (EECS + Business M.E.T.) have been selected for the 2018 KPCB Engineering Fellows Program, named one of the top five internship programs by Vault. Over the course of a summer, KPCB Engineering Fellows join portfolio companies, where they develop their technical skills and are each mentored by an executive within the company. The program offers students an opportunity to gain significant work experience at Silicon Valley startups while collaborating on unique and challenging technical problems.

AI training may leak secrets to canny thieves

A paper released on arXiv last week by a team of researchers including Prof. Dawn Song and Ph.D. student Nicholas Carlini (B.A. CS/Math '13) reveals just how vulnerable deep learning is to information leakage. The researchers labelled the problem “unintended memorization” and explained that it happens when miscreants gain access to a model’s code and apply a variety of search algorithms. That's not an unrealistic scenario, considering the code for many models is available online, and it means that text messages, location histories, emails, or medical data can be leaked. The team doesn't “really know why neural networks memorize these secrets right now,” Carlini says. “At least in part, it is a direct response to the fact that we train neural networks by repeatedly showing them the same training inputs over and over and asking them to remember these facts.” The best way to avoid the problem is to never feed secrets in as training data. But if that's unavoidable, developers will have to apply differentially private learning mechanisms to bolster security, Carlini concluded.
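One way the paper quantifies unintended memorization is an “exposure” metric: insert a random canary (say, a fake credit-card number) into the training data, then rank the trained model's loss on that canary against its loss on many random alternatives. A simplified sketch of the rank-based computation (the function name and signature are this sketch's, not the paper's code):

```python
import math

def exposure(canary_nll, candidate_nlls):
    """Rank-based exposure: log2 of the candidate-space size minus
    log2 of the canary's rank when candidates are ordered by model
    loss (lower loss = more strongly memorized). candidate_nlls is
    expected to include the canary's own loss."""
    rank = 1 + sum(1 for nll in candidate_nlls if nll < canary_nll)
    return math.log2(len(candidate_nlls)) - math.log2(rank)
```

If the model ranks the inserted canary first among all candidates, exposure is maximal; an exposure near zero means the model treats the secret no differently from random strings, which is what differentially private training aims to guarantee.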

Ling-Qi Yan helps to improve computer rendering of animal fur

CS graduate student Ling-Qi Yan (advisors: Ravi Ramamoorthi and Ren Ng) and researchers at UC San Diego are the subject of a TechXplore article titled "Scientists improve computer rendering of animal fur."  Yan is part of a team that developed a method for dramatically improving the way computers simulate fur, and more specifically, the way light bounces within an animal's pelt. The researchers use a neural network to apply the properties of subsurface scattering to quickly approximate how light bounces among fur fibers. The neural network needs to be trained on only one scene before it can apply subsurface scattering to any scene it is presented with, resulting in simulations that run 10 times faster than the current state of the art. "We are converting the properties of subsurface scattering to fur fibers," said Yan. "There is no explicit physical or mathematical way to make this conversion. So we needed to use a neural network to connect these two different worlds." The researchers recently presented their findings at the SIGGRAPH Asia conference in Thailand.
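The underlying pattern is to replace an expensive per-fiber light-transport computation with a small neural network that is trained once and then evaluated cheaply wherever it is needed. A toy NumPy illustration of that train-once, approximate-everywhere idea (the target function, network size, and training loop are invented stand-ins, not the researchers' model):

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_scatter(x):
    # Toy stand-in for a costly scattering computation mapping a
    # fiber parameter to a scattering intensity.
    return np.exp(-3.0 * x) * np.cos(2.0 * x)

# Training data from the expensive function (done once, offline).
X = rng.uniform(0.0, 1.0, size=(256, 1))
Y = expensive_scatter(X)

# One-hidden-layer network; after training, forward() is the cheap
# approximation reused across scenes.
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

def loss():
    pred, _ = forward(X)
    return float(np.mean((pred - Y) ** 2))

initial = loss()
lr = 0.1
for _ in range(500):
    pred, h = forward(X)
    g = 2.0 * (pred - Y) / len(X)        # dLoss/dPred
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1.0 - h ** 2)       # backprop through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
final = loss()
```

The payoff is the same shape as in the fur work: the expensive function is evaluated only to build the training set, and at render time every query hits the cheap network instead.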

UC Berkeley wins 2018 Fiesta Bowl Overwatch Collegiate National Championship

For the second year running, UC Berkeley has won the Fiesta Bowl Overwatch Collegiate National Championship, sweeping UC Irvine 3-0. CS majors Kevin "SlurpeeThief" Royston and Gandira “Syeikh” Prahandika were on the Berkeley team, which battled in front of a sold-out crowd in Tempe, Arizona, at the first partnership between a collegiate bowl game and an eSports tournament. Royston, who was profiled in an Overwatch Wire article before the tournament, is in the top 1% of all players worldwide, has had a peak skill rating of 4626, and was on last year's winning team. He notes that balancing esports and school has been tough: “It’s definitely rough, this week is the first time I had a homework assignment slip because of Overwatch. (Laughs) Hear me out! It’s for Machine Learning which is the hardest class at Berkeley. We had to travel (for the Fiesta Bowl) and I worked sixteen hours on an assignment and didn’t even get halfway through it.” Overwatch is a team-based shooter game created by Blizzard Entertainment and released in 2016. Teams are made up of six players, who select different characters, known in the game as “heroes,” to complete different objectives. The Berkeley team won $42,000 of the $100,000 total in scholarships and other prizes.