KEYNOTES
ACM CCS 2021, November 15-19
Pseudo-Randomness and the Crystal Ball
Speaker: Cynthia Dwork, Harvard University
Tuesday, November 16, 08:15-09:00 (Korea Standard Time)
Prediction algorithms score individuals, or individual
instances, assigning to each one a number in [0,1] that is
often interpreted as a probability: What are the chances
that this loan will be repaid? How likely is the tumor to
metastasize? What is the probability this person will
commit a violent crime in the next two years? But what is
the probability of a non-repeatable event? Without a
satisfactory answer, we cannot even specify the goal of
the ideal algorithm, let alone try to build it.
This talk will introduce Outcome Indistinguishability, a
desideratum with roots in complexity theory, and situate
it in the context of research on the theory of algorithmic
fairness.
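(For orientation only: the following one-line sketch paraphrases the published definition of Outcome Indistinguishability and is not part of the abstract.) A predictor \tilde{p} : X \to [0,1] is outcome indistinguishable with respect to a class of distinguishers \mathcal{A} and tolerance \varepsilon if, for every A \in \mathcal{A},

\[
\Big|\, \Pr_{x,\; o^\ast \sim \mathrm{Ber}(p^\ast(x))}\big[A(x, o^\ast) = 1\big]
 \;-\; \Pr_{x,\; \tilde{o} \sim \mathrm{Ber}(\tilde{p}(x))}\big[A(x, \tilde{o}) = 1\big] \,\Big| \;\le\; \varepsilon,
\]

where p^\ast denotes the unknown true outcome probabilities: outcomes simulated from the predictor's scores should look, to every test in the class, like outcomes drawn from reality.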
Speaker Bio:
Cynthia Dwork, Gordon
McKay Professor of Computer Science at the Harvard
Paulson School of Engineering, and Affiliated Faculty at
Harvard Law School and the Harvard Department of Statistics,
uses theoretical computer science to place societal
problems on a firm mathematical foundation. Dwork’s
earliest work in distributed computing established the
pillars on which every fault-tolerant system has been
built for decades. Her innovations have modernized
cryptography to withstand the ungoverned interactions of
the internet and the eventuality of quantum computing,
and her invention of Differential Privacy has
revolutionized privacy-preserving statistical data
analysis. In 2012 she launched the theoretical
investigation of algorithmic fairness. She is a winner
of numerous awards and a member of the NAS, the NAE, the
American Academy of Arts and Sciences, and the American
Philosophical Society.
Towards Building a Responsible Data Economy
Speaker: Dawn Song, University of California, Berkeley
Wednesday, November 17, 08:15-09:00 (Korea Standard Time)
Data is a key driver of the modern economy and of AI/machine learning. However, much of this data is sensitive, and handling sensitive data has caused unprecedented challenges for both individuals and businesses; these challenges will only become more severe as we move forward in the digital era. In this talk, I will discuss the technologies needed for responsible data use, including secure computing, differential privacy, federated learning, and blockchain technologies for data rights, and how to combine privacy-preserving computation with blockchain to build a platform for a responsible data economy. Such a platform enables more responsible use of data that maximizes social welfare and economic efficiency while protecting users' data rights and ensuring a fair distribution of the value created from data. I will also talk about new paradigms that this approach enables, including decentralized data science and data DAOs, and discuss new frameworks for data valuation.
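As a purely illustrative sketch of one of the building blocks listed above, and not code from the talk, the Laplace mechanism for differential privacy releases a numeric query answer with noise scaled to sensitivity/epsilon. The function name and values here are hypothetical, and the example assumes the rand crate.

    use rand::Rng;

    // Return `true_value` plus Laplace(0, sensitivity / epsilon) noise, the
    // basic differential-privacy primitive for releasing a numeric statistic.
    fn laplace_mechanism(true_value: f64, sensitivity: f64, epsilon: f64) -> f64 {
        let scale = sensitivity / epsilon;
        // Sample Laplace(0, scale) by inverse CDF of a Uniform(-0.5, 0.5) draw.
        let u: f64 = rand::thread_rng().gen::<f64>() - 0.5;
        true_value - scale * u.signum() * (1.0 - 2.0 * u.abs()).ln()
    }

    fn main() {
        // Hypothetical counting query over sensitive records (sensitivity 1).
        let noisy = laplace_mechanism(1234.0, 1.0, 0.5);
        println!("differentially private count: {noisy:.1}");
    }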
Speaker Bio:
Dawn Song is a Professor
in the Department of Electrical Engineering and Computer
Sciences at UC Berkeley. Her research interests lie in AI
and deep learning, security, and privacy. She is the
recipient of various awards including the MacArthur
Fellowship, the Guggenheim Fellowship, the NSF CAREER
Award, the Alfred P. Sloan Research Fellowship, the MIT
Technology Review TR-35 Award, the ACM SIGSAC Outstanding
Innovation Award, and Test-of-Time Awards and Best Paper
Awards from top conferences in Computer Security and Deep
Learning. She is an ACM Fellow and an IEEE Fellow. She is
ranked the most cited scholar in computer security (AMiner
Award). She obtained her Ph.D. degree from UC
Berkeley. She is also a serial entrepreneur. She is the
founder of Oasis Labs and has been named to the Inc. Female
Founders 100 list and the Wired25 list of innovators.
Are we done yet? Our journey to fight against memory-safety bugs
Speaker: Taesoo Kim, Georgia Institute of Technology & Samsung Research
Thursday, November 18, 08:15-09:00 (Korea Standard Time)
Memory-safety issues have been a long-standing concern of security practitioners. According to Microsoft [1] and Google [2], memory-safety bugs still represent 70% of the exploited vulnerabilities in complex, real-world programs like OSes and web browsers. This does not mean, however, that academics and practitioners haven't tried hard to alleviate the problem. Advances in automated techniques like fuzzing and sanitizers have revolutionized the way we tame memory-safety bugs, but the increasing volume of new software simply outpaces the adoption rate of these promising techniques, to say nothing of legacy programs. In this talk, I'd like to share "our" own journey to fight against memory-safety bugs ("our" is important, as all of this research was conducted together with the brightest hackers in SSLab at Georgia Tech). First, I'd like to talk about our group's research agenda in the memory-safety world, ranging from binary exploitation and program analysis to fuzzing, symbolic execution, and security education. Second, I will share our group's journey through the DARPA CGC, DEF CON CTF, and Pwn2Own competitions. Third, I will present where our group is heading: a promising new memory- and thread-safe language called Rust. Lastly, I will conclude the talk with an important projection, drawing on our recent work on finding bugs in Rust packages [3]: like COVID-19, memory-safety bugs will likely stay with us for the next decade, if not longer.
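To make the Rust point concrete, here is a minimal, self-contained sketch, not taken from the talk, of the ownership rule that lets the compiler reject use-after-free and double-free before the program ever runs; the variable names are illustrative only.

    // Each heap allocation has exactly one owner; ownership moves on assignment,
    // and any later use of the moved-from variable is a compile-time error.
    fn main() {
        let buf = vec![0u8; 16];      // `buf` owns the heap buffer
        let moved = buf;              // ownership moves to `moved`
        // println!("{}", buf[0]);    // would not compile: `buf` used after move
        println!("{}", moved[0]);     // the single owner frees the buffer at scope end
    }

The analogous C pattern, two pointers to one buffer that may each be freed or used after free, compiles without complaint and becomes exactly the class of bug the abstract describes.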
Speaker Bio:
Taesoo Kim is an
Associate Professor in the School of Cybersecurity and
Privacy and the School of Computer Science at Georgia
Tech. He also serves as director of the Georgia Tech
Systems Software and Security Center (GTS3). Since his
sabbatical year, he has also worked as a VP at Samsung
Research, leading the development of a Rust-based OS for a
secure element. He is a recipient of various awards,
including the NSF CAREER Award (2018), the Internet Defense
Prize (2015), and several best paper awards, including at
USENIX Security ’18 and EuroSys ’17. He holds a BS from
KAIST (2009), an SM (2011), and a Ph.D. (2014) from MIT.