ACM CCS 2020 - November 9-13, 2020
Machine Learning and Security: The Good, The Bad, and The Ugly
November 10, 11:00 AM U.S. Eastern Standard Time
I would like to share my thoughts on the interactions between machine learning and security.
We now have more data, more powerful machines and algorithms, and, better yet, we no longer always need to engineer features manually. The ML process is much more automated and the learned models are more powerful, and this is a positive feedback loop: more data leads to better models, which lead to more deployments, which lead to more data. All security vendors now advertise that they use ML in their products.
There are more unknowns. In the past, we knew the capabilities and limitations of our security models, including the ML-based ones, and understood how they could be evaded. But state-of-the-art models such as deep neural networks are not as intelligible as classical models such as decision trees. How do we decide to deploy a deep learning-based model for security when we cannot be sure it has learned correctly?
Data poisoning becomes easier. Online learning and web-based learning use data collected at run time, often from an open environment. Because such data often results from human actions, it can be intentionally polluted, e.g., in misinformation campaigns. How do we make it harder for attackers to manipulate the training data?
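To make the poisoning risk concrete, here is a minimal illustrative sketch (not from the talk): an attacker submits mislabeled "benign" reports into a toy online-trained detector, shifting its decision threshold so that a malicious sample evades detection. The scores, labels, and the mean-midpoint classifier are all hypothetical simplifications.

```python
# Illustrative sketch of label-flipping data poisoning against a toy
# 1-D threshold detector trained on user-submitted, labeled samples.
from statistics import mean

def train_threshold(samples):
    """Set the threshold at the midpoint of the two class means."""
    benign = [x for x, y in samples if y == 0]
    malicious = [x for x, y in samples if y == 1]
    return (mean(benign) + mean(malicious)) / 2

# Clean training stream: benign scores near 1.0, malicious near 5.0.
clean = [(1.0, 0), (1.2, 0), (0.8, 0), (5.0, 1), (4.8, 1), (5.2, 1)]
t_clean = train_threshold(clean)

# Attacker injects high-scoring samples falsely labeled "benign" (y=0),
# dragging the benign class mean (and hence the threshold) upward.
poisoned = clean + [(4.5, 0), (4.6, 0), (4.4, 0)]
t_poisoned = train_threshold(poisoned)

# A malicious sample scoring 3.5 is flagged before poisoning but
# slips under the inflated threshold afterward.
sample = 3.5
print(sample > t_clean)     # flagged on the clean model
print(sample > t_poisoned)  # evades the poisoned model
```

The sketch shows why provenance checks and rate limits on training inputs matter: a handful of adversarial labels is enough to move the boundary of a model that trusts its incoming data.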
Attackers will keep exploiting the holes in ML, and will automate their attacks using ML. Why don't we just secure ML? That would be no different from trying to fully secure our programs, systems, and networks, which we have never managed to do. We have to prepare for ML failures.
Ultimately, humans have to be involved. The question is how and when. For example, what information should an ML-based system present to humans, and what input can humans provide to the system?
Wenke Lee is a Professor of Computer Science, the John P. Imlay Jr. Chair, and the Director of the Institute for Information Security & Privacy at Georgia Tech. His research interests include systems and network security, malware analysis, applied cryptography, and machine learning. He is an ACM Fellow.
Realistic Threats and Realistic Users: Lessons from the Election
November 11, 11:00 AM U.S. Eastern Standard Time
The speaker will draw on his experience inside one of the world's largest social networks during the 2016 and 2018 elections, and from running an election-integrity war room in 2020, to discuss the ways that technology fails the people we try so hard to serve. We will discuss the realistic assumptions we can make about threats and the expectations we should have of users, and try to chart a path forward for how cutting-edge security research might better inform the engineers and product designers who put computing technologies in the hands of billions.
Alex Stamos is a Greek American computer scientist and adjunct professor at Stanford University's Center for International Security and Cooperation. He is the former chief security officer (CSO) at Facebook.