CCSW '23

Proceedings of the 2023 Cloud Computing Security Workshop
Last Update: 26 November 2023

SESSION: Keynote Talks

Security Challenges and Opportunities of Cloud FPGAs
  • Mehdi B. Tahoori

Field-programmable gate arrays (FPGAs) have assumed a critical role in numerous cloud computing platforms, owing to the fine-grained parallelism and specialization capabilities that make them instrumental in accelerating a wide array of applications, from machine learning and networking to signal processing. The shared FPGA platform in the cloud is based on the concept that FPGA real estate can be shared among various users, possibly even at different privilege levels.

However, such multi-tenancy comes with security challenges: one user, while completely logically isolated, can cause security breaches for another user on the same FPGA. A substantial portion of these security challenges stems from the shared power delivery network (PDN) present in these devices. Electrical-level attacks leverage the electrical coupling between the adversary and a victim, and in datacenter FPGAs the shared PDN is an effective way to achieve such coupling. In addition, these hardware security vulnerabilities do not require physical access to the hardware, meaning that a malicious user is able to execute a variety of remotely controlled attacks: denial of service, fault injection, and power side channels. Fine-grained control over the low-level FPGA hardware is, as it turns out, at the root of a number of electrical-level security issues: it enables the adversary to design and embed legitimate, even benign-looking constructs that perform several attacks while evading many detection mechanisms.

Addressing the potential threat of remote electrical-level attacks on FPGAs involves a multifaceted approach encompassing various levels of abstraction, extending from pre-deployment measures to real-time monitoring. One strategy is the implementation of offline checks at the hypervisor or cloud provider level, where tenant designs undergo thorough scrutiny for any potentially malicious elements before they are loaded onto the FPGA. This can be done by simply searching for known malicious constructs in the design, or by using machine learning approaches to generalize from them for better coverage of previously unseen malicious designs. This proactive approach aims to prevent the introduction of vulnerable or malicious configurations in the first place.
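
As a concrete illustration of such an offline check, the following minimal Python sketch flags combinational loops, the building block of ring-oscillator power wasters and a well-known malicious construct, in a tenant design. The netlist representation is a hypothetical simplification, not a real vendor format.

```python
# Minimal sketch of an offline pre-deployment check, assuming a tenant
# design is available as a gate-level netlist (hypothetical format).
# Combinational loops (e.g., ring oscillators) are a known power-
# hammering primitive, so we flag any cycle with no registered node.

from typing import Dict, List, Set

def has_unregistered_loop(netlist: Dict[str, List[str]],
                          registered: Set[str]) -> bool:
    """Return True if the gate graph contains a cycle made entirely
    of combinational (non-registered) nodes."""
    comb = {n: [s for s in succs if s not in registered]
            for n, succs in netlist.items() if n not in registered}
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in comb}

    def dfs(node: str) -> bool:
        color[node] = GRAY
        for nxt in comb.get(node, []):
            if color.get(nxt) == GRAY:        # back edge -> combinational cycle
                return True
            if color.get(nxt) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in comb)

# Example: lut1 -> lut2 -> lut1 is a two-LUT ring oscillator.
design = {"lut1": ["lut2"], "lut2": ["lut1"], "ff1": ["lut1"]}
print(has_unregistered_loop(design, registered={"ff1"}))  # True -> reject design
```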

Another line of defense involves the construction of active fences around security-sensitive FPGA designs. These fences essentially act as protective logic wrappers, detecting electrical-level leakage from the FPGA block and implementing compensation mechanisms that counterbalance PDN noise, thereby hiding the signal exploited by remote side-channel attacks. Furthermore, runtime monitoring systems can be integrated into multi-tenant FPGA environments. These systems continuously monitor voltage fluctuations on the PDN and can promptly disable any configurations exhibiting suspicious behavior. This real-time intervention serves as a safeguard against potential fault injection attacks or denial-of-service incidents, ensuring the integrity and reliability of the FPGA within the cloud infrastructure.
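
A minimal sketch of such a runtime monitor is shown below. The shell APIs read_core_voltage() and disable_tenant() are hypothetical placeholders; real deployments would use on-chip voltage sensors exposed by the FPGA shell and far faster sampling.

```python
# Minimal sketch of a runtime PDN monitor, assuming hypothetical shell
# callbacks for reading the core voltage and unloading a tenant design.

import time
from collections import deque

NOMINAL_V = 0.85        # assumed nominal core voltage (volts)
DROOP_LIMIT = 0.03      # assumed droop threshold triggering intervention
WINDOW = 64             # samples kept for a moving baseline

def monitor(read_core_voltage, disable_tenant, tenant_id: str) -> None:
    samples = deque(maxlen=WINDOW)
    while True:
        v = read_core_voltage()
        samples.append(v)
        baseline = sum(samples) / len(samples)
        # A sudden droop far below the recent baseline suggests power
        # hammering (fault injection or denial of service).
        if baseline - v > DROOP_LIMIT:
            disable_tenant(tenant_id)   # pull the suspicious partial bitstream
            return
        time.sleep(0.001)               # polling interval; real monitors sample faster
```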

SESSION: Workshop Full Papers

Too Close for Comfort? Measuring Success of Sampled-Data Leakage Attacks Against Encrypted Search
  • Dominique Dittert
  • Thomas Schneider
  • Amos Treiber

The well-defined information leakage of Encrypted Search Algorithms (ESAs) is predominantly analyzed by crafting so-called leakage attacks. These attacks utilize adversarially known auxiliary data and the observed leakage to attack an ESA instance built on a user's data. Known-data attacks require the auxiliary data to be a subset of the user's data. In contrast, sampled-data attacks merely rely on auxiliary data that is, in some sense, statistically close to the user's data and hence reflect a much more realistic attack scenario, where the auxiliary data stems from a publicly available data source instead of the user's private data.

Unfortunately, it is unclear what "statistically close" means in the context of sampled-data attacks. This leaves open how to measure whether data is close enough for attacks to become a considerable threat. Furthermore, sampled-data attacks have so far not been evaluated in the more realistic attack scenario where the auxiliary data stems from a source different from the one emulating the user's data. Instead, auxiliary and user data have been emulated by splitting data from one source into distinct training and testing sets. This leaves open whether and how well attacks work in the aforementioned scenario with data from different sources.

In this work, we address these open questions by providing a measurable metric for statistical closeness in encrypted keyword search. Using real-world data, we show a clear exponential relation between our metric and attack performance. We identify new datasets that are intuitively similar yet stem from different sources, and we discover that these data are not "close enough" for sampled-data attacks to perform well. Furthermore, we provide a re-evaluation of sampled-data keyword attacks with varying evaluation parameters and uncover that some evaluation choices can significantly affect evaluation results.
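
The abstract does not define the paper's metric, so the sketch below illustrates the general idea with an assumed stand-in: the total variation distance between the keyword-frequency distributions of the auxiliary and user data, where a smaller distance would predict stronger attack performance.

```python
# Assumed illustration of a statistical-closeness measure for keyword
# search: total variation distance between keyword-frequency
# distributions of auxiliary and user document sets.

from collections import Counter
from typing import Iterable

def keyword_distribution(docs: Iterable[set]) -> Counter:
    freq = Counter()
    for doc in docs:
        freq.update(doc)
    total = sum(freq.values())
    return Counter({kw: c / total for kw, c in freq.items()})

def total_variation(p: Counter, q: Counter) -> float:
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

aux  = [{"invoice", "meeting"}, {"meeting", "budget"}]
user = [{"invoice", "meeting"}, {"invoice", "budget"}]
print(total_variation(keyword_distribution(aux),
                      keyword_distribution(user)))  # 0.25 in this toy example
```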

Enterprise Cyber Threat Modeling and Simulation of Loss Events for Cyber Risk Quantification
  • Christian Ellerhold
  • Johann Schnagl
  • Thomas Schreck

In today's enterprise landscape, effective risk management has emerged as a vital cornerstone. This importance has escalated significantly due to the widespread transition from traditional on-premise infrastructures to dynamic cloud environments. Many organizations rely on qualitative approaches for internal IT and cyber risk management; however, these approaches have notable drawbacks, such as a lack of accuracy and comparability. In this paper, we propose a novel approach to address these limitations by using the Factor Analysis of Information Risk (FAIR) methodology in conjunction with MITRE ATT&CK to model realistic cyberattacks on organizations and measure quantitative risk. We describe how this approach can be used to create an enterprise cyber threat model, providing a case study for a cloud scenario to demonstrate its usage and to illustrate its potential benefits.
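
Since the abstract summarizes rather than specifies the computation, here is a minimal, assumed Monte Carlo sketch of FAIR-style quantification: annual risk is simulated as loss event frequency times per-event loss magnitude, each drawn from calibrated ranges. All distributions and numbers below are invented for illustration; the paper's ATT&CK-derived scenarios are far richer.

```python
# Minimal Monte Carlo sketch of FAIR-style risk quantification, assuming
# triangular distributions calibrated from expert estimates.

import random

def simulate_annual_loss(n_trials: int = 100_000) -> list:
    losses = []
    for _ in range(n_trials):
        # Loss Event Frequency: loss events per year (min, max, mode).
        lef = random.triangular(0.1, 5.0, 0.8)
        n_events = int(round(lef))
        # Loss Magnitude per event, in EUR (min, max, mode).
        annual = sum(random.triangular(10_000, 2_000_000, 150_000)
                     for _ in range(n_events))
        losses.append(annual)
    return losses

losses = sorted(simulate_annual_loss())
print(f"median annual loss: {losses[len(losses) // 2]:,.0f} EUR")
print(f"95th percentile:    {losses[int(0.95 * len(losses))]:,.0f} EUR")
```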

Our model has demonstrated its practical applicability in enterprise settings as we thoroughly evaluated its effectiveness within two prominent German companies. This allowed us to gain valuable insight into how our proposed approach can enhance an organization's risk management strategies. Our research demonstrates the value of using a quantitative approach like FAIR over qualitative risk assessment methods. Overall, our approach provides a more comprehensive understanding of the risks organizations are facing and offers guidance on implementing effective risk management strategies. This research can help organizations improve their risk management practices and reduce the potential negative impact of cyberattacks.

Ambit: Verification of Azure RBAC
  • Matija Kuprešanin
  • Pavle Subotić

In this paper, we present an access control verification approach for Role-Based Access Control (RBAC) mechanisms. Given a specification that models security boundaries (e.g., obtained from a threat model, best practices, etc.), we verify that a change to an RBAC state adheres to the specification (i.e., remains within the security boundaries). We demonstrate the practical utility of our approach by instantiating it for Microsoft's Azure AD. We have realized our technique in a tool called Ambit, which leverages SMT (Satisfiability Modulo Theories) solvers to efficiently encode and solve the resulting verification problem. We demonstrate the scalability and applicability of our approach on a set of generated benchmarks that simulate real-world RBAC configurations.
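
A toy sketch of SMT-based boundary checking in this spirit, using the Z3 Python bindings, is given below. Ambit's actual encoding of Azure roles, scopes, and state changes is far more detailed; the principals, roles, and boundary here are invented for illustration.

```python
# Toy sketch of SMT-based RBAC boundary verification using the Z3
# Python bindings (pip install z3-solver).

from z3 import Solver, Bool, And, Not, unsat

# Proposed RBAC state after a change: one boolean per (principal, role).
alice_owner  = Bool("alice_owner")
alice_reader = Bool("alice_reader")
guest_owner  = Bool("guest_owner")   # 'guest' marks an external principal

state = And(alice_owner, alice_reader, Not(guest_owner))

# Security boundary from the threat model: external principals must
# never hold the Owner role on this scope.
boundary = Not(guest_owner)

solver = Solver()
solver.add(state, Not(boundary))     # search for a state escaping the boundary
if solver.check() == unsat:
    print("change verified: state stays within the security boundary")
else:
    print("violation witness:", solver.model())
```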

Now is the Time: Scalable and Cloud-supported Audio Conferencing using End-to-End Homomorphic Encryption
  • David Hasselquist
  • Niklas Johansson
  • Niklas Carlsson

Homomorphic encryption (HE) allows computations on encrypted data, leaking neither the input nor the computational output. While HE has historically been infeasible to use in practice, recent advancements have enabled its application in real-world settings. Motivated by the possibility of outsourcing heavy computations to the cloud while maintaining end-to-end security, in this paper we use HE to design a basic audio conferencing application and demonstrate that our design approach (including some advanced features) is both practical and scalable. First, by homomorphically mixing encrypted audio in an untrusted, honest-but-curious server, we demonstrate the practical use of HE in audio communication. Second, by using multiplication operations, we go beyond purely additive audio mixing and implement advanced example features capable of handling server-side mute and breakout rooms without the cloud server being able to extract sensitive user-specific metadata. Whereas the encryption and decryption times are shown to be orders of magnitude slower than generic AES encryption and roughly ten times slower than Signal's AES implementation, our solution approach is scalable and achieves end-to-end encryption while keeping performance well within the bounds of practical use. Third, besides studying the performance aspects, we also objectively evaluate the perceived audio quality, demonstrating that this approach also achieves excellent audio quality. Finally, our comprehensive evaluation and empirical findings provide valuable insights into the tradeoffs between HE schemes, their security configurations, and audio parameters. Combined, our results demonstrate that audio mixing using HE (including advanced features) can now be made both practical and scalable.
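
The sketch below illustrates the core idea, homomorphic mixing plus a multiplicative server-side mute, using the TenSEAL CKKS bindings as an assumed stand-in for whatever HE library the paper actually uses; the frame sizes and mask encoding are illustrative choices, not the paper's design.

```python
# Minimal sketch of homomorphic audio mixing with a server-side mute,
# using TenSEAL (pip install tenseal). Frames are tiny float vectors here.

import tenseal as ts

ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40

# Two participants' audio frames (normalized samples), encrypted client-side.
alice = ts.ckks_vector(ctx, [0.10, -0.20, 0.30, 0.05])
bob   = ts.ckks_vector(ctx, [0.05,  0.15, -0.10, 0.20])

# Server-side mute for Bob: an encrypted all-zeros mask (all-ones = unmuted).
bob_mask = ts.ckks_vector(ctx, [0.0, 0.0, 0.0, 0.0])

# The untrusted server mixes frames without ever seeing plaintext audio.
mixed = alice + (bob * bob_mask)

# Only clients holding the secret key can decrypt the mixed frame.
print(mixed.decrypt())   # ~[0.10, -0.20, 0.30, 0.05]: Bob's audio is muted
```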

Optimizing 0-RTT Key Exchange with Full Forward Security
  • Christian Göth
  • Sebastian Ramacher
  • Daniel Slamanig
  • Christoph Striecks
  • Erkan Tairi
  • Alexander Zikulnig

Secure communication protocols such as TLS 1.3 or QUIC do the heavy lifting for the security of today's Internet. These modern protocols provide modes that do not need an interactive handshake but allow sending cryptographically protected data with the first client message, in zero round-trip time (0-RTT). While this helps to reduce communication latency, such protocols provide only rather weak forward security.

In recent years, the academic community has investigated ways of mitigating this problem and achieving full forward security and replay resilience for such 0-RTT protocols. In particular, this can be achieved via a so-called Puncturable Key Encapsulation Mechanism (PKEM). While the first such schemes were too expensive to be used in practice, Derler et al. (EUROCRYPT 2018) proposed a variant of PKEMs called Bloom Filter Key Encapsulation Mechanisms (BFKEMs). Unfortunately, these primitives had only been investigated asymptotically, and no real benchmarks were conducted. Dallmeier et al. (CANS 2020) were the first to study their practical application within the QUIC protocol. They built upon a specific BFKEM instantiation and concluded that while it comes with significant computational overhead, its practical use is feasible, especially in applications where the increased CPU and memory load can be tolerated.
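
To make the puncturing idea concrete, here is a toy, non-cryptographic Python sketch of the Bloom-filter mechanism: each ciphertext tag hashes to k filter positions holding secret-key shares, one surviving share suffices to decapsulate, and puncturing after decapsulation deletes those shares so the same ciphertext can never be decapsulated again. Real BFKEMs replace the placeholder shares with pairing-based key material.

```python
# Toy sketch of Bloom-filter puncturing semantics (not actual crypto).

import hashlib

class ToyBFKEM:
    def __init__(self, m: int = 2 ** 16, k: int = 10):
        self.m, self.k = m, k
        self.shares = [object()] * m          # stand-ins for per-position keys

    def _positions(self, tag: bytes):
        return [int.from_bytes(hashlib.sha256(bytes([i]) + tag).digest(), "big")
                % self.m for i in range(self.k)]

    def can_decapsulate(self, tag: bytes) -> bool:
        # One surviving share suffices to recover the session key.
        return any(self.shares[p] is not None for p in self._positions(tag))

    def puncture(self, tag: bytes) -> None:
        for p in self._positions(tag):
            self.shares[p] = None             # share is deleted for good

sk = ToyBFKEM()
tag = b"first 0-RTT ciphertext"
assert sk.can_decapsulate(tag)
sk.puncture(tag)                              # done right after decapsulation
assert not sk.can_decapsulate(tag)            # replay now fails (forward security)
print("replayed ciphertext rejected; fresh ciphertexts still work w.h.p.")
```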

In this paper, we revisit their choice of the concrete BFKEM instantiation and show that by relying on the concept of Time-based BFKEMs (TB-BFKEMs), also introduced by Derler et al. (EUROCRYPT 2018), one can combine the advantages of computational efficiency and smaller key sizes. We investigate algorithmic as well as conceptual optimizations with various trade-offs and conclude that our approach seems favorable for many practical settings. Overall, this extends the applicability of 0-RTT protocols with strong security in practice.

SESSION: Special Session on Robust and Trusted Internet Geolocation Papers

CDGeB: Cloud Data Geolocation Benchmark
  • Adi Offer
  • Aviram Zilberman
  • Asaf Shabtai
  • Yuval Elovici
  • Rami Puzis

Cloud computing has revolutionized data processing and management, offering flexible and scalable infrastructure for the distribution of content, computing power, and services across the globe. Dynamic, flexible, and transparent reallocation of resources increases the use and effectiveness of cloud-based services. As rates of cloud adoption soar, privacy regulations and geopolitical security concerns introduce new challenges, including the assessment, validation, and enforcement of data geolocation. However, there is currently no standardized benchmark for this research domain. Therefore, this paper presents a novel dataset of measurements specifically designed to evaluate cloud data geolocation algorithms. Beyond evaluating data geolocation algorithms, our dataset can also be used for other data geolocation subtopics.
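
For intuition on what such algorithms do, the following minimal sketch shows a delay-based estimate of the kind a benchmark like this would evaluate: each RTT from a landmark with a known location upper-bounds the target's distance, since signals propagate at most roughly two-thirds the speed of light in fiber. All landmark coordinates and RTT values below are made up.

```python
# Minimal sketch of delay-based cloud data geolocation: the tightest
# RTT-derived distance bound gives a coarse location estimate.

C_FIBER_KM_PER_MS = 100.0    # ~ two-thirds the speed of light, in km per ms

landmarks = {                # landmark -> ((lat, lon), measured RTT in ms)
    "frankfurt": ((50.11, 8.68), 1.8),
    "virginia":  ((38.95, -77.45), 92.0),
    "singapore": ((1.35, 103.99), 165.0),
}

def geolocate(measurements: dict):
    """Return the landmark giving the tightest distance bound on the data."""
    best_radius, best = None, None
    for name, (coords, rtt_ms) in measurements.items():
        radius_km = (rtt_ms / 2.0) * C_FIBER_KM_PER_MS   # one-way distance bound
        if best_radius is None or radius_km < best_radius:
            best_radius, best = radius_km, (name, coords)
    return best, best_radius

(name, coords), radius = geolocate(landmarks)
print(f"data is within ~{radius:.0f} km of {name} at {coords}")
```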

Entangled Clouds: Measuring the Hosting Infrastructure of the Free Contents Web
  • Mohammed Alqadhi
  • Mohammed Alkinoon
  • Jie Lin
  • Ahmed Abdalaal
  • David Mohaisen

Free content websites (FCWs) are a critical part of the Internet, and their wide use makes understanding them essential. This study statistically explores the global distribution of free content websites by analyzing their hosting network scale, cloud service provider, and country-level distribution, both in aggregate and per content category, and by contrasting these measurements with the characteristics of premium content websites (PCWs). Our study further contrasts the distribution of these websites with that of general websites sampled from the Alexa top-1M and explores their security attributes using various security indicators.

We found that FCWs and PCWs are hosted mainly in medium-scale networks, a scale that is shown to be associated with a high concentration of malicious websites. Moreover, the cloud and country-level distributions of FCWs are shown to be heavy-tailed, although with unique patterns compared to PCWs. Our study contributes to understanding the FCW ecosystem through various quantitative analyses. The results highlight the possibility of containing the harm of malicious FCWs through effective isolation and filtering, thanks to their network-, cloud-, and country-level concentration.