MTD '23

Proceedings of the 10th ACM Workshop on Moving Target Defense
Last Update: 26 November 2023

SESSION: Session 1: MTD in Wireless Networks

MMP: A Dynamic Routing Protocol Design to Proactively Defend against Wireless Network Inference Attacks
  • Jinmiao Chen
  • Zhengping Jay Luo
  • Yuchen Liu
  • Shangqing Zhao

Network inference refers to the process of extracting sensitive information from a network without directly accessing it. This poses a significant threat to network security, since it allows attackers to infer sensitive information such as flow information. Possessing flow information about a wireless network can empower attackers to launch more sophisticated and targeted attacks. Network inference relies on consistent traffic patterns or behavior to establish the relationship between measured link metrics and flow information. Dynamic routing can therefore enhance resilience against network inference by proactively introducing variability into network traffic patterns, which raises the probability of a mismatch between the observed patterns and the actual ones. In this paper, we observe that the inference error is positively related to this mismatch. We therefore propose a dynamic routing protocol, called Max-Mismatch-Probability (MMP), which seeks to maximize the mismatch probability and thereby increase the inference error. We provide a theoretical analysis of the proposed protocol and show that the inference error of MMP is Θ(√N), which is verified by our experimental results.
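
As a rough illustration of the mismatch-maximization idea described above (a hypothetical sketch, not the authors' protocol; the path representation, the toy mismatch estimate, and all names are assumptions), the following Python snippet picks, among candidate routes, one whose use is least consistent with the traffic history an observer would have seen:

    # Hypothetical sketch of an MMP-style route selector (not the paper's code).
    # Assumption: the defender knows a set of candidate paths per flow and can
    # estimate how likely an observer's inferred path matches each candidate.
    import random

    def mismatch_probability(candidate_path, observed_history):
        """Toy estimate: paths used less often in the observed history are
        less likely to be inferred, hence yield a higher mismatch probability."""
        uses = observed_history.count(tuple(candidate_path))
        total = max(len(observed_history), 1)
        return 1.0 - uses / total

    def mmp_select_route(candidate_paths, observed_history):
        """Pick the candidate path that maximizes the estimated mismatch
        probability, breaking ties at random to add further variability."""
        best = max(mismatch_probability(p, observed_history) for p in candidate_paths)
        best_paths = [p for p in candidate_paths
                      if mismatch_probability(p, observed_history) == best]
        return random.choice(best_paths)

    # Toy usage: three candidate paths from node A to node D.
    history = [("A", "B", "D")] * 5 + [("A", "C", "D")]
    paths = [["A", "B", "D"], ["A", "C", "D"], ["A", "E", "D"]]
    print(mmp_select_route(paths, history))   # favors the rarely used path via E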

BlueShift: Probing Adaptive Frequency Hopping in Bluetooth
  • Tommy Chin
  • Noah Korzak
  • Kun Sun

Bluetooth technology is ubiquitous in today's world with the adoption of mobile phones, wearable media, and Internet-of-Things devices. Adaptive Frequency Hopping (AFH) enables Bluetooth-based technology to frequently change the wireless communication channel for user privacy and signal-noise reduction. The continuous switching of channel frequencies in AFH conceptually mirrors a moving target defense approach, as AFH increases the difficulty for adversaries to probe and monitor Bluetooth devices. In this short paper, we propose BlueShift, a systematic approach to defeat AFH in practice by identifying and tracking hopping patterns of Bluetooth Low Energy devices. Our real-world experiments demonstrate the plausibility of the proposed approach and identify key areas to enhance AFH's effectiveness as an MTD.
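
To illustrate why a hopping pattern can be tracked at all (a simplified model, not the BlueShift tool; the function names are assumptions), the sketch below simulates BLE's legacy Channel Selection Algorithm #1 and brute-forces the hop increment from a handful of observed channel indices:

    # Hedged sketch: simulate CSA #1 hopping and recover the hop increment
    # from observed channels, showing how predictable sequences weaken AFH as an MTD.
    def csa1_sequence(hop_increment, channel_map, start=0, length=8):
        """Generate data-channel indices under legacy CSA #1, remapping onto
        the currently used channels (channel_map is the list of used channels)."""
        used = sorted(channel_map)
        seq, unmapped = [], start
        for _ in range(length):
            unmapped = (unmapped + hop_increment) % 37
            ch = unmapped if unmapped in used else used[unmapped % len(used)]
            seq.append(ch)
        return seq

    def recover_hop_increment(observed, channel_map):
        """Try every legal hop increment (5..16) and return those consistent
        with the observed channel sequence."""
        return [h for h in range(5, 17)
                if csa1_sequence(h, channel_map, length=len(observed)) == observed]

    # Toy usage: all 37 data channels in use, true hop increment of 9.
    full_map = list(range(37))
    obs = csa1_sequence(9, full_map, length=6)
    print(recover_hop_increment(obs, full_map))   # -> [9]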

SESSION: Session 2: Rethinking Robust Web Defense

RanABD: MTD-Based Technique for Detection of Advanced Session-Replay Web Bots
  • Shadi Sadeghpour
  • Natalija Vlajic

In the current digital landscape, cyberattacks have become increasingly sophisticated in their attempts to evade detection. One such example is the session-replay web bot attack, in which hackers use previously recorded human mouse movements (i.e., sessions) to emulate human behavior on target web sites and apps. With the emergence of advanced AI, hackers are further expected to utilize these programs to generate carefully randomized session-replay bots that still exhibit human-like behavior but without replaying/repeating identical mouse trajectories, as was previously the case. Detecting such advanced bots in traditionally designed web pages and sites is exceptionally hard, if not impossible. In this paper, we propose RanABD, a novel defensive web page randomization technique that builds on the concepts of moving-target defense (MTD) and is designed to counter advanced session-replay web bots. RanABD performs randomized micro-modifications to the alignment of select visual HTML elements and element attributes in the target web page, while causing minimal disturbance to the page's overall appearance and functionality. By doing so, the technique ensures that the distances between the trajectories of genuine human visitors, as well as the trajectories of repeat visits by the same human user, are sufficiently separated in the feature space. For session-replay bot operators, the only way to bypass this defense is to increase the degree of randomization in replay sessions, but this approach is likely to backfire, as it inevitably results in outlier-like trajectories that are even easier to detect. To the best of our knowledge, this is the first research paper that explicitly addresses the issue of advanced session-replay bots and proposes a technique that can effectively detect these specific types of bots.
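
The following is a minimal sketch of the page-randomization idea (not the RanABD implementation; the element IDs, offset range, and CSS-transform approach are assumptions for illustration), applying small per-load offsets so replayed mouse trajectories no longer line up with the rendered page:

    # Illustrative sketch only: per page load, nudge a chosen set of HTML
    # elements by a few pixels, leaving the page visually unchanged to humans.
    import random

    def randomize_layout(element_ids, max_shift_px=3, seed=None):
        """Return a CSS snippet that shifts each listed element by a small
        random offset; element_ids and max_shift_px are hypothetical inputs."""
        rng = random.Random(seed)
        rules = []
        for el in element_ids:
            dx = rng.randint(-max_shift_px, max_shift_px)
            dy = rng.randint(-max_shift_px, max_shift_px)
            rules.append(f"#{el} {{ transform: translate({dx}px, {dy}px); }}")
        return "\n".join(rules)

    # Toy usage: regenerate on every request so each visit sees a slightly
    # different page geometry.
    print(randomize_layout(["login-btn", "search-box", "nav-menu"]))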

Rethinking Single Sign-On: A Reliable and Privacy-Preserving Alternative with Verifiable Credentials
  • Athan D. Johnson
  • Ifteher Alom
  • Yang Xiao

Single sign-on (SSO) has provided convenience to users in the web domain, as it allows a user to access various resource providers (RPs) through the identity provider (IdP)'s unified authentication portal. However, SSO also faces security and privacy problems, including IdP single-point failure and identity linkage. In this paper, we present the initial design of an alternative SSO solution, called VC-SSO, that addresses these security and privacy problems while preserving SSO's usability. VC-SSO leverages the recently emerged decentralized identifier (DID) and verifiable credential (VC) framework: a user only needs to authenticate with the IdP once to obtain a VC and can then generate multiple verifiable presentations (VPs) from the VC to access different RPs. This design relies on each RP having established a smart contract with the IdP that specifies the service agreement and the VP schema for user authorization. We hope the proposed VC-SSO design marks the first step toward a future SSO system that provides strong reliability and privacy to users under adversarial conditions.
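
A minimal sketch of the one-VC, many-VPs flow described above (the field names, the per-RP schema, and the hash-based "proof" are placeholders, not the paper's schema or a real cryptographic signature):

    # Hedged illustration of issuing one VC and deriving per-RP presentations.
    import hashlib, json, time

    def issue_vc(idp_did, user_did, claims):
        """IdP issues a verifiable credential after a single authentication.
        The SHA-256 digest stands in for a real signature (assumption)."""
        vc = {"issuer": idp_did, "subject": user_did,
              "claims": claims, "issued_at": int(time.time())}
        vc["proof"] = hashlib.sha256(json.dumps(vc, sort_keys=True).encode()).hexdigest()
        return vc

    def derive_vp(vc, rp_id, disclosed):
        """User derives a verifiable presentation for one RP, disclosing only
        the claims that RP's (hypothetical) smart-contract schema asks for."""
        return {"for_rp": rp_id,
                "credential_proof": vc["proof"],
                "disclosed_claims": {k: vc["claims"][k] for k in disclosed}}

    # Toy usage: authenticate once, then present different claim subsets per RP.
    vc = issue_vc("did:example:idp", "did:example:alice",
                  {"age_over_18": True, "email": "alice@example.org"})
    print(derive_vp(vc, "rp-video-site", ["age_over_18"]))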

SESSION: Session 3: Securing Emerging Technologies

Jailbreaker in Jail: Moving Target Defense for Large Language Models
  • Bocheng Chen
  • Advait Paliwal
  • Qiben Yan

Large language models (LLMs), known for their capability in understanding and following instructions, are vulnerable to adversarial attacks. Researchers have found that current commercial LLMs either fail to be "harmless" by producing unethical answers, or fail to be "helpful" by refusing to offer meaningful answers when faced with adversarial queries. To strike a balance between being helpful and harmless, we design a moving target defense (MTD) enhanced LLM system. The system aims to deliver non-toxic answers aligned with the outputs of multiple model candidates, making it more robust against adversarial attacks. We design a query and output analysis model to filter out unsafe or non-responsive answers. We evaluate eight of the most recent chatbot models with state-of-the-art adversarial queries. Our MTD-enhanced LLM system reduces the attack success rate from 37.5% to 0%, while decreasing the response refusal rate from 50% to 0%.
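
A toy sketch of the moving-target selection step described above (the keyword-based filters and stand-in "models" are assumptions, not the paper's analysis model): query several candidates, drop unsafe or refusing outputs, and answer with one of the remaining responses chosen at random.

    # Hedged illustration of randomized output selection across model candidates.
    import random

    def looks_unsafe(answer):
        """Placeholder toxicity check; a real system would use a safety classifier."""
        return any(word in answer.lower() for word in ("bomb", "credit card number"))

    def looks_refusal(answer):
        """Placeholder non-responsiveness check for blanket refusals."""
        return answer.lower().startswith(("i cannot", "i'm sorry", "as an ai"))

    def mtd_answer(query, models):
        """Query candidate models, filter unsafe or refusing outputs, and
        return one of the remaining answers chosen at random."""
        candidates = [m(query) for m in models]
        safe = [a for a in candidates if not looks_unsafe(a) and not looks_refusal(a)]
        return random.choice(safe) if safe else "No safe, responsive answer available."

    # Toy usage with stand-in "models" (plain functions).
    models = [lambda q: "I cannot help with that.",
              lambda q: "Here is a harmless, helpful answer.",
              lambda q: "Sure, here is how to build a bomb."]
    print(mtd_answer("adversarial query", models))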

Software and Behavior Diversification for Swarm Robotics Systems
  • Ao Li
  • Sinyin Chang
  • Guorui Li
  • Yuanhaur Chang
  • Nathan Fisher
  • Thidapat (Tam) Chantem

Inspired by natural swarms, swarm robotics systems are used in safety-critical tasks due to their scalability, redundancy, and adaptability. However, their design exposes them to two primary vulnerabilities. First, their homogeneity makes them susceptible to large-scale attacks. Second, logical flaws within swarm algorithms can be exploited, leading to mission failures or crashes. While existing studies can effectively identify these vulnerabilities through system testing and verification, such methods are often time-consuming and may need to be repeated after every software update. To address this, we propose a complementary, two-level diversification approach. The first level tackles system homogeneity through software diversification. The second level introduces algorithmic randomness to minimize the exploitability of logical flaws. By leveraging a social force model, we ensure that the introduced randomized behaviors do not compromise safety. Our evaluations show that the performance overheads remain within acceptable limits, notably at 2% for missions characterized by self-organizing behaviors.
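
As a rough illustration of safety-preserving behavior randomization via a social force model (a sketch under assumed force terms and gain ranges, not the paper's model), each robot combines goal attraction with neighbor repulsion while its gains are drawn per robot from a range assumed safe:

    # Hedged sketch: per-robot randomized gains diversify trajectories while
    # the repulsion term keeps a minimum separation from neighbors.
    import random

    def social_force(pos, goal, neighbors, attract_gain, repel_gain, safe_dist=1.0):
        """Resultant 2-D force: attraction toward the goal plus repulsion from
        robots closer than safe_dist."""
        fx = attract_gain * (goal[0] - pos[0])
        fy = attract_gain * (goal[1] - pos[1])
        for nx, ny in neighbors:
            dx, dy = pos[0] - nx, pos[1] - ny
            dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
            if dist < safe_dist:
                fx += repel_gain * dx / dist
                fy += repel_gain * dy / dist
        return fx, fy

    def diversified_gains(rng):
        """Draw per-robot gains from a range assumed safe, so each robot's
        behavior differs while collision avoidance is preserved."""
        return rng.uniform(0.5, 1.5), rng.uniform(1.0, 2.0)

    # Toy usage: one robot heading to (5, 5) with a nearby neighbor.
    rng = random.Random(42)
    ka, kr = diversified_gains(rng)
    print(social_force((0.0, 0.0), (5.0, 5.0), [(0.3, 0.2)], ka, kr))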