The Password-Based Key Derivation Function v2 (PBKDF2) is an important cryptographic primitive with practical relevance to many widely deployed security systems. We investigate accelerated attacks on PBKDF2 with commodity GPUs, reporting the fastest attack on the primitive to date and outperforming the previous state-of-the-art, oclHashcat. We apply our attack to the Microsoft .NET framework, showing that a consumer-grade GPU can break an ASP.NET password in less than 3 hours, and we discuss the application of our attack to WiFi Protected Access (WPA2).
We consider both algorithmic optimisations of crypto primitives and OpenCL kernel code optimisations and empirically evaluate the contribution of individual optimisations on the overall acceleration. In contrast to the common view that GPU acceleration is primarily driven by massively parallel hardware architectures, we demonstrate that a proportionally larger contribution to acceleration is made through effective algorithmic optimisations. Our work also contributes to understanding what is going on inside the black box of oclHashcat.
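To make the flavour of such algorithmic optimisations concrete, here is a minimal pure-Python sketch (an illustration, not the authors' GPU kernel code): the HMAC key is fixed across all PBKDF2 iterations, so the two SHA-1 states obtained after absorbing the padded key can be computed once and cloned, roughly halving the compression-function calls per iteration.

```python
import hashlib
import hmac
import struct

def pbkdf2_sha1(password: bytes, salt: bytes, iterations: int, dklen: int = 20) -> bytes:
    """Reference PBKDF2-HMAC-SHA1 with the standard state-caching optimisation."""
    if len(password) > 64:
        password = hashlib.sha1(password).digest()
    password = password.ljust(64, b"\x00")
    # The HMAC key never changes, so absorb the ipad/opad blocks exactly once.
    inner = hashlib.sha1(bytes(b ^ 0x36 for b in password))   # cached inner state
    outer = hashlib.sha1(bytes(b ^ 0x5C for b in password))   # cached outer state

    def prf(msg: bytes) -> bytes:
        i = inner.copy(); i.update(msg)
        o = outer.copy(); o.update(i.digest())
        return o.digest()

    out, block = b"", 1
    while len(out) < dklen:
        u = prf(salt + struct.pack(">I", block))
        t = int.from_bytes(u, "big")
        for _ in range(iterations - 1):
            u = prf(u)
            t ^= int.from_bytes(u, "big")      # T = U1 xor U2 xor ... xor Uc
        out += t.to_bytes(20, "big")
        block += 1
    return out[:dklen]

# Sanity check against the standard library implementation.
assert pbkdf2_sha1(b"password", b"salt", 4096) == \
       hashlib.pbkdf2_hmac("sha1", b"password", b"salt", 4096)
```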
We investigate nonce reuse issues with the GCM block cipher mode as used in TLS and focus in particular on AES-GCM, the most widely deployed variant. With an Internet-wide scan we identified 184 HTTPS servers repeating nonces, which fully breaks the authenticity of the connections. Affected servers include large corporations, financial institutions, and a credit card company. We present a proof of concept of our attack that allows violating the authenticity of affected HTTPS connections, which in turn can be utilized to inject seemingly valid content into encrypted sessions. Furthermore, we discovered over 70,000 HTTPS servers using random nonces, which puts them at risk of nonce reuse in the unlikely case that large amounts of data are sent over the same session.
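A minimal sketch (assuming the pyca/cryptography package) of why a repeated nonce is fatal even before the full "forbidden attack" that recovers the GHASH authentication key: AES-GCM is CTR mode plus a MAC, so two messages sealed under the same key and nonce share one keystream.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)                    # fatally reused for both messages below
aes = AESGCM(key)

p1 = b"attack at dawn!!"
p2 = b"attack at dusk!!"
c1 = aes.encrypt(nonce, p1, None)[:-16]   # strip the 16-byte GCM tag
c2 = aes.encrypt(nonce, p2, None)[:-16]

# Same key and nonce => identical CTR keystream, so c1 XOR c2 == p1 XOR p2:
# one known plaintext immediately reveals the other.
diff = bytes(a ^ b for a, b in zip(c1, c2))
assert bytes(a ^ b for a, b in zip(diff, p2)) == p1
```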
Rights Management Services (RMS) are used to enforce access control in a distributed environment, and to cryptographically protect companies' assets by restricting access rights, for example, to view-only, edit, print, etc., on a per-document basis. One of the most prominent RMS implementations is Microsoft RMS. It can be found in Active Directory (AD) and Azure. Previous research concentrated on generic weaknesses of RMS, but did not present attacks on real-world systems.
We provide a security analysis of Microsoft RMS and present two working attacks: (1.) We completely remove the RMS protection of a Word document on which we only have view-only permission, without having the right to edit it. This shows that, in contrast to claims made by Microsoft, Microsoft RMS can only be used to enforce all-or-nothing access. (2.) We extend this attack to be stealthy in the following sense: we show how to modify the content of an RMS write-protected Word document issued by our victim. The resulting document still claims to be write-protected, and that the modified content was generated by the victim. We show that these attacks are not limited to local instances of Microsoft AD, and can be extended to Azure RMS and Office 365. We responsibly disclosed our findings to Microsoft. They acknowledged our findings (MSRC Case 33210).
Long Term Evolution (LTE) is the most recent generation of mobile communications, promising increased transfer rates and enhanced security features. It is today's communication technology for the mobile Internet and is also being considered for use in critical infrastructures, making it an attractive target for a wide range of attacks. We evaluate the implementation correctness of LTE security functions that should protect personal data from compromise.
In this paper, we focus on two security aspects: user data encryption and network authentication. We develop a framework to analyze various LTE devices with respect to the implementations of their security-related functions. Using our framework, we identify several security flaws partially violating the LTE specification. In particular, we show that i) an LTE network can force the use of no encryption and ii) none of the tested devices informs the user when user data is sent unencrypted. Furthermore, we present iii) a Man-in-the-Middle (MitM) attack against an LTE device that does not fulfill the network authentication requirements. The discovered security flaws undermine the data protection objective of LTE and represent a threat to the users of mobile communication. We outline several countermeasures to cope with these vulnerabilities and make proposals for a long-term solution.
High-profile attacks such as Stuxnet and the cyber attack on the Ukrainian power grid have increased research in Industrial Control System (ICS) and Supervisory Control and Data Acquisition (SCADA) network security. However, due to the sensitive nature of these networks, there is little publicly available data for researchers to evaluate the effectiveness of proposed solutions. The lack of representative data sets makes evaluation and independent validation of emerging security solutions difficult and slows down progress towards effective and reusable solutions.
This paper presents our work to generate representative labeled data sets for SCADA networks that security researchers can use freely. The data sets include packet captures of both malicious and non-malicious Modbus traffic, with accompanying CSV files that contain labels to provide the ground truth for supervised machine learning.
To provide representative data at the network level, the data sets were generated in a SCADA sandbox, where electrical network simulators were used to introduce realism in the physical component. Also, real attack tools, some of them custom built for Modbus networks, were used to generate the malicious traffic. Even though they do not fully replicate a production network, these data sets represent a good baseline to validate detection tools for SCADA systems.
In this paper, a novel hardware-assisted rootkit is introduced, which leverages the performance monitoring unit (PMU) of a CPU. By configuring hardware performance counters to count specific architectural events, this research effort proves it is possible to transparently trap system calls and other interrupts driven entirely by the PMU. This offers an attacker the opportunity to redirect control flow to malicious code without requiring modifications to a kernel image.
The approach is demonstrated as a kernel-mode rootkit on both the ARM and Intel x86-64 architectures that is capable of intercepting system calls while evading current kernel patch protection implementations such as PatchGuard. A proof-of-concept Android rootkit is developed targeting ARM (Krait) chipsets found in millions of smartphones worldwide, and a similar Windows rootkit is developed for the Intel x86-64 architecture. The prototype PMU-assisted rootkit adds minimal overhead to Android, and less than 10% overhead to Windows OS. Further analysis into performance counters also reveals that the PMU can be used to trap returns from secure world on ARM as well as returns from System Management Mode on x86-64.
To fight the ever-increasing proliferation of novel malware, antivirus (AV) vendors have turned to emulation-based automated dynamic malware analysis. Malware authors have responded by creating malware that attempts to evade detection by behaving benignly while running in an emulator. Malware may detect emulation by looking for emulator “fingerprints” such as unique environmental values, timing inconsistencies, or bugs in CPU emulation.
Due to their immense complexity and the expert knowledge required to effectively analyze them, reverse-engineering AV emulators to discover fingerprints is an extremely challenging task. As an alternative, researchers have demonstrated fingerprinting attacks using simple black-box testing, but these techniques are slow, inefficient, and generally awkward to use.
We propose a novel black-box technique to efficiently extract emulator fingerprints without reverse-engineering. To demonstrate our technique, we implemented an easy-to-use tool and API called AVLeak. We present an evaluation of AVLeak against several current consumer AVs and show emulator fingerprints derived from our experimentation. We also propose a classification of fingerprints as they apply to consumer AV emulators. Finally, we discuss the defensive implications of our work, and future directions of research in emulator evasion and exploitation.
Hiding malware processes from fingerprinting is challenging. Current techniques like metamorphic algorithms and diversity generate different instances of a program, protecting it against static detection. Unfortunately, all existing techniques are prone to detection through behavioral analysis – a runtime analysis that records behavior (e.g., through system call invocations), and can detect executing diversified programs like malware.
We present malWASH, a dynamic diversification engine that executes an arbitrary program without being detected by dynamic analysis tools. Target programs are chopped into small components that are then executed in the context of other processes, hiding the behavior of the original program in a stream of benign behavior of a large number of processes. A scheduler connects these components and transfers state between the different processes. The execution of the benign processes is not impacted. Furthermore, malWASH ensures that the executing program remains persistent, complicating the removal process.
Last year, Joshua disclosed multiple vulnerabilities in Android's multimedia processing library libstagefright. This disclosure went viral under the moniker "Stagefright," garnered national press, and ultimately helped spur widespread change throughout the mobile ecosystem. Since initial disclosure, a multitude of additional vulnerabilities have been disclosed affecting the library.
In the course of his research, Joshua developed and shared multiple exploits for the issues he disclosed with Google. In response to Joshua and others' findings, the Android Security Team made many security improvements. Some changes took effect immediately, some later, and others still are set to ship with the next version of Android—Nougat.
Boundaries between layers of digital radio protocols have been breached by techniques like packet-in-packet: an attacker controlling the application layer payloads can, in fact, inject frames into lower layers such as PHY and LNK. But can a digital transmitter designed for a particular PHY inject frames into a different, non-compatible PHY network?
We present several case studies of such cross-protocol injection, and show that non-compatible radio PHYs sharing the same frequencies need not merely collide and jam each other, but can instead unexpectedly cross-talk. We propose a methodology for discovering such cross-talking PHYs systematically rather than serendipitously. No PHY is an island.
Hands-on cyber security training is generally accepted as an enjoyable and effective way of developing and practising skills that complement the knowledge gained by traditional education. At the same time, experience from organizing and participating in these events shows that there is still room for making a larger impact on the learners, and for providing more engaging and beneficial learning. In particular, the area of game and exercise design is not sufficiently well-developed. There is no comprehensive methodology or set of best practices that can be used to prepare, test, and carry out events.
We present the concept of a security game and lessons learned from a prototype game played by 260 participants. Based on the lessons, we describe the enhancements to the game design and a user study evaluating new game features. The results of the study show the importance of logging events which describe the course of the game. It also suggests what type of information can be predicted from the game logs and what can be found by other methods such as surveys.
The Extensible Markup Language (XML) has become a widely used data structure for web services, Single Sign-On, and various desktop applications. The core of all XML processing is the XML parser. Attacks on XML parsers, such as the Billion Laughs and XML External Entity (XXE) attacks, have been known since 2002. Nevertheless, even experienced companies such as Google and Facebook have recently been affected by such vulnerabilities.
In this paper we systematically analyze known attacks on XML parsers, along with the challenges they pose and their solutions. Moreover, as a result of our in-depth analysis, we found three novel attacks.
We conducted a large-scale analysis of 30 different XML parsers across six different programming languages. We created an evaluation framework that applies different variants of 17 XML parser attacks and executed a total of 1459 attack vectors to provide valuable insight into each parser's configuration. We found vulnerabilities in the default configurations of 66% of all tested parsers. In addition, we comprehensively inspected parser features that prevent the attacks, show their unexpected side effects, and propose secure configurations.
While machine learning is a powerful tool for data analysis and processing, traditional machine learning methods were not designed to operate in the presence of adversaries. They are based on statistical assumptions about the distribution of the input data, and they rely on training data derived from the input data to construct models for analyses. Adversaries may exploit these characteristics to disrupt analytics, cause analytics to fail, or engage in malicious activities that fail to be detected.
While these vulnerabilities pose a challenge to using machine learning for security applications, they may also pose opportunities to disrupt privacy invasive learning systems. We will discuss techniques, challenges, and future research directions for reverse engineering analytics, secure learning and learning-based security applications.
Since its creation in 2009, Bitcoin has used a hash-based proof-of-work to generate new blocks, and create a single public ledger of transactions. The hash-based computational puzzle employed by Bitcoin is instrumental to its security, preventing Sybil attacks and making double-spending attacks more difficult. However, there have been concerns over the efficiency of this proof-of-work puzzle, and alternative “useful” proofs have been proposed.
In this paper, we present DDoSCoin, which is a cryptocurrency with a malicious proof-of-work. DDoSCoin allows miners to prove that they have contributed to a distributed denial of service attack against specific target servers. This proof involves making a large number of TLS connections to a target server, and using cryptographic responses to prove that a large number of connections has been made. Like proof-of-work puzzles, these proofs are inexpensive to verify, and can be made arbitrarily difficult to solve.
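The verification step can be paraphrased in a few lines; this is a hedged sketch of the idea rather than DDoSCoin's exact consensus rules. A fresh, unpredictable server-supplied value from a completed TLS handshake (the ServerHello random) is bound to the miner and must hash below a difficulty target, so on average the miner must complete many handshakes with the victim to find a winner, while checking a proof costs a single hash.

```python
import hashlib

TARGET = 2 ** 240   # lower target => more handshakes needed per valid proof

def valid_proof(server_random: bytes, miner_pubkey: bytes) -> bool:
    """Accept iff server-supplied randomness, bound to this miner, hashes
    below the target. The miner cannot forge server_random without actually
    connecting to the victim, and cannot grind it offline."""
    digest = hashlib.sha256(server_random + miner_pubkey).digest()
    return int.from_bytes(digest, "big") < TARGET
```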
Research on cybersecurity competitions is still in its nascent state, and many questions remain unanswered, including how effective these competitions actually are at influencing career decisions and attracting a diverse participant base. The present research aims to address these questions through surveying a sample of ex-cybersecurity competition participants from New York University's Cyber-Security Awareness Week (CSAW). 195 survey respondents reported on their self-esteem, general self-efficacy, and perceived efficacy in cybersecurity-related tasks, along with important competition- and career-related variables such as reasons for participating, competition performance, appeal and effectiveness of competitions, job satisfaction, and perceived organizational fit. Correlational analyses showed that confidence in cybersecurity-related tasks was positively related to interest in cybersecurity, performance within the competition, job satisfaction within a cybersecurity job, and perceived organizational fit within cybersecurity organizations. Specific self-efficacy was better at predicting competition performance than general self-efficacy or self-esteem, but was unrelated to participants' positive image of competitions and whether or not the cybersecurity competitions influenced their career decisions. Instead, general self-efficacy was a better predictor of positive competition experience, even more so than performance within the competition. Overall, the results show that participants with self-confidence in their cybersecurity-relevant skills are more likely to do well in the competition and be satisfied when entering a cybersecurity career, but any participant with high general self-efficacy will likely still have a positive experience when participating in competitions.
Certain usable security problems—like password selection, or warning behavior—are well-studied and oft-discussed at conferences. What problems aren't we addressing as a community? Where is more research needed, and why aren't more researchers working on those problems? In this discussion, the audience will work together to brainstorm for new research topics in the area of usable security.
To kick off the discussion, I'll start by talking about the need for more research on global and underserved communities. Until recently, most research has focused on university students. I'll share previously unpublished Chrome data that illustrates how different groups of people use and experience the Internet very differently. How can we do better at capturing diverse perspectives in user research? Then, it'll be your turn to pitch questions as we open up the floor for discussion. Should we be focusing more on the Internet of Things, self-driving software, or something else altogether...?
Sensors measure physical quantities of the environment for sensing and actuation systems, and are widely used in many commercial embedded systems such as smart devices, drones, and medical devices because they offer convenience and accuracy. As many sensing and actuation systems depend entirely on data from sensors, these systems are naturally vulnerable to sensor spoofing attacks that use fabricated physical stimuli. As a result, the systems become entirely insecure and unsafe.
In this paper, we propose a new type of sensor spoofing attack based on saturation. A sensor exhibits a linear characteristic between its input physical stimulus and output sensor value in a typical operating region. However, if the input exceeds the upper bound of the operating region, the output saturates and no longer reflects changes of the input. Using saturation, our attack can make a sensor ignore legitimate inputs. To demonstrate our sensor spoofing attack, we target two medical infusion pumps equipped with infrared (IR) drop sensors to precisely control the amount of medicine injected into a patient's body. Our experiments based on analyses of the drop sensors show that their output could be manipulated by saturating the sensors using an additional IR source. In addition, by analyzing the infusion pumps' firmware, we identify a vulnerability in the mechanism handling the output of the drop sensors, and implement a sensor spoofing attack that can bypass the alarm systems of the targets. As a result, we show that both over-infusion and under-infusion are possible: our spoofing attack can inject up to 3.33 times the intended amount of fluid, or as little as 0.65 times the intended amount, over a 10-minute period.
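A toy transfer-curve model (with assumed gain and rail values, not the pumps' real characteristics) shows why saturation blinds a drop sensor: once an attacker's IR source pushes the operating point past the linear region, the dip caused by a falling drop no longer changes the output.

```python
def ir_sensor(irradiance: float, gain: float = 1.0, v_max: float = 3.3) -> float:
    """Toy sensor transfer curve: linear in the operating region, flat beyond it."""
    return max(0.0, min(gain * irradiance, v_max))

ambient = 0.5      # legitimate baseline illumination
drop_dip = -0.3    # a falling drop briefly occludes the IR beam
attacker = 10.0    # strong external IR source drives the sensor into saturation

# Normal operation: the drop produces a clearly visible dip in the output.
print(ir_sensor(ambient + drop_dip) - ir_sensor(ambient))                         # -0.3
# Under attack: both readings sit on the saturated plateau, the drop vanishes.
print(ir_sensor(ambient + attacker + drop_dip) - ir_sensor(ambient + attacker))   #  0.0
```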
In order to accomplish cyber security tasks, one needs to know how to analyze complex data and when and how to use tools. Many hands-on exercises for cybersecurity courses have been developed to teach these skills. There is a spectrum of ways that these exercises can be taught. On one end of the spectrum are prescriptive exercises, in which students follow step-by-step instructions to run scripted exploits, perform penetration testing, do security audits, etc. On the other end of the spectrum are open-ended exercises and capture-the-flag activities, where little guidance is given on how to proceed.
This paper reports on our experience with trying to find a balance between these extremes in the context of one of the suite of cybersecurity exercises that we have developed in the EDURange framework. The particular exercise that we present teaches students about dynamic analysis of binaries using strace. We have found that students are most successful in these exercises when they are given the right amount of prerequisite knowledge and guidance as well as some opportunity to find creative solutions. Our scenarios are specifically designed to develop analysis skills and the security mindset in students and to complement the theoretical aspects of the discipline and develop practical skills.
Sensors and actuators are essential components of cyber-physical systems. They establish the bridge between cyber systems and the real world, enabling these systems to appropriately react to external stimuli. Among the various types of sensors, active sensors are particularly well suited to remote sensing applications, and are widely adopted for many safety-critical systems such as automobiles, unmanned aerial vehicles, and medical devices. However, active sensors are vulnerable to spoofing attacks, despite their critical role in such systems. They cannot adopt conventional challenge-response authentication procedures with the object of measurement, because they cannot determine the response signal in advance, and their emitted signal is transparently delivered to the attacker as well.
Recently, PyCRA, a physical challenge-response authentication scheme for active sensor spoofing detection, has been proposed. Although it is claimed to be both robust and generalizable, we discovered a fundamental vulnerability that allows an attacker to circumvent detection. In this paper, we show that PyCRA can be completely bypassed, both by theoretical analysis and by real-world experiment. For the experiment, we implemented the authentication mechanism of PyCRA on a real-world medical drop counter, and successfully bypassed it with only a low-cost microcontroller and a couple of crude electrical components. This shows that there is currently no effective, robust, and generalizable defense scheme against active sensor spoofing attacks.
In many cases students in higher education are driven by assessments and achievements rather than the “learning journey” that can be achieved through full engagement with provided material. Novel approaches are needed to improve engagement in and out of class time, and to achieve a greater depth of learning. Gamification, “the use of game design elements in non-game contexts”, has been applied to higher education to improve engagement, and research also suggests that serious games can be used for games-based learning, providing simulated learning environments and increasing motivation.
This paper presents the design and evaluation of a gamified computer security module, with a unique approach to assessed learning activities. Learning activities (many developed as open educational resources (OER)) and an assessment structure were developed. A new free and open source software (FOSS) virtual learning environment (VLE) was implemented, which enables the use of three types of experience points (XP), and a semi-automated marking scheme for timely, clear, transparent, and feedback-oriented marking.
The course and VLE were updated and evaluated over two years. Qualitative and descriptive results were positive and encouraging. However, the increased satisfaction was ultimately not reflected in statistically significant differences in quantitative measurements of motivation, and the teaching workload of the gamified module was noteworthy.
Consumer vehicles have been proven to be insecure; the addition of electronics to monitor and control vehicle functions has added complexity, resulting in safety-critical vulnerabilities. Heavy commercial vehicles have also begun adding electronic control systems similar to those in consumer vehicles. We show how the openness of the SAE J1939 standard, used across all US heavy vehicle industries, gives easy access for safety-critical attacks and that these attacks aren't limited to one specific make, model, or industry.
We test our attacks on a 2006 Class-8 semi tractor and a 2001 school bus. With these two vehicles, we demonstrate how simple it is to replicate the kinds of attacks used on consumer vehicles and that it is possible to use the same attack on other vehicles that use the SAE J1939 standard. We show safety-critical attacks that include the ability to accelerate a truck in motion, disable the driver's ability to accelerate, and disable the vehicle's engine brake. We conclude with a discussion of possible additional attacks and potential remote attack vectors.
There is a recognized shortage of students who are interested in learning computer and network security. One of the underlying reasons for this is a lack of awareness and motivation to study the subject. In order to tackle this problem, we have developed an introductory cryptography and security curriculum that attempts to inspire students to pursue this career path.
Towards this end, the curriculum we have designed motivates the importance of the field and contains a variety of activities intended not only to teach students basic concepts, but also allow them to develop technical skills in a fun and engaging manner. In particular, we employ a novel set of capture-the-flag (CTF) exercises and a physical activity based on an urban race, both of which are tied into a fictional story that students act out. The storyline follows a book series that many young adults of this generation are familiar with: the Divergent books written by Veronica Roth [1]. Using this approach, we have successfully delivered our curriculum at multiple schools throughout Oregon.
There has been a recent surge in interest in autonomous robots and vehicles. From the Google self-driving car, to autonomous delivery robots, to hobbyist UAVs, there is a staggering variety of proposed deployments for autonomous vehicles. Ensuring that such vehicles can plan and execute routes safely is crucial.
The key insight of our paper is that the sensors that autonomous vehicles use to navigate represent a vector for adversarial control. With direct knowledge of how sensor algorithms operate, the adversary can manipulate the victim's environment to form an implicit control channel on the victim. We craft an attack based on this idea, which we call a sensor input spoofing attack.
We demonstrate a sensor input spoofing attack against the popular Lucas-Kanade method for optical flow sensing and characterize the ability of an attacker to trick optical flow via simulation. We also demonstrate the effectiveness of our optical flow sensor input spoofing attack against two consumer-grade UAVs, the AR.Drone 2.0 and the APM 2.5 ArduCopter. Finally, we introduce a method for defending against such an attack on optical flow sensors, both using the RANSAC algorithm and a more robust weighted RANSAC algorithm to synthesize sensor outputs.
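As a rough illustration of the consensus idea behind the defense (a simplified, unweighted variant, not the authors' exact algorithm): if most of the scene moves coherently, flow vectors induced by an attacker-controlled pattern fall outside the consensus set and are voted down.

```python
import numpy as np

def ransac_translation(flow, iters=200, thresh=0.5, rng=None):
    """Minimal RANSAC estimate of global image translation from per-pixel
    optical-flow vectors (N x 2). Spoofed regions appear as outliers and
    are discarded by the consensus step."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_inliers = np.zeros(2), 0
    for _ in range(iters):
        cand = flow[rng.integers(len(flow))]              # minimal sample: one vector
        inliers = np.linalg.norm(flow - cand, axis=1) < thresh
        if inliers.sum() > best_inliers:
            best_inliers = inliers.sum()
            best = flow[inliers].mean(axis=0)             # refit on the consensus set
    return best

true_motion = np.array([1.0, 0.0])
noise = 0.05 * np.random.default_rng(1).standard_normal((90, 2))
genuine = np.tile(true_motion, (90, 1)) + noise
spoof = np.tile([-5.0, 2.0], (10, 1))                     # attacker-controlled pattern
print(ransac_translation(np.vstack([genuine, spoof])))    # ~[1, 0] despite the spoofed 10%
```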
The constant-time programming discipline is an effective countermeasure against timing attacks, which can lead to complete breaks of otherwise secure systems. However, adhering to constant-time programming is hard on its own, and extremely hard under additional efficiency and legacy constraints. This makes automated verification of constant-time code an essential component for building secure software.
We propose a novel approach for verifying constant-time security of real-world code. Our approach is able to validate implementations that locally and intentionally violate the constant-time policy, when such violations are benign and leak no more information than the public outputs of the computation. Such implementations, which are used in cryptographic libraries to obtain important speedups or to comply with legacy APIs, would be declared insecure by all prior solutions.
We implement our approach in a publicly available, cross-platform, and fully automated prototype, ct-verif, that leverages the SMACK and Boogie tools and verifies optimized LLVM implementations. We present verification results obtained over a wide range of constant-time components from the NaCl, OpenSSL, FourQ and other off-the-shelf libraries. The diversity and scale of our examples, as well as the fact that we deal with top-level APIs rather than being limited to low-level leaf functions, distinguishes ct-verif from prior tools.
Our approach is based on a simple reduction of constant-time security of a program P to safety of a product program Q that simulates two executions of P. We formalize and verify the reduction for a core high-level language using the Coq proof assistant.
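A toy Python analogue of that reduction (ct-verif itself operates on LLVM bitcode and proves the property for all inputs, not just one test): self-composition runs the program twice on equal public inputs but different secrets, and checks that the attacker-observable traces coincide.

```python
def password_check_leaky(secret: bytes, guess: bytes, trace: list) -> bool:
    """Early-exit comparison; `trace` records each branch decision,
    i.e. the attacker-observable control flow."""
    for a, b in zip(secret, guess):
        trace.append(a == b)          # the branch condition is an observation
        if a != b:
            return False
    return len(secret) == len(guess)

def product_check(prog, secret1, secret2, public) -> bool:
    """Self-composition: simulate two executions that agree on public inputs
    but differ in secrets; the program is constant-time (for these inputs)
    iff both observation traces are identical."""
    t1, t2 = [], []
    prog(secret1, public, t1)
    prog(secret2, public, t2)
    return t1 == t2

# Different secrets yield different traces: the comparison leaks.
print(product_check(password_check_leaky, b"hunter2", b"abcdefg", b"hunter1"))  # False
```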
Row hammer attacks exploit electrical interactions between neighboring memory cells in high-density dynamic random-access memory (DRAM) to induce memory errors. By rapidly and repeatedly accessing DRAMs with specific patterns, an adversary with limited privilege on the target machine may trigger bit flips in memory regions that he has no permission to access directly. In this paper, we explore row hammer attacks in cross-VM settings, in which a malicious VM exploits bit flips induced by row hammer attacks to crack memory isolation enforced by virtualization. To do so with high fidelity, we develop novel techniques to determine the physical address mapping in DRAM modules at runtime (to improve the effectiveness of double-sided row hammer attacks), methods to exhaustively hammer a large fraction of physical memory from a guest VM (to collect exploitable vulnerable bits), and innovative approaches to break Xen paravirtualized memory isolation (to access arbitrary physical memory of the shared machine). Our study also suggests that the demonstrated row hammer attacks are applicable in modern public clouds where Xen paravirtualization technology is adopted. This shows that the presented cross-VM row hammer attacks are of practical importance.
Floating-point computations introduce several side channels. This paper describes the first solution that closes these side channels while preserving the precision of non-secure executions. Our solution exploits microarchitectural features of the x86 architecture along with novel compilation techniques to provide low overhead.
Because of the details of x86 execution, the evaluation of floating-point side channel defenses is quite involved, but we show that our solution is secure, precise, and fast. Our solution closes more side channels than any prior solution. Despite the added security, our solution does not compromise on the precision of the floating-point operations. Finally, for a set of microkernels, our solution is an order of magnitude more efficient than the previous solution.
In the absence of hardware-supported segmentation, many state-of-the-art defenses resort to “hiding” sensitive information at a random location in a very large address space. This paper argues that information hiding is a weak isolation model and shows that attackers can find hidden information, such as CPI’s SafeStacks, in seconds—by means of thread spraying. Thread spraying is a novel attack technique which forces the victim program to allocate many hidden areas. As a result, the attacker has a much better chance to locate these areas and compromise the defense. We demonstrate the technique by means of attacks on Firefox, Chrome, and MySQL. In addition, we found that it is hard to remove all sensitive information (such as pointers to the hidden region) from a program and show how residual sensitive information allows attackers to bypass defenses completely.
We also show how we can harden information hiding techniques by means of an Authenticating Page Mapper (APM), which builds on a user-level page-fault handler to authenticate arbitrary memory reads/writes in the virtual address space. APM bootstraps protected applications with a minimum-sized safe area. Every time the program accesses this area, APM authenticates the access operation and, if legitimate, expands the area on demand. We demonstrate that APM hardens information hiding significantly at an average overhead of 0.3% on baseline SPEC CPU 2006, 0.0% on SPEC with SafeStack, and 1.4% on SPEC with CPI.
For over 30 years, password requirements and feedback have largely remained a product of LUDS: counts of lower- and uppercase letters, digits and symbols. LUDS remains ubiquitous despite being a conclusively burdensome and ineffective security practice.
zxcvbn is an alternative password strength estimator that is small, fast, and crucially no harder than LUDS to adopt. Using leaked passwords, we compare its estimations to the best of four modern guessing attacks and show it to be accurate and conservative at low magnitudes, suitable for mitigating online attacks. We find 1.5 MB of compressed storage is sufficient to accurately estimate the best-known guessing attacks up to 10^5 guesses, or up to 10^4 and 10^3 guesses given 245 kB and 29 kB, respectively. zxcvbn can be adopted with 4 lines of code and downloaded in seconds. It runs in milliseconds and works as-is on web, iOS and Android.
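For illustration, adoption via the community Python port might look as follows (the original library is JavaScript; the API names here are the port's and worth double-checking against its documentation):

```python
# pip install zxcvbn   (community Python port of the JavaScript original)
from zxcvbn import zxcvbn

result = zxcvbn("Tr0ub4dor&3", user_inputs=["alice", "example.com"])
print(result["guesses"])                   # estimated number of guesses to crack
print(result["score"])                     # 0-4 bucket, suitable for a strength meter
print(result["feedback"]["suggestions"])   # actionable feedback for the user
```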
ASLR is no longer a strong defense in itself, but it still serves as a foundation for sophisticated defenses that use randomization for pseudo-isolation. Crucially, these defenses hide sensitive information (such as shadow stacks and safe regions) at a random position in a very large address space. Previous attacks on randomization-based information hiding rely on complicated side channels and/or probing of the mapped memory regions. Assuming no weaknesses exist in the implementation of hidden regions, the attacks typically lead to many crashes or other visible side-effects. For this reason, many researchers still consider the pseudo-isolation offered by ASLR sufficiently strong in practice.
We introduce powerful new primitives to show that this faith in ASLR-based information hiding is misplaced, and that attackers can break ASLR and find hidden regions on 32-bit and 64-bit Linux systems quickly with very few malicious inputs. Rather than building on memory accesses that probe the allocated memory areas, we determine the sizes of the unallocated holes in the address space by repeatedly allocating large chunks of memory. Given the sizes, an attacker can infer the location of the hidden region with few or no side-effects. We show that allocation oracles are pervasive and evaluate our primitives on real-world server applications.
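In-process, the primitive reduces to a binary search over allocation sizes; a minimal Unix-only sketch of the oracle (the real attack drives the victim's allocator remotely, e.g., through request handling, rather than calling mmap itself):

```python
import mmap

PAGE = mmap.PAGESIZE

def largest_hole_pages(lo=1, hi=(1 << 47) // PAGE):
    """Binary-search (in pages) the largest contiguous unallocated region:
    an anonymous mmap succeeds iff some hole in the address space fits it,
    and each probe is unmapped immediately, so mapped (possibly hidden)
    regions are never touched and nothing crashes."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        try:
            m = mmap.mmap(-1, mid * PAGE,
                          flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
            m.close()
            lo = mid              # a hole of `mid` pages exists
        except (ValueError, OSError):
            hi = mid - 1          # too large for any hole
    return lo * PAGE

print(f"largest hole: ~{largest_hole_pages() >> 30} GiB")
```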
Many security protocols still rely on manual fingerprint comparisons for authentication. The most well-known and widely used key-fingerprint representations are hexadecimal strings, as used in various security tools. With the introduction of end-to-end security in WhatsApp and other messengers, the discussion of how best to represent key fingerprints to users is receiving a lot of interest.
We conduct a 1,047-participant study evaluating six different textual key-fingerprint representations with regard to their performance and usability. We focus on textual fingerprints as the most robust and deployable representation.
Our findings show that the currently used hexadecimal representation is more prone to partial preimage attacks in comparison to others. Based on our findings, we make the recommendation that two alternative representations should be adopted. The highest attack detection rate and best usability perception is achieved with a sentence-based encoding. If language-based representations are not acceptable, a simple numeric approach still outperforms the hexadecimal representation.
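For intuition, converting fingerprint bytes into words is straightforward; a toy sketch with an 8-word list follows (real schemes use much larger lists, e.g., one word per byte, and a grammar for sentence encodings):

```python
import hashlib

WORDS = ["alpha", "bravo", "cargo", "delta", "echo", "forest", "gale", "hotel"]

def words_from_fingerprint(fp: bytes, n_words: int = 12) -> str:
    """Encode the low bits of a fingerprint as words, 3 bits per word here.
    Word-level differences are far easier for humans to spot than single
    hex-digit differences, which partial-preimage attacks exploit."""
    bits = int.from_bytes(fp, "big")
    picked = []
    for _ in range(n_words):
        picked.append(WORDS[bits & 0b111])
        bits >>= 3
    return " ".join(picked)

fp = hashlib.sha256(b"example public key").digest()
print(fp.hex()[:16], "...")          # hex representation (prefix)
print(words_from_fingerprint(fp))    # word representation
```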
Despite numerous attempts to mitigate code-reuse attacks, Return-Oriented Programming (ROP) is still at the core of exploiting memory corruption vulnerabilities. Most notably, in JIT-ROP, an attacker dynamically searches for suitable gadgets in executable code pages, even if they have been randomized. JIT-ROP seemingly requires that code is (i) readable (to find gadgets at run time) and (ii) executable (to mount the overall attack). As a response, Execute-no-Read (XnR) schemes have been proposed to revoke the read privilege of code, such that an adversary can no longer inspect the code after fine-grained code randomizations have been applied.
We revisit these “inherent” requirements for mounting JIT-ROP attacks. We show that JIT-ROP attacks can be mounted without ever reading any code fragments, but instead by injecting predictable gadgets via a JIT compiler by carefully triggering useful displacement values in control flow instructions. We show that defenses deployed in all major browsers (Chrome, MS IE, Firefox) do not protect against such gadgets, nor do the current XnR implementations protect against code injection attacks. To extend XnR’s guarantees against JIT-compiled gadgets, we propose a defense that replaces potentially dangerous direct control flow instructions with indirect ones at an overall performance overhead of less than 2% and a code-size overhead of 26% on average.
We describe a highly optimized protocol for general purpose secure two-party computation (2PC) in the presence of malicious adversaries. Our starting point is a protocol of Kolesnikov et al. (TCC 2015). We adapt that protocol to the online/offline setting, where two parties repeatedly evaluate the same function (on possibly different inputs each time) and perform as much of the computation as possible in an offline preprocessing phase before their inputs are known. Along the way we develop several significant simplifications and optimizations to the protocol.
We have implemented a prototype of our protocol and report on its performance. When two parties on Amazon servers in the same region use our implementation to securely evaluate the AES circuit 1024 times, the amortized cost per evaluation is 5.1ms offline + 1.3ms online. The total offline+online cost of our protocol is in fact less than the online cost of any reported protocol with malicious security. For comparison, our protocol’s closest competitor (Lindell & Riva, CCS 2015) uses 74ms offline + 7ms online in an identical setup.
Our protocol can be further tuned to trade performance for leakage. As an example, the performance in the above scenario improves to 2.4ms offline + 1.0ms online if we allow an adversary to learn a single bit about the honest party's input with probability 2^-20 (but not violate any other security property, e.g. correctness).
In this paper we explore several contexts where an adversary has an upper hand over the defender by using special hardware in an attack. These include password processing, hard-drive protection, cryptocurrency mining, resource sharing, code obfuscation, etc.
We suggest memory-hard computing as a generic paradigm, where every task is amalgamated with a certain procedure requiring intensive access to RAM both in terms of size and (very importantly) bandwidth, so that transferring the computation to GPU, FPGA, and even ASIC brings little or no cost reduction. Cryptographic schemes that run in this framework become egalitarian in the sense that both users and attackers are equal in the price-performance ratio conditions.
Based on existing schemes like Argon2 and the recent generalized-birthday proof-of-work, we suggest a generic framework and two new schemes within it.
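The core mechanism can be sketched in a few lines (a simplified, unoptimised Argon2-style filling loop, not one of the proposed schemes themselves): each block depends on a data-dependent earlier block, so evaluating cheaply demands keeping essentially the whole array in fast RAM, and trading memory for recomputation blows up time.

```python
import hashlib

def memory_hard_fill(seed: bytes, n_blocks: int = 1 << 16) -> bytes:
    """Argon2-style filling sketch: block i depends on block i-1 and on a
    pseudo-randomly indexed earlier block, forcing ~n_blocks of state to
    stay resident. GPUs/ASICs gain little because the bottleneck is RAM
    size and bandwidth, not arithmetic."""
    blocks = [hashlib.blake2b(seed).digest()]
    for i in range(1, n_blocks):
        j = int.from_bytes(blocks[i - 1][:8], "big") % i   # data-dependent back-reference
        blocks.append(hashlib.blake2b(blocks[i - 1] + blocks[j]).digest())
    return blocks[-1]

print(memory_hard_fill(b"egalitarian").hex()[:32])
```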
Blackhat Search Engine Optimization (SEO) has been widely used to promote spam or malicious web sites. Traditional blackhat SEO campaigns often target hot keywords and establish link networks by spamming popular forums or compromising vulnerable sites. However, such SEO campaigns are actively disrupted by search engine providers, making the operational cost much higher in recent years. In this paper, we reveal a new type of blackhat SEO infrastructure (called a “spider pool”) which follows a different operational model. The owners of spider pools use cheap domains with low PageRank (PR) values to construct link networks and poison long-tail keywords. To get better rankings for their promoted content, the owners have to reduce indexing latencies by search engines. To this end, they abuse wildcard DNS to create virtually infinite sites and construct complicated loop structures to force search-engine crawlers to visit them relentlessly.
We carried out a comprehensive study to understand this emerging threat. As a starting point, we infiltrated a spider pool service and built a detection system to explore all the recruited SEO domains to learn how they were orchestrated. Exploiting the unique features of the spider pool, we developed a scanner which examined over 13 million domains under 22 TLDs/SLDs and discovered over 458K SEO domains. Finally, we measured the spider-pool ecosystem on top of these domains and analyzed the crawling results from 21 spider pools. The measurement results reveal their infrastructure features, customer categories, and impact on search engines. We hope our study can inspire new mitigation methods and improve the ranking or indexing metrics of search engines.
Recent years have seen extensive adoption of domain generation algorithms (DGA) by modern botnets. The main goal is to generate a large number of domain names and then use a small subset for actual C&C communication. This makes DGAs very compelling for botmasters to harden the infrastructure of their botnets and make it resilient to blacklisting and attacks such as takedown efforts. While early DGAs were used as a backup communication mechanism, several new botnets use them as their primary communication method, making it extremely important to study DGAs in detail.
In this paper, we perform a comprehensive measurement study of the DGA landscape by analyzing 43 DGA-based malware families and variants. We also present a taxonomy for DGAs and use it to characterize and compare the properties of the studied families. By reimplementing the algorithms, we pre-compute all possible domains they generate, covering the majority of known and active DGAs. Then, we study the registration status of over 18 million DGA domains and show that corresponding malware families and related campaigns can be reliably identified by pre-computing future DGA domains. We also give insights into botmasters' strategies regarding domain registration and identify several pitfalls in previous takedown efforts of DGA-based botnets. We will share the dataset for future research and will also provide a web service to check domains for potential DGA identity.
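For readers unfamiliar with DGAs, a toy date-seeded generator illustrates the mechanism the study reimplements at scale: bots and botmaster derive the same candidate domains independently, and a defender who has the algorithm can pre-compute (and block or sinkhole) them for any future date.

```python
import hashlib
from datetime import date

def dga(day: date, count: int = 5, tld: str = ".com"):
    """Toy date-seeded DGA in the style of the surveyed families: the seed
    is public (today's date), so anyone with the algorithm can enumerate
    the candidate C&C domains for any date, past or future."""
    seed = day.isoformat().encode()
    domains = []
    for i in range(count):
        h = hashlib.md5(seed + bytes([i])).hexdigest()
        domains.append(h[:12] + tld)
    return domains

print(dga(date(2016, 8, 10)))
```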
Incorrect error handling in security-sensitive code often leads to severe security vulnerabilities. Implementing correct error handling is repetitive and tedious especially in languages like C that do not support any exception handling primitives. This makes it very easy for the developers to unwittingly introduce error handling bugs. Moreover, error handling bugs are hard to detect and locate using existing bug-finding techniques because many of these bugs do not display any obviously erroneous behaviors (e.g., crash and assertion failure) but cause subtle inaccuracies.
In this paper, we design, implement, and evaluate EPEx, a tool that uses error specifications to identify and symbolically explore different error paths and reports bugs when any errors are handled incorrectly along these paths. The key insights behind our approach are: (i) real-world programs often handle errors only in a limited number of ways and (ii) most functions have simple and consistent error specifications. This allows us to create a simple oracle that can detect a large class of error handling bugs across a wide range of programs. We evaluated EPEx on 867,000 lines of C code from four different open-source SSL/TLS libraries (OpenSSL, GnuTLS, mbedTLS, and wolfSSL) and 5 different applications that use the SSL/TLS API (Apache httpd, cURL, Wget, LYNX, and Mutt). EPEx discovered 102 new error handling bugs across these programs—at least 53 of which lead to security flaws that break the security guarantees of SSL/TLS. EPEx has a low false positive rate (28 out of 130 reported bugs) as well as a low false negative rate (20 out of 960 reported correct error handling cases).
API misuse is a well-known source of bugs. Some of them (e.g., incorrect use of SSL APIs, or integer overflow of memory allocation size) can cause serious security vulnerabilities (e.g., man-in-the-middle (MITM) attacks, or privilege escalation). Moreover, modern APIs, which are large, complex, and fast-evolving, are error-prone. However, existing techniques to help find bugs either require manual effort by developers (e.g., providing specifications or models) or do not scale to large real-world software comprising millions of lines of code.
In this paper, we present APISAN, a tool that automatically infers correct API usages from source code without manual effort. The key idea in APISAN is to extract likely-correct usage patterns in four different aspects (e.g., causal relations and semantic relations on arguments) by considering semantic constraints. APISAN is tailored to check various properties with security implications. We applied APISAN to 92 million lines of code, including the Linux kernel and OpenSSL, found 76 previously unknown bugs, and provided patches for all of them.
Metadata manipulation attacks represent a new threat class directed against Version Control Systems, such as the popular Git. This type of attack provides inconsistent views of a repository state to different developers, and deceives them into performing unintended operations with often negative consequences. These include omitting security patches, merging untested code into a production branch, and even inadvertently installing software containing known vulnerabilities. To make matters worse, the attacks are subtle by nature and leave no trace after being executed.
We propose a defense scheme that mitigates these attacks by maintaining a cryptographically-signed log of relevant developer actions. By documenting the state of the repository at a particular time when an action is taken, developers are given a shared history, so irregularities are easily detected. Our prototype implementation of the scheme can be deployed immediately as it is backwards compatible and preserves current workflows and use cases for Git users. An evaluation shows that the defense adds a modest overhead while offering significantly stronger security. We performed responsible disclosure of the attacks and are working with the Git community to fix these issues in an upcoming version of Git.
Numerous initiatives are encouraging website owners to enable and enforce TLS encryption for the communication between the server and their users. Although this encryption, when configured properly, completely prevents adversaries from disclosing the content of the traffic, certain features are not concealed, most notably the size of messages. As modern-day web applications tend to provide users with a view that is tailored to the information they entrust these web services with, it is clear that, by knowing the size of specific resources, an adversary can easily uncover personal and sensitive information.
In this paper, we explore various techniques that can be employed to reveal the size of resources. As a result of this in-depth analysis, we discover several design flaws in the storage mechanisms of browsers, which allow an adversary to expose the exact size of any resource in mere seconds. Furthermore, we report on a novel size-exposing technique against Wi-Fi networks. We evaluate the severity of our attacks, and show their worrying consequences in multiple real-world attack scenarios. Furthermore, we propose an improved design for browser storage, and explore other viable solutions that can thwart size-exposing attacks.
In this paper, we introduce a novel approach to bypass modern face authentication systems. More specifically, by leveraging a handful of pictures of the target user taken from social media, we show how to create realistic, textured, 3D facial models that undermine the security of widely used face authentication solutions. Our framework makes use of virtual reality (VR) systems, incorporating along the way the ability to perform animations (e.g., raising an eyebrow or smiling) of the facial model, in order to trick liveness detectors into believing that the 3D model is a real human face. The synthetic face of the user is displayed on the screen of the VR device, and as the device rotates and translates in the real world, the 3D face moves accordingly. To an observing face authentication system, the depth and motion cues of the display match what would be expected for a human face.
We argue that such VR-based spoofing attacks constitute a fundamentally new class of attacks that point to a serious weakness in camera-based authentication systems: unless they incorporate other sources of verifiable data, systems relying on color image data and camera motion are prone to attacks via virtual realism. To demonstrate the practical nature of this threat, we conduct thorough experiments using an end-to-end implementation of our approach and show how it undermines the security of several face authentication solutions that include both motion-based and liveness detectors.
Is a theoretically-secure system any good if it doesn’t address users’ real-world threat models? Is the security community today meeting the needs of a mass, global audience, or simply building tools and features for itself? Do we know how to understand what people really need?
We asked a group of straight-talking New Yorkers about the data-security threats they face. Their answers indicate a significant gap between their lived experience and the way our community thinks about security. To bridge this gap and get privacy-preserving systems into the hands of real people, we need more foundational research to understand user needs, not only late-stage usability studies in a lab.
Voice interfaces are becoming more ubiquitous and are now the primary input method for many devices. We explore in this paper how they can be attacked with hidden voice commands that are unintelligible to human listeners but which are interpreted as commands by devices.
We evaluate these attacks under two different threat models. In the black-box model, an attacker uses the speech recognition system as an opaque oracle. We show that the adversary can produce difficult-to-understand commands that are effective against existing systems in the black-box model. Under the white-box model, the attacker has full knowledge of the internals of the speech recognition system and uses it to create attack commands that we demonstrate through user testing are not understandable by humans.
We then evaluate several defenses, including notifying the user when a voice command is accepted; a verbal challenge-response protocol; and a machine learning approach that can detect our attacks with 99.8% accuracy.
JavaScript in one origin can use timing channels in browsers to learn sensitive information about a user’s interaction with other origins, violating the browser’s compartmentalization guarantees. Browser vendors have attempted to close timing channels by trying to rewrite sensitive code to run in constant time and by reducing the resolution of reference clocks.
We argue that these ad-hoc efforts are unlikely to succeed. We show techniques that increase the effective resolution of degraded clocks by two orders of magnitude, and we present and evaluate multiple, new implicit clocks: techniques by which JavaScript can time events without consulting an explicit clock at all.
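One such amplification idea can be sketched outside the browser (a toy model; real attacks do this from JavaScript): align to a tick edge of the degraded clock, run the operation, then count busy-loop iterations until the next edge. The spin count acts as a clock far finer than the advertised resolution.

```python
import time

TICK = 0.005                                   # pretend the clock is degraded to 5 ms

def degraded_now() -> float:
    """A clock whose resolution has been coarsened to TICK."""
    return (time.perf_counter() // TICK) * TICK

def spin_count_after(work) -> int:
    """Clock-edge interpolation: start exactly on a tick edge, run `work`,
    then count how many loop iterations fit before the next edge. Across
    runs, a smaller count means `work` took longer, resolving timing
    differences far below TICK (assuming `work` finishes within one tick)."""
    edge = degraded_now()
    while degraded_now() == edge:              # align to a fresh tick edge
        pass
    work()
    count, edge = 0, degraded_now()
    while degraded_now() == edge:              # spare time left in this tick
        count += 1
    return count

fast = spin_count_after(lambda: sum(range(1000)))
slow = spin_count_after(lambda: sum(range(50000)))
print(fast, slow)                              # the faster operation leaves the larger count
```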
We show how “fuzzy time” ideas in the trusted operating systems literature can be adapted to building trusted browsers, degrading all clocks and reducing the bandwidth of all timing channels. We describe the design of a next-generation browser, called Fermata, in which all timing sources are completely mediated. As a proof of feasibility, we present Fuzzyfox, a fork of the Firefox browser that implements many of the Fermata principles within the constraints of today's browser architecture. We show that Fuzzyfox achieves sufficient compatibility and performance for deployment today by privacy-sensitive users.
Numerous surveys have shown that Web users are concerned about the loss of privacy associated with online tracking. Alarmingly, these surveys also reveal that people are unaware of the amount of data sharing that occurs between ad exchanges, and thus underestimate the privacy risks associated with online tracking.
In reality, the modern ad ecosystem is fueled by a flow of user data between trackers and ad exchanges. Although recent work has shown that ad exchanges routinely perform cookie matching with other exchanges, these studies are based on brittle heuristics that cannot detect all forms of information sharing, especially under adversarial conditions.
In this study, we develop a methodology that is able to detect client- and server-side flows of information between arbitrary ad exchanges. Our key insight is to leverage retargeted ads as a tool for identifying information flows. Intuitively, our methodology works because it relies on the semantics of how exchanges serve ads, rather than focusing on specific cookie matching mechanisms. Using crawled data on 35,448 ad impressions, we show that our methodology can successfully categorize four different kinds of information sharing behavior between ad exchanges, including cases where existing heuristic methods fail.
We conclude with a discussion of how our findings and methodologies can be leveraged to give users more control over what kind of ads they see and how their information is shared between ad exchanges.
In the last 10 years, cache attacks on Intel x86 CPUs have gained increasing attention among the scientific community, and powerful techniques to exploit cache side channels have been developed. However, modern smartphones use one or more multi-core ARM CPUs that have a different cache organization and instruction set than Intel x86 CPUs. So far, no cross-core cache attacks have been demonstrated on non-rooted Android smartphones. In this work, we demonstrate how to solve key challenges to perform the most powerful cross-core cache attacks Prime+Probe, Flush+Reload, Evict+Reload, and Flush+Flush on non-rooted ARM-based devices without any privileges. Based on our techniques, we demonstrate covert channels that outperform state-of-the-art covert channels on Android by several orders of magnitude. Moreover, we present attacks to monitor tap and swipe events as well as keystrokes, and even derive the lengths of words entered on the touchscreen. Finally, we are the first to attack cryptographic primitives implemented in Java. Our attacks work across CPUs and can even monitor cache activity in the ARM TrustZone from the normal world. The techniques we present can be used to attack hundreds of millions of Android devices.
Machine learning (ML) models may be deemed confidential due to their sensitive training data, commercial value, or use in security applications. Increasingly often, confidential ML models are being deployed with publicly accessible query interfaces. ML-as-a-service (“predictive analytics”) systems are an example: Some allow users to train models on potentially sensitive data and charge others for access on a pay-per-query basis.
The tension between model confidentiality and public access motivates our investigation of model extraction attacks. In such attacks, an adversary with black-box access, but no prior knowledge of an ML model’s parameters or training data, aims to duplicate the functionality of (i.e., “steal”) the model. Unlike in classical learning theory settings, ML-as-a-service offerings may accept partial feature vectors as inputs and include confidence values with predictions. Given these practices, we show simple, efficient attacks that extract target ML models with near-perfect fidelity for popular model classes including logistic regression, neural networks, and decision trees. We demonstrate these attacks against the online services of BigML and Amazon Machine Learning. We further show that the natural countermeasure of omitting confidence values from model outputs still admits potentially harmful model extraction attacks. Our results highlight the need for careful ML model deployment and new model extraction countermeasures.
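For the logistic-regression case the attack is essentially linear algebra; here is a minimal sketch against a toy oracle of our own (standing in for the confidence-returning prediction APIs the paper attacks): the logit of each returned confidence is linear in the query, so d+1 queries determine all d+1 parameters exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_secret = rng.standard_normal(d)      # the provider's confidential model
b_secret = 0.7

def oracle(x):
    """Black-box prediction API returning a confidence value sigma(w.x + b)."""
    return 1.0 / (1.0 + np.exp(-(x @ w_secret + b_secret)))

# Equation-solving extraction: logit(oracle(x)) = w.x + b is linear in x,
# so d+1 linearly independent queries pin down all d+1 parameters.
X = rng.standard_normal((d + 1, d))
p = np.array([oracle(x) for x in X])
logits = np.log(p) - np.log(1.0 - p)
A = np.hstack([X, np.ones((d + 1, 1))])        # rows [x, 1]
stolen = np.linalg.solve(A, logits)
print(np.allclose(stolen, np.append(w_secret, b_secret)))   # True: model extracted
```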
In cloud computing environments, multiple tenants are often co-located on the same multi-processor system. Thus, preventing information leakage between tenants is crucial. While the hypervisor enforces software isolation, shared hardware, such as the CPU cache or memory bus, can leak sensitive information. For security reasons, shared memory between tenants is typically disabled. Furthermore, tenants often do not share a physical CPU. In this setting, cache attacks do not work and only a slow cross-CPU covert channel over the memory bus is known. In contrast, we demonstrate a high-speed covert channel as well as the first side-channel attack working across processors and without any shared memory. To build these attacks, we use the undocumented DRAM address mappings.
We present two methods to reverse engineer the mapping of memory addresses to DRAM channels, ranks, and banks. One uses physical probing of the memory bus, the other runs entirely in software and is fully automated. Using this mapping, we introduce DRAMA attacks, a novel class of attacks that exploit the DRAM row buffer that is shared, even in multi-processor systems. Thus, our attacks work in the most restrictive environments. First, we build a covert channel with a capacity of up to 2 Mbps, which is three to four orders of magnitude faster than memory-bus-based channels. Second, we build a side-channel template attack that can automatically locate and monitor memory accesses. Third, we show how using the DRAM mappings improves existing attacks and in particular enables practical Rowhammer attacks on DDR4.
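To illustrate the software method: bank-addressing functions on the studied platforms turn out to be XORs of physical address bits, so a candidate function is plausible exactly when the parity of the masked address is constant within every set of addresses observed (through row-buffer conflict timing) to share a bank. A hedged sketch with toy data in place of measured conflict sets:

    from itertools import combinations

    def parity(x):
        return bin(x).count("1") & 1

    def candidate_masks(conflict_sets, bits=range(6, 22), max_weight=2):
        masks = []
        for weight in range(1, max_weight + 1):
            for combo in combinations(bits, weight):
                mask = sum(1 << b for b in combo)
                if all(len({parity(a & mask) for a in s}) == 1
                       for s in conflict_sets):
                    masks.append(mask)
        return masks

    # Toy sets consistent with "bit 14 XOR bit 18"; with this little
    # data many masks survive, real runs prune with many addresses.
    sets = [{0x04000, 0x40000}, {0x00000, 0x44000}]
    print([hex(m) for m in candidate_masks(sets)])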
Privacy-preserving multi-party machine learning allows multiple organizations to perform collaborative data analytics while guaranteeing the privacy of their individual datasets. Using trusted SGX-processors for this task yields high performance, but requires a careful selection, adaptation, and implementation of machine-learning algorithms to provably prevent the exploitation of any side channels induced by data-dependent access patterns.
We propose data-oblivious machine learning algorithms for support vector machines, matrix factorization, neural networks, decision trees, and k-means clustering. We show that our efficient implementation based on Intel Skylake processors scales up to large, realistic datasets, with overheads several orders of magnitude lower than with previous approaches based on advanced cryptographic multi-party computation schemes.
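The common thread in data-oblivious algorithms is replacing secret-dependent branches and accesses with constant-time selects, so the observable access pattern is independent of the data. An illustrative sketch (not the paper's code) for the assignment step of k-means, in which every centroid is touched for every point:

    def oselect(cond, a, b):
        # cond is 0 or 1; both inputs are always evaluated, no branch
        return cond * a + (1 - cond) * b

    def oblivious_nearest(point, centroids):
        best_dist, best_idx = float("inf"), 0
        for i, c in enumerate(centroids):
            d = sum((p - q) ** 2 for p, q in zip(point, c))
            closer = 1 if d < best_dist else 0   # real code: branchless compare
            best_dist = oselect(closer, d, best_dist)
            best_idx = oselect(closer, i, best_idx)
        return best_idx

    print(oblivious_nearest((1.0, 1.0), [(0.0, 0.0), (1.0, 2.0), (5.0, 5.0)]))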
Many researchers and engineers first learn about computer security in a classroom. In this interactive workshop, four professors will share lessons and opinions about how and when to teach security. What are the “right” security topics to teach? What is the best time in a curriculum to introduce students to security? And must the entire burden of security education fall on the computing disciplines? If you teach (or plan to teach in the future), come participate in this workshop.
David Evans is a Professor of Computer Science at the University of Virginia, where he leads the Security Research Group and teaches courses on just about everything in computing other than computer security. He is the author of an open computer science textbook, a children's book on combinatorics and computability, and teacher of popular MOOC courses on introductory computer science and applied cryptography. He won the Outstanding Faculty Award from the State Council of Higher Education for Virginia, an All-University Teaching Award, and was Program Co-Chair for the 31st and 32nd IEEE Symposia on Security and Privacy. He has S.B., S.M. and Ph.D. degrees in Computer Science from MIT and has been a faculty member at the University of Virginia since 1999.
Zachary Peterson is an Associate Professor of Computer Science at Cal Poly, San Luis Obispo. He has a passion for creating new ways of engaging students of all ages in computer security, especially through the use of games and play. He has co-created numerous security games, including [d0x3d!], a network security board game, and is the co-founder of ASE, a new USENIX workshop dedicated to making advances in security education. He is the recent recipient of a Fulbright Scholarship, which he will use to travel to University College London to continue his research in the use of digital and non-digital games for teaching computer security concepts to new, young, and non-technical audiences.
Colleen Lewis is a Professor of Computer Science at Harvey Mudd College who specializes in computer science education. Lewis has a Ph.D. in education and an M.S. and B.S. in computer science from the University of California, Berkeley. Her research seeks to identify effective teaching practices for creating equitable learning spaces where all students have the opportunity to learn. Lewis curates CSTeachingTips.org, an NSF-sponsored project for disseminating effective computer science teaching practices.
Tadayoshi Kohno is the Short-Dooley Professor of Computer Science & Engineering at the University of Washington, an Adjunct Associate Professor in the UW Electrical Engineering Department, and an Adjunct Associate Professor in the UW Information School. His research focuses on helping protect the security, privacy, and safety of users of current and future generation technologies. Kohno is the recipient of an Alfred P. Sloan Research Fellowship, a U.S. National Science Foundation CAREER Award, and a Technology Review TR-35 Young Innovator Award. Kohno has presented his research to the U.S. House of Representatives, has had his research profiled in the NOVA ScienceNOW "Can Science Stop Crime?" documentary and the NOVA "CyberWar Threat" documentary, and is a past chair of the USENIX Security Symposium. Kohno is also an alumnus of the U.S. Government’s Defense Science Study Group and a member of the National Academies Forum on Cyber Resilience, the IEEE Center for Secure Design, and the USENIX Security Steering Committee. Kohno received his Ph.D. from the University of California at San Diego.
Potentially unwanted programs (PUP) such as adware and rogueware, while not outright malicious, exhibit intrusive behavior that generates user complaints and makes security vendors flag them as undesirable. PUP has been little studied in the research literature despite recent indications that its prevalence may have surpassed that of malware.
In this work we perform the first systematic study of PUP prevalence and its distribution through pay-per-install (PPI) services, which link advertisers that want to promote their programs with affiliate publishers willing to bundle their programs with offers for other software. Using AV telemetry information comprising 8 billion events on 3.9 million real hosts during a 19 month period, we discover that over half (54%) of the examined hosts have PUP installed. PUP publishers are highly popular, e.g., the top two PUP publishers rank 15 and 24 amongst all software publishers (benign and PUP). Furthermore, we analyze the who-installs-who relationships, finding that 65% of PUP downloads are performed by other PUP and that 24 PPI services distribute over a quarter of all PUP. We also examine the top advertiser programs distributed by the PPI services, observing that they are dominated by adware running in the browser (e.g., toolbars, extensions) and rogueware. Finally, we investigate the PUP-malware relationships in the form of malware installations by PUP and PUP installations by malware. We conclude that while such events exist, PUP distribution is largely disjoint from malware distribution.
We analyze the generation and management of 802.11 group keys. These keys protect broadcast and multicast Wi-Fi traffic. We discovered several issues and illustrate their importance by decrypting all group (and unicast) traffic of a typical Wi-Fi network.
First we argue that the 802.11 random number generator is flawed by design, and provides an insufficient amount of entropy. This is confirmed by predicting randomly generated group keys on several platforms. We then examine whether group keys are securely transmitted to clients. Here we discover a downgrade attack that forces usage of RC4 to encrypt the group key when transmitted in the 4-way handshake. The per-message RC4 key is the concatenation of a public 16-byte initialization vector with a secret 16-byte key, and the first 256 keystream bytes are dropped. We study this peculiar usage of RC4, and find that capturing 2^31 handshakes can be sufficient to recover (i.e., decrypt) a 128-bit group key. We also examine whether group traffic is properly isolated from unicast traffic. We find that this is not the case, and show that the group key can be used to inject and decrypt unicast traffic. Finally, we propose and study a new random number generator tailored for 802.11 platforms.
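The per-message key construction described above can be stated precisely in a few lines; the sketch below (with toy key material) implements standard RC4 keyed with IV||secret and discards the first 256 keystream bytes:

    def rc4_keystream(key, n, drop=256):
        S = list(range(256))
        j = 0
        for i in range(256):                      # key scheduling (KSA)
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        i = j = 0
        out = []
        for _ in range(drop + n):                 # PRGA, dropping first bytes
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out[drop:])

    iv, secret = bytes(16), bytes(range(16))      # toy values
    ks = rc4_keystream(iv + secret, 32)
    ciphertext = bytes(a ^ b for a, b in zip(b"group key payload", ks))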
We present DROWN, a novel cross-protocol attack on TLS that uses a server supporting SSLv2 as an oracle to decrypt modern TLS connections.
We introduce two versions of the attack. The more general form exploits multiple unnoticed protocol flaws in SSLv2 to develop a new and stronger variant of the Bleichenbacher RSA padding-oracle attack. To decrypt a 2048-bit RSA TLS ciphertext, an attacker must observe 1,000 TLS handshakes, initiate 40,000 SSLv2 connections, and perform 2^50 offline work. The victim client never initiates SSLv2 connections. We implemented the attack and can decrypt a TLS 1.2 handshake using 2048-bit RSA in under 8 hours, at a cost of $440 on Amazon EC2. Using Internet-wide scans, we find that 33% of all HTTPS servers and 22% of those with browser-trusted certificates are vulnerable to this protocol-level attack due to widespread key and certificate reuse.
For an even cheaper attack, we apply our new techniques together with a newly discovered vulnerability in OpenSSL that was present in releases from 1998 to early 2015. Given an unpatched SSLv2 server to use as an oracle, we can decrypt a TLS ciphertext in one minute on a single CPU—fast enough to enable man-in-the-middle attacks against modern browsers. We find that 26% of HTTPS servers are vulnerable to this attack.
We further observe that the QUIC protocol is vulnerable to a variant of our attack that allows an attacker to impersonate a server indefinitely after performing as few as 2^17 SSLv2 connections and 2^58 offline work.
We conclude that SSLv2 is not only weak, but actively harmful to the TLS ecosystem.
Although the concept of ransomware is not new (i.e., such attacks date back at least as far as the 1980s), this type of malware has recently experienced a resurgence in popularity. In fact, in the last few years, a number of high-profile ransomware attacks were reported, such as the large-scale attack against Sony that prompted the company to delay the release of the film “The Interview.” Ransomware typically operates by locking the desktop of the victim to render the system inaccessible to the user, or by encrypting, overwriting, or deleting the user’s files. However, while many generic malware detection systems have been proposed, none of these systems have attempted to specifically address the ransomware detection problem.
In this paper, we present a novel dynamic analysis system called UNVEIL that is specifically designed to detect ransomware. The key insight of the analysis is that in order to mount a successful attack, ransomware must tamper with a user’s files or desktop. UNVEIL automatically generates an artificial user environment, and detects when ransomware interacts with user data. In parallel, the approach tracks changes to the system’s desktop that indicate ransomware-like behavior. Our evaluation shows that UNVEIL significantly improves the state of the art, and is able to identify previously unknown evasive ransomware that was not detected by the antimalware industry.
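As a rough illustration of the underlying detection idea (not UNVEIL's implementation), one can plant decoy files and flag the moment their contents are replaced by high-entropy data, which bulk encryption produces:

    import math

    def entropy(data):
        if not data:
            return 0.0
        counts = [0] * 256
        for b in data:
            counts[b] += 1
        return -sum(c / len(data) * math.log2(c / len(data))
                    for c in counts if c)

    def decoy_looks_encrypted(path, baseline, threshold=7.5):
        with open(path, "rb") as f:
            e = entropy(f.read())
        # Plain-text decoys sit around 4-5 bits/byte; ciphertext nears 8.
        return e > threshold and e > baseline + 1.0

    # Usage: write decoys at install time, record their baseline entropy,
    # poll decoy_looks_encrypted() and alert on a sudden jump.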
The goal of searchable encryption (SE) is to enable a client to execute searches over encrypted files stored on an untrusted server while ensuring some measure of privacy for both the encrypted files and the search queries. Most recent research has focused on developing efficient SE schemes at the expense of allowing some small, well-characterized “(information) leakage” to the server about the files and/or the queries. The practical impact of this leakage, however, remains unclear.
We thoroughly study file-injection attacks—in which the server sends files to the client that the client then encrypts and stores—on the query privacy of single-keyword and conjunctive SE schemes. We show such attacks can reveal the client’s queries in their entirety using very few injected files, even for SE schemes having low leakage. We also demonstrate that natural countermeasures for preventing file-injection attacks can be easily circumvented. Our attacks outperform prior work significantly in terms of their effectiveness as well as in terms of their assumptions about the attacker’s prior knowledge.
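The basic attack is easy to sketch: for a keyword universe of size K, injecting about log2(K) files, where file i contains exactly the keywords whose i-th index bit is 1, lets the server read a queried keyword off the subset of injected files matching the search token. An illustrative toy example:

    import math

    keywords = ["tax", "salary", "merger", "layoff",
                "audit", "bonus", "memo", "legal"]
    n_files = math.ceil(math.log2(len(keywords)))

    # File i holds the keywords whose i-th index bit is set.
    injected = [[kw for idx, kw in enumerate(keywords) if (idx >> i) & 1]
                for i in range(n_files)]

    def recover_query(matching_files):
        # Indices of injected files returned for the token, which the
        # server observes through the scheme's access-pattern leakage.
        idx = sum(1 << i for i in matching_files)
        return keywords[idx]

    print(recover_query([0, 2]))                  # index 5 -> "bonus"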
Most modern malware infections happen through the browser, typically as the result of a drive-by or social engineering attack. While there have been numerous studies on measuring and defending against drive-by downloads, little attention has been dedicated to studying social engineering attacks.
In this paper, we present the first systematic study of web-based social engineering (SE) attacks that successfully lure users into downloading malicious and unwanted software. To conduct this study, we collect and reconstruct more than two thousand examples of in-the-wild SE download attacks from live network traffic. Via a detailed analysis of these attacks, we attain the following results: (i) we develop a categorization system to identify and organize the tactics typically employed by attackers to gain the user’s attention and deceive or persuade them into downloading malicious and unwanted applications; (ii) we reconstruct the web path followed by the victims and observe that a large fraction of SE download attacks are delivered via online advertisement, typically served from “low tier” ad networks; (iii) we measure the characteristics of the network infrastructure used to deliver such attacks and uncover a number of features that can be leveraged to distinguish between SE and benign (or non-SE) software downloads.
Moderator: Jaeyeon Jung, Microsoft Research
Panelists: Úlfar Erlingsson, Google; Rachel Greenstadt, Drexel University; Martin Johns, SAP; Thomas Ristenpart, Cornell Tech
What opportunities await security students graduating with a Ph.D.? On Thursday evening, students will have the opportunity to listen to informal panels of faculty and industrial researchers providing personal perspectives on their post-Ph.D. career search. Learn about the academic job search, the industrial research job search, research fundraising, dual-career challenges, life uncertainty, and other idiosyncrasies of the ivory tower. If you would like to speak in the Doctoral Colloquium, please email sec16dc@usenix.org.
View the current schedule and scheduling instructions on the USENIX Security '16 BoFs page.
Commodity CPU architectures, such as ARM and Intel CPUs, have started to offer trusted computing features in their CPUs aimed at displacing dedicated trusted hardware. Unfortunately, these CPU architectures raise serious challenges for building trusted systems because they provide no secure resources outside the CPU perimeter.
This paper shows how to overcome these challenges to build software systems with security guarantees similar to those of dedicated trusted hardware. We present the design and implementation of a firmware-based TPM 2.0 (fTPM) leveraging ARM TrustZone. Our fTPM is the reference implementation of a TPM 2.0 used in millions of mobile devices. We also describe a set of mechanisms needed for the fTPM that can be useful for building more sophisticated trusted applications beyond just a TPM.
Over the past couple of years, Adobe Flash has been repeatedly targeted by attackers in the wild. Despite an increasing number of bug fixes and mitigations implemented in the software, previously unknown 0-day vulnerabilities continue to be uncovered and used by malicious attackers. This presentation describes my team's work to reduce the number and impact of 0-day vulnerabilities in Adobe Flash.
It will start with an overview of how attackers have targeted Flash in the past, and then explain how some of the most common types of bugs work. It will then discuss how we find similar vulnerabilities. It will go through some examples of typical and less typical bugs, showing how they violate the assumptions made by Flash Player and how they can be exploited. This talk will also discuss recent Flash and platform mitigations, and how they impact the severity and discoverability of security bugs.
Natalie Silvanovich is a security researcher on Google Project Zero. She has spent the last seven years working in mobile security, both finding security issues in mobile software and improving the security of mobile platforms. Outside of work, Natalie enjoys applying her hacking and reverse engineering skills to unusual targets, and has spoken at several conferences on the subject of Tamagotchi hacking. She is actively involved in hackerspaces and is a founding member of Kwartzlab Makerspace in Kitchener, Ontario, Canada.
Sanctum offers the same promise as Intel’s Software Guard Extensions (SGX), namely strong provable isolation of software modules running concurrently and sharing resources, but protects against an important class of additional software attacks that infer private information from a program’s memory access patterns. Sanctum shuns unnecessary complexity, leading to a simpler security analysis. We follow a principled approach to eliminating entire attack surfaces through isolation, rather than plugging attack-specific privacy leaks. Most of Sanctum’s logic is implemented in trusted software, which does not perform cryptographic operations using keys, and is easier to analyze than SGX’s opaque microcode, which does.
Our prototype targets a Rocket RISC-V core, an open implementation that allows any researcher to reason about its security properties. Sanctum’s extensions can be adapted to other processor cores, because we do not change any major CPU building block. Instead, we add hardware at the interfaces between generic building blocks, without impacting cycle time.
Sanctum demonstrates that strong software isolation is achievable with a surprisingly small set of minimally invasive hardware changes, and a very reasonable overhead.
Protected-module architectures such as Intel SGX provide strong isolation guarantees to sensitive parts of applications while the system is up and running. Unfortunately, systems in practice crash, go down for reboots, or lose power at unexpected moments in time. To deal with such events, additional security measures need to be taken to guarantee that stateful modules will either recover their state from the last stored state, or fail-stop on detection of tampering with that state. More specifically, protected-module architectures need to provide a security primitive that guarantees that (1) attackers cannot present a stale state as being fresh (i.e., rollback protection), (2) once a module has accepted a specific input, it will continue execution on that input or never advance, and (3) an unexpected loss of power must never leave the system in a state from which it can never resume execution (i.e., liveness guarantee).
We propose Ariadne, a solution to the state-continuity problem that achieves the theoretical lower limit of requiring only a single bit flip of non-volatile memory per state update. Ariadne can be easily adapted to the platform at hand. In low-end devices where non-volatile memory may wear out quickly and the bill of materials (BOM) needs to be minimized, Ariadne can make optimal use of non-volatile memory. On SGX-enabled processors, Ariadne can be readily deployed to protect stateful modules (e.g., as used by Haven and VC3).
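One way to see that the single-bit-flip bound is attainable is through Gray codes, in which consecutive counter values differ in exactly one bit; the sketch below (illustrative, not Ariadne's code) checks this property:

    def to_gray(n):
        return n ^ (n >> 1)

    def flipped_bit(a, b):
        d = a ^ b
        assert d and d & (d - 1) == 0, "not a single-bit change"
        return d.bit_length() - 1

    prev = to_gray(0)
    for n in range(1, 9):
        cur = to_gray(n)
        print(f"{n}: gray={cur:04b}, flipped bit {flipped_bit(prev, cur)}")
        prev = cur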
The Network Time Protocol (NTP) is used by many network-connected devices to synchronize device time with remote servers. Many security features depend on the device knowing the current time, for example in deciding whether a certificate is still valid. Currently, most services implement NTP without authentication, and the authentication mechanisms available in the standard have not been formally analyzed, require a pre-shared key, or are known to have cryptographic weaknesses. In this paper we present an authenticated version of NTP, called ANTP, to protect against desynchronization attacks. To make ANTP suitable for large-scale deployments, it is designed to minimize server-side public key operations by infrequently performing a key exchange using public key cryptography, then relying solely on symmetric cryptography for subsequent time synchronization requests; moreover, it does so without requiring server-side per-connection state. Additionally, ANTP ensures that authentication does not degrade accuracy of time synchronization. We measured the performance of ANTP by implementing it in OpenNTPD using OpenSSL. Compared to plain NTP, ANTP’s symmetric crypto reduces the server throughput (connections/second) for time synchronization requests by a factor of only 1.6. We analyzed the security of ANTP using a novel provable security framework that involves adversary control of time, and show that ANTP achieves secure time synchronization under standard cryptographic assumptions; our framework may also be used to analyze other candidates for securing NTP.
Keywords: time synchronization, Network Time Protocol (NTP), provable security, network security
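As a rough illustration of the symmetric phase, once a shared key is in place each response can be authenticated with a MAC that binds the client's nonce to the server's timestamp. The message layout below is hypothetical and omits ANTP's state-free cookie handling:

    import hashlib, hmac, os, struct, time

    shared_key = os.urandom(32)                   # from the key-exchange phase

    def make_response(client_nonce):
        server_time = struct.pack(">d", time.time())
        tag = hmac.new(shared_key, client_nonce + server_time,
                       hashlib.sha256).digest()
        return server_time, tag

    def verify_response(client_nonce, server_time, tag):
        expected = hmac.new(shared_key, client_nonce + server_time,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

    nonce = os.urandom(16)
    t, tag = make_response(nonce)
    assert verify_response(nonce, t, tag)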
Peer-to-peer (P2P) systems are predominantly used to distribute trust, increase availability and improve performance. A number of content-sharing P2P systems, for file-sharing applications (e.g., BitTorrent and Storj) and more recent peer-assisted CDNs (e.g., Akamai Netsession), are finding wide deployment. A major security concern with content-sharing P2P systems is the risk of long-term traffic analysis—a widely accepted challenge with few known solutions.
In this paper, we propose a new approach to protecting against persistent, global traffic analysis in P2P content-sharing systems. Our approach advocates hiding data access patterns, making P2P systems oblivious. We propose OblivP2P, a construction for a scalable distributed ORAM protocol, usable in a real P2P setting. Our protocol achieves the following results. First, we show that our construction retains the (linear) scalability of the original P2P network with respect to the number of peers. Second, our experiments simulating about 16,384 peers on 15 Deterlab nodes can process up to 7 requests of 512KB each per second, suggesting usability in moderately latency-sensitive applications as-is. The remaining bottlenecks are purely computational (not bandwidth). Third, our experiments confirm that in our construction no centralized infrastructure is a bottleneck, essentially ensuring that the network and computational overheads can be completely offloaded to the P2P network. Finally, our construction is highly parallelizable, which implies that the remaining computational bottlenecks can be drastically reduced if OblivP2P is deployed on a network with many real machines.
Can bits of an RSA public key leak information about design and implementation choices such as the prime generation algorithm? We analysed over 60 million freshly generated key pairs from 22 open- and closed-source libraries and from 16 different smartcards, revealing significant leakage. The bias introduced by different choices is sufficiently large to classify a probable library or smartcard with high accuracy based only on the values of public keys. Such a classification can be used to decrease the anonymity set of users of anonymous mailers or operators of linked Tor hidden services, to quickly detect keys from the same vulnerable library or to verify a claim of use of secure hardware by a remote party. The classification of the key origins of more than 10 million RSA-based IPv4 TLS keys and 1.4 million PGP keys also provides an independent estimation of the libraries that are most commonly used to generate the keys found on the Internet.
Our broad inspection provides a sanity check and deep insight regarding which of the recommendations for RSA key pair generation are followed in practice, including closed-source libraries and smartcards.
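A hedged sketch of how such a classifier can operate: extract simple features of the modulus whose distributions differ across libraries and score candidate origins with naive Bayes. The features below are illustrative, not the paper's exact feature set:

    import math
    from collections import Counter

    def features(n):
        msb = n >> (n.bit_length() - 8)           # top 8 bits of the modulus
        return (msb, n % 3, n % 5)

    def train(keys_by_source):
        # keys_by_source: {"libraryA": [n1, n2, ...], ...}
        return {src: (Counter(map(features, keys)), len(keys))
                for src, keys in keys_by_source.items()}

    def classify(n, model):
        f = features(n)
        def log_score(counts, total):
            return math.log((counts[f] + 1) / (total + 1))  # Laplace smoothing
        return max(model, key=lambda s: log_score(*model[s]))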
This talk describes several types of attacks aimed at content delivery networks (CDNs) and their customers, along with strategies for mitigating these attacks. The attacks range from simple but large-scale denial-of-service attacks, to efforts to deface web sites, to click fraud. The talk presents examples of real attack campaigns, and analyzes the effectiveness of the CDN operated by Akamai Technologies in protecting its customers from them.
Bruce Maggs received the S.B., S.M., and Ph.D. degrees in computer science from the Massachusetts Institute of Technology in 1985, 1986, and 1989, respectively. His advisor was Charles Leiserson. After spending one year as a Postdoctoral Associate at MIT, he worked as a Research Scientist at NEC Research Institute in Princeton from 1990 to 1993. In 1994, he moved to Carnegie Mellon, where he stayed until joining Duke University in 2009 as a Professor in the Department of Computer Science. While on a two-year leave-of-absence from Carnegie Mellon, Maggs helped to launch Akamai Technologies, serving as its first Vice President for Research and Development. He retains a part-time role at Akamai as Vice President for Research.
Telephones remain a trusted platform for conducting some of our most sensitive exchanges. From banking to taxes, wide swathes of industry and government rely on telephony as a secure fall-back when attempting to confirm the veracity of a transaction. In spite of this, authentication is poorly managed between these systems, and in the general case it is impossible to be certain of the identity (i.e., Caller ID) of the entity at the other end of a call. We address this problem with AuthLoop, the first system to provide cryptographic authentication solely within the voice channel. We design, implement, and characterize the performance of an in-band modem for executing a TLS-inspired authentication protocol, and demonstrate its ability to ensure that the explicit single-sided authentication procedures pervading the web are also possible on all phones. We show experimentally that this protocol can be executed with minimal computational overhead and only a few seconds of user time (≈9 seconds, instead of ≈97 seconds for a naïve implementation of TLS 1.2) over heterogeneous networks. In so doing, we demonstrate that strong end-to-end validation of Caller ID is indeed practical for all telephony networks.
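To illustrate what an in-band modem entails (AuthLoop's actual modem design differs), the sketch below uses generic binary FSK, turning each bit into a short tone at one of two frequencies inside the telephony passband:

    import numpy as np

    RATE, BAUD, F0, F1 = 8000, 100, 1200, 2200    # Hz; illustrative values

    def fsk_modulate(bits):
        t = np.arange(RATE // BAUD) / RATE        # samples per symbol
        tone = {0: np.sin(2 * np.pi * F0 * t), 1: np.sin(2 * np.pi * F1 * t)}
        return np.concatenate([tone[b] for b in bits])

    def fsk_demodulate(signal):
        n = RATE // BAUD
        bits = []
        for i in range(0, len(signal), n):
            spectrum = np.abs(np.fft.rfft(signal[i:i + n]))
            freq = np.argmax(spectrum) * RATE / n
            bits.append(1 if abs(freq - F1) < abs(freq - F0) else 0)
        return bits

    payload = [1, 0, 1, 1, 0, 0, 1, 0]
    assert fsk_demodulate(fsk_modulate(payload)) == payload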
We propose new privacy attacks to infer attributes (e.g., locations, occupations, and interests) of online social network users. Our attacks leverage seemingly innocent user information that is publicly available in online social networks to infer missing attributes of targeted users. Given the increasing availability of (seemingly innocent) user information online, our results have serious implications for Internet privacy – private attributes can be inferred from users’ publicly available data unless we take steps to protect users from such inference attacks.
To infer attributes of a targeted user, existing inference attacks leverage either the user’s publicly available social friends or the user’s behavioral records (e.g., the webpages that the user has liked on Facebook, the apps that the user has reviewed on Google Play), but not both. As we will show, such inference attacks achieve limited success rates. However, the problem becomes qualitatively different if we consider both social friends and behavioral records. To address this challenge, we develop a novel model to integrate social friends and behavioral records and design new attacks based on our model. We theoretically and experimentally demonstrate the effectiveness of our attacks. For instance, we observe that, in a real-world large-scale dataset with 1.1 million users, our attack can correctly infer the cities a user lived in for 57% of the users; via confidence estimation, we are able to increase the attack success rate to over 90% if the attacker selectively attacks half of the users. Moreover, we show that our attack can correctly infer attributes for significantly more users than previous attacks.
Proofs of Retrievability (POR) and Data Possession (PDP) are cryptographic protocols that enable a cloud provider to prove that data is correctly stored in the cloud. PDP schemes have recently been extended to enable users to check, in a single protocol, that additional file replicas are stored as well. To conduct multi-replica PDP, users are however required to process, construct, and upload their data replicas by themselves. This incurs additional bandwidth overhead on both the service provider and the user and also poses new security risks for the provider. Namely, since uploaded files are typically encrypted, the provider cannot recognize whether the uploaded contents are indeed replicas. This limits the business models available to the provider, since, e.g., reduced costs for storing replicas can be abused by users who upload different files while claiming that they are replicas.
In this paper, we address this problem and propose Mirror, a novel solution for proving data replication and retrievability in the cloud that shifts the burden of constructing replicas to the cloud provider itself, thus conforming to the current cloud model. We show that Mirror is secure against malicious users and a rational cloud provider. Finally, we implement a prototype based on Mirror and evaluate its performance in a realistic cloud setting. Our evaluation results show that our proposal incurs tolerable overhead on both the users and the cloud provider.
This talk will introduce the audience to two new x86 ISA features developed by AMD which will provide new security enhancements by leveraging integrated memory encryption hardware. These features provide the ability to selectively encrypt some or all of system memory as well as the ability to run encrypted virtual machines, isolated from the hypervisor. The talk will cover technical details related to these features, including the ISA changes, security benefits, key management framework, and practical enablement.
The main objective of the talk is to educate the audience on the design and use of these features which are the first general-purpose memory encryption features to be integrated into the x86 architecture.
David Kaplan is a PMTS Security Architect at AMD who focuses on developing new security technologies across the AMD product line as part of the Security Architecture Research and Development center. He is the lead architect for the AMD memory encryption features and has worked on both CPU and SOC level security features for the last 4 years. David has over 9 years of experience at AMD with a background in x86 CPU development and has filed over 30 patents in his career so far.
Large-scale discovery of thousands of vulnerable Web sites has become a frequent event, thanks to recent advances in security research and the rise in maturity of Internet-wide scanning tools. The issues related to disclosing the vulnerability information to the affected parties, however, have only been treated as a side note in prior research.
In this paper, we systematically examine the feasibility and efficacy of large-scale notification campaigns. For this, we comprehensively survey existing communication channels and evaluate their usability in an automated notification process. Using a data set of over 44,000 vulnerable Web sites, we measure success rates, both with respect to the total number of fixed vulnerabilities and to reaching responsible parties, with the following high-level results: Although our campaign had a statistically significant impact compared to a control group, the increase in the fix rate of notified domains is marginal.
If a notification report is read by the owner of the vulnerable application, the likelihood of a subsequent resolution of the issues is sufficiently high: about 40%. But out of 35,832 transmitted vulnerability reports, only 2,064 (5.8%) were actually received successfully, resulting in an unsatisfactory overall fix rate that left 74.5% of Web applications exploitable after our month-long experiment. Thus, we conclude that no reliable notification channels currently exist, which significantly inhibits the success and impact of large-scale notification.
In this paper we describe ZKBoo, a proposal for practically efficient zero-knowledge arguments especially tailored for Boolean circuits, and report on a proof-of-concept implementation. As a highlight, we can generate (resp. verify) a non-interactive proof for the SHA-1 circuit in approximately 13ms (resp. 5ms), with a proof size of 444KB.
Our techniques are based on the “MPC-in-the-head” approach to zero-knowledge of Ishai et al. (IKOS), which has been successfully used to achieve significant asymptotic improvements. Our contributions include a generalization of this approach together with concrete optimizations that make it efficient in practice.
The cut-and-choose technique plays a fundamental role in cryptographic-protocol design, especially for secure two-party computation in the malicious model. The basic idea is that one party constructs n versions of a message in a protocol (e.g., garbled circuits); the other party randomly checks some of them and uses the rest in the protocol. Most existing uses of cut-and-choose fix the number of objects to be checked in advance and, in optimizing this parameter, fail to recognize that checking and evaluating may have dramatically different costs.
In this paper, we consider a refined cost model and formalize the cut-and-choose parameter selection problem as a constrained optimization problem. We analyze “cut-and-choose games” and show equilibrium strategies for the parties in these games. We then show how our methodology can be applied to improve the efficiency of three representative categories of secure-computation protocols based on cut-and-choose. We show improvements of up to an order of magnitude in terms of bandwidth, and 12–106% in terms of total time. Source code of our game solvers is available for download at https://github.com/cut-n-choose.
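To make the cost asymmetry concrete, consider the standard majority-based setting: an adversary corrupting b of n circuits escapes only if none of the c checked circuits is corrupted, and wins only if the corrupted circuits form a majority of the n - c evaluated ones. A simplified sketch (not the paper's solver or cost model) that searches for the cheapest parameters meeting a 2^-s bound:

    from math import comb

    def cheat_prob(n, c):
        e = n - c
        b = e // 2 + 1            # smallest corruption that wins a majority
        return comb(n - b, c) / comb(n, c)

    def optimize(s=40, cost_check=1.0, cost_eval=3.0, n_max=200):
        best = None               # (total cost, n generated, c checked)
        for n in range(2, n_max):
            for c in range(1, n):
                if cheat_prob(n, c) <= 2 ** -s:
                    cost = c * cost_check + (n - c) * cost_eval
                    if best is None or cost < best[0]:
                        best = (cost, n, c)
        return best

    print(optimize())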
Security researchers can send vulnerability notifications to take proactive measures in securing systems at scale. However, the factors affecting a notification’s efficacy have not been deeply explored. In this paper, we report on an extensive study of notifying thousands of parties of security issues present within their networks, with an aim of illuminating which fundamental aspects of notifications have the greatest impact on efficacy. The vulnerabilities used to drive our study span a range of protocols and considerations: exposure of industrial control systems; apparent firewall omissions for IPv6-based services; and exploitation of local systems in DDoS amplification attacks. We monitored vulnerable systems for several weeks to determine their rate of remediation. By comparing with experimental controls, we analyze the impact of a number of variables: choice of party to contact (WHOIS abuse contacts versus national CERTs versus US-CERT), message verbosity, hosting an information website linked to in the message, and translating the message into the notified party’s local language. We also assess the outcome of the emailing process itself (bounces, automated replies, human replies, silence) and characterize the sentiments and perspectives expressed in both the human replies and an optional anonymous survey that accompanied our notifications.
We find that various notification regimens do result in different outcomes. The best observed process was directly notifying WHOIS contacts with detailed information in the message itself. These notifications had a statistically significant impact on improving remediation, and human replies were largely positive. However, the majority of notified contacts did not take action, and even when they did, remediation was often only partial. Repeat notifications did not lead to further patching. These results are promising but ultimately modest, behooving the security community to more deeply investigate ways to improve the effectiveness of vulnerability notifications.
Security competitions and, in particular, Capture-the-Flag (CTF), have emerged as an engaging way for people to learn about attacking and defending systems. In this panel, three veterans of the CTF world will share their experiences in playing and running security competitions, and talk about how integrating CTFs into your curriculum or training programs can help to identify and develop security awareness and expertise. Do CTF skills translate into the real world? Does learning how to attack have value in producing safer systems? Are CGC-inspired autonomous agents the future of systems security? All these questions and more will be on the table in this interactive session.
William Robertson is an Assistant Professor of Computer Science at Northeastern University in Boston. His research focuses on the security of operating systems, mobile devices, and the web, making use of techniques such as program analysis, anomaly detection, and security by design. He won DEFCON CTF in 2005 with Shellphish, and participated in the California Top-to-Bottom-Review (TTBR) and Ohio EVEREST reviews of electronic voting security that have had significant impact on public policy in the states of California and Ohio. He is the author of more than fifty peer-reviewed conference and journal articles, has chaired several conferences and workshops (DIMVA, WOOT, ACSAC), and regularly serves on the program committees of top-tier security conferences.
Sophia D’Antoine is a security engineer at Trail of Bits and a graduate of Rensselaer Polytechnic Institute. She is a regular speaker at security conferences around the world, including RECon, Blackhat, and CanSecWest. Her present work includes techniques for automated software exploitation and software obfuscation using LLVM. She spends too much time playing CTF, pwnable.kr and other wargames.
Bluetooth Low Energy (BLE) has emerged as an attractive technology for enabling Internet of Things (IoT) devices to interact with others in their vicinity. Our study of the behavior of more than 200 types of BLE-equipped devices has led to a surprising discovery: the BLE protocol, despite its privacy provisions, fails to address the most basic threat of all, namely hiding the device’s presence from curious adversaries. Revealing the device’s existence is the stepping stone toward more serious threats that include user profiling/fingerprinting, behavior tracking, inference of sensitive information, and exploitation of known vulnerabilities on the device. With thousands of manufacturers and developers around the world, it is very challenging, if not impossible, to envision the viability of any privacy or security solution that requires changes to the devices or the BLE protocol.
In this paper, we propose a new device-agnostic system, called BLE-Guardian, that protects the privacy of the users/environments equipped with BLE devices/IoTs. It enables users and administrators to control which parties may discover, scan, and connect to their devices. We have implemented BLE-Guardian using Ubertooth One, an off-the-shelf open Bluetooth development platform, facilitating its broad deployment. Our evaluation with real devices shows that BLE-Guardian effectively protects the users’ privacy while incurring little overhead on the communicating BLE devices.
The decreasing cost of molecular profiling tests, such as DNA sequencing, and the consequent increasing availability of biological data are revolutionizing medicine, but at the same time create novel privacy risks. The research community has already proposed a plethora of methods for protecting genomic data against these risks. However, the privacy risks stemming from epigenetics, which bridges the gap between the genome and our health characteristics, have been largely overlooked so far, even though epigenetic data such as microRNAs (miRNAs) are no less privacy sensitive. This lack of investigation is attributed to the common belief that the inherent temporal variability of miRNAs shields them from being tracked and linked over time.
In this paper, we show that, contrary to this belief, miRNA expression profiles can be successfully tracked over time, despite their variability. Specifically, we show that two blood-based miRNA expression profiles taken with a time difference of one week from the same person can be matched with a success rate of 90%. We furthermore observe that this success rate stays almost constant when the time difference is increased from one week to one year. In order to mitigate the linkability threat, we propose and thoroughly evaluate two countermeasures: (i) hiding a subset of disease-irrelevant miRNA expressions, and (ii) probabilistically sanitizing the miRNA expression profiles. Our experiments show that the second mechanism provides a better trade-off between privacy and disease-prediction accuracy.
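The linking step itself can be illustrated by nearest-neighbor matching on synthetic data (the paper's matcher and data differ): each profile from the later time point is matched to the closest profile from the earlier one.

    import numpy as np

    rng = np.random.default_rng(1)
    people, mirnas = 50, 100
    base = rng.normal(size=(people, mirnas))       # per-person expression
    t1 = base + 0.3 * rng.normal(size=base.shape)  # two noisy samples,
    t2 = base + 0.3 * rng.normal(size=base.shape)  # e.g. one week apart

    dists = np.linalg.norm(t2[:, None, :] - t1[None, :, :], axis=2)
    matches = dists.argmin(axis=1)                 # nearest t1 row per t2 row
    success = (matches == np.arange(people)).mean()
    print(f"re-identification rate: {success:.0%}")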