Eleven years ago, Estonia became the first country in the world to fall under politically motivated cyber-attacks. A lot has changed since then. Cyber-attacks have become the new normal: they target states as well as companies and citizens, they have become global and massive in scale, and they even challenge states' genuine political independence. There have been positive developments, including far greater awareness of cyber security than 11 years ago. Cyber security has become a topic discussed in many forums, from international organizations to secondary schools. And still, many questions remain unanswered and are as topical as they were in 2007: the applicability of international law, the responsibility of states, including for the acts of non-state actors, attribution, defensive and offensive capabilities, international cooperation, and the roles and responsibilities of different stakeholders.
Alex Stamos's keynote at Black Hat USA 2017 addressed, among other things, social skills and diversity: the urgent need to bring people with different backgrounds, education, and cultures into the world of cyber security, which was for years left to, and governed by, technical folks.
This is true both at the company level and at the larger national and international level. States have a unique role in providing security, including cyber security, and in applying and interpreting international law, but states cannot do it efficiently alone. Despite states' traditional dominance over national and international security, their role within the overall cyberspace ecosystem is limited. After all, the internet is governed by a complex system of stakeholders, and governments alone cannot decide on all aspects of cyberspace: a space where the private sector owns nearly all digital and physical assets, where industry develops and provides online services, and where civil society and academia contribute to research and act as watchdogs of democracy.
The keynote will address the lessons learned from 2007, the challenges we face in 2018/2019, the role of states and other stakeholders, and developments in international cooperation, and will propose some thoughts and ideas for the future. It will also introduce the work of the Global Commission on the Stability of Cyberspace (GCSC) as an example of a multi-stakeholder model that contributes to international discussion and policy-making.
Apple's operating systems have gained great popularity on both personal computers (macOS) and mobile devices (iOS), including among hackers. The system core modules are becoming a hot attack surface in both kernel mode (e.g., XNU) and user mode (e.g., XPC), because they share almost the same code logic across Apple's different systems (macOS and iOS), offering the most attack value for the least effort.
As for the kernel-mode part, smart fuzzers need code-coverage support to know how to fuzz deeply, but we haven't seen anyone do XNU fuzzing based on code coverage, especially statically. In this talk, we will show you how to develop kernel sanitizers that provide code-coverage support and memory-issue detection. We also developed very detailed grammar-based patterns (about 530) for the XNU syscall API. We will then give a live demo of rooting the latest macOS (10.13.6) using three 0-days discovered by our fuzzer. At the end, we will show you another powerful technique for obtaining code coverage statically, without source code. This can help you develop your own smart fuzzer against any closed-source target.
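To illustrate what grammar-driven syscall generation looks like, here is a minimal Python sketch; the grammar, type names, and value pools are hypothetical stand-ins for illustration, not the authors' actual 530 patterns:

```python
import random

# Hypothetical grammar: each syscall maps to typed argument slots.
SYSCALL_GRAMMAR = {
    "open": [("path", "str"), ("flags", "int32")],
    "read": [("fd", "fd"), ("buf", "ptr"), ("size", "size_t")],
}

def gen_value(kind, rng):
    """Generate one argument value for a grammar-typed slot."""
    if kind == "str":
        return "/" + "".join(rng.choice("abc/") for _ in range(8))
    if kind == "fd":
        return rng.choice([0, 1, 2, 0xFFFF, -1])        # valid and edge fds
    if kind == "ptr":
        return rng.choice([0, 0xDEADBEEF, 0x7FFF0000])  # interesting pointers
    return rng.randint(-2**31, 2**31 - 1)               # int32 / size_t

def gen_call(name, rng):
    """Build one fuzzed syscall invocation from the grammar."""
    return (name, [gen_value(kind, rng) for _, kind in SYSCALL_GRAMMAR[name]])

rng = random.Random(1337)
corpus = [gen_call(n, rng) for n in SYSCALL_GRAMMAR for _ in range(3)]
```

A real fuzzer would feed each generated call to the kernel and keep the inputs that the sanitizer's coverage feedback marks as reaching new code.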
As for the user-mode part, we would like to introduce a new fuzzing method driven by Python scripts. We have also implemented a fuzzing project targeting XPC services that can yield dozens of reproducible XPC service daemon crashes in minutes or even seconds.
BIOS rootkits have been researched and discussed heavily in the past few years, but sparse evidence has been presented of real campaigns actively trying to compromise systems at this level. Our talk will reveal such a campaign successfully executed by the Sednit group. This APT group, also known as Fancy Bear, Sofacy and APT28, has been linked to numerous high profile cyberattacks such as the 2016 Democratic National Committee email leak scandal.
Earlier this year, a public report stated that the infamous Sednit/Sofacy/APT28 APT group successfully trojanized a userland LoJack agent and used it against their targets. LoJack, an embedded anti-theft application, was scrutinized by security researchers in the past because of its unusual persistence method: a module preinstalled in many computers' UEFI/BIOS software. Over the years, several security risks have been found in this product, but no significant in-the-wild activity was ever reported until the discovery of the Sednit group leveraging some of the vulnerabilities affecting the userland agent. However, through our research, we now know that Sednit did not stop there: they also tried, and succeeded, in installing a custom UEFI module directly into a system's SPI flash memory.
In this talk, we will detail the full infection chain showing how Sednit was able to install their custom UEFI module on key targets' computers. Additionally, we will provide an in-depth analysis of their UEFI module and the associated trojanized LoJack agent.
Thanks to their ubiquity and versatility, users rely on smartphones to perform, in a few touches, a wide range of privacy-sensitive operations, such as accessing bank accounts, checking emails, or transferring money. While not long ago users sought constant Internet connectivity (e.g., via free Wi-Fi hotspots), now they also look for energy sources to recharge their smartphones' batteries, due to the use of more energy-draining apps (e.g., Pokémon Go).
This phenomenon has led to the spread of free charging stations in public places and the marketing of portable batteries, a.k.a. powerbanks. Despite the preventive measures implemented by Android's developers to block data transfer via USB cable (i.e., the "Charging only" mode), we are able to exploit a hidden communication channel that leverages only the electrical current provided to charge the smartphone.
On one side, a malicious app (which can be disguised as a legitimate, clean app) installed on the victim's phone, requiring only the wakelock permission (i.e., to wake up the phone when it is idle), remains silent until the device is plugged into a USB port and left unattended. The app then begins transmitting sensitive data encoded in energy-consumption peaks. On the other side, the energy provider (e.g., the powerbank) measures these peaks and reconstructs the transmitted information. All this happens without the malicious app having Internet access or any permissions beyond those needed to read the information it wants to exfiltrate.
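The encoding scheme described above can be sketched in a few lines; the current-draw values, threshold, and bit framing here are illustrative assumptions, not the researchers' actual implementation:

```python
# Bits are encoded as high/low "current draw" intervals on the phone side
# and decoded by thresholding measurements on the charger side.
HIGH, LOW = 450.0, 120.0   # hypothetical current draw in mA

def encode(data: bytes):
    """Phone side: turn bytes into a sequence of consumption levels."""
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    return [HIGH if b else LOW for b in bits]

def decode(samples, threshold=300.0):
    """Charger side: threshold measured peaks and reassemble bytes."""
    bits = [1 if s > threshold else 0 for s in samples]
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        out.append(sum(bit << j for j, bit in enumerate(bits[i:i + 8])))
    return bytes(out)

leaked = decode(encode(b"PIN:1234"))
```

In practice the channel would also need timing synchronization and error correction, since real current measurements are noisy.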
If you're tasked with securing a portfolio of applications, it's an exercise in extremes. You've got a small team of security experts trying to help a multitude of developers, testers, and other engineers. You have to find a way to work with the team that's been around forever doing Waterfall on one huge product, and at the same time you have to support all the microservices that the new Agile and DevOps teams are building. And to make things extra exciting, those agile teams are pushing to production anywhere from once a month to several times a day. Even if your security team is fully staffed, there still aren't enough security experts to go around. Do you focus all your attention on the highly engaged team, the noisy and demanding team, or the team that never replies to your emails? They all need you.
By partnering with your development organization to create a guild of Security Champions, you can help them all. Establishing a Security Champion role on your development teams enables them to be more self-sufficient while maintaining, and even improving, their security posture. With careful selection and well-defined goals, you can train Security Champions who go beyond just interfacing with the security team and handle a range of security activities entirely within their own teams, helping you scale your program.
This presentation will examine the value of the Security Champion role within the development team, which groups need to commit for the program to succeed, how to find good champions, and what benefits everyone involved can expect to gain. Based on lessons learned building a successful Security Champion program over the past 5 years, it will detail actionable steps you can take to bootstrap, monitor, and maintain a customized program that fosters these champions in your organization.
91% of cybercrimes and attacks start with a phishing email. This means that cyber security researchers must focus on detecting phishing in all of its settings and uses. However, they face many challenges as they go up against sophisticated and intelligent attackers. As a result, they must use cutting-edge Machine Learning and Artificial Intelligence techniques to combat existing and emerging criminal tactics.
Encryption is a tool widely used across the internet to secure legitimate communications, but it is now being used by cybercriminals to hide their messages and carry out successful malware and phishing attacks while avoiding detection. Further aiding criminals is the fact that web browsers display a green lock symbol in the URL bar when a connection to a website is encrypted, creating a false sense of security in users, who are then more likely to enter their personal information into the page. The rise of attacks using encrypted sites means that information security researchers must explore new techniques to detect, classify, and take countermeasures against criminal traffic. So far, there is no standard approach for detecting malicious TLS certificates in the wild. Cyxtera researchers proposed a method for identifying malicious web certificates using deep neural networks trained on the content of TLS certificates, successfully identifying malware certificates with an accuracy of 95 percent.
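As a rough illustration of the kind of signals such a classifier might consume, here is a toy feature-extraction sketch; the feature set (name entropy, issuer, validity period) is an assumption for illustration, not Cyxtera's actual deep-learning input representation:

```python
import math

def shannon_entropy(s: str) -> float:
    """Character-level entropy; algorithmically generated phishing
    domains often score higher than legitimate brand names."""
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def cert_features(subject_cn: str, issuer: str, validity_days: int):
    """Toy feature vector for one certificate."""
    return {
        "cn_length": len(subject_cn),
        "cn_entropy": round(shannon_entropy(subject_cn), 3),
        "cn_hyphens": subject_cn.count("-"),       # common in lookalike domains
        "free_issuer": int("let's encrypt" in issuer.lower()),
        "short_lived": int(validity_days <= 90),
    }

f = cert_features("secure-paypa1-login.example.com", "Let's Encrypt", 90)
```

A neural network would learn its own representation from the raw certificate fields, but hand-crafted features like these make the intuition behind the approach concrete.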
In addition to combating existing attacks, researchers must focus on the future of fraud. As Artificial Intelligence and Machine Learning become crucial to cyber security, criminals will undoubtedly begin to harness these powerful tools to enhance their attacks. Cyxtera researchers created an algorithm called DeepPhish to simulate the weaponization of AI by real-life cybercriminals, and came to the staggering conclusion that intelligent algorithms could increase attack success rates by up to 3,000%.
In Windows 10, Microsoft is introducing a new memory protection concept: Windows Defender Device Guard, which provides code integrity for all modules in kernel mode, while PatchGuard prevents patching the kernel.
These features do not protect kernel-mode memory completely. Malware can steal and modify the allocated memory of third-party drivers without causing any BSOD. Likewise, elevating process privileges by patching EPROCESS.Token does not cause a BSOD. The reason is that kernel-mode drivers share the same memory space with the rest of the kernel.
Security researchers are trying to fill this gap. For example, "LKRG" provides only code integrity, without any protection of allocated memory, while AllMemPro protects allocated memory but not the code. "LKM guard" does not restrict the OS kernel, and "Hypernel" provides kernel integrity only for a limited set of kernel objects.
The goal is to move kernel-mode drivers into separate memory enclosures. This is possible by applying the VT-x and EPT features: guest-physical addresses are translated by traversing a set of EPT paging structures. EPT makes it possible to trap memory access attempts, redirect them, and allocate several EPT structures with different access configurations.
This idea is implemented in MemoryRanger (MR) as follows. Initially, MR allocates the default EPT structure and places all loaded drivers and the kernel inside it. When a new driver is loaded, MR allocates and configures a new EPT structure so that only this new driver and the OS kernel execute there. Each time this driver allocates memory, MR updates all EPTs: the allocated memory is made accessible only to this driver, while all other EPTs exclude it. MR permits legal memory access attempts and prevents illegal ones, isolating the execution of drivers by switching between EPTs.
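The per-driver isolation scheme can be modeled with a toy simulation; the class and method names below are invented for illustration, and the real MemoryRanger of course operates at the hypervisor level on EPT paging structures, not in Python:

```python
# Each driver gets its own "EPT", modeled here as a set of accessible
# page addresses. An access succeeds only if the page is present in the
# EPT that is active for the accessing driver; a miss models the trapped
# illegal access that MemoryRanger would prevent.
class MemoryRangerModel:
    def __init__(self):
        self.epts = {"default": set()}   # EPT name -> accessible pages

    def load_driver(self, name):
        self.epts[name] = set()          # new dedicated EPT for this driver

    def on_alloc(self, driver, page):
        # Newly allocated memory becomes visible only in the owner's EPT;
        # every other EPT excludes it.
        self.epts[driver].add(page)

    def access(self, driver, page):
        # MR "switches" to the accessing driver's EPT before the access.
        return page in self.epts.get(driver, set())

mr = MemoryRangerModel()
mr.load_driver("driverA")
mr.load_driver("driverB")
mr.on_alloc("driverA", 0x1000)
```

In this model, driverA can read its own allocation at 0x1000, while the same access attempt from driverB misses and would be trapped.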
The source code and a demo of MemoryRanger are available at https://github.com/IgorKorkin/MemoryRanger and https://www.youtube.com/watch?v=IMePtijD3TY&vq=hd1080.
As the development of sophisticated driving-assist technologies such as automated driving accelerates, securing vehicles against cyberattacks becomes challenging. The development of security-measurement methods is further hampered because companies place tough restrictions on security analysis of their electronic control units (ECUs). In such circumstances, we need an environment that includes transparent, highly adaptable ECUs; ideally, anyone should be able to apply a technology and evaluate it on those ECUs. Simulating an actual vehicle in hardware is also required for assessing the threat of cyberattacks. We need not only an adaptable platform for developing measures against existing cyber threats, but also the ability to simulate any function of an actual vehicle using white-box ECUs. In addition, to easily demonstrate and evaluate an applied security technique, it is important to make the environment portable. Considering these requirements, we propose the portable automotive security testbed with adaptability (PASTA), and give an example of evaluating it as a proof of concept. PASTA has the potential to become a comprehensive development platform for defenses against vehicle cyberattacks.
In Advanced Persistent Threat (APT) attacks, attackers tend to target Active Directory to expand their infections. They try to take over Domain Administrator privileges and create a backdoor called a "Golden Ticket," which lets them disguise themselves as arbitrary legitimate accounts and retain administrator privileges long-term. However, detecting attacks that use this method is quite difficult, since attackers often leverage legitimate accounts and commands that are not identified as anomalous.
We will introduce a real-time method for detecting attack activities that leverage Domain Administrator privileges, including Golden Tickets, using Domain Controller event logs. If attack activities with Domain Administrator privileges can be detected immediately, the damage can be minimized.
Our proposed method consists of the following steps to reduce the false detection rate and enable immediate response.
Step 1 (signature-based detection): First, analyze event logs, focusing on the characteristics of the attack activities.
Step 2 (machine learning): Apply anomaly detection using unsupervised machine learning, flagging as outliers the suspicious commands that attackers tend to use.
Step 3 (real-time alerting): When attack activities are detected, raise a real-time alert using the Elastic Stack.
We have developed a detection tool and published it on GitHub. We will also show the specific algorithm of the proposed method and how to implement it. The method can be implemented easily and helps enable an immediate response to attacks.
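Step 2 above can be sketched with a simple frequency-based outlier rule; the event log, command names, and rarity threshold are illustrative, and the published tool's actual algorithm may differ:

```python
from collections import Counter

def rare_command_outliers(events, max_ratio=0.05):
    """events: list of (account, command) pairs from Domain Controller
    logs. Flag commands whose share of all admin activity falls below
    max_ratio -- typical attacker tools (ntdsutil, csvde, mimikatz) are
    rare in normal day-to-day administration."""
    counts = Counter(cmd for _, cmd in events)
    total = sum(counts.values())
    return {cmd for cmd, n in counts.items() if n / total <= max_ratio}

# Simulated log: routine admin commands dominate, attacker tools are rare.
log = ([("admin", "gpupdate")] * 40 + [("admin", "svchost")] * 58 +
       [("admin", "ntdsutil")] + [("admin", "csvde")])
suspicious = rare_command_outliers(log)
```

Unsupervised outlier detection generalizes this idea: instead of a fixed ratio, the model learns what "normal" admin activity looks like and scores deviations from it.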
This talk will review some of the most spectacular security failures in blockchain systems, and will help you mitigate your risks. We will notably review some of the most dramatic Ethereum smart contract issues, discuss objectively the case of IOTA's custom crypto, describe how we could have stolen millions of dollars' worth of tokens (but didn't), and present examples of bugs that we found in popular Bitcoin software utilities. In the second part of the talk, we'll review the different types of wallets and their pros and cons, and we'll discuss the risks and benefits of hardware-based wallets for individuals, organizations, and trading platforms. The speaker has extensive experience auditing blockchain systems for leading cryptocurrencies, and now helps secure a cryptocurrency exchange platform.
Jailbreaking, in general, means breaking a device out of its "jail." Apple devices (e.g., iPhone, iPad, Apple Watch) are the most famous "jailed" devices in the world. iOS, macOS, watchOS, and tvOS are operating systems developed by Apple Inc. and used in Apple devices. All of these systems are built on the same hybrid kernel, XNU. To jailbreak a device, attackers need to patch the kernel to disable the corresponding security measures. An essential precondition for kernel patching is gaining a stable arbitrary kernel memory read and write capability through kernel vulnerabilities. It is a consensus in security that no system is without flaws; therefore, the only thing Apple can do is add an increasing number of mitigations. However, attackers can always find a way to bypass them.
In this talk, we perform a systematic assessment of the mitigation strategies recently deployed by Apple. We demonstrate that most of these defenses can be bypassed by corrupting unsafe kernel objects, a class of attack we summarize as ipc_port Kernel Object-Oriented Programming (PKOOP). More specifically, we show realistic attack scenarios that achieve full control of the latest XNU version. To defend against PKOOP attacks, we propose the XNU Kernel Object Protector (XKOP), which significantly reduces the number of possible targets among unprotected kernel objects. XKOP is a framework that hooks related system calls to check the integrity of risky kernel objects without modifying the system. We believe that our assessment and framework are constructive contributions to the design and implementation of a secure XNU kernel.
This briefing will highlight the most recent expansion of the tools of the Syrian Electronic Army (SEA), which are now known to include an entire mobile surveillanceware family (SilverHawk). This is the first time a family of mobile surveillanceware has been directly attributed to the SEA with high certainty, highlighting a new stage in the group's technical evolution. To date, SilverHawk has been identified in over 30 trojanized versions of many well known apps, including Telegram, WhatsApp, Microsoft Word, YouTube, and the Guardian Project's Chat Secure app.
We'll take a look at the SEA's past notable activities, but primarily dive into SilverHawk's capabilities, as well as the significance of the group's ability to develop this toolset. Additionally, we'll explain how we attributed and tied infrastructure to one of the SEA's most high-profile hackers, known as th3pro, who is currently on the FBI's Cyber Most Wanted list.
Over the past fifteen years, there's been an uptick in "interesting" UNIX infrastructures being integrated into customers' existing AD forests. While the threat models this enables should be quite familiar to anyone securing a heterogeneous Windows network, they may not be as well understood by a typical UNIX admin without a strong background in Windows and AD. Over the last few months I've spent some time looking at a number of specific AD integration solutions (both open and closed source) for UNIX systems and documenting some of the tools, tactics, and procedures that enable attacks on the forest to be staged from UNIX.
Over time, our hardware has become smaller, faster, cheaper - and also incredibly more complicated. Just like with software, this complexity brings with it both increased attack surfaces and a more difficult detection problem.
Unfortunately, right now, when it comes to hardware attacks, the discourse is focused on sensationalism. We've got reports of devices few people have heard of, doing things few people realize are possible, perhaps happening on a scale fewer people understand. When it comes to hardware details, they're incomprehensible to laypeople, as well as to most software security experts.
I'll start with a background on real examples of what we'd call 'hardware implants' to set the context and understand the scenarios where hardware implants make sense. We'll examine a few recent cases of claimed hardware implants to understand how we can classify them in terms of complexity and risk. With that information, we can then make rational decisions on where these and other hardware threats fit in your threat model.
With these examples in hand, you will better understand when it makes sense to respond to hardware threats, as well as how to prioritize your response to best reduce your overall risk.
Public container images are riddled with vulnerabilities. We've analyzed the top 100 official Docker images present on Docker Hub and found thousands of vulnerabilities and misconfigurations. Many of these vulnerabilities lie not within the application itself but in dependencies, binaries, and file/user/network permissions that are not required for the application to run. This issue has recently been mitigated by using smaller base images such as Alpine, Minideb, and CirrOS. While this is a step toward reducing the attack surface, it is still not enough.
Like Unix tools, containers should be atomic in nature and fulfill only one task efficiently. In the context of containers, this means a container should be tailored to run one application only. It means only the required libraries, binaries, files, and network protocols to support a given application should be present.
Our approach tackles this problem by using a fine-grained container-wide profiling tool we developed to identify the subset of resources that the application absolutely needs in order to perform its normal operation. The output of our tool is then used to guide the container re-creation process to generate a new unique container image tailored specifically to only support the given application.
This new container image not only contains the minimum set of dependencies, but is also hardened with strict lock down policies which are enforced at runtime at the system API level to support only the application's intended operations, and neutralize any unneeded functionality that may be of use to exploits. In a preprocessing phase, the tool analyzes each application to pinpoint the call sites of potentially useful (to attackers) system API functions, and uses backwards data flow analysis to derive their expected argument values and generate whitelisting policies in a best-effort way. At runtime, the system exposes to the protected application only specialized versions of these critical API functions, and blocks any invocation that violates the enforced policy.
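The runtime enforcement described above can be modeled with a toy wrapper; the policy contents and function names below are invented for illustration, and the real system enforces its whitelists at the system-API level rather than in Python:

```python
# Each critical API function is exposed only through a specialized
# wrapper that checks arguments against the policy derived during the
# preprocessing phase (backwards data-flow analysis).
POLICY = {
    "execve": lambda path: path in {"/usr/sbin/nginx"},  # expected argument values
    "connect": lambda port: port in {80, 443},
}

class PolicyViolation(Exception):
    pass

def guarded(func_name, real_impl):
    """Return the specialized version of a critical API function."""
    def wrapper(arg):
        if not POLICY[func_name](arg):
            raise PolicyViolation(f"{func_name}({arg!r}) blocked")
        return real_impl(arg)
    return wrapper

execve = guarded("execve", lambda path: f"spawned {path}")
ok = execve("/usr/sbin/nginx")       # the application's intended operation
try:
    execve("/bin/sh")                # typical exploit payload target
    blocked = False
except PolicyViolation:
    blocked = True
```

The key design choice is that the policy is derived per application ahead of time, so at runtime only the intended call sites and argument values survive, neutralizing functionality that exploits would abuse.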
We've tested our approach on thousands of containers and will present results that demonstrate that our approach not only successfully removed 50%-70% of the known vulnerabilities in the tested images, but can also effectively block many zero-day attacks.
A wide gap exists between real-world attack scenarios and the implicit security guarantees of most popular databases, particularly around confidentiality of encrypted information. This talk will review the latest advances and breaks in database encryption techniques, including searchable encryption, multi-party authorization, and attribute based access.
The presentation will provide architects and defenders with specific practical guidance to protect high-sensitivity workloads in the most demanding privacy and compliance environments. We will dive deep into database encryption threat models and the realities of production ops, including emerging methods around data in use and blind administrator models.
In 2016, attackers broke into John Podesta's e-mail account and published his mailbox via WikiLeaks; many messages could be authenticated by their DKIM signatures. Afterwards, secure messaging apps saw a flood of new users: Signal, for example, saw a 400% increase in downloads. One reason is that secure messaging applications like Signal promise cryptographic deniability: when you send a message to someone, they can verify that it came from you, but the protocol leaves no trace that can be used to convince skeptical third parties of who sent it.
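A minimal sketch of why MAC-based message authentication, as used in Signal-style protocols, is deniable (the key and message below are illustrative):

```python
import hashlib
import hmac

# Sender and receiver share the same symmetric key, so the receiver can
# compute a tag that is bit-for-bit identical to the sender's. A third
# party shown (message, tag, key) cannot tell which party produced it --
# unlike a DKIM signature, which only the signing server could create.
shared_key = b"session-key-derived-via-ratchet"

def tag(message: bytes) -> bytes:
    return hmac.new(shared_key, message, hashlib.sha256).digest()

sender_tag = tag(b"meet at noon")        # computed by the sender
receiver_forgery = tag(b"meet at noon")  # receiver can produce the same tag
```

This symmetry is exactly what the remote-attestation attack below undermines: a TEE can prove it never released the key to the attacker, so the "anyone could have forged this" argument stops convincing skeptical third parties.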
Enter remote attestation: most new processors include a hardware-assisted trusted execution environment (TEE) that provides remote attestation; such TEEs can prove something about their state to a remote party. An attacker, even a manifestly untrustworthy one like a criminal or propaganda organization, can piggyback on the trust placed in the TEE, allowing them to prove to a skeptical audience that their purloined messages are authenticated by the messaging protocol, and that the attacker did not have the keys needed to forge the messages.
We demonstrate this attack using the Signal protocol and Intel SGX, but it applies to *any* purely-software protocol that provides sender authentication of messages.
We show how to design protocols that resist attackers with remote attestation, including purely cryptographic methods such as online deniable key establishment (which work against some adversaries and are adopted by the upcoming OTRv4) and methods that use TEEs themselves (which can stop the attack completely).
More generally, we want to raise awareness among users of secure messaging protocols about the limits of the deniability they can expect, and among designers of such protocols that the widespread availability of hardware-assisted remote attestation has changed the implicit assumptions they make.
In 1979, The Buggles released the hit song "Video Killed the Radio Star." Nowadays, The Buggles could write a new song titled "Video Killed the Text Star." Social networks are growing around video content, which means that if OSINT (Open Source INTelligence) wants to stay alive, it needs to start extracting value from video content.
Video analysis, and person recognition in particular, is of great interest to security managers, CISOs, and analysts with responsibilities in physical security. Video and image processing is gaining prominence: Internet content grows exponentially, and there is a clear trend toward publishing content in video format. It's time to focus on this kind of information and extend OSINT techniques and technologies to be able to process it.
Human face recognition is one of the machine-learning applications that has advanced most in recent years. These advances, together with the power of today's hardware, make it possible to implement this kind of service at very low cost.
We can use open source components to build a fast and scalable infrastructure that allows you to analyze hundreds of hours of video in a simple way.
The work shown in this presentation allows people to be identified by comparing them against a series of reference images, to determine whether or not those people appear in the recorded video footage.
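The matching step of such a pipeline might look like the following sketch; real systems use high-dimensional embeddings produced by a trained face-recognition network, while the 3-dimensional vectors and the threshold here are illustrative:

```python
import math

def distance(a, b):
    """Euclidean distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(frame_embedding, references, threshold=0.6):
    """references: name -> reference embedding. Return the best match
    if it is close enough, otherwise None (unknown person)."""
    best = min(references, key=lambda n: distance(frame_embedding, references[n]))
    return best if distance(frame_embedding, references[best]) < threshold else None

refs = {"alice": [0.1, 0.9, 0.3], "bob": [0.8, 0.2, 0.5]}
who = identify([0.15, 0.85, 0.33], refs)      # embedding from a video frame
unknown = identify([9.0, 9.0, 9.0], refs)     # no reference nearby
```

Scaling this to hundreds of hours of video is then mostly an engineering problem: sample frames, extract embeddings in parallel, and run this comparison against the reference set.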
When Everyone's Dog is Named Fluffy: Abusing the Brand New Security Questions in Windows 10 to Gain Domain-Wide Persistence
Magal Baz | Security Researcher, Illusive Networks
Tom Sela | Head of Security Research, Illusive Networks
Location: Room B
Format: 25-Minute Briefings
In Windows domain environments, most attacks involve obtaining domain admin privileges. But that's not enough: once attackers obtain them, they have to make sure they don't lose their grip on the domain. That's why, in recent years, much innovation has focused on means of establishing domain persistence, with new ideas emerging such as DCShadow and DACL-based methods. In this session, we will present an effective approach for creating such persistence, exploiting new opportunities offered by the Windows 10 security questions feature.
The new feature, introduced this April, is a good example of how a well-intended idea can become a security nightmare. It allows a user to provide security questions and answers that can later be used to regain access to a local account. Yep, questions like "What was the name of your first pet?" are now safeguarding your domain.
We dug into the implementation of this feature and discovered that it can be abused to create a very durable, low profile backdoor. Once an attacker compromises a network, this backdoor can be remotely distributed to any Win 10 machine in the network - without even executing code on the targeted machine. We'll present this method and the challenges we faced while implementing it, including:
- How to remotely set the security questions
- How to gain remote access to the "Password Reset" screen
- How to revert to the user's original password after the reset, to avoid leaving traces (using existing Mimikatz features)
We'll also share methods for mitigating this attack, including an open source tool we've developed that can control or disable the security questions feature, thus preventing security questions from being used as a backdoor.
The face: a crucial means of identity. But what if this crucial means of identity is stolen from you? Yes, this is happening, and it is termed a "deepfake." Deepfake technology is an artificial-intelligence-based human image synthesis method used in various ways, such as to create revenge porn, fake celebrity pornographic videos, or even cyber propaganda. Videos are altered using Generative Adversarial Networks, in which the face of the speaker is manipulated by tailoring it to someone else's face. These videos can sometimes be identified as fake by the human eye; however, as neural networks are rigorously trained on more resources, it will become increasingly difficult to identify fake videos. Such videos can cause chaos and bring economic and emotional damage to one's reputation. Videos targeting politicians as cyber propaganda can prove catastrophic to a country's government.
We will discuss the many tentacles of deepfakes and the dreadful damage they can cause. Most importantly, this talk will provide a demo of the proposed solution: identifying complex deepfake videos using deep learning. This can be achieved using a pre-trained FaceNet model, trained on image data of people of importance or concern. After training, the output of the final layer is stored in a database. A set of images sampled from a video is then passed through the neural network, and the output of the final layer is compared to the values stored in the database. The mean squared difference confirms or refutes the authenticity of the video.
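The verification step described above can be sketched as follows; the embedding values and the decision threshold are illustrative assumptions, not the actual model's outputs:

```python
def mean_squared_difference(a, b):
    """MSE between one frame's final-layer output and the stored value."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def verdict(frame_outputs, stored_output, threshold=0.01):
    """frame_outputs: final-layer vectors for frames sampled from a
    video. A large average MSE against the enrolled embedding for the
    claimed person suggests the face has been synthetically swapped."""
    msd = sum(mean_squared_difference(f, stored_output)
              for f in frame_outputs) / len(frame_outputs)
    return "authentic" if msd < threshold else "likely fake"

stored = [0.12, 0.88, 0.40, 0.55]                        # enrolled embedding
genuine = [[0.13, 0.87, 0.41, 0.54], [0.11, 0.89, 0.39, 0.56]]
tampered = [[0.70, 0.10, 0.95, 0.05], [0.65, 0.15, 0.90, 0.10]]
```

In practice the threshold would be calibrated on known genuine and manipulated videos of the enrolled person.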
We believe that in 2018, deepfakes will progress to a different level. We will also talk about defensive measures against them.
The last two years have been filled with high-profile enterprise security incidents that shared a common origin: breach of a trusted software provider. In truth, supply chain attacks have played a key role in numerous targeted and opportunistic attacks - many of which flew under the radar - for years. This presentation examines the emergence of software supply chain compromises, the factors incentivizing attackers to adopt this approach, and practical approaches to risk mitigation and defense that enterprises can take in response.
We survey elliptic curve implementations from several vantage points. We perform internet-wide scans for TLS on a large number of ports, as well as for SSH and IPsec, to measure elliptic curve support and implementation behaviors, and we collect passive measurements of client curve support for TLS. We also perform active measurements to estimate server vulnerability to known attacks against elliptic curve implementations, including support for weak curves, invalid curve attacks, and curve twist attacks. We estimate that 0.77% of HTTPS hosts, 0.04% of SSH hosts, and 4.04% of IKEv2 hosts that support elliptic curves do not perform curve validity checks as specified in elliptic curve standards. We describe how such vulnerabilities could be used to construct an elliptic curve parameter downgrade attack for TLS, called CurveSwap, and observe that there do not appear to be combinations of the weak behaviors we examined that would enable a feasible CurveSwap attack in the wild. We also analyze source code of elliptic curve implementations, finding that a number of libraries fail to perform point validation for JSON Web Encryption, and identifying a flaw in the Java and NSS multiplication algorithms.
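The curve validity check that the vulnerable hosts skip is simple to state; the sketch below uses a toy curve (not a real parameter set) to show the test a careful implementation performs on every received point before using it in a key exchange:

```python
# Toy short-Weierstrass curve y^2 = x^3 + ax + b (mod p). Real
# deployments use standardized curves like P-256; these tiny parameters
# are illustrative only.
p, a, b = 97, 2, 3

def on_curve(x, y):
    """A received public point must satisfy the curve equation;
    otherwise it may lie on a weaker attacker-chosen curve
    (the basis of invalid-curve attacks)."""
    return (y * y - (x * x * x + a * x + b)) % p == 0

# A valid point: for x = 0, y^2 = 3 (mod 97), and 10^2 = 100 = 3 (mod 97).
valid = on_curve(0, 10)
# An attacker-supplied point that is not on the curve:
invalid = on_curve(5, 5)
```

Standards additionally require checking that the point is not the identity and (for some curves) that it lies in the correct subgroup; skipping any of these checks is what the scans above measure.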
Over the course of iOS's rapid evolution, Apple has deployed many mechanisms to defend against potential threats and risks. Among system components, the filesystem is considered the last line of defense against attackers' attempts to steal and tamper with users' private data, as well as against permanent damage such as the installation of backdoors or malicious applications.
In consideration of both security and performance, Apple recently proposed and deployed a new filesystem, the Apple File System (APFS), on iOS and macOS. On iOS in particular, as required by the system's rigorous security policies, APFS has adopted several protection mechanisms to prevent critical files and directories from being tampered with, even in the face of attackers with kernel privileges. In our study, however, we found that these mechanisms are not as secure as they are supposed to be, and we successfully discovered ways to exploit or bypass them.
In this talk, we will first introduce the architecture of the filesystem on Apple systems as well as the basic structure of APFS. We will then explain previous attacks on APFS and examine APFS's new mitigations through several experiments. Most importantly, we will propose a new attack that bypasses these mitigations, allowing an attacker to tamper with any file or directory on the system.
The knowledge of the APFS architecture, its weak points, and our new attack, none of which has been thoroughly presented in previous talks, is indispensable to iOS hackers and jailbreakers. We believe our talk will inspire the design of a more secure filesystem for Apple systems.
Deep Impact: Recognizing Unknown Malicious Activities from Zero Knowledge
Hiroshi Suzuki|Malware & Forensics Analyst, Internet Initiative Japan Inc. Hisao Nashiwa|Threat Analyst, Internet Initiative Japan Inc.
Location: Room B
Format: 50-Minute Briefings
Existing approaches to detecting malicious activity include pattern matching, blacklists, behavioral analysis, and event correlation. However, these approaches have several problems.
For instance:
- Unknown threats and sophisticated attacks can circumvent these solutions.
- Some of them require huge resources.
This talk will cover how to solve the issues above and how we detect unknown malicious activity from the ordinary logs of devices that are not dedicated to attack detection, such as proxies and firewalls.
1. C2 Server Detection
We discover malware that periodically communicates with C2 servers, such as bots and RATs, from zero knowledge. To achieve this, we generate over two million communication patterns by enumerating C2-like communication patterns with a generator script. We then use convolutional neural networks, converting common logs into "virtual images" by mapping communication counts and sent/received bytes in chronological order.
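The "virtual image" encoding might look something like the sketch below. This is our assumption of the scheme, not the authors' code: log records for one client/destination pair are bucketed by time, and each bucket becomes one pixel row of request count and byte totals, producing a 2D array a CNN can consume like an image.

```python
def logs_to_virtual_image(records, bucket_seconds=60, n_buckets=5):
    """records: list of (timestamp, sent_bytes, received_bytes) tuples.
    Returns an n_buckets x 3 grid: [count, sent, recv] per time bucket."""
    image = [[0, 0, 0] for _ in range(n_buckets)]
    t0 = min(t for t, _, _ in records)
    for t, sent, recv in records:
        row = min(int((t - t0) // bucket_seconds), n_buckets - 1)
        image[row][0] += 1
        image[row][1] += sent
        image[row][2] += recv
    return image

# A bot beaconing every 60 s with near-constant payload sizes yields a
# strikingly regular "image" - the kind of pattern a CNN can learn to spot.
beacon = [(0, 120, 300), (60, 121, 300), (120, 120, 301),
          (180, 120, 300), (240, 122, 299)]
print(logs_to_virtual_image(beacon))
```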
We will show that our models are able to detect the various C2 communications of unknown (i.e., unlearned) malware samples from actual incidents, such as PlugX, RedLeaves/himawari, xxmm, Asruex, ursnif/gozi, and Vawtrak.
2. Exploit Kit Detection
Stable detection of exploit kits (EKs) is difficult because EK URLs and contents change frequently. However, we have found an effective method for EK detection from zero knowledge.
The method is able to detect unknown (again, unlearned) EKs from standard proxy logs by recognizing emulated content-type sequences of EKs (e.g., html -> swf -> octet-stream) with recurrent neural networks. These sequences are deeply related to the behavior of EKs; therefore, attackers cannot change them easily.
We will show that our models, trained on 300 thousand EK-like content-type sequences, are able to detect 14 kinds of EKs, including Rig, Nebula, Terror, Sundown, and KaiXin.
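The sequence-encoding step can be illustrated as follows. The RNN itself is omitted; a naive contiguous pattern match stands in purely to show the pipeline shape, and the vocabulary and helper names are our own assumptions.

```python
VOCAB = {"html": 1, "swf": 2, "octet-stream": 3, "javascript": 4, "image": 5}

def encode(seq):
    """Map a content-type sequence to integer tokens (0 = unknown type)."""
    return [VOCAB.get(ct, 0) for ct in seq]

# The EK-like pattern from the abstract: html -> swf -> octet-stream
EK_PATTERN = encode(["html", "swf", "octet-stream"])

def looks_like_ek(seq):
    """True if the EK pattern appears as a contiguous subsequence.
    (A trained RNN would score sequences instead of exact-matching.)"""
    tokens = encode(seq)
    n = len(EK_PATTERN)
    return any(tokens[i:i + n] == EK_PATTERN for i in range(len(tokens) - n + 1))

print(looks_like_ek(["html", "swf", "octet-stream"]))           # True
print(looks_like_ek(["html", "image", "javascript", "image"]))  # False
```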
Threat Intelligence is a sound proposition that has its place in a mature security operation. But like so many good concepts in our industry, its path to commercialization has involved commoditization to the point of potentially dangerous over-simplification.
Intelligence is supposed to be non-obvious, actionable value-added information that is only available through some form of processing and interpretation. In truth, however, the basic premise of most commercial products is that if an entity has been observed acting maliciously in one location, then it should also be expected at other locations and prepared for.
On this premise, Threat Intelligence feeds are sold at hundreds of thousands of dollars a year.
Does it work?
This talk will present an analysis of the ability of Threat Intelligence to predict malicious activity on the Internet.
Our analysis involves the investigation of over a million Internet threat indicators over a period of six months. Notably, we've used a diverse set of sensors on real-world networks with which to track a range of malicious activities on the Internet, including port scans, web application scans, DoS & DDoS and exploits.
We track the malicious IP addresses detected, looking at their behavior over time and mapping both 'horizontal' correlations - the ability of one sensor to predict activity on a different sensor, or one target to predict for another target - and 'vertical' correlations - the ability of a sensor to predict persistence or re-appearance of an IP indicator.
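One simple (assumed, not necessarily the authors') way to quantify both correlation types is set overlap between indicator sets: horizontal correlation as the overlap between two sensors in the same window, vertical as one sensor's overlap with itself across windows.

```python
def jaccard(a, b):
    """Jaccard similarity of two indicator sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical malicious-IP sets (documentation-range addresses).
sensor1_day1 = {"198.51.100.7", "203.0.113.9", "192.0.2.14"}
sensor2_day1 = {"203.0.113.9", "192.0.2.99"}
sensor1_day2 = {"198.51.100.7", "203.0.113.9"}

horizontal = jaccard(sensor1_day1, sensor2_day1)  # one sensor predicting another
vertical = jaccard(sensor1_day1, sensor1_day2)    # persistence over time
print(round(horizontal, 2), round(vertical, 2))
```

A feed's predictive value is high only when such overlaps are consistently well above what chance would produce.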
By examining these two sets of correlations, we believe we can shed some light on the value proposition of basic Threat Intelligence offerings and, in doing so, improve our understanding of their place and value in our security systems and processes.
All our data and modeling code will be released after this talk.
Trusted Execution Environments (TEEs) are present in many devices today, and are used to perform security critical computation in an isolated environment. ARM's TrustZone is one of the most widely used TEEs in the world today, present in nearly every modern Android device.
TrustZone allows developers to write applications that run in a "Secure World" with hardware isolation of resources. Most implementations use a Trusted Operating System to run multiple Trusted Applications.
However, Trusted Applications are still written in C, and many common classes of vulnerabilities have been found in them. While TrustZone provides isolation of resources, it cannot protect against vulnerable code.
In this talk, we will explore using the Rust language to write a Trusted Application. Rust allows developers to write system level code, but provides security features including memory safety, type safety, and error handling. These are desirable features for development of Trusted Applications.
We will begin with an overview of TrustZone and the Rust language, then show how Rust can be used to develop a Trusted Application. To conclude, we will demo a Trusted Application on real TrustZone hardware.
This presentation focuses on modern exploitation techniques for VMware Workstation guest's virtual graphics device in order to achieve code execution on the host operating system. There will be in-depth analysis of VMware's graphics pipeline and its implementation details from an attacker's point of view. Specifically, the talk will cover how VMware bootstraps the virtual graphics device and how its behavior is affected by the host and guest operating systems. Once the graphics device is ready, the so-called SVGA thread is waiting for user commands. The presentation will enumerate the available ways to issue commands from the guest in order for them to be processed by the SVGA thread in host context.
VMware uses the SVGA3D protocol to communicate with the guest operating system (frontend interface). Commands sent by a guest user are processed by the host operating system using one of the available backend interfaces. The backend interface is selected with respect to the host operating system; this talk will focus on Windows hosts. Concepts used by both the frontend and the backend interfaces will be explained as well. Furthermore, there will be a discussion of VMware's internal objects and how they can be abused in order to develop novel primitives that are valuable assets for exploiting a memory corruption vulnerability. The aforementioned primitives include a reliable way to spray the host's heap, a way to leak memory from the host, and of course, a way to execute arbitrary code in host context. Finally, this purely offensive talk will conclude with a demonstration of those techniques using a public (patched) VMware vulnerability.
By 2020, the estimated shortfall in the security workforce will reach 1.5 million people (https://bit.ly/2GO6Ov0). Moreover, the ever-evolving computing landscape confronts us with many security challenges, and startups and large companies alike need talented employees. We are thus lacking in both quality and quantity, and such people are hard to recruit and retain. The utopia is finding those who work hard, are innovative and creative, and keep on learning. That sounds like a tough mission. So how can we create them?
If only we could teach teenagers technical computer skills such as (but not limited to) networking, operating system internals, programming languages, and the security implications of writing vulnerable code. If only we could then let them solve CTFs, mentor other teenagers in the same things they have just learned, and, building on that, think about new ways to protect and attack our systems. Only then could we really create the next generation of the talented people the industry is so eager to find.
In this talk, we will present a new approach to education in the field of cybersecurity and demonstrate it with a case study from Israel. We will explain in detail how to build a framework (leveraging different pedagogical paradigms) of programs and groups, with the support of government, industry, and community, all for the sole purpose of creating a new generation of experts inventing the next big thing.
In this talk, we present a practical side-channel attack that identifies the social web service account of a visitor to an attacker's website. As user pages and profiles in social web services generally include the user's name and activities, the anonymity of a website visitor can easily be destroyed by identifying his or her social account.
Our attack leverages the widely adopted user-blocking mechanism, abusing its inherent property that certain pages return different content depending on whether a user is blocked by another user. Our key insight is that an account prepared by an attacker can hold an attacker-controllable binary state of blocking/non-blocking with respect to an arbitrary user on the same service. This state can be retrieved as one-bit data through the conventional cross-site timing attack when a user visits the attacker's website. We generalize this property as "visibility control," which we consider to be the fundamental assumption of our attack. Building on this primitive, an attacker with a set of controlled accounts can gain a flexible control over the data leaked through the side channel. Using this mechanism, it is possible to design a robust, large-scale user identification attack on social web services.
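The way a set of controlled accounts turns the one-bit leak into full identification can be sketched as a toy (our own simplification, not the authors' implementation): each candidate user is pre-assigned a unique blocking pattern across the attacker's accounts, so the per-account blocked/non-blocked bits observed via the timing channel spell out who the visitor is.

```python
candidates = ["alice", "bob", "carol", "dave"]

K = 2  # ceil(log2(len(candidates))) attacker accounts suffice

def blocking_pattern(idx):
    """Attacker account i blocks candidate idx iff bit i of idx is 1.
    These patterns are set up in advance on the social service."""
    return [(idx >> i) & 1 for i in range(K)]

def identify(observed_bits):
    """Recover the visitor from the per-account one-bit leaks."""
    idx = sum(bit << i for i, bit in enumerate(observed_bits))
    return candidates[idx]

# Suppose the timing side channel reports: not blocked by account 0,
# blocked by account 1 -> bit string [0, 1] -> candidate index 2.
print(identify([0, 1]))  # 'carol'
```

With this encoding, identifying one user among N candidates needs only about log2(N) controlled accounts, which is what makes the attack scale.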
We performed an extensive empirical study and found that at least 12 services are vulnerable: Facebook, Instagram, Tumblr, Google+, Twitter, eBay, PornHub, Medium, Xbox Live, Ashley Madison, Roblox, and Xvideos. The attack achieves 100% accuracy and finishes within a sufficiently short time in a practical setting.
We have shared the details of this attack and its countermeasures with the service providers and browser vendors. As a result, Twitter and eBay have prevented the attack by changing their implementations. In addition, the "SameSite" cookie attribute has been added to some major browsers, such as Microsoft Edge, Internet Explorer, and Mozilla Firefox.
Dominic Schaub|Head, Research and Development, Discrete Integration Corp.
Location: Room A
Format: 50-Minute Briefings
Data Forensics/Incident Response
Deniable encryption and steganography nominally safeguard sensitive information against forced password disclosure by concealing its very existence. However, while the presence of sensitive information may be 'plausibly' denied, the possession of steganographic software (e.g. suspiciously configured VeraCrypt) is readily detected and regarded as a 'smoking gun' that invalidates such deniability. This weakness, which undermines protection against rubber-hose cryptanalysis and aggressive password disclosure statutes, affects all known steganographic software and is especially problematic for deniable encryption suites such as VeraCrypt, which typically remain installed and visible on a user's hard drive.
This talk will cover efforts to overcome this critical limitation through a novel form of steganography that is self-concealing. In this new paradigm, steganographic tools hide themselves in a self-recursive manner that renders them forensically invisible. Moreover, upon cryptographic activation by an authorized user, these hidden tools can bootstrap themselves into existence without generating any incriminating forensic evidence. Provided that requisite cryptographic conditions are met, such steganography can be considered "perfectly deniable."
The talk will cover the successful design and implementation of a self-concealing, perfectly deniable encryption/steganography suite that is similar in functionality to VeraCrypt's hidden volume/OS feature. However, unlike VeraCrypt, the decoy system employs Linux's customary disk encryption (cryptsetup/dm-crypt) and requires no additional binaries, peculiar partition schemes (or inexplicable unallocated disk space), restrictions on cover-system write operations, or modification to TRIM settings. In fact, the decoy system appears bit-for-bit as a normal Linux system that was configured with only default parameters (e.g. repeatedly clicking 'next' during Ubuntu installation). Conversely, a simple cryptographic operation by an authorized user will bootstrap a hidden, fully functional OS into existence in a process that generates no forensic evidence and requires no outside binaries. The talk will demonstrate such a working system, which testing has found to be fast, stable, and functional.
Enterprise Wi-Fi access points featuring BLE (Bluetooth Low Energy) chips have become increasingly common in recent years. While these chips provide new features, they also introduce risks that create a new network attack surface.
In this talk, we will demonstrate BLEEDINGBIT, two zero-day vulnerabilities in Texas Instruments' (TI) BLE chips used in Cisco, Meraki, and Aruba wireless access points, that allow an unauthenticated attacker to penetrate an enterprise network over the air.
The first BLEEDINGBIT vulnerability was discovered in the BLE stack embedded on TI chips in Cisco and Meraki Wi-Fi access points. The second vulnerability was discovered in TI's OAD (over-the-air firmware download) feature used by nearly every Aruba Wi-Fi access point currently for sale. Combined, these vendors represent 80% of all wireless access points sold each year to enterprises.
Using BLEEDINGBIT, an attacker first achieves RCE on the BLE chip, and then can use the BLE chip to compromise the main OS of the access point and gain full control over it. Once an access point has been compromised, an attacker can read all traffic going through the access point, distribute malware, and even move laterally between network segments.
Although first discovered in wireless access points, BLEEDINGBIT vulnerabilities may exist in many types of devices and equipment used across many different industries. For example, medical centers use BLE to track the location of beacons on valuable assets like resuscitation carts. Retailers use BLE for mobile credit card readers and indoor navigation applications. A BLEEDINGBIT attack against any of these devices would come out of thin air, bypassing existing security controls, and catching these organizations unprotected.
Adversaries love leveraging legitimate functionality that lies dormant inside Microsoft Windows for malicious purposes and often disguise their activity under the smoke screen of "normal administrator behavior." Over the last year, there has been a significant surge in the malicious use of Component Object Model (COM) objects as a "living off the land" approach to lateral movement. COM, a subsystem that has been around since the early days of Microsoft Windows, exposes interfaces and functionality within software objects and can share this functionality over the network via Distributed COM (DCOM). With over 20 years in existence and over a year of relative popularity among adversaries, one would imagine that network analysis and detection of DCOM attacks was old news. On the contrary, very few people understand the techniques, tools fail to properly parse the network protocol, and adversaries continue to successfully leverage it to further the compromise of networks. Needless to say, it's difficult to defend against techniques that the defenders don't understand.
This talk aims to address that knowledge gap by exploring DCOM as a lateral movement technique and provide a methodical walk through of the technique from both the attacker and defender perspectives. The audience will get a deep dive into:
- [D]COM 101
- How an adversary chooses a COM object for lateral movement
- NSM approaches to DCOM (pros vs. cons)
- Network protocol analysis of the attack using open-source tools
Perception Deception: Physical Adversarial Attack Challenges and Tactics for DNN-Based Object Detection
Zhenyu Zhong|Staff Security Scientist, X-Lab, Baidu USA Weilin Xu|PhD candidate, Department of Computer Science at the University of Virginia Yunhan Jia|Senior Security Scientist, Baidu X-Lab Tao Wei|Chief Security Scientist, X-Lab, Baidu USA
Location: Room A
Format: 50-Minute Briefings
DNNs have been successful at object detection, which is critical to the perception systems of autonomous driving, yet they have also been found vulnerable to adversarial examples. There has been an ongoing debate about whether perturbations to the sensor input, such as the video stream from the camera, are practically achievable. Instead of tampering with the input stream, we add perturbations to the target object itself, which is more practical. The goal of this talk is to shed light on the challenges of physical adversarial attacks against computer-vision-based object detection systems and the tactics we applied to achieve success. At the same time, we would like to raise security concerns about AI-powered perception systems and urge research efforts to harden DNN models.
The presentation starts with an overview of YOLOv3 to introduce the fundamentals of the state-of-the-art object detection method, which takes in the camera input and produces accurate detections. It is followed by the threat models we design to achieve the physical attack by applying carefully crafted perturbations to the actual physical objects. We further reveal our attack algorithms and attack strategies, respectively. Throughout the presentation, we will show examples of our initial digital attack and how we adapted it to a physical attack given environmental constraints - for example, that an object is seen at various distances and from various angles.
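To convey the core idea behind a digital adversarial perturbation, here is a minimal FGSM-style sketch. It is not the authors' algorithm: a toy linear "detector" stands in for YOLOv3, and the single gradient-sign step shown is the textbook fast-gradient-sign method, where each input feature is nudged a tiny amount in the direction that most reduces the detection score.

```python
w = [0.9, -0.4, 0.7]   # detector weights (hypothetical)
x = [1.0, 0.5, 0.2]    # benign input: scored as "object present"

def score(x):
    """Linear detection score; positive means 'object detected'."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return (v > 0) - (v < 0)

eps = 0.5
# The gradient of the score w.r.t. x is just w; step against it.
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

print(score(x) > 0)      # True: object detected
print(score(x_adv) > 0)  # False: detection suppressed by the perturbation
```

The physical attack in the talk is far harder than this digital step precisely because the perturbation must survive printing, distance, angle, and lighting changes.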
Finally, we wrap up the presentation with a demo to make the audience aware that, with a careful setup, computer-vision-based object detection can be deceived. A robust, adversarial-example-resistant model is required in safety-critical systems like autonomous driving.
Secure Boot is widely deployed in modern embedded systems and an essential part of their security model. Even when no (easy to exploit) logical vulnerabilities remain, attackers are surprisingly often still able to compromise it using fault injection, also known as a glitch attack. Many of these vulnerabilities are difficult to spot in the source code and can only be found by manually inspecting the disassembled binary code instruction by instruction.
While the idea to use simulation to identify these vulnerabilities is not new, this talk presents a fault simulator created using existing open-source components and without requiring a detailed model of the underlying hardware. The challenges to simulate real-world targets will be discussed as well as how to overcome most of them.
An attacker in possession of the target's binary can use such a simulator to find the ideal glitch location, while developers of these systems can use it to verify the effectiveness of their countermeasures against specific types of fault attacks.
We used our simulator to identify locations in the binaries of several real-world targets where a successful glitch would compromise security - for example, by bypassing the authentication of the next boot stage or achieving arbitrary code execution in the context of the boot process. This in turn would reveal the cryptographic keys used to protect the system or give access to additional information required to develop a more scalable attack that does not require fault injection.
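The search strategy such a simulator performs can be illustrated with a toy (our own sketch, not the presented tool): run a miniature boot "program" under a single-instruction-skip fault model, skipping one step at a time, and record which skip turns a rejected image into an accepted one.

```python
def run_boot(skip=None):
    """Simulate a secure-boot check on a badly signed image.
    Returns True if the next boot stage would run (glitch success)."""
    state = {"sig_ok": False, "boot": False}

    def verify(s):                 # signature check: image is invalid
        s["sig_ok"] = False
    def gate(s):                   # abort path taken when check fails
        if not s["sig_ok"]:
            s["halt"] = True
    def jump(s):                   # hand off to the next boot stage
        if not s.get("halt"):
            s["boot"] = True

    program = [verify, gate, jump]
    for i, instr in enumerate(program):
        if i == skip:
            continue               # fault model: skip one instruction
        instr(state)
    return state["boot"]

# Exhaustively "glitch" every location and list the exploitable ones.
exploitable = [i for i in range(3) if run_boot(skip=i)]
print(run_boot())     # False: a normal run rejects the bad image
print(exploitable)    # [1]: skipping the validity gate boots it anyway
```

Real simulators work on disassembled binaries and richer fault models (instruction corruption, bit flips in registers), but the exhaustive search over fault locations is the same.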
Microsoft Edge, the default browser for Windows 10, is heavily sandboxed. In fact, it is probably the only browser with its main process running inside a sandbox. Microsoft even goes to great lengths to design and implement platform security features exclusively for Microsoft Edge.
In this talk, we will take a deep dive into the Microsoft Edge security architecture. This includes sandbox initialization, browser broker implementation, inter-process communication, and renderer security isolation. We will present two logical sandbox-escape bug chains for Microsoft Edge, consisting of three bugs in total; one was used in Pwn2Own, and the other two are completely new. They are entirely different from memory corruption bugs: all we do is abuse normal features implemented in the browser and the operating system.
Fernando Maymi|Lead Scientist, Soar Technology Alex Nickels|Associate Technical Director and Senior Software Engineer, Soar Technology
Location: Room C
Format: 50-Minute Briefings
One of the greatest challenges in developing capable cyberspace operators is building realistic environments for training events. While many organizations have developed technologies and techniques for replicating enterprise-scale networks, the problem is how to realistically populate those networks with synthetic persons. Whether we are training network defenders or penetration testers, we want to pit them against adaptive and intelligent adversaries who can continuously put their skills to the test. In either case, we also need rich ecosystems in which realistic user agents exchange messages, interact with the web and occasionally assist (or hinder) the efforts of the attackers and defenders.
This talk describes our research and development of a family of Cyberspace Cognitive (CyCog) agents that can behave like attackers, defenders or users in a network. The attacker agent (CyCog-A) was developed to train defenders while its defensive counterpart (CyCog-D) was intended to help develop penetration testers. The user agent (CyCog-U), on the other hand, is much more versatile in that it can support either type of training. Furthermore, since these synthetic users are models of actual users on a network, they can display behaviors that can either hinder or assist attackers and/or defenders.
Our experiences and successes point to current gaps as well as future threats and opportunities. From the need for scalable cyberspace mapping techniques to our work in modeling behaviors to the lessons learned in human-machine teaming, the CyCog family of agents is opening a new dimension in cyberspace operations research and development.
Sandboxing is a proven technique for detecting malware and targeted attacks. In practice, sandboxes inspect network traffic and identify the suspicious behaviors. However, the emergence of new forms of malware and exploits targeting microservices pose challenges for traditional sandboxing solutions in cloud-native environments.
Contemporary sandboxes fail to support container-based environments. To address these challenges, we redesigned the sandboxing system around emerging container techniques. We will also demonstrate how our sandbox improves the detection of microservice-oriented attacks. Additionally, in this talk we will discuss how to extend our sandbox to benefit existing security products and achieve better accuracy.
Haya Shulman|Dr., Fraunhofer Institute for Secure Information Technology SIT Elias Heftrig|Security Researcher, Fraunhofer Institute for Secure Information Technology SIT
Location: Room B
Format: 25-Minute Briefings
The security of Internet-based applications fundamentally relies on the trustworthiness of Certificate Authorities (CAs). We practically demonstrate for the first time that even a very weak attacker, namely an off-path attacker, can effectively subvert the trustworthiness of popular commercially used CAs. Our attack targets CAs which use Domain Validation (DV) for authenticating domain ownership; collectively, these CAs control 99% of the certificates market. The attack exploits DNS cache poisoning and tricks the CA into issuing fraudulent certificates for domains the attacker does not legitimately own - namely, certificates binding the attacker's public key to a victim domain.
We discuss short- and long-term defences but argue that they fall short of securing DV. To mitigate the threats, we propose Domain Validation++ (DV++). DV++ replaces the need for cryptography with assumptions from distributed systems. While retaining the benefits of DV (automation, efficiency, and low cost), DV++ is secure even against Man-in-the-Middle (MitM) attackers. Deployment of DV++ is simple and requires changing neither the existing infrastructure nor the CAs' systems. We demonstrate the security of DV++ under realistic assumptions and provide open-source access to our DV++ implementation.
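As we read the abstract, the distributed-systems assumption behind DV++ amounts to validating from multiple independent vantage points and requiring a quorum to agree; the sketch below is our illustrative toy of that idea, not the authors' protocol or parameters.

```python
from collections import Counter

def dv_plus_plus(lookups, quorum=2 / 3):
    """lookups: DNS answers returned by independent vantage points.
    Issue (return the answer) only if a quorum agrees; else refuse."""
    answer, votes = Counter(lookups).most_common(1)[0]
    return answer if votes / len(lookups) >= quorum else None

honest = "192.0.2.10"      # real address of the victim domain
poisoned = "203.0.113.66"  # attacker-controlled address

# An off-path attacker who poisons a single resolver's cache no longer
# wins: two of three vantage points still return the honest answer.
print(dv_plus_plus([honest, poisoned, honest]))          # '192.0.2.10'
print(dv_plus_plus([honest, poisoned, "198.51.100.5"]))  # no quorum -> None
```

Poisoning enough vantage points to reach quorum would require on-path capabilities, which is exactly the stronger attacker DV++ is designed to resist.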
The Mummy 2018: Microsoft Accidentally Summons Back Ugly Attacks from the Past
Ran Menscher|Security Researcher, Ran Menscher Security Research
Location: Room A
Format: 25-Minute Briefings
In the early 2000s, attackers could very easily leverage the naïve mechanisms of IP fragmentation and reassembly to intercept packets, modify them, or cause denial of service. The same fundamental flaw gave rise to other techniques such as the stealth scan.
These attacks relied on the trivial predictability of the IP identification field. The major operating systems fixed the problem by adding an element of randomization: a simple and efficient solution.
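Why a globally sequential IP ID is dangerous can be shown with a toy model of the classic idle (stealth) scan, one of the techniques mentioned above. This is a deliberate simplification: real scans use spoofed SYN probes and RST responses, and the class and method names here are our own.

```python
class Zombie:
    """Host whose IP stack uses a single global, sequential IP ID."""
    def __init__(self):
        self.ip_id = 1000

    def send_packet(self):
        self.ip_id += 1
        return self.ip_id

def idle_scan(zombie, port_open):
    """Infer a third party's port state purely from the zombie's IP IDs."""
    before = zombie.send_packet()   # probe the zombie, note its IP ID
    if port_open:                   # spoofed SYN to the target: an open port
        zombie.send_packet()        # makes the target elicit one extra packet
                                    # (a RST) from the zombie
    after = zombie.send_packet()    # probe the zombie again
    return (after - before) == 2    # gap of 2 => port open, attacker unseen

z = Zombie()
print(idle_scan(z, port_open=True))   # True
print(idle_scan(z, port_open=False))  # False
```

Per-destination randomized IP IDs break this inference, which is why the regression described in this talk matters.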
For years this seemed to have done the trick, until a seemingly innocent but unnecessary reorganization of the relevant code in the Windows kernel left things even worse than they had begun: not only reopening these attacks, but also leaking kernel memory in a rather amusing way.
Unlike any of the vulnerabilities I've had the privilege to discover and research, this one (CVE-2018-8493) is a (simple) crypto bug that shouldn't have been so damaging. The system's design, however, caused it to break the entire mechanism.
As warm-blooded mammals, humans routinely leave thermal residue on various objects with which they come in contact. This includes common input devices, such as keyboards, that are used for entering (among other things) secret information: passwords and PINs. Although thermal residue dissipates over time, there is always a certain time window during which thermal energy readings can be harvested from input devices to recover recently entered, and potentially sensitive, information.
To date, there has been no systematic investigation of the thermal profiles of keyboards, and thus no efforts have been made to secure them. This is the main motivation for designing Thermanator, a framework for password harvesting from keyboard thermal emanations. In this talk, we introduce Thermanator and show that several popular keyboards by different manufacturers are vulnerable to thermal side-channel attacks. Thermanator allows us to correctly determine entire passwords tens of seconds after entry, as well as greatly reduce the password search space. The latter is effective even as late as 60 seconds after password entry.
Furthermore, we show that thermal side-channel attacks work from as far as several feet away. Our results are based on extensive experiments conducted with a multitude of subjects using several common keyboards and many representative passwords. We demonstrate thermal side-channel attacks using a thermal (FLIR) camera. We also describe a very realistic "Coffee-Break Attack" that allows the adversary to surreptitiously capture a victim's password via the thermal side-channel in a realistic multi-user office setting or in a public space.
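A back-of-the-envelope sketch shows how the claimed search-space reduction works when the thermal image reveals which keys were pressed but not their order (the example password and the all-distinct-characters assumption are ours):

```python
from itertools import permutations

# Suppose the thermal residue reveals the 8 distinct keys of a password.
pressed_keys = "dolphins"

# Only the ordering remains unknown: 8! candidates instead of 26^8.
candidates = ["".join(p) for p in permutations(pressed_keys)]
print(len(candidates))             # 40320 orderings to try
print(26 ** 8 // len(candidates))  # vs. millions of times more blind guesses
```

Repeated characters or uncertainty about key count enlarge the candidate set, but the reduction relative to blind brute force remains enormous.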