- Cyber -

Public services on the Caribbean island of Sint Maarten shut down by a cyber attack
11.4.2018 securityaffairs Cyber

A cyber attack shut down the entire government infrastructure of the Caribbean island of Sint Maarten, interrupting public services.
A massive cyber attack took the entire government infrastructure of the Caribbean island of Sint Maarten offline. The island is a constituent country of the Kingdom of the Netherlands.

Government buildings remained closed after the attack.

“The Ministry of General Affairs hereby informs the public that the recovery process of the Government of Sint Maarten ICT Network is progressing steadily and will continue throughout the upcoming weekend following the cyber-attack on Monday April 2nd” reported the media.

According to the local newspaper The Daily Herald, the cyber attack hit the country on April 2nd. The good news is that government services resumed yesterday, with the exception of the Civil Registry Department.

According to the authorities, this is the third attack in over a year, but at the time of writing, there are no public details on the assault.

“The system was hacked on Easter Monday, the third such attack in over a year. No further details about the hacking have been made public by government,” reported the media.
The Ministry “thanked the people of St. Maarten for their patience during this period,” the announcement continues.

The Government also published the announcement on Facebook.

The incident demonstrates the importance of a cyber strategy for any government. In this case, hackers shut down government networks, but in other circumstances attackers could break into government systems to launch cyber attacks against a third-party nation.

Mutual support among states is essential to prevent this kind of incident.


Breaches Increasingly Discovered Internally: Mandiant
5.4.2018 securityweek Cyber

Organizations are getting increasingly better at discovering data breaches on their own, with more than 60% of intrusions in 2017 detected internally, according to FireEye-owned Mandiant.

The company’s M-Trends report for 2018 shows that the global median time for internal detection dropped to 57.5 days in 2017, compared to 80 days in the previous year. Of the total number of breaches investigated by Mandiant last year, 62% were discovered internally, up from 53% in 2016.

On the other hand, it still took roughly the same amount of time for organizations to learn that their systems had been compromised. The global median dwell time in 2017 – the median time from the first evidence of a hack to detection – was 101 days, compared to 99 days in 2016.

Companies in the Americas had the shortest median dwell time (75.5 days), while organizations in the APAC region had the longest dwell time (nearly 500 days).

Dwell time data from Mandiant

Data collected by Mandiant in 2013 showed that more than one-third of organizations had been attacked again after the initial incident had been remediated. More recent data, specifically from the past 19 months, showed that 56% of Mandiant customers were targeted again by either the same group or one with similar motivation.

In cases where investigators discovered at least one type of significant activity (e.g. compromised accounts, data theft, lateral movement), the targeted organization was successfully attacked again within one year. Organizations that experienced more than one type of significant activity were attacked by more than one threat actor.

Again, the highest percentage of companies attacked multiple times and by multiple threat groups was in the APAC region – more than double compared to the Americas and the EMEA region.

When it comes to the most targeted industries, companies in the financial and high-tech sectors recorded the highest number of significant attacks, while the high-tech, telecommunications and education sectors were hit by the highest number of different hacker groups.

Last year, FireEye assigned names to four state-sponsored threat groups, including the Vietnam-linked APT32 (OceanLotus), and the Iran-linked APT33, APT34 (OilRig), and APT35 (NewsBeef, Newscaster and Charming Kitten).

“Iran-sponsored threat actors have compromised a variety of organizations, but recently they have expanded their efforts in a way that previously seemed beyond their grasp,” Mandiant said in its report. “Today they leverage strategic web compromises (SWC) to ensnare more victims, and concurrently maintain persistence across multiple organizations for months and sometimes years. Rather than relying on publicly available malware and utilities, they develop and deploy custom malware. When they are not carrying out destructive attacks against their targets, they are conducting espionage and stealing data like professionals.”


Several U.S. Gas Pipeline Firms Affected by Cyberattack
4.4.2018 securityweek Cyber

Several natural gas pipeline operators in the United States have been affected by a cyberattack that hit a third-party communications system, but the incident does not appear to have impacted operational technology.

Energy Transfer Partners was the first pipeline company to report problems with its Electronic Data Interchange (EDI) system due to a cyberattack that targeted Energy Services Group, specifically the company’s Latitude Technologies unit.

EDI is a platform used by businesses to exchange documents such as purchase orders and invoices. In the case of energy firms, the system is used to encrypt, decrypt, translate, and track key energy transactions. Latitude says it provides EDI and other technology services to more than 100 natural gas pipelines, storage facilities, utilities, law firms, and energy marketers across the U.S.

Bloomberg reported that the incident also affected Boardwalk Pipeline Partners, Chesapeake Utilities Corp.’s Eastern Shore Natural Gas, and ONEOK, Inc. However, ONEOK clarified that its decision to disable the third-party EDI service was a “purely precautionary step.”

“There were no operational interruptions on ONEOK's natural gas pipelines,” the company stated. “Affected customers have been advised to use one of the alternative methods of communications available to them for gas scheduling purposes.”

Few details are known about the cyberattack, but Latitude did tell Bloomberg that it did not believe any customer data had been compromised and no other systems appeared to have been impacted. A status update provided by Latitude on its website on Tuesday informed customers that the initial restoration of EDI services had been completed and the company had been working on increasing performance.

SecurityWeek has reached out to Latitude Technologies and Energy Services Group for more information about the attack and will update this article if they respond.

“This looks like a financially-motivated cyberattack, likely by cybercriminals, but we've seen in the past that cybercriminals often collaborate with nation-states and share hacking tools with each other,” said Phil Neray, VP of Industrial Cybersecurity at CyberX, a critical infrastructure and industrial cybersecurity firm based in Boston. “It's easy to imagine a ransomware attack that uses nation-state tools to hijack ICS/SCADA systems and hold the pipeline hostage for millions of dollars per day.”

Bryan Singer, director of Security Services at IOActive, has described some worst-case scenarios that could result from attacks targeting pipeline operators.

“A lot of pipelines have 24-48 hour capacity within the pipelines. If hackers find a way to poison the product, you could have downstream impact for months or more. You could have gas compressors or lift stations where there’s a fire or explosion, and where you have to scramble to cap the ends before the fire spreads out. If it’s an oil rig, it could certainly be tougher to contain,” Singer told SecurityWeek.

“Hackers can cause some intermediate problems at first, but if they have access long enough, there’s a possibility that airports could go down (they often rely on fuel delivered directly) and gas stations could run out of gas. If they’re able to maintain an attack for a couple days, there can be very large downstream impact. We’re mostly out of winter, but if we don’t have power, we’re in need of that heat,” he added.

Back in 2012, the Department of Homeland Security (DHS) warned that malicious actors had been targeting the natural gas industry. While critical infrastructure operators in general have since become more aware of the risks posed by cyberattacks, many organizations are still unprepared.

In the case of the oil and gas industry in the United States, a study commissioned last year by German engineering giant Siemens showed that this sector is largely unprepared to address cybersecurity risks in operational technology (OT) environments.


New Bill in Georgia Could Criminalize Security Research
3.4.2018 securityweek Cyber

A new bill passed by the Georgia State Senate last week deems all forms of unauthorized computer access as illegal, thus potentially criminalizing the finding and reporting of security vulnerabilities.

The new bill, which met fierce opposition from the cybersecurity community ever since it first became public, amends the Georgia code that originally considered only unauthorized computer access with malicious intent to be a crime.

“Any person who intentionally accesses a computer or computer network with knowledge that such access is without authority shall be guilty of the crime of unauthorized computer access,” the bill reads (Senate Bill 315).

“Any person convicted of computer password disclosure or unauthorized computer access shall be fined not more than $5,000.00 or incarcerated for a period not to exceed one year, or both punished for a misdemeanor of a high and aggravated nature,” the bill continues.

The original code only made a crime out of the access of a computer or computer network without authority and with the intention of tampering with applications or data; interfering with the use of a computer program or data; or causing the malfunction of the computer, network, or application.

The main issue with the bill is that it does little to protect security researchers who find and responsibly disclose vulnerabilities.

In fact, it is possible that the new bill was created because a security researcher discovered a vulnerability in the Kennesaw State University election systems last year. The flaw was reported ethically and the researcher came clean after being investigated by the FBI.

However, the breach made it to the news and, because the state felt very embarrassed by the incident, the attorney general’s office apparently asked for a law that would criminalize so-called “poking around.”

“Basically, if you’re looking for vulnerabilities in a non-destructive way, even if you’re ethically reporting them—especially if you’re ethically reporting them—suddenly you’re a criminal if this bill passes into law,” Scott M. Jones from Electronic Frontiers Georgia pointed out.

The Electronic Frontier Foundation has already called upon Georgia Gov. Nathan Deal to veto the bill as soon as possible. The foundation also points out that S.B. 315 doesn’t ensure that security researchers aren’t targeted by overzealous prosecutors for finding vulnerabilities in networks or computer programs.

EFF also points out that, while Georgia has been a hub for cybersecurity research until now, all of that might change with the adoption of the new bill. Cybersecurity firms and other tech companies might no longer find Georgia welcoming and could consider relocating to states that are less hostile to security research.

“S.B. 315 is a dangerous bill with ramifications far beyond what the legislature imagined, including discouraging researchers from coming forward with vulnerabilities they discover in critical systems. It’s time for Governor Deal to step in and listen to the cybersecurity experts who keep our data safe, rather than lawmakers looking to score political points,” EFF notes.

The infosec community has already reacted to the passing of the bill, calling for a veto and pointing out not only that search engines such as Shodan could become illegal in Georgia, but also that security talent is highly likely to migrate to other states.

Professor Andy Green (@secprofgreen), 6:53 PM - Mar 31, 2018: “recruitment of georgia security talent to other states is already starting to happen. @GovernorDeal please veto #sb315 #gapol” https://twitter.com/alexhutton/status/980116433265987584

Stephen Gay (@redpalmetto), 12:30 PM - Mar 30, 2018: “@secprofgreen - Will the automated scanning and inventory of vulnerable devices within the State of Georgia be illegal after #SB315 is signed into law? @shodanhq”

Others, however, suggest that some researchers could turn to “irresponsible disclosure” instead.

Robert Graham (@ErrataRob), Mar 30, 2018: “So Georgia just passed a bill making unauthorized, but well meaning (no damage or theft) access to a computer illegal, meaning anybody noticing a vuln on a website can be sent to jail for up to a year.”

Dodge This Security (@shotgunner101), 8:45 AM - Mar 30, 2018: “All this will do is force those living in georgia who would have done responsible disclosure to do irresponsible disclosure under an alternative identity. It will still happen just not in the abobe board well structured way we see now.”


The Malicious Use of Artificial Intelligence in Cybersecurity
28.3.2018 securityweek Cyber

Criminals and Nation-state Actors Will Use Machine Learning Capabilities to Increase the Speed and Accuracy of Attacks

Scientists from leading universities, including Stanford and Yale in the U.S. and Oxford and Cambridge in the UK, together with civil society organizations and a representation from the cybersecurity industry, last month published an important paper titled, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

While the paper (PDF) looks at a range of potential malicious misuses of artificial intelligence (which includes and focuses on machine learning), our purpose here is to largely exclude the military and concentrate on the cybersecurity aspects. It is, however, impossible to completely exclude the potential political misuse given the interaction between political surveillance and regulatory privacy issues.

Artificial intelligence (AI) is the use of computers to perform the analytical functions normally only available to humans – but at machine speed. ‘Machine speed’ is described by Corvil’s David Murray as, “millions of instructions and calculations across multiple software programs, in 20 microseconds or even faster.” AI simply makes the unrealistic, real.

The problem discussed in the paper is that this function has no ethical bias. It can be used as easily for malicious purposes as it can for beneficial purposes. AI is largely dual-purpose; and the basic threat is that zero-day malware will appear more frequently and be targeted more precisely, while existing defenses are neutralized – all because of AI systems in the hands of malicious actors.

Current Machine Learning and Endpoint Protection
Today, the most common use of the machine learning (ML) type of AI is found in next-gen endpoint protection systems; that is, the latest anti-malware software. It is called ‘machine learning’ because the AI algorithms within the system ‘learn’ from many millions (and increasing) samples and behavioral patterns of real malware.

Detection of a new pattern can be compared with known bad patterns to generate a probability level for potential maliciousness at a speed and accuracy not possible for human analysts within any meaningful timeframe.
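
To make the scoring idea concrete, here is a minimal sketch in TypeScript of a logistic-style scorer that turns a handful of sample features into a probability of maliciousness. The feature names, weights and example values are entirely hypothetical and chosen for illustration; a real next-gen engine learns far richer models from millions of labelled samples.

```typescript
// Minimal illustration of ML-style malware scoring.
// All feature names and weights below are hypothetical; a trained model would
// derive its weights from millions of labelled benign and malicious samples.

interface SampleFeatures {
  entropy: number;        // byte entropy of the file (0..8), high for packed/encrypted code
  importsCrypto: number;  // 1 if the binary imports crypto APIs, else 0
  writesAutorun: number;  // 1 if it writes autorun/persistence registry keys, else 0
  packedSections: number; // number of packed or obfuscated sections
}

// Weights a trained model might have produced (assumed values for illustration).
const weights = { entropy: 0.6, importsCrypto: 0.9, writesAutorun: 1.4, packedSections: 0.8 };
const bias = -4.0;

// The logistic function turns the weighted feature sum into a probability of maliciousness.
function maliciousProbability(s: SampleFeatures): number {
  const z =
    bias +
    weights.entropy * s.entropy +
    weights.importsCrypto * s.importsCrypto +
    weights.writesAutorun * s.writesAutorun +
    weights.packedSections * s.packedSections;
  return 1 / (1 + Math.exp(-z));
}

// A high-entropy, persistence-writing sample scores close to 1; a plain document scores low.
console.log(maliciousProbability({ entropy: 7.6, importsCrypto: 1, writesAutorun: 1, packedSections: 3 }).toFixed(2)); // ~0.99
console.log(maliciousProbability({ entropy: 4.1, importsCrypto: 0, writesAutorun: 0, packedSections: 0 }).toFixed(2)); // ~0.18
```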

It works – but with two provisos: it depends upon the quality of the ‘learning’ algorithm, and the integrity of the data set from which it learns.

Potential abuse can come in both areas: manipulation or even alteration of the algorithm; and poisoning the data set from which the machine learns.

The report warns, “It has been shown time and again that ML algorithms also have vulnerabilities. These include ML-specific vulnerabilities, such as inducing misclassification via adversarial examples or via poisoning the training data… ML algorithms also remain open to traditional vulnerabilities, such as memory overflow. There is currently a great deal of interest among cyber-security researchers in understanding the security of ML systems, though at present there seem to be more questions than answers.”

The danger is that while these threats to ML already exist, criminals and nation-state actors will begin to use their own ML capabilities to increase the speed and accuracy of attacks against ML defenses.

On data set poisoning, Andy Patel, security advisor at F-Secure, warns, “Diagnosing that a model has been incorrectly trained and is exhibiting bias or performing incorrect classification can be difficult.” The problem is that even the scientists who develop the AI algorithms don’t necessarily understand how they work in the field.

He also notes that malicious actors aren’t waiting for their own ML to do this. “Automated content generation can be used to poison data sets. This is already happening, but the techniques to generate the content don't necessarily use machine learning. For instance, in 2017, millions of auto-generated comments regarding net neutrality were submitted to the FCC.”
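
As a toy illustration of why training-set poisoning matters, the sketch below (in the same TypeScript register, with invented numbers) trains a trivial nearest-centroid classifier on a single made-up “suspiciousness” score. Injecting mislabelled malicious samples into the benign class shifts that class’s centroid enough that a genuinely suspicious sample is later waved through; real poisoning attacks target far more complex models, but the principle is the same.

```typescript
// Toy illustration of training-data poisoning using a nearest-centroid classifier.
// The one-dimensional "suspiciousness" scores are invented purely for illustration.

function centroid(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function classify(x: number, benignCentroid: number, maliciousCentroid: number): string {
  return Math.abs(x - benignCentroid) < Math.abs(x - maliciousCentroid) ? "benign" : "malicious";
}

const benignTraining = [0.1, 0.2, 0.15, 0.25];
const maliciousTraining = [0.8, 0.9, 0.85];

// Clean training data: a sample scoring 0.7 sits closer to the malicious centroid.
console.log(classify(0.7, centroid(benignTraining), centroid(maliciousTraining))); // "malicious"

// Poisoned training data: the attacker submits high-scoring samples labelled as benign,
// dragging the benign centroid upwards so the same sample is now misclassified.
const poisonedBenign = benignTraining.concat([0.85, 0.9, 0.95, 0.9, 0.9]);
console.log(classify(0.7, centroid(poisonedBenign), centroid(maliciousTraining))); // "benign"
```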

The basic conflict between attackers and defenders will not change with machine learning – each side seeks to stay ahead of the other; and each side briefly succeeds. “We need to recognize that new defenses that utilize technology such as AI may be most effective when initially released before bad actors are building countermeasures and evasion tactics intended to circumvent them,” comments Steve Grobman, CTO at McAfee.

Put simply, the cybersecurity industry is aware of the potential malicious use of AI, and is already considering how best to react to it. “Security companies are in a three-way race between themselves and these actors, to innovate and stay ahead, and up until now have been fairly successful,” observes Hal Lonas, CTO at Webroot. “Just as biological infections evolve to more resistant strains when antibiotics are used against them, so we will see malware attacks change as AI defense tactics are used over time.”

Hyrum Anderson, one of the authors of the report, and technical director of data science at Endgame, accepts the industry understands ML can be abused or evaded, but not necessarily the methods that could be employed. “Probably fewer data scientists in infosec are thinking how products might be misused,” he told SecurityWeek; “for example, exploiting a hallucinating model to overwhelm a security analyst with false positives, or a similar attack to make AI-based prevention DoS the system.”

Indeed, even this report failed to mention one type of attack (although there will undoubtedly be others). “The report doesn’t address the dangerous implications of machine learning based de-anonymization attacks,” explains Joshua Saxe, chief data scientist at Sophos. Data anonymization is a key requirement of many regulations. AI-based de-anonymization is likely to be trivial and rapid.

Anderson describes three guidelines that Endgame uses to protect the integrity and secure use of its own ML algorithms. The first is to understand and appropriately limit the AI interaction with the system or endpoint. The second is to understand and limit the data ingestion; for example, anomaly detection that ingests all events everywhere versus anomaly detection that ingests only a subset of ‘security-interesting’ events. In order to protect the integrity of the data set, he suggests, “Trust but verify data providers, such as the malware feeds used for training next generation anti-virus.”

The third: “After a model is built, and before and after deployment, proactively probe it for blind spots. There are fancy ways to do this (including my own research), but at a minimum, doing this manually is still a really good idea.”

Identity
A second area of potential malicious use of AI revolves around ‘identity’. AI’s ability to both recognize and generate manufactured images is advancing rapidly. This can have both positive and negative effects. Facial recognition for the detection of criminal acts and terrorists would generally be considered beneficial – but it can go too far.

“Note, for example,” comments Sophos’ Saxe, “the recent episode in which Stanford researchers released a controversial algorithm that could be used to tell if someone is gay or straight, with high accuracy, based on their social media profile photos.”

“The accuracy of the algorithm,” states the research paper, “increased to 91% [for men] and 83% [for women], respectively, given five facial images per person.” Human judges achieved much lower accuracy: 61% for men and 54% for women. The result is typical: AI can improve human performance at a scale that cannot be contemplated manually.

“Critics pointed out that this research could empower authoritarian regimes to oppress homosexuals,” adds Saxe, “but these critiques were not heard prior to the release of the research.”

This example of the potential misuse of AI in certain circumstances touches on one of the primary themes of the paper: the dual-use nature of, and the role of ‘ethics’ in, the development of artificial intelligence. We look at ethics in more detail below.

A more positive use of AI-based recognition can be found in recent advances in speech recognition and language comprehension. These advances could be used for better biometric authentication – were it not for the dual-use nature of AI. Along with facial and speech recognition there has been a rapid advance in the generation of synthetic images, text, and audio; which, says the report, “could be used to impersonate others online, or to sway public opinion by distributing AI-generated content through social media channels.”

Synthetic image generation in 2014 and 2017

For authentication, Webroot’s Lonas believes we will need to adapt our current authentication approach. “As the lines between machines and humans become less discernible, we will see a shift in what we currently see in authentication systems, for instance logging in to a computer or system. Today, authentication is used to differentiate between various humans and prevent impersonation of one person by another. In the future, we will also need to differentiate between humans and machines, as the latter, with help from AI, are able to mimic humans with ever greater fidelity.”

The future potential for AI-generated fake news is a completely different problem, but one that has the potential to make Russian interference in the 2016 presidential election somewhat pedestrian.

Just last month, the U.S. indicted thirteen Russians and three companies “for committing federal crimes while seeking to interfere in the United States political system.” A campaign allegedly involving hundreds of people working in shifts and with a budget of millions of dollars spread misinformation and propaganda through social networks. Such campaigns could increase in scope with fewer people and far less cost with the use of AI.

In short, AI could be used to make fake news more common and more realistic; or make targeted spear-phishing more compelling at the scale of current mass phishing through the misuse or abuse of identity. This will affect both business cybersecurity (business email compromise, BEC, could become even more effective than it already is), and national security.

The Ethical Problem
The increasing use of AI in cyber will inevitably draw governments into the equation. They will be concerned about more efficient cyber attacks against critical infrastructure, but will also become embroiled in civil society concerns over their own use of AI in mass surveillance. Since machine learning algorithms become more efficient with the size of the data set from which they learn, the ‘own it all’ mentality exposed by Edward Snowden will become increasingly compelling to law enforcement and intelligence agencies.

The result is that governments will be drawn into the ethical debate about AI and the algorithms it uses. In fact, this process has already started, with the UK’s financial regulator warning that it will be monitoring the use of AI in financial trading.

Governments seek to assure people that their own use of citizens’ big data will be ethical (relying on judicial oversight, court orders, minimal intrusion, and so on). They will also seek to reassure people that business makes ethical use of artificial intelligence – GDPR has already made a start by placing controls over automated user profiling.

While governments often like the idea of ‘self-regulation’ (it absolves them from appearing to be over-proscriptive), ethics in research is never adequately covered by scientists. The report states the problem: “Appropriate responses to these issues may be hampered by two self-reinforcing factors: first, a lack of deep technical understanding on the part of policymakers, potentially leading to poorly-designed or ill-informed regulatory, legislative, or other policy responses; second, reluctance on the part of technical researchers to engage with these topics, out of concern that association with malicious use would tarnish the reputation of the field and perhaps lead to reduced funding or premature regulation.”

There is a widespread belief among technologists that politicians simply don’t understand technology. Chris Roberts, chief security architect at Acalvio, is an example. “God help us if policy makers get involved,” he told SecurityWeek. “Having just read the last thing they dabbled in, I’m dreading what they’d come up with, and would assume it’ll be too late, too wordy, too much crap and red tape. They’re basically five years behind the curve.”

The private sector is little better. Businesses are duty bound, in a capitalist society, to maximize profits for their shareholders. New ideas are frequently rushed to market with little thought for security; and new algorithms will probably be treated likewise.

Oliver Tavakoli, CTO at Vectra, believes that the security industry is obligated to help. “We must adopt defensive methodologies which are far more flexible and resilient rather than fixed and (supposedly) impermeable,” he told SecurityWeek. “This is particularly difficult for legacy security vendors who are more apt to layer on a bit of AI to their existing workflow rather than rethinking everything they do in light of the possibilities that AI brings to the table.”

“The security industry has the opportunity to show leadership with AI and focus on what will really make a difference for customers and organizations currently being pummeled by cyberattacks,” agrees Vikram Kapoor, co-founder and CTO at Lacework. His view is that there are many areas where the advantages of AI will outweigh the potential threats.

“For example,” he continued, “auditing the configuration of your system daily for security best practices should be automated – AI can help. Continuously checking for any anomalies in your cloud should be automated – AI can help there too.”

It would probably be wrong, however, to demand that researchers limit their research: it is the research that is important rather than ethical consideration of potential subsequent use or misuse of the research. The example of Stanford’s sexual orientation algorithm is a case in point.

Google mathematician Thomas Dullien (aka Halvar Flake on Twitter) puts a common researcher view. Commenting on the report, he tweeted, “Dual-use-ness of research cannot be established a-priori; as a researcher, one usually has only the choice to work on ‘useful’ and ‘useless’ things.” In other words, you cannot – or at least should not – restrict research through imposed policy because at this stage, its value (or lack of it) is unknown.

McAfee’s Grobman believes that concentrating on the ethics of AI research is the wrong focus for defending against AI. “We need to place greater emphasis on understanding the ability for bad actors to use AI,” he told SecurityWeek; “as opposed to attempting to limit progress in the field in order to prevent it.”

Summary
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation makes four high-level recommendations “to better forecast, prevent, and mitigate” the evolving threats from unconstrained artificial intelligence. They are: greater collaboration between policymakers and researchers (that is, government and industry); the adoption of ethical best practices by AI researchers; a methodology for handling dual-use concerns; and an expansion of the stakeholders and domain experts involved in discussing the issues.

Although the detail of the report makes many more finely-grained comments, these high-level recommendations indicate there is no immediately obvious solution to the threat posed by AI in the hands of cybercriminals and nation-state actors.

Indeed, it could be argued that there is no solution. Just as there is no solution to the criminal use of encryption – merely mitigation – perhaps there is no solution to the criminal use of AI – just mitigation. If this is true, defense against the criminal use of AI will be down to the very security vendors that have proliferated the use of AI in their own products.

It is possible, however, that the whole threat of unbridled artificial intelligence in the cyber world is being over-hyped.

F-Secure’s Patel comments, “Social engineering and disinformation campaigns will become easier with the ability to generate ‘fake’ content (text, voice, and video). There are plenty of people on the Internet who can very quickly figure out whether an image has been photoshopped, and I’d expect that, for now, it might be fairly easy to determine whether something was automatically generated or altered by a machine learning algorithm.

“In the future,” he added, “if it becomes impossible to determine if a piece of content was generated by ML, researchers will need to look at metadata surrounding the content to determine its validity (for instance, timestamps, IP addresses, etc.).”

In short, Patel’s suggestion is that AI will simply scale, in quality and quantity, the same threats that are faced today. But AI can also scale and improve the current defenses against those threats.

“The fear is that super powerful machine-learning-based fuzzers will allow adversaries to easily and quickly find countless zero-day vulnerabilities. Remember, though, that these fuzzers will also be in the hands of the white hats… In the end, things will probably look the same as they do now.”


VPN leaks users’ IPs via WebRTC. I’ve tested seventy VPN providers and 16 of them leak users’ IPs via WebRTC (23%)
28.3.2018 securityaffairs Cyber

Cyber security researcher Paolo Stagno (aka VoidSec) has tested seventy VPN providers and found that 16 of them leak users’ IPs via WebRTC (23%).
You can check if your VPN leaks by visiting: http://ip.voidsec.com
Here you can find the complete list of the VPN providers that I’ve tested: https://docs.google.com/spreadsheets/d/1Nm7mxfFvmdn-3Az-BtE5O0BIdbJiIAWUnkoAF_v_0ug/edit#gid=0
Add a comment or send me a tweet if you have updated results for any of the VPNs for which I am missing details (especially the “$$$” ones, since I cannot subscribe to 200 different paid VPN services :P).
Some time ago, during a small event in my city, I presented a short piece of research on “decloaking” the true IP of a website visitor by (ab)using WebRTC technology.

What is WebRTC?
WebRTC is a free, open project that provides browsers and mobile applications with Real-Time Communications (RTC) capabilities via simple APIs.

It includes the fundamental building blocks for high-quality communications on the web, such as the network, audio and video components used in voice and video chat applications. When implemented in a browser, these components can be accessed through a JavaScript API, enabling developers to easily implement their own RTC web apps.

STUN/ICE
This is the component that allows calls to use the STUN and ICE mechanisms to establish connections across various types of networks. The STUN server sends back a response that contains the client’s IP address and port.

These STUN (Session Traversal Utilities for NAT) servers are used by VPNs to translate a local home IP address to a new public IP address and vice versa. To do this, the STUN server maintains a table of both your VPN-based public IP and your local (“real”) IP during connectivity (home routers perform a similar function when translating private IP addresses to public ones and back).

WebRTC allows requests to be made to STUN servers, which return the “hidden” home IP address as well as the local network addresses of the system the user is on.

The results of these requests can be accessed using JavaScript, but because they are made outside the normal XMLHttpRequest procedure, they are not visible in the developer console.

The only requirement for this de-anonymizing technique to work is WebRTC and JavaScript support from the browser.
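
To show the mechanism, here is a browser-side TypeScript sketch in the spirit of Daniel Roesler’s PoC (referenced later in this post): it creates an RTCPeerConnection pointed at a public STUN server (the Google server below is just an example) and pulls the IP addresses embedded in the ICE candidates. Treat it as an illustrative sketch rather than the exact PoC code; modern browsers may mask local candidates with mDNS hostnames, but server-reflexive candidates can still expose the real public IP.

```typescript
// Sketch of WebRTC/STUN IP discovery (browser-side). The STUN URL is only an example.
function collectCandidateIPs(onIP: (ip: string) => void): void {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Creating a dummy data channel forces ICE candidate gathering to start.
  pc.createDataChannel("probe");

  pc.onicecandidate = (event) => {
    if (!event.candidate) return; // a null candidate means gathering has finished
    // Candidate strings such as "candidate:... typ srflx ..." embed IP addresses.
    const match = event.candidate.candidate.match(/(\d{1,3}(?:\.\d{1,3}){3})/);
    if (match) onIP(match[1]);
  };

  pc.createOffer().then((offer) => pc.setLocalDescription(offer));
}

collectCandidateIPs((ip) => console.log("candidate IP:", ip));
```

If one of the logged addresses is your real public IP while the VPN is connected, the VPN is leaking.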

VPN and WebRTC
This functionality can also be used to de-anonymize and trace users behind common privacy protection services such as VPNs, SOCKS proxies, HTTP proxies and, in the past, Tor.

Browsers that have WebRTC enabled by default:

Mozilla Firefox
Google Chrome
Google Chrome on Android
Internet (Samsung Browser)
Opera
Vivaldi
23% of the tested VPNs and Proxies services disclosed the real IP address of the visitors making the users traceable.

The following providers leak users’ IPs:

BolehVPN (USA Only)
ChillGlobal (Chrome and Firefox Plugin)
Glype (Depends on the configuration)
hide-me.org
Hola!VPN
Hola!VPN Chrome Extension
HTTP PROXY navigation in browser that support Web RTC
IBVPN Browser Addon
PHP Proxy
phx.piratebayproxy.co
psiphon3 (not leaking if using L2TP/IP)
PureVPN
SOCKS Proxy on browsers with Web RTC enabled
SumRando Web Proxy
TOR as PROXY on browsers with Web RTC enabled
Windscribe Add-ons

You can find the complete spreadsheet of tested VPN providers here: https://docs.google.com/spreadsheets/d/1Nm7mxfFvmdn-3Az-BtE5O0BIdbJiIAWUnkoAF_v_0ug/edit#gid=0

Add a comment or send me a tweet if you have updated results for any of the VPNs for which I am missing details (especially the “$$$” ones, since I cannot subscribe to 200 different paid VPN services :P).

Stay anonymous while surfing:
Some tips to follow in order to protect your IP while browsing the internet:

Disable WebRTC
Disable JavaScript (or at least some functions. Use NoScript)
Disable Canvas Rendering (Web API)
Always set a DNS fallback for every connection/adapter
Always kill all your browser instances before and after a VPN connection
Clear browser cache, history, and cookies
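
As a quick sanity check on the first tip, the snippet below (runnable in a browser console, written here as TypeScript) simply tests whether the RTCPeerConnection API is still exposed; if it is, the STUN/ICE leak described above remains possible.

```typescript
// Quick check that WebRTC is really disabled in this browser profile.
const webrtcAvailable = typeof RTCPeerConnection !== "undefined";
console.log(
  webrtcAvailable
    ? "WebRTC is still enabled - your real IP may leak via STUN/ICE"
    : "WebRTC appears to be disabled"
);
```
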
PoC:
You can check if your VPN leaks through this PoC: http://ip.voidsec.com

PoC Code:
I’ve updated Daniel Roesler’s code to make it work again; you can find it on GitHub.


Former Barclays CISO to Head WEF's Global Center for Cybersecurity
26.3.2018 securityweek Cyber

Troels Oerting to Head the Global Centre for Cybersecurity

The 48th annual meeting of the World Economic Forum (WEF) at Davos, Switzerland, in January announced the formation of a new Global Centre for Cybersecurity. Today it announced that Troels Oerting will be its first Head, assuming the role on April 2, 2018.

Oerting has been the group chief information security officer (CISO) at Barclays since February 2015. Before that he was head of the European Cybercrime Centre (EC3) -- part of Europol formed in 2013 to strengthen LEA response to cross-border cybercrime in the EU -- and head of the Europol Counter Terrorist and Financial Intelligence Center (since 2012). He also held several other law enforcement positions (such as Head of the Serious Organised Crime Agency with the Danish National Police), and also chaired the EU Financial Cybercrime Coalition.

Oerting brings to WEF's Global Center for Cybersecurity a unique combination of hands-on cybersecurity expertise as Barclays' CISO, together with experience of, and contacts within, Europe-wide cyber intelligence organizations, and a deep knowledge of the financial crimes that will be of particular significance to WEF's members. It is a clear statement from the WEF that the new center should be taken seriously.

“The Global Centre for Cybersecurity is the first global platform to tackle today’s cyber-risks across industries, sectors and in close collaboration with the public sector. I’m glad that we have found a proven leader in the field who is keen and capable to help us address this dark side of the Fourth Industrial Revolution,” said Klaus Schwab, founder and executive chairman of the World Economic Forum.

WEF's unique position at the heart of trans-national business, with the ear of governments, provides the opportunity to develop a truly global approach to cybersecurity. Most current cybersecurity regulations and standards are based on national priorities aimed against an adversary that knows no national boundaries. The aims of the new center are to consolidate existing WEF initiatives; to establish an independent library of best practices; to work towards an appropriate and agile regulatory framework on cybersecurity; and to provide a laboratory and early-warning think tank on cybersecurity issues.


Organizations Failing Painfully at Protecting, Securing Privileged Accounts
14.3.2018 securityweek Cyber

Legal Requirement for Cyber Insurance May be Necessary to Protect Privileged Credentials

The need to manage privileged accounts is understood by practitioners and required by regulators, but poorly implemented in practice. Eighty percent of organizations consider privileged account management (PAM) to be a high priority; 60% are required by regulators to demonstrate privileged account management; but 70% would fail an access control audit.

According to the 2017 Verizon Data Breach Investigations Report (DBIR), 81% of all hacking-related data breaches involved the use of stolen and/or weak passwords. The prize for hackers is gaining access to privileged account credentials. Once acquired, the adversary can move around the network with high capability and little visibility. Despite this, a new survey (PDF) by Thycotic demonstrates widespread poor implementation of PAM principles to protect key accounts.

Thycotic queried nearly 500 global IT security professionals. In privileged account provisioning, it found 62% of organizations fail at processes for privileged access; 70% fail to fully discover privileged accounts (while 40% do nothing at all to discover these accounts); and 55% fail to revoke access after an employee is terminated.

Even with strong controls, the report warns, "You cannot secure and manage what you do not know you have." However, most organizations have few and poor controls. Seventy-three percent of organizations do not require multi-factor authentication with privileged accounts; 63% do not track and alert on failed logon attempts for privileged accounts; and 70% fail to limit third-party access to privileged accounts.

Thycotic recommends a virtuous life cycle approach to privileged account management: define; discover; manage and protect; monitor; detect anomalous use; respond to incidents; and review and audit. Without automation, this will be impossible for anything but the smallest of companies. There are several companies -- including Thycotic -- that provide technology to assist.

SecurityWeek spoke to the report's author, Joseph Carson, chief security scientist at Thycotic, to understand why privileged account management is failing. "Organizations," he said, "are not measuring their security effectively. They continue to spend their budget blindly; and with limited budgets, they have difficulty in letting go of their legacy solutions and attitudes, and investing in the future."

Sometimes, he continued, companies just use a spreadsheet or Word document to record their privileged accounts -- and sometimes nothing at all. "An automated system will save money by eliminating much of the manual effort," he suggested, "providing more complete control, making audits simpler, and reducing the risk of a serious breach."

Nothing here is new or unknown, so it doesn't explain why PAM hasn't been more widely adopted. "The ultimate problem," he said, "is a lack of enforcement by the regulators, leaving organizations free to continue doing the minimum and get away with it." Most of the regulations that require PAM have certifications that will demonstrate compliance; but Carson is concerned that 'certification' is just another business with its own business pressures and its own need to make a profit.

"Are the certifiers more concerned with having certified customers than rigorously enforcing the official standards? Is," he wonders, "certification effectively becoming a subscription service -- becoming a business process rather than a serious evaluation?"

The solution, he suggests, will only come when the regulators actually enforce their own regulations. "Enforcement needs to be harsh -- where a regulation requires PAM, failing companies need to be barred from further operation until the requirements are satisfied. Set the bar high, rather than the current position which is way too low."

Carson uses car seat belts as an example. When they first began to appear, not all vehicle manufacturers included them -- just as not all organizations use PAM today. "What changed the situation," he said, "so that seat belts were installed in all motor vehicles as a matter of course, was the insurance industry. Insurance companies told the motor industry that they would not insure any car that did not have seat belts." Since motor insurance is required by law, it effectively meant no seat belt, no sale.

The difference, of course, is that cyber insurance is not a current legal requirement. Carson believes this will change over the next few years, courtesy of the European General Data Protection Regulation (GDPR). GDPR does not specify the need for privileged account management -- it requires the concept of 'least privilege'. This serves the same purpose, but couched in technology future-proof language.

GDPR comes with very high potential monetary sanctions (up to 4% of global turnover), and a regulatory body that has shown itself willing to use its powers against even the largest international organizations. To ensure it can collect the fines that it will inevitably levy, Europe may well turn to the cyber insurance industry.

"It will be the insurance industry that will drive organizations to actually do something about effectively managing their privileged accounts. No adequate certification will mean no insurance. This, of course, will require legal insistence on cyber insurance; and GDPR will drive that. We will probably see, in about one or two years, insurance will become mandatory for those companies regulated by GDPR," Carson said.


World Economic Forum Announces New Fintech Cybersecurity Consortium
6.3.2018 securityweek Cyber

Following the announcement of a new Global Centre for Cybersecurity, the World Economic Forum (WEF) has today launched a new fintech-focused initiative: WEF's Fintech Cybersecurity Consortium. Its aim is to create a framework for the assessment of cybersecurity in financial technology firms and data aggregators.

The founding members of the new consortium include global bank Citigroup, insurance company Zurich Insurance Group, fintech lender Kabbage and financial infrastructure provider DTCC. Their intention is to develop common principles for cybersecurity assessments, guidance for implementation, a point-based scoring framework, and guidance on improving an organization's score.

"Cyber breaches recorded by businesses have almost doubled since 2013 and the estimated cost of cybercrime is $8 trillion over the next five years," said Mario Greco, Chief Executive Officer of Zurich Insurance Group, Switzerland, a participant in the consortium. "We expect the consortium to help adopt best cybersecurity practices and reduce the complexity of diverging cyber regulation around the world."

The $8 trillion figure comes from a May 2017 report from Juniper Research. More recently, McAfee reported that the cost of global cybercrime is $600 billion.

The new consortium will commence immediately, working closely with WEF's Global Centre for Cybersecurity being established in Geneva. It expects to draw upon a similar, domestic-focused project undertaken in 2017 by the US Chamber of Commerce on Critical Infrastructure Protection, Information Sharing and Cybersecurity. A detailed description is found in a separate whitepaper, Innovation-Driven Cyber-Risk to Customer Data in Financial Services (PDF). This paper makes it clear that the work will draw upon existing frameworks, with particular reference to NIST.

WEF spokesperson Georg Schmitt told SecurityWeek that the consortium is "doing this to step in where regulators might not (yet)." The paper makes it clear that recent cyber developments are considered to be a major threat to the financial sector. Two of these are the evolution of open banking driven by European finance legislation such as PSD2; and customer privacy regulations, led perhaps by GDPR. The former increases fintech's attack surface, while rapid growth in the IoT and use of AI algorithms increases the amount of PII collected and stored.

"It's a smart move to highlight data aggregators as a point of cyber vulnerability," David Shrier, CEO of Distilled Analytics told SecurityWeek. "You have only to look at the Equifax hack to understand why this is important. And classically they are not considered fintechs, so it's worthwhile to call them out separately.

"Unknowingly," he adds, "in our race to adopt new technology over the past 20 years, we have ceded a massive amount of personal information to these third parties (data aggregator and fintech alike), and it has created gigantic cyber vulnerabilities."

Kabbage CEO Rob Frohwein explained: "Kabbage is joining the World Economic Forum consortium because cybersecurity is a never-ending, age-long issue that requires a long-lasting solution for tomorrow and not a Band-Aid for today. We need a living global standard that allows financial services companies to compete and work with incumbent institutions across borders and industries."

The Fintech Cybersecurity Consortium will develop a cybersecurity assessment framework for fintechs and data aggregators. This will, in theory, enable new firms to interconnect with fintech and aggregator firms with greater confidence.

Some firms will likely balk at yet another fintech framework-cum-regulation, particularly since it will evolve from existing frameworks. "Unfortunately, this really doesn't change the game in any way (that I can tell)," comments Nathan Wenzler, chief security strategist at AsTech. "It is likely to get a, 'it's yet another regulation for us financial companies' kind of reaction. Yes, some financial firms might be interested. If this was any other industry besides finance, it might be something more significant. As it stands, they're pretty numb to all the regulatory requirements they deal with everywhere."

Shrier is more optimistic. "We have seen the WEF tackle other areas with paradigm-shifting thought leadership, so, provided they get the right experts in their working group, this could be additive to improving cybersecurity. While this new effort is not guaranteed to succeed, our problem today is too many headlines about cyber breaches and not enough systems thinking about cyber solutions. The WEF group has a chance to raise serious cyberthinking in the C-suite and board room proactively, instead of reactively after an incursion."


European Commission requests IT firms to remove ‘Terror Content’ within an hour
2.3.2018 securityaffairs Cyber

The EU issued new recommendations to tackle illegal content online, asking internet companies to promptly remove terror content from their platforms within an hour of notification.
On Thursday, the EU issued new recommendations urging internet companies to promptly remove “harmful content,” including terror content, from their platforms.

“As a follow-up, the Commission is today recommending a set of operational measures accompanied by the necessary safeguards – to be taken by companies and Member States to further step up this work before it determines whether it will be necessary to propose legislation.” reads the fact sheet published by the European Commission.

“These recommendations apply to all forms of illegal content ranging from terrorist content, incitement to hatred and violence, child sexual abuse material, counterfeit products and copyright infringement.”

It is a call to action for tech firms and social media giants to take down “terrorist content” within an hour of it being reported. The recommendation is directed at major services including YouTube, Facebook, and Twitter.

These platforms are abused daily by terrorist organizations like the Islamic State group. The EU’s recommendations follow the demands of the nations participating in the 2017 G7 Summit held in Taormina, Italy, which urged action from internet service providers and social media giants against extremist content online.

The European Commission is teaming up with a group of US internet giants to adopt additional measures to fight web extremism, but at the same time it warned it would consider legislation if the internet firms do not follow the recommendations.

“While several platforms have been removing more illegal content than ever before — showing that self-regulation can work — we still need to react faster against terrorist propaganda and other illegal content,” said the commission’s vice-president for the Digital Single Market Andrus Ansip.

Ansip added that this content remains “a serious threat to our citizens’ security, safety, and fundamental rights.”


Andrus Ansip (@Ansip_EU), 1:07 PM - Mar 1, 2018: “What is illegal offline is also illegal online. Limited liability system under EU's #eCommerce law already works well – it should stay in place. My statement at press conference on fighting illegal content online: http://bit.ly/2oJGgTw”

The European Commission recognized the results achieved by internet firms in combatting illegal content, but the adversaries are very active and there is still a lot of work to do.

The Commission sees “significant scope for more effective action, particularly on the most urgent issue of terrorist content, which presents serious security risks”.

The European Commission demands that terrorist content be taken down within one hour of being reported by the authorities; it also urges stricter monitoring of, and proactive action against, illegal content.

The EU suggests the adoption of automated detection systems that could help tech firms rapidly identify harmful content and any attempt to re-upload removed illegal content.

The new recommendations also specifically address other types of illegal content, such as hate speech and images of child sexual abuse.

“Illegal content means any information which is not in compliance with EU law or the law of a Member State. This includes terrorist content, child sexual abuse material (Directive on combating sexual abuse of children), illegal hate speech (Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law), commercial scams and frauds (such as Unfair commercial practices directive or Consumer rights directive) or breaches of intellectual property rights (such as Directive on the harmonisation of certain aspects of copyright and related rights in the information society).” continues the EC.
“Terrorist content is any material which amounts to terrorist offences under the EU Directive on combating terrorism or under national laws — including material produced by, or attributable to, EU or UN listed terrorist organisations.”

According to the commission, internet firms removed 70 percent of illegal content notified to them in the preceding few months.


Financial Cyberthreats in 2017
2.3.2018 Kaspersky Cyber
In 2017, we saw a number of changes to the world of financial threats and new actors emerging. As we have previously noted, fraud attacks in financial services have become increasingly account-centric. User data is a key enabler for large-scale fraud attacks, and frequent data breaches – among other successful attack types – have provided cybercriminals with valuable sources of personal information to use in account takeovers or false identity attacks. These account-centric attacks can result in many other losses, including those of further customer data and trust, so mitigation is as important as ever for both businesses and financial services customers.

Attacks on ATMs continued to rise in 2017, attracting the attention of many cybercriminals, with attackers targeting bank infrastructure and payment systems using sophisticated fileless malware, as well as the more rudimentary methods of taping over CCTVs and drilling holes. In 2017, Kaspersky Lab researchers uncovered, among other things, attacks on ATM systems that involved new malware, remote operations, and an ATM-targeting malware called ‘Cutlet Maker’ that was being sold openly on the DarkNet market for a few thousand dollars, along with a step-by-step user guide. Kaspersky Lab has published a report outlining possible future ATM attack scenarios targeting ATM authentication systems.

It is also worth mentioning that major cyber incidents continue to take place. In September 2017, Kaspersky Lab researchers identified a new series of targeted attacks against at least 10 financial organizations in multiple regions, including Russia, Armenia, and Malaysia. The hits were performed by a new group called Silence. While stealing funds from its victims, Silence implemented specific techniques similar to the infamous threat actor, Carbanak.

Thus, Silence joins the ranks of the most devastating and complex cyber-robbery operations like Metel, GCMAN and Carbanak/Cobalt, which have succeeded in stealing millions of dollars from financial organizations. The interesting point to note with this actor is that the criminals exploit the infrastructure of already infected financial institutions for new attacks: sending emails from real employee addresses to a new victim, along with a request to open a bank account. Using this trick, criminals make sure the recipient doesn’t suspect the infection vector.

Small and medium-sized businesses didn’t escape financial threats either. Last year Kaspersky Lab’s researchers discovered a new botnet that cashes-in on aggressive advertising, mostly in Germany and the US. Criminals infect their victims’ computers with the Magala Trojan Clicker, generating fake ad views, and making up to $350 from each machine. Small enterprises lose out most because they end up doing business with unscrupulous advertisers, without even knowing it.

Moving down one more step – from SMEs to individual users – we can say that 2017 didn’t give the latter much respite from financial threats. Kaspersky Lab researchers detected NukeBot – a new malware designed to steal the credentials of online banking customers. Earlier versions of the Trojan were known to the security industry as TinyNuke, but they lacked the features necessary to launch attacks. The latest versions however, are fully operable, and contain code to target the users of specific banks.

This report summarizes a series of Kaspersky Lab reports that between them provide an overview of how the financial threat landscape has evolved over the years. It covers the common phishing threats that users encounter, along with Windows-based and Android-based financial malware.

The key findings of the report are:

Phishing:
In 2017, the share of financial phishing increased from 47.5% to almost 54% of all phishing detections. This is an all-time high, according to Kaspersky Lab statistics for financial phishing.
More than one in four attempts to load a phishing page blocked by Kaspersky Lab products is related to banking phishing.
The share of phishing related to payment systems and online shops accounted for almost 16% and 11% respectively in 2017. This is slightly higher (by single percentage points) than in 2016.
The share of financial phishing encountered by Mac users nearly doubled, accounting for almost 56%.
Banking malware:
In 2017, the number of users attacked with banking Trojans was 767,072, a decrease of 30% on 2016 (1,088,900).
19% of users attacked with banking malware were corporate users.
Users in Germany, Russia, China, India, Vietnam, Brazil and the US were the most often attacked by banking malware.
Zbot is still the most widespread banking malware family (almost 33% of attacked users), but is now being challenged by the Gozi family (27.8%).
Android banking malware:
In 2017, the number of users that encountered Android banking malware decreased by almost 15% to 259,828 worldwide.
Just three banking malware families accounted for attacks on the vast majority of users (over 70%).
Russia, Australia and Turkmenistan were the countries with the highest percentage of users attacked by Android banking malware.


Cybersecurity – Tips to Protect Small Business from Cyber Attacks
23.2.2018 securityaffairs Cyber

Small businesses are a privileged target for attackers; in fact, the risk of running into trouble with hackers is just as high as it is for a large company or a media player.
Do you have a small company? If the answer is yes, and you think that no cyber attack will ever affect you, think again. Small businesses are a privileged target for attackers; in fact, the risk of running into trouble with hackers is just as high as it is for a large company or a media player.

According to recent reports, more than 40% of cyber attacks target companies with fewer than 500 employees. Even more disturbing, studies show that hackers attack every fifth small company. In most cases, breached companies shut down because they have no security plan at all, or because the plan falls far short of providing complete protection.

Cybersecurity is the most important way to ensure that your business does not fall victim to malicious attacks, especially since the people behind them rarely show themselves.

Therefore, it is essential to take strong security measures if you do not want to lose your life’s work and the trust of your valuable customers. Moreover, prominent organizations expect their confidential information to stay hidden under any circumstances. If they find that this is not the case, your customers will turn to other companies.

To avoid this, we would like to share some tips on how you can protect your small business from cyber attacks.

Make as Many Backups as Possible

Backups matter if you want to protect confidential data from cyber attacks; the hackers who create malicious software and push it onto the devices used by small-business employees are relentless. If you create multiple backups, you can sleep well at night, knowing that your files, presentations and other documents are safe and sound. The goal is to avoid lasting damage when malware does strike.
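To make this advice concrete, here is a minimal sketch (purely illustrative, not from the original article) of a timestamped backup routine in Python; the source and destination paths are placeholders that a small business would replace with its own folders and an external drive or network share.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical paths -- replace with the folders you actually need to protect
# and the external drive or network share where copies should live.
SOURCE = Path("C:/CompanyData")
BACKUP_ROOT = Path("E:/Backups")

def make_backup() -> Path:
    """Copy SOURCE into a new, timestamped folder so older backups are preserved."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = BACKUP_ROOT / f"backup-{stamp}"
    shutil.copytree(SOURCE, destination)
    return destination

if __name__ == "__main__":
    print(f"Backup written to {make_backup()}")
```

Scheduling a script like this (with Windows Task Scheduler or cron) and keeping at least one copy offline is what turns "make backups" from advice into routine.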

Use a Powerful Antivirus Program

Using a reliable security solution is essential to keeping your business safe as a whole.

Choose antivirus software that protects your computers against all types of malware; the program should be able to detect and eliminate spam, spyware, Trojans, phishing attempts, etc. After selecting the best option for your business, do not forget to update it regularly.

Training of Employees

The people who work for you need to know that clicking on random links received through their work email can cause significant damage to the company and expose its secret and confidential information.

The same applies to connecting to networks that are not protected by a secure password. These are just two of the most dangerous practices you should stop right away. How can this be done? For example, you can organize training programs or hold meetings where security experts advise employees and discuss safe workplace practices against cyber crime. An even better option is to implement security policies and procedures covering online conduct.

Use a Separate Network for Payment Terminals

Running payment terminals on the same network as the rest of your business is a practice that must stop. Never connect them to your main business network. Keep the two environments separate so that only a few authorized employees can reach the payment systems. That way, the computers on your main network do not expose confidential payment data to cyber attacks.

Take Out a Cybersecurity Insurance Policy

We insure our cars, our homes and so on. Why not do the same for our companies? Cyber insurance is very useful when cyber threats become real. How? If a malware attack occurs, your company is held responsible.

Affected parties may demand compensation, so you could have to pay out a significant amount of money. With the help of cybersecurity insurance, you can guarantee coverage of court fees and damages.

Change Passwords Every Three Months

Many people use the same password on all their devices, social platforms and so on. Small businesses have long done the same, increasing their risk of cyber attacks. Change passwords every three months, and do not forget to create strong passwords every time you do so.

The most secure passwords are 8-16 characters long and contain special characters, numbers, and letters. If you know your memory is not up to the task, a password manager simplifies the job.
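As a purely illustrative sketch (not part of the original article), the snippet below generates a password within the recommended 8-16 character range using Python's standard-library secrets module; the exact character set and the default length of 16 are assumptions.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Build a random password containing letters, digits and special characters."""
    if not 8 <= length <= 16:
        raise ValueError("use a length in the recommended 8-16 character range")
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    # Redraw until the password contains at least one of each character class.
    while True:
        password = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in password)
                and any(c.isupper() for c in password)
                and any(c.isdigit() for c in password)
                and any(not c.isalnum() for c in password)):
            return password

print(generate_password())
```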


Do Business Leaders Listen to Their Own Security Professionals?
22.2.2018 securityweek Cyber

Survey Shows a Disconnect Between Business Leaders and Security Professionals

A new research report published this week claims, "A disconnect about cybersecurity is causing tension among leaders in the C-suite -- and may be leaving companies vulnerable to breaches as a result."

The specific disconnect is over the relative importance between anti-malware and identity control -- but it masks a more persistent issue: do business leaders even listen to their own security professionals?

The basis for this assertion comes from two sources: the Verizon 2017 Data Breach Investigations Report (DBIR), and the report's own research. DBIR states, "81% of hacking-related breaches leveraged either stolen and/or weak passwords." The new research (PDF), conducted by Centrify and Dow Jones Customer Intelligence, shows that companies' security officers agree with that view, while their CEOs do not. Centrify surveyed 800 senior executives in November 2017.

According to the new research, 62% of CEOs consider malware to be the primary threat to cybersecurity, while only 35% of their technical officers agree. The technical officers agree with the DBIR that most breaches come through failures in identity and access control. "More than two-thirds (68%) of executives from companies that experienced at least one breach with serious consequences say it would most likely have been prevented by either privileged user identity and access management or user identity assurance. That compares with only 8% who point to anti-malware endpoint controls."

The report, published by Centrify (a firm that delivers Zero Trust Security through what it calls 'Next-Gen Access'), found this to be perhaps the most disturbing of a series of mismatches between the views of technical officers and their CEOs. Another example concerns strategy accountability: 81% of CEOs say they are most accountable for the company's security strategy; while 78% of the technical officers believe it is they who are most accountable.

These figures raise two questions: firstly, are the technical officers correct in their assertion that identity control is more important than anti-malware, or are CEOs correct in their insistence on anti-malware; and secondly, if the technical officers are correct, why do they fail to adequately communicate their views to senior management?

There is no simple answer. Not all practitioners accept the survey results. Steve Lentz, CSO and director of information security at Samsung Research America, doesn't automatically accept that identity is a bigger problem than malware. "I really believe it's the unknown malware that is on many employee PCs that leak info." He quoted an example of two employees visiting from abroad and connecting to his network. "Our network defenses immediately alerted my security team and quarantined the two PCs." One had a keylogger while the other had a password stealer. The implication is that since it is impossible to control all identities all the time it is necessary to have adequate anti-malware.

Martin Zinaich, information security leader at the City of Tampa, FL, believes the problem may stem from different priorities between Business and Security. Business leaders often have "a low user-friction tolerance combined with a high-risk appetite." At the same time, questioning whether malware or identity is the biggest problem is a mistake. "Wasn't last year's big breach at Equifax due to an unpatched Apache Struts vulnerability? Too often for security professionals it is the basics that get missed."

To a degree, the malware/identity issue is a chicken and egg problem. Drew Koenig, security solutions architect at Magenic, takes one view. If "you look at incidents in their entirety, malware is the result of identity security failures." While phishing and poor security behavior is one problem, poor password construction, account sharing, and over-privileged accounts are another. Compromised accounts are the delivery mechanism, he suggests, for the malware that accesses databases and steals sensitive data.

But Joseph Carson, chief security scientist at Thycotic, warns that attackers use social engineering to bypass initial identity controls. "One single click on a malicious link, can download malware onto your computer that can immediately lock up data in a 'ransomware' attack." In this scenario, identity controls won't protect you from the effects of malware.

Boris Vaynberg, co-founder and CEO at Solebit agrees. "Most attacks start with an attacker penetrating into the organization. These attackers use various techniques, most of them including use of malware to secure initial control inside the organization. Once the attacker gets control, the second step is lateral movement. Attackers will then attempt to secure the credentials they are seeking in order to obtain an organization's sensitive data."

Brian Kelly, chief information security leader at Quinnipiac University, accepts that malware may be the vector used to compromise the identity, but adds, "I really keep coming back to the idea that identity is the new perimeter. In a world full of clouds and ubiquitous mobile access, identity is the only thing between you and your data."

The implication is that identity control cannot stop malware. But since we know that anti-malware also cannot guarantee to stop all malware, identity and credential control becomes essential to prevent lateral movement and privilege escalation.

"It's overly simplistic to think that if the organization addresses one specific attack vector, it will prevent all major breaches," warns Lenny Zeltser, VP of products at Minerva Labs. "Attackers can follow different pathways to achieve their objectives. They can steal credentials, elevate access, and cause damage even if the company has strong identity management practices. Identity security is important, so is endpoint defense, so are network safeguards, etc. We cannot focus on a single security layer and neglect the others."

The second implication from the Centrify survey is that either security professionals are failing to deliver their message to business leaders, or business leaders are refusing to listen to their security professionals. Again, there is no simple answer.

Mike Weber, VP at Coalfire Labs, believes there is a business reason for business leaders to be reluctant to listen to their security professionals. "The security landscape changes constantly, and those dynamic changes rarely align with fiscal year planning cycles. To be able to quickly react to the latest threats, a CISO may need to resort to 'overselling' a particular need." The problem here is that business leaders face 'oversells' all the time, and are well-versed in ignoring them.

Brian Kelly suggests the basic problem comes from multiple sources of threat information. "The feeling that malware is the greatest risk may be driven more by media reports than the security team's failure to deliver the correct message. Information Security teams are competing for the CEO's attention, but are also struggling to craft a message that makes sense in context."

Perhaps one of the problems is a basic misunderstanding of the purpose of 'security'. Mike Smart, security strategist at Forcepoint, believes security is like the brake on a car. Business leaders think its purpose is to slow down the car; that is, security slows down business. "Innovators will tell you the opposite," he says. "It's there to give the driver the confidence to go as fast as possible." In this view, security is the enabler of agile business -- but the implication is that security leaders have failed to adequately explain this function to the business leaders.

Dr. Bret Fund, founder and CEO at SecureSet, suggests that most companies have failed to yet establish the partnership between business and security that is necessary for an agile but secure business. "Security managers need to do a better job understanding the business constraints and how, as a security team, they can provide meaningful solutions inside of those realities. Business managers need to do a better job of understanding that security is everyone's responsibility and NOT just the security teams."

There is little disagreement over a disconnect between business leaders and security professionals. Bridging that disconnect is the problem. Koenig believes that the security team needs to own the problem. "In security," he says, "you have to assume everyone outside your team distrusts you. That's an unfortunate reality. So, to improve your delivery, educate instead of present. Put context around what you are reporting. Help them understand that malware is a valid risk, but most breaches are the result of poor identity controls that allows for the delivery of malware. Ultimately for every security report that is delivered you have to answer the hardest question from a business, 'So What?'. Don't tell, explain."

Centrify's survey demonstrates this mismatch in cyber threat understanding between business leaders and security professionals. The report shows that most security professionals believe that 'identity' is the number one control, while business leaders concentrate on malware. It's a nuanced issue. Identity and credential control, such as that provided by Centrify, won't stop all malware -- but it may prevent a malware incident developing into a major breach. How to get business leaders to listen to security professionals remains a continuing problem.


Structure of Cyber Risk Perception Survey Could Distort Findings
22.2.2018 securityweek Cyber

CISOs Barely Mentioned in Report on Global Cyber Risk Perception

The purpose of a new report  from cyber insurance firm Marsh, supported by Microsoft's Global Security Strategy and Diplomacy team, is to examine the global state of cyber risk management: "This report provides a lens into the current state of cyber risk management at organizations around the world."

To achieve this, Marsh polled 1,312 senior executives "representing a range of key functions, including information technology, risk management, finance, legal/compliance, senior management, and boards of directors." However, there is no category representing information security, nor any specific indication where a security team fits in the organizational structure.

A reasonable assumption would be that cyber security is treated as part of IT, and that if the organization has a CSO or CISO, that position reports directly to the CIO from within the IT structure. That would explain why IT is consistently described as the functional area that is the primary owner and decision-maker for cyber risk management in all companies across all sectors with revenue above $10 million per annum.

But it doesn't reflect reality. While the majority of CISOs might still report to the CIO, this is slowly changing. Some now report directly to the board while others report to the Chief Risk Officer (CRO) or Legal.

Furthermore, the cyber security function is key to the specification and implementation of any cyber risk mitigation policy (where 'mitigation' equates to risk reduction as opposed to other methods such as risk transfer, which equates to insurance). Human Resources (30 respondents) can help with insider risk definition and response. Procurement can help with security product purchasing (14 respondents). Finance (340 polled) can help with budget planning and financial compliance issues. But none of these will see the full cyber risk threat. While all of these should be involved in cyber risk management, only a dedicated security team is in a position to define and lead it -- and yet there is no cyber security function included in the report.

The decision not to give cyber security its own role, if not the primary role, within the survey has the potential to distort the findings. For example, 41% of the respondents are concerned about financially motivated attacks (which in this survey includes hacktivists), while only 6% are most concerned about politically motivated attacks including state-sponsored attacks.

The question asked was "With regard to a cyber-attack that delivers destructive malware, which threat actor concerns you the most?" Options on offer included 'Operational error' and 'Human error, such as employee loss of mobile device', neither of which is commonly associated with the delivery of destructive malware. It is not clear that heads of individual departments would have the nuanced understanding of different cyber threat vectors to provide an accurate view of overall cyber risk.

Another example can be found in the section on reporting. The report states, "53% of chief information security officers, 47% of chief risk officers, and 38% of chief technology/information officers said they provide reports to board members on cyber investment initiatives. Yet only 18% of board members said they receive such information." There is clearly a disconnect between reporting and listening -- and few people in the security industry would question that there is a security information communications problem.

This is the one occurrence of the title 'CISO' in the entire report -- but notice a higher percentage of cyber security officers report on cyber investments than do the IT officers. The implication is that if Security had been separated out from IT, then IT would not so consistently be seen as the primary decision-maker for cyber risk management -- something that most security practitioners might consider worrying given the non-cyber-risk and potentially conflicting business pressures already affecting IT.

This lack of distinction between IT and Security also misses a useful opportunity. The figures show that more reports are delivered by CISOs (percentage-wise) than by CIOs and CTOs. For several years now, CISOs have been on a campaign to improve their own and their security staff's 'soft skills'. Indeed, NIST's National Initiative for Cybersecurity Education (NICE) is this week running a webinar titled, 'Development of Soft Skills That Are in Demand by Cybersecurity Employers'.

NICE states that for cybersecurity employers, "soft skills such as effective communication, problem-solving, creative thinking, resourcefulness, acting as a team player, and flexibility are among the most desirable attributes they are looking for in a new hire." It would be useful if Marsh's figures could show the comparative effectiveness of cyber risk reporting coming from CISOs and CIOs.

Nevertheless, there is useful data and advice within the report. It shows that the majority of companies do not have a method of expressing risk quantitatively (that is, in economic terms). Those that do express their risk tend to do so qualitatively (that is, with capability maturity levels). But understanding the economic effect of different cyber events is essential for both risk mitigation and/or risk transfer. It helps the security team to understand where to concentrate both effort and budget; and it is essential for insurance companies to set realistic insurance premiums.
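One widely used way to express risk in economic terms -- offered here only as an illustration, not as a method prescribed by the Marsh report -- is annualized loss expectancy (ALE): the expected cost of a single incident multiplied by how often that incident is expected to occur per year. A minimal sketch, with hypothetical figures:

```python
def annualized_loss_expectancy(single_loss_expectancy: float,
                               annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: the expected yearly cost of a given cyber event."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical figures for illustration only: a ransomware incident costing
# $250,000 that is expected to occur roughly once every 2.5 years.
ale = annualized_loss_expectancy(single_loss_expectancy=250_000,
                                 annual_rate_of_occurrence=0.4)
print(f"Expected annual loss: ${ale:,.0f}")  # $100,000
```

Figures like these let a security team weigh the expected annual loss of an event against the cost of mitigating it or transferring it through insurance.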

The figures show that just over half of organizations either have (34%) or plan to buy (22%) cyber insurance. The remainder either have no plans, or specifically plan not to buy insurance -- but a small number (less than 1%) have dropped existing insurance. The primary reason cited for dropping insurance is, "Cyber insurance does not provide adequate coverage for the cost."

The implication is that cyber insurance companies (which include Marsh) have a large potential market -- one forecast to top $14 billion by 2022 -- but they have not yet succeeded in fully making their case. This report does not help by largely ignoring companies' existing cyber risk mitigation specialists.

By not differentiating between the responding company's security function and its IT function, security-specific mitigation is diluted. When SecurityWeek asked Marsh why it hadn't separated the two, Marsh responded, "Don't know exactly what you mean by 'cyber security function' -- a CISO??"

The 'cyber security function' is the work performed by the security team under a variously titled head of cyber security. Although IT and Security must necessarily work together, they have different functions and different priorities, and therefore deserve to be treated separately.

Marsh provided SecurityWeek with a detailed breakdown of the respondents' job functions, answered under the question: "Which functional area most closely describes your position?" The available options were Finance, Risk management, Information technology, Board of directors, Operations, Legal/Compliance/Audit, Human resources, Procurement, and Other. 'Cyber Security' was not an option.

It is the security function that best understands and is most engaged in active risk mitigation. By concentrating the survey on general business leaders with little understanding of, or direct involvement in, cyber risk mitigation, the results inevitably favor the primary alternative, that is, 'risk transfer'. Risk transfer is cyber insurance, which is what Marsh provides.


City Union Bank is the last victim of a cyber attack that used SWIFT to transfer funds
20.2.2018 securityaffairs Cyber

The Kumbakonam-based Indian lender City Union Bank announced that cyber criminals compromised its systems and transferred a total of US$1.8 million.
Over the weekend, the Russian central bank revealed a new attack against the SWIFT system: unknown hackers stole 339.5 million roubles (roughly $6 million) from a Russian bank last year.

Even if the SWIFT international bank transfer system enhanced its security after the string of attacks that targeted it since 2016, the news of a new attack made the headlines.

The victim is the Kumbakonam-based Indian lender City Union Bank, which announced that criminals compromised its systems and transferred a total of US$1.8 million.


On Sunday, February 18, the Kumbakonam-based City Union Bank issued a statement after local media reported that three unauthorized transactions had been initiated by staff. The Indian bank confirmed that it had suffered a security breach launched by “international cyber-criminals” and that there is no evidence of internal staff involvement.

“During our reconciliation process on February 7, it was found out that 3 fraudulent remittances had gone through our SWIFT system to our correspondent banks which were not initiated from our bank’s end. We immediately alerted the correspondent banks to recall the funds,” reads the statement issued by City Union Bank.

The three transactions took place before February 7, when they were discovered during the reconciliation processes.

One transaction of $500,000 that was made through Standard Chartered Bank, New York, to a Dubai based bank was immediately blocked.

A second transaction of $372,150 was made through a Standard Chartered Bank account in Frankfurt to a Turkish account, and a third transaction of $1 million was sent through a Bank of America account in New York to a China-based bank.

The City Union Bank confirmed it was working with the Ministry of External Affairs and officials in Turkey and China to recover the funds.

“With the help of Ministry of External Affairs through Consulate General of Shanghai and Istanbul and office of the National Cyber Security Council (PMO) all possible efforts through diplomatic and legal channels are being taken to repatriate the money,” continues the statement.

Summing up, the security features implemented around the SWIFT system were able to stop only the transfer to Dubai.

The SWIFT system is now back in operation with “adequate enhanced security”.

At the time of writing, the root cause of the breach is still unclear.


Pyeongchang – Olympic Destroyer Unleashed to Embarrass Pyeongchang 2018 Games
13.2.2018 securityaffairs Cyber

Shortly before the Pyeongchang opening ceremonies on Friday, televisions at the main press centre, wifi at the Olympic Stadium and the official website were taken down.
It is well known that big events attract the attention of hackers. The biggest event right now is the 2018 Winter Olympics in Pyeongchang, South Korea and it looks like the hackers have arrived. Shortly before the opening ceremonies on Friday, televisions at the main press centre, wifi at the Olympic Stadium and the official website were taken down. All systems were restored by 8AM on the following Saturday, and although individuals were unable to print event tickets during the outage, the organizing committee described the event as affecting only “noncritical systems.” Given the high profile of the games, the rumor mill immediately began spreading whispers that the outage was the result of a cyberattack.

After restoring services and investigating the cause, Sunday evening Pyeongchang 2018 spokesperson Sung Baik-you issued an official statement confirming that the outage resulted from a cyber attack.

“There was a cyber-attack and the server was updated yesterday during the day and we have the cause of the problem”, Sung Baik-you said.

Leading up to the Olympic Games there was a lot of speculation about whether North Korea would attempt to disrupt the games. Along with China and Russia, North Korean cyberwarfare teams are often suspected in large-scale attacks such as these. In this case, the International Olympics Committee (IOC) is refusing to participate in any speculation as to the source of the attacks.

“We wouldn’t start giving you the details of an investigation before it has come to an end, particularly because it involves security which at these games is incredibly important. I am sure you appreciate we need to maintain the security of our systems,” said Mark Adams, head of communications for the IOC.

While the IOC and Pyeongchang spokespeople are being cautious about releasing details in order to focus on ensuring the security and safety of the games, Cisco Talos has been forthcoming with technical details of the attack. While they haven’t pointed fingers at specific attackers, in a Talos blog post on February 12 they stated, “[samples identified] are not from adversaries looking for information from the games but instead they are aimed to disrupt the games.”


According to their research, there are many similarities between the Pyeongchang attack, which they are dubbing “Olympic Destroyer”, and earlier attacks such as BadRabbit and NotPetya. All of these attacks focus on the destruction and disruption of equipment, not the exfiltration of data or other, more subtle goals. Using legitimate tools such as PsExec and WMI, the attackers specifically target the pyeongchang2018.com domain, attempting to steal browser and system credentials so they can move laterally through the network before wiping the victim computer to make it unusable.
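Purely as a defensive illustration (this is not taken from the Talos analysis), the sketch below shows how a security team might sweep an exported process-creation log for the kind of PsExec- and WMI-driven lateral movement described above; the CSV layout, column names and process lists are assumptions.

```python
import csv

# Parent processes commonly associated with PsExec- and WMI-based remote execution,
# and child processes attackers typically spawn through them.
SUSPICIOUS_PARENTS = {"psexesvc.exe", "wmiprvse.exe"}
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "vssadmin.exe"}

def flag_lateral_movement(csv_path: str) -> list:
    """Return process-creation events where a remote-execution service spawns a shell.

    Assumes a CSV export with 'host', 'parent_image' and 'image' columns; a real
    pipeline (e.g. Sysmon events forwarded to a SIEM) would need its own parsing.
    """
    hits = []
    with open(csv_path, newline="") as handle:
        for event in csv.DictReader(handle):
            parent = event["parent_image"].lower().rsplit("\\", 1)[-1]
            child = event["image"].lower().rsplit("\\", 1)[-1]
            if parent in SUSPICIOUS_PARENTS and child in SUSPICIOUS_CHILDREN:
                hits.append(event)
    return hits

for hit in flag_lateral_movement("process_creation.csv"):
    print(f"Possible lateral movement on {hit['host']}: {hit['parent_image']} -> {hit['image']}")
```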

While the source of the attacks is uncertain, the Cisco Talos blog post is clear in identifying motivation, “Disruption is the clear objective in this type of attack and it leaves us confident in thinking that the actors behind this were after embarrassment of the Olympic committee during the opening ceremony.”


Philippine Bank Threatens Counter-Suit Over World's Biggest Cyber-Heist
9.2.2018 securityweek  Cyber
The Philippine bank used by hackers to transfer money in the world's biggest cyber heist warned of tit-for-tat legal action Thursday, after Bangladeshi officials said they would sue the lender.

Unidentified hackers stole $81 million from the Bangladesh central bank's account with the US Federal Reserve in New York two years ago, then transferred it to a Manila branch of the Rizal Commercial Banking Corp (RCBC).

The funds were then swiftly withdrawn and laundered through local casinos.

Bangladeshi officials said Wednesday they are readying a case against RCBC for its alleged role in the heist.

One of the officials, Bangladesh's Finance Minister A.M.A Muhith, said last year he wanted to "wipe out" RCBC.

But RCBC maintained the February 2016 cyber-heist was an "inside job" and that the Philippine bank was being used as a scapegoat to hide the real culprits.

RCBC, one of the Philippines' largest banks, charged that Bangladeshi officials were hiding their own findings into the crime, possibly to conceal the involvement of their own officials in the heist.

"RCBC has had it and will consider a lawsuit against Bangladesh Central Bank officials for claiming the bank had a hand in the $81M cyber-heist," the Philippine lender said in a statement.

"They are perpetuating the cover-up and using RCBC as a scapegoat to keep their people in the dark," the RCBC statement said.

The Philippine central bank imposed a record $21 million fine on RCBC after the discovery of the heist as it investigated the lender's alleged role in the theft.

Only a small amount of the stolen money has been recovered.

Money-laundering charges were also filed against the RCBC branch manager.

The US reserve bank, which manages the Bangladesh Bank reserve account, has denied its own systems were breached.


US authorities dismantled the global cyber theft ring known as Infraud Organization
9.2.2018 securityaffairs Cyber

The US authorities have dismantled a global cybercrime organization tracked Infraud Organization involved in stealing and selling credit card and personal identity data.
The US authorities have taken down a global cybercrime organization, the Justice Department announced indictments for 36 people charged with being part of a crime ring specialized in stealing and selling credit card and personal identity data.

According to the DoJ, the activities of the ring, tracked as the ‘Infraud Organization’, caused $530 million in losses. The group has been active since 2010, when it was created in Ukraine by Svyatoslav Bondarenko.

Bondarenko remains at large, but Russian co-founder Sergey Medvedev has been arrested by the authorities.

Most of the crooks were arrested in the US (30), the remaining members come from Australia, Britain, France, Italy, Kosovo, and Serbia.

The indicted leaders of the organization included people from the United States, France, Britain, Egypt, Pakistan, Kosovo, Serbia, Bangladesh, Canada and Australia.

The motto of the Infraud Organization was “In Fraud We Trust,” and it played a central role in the criminal ecosystem as a “premier one-stop shop for cybercriminals worldwide,” explained Deputy Assistant Attorney General David Rybicki.

“As alleged in the indictment, Infraud operated like a business to facilitate cyberfraud on a global scale,” said Acting Assistant Attorney General John Cronan.

The platform served as a privileged marketplace for criminals (10,901 approved “members” in early 2017) who could buy and sell payment card and personal data.

“Members join the Infraud Organization via an online forum. To be granted membership, an Infraud Administrator must approve the request. Once granted membership, members can post and pay for advertisements within the Infraud forum. Members may move up and down the Infraud hierarchy,” said the indictment.

“The Infraud Organization continuously screens the wares and services of the vendors within the forum to ensure quality products. Vendors who are considered subpar are swiftly identified and punished by the Infraud Organization’s Administrators.”


The Infraud Organization used a number of websites to commercialize the data. It implemented a classic and efficient e-commerce model for the stolen card and personal data, including a rating and feedback system and an “escrow” service for payments in digital currencies like Bitcoin.


Bangladesh to File U.S. Suit Over Central Bank Heist
8.2.2018 securityweek  Cyber
Bangladesh's central bank will file a lawsuit in New York against a Philippine bank over the world's largest cyber heist, the finance minister said Wednesday.

Unidentified hackers stole $81 million in February 2016 from the Bangladesh central bank's account with the US Federal Reserve in New York.

The money was transferred to a Manila branch of the Rizal Commercial Banking Corp (RCBC), then quickly withdrawn and laundered through local casinos.

With only a small amount of the stolen money recovered and frustration growing in Dhaka, Bangladesh's Finance Minister A.M.A Muhith said last year he wanted to "wipe out" RCBC.

On Wednesday he said Bangladesh Bank lawyers were discussing the case in New York and may file a joint lawsuit against the RCBC with the US Federal Reserve.

"It will be (filed) in New York. Fed may be a party," he told reporters in Dhaka.

The deputy central bank governor Razee Hassan told AFP the case would be filed in April.

"They (RCBC) are the main accused," he said.

"Rizal Commercial Banking Corporation (RCBC) and its various officials are involved in money heist from Bangladesh Bank's reserve account and the bank is liable in this regard," Hassan said in a written statement.

The Philippines in 2016 imposed a record $21 million fine on RCBC after investigating its role in the audacious cyber heist.

Philippine authorities have also filed money-laundering charges against the RCBC branch manager.

The bank has rejected the allegations and last year accused Bangladesh's central bank of a "massive cover-up".

The hackers bombarded the US Federal Reserve with dozens of transfer requests, attempting to steal a further $850 million.

But the bank's security systems and typing errors in some requests prevented the full theft.

The hack took place on a Friday, when Bangladesh Bank is closed. The Federal Reserve Bank in New York is closed on Saturday and Sunday, slowing the response.

The US reserve bank, which manages the Bangladesh Bank reserve account, has denied its own systems were breached.


Questionable Interpretation of Cybersecurity's Hidden Labor Cost
7.2.2018 securityweek Cyber
Report Claims a 2,000 Employee Organization Spends $16 Million Annually on Incident Triaging

The de facto standard for cybersecurity has always been detect and respond: detect a threat and respond to it, either by blocking its entry or clearing its presence. A huge security industry has evolved over the last two decades based on this model; and most businesses have invested vast sums in implementing the approach. It can be described as 'detect-to-protect'.

In recent years a completely different isolation cyber security paradigm has emerged. Rather than detect threats, simply isolate applications from them. This is achieved by running the app in a safe container where malware can do no harm. If an application is infected, the container and the malware are abandoned, and a clean version of the application is loaded into the container. There is no need to spend time and money on threat detection since the threat can do no harm. This is the isolation model.

The difficulty for vendors of isolation technology is that potential customers are already heavily invested in the detect paradigm. Getting them to switch to isolation is tantamount to asking them to abandon their existing investment as a waste of money.

Bromium, one of the earliest and leading isolation companies, has chosen to demonstrate the unnecessary continuing manpower cost of operating a detect-to-protect model, together with the unnecessary cybersecurity technology that supports it.

Bromium commissioned independent market research firm Vanson Bourne to survey 500 CISOs (200 in the U.S., 200 in the UK, and 100 in Germany) in order to understand and demonstrate the operational cost of detect-to-protect. All the surveyed CISOs are employed by firms with between 1,000 and 5,000 employees, allowing the research to quote figures based on an average organization of 2,000 employees.

The bottom line of this research (PDF) is that a company with 2,000 employees spends $16.7 million every year on detect-to-protect. No comparable figure is given for an isolation model, but the reader is allowed to assume it would be considerably less.

The total cost is achieved by combining threat triaging costs, computer rebuilds, and emergency patching costs to provide the overall labor cost, plus the technology cost of nearly $350,000. The implication is that it is not so worrying to abandon $350,000 for a saving of $16 million -- and indeed, that would be true if the manpower costs are valid. But they are questionable.

All costs in the report are based on figures returned by the survey respondents. For example, according to the report, "Our research showed that enterprises issue emergency patches five times per month on average, with each fix taking 13 hours to deploy. That’s 780 hours a year, which—multiplied by the $39.24 average hourly rate for a cybersecurity professional—incurs costs of $30,607 per year."

But since these are emergency patches, we can add an additional $19,900 in overtime and/or contractor costs: a total of $49,900 every year that could be all but eliminated by switching to an isolation model.

The cost of computer rebuilds comes from the cost of rebuilding compromised computers that detect-to-protect has failed to protect. "On average," says the report, "organizations rebuild 51 devices every month, with each taking four hours to rebuild—equating to 2,448 hours each year. When multiplied by the average hourly wage of a cybersecurity professional, $39.24, that’s an average cost of $96,059 per year."
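For readers who want to check the report's arithmetic, the patching and rebuild figures above can be reproduced in a few lines, using only the numbers quoted in the report:

```python
HOURLY_RATE = 39.24  # average hourly rate for a cybersecurity professional, per the report

# Emergency patching: 5 patches a month, 13 hours each.
patch_hours_per_year = 5 * 13 * 12                 # 780 hours
print(f"Patching: ${patch_hours_per_year * HOURLY_RATE:,.0f} per year")    # ~$30,607

# Compromised-device rebuilds: 51 devices a month, 4 hours each.
rebuild_hours_per_year = 51 * 4 * 12               # 2,448 hours
print(f"Rebuilds: ${rebuild_hours_per_year * HOURLY_RATE:,.0f} per year")  # ~$96,059
```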

All these costs would seem to be realistic for a detect-to-protect model. The implication is that a switch to the isolate model would save nearly $500,000 per year to offset the cost of isolation. But the report goes much further, and suggests that much of a colossal $16 million can also be saved every year by an organization with 2,000 employees that will no longer require incident triaging by the security team.

How? "Well," claims the report, "on average SOC teams triage 796 alerts per week, taking an average of 10 hours per alert—that’s 413,920 hours across the year. When you consider that the average hourly rate for a cybersecurity professional is $39.24, that’s an annual average cost of more than $16 million each year."

The math works. But an alternative way of looking at these figures is that 7,960 hours of triaging per week would take more than 47 employees doing nothing but triaging 24 hours a day, seven days a week. Frankly, I doubt if any company with 2,000 employees does anything near this amount of triaging. It is, I suggest, misleading to state bluntly (as the report does): "Organizations spend $16 million per year triaging alerts."
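The same quick arithmetic reproduces both the report's $16 million triage figure and the 47-employee sanity check above:

```python
HOURLY_RATE = 39.24
HOURS_IN_A_WEEK = 24 * 7  # 168

triage_hours_per_week = 796 * 10                     # 7,960 hours of triage per week
triage_hours_per_year = triage_hours_per_week * 52   # 413,920 hours per year
print(f"Annual triage cost: ${triage_hours_per_year * HOURLY_RATE:,.0f}")      # ~$16.2 million

# How many people would have to triage around the clock to log that many hours?
print(f"Equivalent 24/7 staff: {triage_hours_per_week / HOURS_IN_A_WEEK:.1f}")  # ~47.4
```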

“Application isolation provides the last line of defense in the new security stack and is the only way to tame the spiraling labor costs that result from detection-based solutions,” says Gregory Webb, CEO at Bromium. “Application isolation allows malware to fully execute, because the application is hardware isolated, so the threat has nowhere to go and nothing to steal. This eliminates reimaging and rebuilds, as machines do not get owned. It also significantly reduces false positives, as SOC teams are only alerted to real threats. Emergency patching is not needed, as the applications are already protected in an isolated container. Triage time is drastically reduced because SOC teams can analyze the full kill chain.”

All of this is perfectly valid -- except for the $16 million annual detect-to-protect triaging claim. SecurityWeek has invited Bromium to comment on our concerns, and will update this article with any response.


Cisco, Apple Launch Cyber Risk Offering With Insurance Giant Allianz
6.2.2018 securityweek Cyber
Cisco, Apple, Aon, Allianz Partner to Help Businesses Protect Against Common Malware Threats

Munich, Germany-based Allianz -- named by Forbes as the world's second largest insurance firm -- is offering cyber insurance at competitive premiums with reduced deductibles; but only if the insured is risk-assessed by Aon and uses certain Cisco and Apple products.

Over the last few years, information security has increasingly been seen as a risk management issue. One of the traditional options for risk management is risk transfer; that is, insurance. But while the cyber insurance option has increased in visibility, its adoption remains relatively low. In 2016, US cyber insurance premiums were reported to be $1.35 billion. This is just 3.3% of the total premiums for U.S. commercial line insurers. Clearly, there is an opportunity for insurance companies to increase their own share of a potentially large market.

At the same time, product vendors are always looking for new opportunities to sell their products. The potential for linking specific product to reduced insurance premiums could help both industries to increase market share.

This has been slow to materialize because insurance works on detailed statistics between risk and premiums. It has decades of statistics for motor vehicles, and perhaps hundreds of years for shipping -- but only a few years' experience of a continuously changing and worsening infosecurity world. The natural effect of this is that premiums have to be set at the higher end of the possible scale simply because nobody really understands the full risk.

Apple and Cisco have been working to change this. In June 2017, Cisco's David Ulevitch (VP, security business group) announced, "We’re collaborating with insurance industry heavyweights to lead the way in developing the architecture that enables cyber insurance providers to offer more robust policies to our customers."

This collaboration surfaced yesterday in the announcement of a deal with Allianz: "a new cyber risk management solution for businesses, comprised of cyber resilience evaluation services from Aon, the most secure technology from Cisco and Apple, and options for enhanced cyber insurance coverage from Allianz," said Apple. However, it should be noted that this is not a general cyber insurance offering, but one specifically related to "cyber risk associated with ransomware and other malware-related threats, which are the most common threats faced by organizations today."

There are three elements that could lead to the insurance deal. The first is that the candidate company is risk assessed by Aon, who will examine the company's existing cyber security posture and make recommendations on how to improve existing defenses.

The second is that the candidate company should use Cisco Ransomware Defense and/or qualified Apple products (iPhone, iPad and Mac). The third is that insured companies will then have access to Cisco and Aon incident response teams in the event of a malware attack.

With any contract, and an insurance policy is just a contract, the devil is always in the detail. It isn't clear from the current announcement whether the insurance will go beyond just a malware attack -- into, for example, data manipulation or theft because of the malware attack. That may vary from contract to contract depending on the result of the Aon assessment.

For the moment, there is just the bald statement that if a company uses certain Cisco and Apple products, and presumably 'passes' a risk assessment by Aon, that company might possibly qualify for lower deductibles in a malware-related cyber insurance policy underwritten by Allianz.


Google Parent Alphabet Launches Cybersecurity Firm Chronicle
25.1.2018 securityweek Cyber

Google parent Alphabet on Wednesday announced a new standalone business dedicated to cybersecurity.

Called Chronicle, the newly unveiled company was born in 2016 as a project within X, Alphabet’s “moonshot” factory, with ambitions of analyzing massive amounts of data to provide security teams with insights into areas of “likely vulnerability” to help them protect their data.

“X, the moonshot factory, has been our home for the last two years while we figured out where we had the potential to make the biggest impact on this enormous problem,” Stephen Gillett, CEO of Chronicle, wrote in a blog post.

The new company, Gillett says, “will have two parts: a new cybersecurity intelligence and analytics platform that we hope can help enterprises better manage and understand their own security-related data; and VirusTotal, a malware intelligence service acquired by Google in 2012 which will continue to operate as it has for the last few years.”

“We want to 10x the speed and impact of security teams’ work by making it much easier, faster and more cost-effective for them to capture and analyze security signals that have previously been too difficult and expensive to find,” added Gillett, a former executive at Symantec, Best Buy and Starbucks. “We are building our intelligence and analytics platform to solve this problem.”

Few details have been provided, and many questions remain on exactly what Chronicle’s platform will bring to the table, and how it will be deployed in an enterprise. With that said, Google has been innovative with its own internal security tools and initiatives, and it’s likely that Chronicle’s offerings will be compelling.

In June 2017, Google shared details on the security infrastructure that protects its data centers. Late last year, Google also shared detailed information on how it protects service-to-service communications within its infrastructure at the application layer and the system it uses for data protection. The search giant also has provided technical details on how it uses a “Tiered Access” model to secure devices for its global workforce of more than 61,000 employees.

“Inspired by Google’s own security techniques, we’re advancing cybersecurity for enterprises of all sizes,” Chronicle’s website says.

Chronicle, says X’s Astro Teller, is starting “by trying to give organizations a much higher-resolution view of their security situation than they’ve ever had by combining machine learning, large amounts of computing power and large amounts of storage.”

According to Gillett, the company will have its own contracts and data policies with its customers, while also being able to tap expertise across the entire Alphabet ecosystem.


Railway Cybersecurity Firm Cylus Emerges From Stealth
25.1.2018 securityweek Cyber

Cylus Obtains $4.7 Million in Funding to Help Protect Rail Industry Against Cyberattacks

Cylus, an Israel-based startup that specializes in cybersecurity solutions for the rail industry, emerged from stealth mode on Thursday with $4.7 million in seed funding.

Researchers have warned on several occasions over the past few years that modern railway systems are vulnerable to cyberattacks, and the rail industry has been targeted by both cybercriminals and state-sponsored cyberspies.

Cylus aims to address the challenges of securing railway systems by developing a solution that is specifically designed for this sector. The product relies on a set of non-intrusive sensors that provide deep visibility into operational networks and help detect malicious activities. Customers are provided an automated assessment and instructions on how to respond when a threat is detected.


The sensors are deployed in control centers, train management systems, interlocking systems, rolling stock, and trackside components. Information collected by the sensors is fed to an on-premises server that aggregates data and generates alerts based on rules derived from machine learning algorithms and research conducted by Cylus.

A centralized dashboard provides a view of all components, and alerts users when suspicious activities are detected, including failed authentication attempts, abnormal signaling communications, and unauthorized communications between components.

In addition to step-by-step instructions on how to respond to a specific threat, Cylus’ product offers forensic analysis capabilities designed to allow railroad companies to investigate incidents.
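Cylus has not published its detection logic, so purely as an illustration of the kind of rule such a platform might apply, the sketch below flags a device that records an unusual burst of failed authentication attempts; the event structure and the threshold are assumptions.

```python
from collections import Counter

FAILED_AUTH_THRESHOLD = 5  # assumed limit for a single monitoring window

def failed_auth_alerts(events: list) -> list:
    """Flag devices with a suspicious number of failed authentication events.

    Each event is assumed to look like:
        {"device": "interlocking-03", "type": "auth_failure", "window": "2018-02-12T10:00"}
    """
    counts = Counter((e["device"], e["window"])
                     for e in events if e["type"] == "auth_failure")
    return [f"ALERT: {device} logged {n} failed authentications in window {window}"
            for (device, window), n in counts.items()
            if n >= FAILED_AUTH_THRESHOLD]

sample = [{"device": "interlocking-03", "type": "auth_failure", "window": "10:00"}] * 6
for alert in failed_auth_alerts(sample):
    print(alert)
```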

Cylus has obtained $4.7 million in seed funding from Zohar Zisapel, Magma Venture Partners, Vertex Ventures, and the SBI Group.

“Current approaches to cybersecurity do not fit the architecture of railway networks today,” said Cylus CEO Amir Levintal. “Our team of world-class cyber specialists together with rail industry experts have tailored a solution to the industry’s unique requirements. Our solution enables rail companies to detect cyber-attacks in their operational network, including their signaling systems and rolling stocks, and block attackers before they can cause any damage. The automotive industry has woken up to the critical need for cyber protection– it’s time the railway industry got on board as well.”

Cylus told SecurityWeek that it’s currently in negotiations with several large national railways to test its product. Pricing is scalable and depends on the specific needs of each customer.

“Railway companies cannot compromise on passenger safety, and one of the pillars of passenger safety is cybersecurity,” said Boaz Zafrir, President of Cylus and former CEO of Israel Railways. “Railway executives are acutely aware of the dangers and are looking for answers. The extraordinary team at Cylus has rich experience creating effective cybersecurity solutions, and I am confident that the company's unique technology will help keep passengers safe all over the world.”


World Economic Forum Announces Global Centre for Cybersecurity
24.1.2018 securityweek Cyber

The World Economic Forum (WEF) is establishing a new Global Centre for Cybersecurity "to help build a safe and secure global cyberspace."

This was announced at the 48th Annual Meeting currently taking place in Davos-Klosters, Switzerland. This year's WEF theme is Creating a Shared Future in a Fractured World. WEF's annual Global Risk Report for 2018 shows cyberattacks are now considered the third most serious global threat behind only extreme weather and natural disasters. Data fraud/theft is fourth.

Aerial photo of the futuristic and stylish Intercontinental Hotel in Davos, Switzerland. The Annual Meeting of the World Economic Forum takes place in Davos-Klosters, Switzerland from January 23 to 26, 2018. (Image Credit: World Economic Forum)
The Global Centre for Cybersecurity is seen as providing a unique opportunity to promote a global public/private response to increasing cyber threats. Alois Zwinggi, managing director at the WEF and head of the new center, said cybercrime is currently costing the world economy $500 billion annually and is still growing. "As such, addressing the topic is really important for us. The Forum sees a need for much greater collaboration in that space."

WEF describes five main areas of operation for the center: consolidating existing initiatives (such as its Cyber Resiliency Playbook); establishing a library of best practices; improving partners' understanding of cybersecurity; promoting a regulatory framework; and serving as a think tank for future cybersecurity scenarios (such as the fourth industrial revolution and the effect of quantum computing). Although not specified per se, a consistent theme for the new center will be global cybersecurity information sharing.

Rob Wainwright, Executive Director of Europol, said that the center has "absolutely full support from Europol." He explained that Europol (which includes the European Cybercrime Centre) can only function as well as it does because of the public/private networks it has established in Europe: "but it is not nearly enough... That's why I am so delighted that WEF, with its unique networking capability, is now establishing this Global Centre for Cybersecurity -- because it will interconnect a large, dynamic, a very important business community... and will take us to a new level of public/private cooperation."

The Global Centre for Cybersecurity will be located in Geneva, Switzerland, and will be operational in March 2018. Although under the umbrella of WEF, it will be autonomous. WEF spokesperson Georg Schmitt told SecurityWeek that it will be funded by members, with an initial investment of several million Swiss francs from the forum itself. Ongoing, he said in an email, "partner companies will have to pay a certain fee to join. Fees for governments, academia and civil society will be waived. We are planning to hire 20-30 staff this year alone."

It's not yet known how many 'government partners' will join the center. "We will be able to announce the government partners at a later stage, but to give you an impression: at our preparatory meeting in November representatives of almost 20 governments participated, including several G7 and G20 countries."

Effective threat information sharing between the public and private sectors is often seen as the holy grail of cybersecurity -- but has so far proved just as elusive. However, business, like cybercrime, is transnational; and if any organization is well-suited to tackle the problem it is a global business organization. "The announcement of the creation of a Global Security Centre at WEF is welcomed as a potentially hugely valuable way forward in coordinating the activities of nations against this scourge of modern times," Jim Palmer, CISO at ThinkMarble told SecurityWeek. "That said," he continued, "the proof of its effectiveness will be in the pudding -- adequate funding and the positive cooperation from all will be an essential enabler. As a cyber and information security company, we watch with interest."

Mark Noctor, VP EMEA at Arxan Technologies, is hopeful. "We are delighted to see a body with the global importance of the WEF addressing the growing sophistication of cyber threats," he told SecurityWeek. "This move by the WEF will help governments and international organizations to work more closely with industry, manufacturers and software providers to create safe environments and eliminate cyber threats."

But there are many who don't believe that WEF actually delivers on its potential. Bono famously described it as 'fat cats in the snow'. It has also been described as 'a mix of pomp and platitudes'. And there are many in the security industry who do not believe the new Center will achieve much.

"This is what happens when you get a bunch of politicians in a room who have no clear understanding on cybersecurity and the threats," comments Joseph Carson, Chief Security Scientist at Thycotic. "When the need to have a Global Centre for Cybersecurity is being discussed at the World Economic Forum it becomes a pointless political debate usually without industry experts' input."

Carson doesn't believe that centralizing the effort against cybercrime will be effective. "Cybersecurity is most effective when we work together collectively but decentralized. Being decentralized in cybersecurity is a strength as it reduces the risk. We have had this discussion for many years in the EU about a European Centre for Cybersecurity though in the EU, it has been important to be working as a collective and at the same time, being decentralized."

Nevertheless, the potential of a WEF-backed global cybersecurity center cannot be denied. "The Global Centre for Cybersecurity could ultimately become an organization that fosters industry change and helps to educate the market and reduce the success cybercriminals are having on a daily basis," said Sam Curry, chief security officer at Cybereason.

The question is whether the WEF can deliver. "It is premature to declare victory," he continued; "and ultimately whether or not this works is dependent upon the collaboration of enterprises and a focused and determined group of leaders. It is clear to me that there will be minimal success if the organization is filled with toothless sinecures for washed up security hacks."


World Economic Forum Publishes Cyber Resiliency Playbook
17.1.2018 securityweek Cyber

World Economic Forum Publishes Playbook for Developing Cyber Resiliency Through Public/Private Collaboration

The World Economic Forum (WEF) has released a playbook for public-private collaboration to improve cyber resiliency ahead of the launch of a new Global Centre for Cybersecurity at the Annual Meeting 2018 taking place on January 23-26 in Davos, Switzerland.

The background to the WEF playbook is the complexity and sometimes conflicting requirements for governments to provide physical and cyber security for their citizens without unnecessarily intruding on personal privacy, and without damaging legitimate multinational businesses. Success, it claims, "depends on collaboration between the public and private sectors."

There are two sections to the playbook: a reference architecture for public-private collaboration, and cyber policy models. There is no attempt to provide a global norm in this process, nor a methodology for implementing individual policy models. It is an intra-country model, and implementation will depend upon each nation's unique values.

Fourteen separate policy topics are included, ranging from research and data sharing, through attribution, encryption, and active defense to cyber-insurance. Five key themes cross these topics: a clearly defined safe harbor for data sharing; legal clarity for the work of white hat researchers; the impact of a symmetrical international policy response; the cost and effect of compliance requirements; and software coding quality standards.

Each policy topic is then analyzed in relation to five areas: security, privacy, economic value, accountability and fairness. It is important at this point to note that the playbook is designed for governments developing public/private cooperation -- civil society issues are not seriously discussed.

For example, the first policy model deals with potential government approaches to zero-day vulnerabilities. The life-cycle of a zero-day comprises unknown existence in code; discovery; and exploitation and mitigation. While secure coding practices can limit the occurrence of zero-days, they "will continue to exist due to human error and other factors." Therefore, there needs to be a government policy towards zero-days.

The two primary options are for governments to "completely exit the zero-day market and avoid research dedicated to finding software vulnerabilities;" or to stockpile zero-days for their own use and/or disclose them to vendors. The implications of the latter option are then discussed. Stockpiling without disclosure increases the likelihood that bad actors might also independently discover the vulnerability. Purchasing zero-days weakens bug bounty programs, since researchers are likely to sell to the highest bidder -- which is likely to be government.

The effect of a zero-day policy is then related to the five areas. Increased exploitation of zero-days will hurt commerce (economy) and result in more breaches (privacy). Increased research and more sharing will be beneficial (security), while the sharing of zero-days applies pressure on vendors to more rapidly mitigate the vulnerabilities (accountability). Fairness is not implicated in the different policy choices.
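
To make the shape of this analysis concrete, the sketch below records the zero-day example as data: one policy topic mapped to an expected effect in each of the five areas. This is purely an illustration of the playbook's analysis model as summarized above; the type names, field names and wording are assumptions, not anything defined in the WEF document.

from dataclasses import dataclass, field
from typing import Dict, List

# The five trade-off areas used throughout the playbook's policy models.
AREAS = ("security", "privacy", "economic value", "accountability", "fairness")

@dataclass
class PolicyTopic:
    name: str
    options: List[str]                                  # candidate policy approaches
    trade_offs: Dict[str, str] = field(default_factory=dict)  # area -> expected effect

# Hypothetical encoding of the zero-day policy topic discussed above.
zero_days = PolicyTopic(
    name="Zero-day vulnerabilities",
    options=["exit the zero-day market entirely",
             "stockpile for own use and/or disclose to vendors"],
    trade_offs={
        "economic value": "increased exploitation hurts commerce",
        "privacy": "increased exploitation results in more breaches",
        "security": "increased research and sharing is beneficial",
        "accountability": "sharing pressures vendors to mitigate faster",
        "fairness": "not implicated by the policy choices",
    },
)

# Sanity check: every one of the five areas has a recorded trade-off.
assert set(zero_days.trade_offs) == set(AREAS)
print(zero_days.name, "->", zero_days.trade_offs["security"])

The same structure could be repeated for each of the 14 topics, which is essentially what the playbook does in prose.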

This basic model of analyzing the policy topic, and then discussing the trade-offs with each of the five security areas (and their interaction) is applied to each of the 14 discussed policy topics. For example, 'active defense' is first defined to range from "technical interactions between a defender and an attacker" to "reciprocally inflicting damage on an alleged adversary".

One obvious danger is the potential for retaliatory escalation. "Responding to a nation-state adversary may trigger significant collateral obligations for a host state of would-be active defenders," warns the playbook. "As such, policy-makers may consider curtailing attempts to attack nation-states. Policy-makers might also consider curtailing the use of active defence techniques against more sophisticated non-state adversaries, as those adversaries may have a greater ability to obfuscate their identity and dangerously escalate a conflict."

The trade-offs of an active defense policy are then related to the same five areas. Expansive use of active defense will increase costs without necessarily having an economic return (economy). It would diminish privacy both for the alleged adversary and for any third-party collateral-damage organizations (privacy). Any actual effect on overall security will likely depend upon its effectiveness as a deterrent (security). Only larger companies, and especially nation-backed industries such as the defense sector, will likely have the means to employ active defense (fairness); but it is only a realistic option with more accurate attribution (accountability).

The intention of the playbook is simple, despite the thoroughness and complexity of its content. "The frameworks and discussions outlined in this document," it concludes, "endeavour to provide the basis for fruitful collaboration between the public and private sectors in securing shared digital spaces."

"We need to recognize cybersecurity as a public good and move beyond the polarizing rhetoric of the current security debate. Only through collective action can we hope to meet the global challenge of cybersecurity," said Daniel Dobrygowski, Project Lead for Cyber Resilience at the World Economic Forum.

While public/private dialog on security will of necessity be led by individual governments, the document provides an excellent overview of many of the security issues faced by commercial security teams. Although it contains no technical detail on security problems, it provides a detailed picture of the different implications from different approaches to the main security issues faced by all companies today.


Proposed Legislation Would Create Office of Cybersecurity at FTC
12.1.2018 securityweek Cyber

Two Democratic senators, Elizabeth Warren, D-Mass., and Mark Warner, D-Va., introduced a bill Wednesday that would provide the Federal Trade Commission (FTC) with punitive powers over the credit reporting industry -- primarily Equifax, TransUnion and Experian -- for poor cybersecurity practices.

The bill is in response to the huge Equifax breach disclosed in September 2017. "Equifax allowed personal data on more than half the adults in the country to get stolen, and its legal liability is so limited that it may end up making money off the breach," said Senator Warren in a Wednesday statement.

If the bill succeeds, it will become the Data Breach Prevention and Compensation Act of 2018. It will create an Office of Cybersecurity at the FTC, "headed", says the bill (PDF), "by a Director, who shall be a career appointee." This Office would be responsible for ensuring that the CRAs conform to the requirements of the legislation, and would have the power to establish new security standards going forward.

The punitive power of the Act comes in the level of the potential fines, beginning with a base penalty of $100 for each consumer who had one piece of personal identifying information (PII) compromised, plus another $50 for each additional piece of PII compromised per consumer. On this basis, were the Act already in force, Equifax would be facing a fine of at least $1.5 billion.

Under current law, say the lawmakers, it is difficult for consumers to get compensation when their personal data is stolen; typical awards range from $1 to $2 per consumer. This bill requires the FTC to use 50% of any penalty collected to compensate consumers.

The maximum penalty is capped at 50% of the credit agency's gross revenue from the previous year. This dwarfs even the EU's General Data Protection Regulation (GDPR) maximum fine, which is set at 4% of global revenue -- and it gets worse: the cap rises to 75% of gross revenue where the offending CRA fails to comply with the FTC's data security standards or fails to notify the agency of a breach in a timely manner.

The bill requires CRAs to notify the FTC within 10 days of a breach -- it doesn't at this stage specify whether that means 10 days from the breach occurring or 10 days from its discovery. Within 30 days of being so notified, the FTC is then required to "commence a civil action to recover a civil penalty in a district court of the United States against the covered consumer reporting agency that was subject to the covered breach."

While 50% of any recovered money is to compensate the victims of the breach, the remaining 50% is to be used for cybersecurity research and inspections by the FTC's new Office of Cybersecurity.
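
Taken together, the bill's penalty mechanics -- the per-consumer base amounts, the revenue cap, and the 50/50 split of recovered money -- can be summarized in a short sketch of the arithmetic as described above. The function name, parameters and example figures below are hypothetical illustrations, not terms taken from the bill's text.

def estimate_penalty(consumers_affected, extra_pii_per_consumer,
                     prior_year_gross_revenue, non_compliant=False):
    # Base penalty: $100 per consumer with one piece of PII compromised,
    # plus $50 for each additional piece of PII per consumer.
    base = consumers_affected * (100 + 50 * extra_pii_per_consumer)
    # Cap: 50% of the previous year's gross revenue, rising to 75% if the
    # CRA failed to meet FTC security standards or to notify in time.
    cap = prior_year_gross_revenue * (0.75 if non_compliant else 0.50)
    penalty = min(base, cap)
    # Half of the recovered money compensates consumers; the other half funds
    # research and inspections by the FTC's new Office of Cybersecurity.
    return {"penalty": penalty,
            "consumer_compensation": penalty * 0.5,
            "ftc_research_and_inspections": penalty * 0.5}

# Example with made-up figures (10 million consumers, one piece of PII each,
# $2 billion in prior-year revenue): the $1 billion base sits exactly at the cap.
print(estimate_penalty(10_000_000, 0, 2_000_000_000))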

"In today's information economy, data is an enormous asset. But if companies like Equifax can't properly safeguard the enormous amounts of highly sensitive data they are collecting and centralizing, then they shouldn't be collecting it in the first place," said Sen. Warner. "This bill will ensure that companies like Equifax -- which gather vast amounts of information on American consumers, often without their knowledge -- are taking appropriate steps to secure data that's central to Americans' identity management and access to credit."

How much traction this bill will receive in the Senate remains to be seen, but it reflects the general dismay at the size of the Equifax breach -- which could have been prevented if patches had been applied. It is not the first Equifax-related legislative proposal, but it is by far the most punitive. In November 2017, New York State Attorney General Eric T. Schneiderman introduced the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act) to improve security specifically within New York State.

SHIELD fines are capped at $250,000, and the disclosure requirement is vague: "The disclosure shall be made in the most expedient time possible and without unreasonable delay, consistent with the legitimate needs of law enforcement..." Put very simply, both proposals are designed to improve the security of their respective 'covered entities' (CRAs are covered in both bills), but SHIELD seeks to do so in a 'business friendly' manner, while the Data Breach Prevention and Compensation Act of 2018 seeks to do so in a 'consumer friendly' manner.