- Security -

Last update 09.10.2017 13:17:23




18.12.18 | WSJ Webpage Defaced to Support PewDiePie | Security | Threatpost
18.12.18 | Automotive Security: It’s More Than Just What’s Under The Hood | Security | Threatpost
18.12.18 | New Extortion Email Threatens to Send a Hitman Unless You Pay 4K | Security | Bleepingcomputer
18.12.18 | Delivering security and continuity for the cities of tomorrow | Security | Net-security
18.12.18 | Automotive Security: It’s More Than Just What’s Under The Hood | Security | Net-security
16.12.18 | Which are the worst passwords for 2018? | Security | Securityaffairs
15.12.18 | Microsoft Launches AI Malware Prediction Competition with $25K Prize | Security | Bleepingcomputer
15.12.18 | 123456 Is the Most Used Password for the 5th Year in a Row | Security | Bleepingcomputer
14.12.18 | Save the Children Federation Duped in $1M Scam | Security | Net-security
14.12.18 | Save the Children Charity Org Scammed for Almost $1 Million | Security | Bleepingcomputer
14.12.18 | Arctic Wolf Acquires Risk Assessment Firm RootSecure | Security | Securityweek
14.12.18 | Grammarly Launches Public Bug Bounty Program | Security | Net-security
13.12.18 | GitLab Launches Public Bug Bounty Program | Security | Securityweek
13.12.18 | Grammarly Launches Public Bug Bounty Program | Security | Securityweek
13.12.18 | Deception technology: Authenticity and why it matters | Security | Net-security
12.12.18 | Claroty Adds New Capabilities to Industrial Security Platform | Security | Securityweek
12.12.18 | Hertz, Clear Partner to Speed Rentals With Biometric Scans | Security | Securityweek
11.12.18 | Mozilla Firefox 64.0 Released - Here's What's New | Security | Bleepingcomputer
11.12.18 | Biometrics: Security Solution or Issue? | Security | Threatpost
11.12.18 | Guide: 5 Steps to Modernize Security in the DevSecOps Era | Security | Net-security
10.12.18 | How can businesses get the most out of pentesting? | Security | Net-security
7.12.18 | Google Launches Cloud Security Command Center in Beta | Security | Securityweek
7.12.18 | DNA Testing Kits & The Security Risks in Digitized DNA | Security | Bleepingcomputer
6.12.18 | Data Exfiltration in Penetration Tests | Security | SANS
4.12.18 | Mozilla to Provide MSI Installers Starting with Firefox 65 | Security | Bleepingcomputer
4.12.18 | Microsoft building Chrome-based browser to replace Edge on Windows 10 | Security | Thehackernews
3.12.18 | Mozilla to Provide MSI Installers Starting with Firefox 65 | Security | Bleepingcomputer
1.12.18 | MITRE evaluates Enterprise security products using the ATT&CK Framework | Security | Securityaffairs
1.12.18 | Mozilla Testing DNS-over-HTTPS in Firefox | Security | Securityweek
1.12.18 | Bing Warns VLC Media Player Site is ‘Suspicious’ in Likely False-Positive Gaffe | Security | Threatpost
30.11.18 | Mitre Uses ATT&CK Framework to Evaluate Enterprise Security Products | Security | Securityweek
30.11.18 | Google Makes Secure LDAP Generally Available | Security | Securityweek
30.11.18 | Mozilla Firefox Expands DNS-over-HTTPS (DoH) Test to Release Channel | Security | Bleepingcomputer
29.11.18 | AWS Security Hub Aggregates Alerts From Third-Party Tools | Security | Securityweek
29.11.18 | Knock-Knock Docker!! Will you let me in? Open API Abuse in Docker Containers | Security | Securityaffairs
29.11.18 | Bing is Warning that the VLC Media Player Site is Unsafe | Security | Bleepingcomputer
27.11.18 | Acceptto Emerges from Stealth with Behavioral Biometric Authentication Platform | Security | Securityweek
27.11.18 | Researchers Use Smart Bulb for Data Exfiltration | Security | Securityweek
27.11.18 | Orkus Exits Stealth Mode With Cloud Security Platform | Security | Securityweek
23.11.18 | ThreatList: One-Third of Firms Say Their Container Security Lags | Security | Threatpost
22.11.18 | Zero-Trust Frameworks: Securing the Digital Transformation | Security | Threatpost
22.11.18 | Mozilla Overhauls Content Blocking Settings in Firefox 65 | Security | Bleepingcomputer
20.11.18 | Microsoft Enhances Windows Defender ATP | Security | PBWCZ.CZ
17.11.18 | Speech Synthesis API Being Restricted in Chrome 71 Due to Abuse | Security | Bleepingcomputer
16.11.18 | Firefox Now Shows Warnings On Sites with Data Breaches | Security | Bleepingcomputer
16.11.18 | GreatHorn Expands Email Security Platform | Security | PBWCZ.CZ
16.11.18 | Firefox Alerts Users When Visiting Breached Sites | Security | PBWCZ.CZ
16.11.18 | Black Friday alert | Security | PBWCZ.CZ
15.11.18 | Why you need to know about Penetration Testing and Compliance Audits? | Security | Thehackernews
15.11.18 | Misconfiguration a Top Security Concern for Containers | Security | PBWCZ.CZ
15.11.18 | Managing the Risk of IT-OT Convergence | Security | Threatpost
13.11.18 | IT threat evolution Q3 2018. Statistics | Analysis, Cyber, Cryptocurrency, Security | PBWCZ.CZ
8.11.18 | World Wide Web Inventor Wants New 'Contract' to Make Web Safe | Security | PBWCZ.CZ
7.11.18 | Psycho-Analytics Could Aid Insider Threat Detection | Security | PBWCZ.CZ
2.11.18 | Google Boosts Account Security With New Tools, Protections | Security | PBWCZ.CZ
2.11.18 | Law Enforcement Faces Dilemma in Assessing Online Threats | Security | PBWCZ.CZ
30.10.18 | Google Launches reCAPTCHA v3 | Security | PBWCZ.CZ
25.10.18 | Firefox 63 Blocks Tracking Cookies | Security | PBWCZ.CZ
25.10.18 | Google Blocks New Ad Fraud Scheme | Security | PBWCZ.CZ
24.10.18 | The Rise of The Virtual Security Officer | Security | PBWCZ.CZ
24.10.18 | Mozilla Offers VPN Service to Firefox Users | Security | PBWCZ.CZ
24.10.18 | Oracle Adds New Security Services to Cloud Platform | Security | PBWCZ.CZ
24.10.18 | Fortinet Tackles Insider Threats with ZoneFox Acquisition | Security | PBWCZ.CZ
23.10.18 | Securing the Vote Against Increasing Threats | Security | PBWCZ.CZ
18.10.18 | Tech Giants Concerned About Australia's Encryption Laws | Security | PBWCZ.CZ
14.10.18 | Purging Long-Forgotten Online Accounts: Worth the Trouble? | Security | PBWCZ.CZ
12.10.18 | Mozilla Delays Distrust of Symantec Certificates | Security | PBWCZ.CZ
10.10.18 | KnowBe4 Brings Artificial Intelligence to Security Awareness Training | Security | PBWCZ.CZ
9.10.18 | How Secure Are Bitcoin Wallets, Really? | Security | PBWCZ.CZ
7.10.18 | Windows 10 October 2018 Update could cause CCleaner to stop working | Security | PBWCZ.CZ
5.10.18 | Hackers Earn $150,000 in Marine Corps Bug Bounty Program | Security | PBWCZ.CZ
5.10.18 | Wickr Announces General Availability of Anti-Censorship Tool | Security | PBWCZ.CZ
28.9.18 | Chronicle Unveils VirusTotal Enterprise | Security | PBWCZ.CZ
25.9.18 | Microsoft Boosts Azure Security With Array of New Tools | Security | PBWCZ.CZ
25.9.18 | Symantec Completes Internal Accounting Investigation | Security | PBWCZ.CZ
24.9.18 | Cloudflare Launches Security Service for Tor Users | Security | PBWCZ.CZ
20.9.18 | New Tool Helps G Suite Admins Uncover Security Threats | Security | PBWCZ.CZ
15.9.18 | Secureworks Launches New Security Maturity Model | Security | PBWCZ.CZ
12.9.18 | OpenSSL 1.1.1 Released With TLS 1.3, Security Improvements | Security | PBWCZ.CZ
10.9.18 | Google Launches Alert Center for G Suite | Security | PBWCZ.CZ

Kaspersky Security Bulletin: Threat Predictions for 2019
23.11.18 Kaspersky Security

Threat Predictions for 2019 (PDF)

There’s nothing more difficult than predicting the future. So, instead of gazing into a crystal ball, the idea here is to make educated guesses based on what has happened recently and where we see trends that might be exploited in the coming months.

Asking the most intelligent people I know, and basing our scenario on APT attacks because they traditionally show the most innovation when it comes to breaking security, here are our main ‘predictions’ of what might happen in the next few months.

No more big APTs
What? How is it possible that in a world where we discover more and more actors every day the first prediction seems to point in the opposite direction?

The reasoning behind this is that the security industry has consistently discovered highly sophisticated government-sponsored operations that took years of preparation. What seems to be a logical reaction to that situation from an attacker’s perspective would be exploring new, even more sophisticated techniques that are much more difficult to discover and to attribute to specific actors.

Indeed, there are many different ways of doing this. The only requirement would be an understanding of the techniques used by the industry for attribution and for identifying similarities between different attacks and the artifacts used in them – something that doesn’t seem to be a big secret. With sufficient resources, a simple solution for an attacker could be having different ongoing sets of activity that are very difficult to relate to the same actor or operation. Well-resourced attackers could start new innovative operations while keeping their old ones alive. Of course, there’s still a good chance of the older operations being discovered, but discovering the new operations would pose a greater challenge.

Instead of creating more sophisticated campaigns, in some cases it appears to be more efficient for some very specific actors who have the capability to do so, to directly target infrastructure and companies where victims can be found, such as ISPs. Sometimes this can be accomplished through regulation, without the need for malware.

Some operations are simply externalized to different groups and companies that use different tools and techniques, making attribution extremely difficult. It’s worth keeping in mind that in the case of government-sponsored operations this ‘centrifugation’ of resources and talent might affect the future of such campaigns. Technical capabilities and tools are owned by the private industry in this scenario, and they are for sale for any customer that, in many cases, doesn’t fully understand the technical details and consequences behind them.

All this suggests that we’re unlikely to discover new highly sophisticated operations – well-resourced attackers are more likely to simply shift to new paradigms.

Networking hardware and IOT
It just seemed logical that at some point every actor would deploy capabilities and tools designed to target networking hardware. Campaigns like VPNFilter were a perfect example of how attackers have already started deploying their malware to create a multipurpose ‘botnet’. In this particular case, even when the malware was extremely widespread, it took some time to detect the attack, which is worrisome considering what might happen in more targeted operations.

Actually, this idea can go even further for well-resourced actors: why not directly target even more elemental infrastructure instead of just focusing on a target organization? We haven’t reached that level of compromise (to our knowledge), but it was clear from past examples (like Regin) how tempting that level of control is for any attacker.

Vulnerabilities in networking hardware allow attackers to follow different directions. They might go for a massive botnet-style compromise and use that network in the future for different goals, or they might approach selected targets for more clandestine attacks. In this second group we might consider ‘malware-less’ attacks, where opening a VPN tunnel to mirror or redirect traffic might provide all the necessary information to an attacker.

All these networking elements might also be part of the mighty IoT, where botnets keep growing at an apparently unstoppable pace. These botnets could be incredibly powerful in the wrong hands when it comes to disrupting critical infrastructure, for instance. This can be abused by well-resourced actors, possibly using a cover group, or in some kind of terror attack.

One example of how these versatile botnets can be used, other than for disruptive attacks, is in short-range frequency hopping for malicious communications, avoiding monitoring tools by bypassing conventional exfiltration channels.

Even though this seems to be a recurrent warning year after year, we should never underestimate IoT botnets – they keep growing stronger.

Public retaliation
One of the biggest questions in terms of diplomacy and geopolitics was how to deal with an active cyberattack. The answer is not simple and depends heavily on how bad and blatant the attack was, among many other considerations. However, it seems that after hacks like that on the Democratic National Committee, things became more serious.

Investigations into recent high-profile attacks, such as the Sony Entertainment Network hack or the attack on the DNC, culminated in lists of suspects being indicted. That results not only in people facing trial but also in a public show of who was behind the attack. This can be used to create a wave of opinion that might become part of an argument for more serious diplomatic consequences.

We have actually seen Russia suffer such consequences as a result of its alleged interference in democratic processes. This might make others rethink future operations of this kind.

However, the fear of something like that happening, or the thought that it might already have happened, was the attackers’ biggest achievement. They can now exploit such fear, uncertainty and doubt in different, more subtle ways – something we saw in notable operations, including that of the Shadowbrokers. We expect more to come.

What will we see in the future? The propaganda waters were probably just being tested by past operations. We believe this has just started and it will be abused in a variety of ways, for instance, in false flag incidents like we saw with Olympic Destroyer, where it’s still not clear what the final objective was and how it might have played out.

Emergence of newcomers
Simplifying somewhat, the APT world seems to be breaking into two groups: the traditional well-resourced most advanced actors (that we predict will vanish) and a group of energetic newcomers who want to get in on the game.

The thing is that the entry barrier has never been so low, with hundreds of very effective tools, re-engineered leaked exploits and frameworks of all kinds publicly available for anyone to use. As an additional advantage, such tools make attribution nearly impossible and can be easily customized if necessary.

There are two regions in the world where such groups are becoming more prevalent: South East Asia and the Middle East. We have observed the rapid progression of groups suspected of being based in these regions, traditionally abusing social engineering for local targets, taking advantage of poorly protected victims and the lack of a security culture. However, as targets increase their defenses, attackers do the same with their offensive capabilities, allowing them to extend their operations to other regions as they improve the technical level of their tools. In this scenario of scripting-based tools we can also find emerging companies providing regional services who, despite OPSEC failures, keep improving their operations.

One interesting aspect worth considering from a more technical angle is how JavaScript post-exploitation tools might find a new lease of life in the short term, given the difficulty of limiting its functionality by an administrator (as opposed to PowerShell), its lack of system logs and its ability to run on older operating systems.

The negative rings
The year of Meltdown/Spectre/AMDFlaws and all the associated vulnerabilities (and those to come) made us rethink where the most dangerous malware actually lives. And even though we have seen almost nothing in the wild abusing vulnerabilities below Ring 0, the mere possibility is truly scary as it would be invisible to almost all the security mechanisms we have.

For instance, in the case of SMM there has at least been a publicly available PoC since 2015. SMM is a CPU feature that would effectively provide remote full access to a computer without even allowing Ring 0 processes to have access to its memory space. That makes us wonder whether the fact that we haven’t found any malware abusing this so far is simply because it is so difficult to detect. Abusing this feature seems to be too good an opportunity to ignore, so we are sure that several groups have been trying to exploit such mechanisms for years, maybe successfully.

We see a similar situation with virtualization/hypervisor malware, or with UEFI malware. We have seen PoCs for both, and HackingTeam even revealed a UEFI persistence module that’s been available since at least 2014, but again no real ITW examples as yet.

Will we ever find these kinds of unicorns? Or haven’t they been exploited yet? The latter possibility seems unlikely.

Your favorite infection vector
In probably the least surprising prediction of this article we would like to say a few words about spear phishing. We believe that the most successful infection vector ever will become even more important in the nearest future. The key to its success remains its ability to spark the curiosity of the victim, and recent massive leaks of data from various social media platforms might help attackers improve this approach.

Data obtained from attacks on social media giants such as Facebook and Instagram, as well as LinkedIn and Twitter, is now available on the market for anyone to buy. In some cases, it is still unclear what kind of data was targeted by the attackers, but it might include private messages or even credentials. This is a treasure trove for social engineers, and could result in, for instance, some attacker using the stolen credentials of some close contact of yours to share something on social media that you already discussed privately, dramatically improving the chances of a successful attack.

This can be combined with traditional scouting techniques where attackers double-check the target to make sure the victim is the right one, minimizing the distribution of malware and its detection. In terms of attachments, it is fairly standard to make sure there is human interaction before firing off any malicious activity, thus avoiding automatic detection systems.

Indeed, there are several initiatives using machine learning to improve phishing’s effectiveness. It’s still unknown what the results would be in a real-life scenario, but what seems clear is that the combination of all these factors will keep spear phishing as a very effective infection vector, especially via social media in the months to come.

Destructive destroyer
Olympic Destroyer was one of the most famous cases of potentially destructive malware during the past year, but many attackers are incorporating such capabilities in their campaigns on a regular basis. Destructive attacks have several advantages for attackers, especially in terms of creating a diversion and cleaning up any logs or evidence after the attack. Or simply as a nasty surprise for the victim.

Some of these destructive attacks have geostrategic objectives related to ongoing conflicts as we have seen in Ukraine, or with political interests like the attacks that affected several oil companies in Saudi Arabia. In some other cases they might be the result of hacktivism, or activity by a proxy group that’s used by a more powerful entity that prefers to stay in the shadows.

Anyway, the key to all these attacks is that they are ‘too good’ not to use. In terms of retaliation for instance, governments might use them as a response ranged somewhere between a diplomatic answer and an act of war, and indeed some governments are experimenting with them. Most of these attacks are planned in advance, which involves an initial stage of reconnaissance and intrusion. We don’t know how many potential victims are already in this situation where everything is ready, just waiting for the trigger to be pulled, or what else the attackers have in their arsenal waiting for the order to attack.

ICS environments and critical infrastructure are especially vulnerable to such attacks, and even though industry and governments have put a lot of effort in over the last few years to improve the situation, things are far from ideal. That’s why we believe that even though such attacks will never be widespread, in the next year we expect to see some occurring, especially in retaliation to political decisions.

Advanced supply chain
This is one of the most worrisome vectors of attack, which has been successfully exploited over the last two years, and it has made everyone think about how many providers they have and how secure they are. Well, there is no easy answer to this kind of attack.

Even though this is a fantastic vector for targeting a whole industry (similar to watering hole attacks) or even a whole country (as seen with NotPetya), it’s not that good when it comes to more targeted attacks as the risk of detection is higher. We have also seen more indiscriminate attempts like injecting malicious code in public repositories for common libraries. The latter technique might be useful in very carefully timed attacks when these libraries are used in a very particular project, with the subsequent removal of the malicious code from the repository.
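One defensive illustration of why the repository-poisoning scenario above is hard to pull off silently: a consumer that pins each dependency by cryptographic hash will notice the swap, because a tampered artifact no longer matches the hash recorded at review time, even if the malicious code is later removed upstream. The sketch below is illustrative Python; the function name and chunk size are our own choices, not taken from any particular package manager.

```python
import hashlib

def verify_artifact(path: str, pinned_sha256: str) -> bool:
    """Compare a downloaded dependency against a SHA-256 hash pinned
    at review time. Returns False if the artifact was altered."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the file in chunks so large artifacts don't load into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == pinned_sha256
```

In practice this is what mechanisms like pip's `--require-hashes` mode or lockfiles with integrity fields automate; the point is simply that the pin is taken when the code is reviewed, not when it is installed.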

Now, can this kind of attack be used in a more targeted way? It appears to be difficult in the case of software because it will leave traces everywhere and the malware is likely to be distributed to several customers. It is more realistic in cases when the provider works exclusively for a specific customer.

What about hardware implants? Are they a real possibility? There has been some recent controversy about that. Even though we saw from Snowden’s leaks how hardware can be manipulated on its way to the customer, this does not appear to be something that most actors can do other than the very powerful ones. And even they will be limited by several factors.

However, in cases where the buyer of a particular order is known, it might be more feasible for an actor to try and manipulate hardware at its origin rather than on its way to the customer.

It’s difficult to imagine how all the technical controls in an industrial assembly line could be circumvented and how such manipulation could be carried out. We don’t want to discard this possibility, but it would probably entail the collaboration of the manufacturer.

All in all, supply chain attacks are an effective infection vector that we will continue to see. In terms of hardware implants we believe it is extremely unlikely to happen and if it does, we will probably never know….

And mobile
This is in every year’s predictions. Nothing groundbreaking is expected, but it’s always interesting to think about the two speeds for this slow wave of infections. It goes without saying that all actors have mobile components in their campaigns; it makes no sense only going for PCs. The reality is that we can find many examples of artifacts for Android, but also a few improvements in terms of attacking iOS.

Even though successfully infecting an iPhone requires concatenating several 0-days, it’s always worth remembering that incredibly well-resourced actors can pay for such technology and use it in critical attacks. Some private companies claim they can access any iPhone they physically possess. Other, less affluent groups can find creative ways to circumvent security on such devices by using, for instance, rogue MDM servers and socially engineering targets into enrolling their devices, giving the attackers the ability to install malicious applications.

It will be interesting to see if the boot code for iOS leaked at the beginning of the year will provide any advantage to the attackers, or if they’ll find new ways of exploiting it.

In any case, we don’t expect any big outbreak when it comes to mobile targeted malware, but we expect to see continuous activity by advanced attackers aimed at finding ways to access their targets’ devices.

The other things
What might attackers be thinking about in more futuristic terms? One of the ideas, especially in the military field, might be to stop using weak error-prone humans and replacing them with something more mechanical. With that in mind, and also thinking of the alleged GRU agents expelled from the Netherlands last April after trying to hack into the OPCW’s Wi-Fi network as an example, what about using drones instead of human agents for short-range hacking?

Or what about backdooring some of the hundreds of cryptocurrency projects for data gathering, or even financial gain?

Use of any digital good for money laundering? What about using in-game purchases and then selling such accounts later in the marketplace?

There are so many possibilities that predictions always fall short of reality. The complexity of the environment cannot be fully understood anymore, raising possibilities for specialist attacks in different areas. How can a stock exchange’s internal inter-banking system be abused for fraud? I have no idea, I don’t even know if such a system exists. This is just one example of how open to the imagination the attackers behind these campaigns are.

We are here to try and anticipate, to understand the attacks we don’t yet understand, and to prevent them from occurring in the future.


Microsoft to Charge for Windows 7 Security Updates
8.9.18 securityweek Security

Microsoft this week revealed plans to offer paid Windows 7 Extended Security Updates (ESU) for three years after traditional support for the operating system will officially end.

Released in 2009, Windows 7 currently powers around 39% of all machines running Microsoft’s Windows platform, but is slowly losing ground to Windows 10 (currently found on over 48% of Windows systems).

Microsoft stopped selling Windows 7 in 2014 (some variants are still available to OEMs) and ended mainstream support for the operating system in early 2015. The company plans to end extended support for Windows 7 on January 14, 2020.

Past that date, organizations will have to pay in order to continue taking advantage of support for the platform.

Paid Windows 7 Extended Security Updates (ESU), Microsoft now says, will be available through January 2023. The tech company will sell the Windows 7 ESU on a per-device basis and plans on increasing the price for it each year.

“Windows 7 ESUs will be available to all Windows 7 Professional and Windows 7 Enterprise customers in Volume Licensing, with a discount to customers with Windows software assurance, Windows 10 Enterprise or Windows 10 Education subscriptions,” Microsoft says.

The software giant also revealed that it will continue to provide support for Office 365 ProPlus on devices with active Windows 7 Extended Security Updates (ESU) through January 2023. This means that all those buying the Windows 7 ESU will continue to run Office 365 ProPlus.

January 2023, which is the end-of-support date for Windows 8.1, is also the end-of-support date for Office 365 ProPlus on that platform version, Microsoft now reveals. Windows Server 2016, on the other hand, will support Office 365 ProPlus until October 2025.

Currently, Microsoft is relying on a semi-annual schedule for Windows 10 and Office 365 ProPlus updates, targeting September and March, and the company will continue using this Windows 10 update cycle.

To make sure customers have enough time to plan for updates within their environments, however, Microsoft is making changes to the support life of Windows 10 updates.

Thus, currently supported feature updates of Windows 10 Enterprise and Education editions (versions 1607, 1703, 1709, and 1803) will be supported for 30 months from their original release date. As for future feature updates, those targeted for a September release will be supported for 30 months, while those targeted for a March release for 18 months.

According to Microsoft, all feature releases of Windows 10 Home, Windows 10 Pro, and Office 365 ProPlus will continue to be supported for 18 months, regardless of whether targeted for release in March or September.
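The lifecycle rules above reduce to simple date arithmetic. The following Python sketch (the helper names and the use of the first of the month are our own simplifications, not Microsoft's API) shows how an administrator might compute an end-of-support date under the stated policy: 30 months for September releases of Enterprise and Education editions, 18 months for everything else.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Advance by whole months; the result is the first of the target month.
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, 1)

def support_end(release: date, edition: str) -> date:
    """End of support per the policy described above: Enterprise/Education
    September releases get 30 months, all other releases get 18 months."""
    long_term = edition in ("Enterprise", "Education") and release.month == 9
    return add_months(release, 30 if long_term else 18)

# A hypothetical September 2018 feature update:
print(support_end(date(2018, 9, 1), "Enterprise"))  # 2021-03-01 (30 months)
print(support_end(date(2018, 9, 1), "Home"))        # 2020-03-01 (18 months)
```

The same function gives 18 months for a March Enterprise release, matching the policy's asymmetry between the two annual update waves.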


Fighting Alert Fatigue With Security Orchestration, Automation and Response
7.9.18 securityweek Security

New research confirms and quantifies two known challenges for security operations teams: they don't have enough staff and would benefit from automated tools.

Demisto's State of SOAR (security orchestration, automation and response) Report, 2018 (PDF) was researched via the ViB community of more than 1.2 million IT practitioners and decision makers. A total of 262 security professionals from 245 companies in a wide range of industry sectors and sizes, mostly in the U.S., took part in the survey. The results show that the two primary and related challenges for SOC and IR staff are not enough time (80.39% of respondents) and too few staff (78.76%) to handle the workload.

“We’ve seen plenty of research that highlights the unending growth in security alerts, a widening cyber security skills gap, and the ensuing fatigue that is heaped upon understaffed security teams," explains Rishi Bhargava, Co-founder of Demisto. "That’s why we conducted this study which allowed us to dig deeper into these issues, their manifestations, as well as possible solutions. Our results produced captivating insights into the state of SOAR in businesses of all sizes.”

"The pattern that stands out starkly from these results," notes the report, "is that the security skills gap continues to be a challenge." The finer detail of these results, however, is less expected: retaining staff is not much easier than finding them (60.1% against 75.2%). Sixty-seven percent of security staff move on to new companies in less than four years, with 26.4% leaving within two years.

This is primarily down to money. Nearly 65% of those who leave their current employment do so because they can earn more elsewhere. Furthermore, asked what is important to infosec employees, 71.26% replied, a 'higher salary'. The often lauded 'company culture' ranked only fifth in importance at 49.43%.

The implication is that smaller companies with smaller budgets hire newcomers, train them and provide the experience that is attractive to larger companies who simply poach experienced security staff with more money. This in turn means that it is the smaller business that is most affected by the overall security skills gap.

It's worth noting, however, that moving on to greener pastures is not the only cause of failing to retain existing staff. As many as 27.2% of security employees leave because of over work and fatigue. This echoes a comment from Jerome Segura at Malwarebytes: "There's a lot of burnout in infosec. It's tough, but that's the reality. If you're in infosec, you're on call 24/7."

According to the report's respondents, their primary concerns -- or pain points -- are that they currently receive too many alerts (cited by 46.4% of respondents; an issue that will be aggravated by staff shortages) and too many false positives within those alerts (cited by 69% of respondents; an issue that is technology-based).

Affecting both of these (but not specifically cited as a pain point) is the number of different tools used by the security team. More than three-quarters of the respondents have to learn how to use more than four different security tools for effective security operations and incident response. "With the number of tools constantly on the rise, high training times and attrition rates truly spell out the gravity of the human capital challenge facing the industry today."

Bhargava explains further: “Security deployment has become fractured with innumerable specialized tools, making it increasingly difficult for security teams to manage alerts across disparate systems and locations, particularly considering the talent shortage present in security today. There is a great opportunity for SOAR tools to help unify these products and processes, using automated response to reduce alert fatigue and direct analyst resources to the alerts which are most likely to cause harm.”

It is Demisto's premise -- the company is itself a SOAR vendor -- that SOAR technology can help alleviate these difficulties. "An important goal of our study was to find and validate linkages between high incident loads, high response times, and the desire for automation." First, the report quantifies the individual workload: more than 12,000 alerts are reviewed each week, and each alert takes more than four days to resolve.

There are simply too many alerts for the security team to handle manually. It is, says the report, "clear that there’s a vicious cycle in effect. Alert volume leads to increased MTTR [mean time to respond] which in turn leads to even more alert volume." Automation as a solution is already in use, with more than half of the respondents automating or seeing the benefit in automating much of the incident response workload.
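The scale of that manual workload is easier to see with a quick back-of-the-envelope calculation from the report's figures. The team size below is hypothetical, since the report quantifies alerts rather than head count:

```python
# Back-of-the-envelope alert workload, using the Demisto report's figures.
ALERTS_PER_WEEK = 12_000   # alerts reviewed weekly (report figure)
TEAM_SIZE = 10             # hypothetical analyst head count, for illustration

alerts_per_analyst = ALERTS_PER_WEEK / TEAM_SIZE
# Seconds of attention per alert if an analyst did nothing else
# during a 40-hour week (5 days x 8 hours x 3600 seconds).
seconds_per_alert = (5 * 8 * 3600) / alerts_per_analyst

print(f"{alerts_per_analyst:.0f} alerts per analyst per week")    # 1200
print(f"~{seconds_per_alert:.0f} seconds of attention per alert") # ~120
```

Even with a generously staffed ten-person team doing nothing but triage, each alert would get roughly two minutes of attention, which is why the report frames automation as a necessity rather than a convenience.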

"Proactively," says the report, "security operations and threat hunting ranked high on the ‘automation candidates’ list, highlighting security teams’ desire for automation to assist them in identifying incipient threats and vulnerabilities. Reactively, incident response, tracking IR metrics, and case management were felt as good candidates for partial or full automation."

For now, SOAR is still an emergent technology. "A sign of SOAR’s emergent nature is highlighted by around 20% of our responders being unsure about where to include SOAR in their budgets," admits the report. "A growing acknowledgement of SOAR in security budgets will come with increased awareness and continued verifiable benefits in existing SOAR deployments."

Demisto believes, however, that SOAR has the potential to improve proactive threat hunting, standardize incident processes, improve investigations, accelerate and scale incident response, simplify security operations and maintenance, and generally fight the alert fatigue that comes with too few staff responding to too many alerts.

Cupertino, Calif.-based Demisto raised $20 million in a Series B funding round in February 2017, bringing the total raised to $26 million. In May 2018, Gartner included Demisto in its report on 'Cool Vendors in Security Operations and Vulnerability Management'.


Mozilla Appoints New Policy, Security Chief
6.9.18 securityweek Security

Mozilla on Tuesday announced that Alan Davidson has been named the organization’s new Vice President of Global Policy, Trust and Security.

According to Mozilla Chief Operating Officer Denelle Dixon, Davidson will work with her on scaling and reinforcing the organization’s “policy, trust and security capabilities and impact.”

His responsibilities will also include leading Mozilla’s public policy work on promoting an open and “healthy” Internet, and supervising a security and trust team whose focus is on promoting “innovative privacy and security features.”

“For over 15 years, Mozilla has been a driving force for a free and open Internet, building open source products with industry-leading privacy and security features. I am thrilled to be joining an organization so committed to putting the user first, and to making technology a force for good in people’s lives,” said Davidson.

Prior to joining Mozilla, Davidson worked for the U.S. Department of Commerce, the New America think tank, and Google. At Google, he helped launch the tech giant’s Washington D.C. office and led the company’s public policy and government relations efforts in the Americas.

“Alan is not new to Mozilla,” Dixon said. “He was a Mozilla Fellow for a year in 2017-18. During his tenure with us, Alan worked on advancing policies and practices to support the nascent field of public interest technologists — the next generation of leaders with expertise in technology and public policy who we need to guide our society through coming challenges such as encryption, autonomous vehicles, blockchain, cybersecurity, and more.”

Mozilla last week laid out plans to add various anti-tracking features to Firefox in an effort to protect users and help them choose what information they share with the websites they visit.

The new features include a mechanism designed to block trackers that slow down page loads, stripping cookies and blocking storage access from third-party tracking content, and blocking trackers that fingerprint users and sites that silently mine cryptocurrencies. Some of these new features are already present in Firefox Nightly and are expected to become available in the stable release of the web browser in the near future.


Uber Announces Ramped Up Passenger Security
6.9.18 securityweek Security

Uber chief Dara Khosrowshahi said on Wednesday the smartphone-summoned ride service is reinforcing safeguards for passengers and their personal information.

Features to be added to the app in the coming months include "Ride Check," which uses location tracking already built into the service to detect when cars have stopped unexpectedly.

If a crash is suspected, the driver and passenger will receive a prompt on their phones to order a courtesy ride or use the in-app emergency call button introduced earlier this year.

"This technology can also flag trip irregularities beyond crashes that might, in some rare cases, indicate an increased safety risk," Khosrowshahi said in a blog post.

"For example, if there is a long, unexpected stop during a trip, both the rider and the driver will receive a Ride Check notification to ask if everything is OK."

Uber, which operates in 65 countries, has disrupted transport in many locations despite regulatory hurdles and resistance from taxi operators.

The company has been touting a safety-first message amid plans for an initial public offering of shares late next year.

Khosrowshahi said the service will begin leaving pick-up and drop-off addresses out of drivers' trip history logs, showing only general areas to avoid creating databases of sensitive locations such as home addresses.

The service already lets drivers and passengers waiting to be picked up communicate through the app without revealing their phone numbers. People can also request pick-ups at intersections instead of specific street addresses.

"Uber has a responsibility to help keep people safe, and it's one we take seriously," Khosrowshahi said.

"We want you to have peace of mind every time you use Uber, and hope these features make it clear that we've got your back."


Arjen Kamphuis, the Dutch associate of Julian Assange, went missing in Norway

4.9.18 securityaffairs Security

Julian Assange associate and author of “Information Security for Journalists” Arjen Kamphuis has disappeared, and Norwegian police are working on the case.
Media agencies worldwide are reporting the strange disappearance of Arjen Kamphuis, the Julian Assange associate. The news was confirmed by WikiLeaks on Sunday; the man has been missing since August 20, when he left his hotel in the Norwegian town of Bodø.

WikiLeaks (@wikileaks), September 1, 2018:
".@JulianAssange associate and author of 'Information Security for Journalists' @ArjenKamphuis has disappeared according to friends (@ncilla) and colleagues. Last seen in Bodø, #Norway, 11 days ago on August 20."
According to WikiLeaks, Kamphuis had bought a ticket for a flight departing on August 22 from Trondheim, which is far from Bodø.

His friends believe he disappeared either in Bodø, in Trondheim, or somewhere on the way between the two.

WikiLeaks (@wikileaks), September 2, 2018:
"Update on the strange disappearance of @ArjenKamphuis. Arjen left his hotel in Bodø on August 20. He had a ticket flying out of Trondheim on August 22. The train between the two takes ~10 hours, suggesting that he disappeared within hours in Bodø, Trondheim or on the train."
“He is 47 years old, 1.78 meters tall and has a normal posture. He was usually dressed in black and carrying his black backpack. He is an avid hiker,” says a website set up to gather information on the missing person, as reported by the German website dw.com.
At the time of writing, there have been two unconfirmed sightings, one in Alesund, Norway, and the other in Ribe, Denmark.

The Norwegian authorities opened an investigation into the case on Sunday.

“We have started an investigation,” police spokesman Tommy Bech told the news agency AFP, adding that the police “would not speculate about what may have happened to him.”

Ancilla (@ncilla), September 3, 2018:
"Hi everyone, small update about #FindArjen; The Norwegian police is working hard on the case now. We are keeping all options open, and hoping he will soon be found🤞"
According to the Norwegian Verdens Gang tabloid newspaper, the Norwegian authorities cannot access location data collected by Kamphuis’s mobile phone until he is officially reported missing in the Netherlands.


Oath Pays Over $1 Million in Bug Bounties
24.8.18 securityweek Security

As part of its unified bug bounty program, online publishing giant Oath has paid over $1 million in rewards for verified bugs, the company announced this week.

In April, Oath paid more than $400,000 in bug bounties during a one-day HackerOne event in San Francisco, where 40 white hat hackers were invited to find bugs in the company’s portfolio of brands and online services, including Tumblr, Yahoo, Verizon Digital Media Services and AOL.

The event also represented an opportunity for the company to formally introduce its unified bug bounty program, which brought together the programs that were previously divided across AOL, Yahoo, Tumblr and Verizon Digital Media Services (VDMS).

Only two months later, the program had already surpassed $1 million in payouts for verified bugs, the media and tech company says.

“This scale represents a significant decrease in risk and a considerable reduction of our attack surface. Every bug found and closed is a bug that cannot be exploited by our adversaries,” says Oath CISO Chris Nims.

Nims also points out that, following feedback received from participants, the company made a series of changes to its program policy. It is now willing to hand out rewards for more types of vulnerabilities, although SQLi, RCE and XXE/XMLi flaws remain a priority.

Oath also published the payout table to increase the transparency of the program.

Additionally, the media giant has added EdgeCast to the bug bounty program, by opening the VDMS-EdgeCast-Partners and VDMS-EdgeCast-Customers private programs, which were previously operated separately, to the unified program.

Oath also plans on defining a structured scope for the suite of brands included in the unified bug bounty program. The purpose of this is to “help separate and define bugs for different assets.”

“The security landscape changes constantly, and we hope these updates to the bug bounty program will keep both Paranoids and security researchers alike more adept to detect threats before they cause damage to our community,” Nims concludes.


Code of App Security Tool Posted to GitHub
21.8.18 securityweek Security

The code of DexGuard, software designed to secure Android applications and software development kits (SDKs), was removed from GitHub last week after being illegally posted on the platform.

The tool is developed by Guardsquare, a company that specializes in hardening Android and iOS applications against both on-device and off-device attacks, and is designed to protect Android applications and SDKs against reverse engineering and hacking.

The DexGuard software is built on top of ProGuard, a popular optimizer for Java and Android that Guardsquare distributes under the terms of the GNU General Public License (GPL), version 2. Unlike ProGuard, however, DexGuard is being distributed under a commercial license.

In the DMCA takedown notice published on GitHub, Guardsquare reveals that the DexGuard code posted on the Microsoft-owned code platform was illegally obtained from one of their customers.

“The listed folders (see below) contain an older version of our commercial obfuscation software (DexGuard) for Android applications. The folder is part of a larger code base that was stolen from one of our former customers,” the notice reads.

The leaked code was quickly removed from the code hosting platform, but it did not take long for it to appear in other repositories as well. In fact, Guardsquare said it discovered nearly 200 forks of the infringing repository and demanded that all of them be taken down.

HackedTeam, the account that first published the stolen code, also maintains repositories containing the leaked RCSAndroid (Remote Control System Android) malware suite.

The spyware was attributed several years ago to the Italy-based Hacking Team, a company engaged in the development and distribution of surveillance technology to governments worldwide. Earlier this year, Intezer discovered a new backdoor based on the RCS surveillance tool.


ESET Launches New Enterprise Security Tools
17.8.18 securityweek Security

ESET on Thursday announced the general availability of a new line of enterprise security solutions that include endpoint detection and response (EDR), forensic investigation, threat monitoring, sandbox, and management tools.

The new EDR tool is ESET Enterprise Inspector, which provides real-time data from the cybersecurity firm’s endpoint security platform. The product is fully customizable and ESET claims it offers “vastly more visibility for complete prevention, detection and response against all types of cyber threats.”

The new enterprise solutions also include ESET Threat Hunting, an on-demand forensic investigation tool that provides details on alarms and events, and ESET Threat Monitoring, which constantly monitors all Enterprise Inspector data for threats.

Enterprise Inspector is also complemented by ESET Dynamic Threat Defense, a cloud sandbox designed for a quick analysis of potential threats.

ESET also announced the availability of Security Management Center, a successor of Remote Administrator that provides network visibility, security management and reporting capabilities from a central console.

“We understand that global enterprises require cybersecurity solutions tailored specifically for their business as we have cooperated with a number of them to create our all-new suite of security solutions,” said Juraj Malcho, Chief Technology Officer at ESET. “We believe that any enterprise should be able to manage and customize their security solutions with ease, and we are proud that our new lineup reduces complexity and integrates seamlessly into their network.”

The new solutions were first demoed at the RSA Conference in May and they are now available to organizations in the United States, Canada, Czech Republic, Slovakia and the Netherlands.


New G Suite Alerts Provide Visibility Into Suspicious User Activity
9.8.18 securityweek Security

After bringing alerts on state-sponsored attacks to G Suite last week, Google is now also providing administrators with increased visibility into user behavior to help identify suspicious activity.

Courtesy of newly introduced reports, G Suite administrators can keep an eye on account actions that seem suspicious and can also choose to receive alerts when critical actions are performed.

Admins can set alerts for password changes, and can also receive warnings when users enable or disable two-step verification or when they change account recovery information such as phone number, security questions, and recovery email.

By providing admins with visibility into these actions, Google aims at making it easier to identify suspicious account behavior and detect when user accounts may have been compromised.

Should an admin notice that a user has changed both the password and the password recovery info, which could be a sign that the account has been hijacked, they can use the reports to check the time and IP address of the change and determine whether it indeed seems suspicious.

Based on the findings, the G Suite administrator could then take appropriate action to mitigate the issue and restore the user account, such as resetting the password and disabling two-step verification.

Admins can also use the new reports to gain visibility into an organization's security initiatives, such as monitoring a domain-wide push to increase the adoption of two-step verification.

The reports are available in the Admin console under Reports > Audit > User Accounts.
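For admins who prefer scripting over clicking through the console, the same user-account audit events are also exposed through the Google Admin SDK Reports API. The snippet below is only a sketch that builds the request URL; the endpoint shape and the `user_accounts` application name reflect my understanding of the Reports API and should be verified against Google's current documentation (OAuth authentication is omitted entirely):

```python
from urllib.parse import urlencode

# Base endpoint of the Admin SDK Reports API (verify against current docs).
BASE = "https://admin.googleapis.com/admin/reports/v1"

def user_accounts_activity_url(user_key: str = "all", max_results: int = 100) -> str:
    """Build a Reports API URL for user-account audit events such as
    password changes, 2-step verification changes and recovery-info edits.
    An authorized GET request to this URL returns the activity list."""
    query = urlencode({"maxResults": max_results})
    return f"{BASE}/activity/users/{user_key}/applications/user_accounts?{query}"

print(user_accounts_activity_url())
```

Polling an endpoint like this from a script would let a security team feed these account events into their own alerting pipeline rather than checking the console manually.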

The new capabilities are set to gradually roll out to all G Suite editions and should become available to all customers within the next two weeks.

“G Suite admins have an important role in protecting their users’ accounts and ensuring their organization’s security. To succeed, they need visibility into user account actions. That’s why we’re adding reports in the G Suite Admin console that surface more information on user account activity,” Google notes.


NERC Names Bill Lawrence as VP, Chief Security Officer
8.8.18 securityweek Security

North American Electric Reliability Corporation (NERC) on Tuesday announced that Bill Lawrence has been named vice president and chief security officer (CSO), and will officially step into the lead security role on August 16, 2018.

In his new role, Lawrence will be tasked with heading NERC's security programs executed through the Electricity Information Sharing and Analysis Center (E-ISAC), where he currently serves as senior director. He will also be responsible for directing security risk assessments and mitigation initiatives to protect critical electricity infrastructure across North America, the regulatory authority said.

ICS Cyber Security Conference

As VP and CSO, Lawrence will also lead coordination efforts with government agencies and stakeholders on cyber and physical security matters, including analysis, response and sharing of critical sector information, NERC said.

Lawrence joined NERC in July 2012 and has directed the development of NERC’s grid security exercise, GridEx.

A not-for-profit international regulatory authority formed to reduce risks to the reliability and security of the grid, NERC's jurisdiction includes owners and operators that serve more than 334 million people.

Lawrence is a graduate of the U.S. Naval Academy with a bachelor’s degree in Computer Science, and flew F-14 Tomcats and F/A-18F Super Hornets for the U.S. Navy prior to joining NERC. He holds a master’s degree in International Relations from Auburn Montgomery and a master’s degree in Military Operational Art and Science from the Air Command and Staff College.

NERC is subject to oversight by the Federal Energy Regulatory Commission (FERC) and governmental authorities in Canada.


Enterprises: Someone on Your Security Team is Likely a Grey Hat Hacker
8.8.18 securityweek Security

Companies Should Not Dismiss a Bit of Grey Hatting by Staff as Just a Form of Letting Off Steam

The cost of cybercrime is normally described as direct costs: the cost of remediation, forensic support, legal costs and compliance fines, etcetera. A new survey has sought to take a slightly different approach, looking at the organizational costs associated with cybercriminal activity.

Sponsored by Malwarebytes, Osterman Research surveyed 900 security professionals during May and June 2018 across five countries: the United States (200), UK (175), Germany (175), Australia (175), and Singapore (175). All respondents were employed either managing or working on cybersecurity issues in an organization of between 200 and 10,000 employees.

The survey (PDF) relates staff salaries, security budgets and remediation costs, and concludes that the average firm employing 2,500 staff in the U.S. can expect to spend more than $2 million per year on cybersecurity-related costs. The amount is lower in the other surveyed countries, but still close to, or above, $1 million per year. Interestingly, the survey took the unusual step of examining whether there is any correlation between the number of grey hats employed by a firm and the overall cost of cybersecurity.

The basic findings are much as we would expect, and have been confirmed by numerous other research surveys: most companies have been breached; phishing is the most common attack vector; mid-market companies are attacked more frequently than small companies and as frequently as large companies; and attacks occur with alarming frequency.

The most surprising revelation from this survey is the number of grey hats working within organizations, and black hats that have been employed by organizations. Grey hats are defined as computer security experts who may sometimes violate laws or typical ethical standards, but do not have the full malicious intent associated with a full-time black hat hacker.

Overall, the 900 respondents believe that 4.6% of their colleagues are grey hats -- or, as the report puts it, full-time security professionals who are black hats on the side. This varies by country: 3.4% in Germany, Australia and Singapore, 5.1% in the U.S., and as much as 7.9% in the UK.

Motivations provided by the respondents include black hat activity being more lucrative (63%), the challenge (50%), retaliation against an employer (40%), philosophical (39%), and, well, it's not really wrong, is it (34%)?

The extent of the income differential between a white hat employee and a black hat hacker is confirmed in a separate report from Bromium, published in April 2018: "High-earning cybercriminals can make $166,000+ per month; Middle-earners can make $75,000+ per month; Low-earners can make $3,500+ per month."

According to the Malwarebytes survey, the highest average starting salary for security professionals (in the U.S.) is $65,578, or just $5,464 per month (compared to $75,000 per month for middle-earning black hats). The difference is far greater in the UK, where the average starting salary for security professionals is less than $3,000 per month.

"It's interesting that despite the skills shortage, when companies hire new security staff, they generally don't pay them very much," Jerome Segura, lead malware intelligence analyst at Malwarebytes, told SecurityWeek. "There's kind of a contrast here, where companies and governments claim it's difficult to find the right people -- but when they do hire people they don't always pay them accordingly."

There appears to be an inevitable conclusion when correlating figures between the U.S. and the UK. Not only do U.S. companies pay their security staff much more than UK companies, they also have a considerably higher security budget ($1,573,197 in the U.S. compared to $350,157 in the UK). Can it be simply coincidence that the UK then has a higher percentage of grey hats within its companies, and that the cost of remediation is proportionately higher (14.7% of the security budget in the U.S., and 17.0% in the UK)?
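Put in absolute terms, the survey's percentages translate into the following remediation spend. This is a straightforward calculation on the reported figures; the dollar amounts it produces are derived, not quoted from the report:

```python
# Remediation spend derived from the survey's reported budget figures.
budgets = {"US": 1_573_197, "UK": 350_157}       # annual security budgets ($)
remediation_share = {"US": 0.147, "UK": 0.170}   # remediation as a fraction of budget

for country, budget in budgets.items():
    spend = budget * remediation_share[country]
    print(f"{country}: ~${spend:,.0f} of ${budget:,} budget goes to remediation")
```

So the UK firm spends roughly a quarter of what its U.S. counterpart does on remediation in dollar terms, yet gives up a larger slice of an already much smaller budget.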

It makes sense that remediation would take up a higher percentage of a smaller budget -- and it is tempting to think that the higher rewards of black-hattery would be attractive to lowly paid British staff. The U.S. government believes it has found an example in Marcus Hutchins, the British researcher who found and triggered the 'kill-switch' in WannaCry. That was pure white hat behavior -- but Hutchins was later arrested in the U.S. and accused of involvement in making and distributing the Kronos banking malware.

"Hutchins has many who support him," commented Segura, "and many who don't. But given the surprising number of employed white hats who are considered by their peers to be grey hats, it will be interesting to see how this turns out."

Segura accepts that comparatively low pay in the industry could be a partial cause for the surprisingly high number of grey hats working in infosec. He points out that the highest percentage of grey hats appear to work for mid-size companies that cannot afford the highest salaries, and which predominate in the UK. But he does not believe that finance is the only motivating factor. "There is a tricky line in the security profession," he told SecurityWeek. "Some people are pure hackers in the original non-malevolent sense, and they like to poke around to understand things better -- even if it is, strictly speaking, illegal. It also helps the job -- by peeking behind the curtain you get a better understanding of how the criminals operate and you can better defend against them."

But there's more. "Don't forget the social issues," he added. "Techies can be socially awkward and have difficulty in fitting into a corporate structure. The nerd in his bedroom is a bit of a cliche, but there is some truth to it. Working in a business corporate environment is not for everybody. And in infosec there is a lot of pressure. You can't fit the work into 9-to-5, five days a week -- so people work up to 80 hours or more per week without getting recompensed for it. That's a lot of mental pressure -- there's a lot of burnout in infosec. It's tough, but that's the reality. If you're in infosec, you're on call 24/7."

It would be wrong for companies to dismiss a bit of grey hatting by staff as just a form of letting off steam -- that could prove disastrous. But at the same time, the onus is on the employer to find a solution. Companies probably cannot compete with black hats financially -- but they should be as inclusive and supportive as possible given the pressures of working in infosec.


New Law May Force Small Businesses to Reveal Data Practices
8.8.18 securityweek Security

NEW YORK (AP) — A Rhode Island software company that sells primarily to businesses is nonetheless making sure it complies with a strict California law about consumers' privacy.

AVTECH Software is preparing for what some say is the wave of the future: laws requiring businesses to be upfront with customers about how they use personal information. California has already passed a law requiring businesses to disclose what they do with people's personal information and giving consumers more control over how their data is used — even the right to have it deleted from companies' computers.

Privacy rights have gotten more attention since news earlier this year that the data firm Cambridge Analytica improperly accessed Facebook user information. New regulations also took effect in Europe.

For AVTECH, which makes software for managing building environmental conditions, preparing now makes sense not only to lay the groundwork for future expansion, but to reassure customers increasingly uneasy about what happens to their personal information.

"People will look at who they're dealing with and who they're making purchases from," says Russell Benoit, marketing manager for the Warren, Rhode Island-based company.

Aware that California was likely to enact a data law, AVTECH began reviewing how it handles customer information last year. Although most of the company's customers are businesses, it expects it will increase its sales to consumers.

While it may yet face legal challenges, the California Consumer Privacy Act is set to take effect Jan. 1, 2020. It covers companies that conduct business in California and that fit one of three categories: Those with revenue above $25 million; those that collect or receive the personal information of 50,000 or more California consumers, households or electronic devices; and those who get at least half their revenue from selling personal information.

Although many small businesses may be exempt, those subject to the law will have to ensure their systems and websites can comply with consumer inquiries and requests. That may be an added cost of thousands of dollars for small companies that don't have in-house technology staffers and need software and consulting help.

Under California's law, consumers have the right to know what personal information companies collect from them, why it's collected and who the businesses share, transfer or sell it to. That information includes names, addresses, email addresses, browsing histories, purchasing histories, professional or employment information, educational records and information about travel from GPS apps and programs. Companies must give consumers at least two ways to find out their information, including a toll-free phone number and an online form, and companies must also give consumers a copy of the information they've collected.

Consumers also have the right to have their information deleted from companies' computer systems, and to opt out of having the information sold or shared.

The law was modeled on the European Union's General Data Protection Regulation, which took effect May 25. The California Legislature passed its law to prevent a more stringent proposed law from being placed on the November election ballot.

Frank Samson hopes the California law will help prevent what he sees as troubling marketing tactics by some in his industry, taking care of senior citizens. When people inquire about senior care companies online, it's sometimes on sites run by brokers rather than care providers themselves.

"It may be in the fine print, or it may not be: We're going to be taking your info and sending it out to a bunch of people," says Samson, founder of Petaluma, California-based Senior Care Authority.

That steers many would-be clients to just a handful of companies, he says, and can mean seniors and families get bombarded with calls while dealing with stressful situations.

But many unknowns remain about the California law. The state attorney general's office must write regulations to accompany several provisions. There are inconsistencies between different sections of the law, and the Legislature would need to correct them, says Mark Brennan, an attorney with Hogan Lovells in Washington, D.C., who specializes in technology and consumer protection laws. Questions about the law might need to be litigated, including whether California can force businesses based in other states to comply, Brennan says. There are similar questions about the European GDPR.

In the meantime, small business owners who want to start figuring out if they're likely to be subject to the California law and GDPR can talk to attorneys and technology consultants who deal with privacy rights. Brennan suggests companies contact professional and industry organizations that are gathering information about the laws and how to comply.

Some small businesses may benefit, such as those developing software tied to the law. Among other things, the software is designed to allow companies and customers to see what information has been gathered, who has access to it and who it has been shared with.

The software, expected to stay free for consumers, could cost companies thousands of dollars a year depending on their size, says Andy Sambandam, CEO of Clarip, one of the software makers. But, he says, "over time, the price is going to come down."

And other states are expected to adopt similar laws.

"This is the direction the country is going in," says Campbell Hutcheson, chief compliance officer with Datto, an information technology firm.


Mozilla to Researchers: Stay Away From User Data and We Won’t Sue
6.8.18 securityweek  Security

Security researchers looking to find bugs in Firefox should not worry about Mozilla suing them, the Internet organization says. That is, of course, as long as they don’t mess with user data.

Mozilla, which has had a security bug bounty program for over a decade, is discontented with how legal issues have been interfering with the bug hunting process, and has decided to change its bug bounty program policies to mitigate that.

Because legal protections afforded to those participating in bounty programs failed to evolve, security researchers are often at risk, and the organization is determined to offer a safe harbor to those researchers seeking bugs in its web browser.

According to the Internet organization, bug bounty participants could end up punished for their activities under the Computer Fraud and Abuse Act (CFAA), the anti-hacking law that criminalizes unauthorized access to computer systems.

“We often hear of researchers who are concerned that companies or governments may take legal actions against them for their legitimate security research. […] The policy changes we are making today are intended to create greater clarity for our own bounty program and to remove this legal risk for researchers participating in good faith,” Mozilla says.

For that, the browser maker is making two changes to its policy. On the one hand, the organization has clarified what is in scope for its bug bounty program, while on the other it has reassured researchers it won’t take legal action against them if they don’t break the rules.

Now, Mozilla makes it clear that participants in its bug bounty program “should not access, modify, delete, or store our users’ data.” The organization also says that it “will not threaten or bring any legal action against anyone who makes a good faith effort to comply with our bug bounty program.”

Basically, the browser maker says it won’t sue researchers under any law (the DMCA and CFAA included) or under its applicable Terms of Service and Acceptable Use Policy for their research performed as part of the bug bounty program.

“We consider that security research to be ‘authorized’ under the CFAA,” Mozilla says.

These changes, which are available in full in the General Eligibility and Safe Harbor sections of the organization’s main bounty page, should help researchers know what to expect from Mozilla.


Do Businesses Know When They’re Using Unethical Data?
5.8.18 securityweek Security

Data breaches are costly for the businesses that experience them; the stolen data fuels black markets and is sometimes offered to companies as legitimate data.
Data breaches are extraordinarily costly for businesses that experience them, both concerning reputational damage and money spent to repair the issues associated with those fiascos. And, on the consumer side of things, the scary thing is hackers don’t just steal data for notoriety. They do it to profit, typically by selling the snatched details online.

But, then, are other businesses aware of times when the data they just bought might have been stolen instead of legally obtained?

People Can Access Most of the Relevant Black Market Sites on Standard Browsers
There was a time when venturing into the world of the online black market typically meant downloading encryption software that hid the identity of users. Today, however, many black market transactions happen on the “open” web, so it’s possible to access the respective sites via browsers like Firefox and Chrome without downloading special software first.

That means business representatives can come across stolen data even when browsing the internet normally. However, the kind of information advertised on the open web should be enough to raise eyebrows by itself. It often contains credit card information or sensitive medical details, not merely names, email addresses or phone numbers.

Companies can reduce the chances of unknowingly benefiting from stolen data by not proceeding with purchases if they contain private, not readily obtainable details.

Illegitimate Sellers Avoid Giving Payment Details
Even when people seek to profit by peddling stolen data, their desire to make money typically isn’t stronger than their need to remain anonymous. Most criminals who deal with data from illegal sources don’t reveal their names even when seeking payment. They’ll often request money through means that allow keeping their identities secret, such as Bitcoin.

Less Information, More Suspicion
If companies encounter data sellers that stay very secretive about how they get their data and whether it is in compliance with data protection and sharing standards, those are red flags.

However, even when data providers do list information about how they obtain data, it’s a good idea to validate the data on your own. For example, if you get calling data from a third-party provider, you should always check it against current Do Not Call lists.
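A minimal sketch of that validation step, assuming the purchased data arrives as a list of phone-number strings (the function and variable names here are hypothetical, and real numbers should be normalized to one format before comparison):

```python
def filter_callable(purchased_numbers, do_not_call):
    """Return only the numbers that do not appear on the Do Not Call registry.

    purchased_numbers: phone-number strings from a third-party data provider
    do_not_call: numbers taken from a current Do Not Call registry export
    """
    dnc = {n.strip() for n in do_not_call}
    return [n for n in purchased_numbers if n.strip() not in dnc]

# Example: one of the purchased numbers appears on the registry
purchased = ["555-0100", "555-0101", "555-0102"]
registry = {"555-0101", "555-0199"}
print(filter_callable(purchased, registry))  # ['555-0100', '555-0102']
```

Running the same kind of cross-check on every refresh of the registry, not just at purchase time, keeps the calling list compliant as the registry changes.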

Dark Web Monitoring Services Exist
As mentioned above, stolen data frequently works its way through the open web rather than the dark web. However, it’s still advisable for companies to utilize monitoring services that search the dark web for stolen data. The market for such information is lucrative, and some clients pay as much as $150,000 annually for such screening measures. If businesses provide data that comes up as originating from the dark web, that’s a strong indicator that it came from unethical sources.


Do Legitimate Companies Create the Demand for Stolen Data?
It’s difficult to quantify how many reputable companies might be purchasing stolen data. If they do it knowingly, such a practice breaks the law. And, even if it happens without their knowledge, that’s still a poor reflection on those responsible. It means they didn’t carefully check data sources and sellers before going through with a purchase.

Unfortunately, analysts believe it happens frequently. After data breaches occur, some of the affected companies discover their data being sold online and buy it back. When hackers realize even those who initially had the data seized will pay for it, they realize there’s a demand for their criminal actions.

After suffering data breaches, some companies even ask their own employees to find stolen data and buy it back.

Most use intermediary parties, though representatives at major companies, including PayPal, acknowledge that this process of compensating hackers for the data they took occurs regularly. They say it’s part of the various actions that happen to protect customers — or to prevent them from knowing breaches happened at all.

If companies can find and recover their stolen data quickly enough, customers might never realize hackers had their details. That’s especially likely, since affected parties often don’t hear about breaches until months after companies do, giving those entities ample time to locate data and offer hackers a price for it.

Plus, it’s important to remember that companies pay tens of thousands of dollars to recover their data after ransomware attacks, too.

Should Businesses Bear the Blame?
When companies buy data that’s new to them, they should engage in the preventative measures above to verify its sources and check that it’s not stolen. Also, although businesses justify buying compromised data back from hackers, they have to remember that by doing so, they are stimulating demand — and that makes them partially to blame.

Instead of spending money to retrieve data that hackers take, those dollars would be better spent cracking down on the vulnerabilities that allow breaches to happen so frequently.


Mozilla Reinforces Commitment to Distrust Symantec Certificates
1.8.18 securityweek Security 

Mozilla this week reaffirmed its commitment to distrust all Symantec certificates starting in late October 2018, when Firefox 63 is set to be released to the stable channel.

The browser maker had decided to remove trust in TLS/SSL certificates issued by the Certification Authority (CA) run by Symantec after a series of problems emerged regarding the wrongful issuance of such certificates.

Despite being one of the oldest and largest CAs, Symantec sold its certificate business to DigiCert after Internet companies, including Google and Mozilla, revealed plans to gradually remove trust in its certificates; DigiCert has said it won’t repeat Symantec’s mistakes.

The first step Mozilla took was to warn site owners about Symantec certificates issued before June 1, 2016, and encourage them to replace their TLS certificates.

Starting with Firefox 60, users see a warning when the browser encounters websites using certificates issued before June 1, 2016 that chain up to a Symantec root certificate.
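The staged distrust logic described here can be sketched as a pair of checks (a simplified illustration: the root names below are placeholders, and Firefox’s real implementation works against its built-in certificate database rather than string matching):

```python
from datetime import date

# Hypothetical identifiers for distrusted roots; Firefox maintains the
# actual list of Symantec-operated roots in its root store.
SYMANTEC_ROOTS = {"VeriSign Class 3", "GeoTrust Global CA", "thawte Primary"}
CUTOFF = date(2016, 6, 1)  # certificates issued before this date were warned on first

def firefox60_warns(chain_root, not_before):
    """Firefox 60 behaviour: warn only if the certificate chains to a
    Symantec root AND was issued before June 1, 2016."""
    return chain_root in SYMANTEC_ROOTS and not_before < CUTOFF

def firefox63_distrusts(chain_root):
    """Firefox 63 behaviour: distrust any certificate chaining to a
    Symantec root, regardless of issuance date."""
    return chain_root in SYMANTEC_ROOTS
```

The Firefox 60 check narrows the warning to older certificates; the Firefox 63 check drops the date condition entirely, which is what makes the second phase so much broader.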

According to Mozilla, less than 0.15% of websites were impacted by this change when Firefox 60 arrived in May. Most site owners were receptive and replaced their old certificates.

“The next phase of the consensus plan is to distrust any TLS certificate that chains up to a Symantec root, regardless of when it was issued […]. This change is scheduled for Firefox 63,” Mozilla’s Wayne Thayer notes in a blog post.

That browser release is currently planned for October 23, 2018 (it will arrive in Beta on September 5).

At the moment, around 3.5% of the top 1 million websites are still using Symantec certificates that will be impacted by the change. While the number is high, it represents a 20% improvement over the past two months, and Mozilla is confident that site owners will take action in due time.

“We strongly encourage website operators to replace any remaining Symantec TLS certificates immediately to avoid impacting their users as these certificates become distrusted in Firefox Nightly and Beta over the next few months,” Thayer concludes.

Google too is on track to distrust all Symantec certificates on October 23, 2018, when Chrome 70 is expected to land in the stable channel. Released in April, Chrome 66 has already removed trust in certificates issued by Symantec's legacy PKI before June 1, 2016.


State of Email Security: What Can Stop Email Threats?
30.7.18 securityweek Security

Neither Current Technology Nor Security Awareness Training Will Stop Email Threats

A survey of 295 professionals -- mostly but not entirely IT professionals -- has found that 85% of respondents see email threats bypass email security controls and make it into the inbox; 40% see weekly threats; and 20% have to take significant remediation action on a weekly basis.

Email security firm GreatHorn wanted to examine the state of email security today, nearly fifty years after email was first developed. Its findings (PDF) will not surprise security professionals. Breach analyses regularly conclude that more than 90% of all breaches start with an email attack. Indeed, the GreatHorn research shows that the majority (54.4%) of corporate security leaders (that is, those who hold the CISO role) consider email security to be a top 3 security priority.

What is surprising is not that email security is failing (almost half -- 46.1% -- of all respondents said they were less than 'satisfied' with their current email security solution), but the discrepancy in threat perception between the security professional respondents (comprising 61% of the sample) and the non-security respondents (the laypeople, comprising 39% of the sample).

"Sixty-six percent of all the people we interviewed said the only threat they saw in their inbox was spam," GreatHorn's CEO and co-founder Kevin O'Brien told SecurityWeek. "I suspect there is a little bit of a confluence of different things in this figure, and that when they say 'spam', they don't only mean unsolicited marketing emails. Nevertheless, it is a dismissal of the severity of the risk that email poses."

This figure changes dramatically when asked of the security professionals among the respondents. "When you narrow the interview stats to security professionals, less than 16% said that spam was the main threat they faced," he continued. "So, you have 85% of all security teams saying that there is a wide range of different kinds of threats that come in every single day via email -- but to the lay user, the only thing that ever goes wrong is that you get some email you don't want."

O'Brien also quoted statistics from Gartner email specialist Neil Wynne: "The email open rate for the average white-collar professional within the bounds of their work email is 100%," said O'Brien. "Whether or not you take any action in response to it, you will open the email."

It is true that you can open a malicious email, take no action whatsoever, and remain safe. But that clearly doesn't happen. GreatHorn's figures show that 20% of the security professional respondents are forced into direct remediation from email threats (such as suspending compromised accounts, running PowerShell remediation scripts, resetting compromised third-party accounts, etc).

The implication, at a simplistic level, is that the average non-security member of staff is highly likely to open all emails; is not likely to expect anything other than spam (31% of the laypeople respondents said they never saw any email threats other than spam); and clearly -- from empirical proof -- will too often click on a malicious link or open a weaponized attachment.

Asked if a further implication from these figures is that security awareness training is failing, O'Brien said, "Yes." There are qualifications to this response, because phish training companies' built-in metrics clearly demonstrate an improvement in the click-thru rates for users trained with their systems. Reductions in successful phishing from a 30% success rate to just 10% are not uncommon.

But, said O'Brien, "Verizon has reported that one in 25 people click on any given phishing attack." This suggests that for every 100 members of staff targeted by a phishing email, four will become victims -- and only one is necessary for a breach to occur.

The difficulty is the nature of modern email attacks. Many involve some form of impersonation, including BEC attacks, business spoofing attacks, and pure social engineering attacks from a colleague whose credentials have been acquired by the attacker. "You cannot train people to have awareness of an email threat when information about that threat is not visible to the user. There is very little functional way to train a user to differentiate between an email from a colleague and an email from someone who has stolen the colleague's credentials. So, we have a security awareness market that has used marketing to say that email security is an awareness problem, a people problem, and that you can train your way out of it. You cannot."

He added, "The reason that security awareness training companies are successful is because awareness training represents a tick in a compliance box that clears a company of gross negligence in the event they suffer a data breach." So, despite the fact it isn't really effective, you still need to do it.

GreatHorn's own view of the problem is that the solution must come from not just technology, nor simply people, but from using technology against the social engineering aspect of the threat -- that is, the content as well as the mechanics of the email.

Belmont, Mass-based GreatHorn announced a $6.3 million Series A funding round led by Techstars Venture Capital Fund and .406 Ventures in June 2017. It brings machine-learning technology to the continuing threat and problem of targeted spear phishing and the related BEC threat -- the latter of which, according to the FBI in May 2016, is responsible for losses "now totaling over $3 billion."


Google Announces New Security Tools for Cloud Customers
28.7.18 securityweek Security

Google on Wednesday took the wraps off a broad range of tools to help cloud customers secure access to resources and better protect data and applications.

To improve security and deliver flexible access to business applications on user devices, Google has introduced context-aware access, which brings elements from BeyondCorp to Google Cloud.

With context-aware access, Google explains that organizations can “define and enforce granular access to GCP APIs, resources, G Suite, and third-party SaaS apps based on a user’s identity, location, and the context of their request.” This should increase security posture and decrease complexity for users, allowing them to log in from anywhere and any device.

The new capabilities are now available for select VPC Service Controls customers and should soon become available for those using Cloud Identity and Access Management (IAM), Cloud Identity-Aware Proxy (IAP), and Cloud Identity.

For increased protection against credential theft, Google announced Titan Security Key, “a FIDO security key that includes firmware developed by Google to verify its integrity.” Meant to protect users from the potentially damaging consequences of credential theft, Titan Security Keys are now available to Google Cloud customers and will soon arrive in Google Store.

Also revealed on Wednesday, Shielded VMs were designed to ensure that virtual machines haven't been tampered with, and to allow users to monitor and react to any changes in the VM baseline or its current runtime state.

According to Google, organizations running containerized workloads should also ensure that only trusted containers are deployed on Google Kubernetes Engine. For that, the Internet giant announced Binary Authorization, which allows for the enforcing of signature validation when deploying container images.

Coming soon to beta, the tool allows for integration with existing CI/CD pipelines “to ensure images are properly built and tested prior to deployment” and can also be combined with Container Registry Vulnerability Scanning to detect vulnerable packages in Ubuntu, Debian and Alpine images before deployment.
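Conceptually, deploy-time signature validation works like the sketch below. This is not Google's implementation: Binary Authorization uses attestations signed with PGP or Cloud KMS keys, while this illustration stands in an HMAC for the signature scheme, and all names are hypothetical.

```python
import hashlib
import hmac

SIGNING_KEY = b"ci-pipeline-secret"  # hypothetical key held only by the trusted build system

def attest(image_bytes):
    """Trusted CI step: after build and tests pass, sign the image digest."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest, signature

def admit(image_bytes, digest, signature):
    """Deploy-time policy: admit only images whose digest carries a valid attestation."""
    if hashlib.sha256(image_bytes).hexdigest() != digest:
        return False  # image content changed after it was signed
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The key property is the same as in the real service: an image that skipped the trusted pipeline, or was modified afterwards, has no valid attestation and is refused at deployment.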

Google also announced the beta availability of geo-based access control for Cloud Armor, a distributed denial of service (DDoS) and application defense service. The new capability allows organizations to control access to their services based on the geographic location of the client.

Cloud Armor, however, can also be used for “whitelisting or blocking traffic based on IP addresses, deploying pre-built rules for SQL injection and cross-site scripting, and controlling traffic based on Layer 3-Layer 7 parameters of your choice.”

Cloud HSM, a managed cloud-hosted hardware security module (HSM) service coming soon in beta, allows customers to host encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs and to easily protect sensitive workloads without having to manage a HSM cluster.

Courtesy of tight integration with Cloud Key Management Service (KMS), Cloud HSM makes it “simple to create and use keys that are generated and protected in hardware and use it with customer-managed encryption keys (CMEK) integrated services such as BigQuery, Google Compute Engine, Google Cloud Storage and DataProc,” Google says.

Earlier this year, the search company launched Asylo, an open source framework and software development kit (SDK) meant to “protect the confidentiality and integrity of applications and data in a confidential computing environment.”

With Access Transparency, Google logs the activity of Google Cloud Platform administrators who are accessing content. While GCP’s Cloud Audit Logs no longer provide visibility into the actions of administrators when the cloud provider’s Support or Engineering team is engaged, Access Transparency captures “near real-time logs of manual, targeted accesses by either support or engineering.”

Google also announced the investigation tool for G Suite customers, to help identify and act upon security issues within a domain. With this tool, admins can “conduct organization-wide searches across multiple data sources to see which files are being shared externally” and then perform bulk actions on limiting files access.

Google is also making it easier to move G Suite reporting and audit data from the Admin console to Google BigQuery. Furthermore, there are five new container security partner tools in Cloud Security Command Center to help users gain more insight into risks for containers running on Google Kubernetes Engine.

To meet customer requirements on where their data is stored, Google announced data regions for G Suite, a tool that allows G Suite Business and Enterprise customers “to designate the region in which primary data for select G Suite apps is stored when at rest—globally, in the U.S., or in Europe.”

To these, Google adds the Password Alert policy for Chrome Browser, which allows IT admins to “prevent their employees from reusing their corporate password on sites outside of the company’s control, helping guard against account compromise.”


Chrome Now Marks HTTP Sites as "Not Secure"
26.7.18 securityweek Security

The latest version of Google's Chrome web browser (Chrome 68) represents another step the search giant is making toward a more secure web: the browser now marks HTTP sites as “Not Secure.”

The change comes three and a half years after the Chrome Security Team launched the proposal to mark all HTTP sites as affirmatively non-secure, so as to make it clearer for users that HTTP provides no data security.

When websites are loaded over HTTP, the connection is not encrypted, meaning not only that attackers on the network can access the transmitted information, but also that they can modify the contents of sites before they are served to the user.

HTTPS, on the other hand, encrypts the connection, meaning that eavesdroppers can’t access the transmitted data and that users’ information remains private.

Google, which has long been advocating the adoption of HTTPS across the web, is for now only marking HTTP pages with a gray warning in Chrome. Later this year, however, the browser will display a red “Not Secure” alert for HTTP pages that require users to enter data.

The goal, however, is to incentivize site owners to adopt HTTPS. For that, Google is also planning on removing the (green) “Secure” wording and HTTPS scheme from Chrome in September 2018.

This means that the browser will no longer display positive security indicators, but will warn on insecure connections. Starting May 1, Chrome is also warning when encountering certificates that are not compliant with the Chromium Certificate Transparency (CT) Policy.
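The labeling rules described above amount to a simple decision, sketched here (greatly simplified; real Chrome weighs many more signals than a URL scheme and the presence of an input field):

```python
from urllib.parse import urlparse

def chrome_indicator(url, page_has_input_field=False):
    """Approximate the Chrome 68 address-bar treatment described in the article."""
    scheme = urlparse(url).scheme
    if scheme == "https":
        return "Secure"            # positive indicator, slated for removal later in 2018
    if page_has_input_field:
        return "Not Secure (red)"  # HTTP page that asks users to enter data
    return "Not Secure (gray)"

print(chrome_indicator("http://example.com"))        # Not Secure (gray)
print(chrome_indicator("http://example.com", True))  # Not Secure (red)
print(chrome_indicator("https://example.com"))       # Secure
```

The asymmetry is deliberate: positive indicators are being phased out while negative ones become more prominent, nudging the web toward HTTPS as the unmarked default.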

“To ensure that the Not Secure warning is not displayed for your pages in Chrome 68, we recommend migrating your site to HTTPS,” Google tells website admins.

According to Google’s Transparency Report, HTTPS usage has increased considerably worldwide, across all platforms: over 75% of pages are served over an encrypted connection on Chrome OS, macOS, Android, and Windows. The same applies to 66% of pages served to Linux users.

To help site admins move to HTTPS, the Internet giant has published a migration guide that includes recommendations and which also addresses common migration concerns such as SEO, ad revenue and performance impact.

In addition to marking HTTP sites as Not Secure, Chrome 68 includes patches for a total of 42 vulnerabilities, 29 of which were reported by external researchers: 5 High severity flaws, 19 Medium risk bugs, and 5 Low severity issues.

The 5 High risk issues include a stack buffer overflow in Skia, a heap buffer overflow in WebGL, a use after free in WebRTC, a heap buffer overflow in WebRTC, and a type confusion in WebRTC.

The remaining flaws included use after free, same origin policy bypass, heap buffer overflow, URL spoofing, CORS bypass, permissions bypass, type confusion, integer overflow, local user privilege escalation, UI spoofing, local file information leak, request privilege escalation, and multiple cross origin information leaks.


Expert discovered it was possible to delete all projects in the Microsoft Translator Hub
22.7.18 securityaffairs  Security

Microsoft has addressed a serious vulnerability in the Microsoft Translator Hub that could be exploited to delete any or all the projects hosted by the service.
Microsoft has fixed a severe vulnerability in the Microsoft Translator Hub that could be exploited to delete any or all projects hosted by the service.

The Microsoft Translator Hub “empowers businesses and communities to build, train, and deploy customized automatic language translation systems.”

The vulnerability was discovered by security expert Haider Mahmood while he was searching for bugs in the Translator Hub: he found that it was possible to remove a project by manipulating the “projectid” parameter in an HTTP request.

“POST request with no content and parameter in the URL (its kinda weird isn’t it?) the “projectid” parameter in the above request is the ID of the individual project in the database, which in this case is “12839“, by observing the above HTTP request, a simple delete project query could be something like:-” wrote the expert in a blog post.

The expert also discovered a Cross-Site Request Forgery (CSRF) vulnerability that could be used by an attacker to impersonate a legitimate, logged-in user and perform actions on their behalf.

An attacker who knows a logged-in user's ProjectID just needs to trick the victim into clicking a specially crafted URL that performs the delete action on the user's behalf. In another attack scenario, the attacker embeds the same URL in a page that, once visited by the victim, causes the project to be removed.

“Wait a minute, if you take a look at the Request, first thing to notice is there is no CSRF protection. This is prone to CSRF attack.” continues the expert. “In simple words, CSRF vulnerability allows attacker to impersonate legit logged in user, performing actions on their behalf. Consider this:-

Legit user is logged in.
Attacker includes the URL in a page. (img tag, iframe, lots of possibilities here) “http://hub.microsofttranslator.com/Projects/RemoveProject?projectId=12839”
Victim visits the page, above request will be sent from their browser.
Requirement is that one should know the ProjectID number of logged in victim.
As it has no CSRF projection like antiCSRF tokens it results in the removal of the project.
Even if it has Anti-CSRF projection, here are ways to bypass CSRF Token protections.”
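The anti-CSRF token defense the expert refers to binds each state-changing request to the user's session, so a forged cross-site request (which cannot read the token) fails validation. A minimal sketch, with hypothetical names; real frameworks additionally scope and rotate tokens:

```python
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # hypothetical per-deployment secret

def issue_csrf_token(session_id):
    """Derive a token bound to the user's session; embed it in every
    state-changing form or request the server renders."""
    return hmac.new(SERVER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id, submitted_token):
    """Reject the request unless the submitted token matches the one issued
    for this session. An attacker's page cannot read the victim's token,
    so a cross-site forged request fails here."""
    expected = issue_csrf_token(session_id)
    return hmac.compare_digest(expected, submitted_token)
```

Had the RemoveProject endpoint required such a token on its POST request, the URL-in-an-img-tag attack described above would have been rejected server-side.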
Further analysis allowed the expert to discover the worst aspect of the story.

Mahmood discovered an insecure direct object reference (IDOR) vulnerability, which could be exploited by an attacker to set any ProjectID in the HTTP request used to remove a project.

Theoretically, an attacker can delete all projects in Microsoft Translator Hub by iterating through project IDs starting from 0 to 13000.

“The project whose projectID I used in the HTTP request got deleted. Technically this vulnerability is called Indirect Object Reference. now if I just loop through the values starting from 0 to 13000 (last project), I’m able to delete all projects from the database.” continues the expert. “The vulnerability could have been avoided using simple checks, either the project that the user requested is owned by the same user, associating the project owner with the project is another way, but its Microsoft so….”
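The "simple checks" the expert describes amount to a server-side ownership test before the delete. A hypothetical sketch of the fixed endpoint logic (class and variable names are illustrative, not Microsoft's code):

```python
class Project:
    def __init__(self, project_id, owner):
        self.project_id = project_id
        self.owner = owner

# Hypothetical in-memory project store standing in for the real database
PROJECTS = {12839: Project(12839, "alice")}

def remove_project(project_id, current_user):
    """Delete a project only if it exists and belongs to the requester --
    the server-side authorization check missing from the vulnerable endpoint."""
    project = PROJECTS.get(project_id)
    if project is None or project.owner != current_user:
        return False  # unknown ID, or requester is not the owner: refuse
    del PROJECTS[project_id]
    return True
```

With this check in place, looping project IDs from 0 to 13000 accomplishes nothing: every ID not owned by the requester is simply refused.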


Mahmood reported the flaw to Microsoft in late February 2018, and the company addressed it within a couple of weeks.


‘IT system issue’ caused cancellation of British Airways flights at Heathrow
20.7.18 securityaffairs Security

British Airways canceled flights at Heathrow due to an ‘IT system issue,’ the incident occurred on Wednesday and affected thousands of passengers.
The problem had severe repercussions on the air traffic, many passengers also had their flights delayed.

“On one of the busiest days of the summer, British Airways cancelled dozens of flights to and from Heathrow, affecting at least 7,000 passengers.

Problems began for BA when the control tower was closed for around 35 minutes on Wednesday afternoon when a fire alarm was triggered. Landings and take-offs were stopped.” reported The Independent.

“Then an IT issue emerged which caused further disruption for BA and other airlines. Hundreds of flights were delayed, and some evening outbound departures were canceled. Around 3,000 British Airways passengers were stranded overnight abroad.”
The IT problem affected 7,000 passengers and more than 3,000 were forced to spend the night abroad attempting to fly back to London.

Officially, the problem originated with the IT supplier Amadeus, whose system failure caused the flight disruption; British Airways issued a statement on its Twitter account. Reportedly, the British Airways passengers stranded at the airport were advised to ‘look for overnight accommodation or seek alternative travel arrangements’.

The IT problems also reportedly affected the company’s online check-in service.


“We are aware that British Airways is currently experiencing an issue which is impacting their ability to provide boarding passes to some passengers. We will be working with the airline to support their efforts to resolve the issue as quickly as possible.” stated a spokesperson for Heathrow.

The problems began a few hours after a fire alarm at Heathrow’s air traffic control tower was triggered causing delays for several airlines. According to the airport, this event is not related to the British Airways issue, while airline glitch has “impacted operation of the airfield for a short while”.

“The vast majority of customers affected by the supplier system issue and the temporary closure of Heathrow airport’s air traffic control tower are now en route to their destinations.”

“The supplier, Amadeus, resolved their system issue last night, and our schedule is now operating as normal,” said a spokesperson for British Airways.

“We have apologised to our customers for disruption to their travel plans.”

British Airways experienced another technical problem at its IT systems in May 2017.


Microsoft Offers $100,000 in New Identity Bug Bounty Program
19.7.18 securityweek Security

Microsoft on Tuesday announced the launch of a new bug bounty program that offers researchers the opportunity to earn up to $100,000 for discovering serious vulnerabilities in the company’s various identity services.

White hat hackers can earn a monetary reward ranging between $500 and $100,000 if they find flaws that impact Microsoft Identity services, flaws that can be leveraged to hijack Microsoft and Azure Active Directory accounts, vulnerabilities affecting the OpenID or OAuth 2.0 standards, or weaknesses that affect the Microsoft Authenticator applications for iOS and Android.

The list of domains covered by the new bug bounty program includes login.windows.net, login.microsoftonline.com, login.live.com, account.live.com, account.windowsazure.com, account.activedirectory.windowsazure.com, credential.activedirectory.windowsazure.com, portal.office.com and passwordreset.microsoftonline.com.

The top reward can be earned for a high quality submission describing ways to bypass multi-factor authentication, or design vulnerabilities in the authentication standards used by Microsoft. OpenID and OAuth implementation flaws can earn hackers up to $75,000.

The smallest rewards are offered for XSS (up to $10,000), authorization issues ($8,000), and sensitive data exposure ($5,000).

“A high-quality report provides the information necessary for an engineer to quickly reproduce, understand, and fix the issue. This typically includes a concise write up containing any required background information, a description of the bug, and a proof of concept. We recognize that some issues are extremely difficult to reproduce and understand, and this will be considered when adjudicating the quality of a submission,” Microsoft wrote on a page dedicated to its new bug bounty program.

The tech giant currently runs several bug bounty programs that offer hundreds of thousands of dollars for a single vulnerability report. This includes the speculative execution side-channel program, which offers up to $250,000 and which the company launched following the disclosure of Meltdown and Spectre; the Hyper-V program, which also offers up to $250,000; the mitigation bypass bounty, with rewards of up to $100,000 for novel exploitation techniques against Windows protections; and the Bounty for Defense, which offers an additional $100,000 for defenses to the mitigation bypass techniques.


Support for Python Packages Added to GitHub Security Alerts
18.7.18 securityweek  Security

GitHub announced on Thursday that developers will be warned if the Python packages used by their applications are affected by known vulnerabilities.

The code hosting service last year introduced a new feature, the Dependency Graph, that lists the libraries used by a project. It later extended it with a capability designed to alert developers when one of the software libraries used by their project has a known security hole.

The Dependency Graph and security alerts initially worked only for Ruby and JavaScript packages, but, as promised when the features were launched, GitHub has now also added support for Python packages.

“We’ve chosen to launch the new platform offering with a few recent vulnerabilities,” GitHub said in a blog post. “Over the coming weeks, we will be adding more historical Python vulnerabilities to our database.”

The security alerts feature is powered by information collected from the National Vulnerability Database (NVD) and other sources. When a new flaw is disclosed, GitHub identifies all repositories that use the affected version and informs their owners.
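The matching step can be pictured with a small sketch (illustrative only, not GitHub's actual code): each dependency pinned by a project is looked up in a table of known-vulnerable versions built from NVD data, and any hit raises an alert. The package names and advisory identifiers below are hypothetical.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical vulnerability database entry: a package at an exact
   version, mapped to the advisory that affects it. */
typedef struct {
    const char *pkg;
    const char *version;
    const char *advisory;
} vuln_t;

/* Made-up entries standing in for data ingested from the NVD. */
static const vuln_t vuln_db[] = {
    { "examplelib", "1.2.0", "ADVISORY-0001" },
    { "otherlib",   "0.9.1", "ADVISORY-0002" },
};

/* Returns the advisory ID if (pkg, version) is known vulnerable,
   or NULL when no alert is needed. */
const char *check_dependency(const char *pkg, const char *version) {
    for (size_t i = 0; i < sizeof(vuln_db) / sizeof(vuln_db[0]); i++) {
        if (strcmp(vuln_db[i].pkg, pkg) == 0 &&
            strcmp(vuln_db[i].version, version) == 0)
            return vuln_db[i].advisory;
    }
    return NULL;
}
```

In the real service this lookup runs against every repository's Dependency Graph whenever a new flaw is disclosed, rather than on demand.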

The security alerts are enabled by default for public repositories, but the owners of private repositories will have to manually enable the feature.

When a vulnerable library is detected, a “Known security vulnerability” alert will be displayed next to it in the Dependency Graph. Administrators can also configure email alerts, web notifications, and warnings via the user interface, and they can configure who should see the alerts.

GitHub reported in March that the introduction of the security alerts led to a significant decrease in the number of vulnerable libraries on the platform.

When the feature was launched, GitHub’s initial scan revealed over 4 million vulnerabilities across more than 500,000 repositories. Roughly two weeks after the first notifications were sent out, over 450,000 of the flaws were addressed by updating the impacted library or removing it altogether.


Intel Pays $100,000 Bounty for New Spectre Variants
18.7.18 securityweek  Security

Researchers have discovered new variations of the Spectre attack and they received $100,000 from Intel through the company’s bug bounty program.

The new flaws are variations of Spectre Variant 1 (CVE-2017-5753), and they are tracked as Spectre 1.1 (CVE-2018-3693) and Spectre 1.2.

The more serious of these issues is Spectre 1.1, which has been described as a bounds check bypass store (BCBS) issue.

“[Spectre1.1 is] a new Spectre-v1 variant that leverages speculative stores to create speculative buffer overflows,” researchers Vladimir Kiriansky of MIT and Carl Waldspurger of Carl Waldspurger Consulting explained in a paper.

New Spectre vulnerabilities discovered

“Much like classic buffer overflows, speculative out-of-bounds stores can modify data and code pointers. Data-value attacks can bypass some Spectre-v1 mitigations, either directly or by redirecting control flow. Control-flow attacks enable arbitrary speculative code execution, which can bypass fence instructions and all other software mitigations for previous speculative-execution attacks. It is easy to construct return-oriented-programming (ROP) gadgets that can be used to build alternative attack payloads,” they added.
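The shape of such a gadget can be sketched in C (an illustration of the technique described above, not code from the paper; the buffer, length variable, and function name are hypothetical). Architecturally the store is safe, but a CPU that mispredicts the bounds-check branch may execute the store speculatively with an attacker-controlled out-of-bounds index.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical Spectre 1.1 (bounds check bypass store) gadget. */
uint8_t buf[16];
size_t buf_len = sizeof(buf);

void victim_store(size_t idx, uint8_t value) {
    if (idx < buf_len) {   /* bounds check: architecturally safe    */
        buf[idx] = value;  /* but if the branch is mispredicted,    */
    }                      /* this store runs speculatively with an */
}                          /* out-of-bounds idx, transiently        */
                           /* corrupting nearby data/code pointers  */
```

Committed (architectural) state is never corrupted; the danger is that the transient store can redirect speculative control flow before the misprediction is unwound.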

Spectre 1.2 impacts CPUs that fail to enforce read/write protections, allowing an attacker to overwrite read-only data and code pointers in an effort to breach sandboxes, the experts said.

Both Intel and ARM have published whitepapers describing the new vulnerabilities. AMD has yet to make any comments regarding Spectre 1.1 and Spectre 1.2.

Microsoft also updated its Spectre/Meltdown advisories on Tuesday to include information on CVE-2018-3693.

“We are not currently aware of any instances of BCBS in our software, but we are continuing to research this vulnerability class and will work with industry partners to release mitigations as required,” the company said.

Oracle is also assessing the impact of these vulnerabilities on its products and has promised to provide technical mitigations.

“Note that many industry experts anticipate that a number of new variants of exploits leveraging these known flaws in modern processor designs will continue to be disclosed for the foreseeable future,” noted Eric Maurice, Director of Security Assurance at Oracle. “These issues are likely to primarily impact operating systems and virtualization platforms, and may require software update, microcode update, or both. Fortunately, the conditions of exploitation for these issues remain similar: malicious exploitation requires the attackers to first obtain the privileges required to install and execute malicious code against the targeted systems.”

Just as the researchers published their paper, Intel made a $100,000 payment to Kiriansky via the company’s HackerOne bug bounty program. The experts did reveal in their paper that the research was partially sponsored by Intel.

Following the disclosure of the Spectre and Meltdown vulnerabilities in January, Intel announced a bug bounty program for side-channel exploits with rewards of up to $250,000 for issues similar to Meltdown and Spectre. The reward for flaws classified “high severity” can be as high as $100,000.


Intel pays a $100K bug bounty for the new CPU Spectre 1.1 flaw
12.7.18 securityaffairs Security

A team of researchers has discovered a new variant of the famous Spectre attack (Spectre 1.1), and Intel has paid a $100,000 reward as part of its bug bounty program.
Intel has paid out a $100,000 bug bounty for new vulnerabilities that are related to the first variant of the Spectre attack (CVE-2017-5753); for this reason, they have been tracked as Spectre 1.1 (CVE-2018-3693) and Spectre 1.2.

Intel credited Kiriansky and Waldspurger for the vulnerabilities and paid out $100,000 to Kiriansky via the company's bug bounty program on HackerOne.

In early 2018, researchers from Google Project Zero disclosed details of both Spectre Variants 1 and 2 (CVE-2017-5753 and CVE-2017-5715) and Meltdown (CVE-2017-5754).

Both attacks leverage the “speculative execution” technique used by most modern CPUs to optimize performance.

The team of experts composed of Vladimir Kiriansky of MIT and Carl Waldspurger of Carl Waldspurger Consulting discovered two new variants of Spectre Variant 1.

The Spectre 1.1 issue is a bounds-check bypass store flaw that could be exploited by attackers to trigger speculative buffer overflows and execute arbitrary code on the vulnerable processor.

Such an attack could potentially be used to exfiltrate sensitive data from CPU memory, including passwords and cryptographic keys.

“We introduce Spectre1.1, a new Spectre-v1 variant that leverages speculative stores to create speculative buffer overflows. Much like classic buffer overflows, speculative out-of-bounds stores can modify data and code pointers. Data-value attacks can bypass some Spectre-v1 mitigations, either directly or by redirecting control flow.” reads the research paper.

“Control-flow attacks enable arbitrary speculative code execution, which can bypass fence instructions and all other software mitigations for previous speculative-execution attacks.”

The second sub-variant discovered by the experts, called Spectre 1.2, is a read-only protection bypass.

It depends on lazy PTE enforcement, the same mechanism exploited by the original Meltdown attack.

Also in this case, the issue could be exploited by an attacker to bypass the Read/Write PTE flags and write directly into read-only data memory, code metadata, and code pointers in order to escape sandboxes.

“Spectre3.0, aka Meltdown [39], relies on lazy enforcement of User/Supervisor protection flags for page-table entries (PTEs). The same mechanism can also be used to bypass the Read/Write PTE flags. We introduce Spectre1.2, a minor variant of Spectre-v1 which depends on lazy PTE enforcement, similar to Spectre-v3. In a Spectre1.2 attack, speculative stores are allowed to overwrite read-only data, code pointers, and code metadata, including vtables, GOT/IAT, and control-flow mitigation metadata. As a result, sandboxing that depends on hardware enforcement of read-only memory is rendered ineffective,” continues the research paper.

ARM confirmed that the Spectre 1.1 flaw also affects its processors, but did not specify which ARM CPUs are vulnerable.

Major tech firms, including Microsoft, Red Hat and Oracle, have released security advisories confirming that they are investigating the issues and the potential effects of the new Spectre variants.

“Microsoft is aware of a new publicly disclosed class of vulnerabilities referred to as “speculative execution side-channel attacks” that affect many modern processors and operating systems including Intel, AMD, and ARM. Note: this issue will affect other systems such as Android, Chrome, iOS, MacOS, so we advise customers to seek out guidance from those vendors.” reads the advisory published by Microsoft.

“An attacker who successfully exploited these vulnerabilities may be able to read privileged data across trust boundaries. In shared resource environments (such as exists in some cloud services configurations), these vulnerabilities could allow one virtual machine to improperly access information from another.”


Mozilla Announces Root Store Policy Update
3.7.18 securityweek  Security

Mozilla announced on Monday that its Root Store Policy for Certificate Authorities (CAs) has been updated to version 2.6.

The Root Store Policy governs CAs trusted by Firefox, Thunderbird and other Mozilla-related software. The latest version of the policy, discussed by the Mozilla community over a period of several months, went into effect on July 1.

The new Root Store Policy includes nearly two dozen changes and some of the more important ones have been summarized in a blog post by Wayne Thayer, CA Program Manager at Mozilla.

Version 2.6 of the Root Store Policy requires CAs to clearly disclose email address validation methods in their certificate policy (CP) and certification practice statement (CPS). The CP/CPS must also clearly specify IP address validation methods, which have now been banned in specific circumstances.

CAs need to periodically obtain certain audits for their root and intermediate certificates in order to remain in the root store. Mozilla now requires auditors to provide reports written in English.

The new policy also states that starting with January 1, 2019, CAs will be required to create separate intermediate certificates for S/MIME and SSL certificates.

“Newly issued Intermediate certificates will need to be restricted with an EKU extension that doesn’t contain anyPolicy, or both serverAuth and emailProtection. Intermediate certificates issued prior to 2019 that do not comply with this requirement may continue to be used to issue new end-entity certificates,” Thayer explained.
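As an illustration of what such a restriction can look like in practice (this example is not from Mozilla's post), an intermediate certificate limited to TLS server issuance could carry an EKU of serverAuth only, expressed here in OpenSSL extension-config syntax:

```ini
# Hypothetical extension profile for a TLS-only intermediate CA.
# The EKU omits anyPolicy and emailProtection, so certificates
# chaining to this intermediate cannot be used for S/MIME.
[ v3_intermediate_tls ]
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
extendedKeyUsage = serverAuth
```

A separate intermediate with `extendedKeyUsage = emailProtection` would then be used for S/MIME issuance.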

Another new requirement is that root certificates must have complied with the Mozilla Root Store Policy from the moment they were created.

“This effectively means that roots in existence prior to 2014 that did not receive BR audits after 2013 are not eligible for inclusion in Mozilla’s program. Roots with documented BR violations may also be excluded from Mozilla’s root store under this policy,” Thayer said.

Mozilla takes digital certificate management very seriously. Last year it announced taking action against Chinese certificate authority WoSign and its subsidiary StartCom as a result of over a dozen incidents. It also targeted Symantec after the company and its partners were involved in several incidents involving misissued TLS certificates, and later raised concerns over DigiCert’s acquisition of Symantec’s CA business.


Hidden Tunnels: A Favored Tactic to Evade Strong Access Controls
22.6.18 securityweek  Security

Use of Hidden Tunnels to Exfiltrate Data Far More Widespread in Financial Services Than Any Other Industry Sector

Financial services firms have perhaps the largest cyber security budgets and are among the best-protected companies in the private sector. Since cyber criminals generally have little difficulty obtaining a quick return on their effort elsewhere, it would be unsurprising to find that financial services are less overtly targeted by average hackers than other, easier targets. At the same time, the data held by finserv is so valuable to criminals that the sector remains an attractive target for more sophisticated hackers.

Both premises are confirmed in a report (PDF) published this week by Vectra. From August 2017 through January 2018, Vectra's AI-based Cognito cyberattack-detection and threat-hunting platform monitored network traffic and collected metadata from more than 4.5 million devices and workloads across customer cloud, data center and enterprise environments.

An analysis of this data showed that financial services displayed fewer criminal C&C communication behaviors than the overall industry average. This could be caused by the efficiency of large finserv budgets (Bank of America spends $600 million annually, with no upper limit, while JPMorgan Chase spends $500 million annually) warding off basic criminal activity.

Even the much smaller Equifax has a budget of $85 million. But Equifax, with its massive 2017 loss of 145.5 million social security numbers, around 17.6 million drivers' license numbers, 20.3 million phone numbers, and 1.8 million email addresses, demonstrates that finserv is a target for, and can be successfully breached by, the more advanced hackers.

Vectra analyzed the Equifax breach, compared the attack methodology to what its Cognito platform was finding at other financial services companies, and discovered the same methodology at work elsewhere in the sector: the use of hidden tunnels to conceal C&C servers and disguise the exfiltration of data.

Vectra's new analysis shows that the criminal use of hidden tunnels is far more widespread in financial services than in any other industry sector. Across all industries Vectra found 11 hidden exfiltration tunnels disguised as encrypted web traffic (HTTPS) for every 10,000 devices. In finserv, this number jumped to 23. Hidden HTTP tunnels jumped from seven per 10,000 devices to 16 in financial services.

Chris Morales, head of security analytics at Vectra, commented, "What stands out the most is the presence of hidden tunnels, which attackers use to evade strong access controls, firewalls and intrusion detection systems. The same hidden tunnels enable attackers to sneak out of networks, undetected, with stolen data."

"Hidden tunnels are difficult to detect," explains the report, "because communications are concealed within multiple connections that use normal, commonly-allowed protocols. For example, communications can be embedded as text in HTTP-GET requests, as well as in headers, cookies and other fields. The requests and responses are hidden among messages within the allowed protocol."

These hidden tunnels need to be protected at all times, says Will LaSala, director of security solutions and security evangelist at OneSpan. "Many app developers put holes through firewalls to make services easier to access from their apps, but these same holes can be exploited by hackers. Using the proper development tools, app developers can properly encrypt and shape the data being passed through these holes."

One of the problems is that developers are rushed to implement a new feature to maintain or gain customers, "and this," he adds, "often leads to situations where a hidden tunnel is created and not secured."

Once a hidden tunnel is established by an attacker, it is almost impossible to detect with traditional security tools. There is no signature to match, and specially created C&C servers are unlikely to show up on reputation lists. Furthermore, because the traffic using a hidden tunnel is ostensibly legitimate, there is no clear anomaly for anomaly-detection systems to flag.

What Vectra's analysis shows is that while there may be fewer overt attacks against financial services, the industry is a prime target for advanced hackers willing and able to invest in more covert attacks.

San Francisco, Calif.-based Vectra Networks closed a $36 million Series D funding round in February 2018, bringing the total amount raised to date by the company to $123 million.


Microsoft Combats Bad Passwords With New Azure Tools
22.6.18 securityweek Security 

Microsoft this week announced the public preview of new Azure tools designed to help its customers eliminate easily guessable passwords from their environments.

Following a flurry of data breaches in recent years, it has become clear that many users continue to protect their accounts with weak passwords that are easy to guess or brute force. Many people also tend to reuse the same password across multiple services.

Attackers continually use leaked passwords in live attacks, Verizon’s 2017 Data Breach Investigations Report (DBIR) revealed, and Microsoft banned commonly used passwords in Azure AD a couple of years ago.

Now, the company is taking the fight against bad passwords to a new level, with the help of Azure AD Password Protection and Smart Lockout, which were just released in public preview. These tools should significantly lower the risk of compromise through password spray attacks, Alex Simons, Director of Program Management, Microsoft Identity Division, says.

The new Azure AD Password Protection allows admins to prevent users from securing accounts in Azure AD and Windows Server Active Directory with weak passwords. For that, Microsoft uses a list of the 500 most commonly used passwords and more than 1 million character-substitution variations of them.
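The substitution-variant idea can be sketched as follows (assumed logic for illustration, not Microsoft's actual algorithm): candidate passwords are normalized by undoing common character swaps such as "@" for "a" and "0" for "o" before being compared against the banned list, so "P@ssw0rd" is rejected just like "password".

```c
#include <ctype.h>
#include <stddef.h>
#include <string.h>

/* Undo common "leetspeak" substitutions and lowercase the character. */
static char normalize_char(char c) {
    switch (c) {
        case '@':           return 'a';
        case '0':           return 'o';
        case '1': case '!': return 'i';
        case '$':           return 's';
        case '3':           return 'e';
        default:            return (char)tolower((unsigned char)c);
    }
}

/* Returns 1 if the normalized password matches a banned entry.
   The banned list here is a tiny stand-in for the real database. */
int is_banned(const char *pw) {
    static const char *banned[] = { "password", "qwerty", "letmein" };
    char norm[128];
    size_t i;

    for (i = 0; pw[i] && i < sizeof(norm) - 1; i++)
        norm[i] = normalize_char(pw[i]);
    norm[i] = '\0';

    for (size_t j = 0; j < sizeof(banned) / sizeof(banned[0]); j++)
        if (strcmp(norm, banned[j]) == 0)
            return 1;
    return 0;
}
```

Normalizing before comparison is what lets a list of 500 base passwords cover over a million surface variations.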

Management of Azure AD Password Protection is available in the Azure Active Directory portal for Azure AD and on-premises Windows Server Active Directory, and admins will also be able to specify additional passwords to block.

To ensure users don’t choose passwords that meet a complexity requirement but are still easily guessable, or fall into predictable patterns when required to change their passwords frequently, organizations should apply a banned-password check whenever passwords are changed, Microsoft says.

“Today’s public preview gives you both the ability to do this in the cloud and on-premises—wherever your users change their passwords—and unprecedented configurability. All this functionality is powered by Azure AD, which regularly updates the database of banned passwords by learning from billions of authentications and analysis of leaked credentials across the web,” Simons notes.

With Smart Lockout, Microsoft wants to lock out bad actors trying to guess users’ passwords. Leveraging cloud intelligence, it can recognize sign-ins from valid users and attempts from attackers and other unknown sources. Thus, users can remain productive while attackers are locked out.

Designed as an always-on feature, Smart Lockout is available for all Azure AD customers. While its default settings offer both security and usability, organizations can customize those settings with the right values for their environment.

By default, all Azure AD password set and reset operations for Azure AD Premium users are configured to use Azure AD password protection, Simons says. To configure their own settings, admins should access Authentication Methods under Azure AD Active Directory > Security.

Available options include setting a smart lockout threshold (number of failures until the first lockout) and duration (how long the lockout period lasts), choosing banned password strings, and extending the banned password protection to Windows Server Active Directory.
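The threshold-and-duration policy described above can be modeled with a small sketch (illustrative values and structure, not Azure AD's real implementation): after a configured number of consecutive failures the account is locked for a fixed period, while a successful sign-in resets the counter.

```c
#include <time.h>

/* Hypothetical lockout state for one account. */
typedef struct {
    int    failures;      /* consecutive failed attempts        */
    int    threshold;     /* failures until lockout             */
    time_t locked_until;  /* end of the current lockout window  */
    int    duration_s;    /* how long a lockout lasts, seconds  */
} lockout_t;

int is_locked(const lockout_t *l, time_t now) {
    return now < l->locked_until;
}

void record_failure(lockout_t *l, time_t now) {
    if (is_locked(l, now))
        return;  /* attempts during lockout are simply rejected */
    if (++l->failures >= l->threshold) {
        l->locked_until = now + l->duration_s;
        l->failures = 0;
    }
}

void record_success(lockout_t *l, time_t now) {
    if (!is_locked(l, now))
        l->failures = 0;  /* a valid sign-in clears the counter */
}
```

The "smart" part of the real feature is that cloud intelligence distinguishes likely-legitimate sign-ins from attacker traffic, so valid users are not locked out alongside a password-spraying attacker.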

Organizations can also download and install the Azure AD password protection proxy and domain controller agents in their on-premises environment (both support silent installation), meaning that they can use Azure AD password protection across Azure AD and on-premises.


6 Security Flaws in Smart Speakers You Need to Know About
22.6.18 securityaffairs Security

Connectivity and functionality may offer us convenience, but as with any new connected technology, smart speakers also come with security concerns.
How would you feel about having a device in your home that’s always listening to what’s going on, standing ready to record, process and store any information it receives? That might be a somewhat alarmist way of putting it, but it’s essentially what smart home speakers do.

Smart speakers offer audio playback but also feature internet connectivity and often a digital assistant, which dramatically expands their functionality.

With today’s smart speakers, you can search the internet, control other home automation devices, shop online, send text messages, schedule alarms and more.

This connectivity and functionality may offer us convenience, but as with any new connected technology, these speakers also come with security concerns. Any time you add a node to your network, you open yourself up to more potential vulnerabilities. Since smart home tech is still relatively new, it’s also bound to have bugs.

Although smart home companies work to fix these flaws as quickly as possible and want to ensure their devices are secure, there’s still always a chance you’ll run into security issues. Here are six potential risks you should be aware of.

Unexpected Activation
Although smart speakers are always listening since their microphones are continuously on, they don’t record or process anything they hear unless they detect their activation phrase first. For Google Home, this phrase is “OK, Google.” For an Amazon speaker, say “Alexa.”

There are several problems. The first is that the technology isn’t perfect yet, and it’s entirely possible that the device will mishear another phrase as its wake-up phrase. For example, an Oregon couple recently discovered that their Amazon Echo speaker had been recording them without their knowledge. Amazon blamed the mistake on the device mishearing something in a background conversation as “Alexa.”

Misheard Cues
Unfortunately, these misunderstandings can extend beyond just activation. After the Oregon family’s Echo recorded their conversation, it sent the recording to a random person on their contact list. They only knew about the incident because the person who received the recording contacted them and told them.

Amazon offered the same explanation for this part of the event. According to the company, the speaker misheard the background conversation as a whole string of commands, resulting in sending the discussion to the couple’s acquaintance. This situation suggests that these speakers’ listening skills might not be as advanced as they need to be to function properly.

Unwanted Interaction From Ads
Smart speakers may misunderstand cues and unexpectedly wake up, but people could also purposefully wake them without your permission. Once they do so, they could potentially gain access to some of your information.

Burger King demonstrated this vulnerability when it ran an ad that purposely activated Google Home speakers and prompted them to read off a description of the Whopper burger. Google reacted quickly and prevented the devices from responding. Burger King fired back by altering the ad so that it triggered the speakers again.

While this prank might be relatively harmless, people could also potentially activate your speaker without your permission, even by yelling through your front door or an open window. Because of this vulnerability, you should avoid using a smart speaker for things like unlocking your front door. You can also change the wake word and set up pins for specific features.

Smart Speakers

Hacks and Malware
Thus far, most reports of problems with smart speakers have revolved around unauthorized access or faulty functionality. But the devices are certainly vulnerable to malicious hacks as well.

Security experts have already discovered various vulnerabilities, enabling companies to fix them. Hackers may, however, find some of these flaws first. If they do, they may be able to access sensitive personal information.

To protect yourself from becoming a victim of a hacking incident, use hardware and software only from companies you trust. Also, use secure passwords, and change them often wherever you can.

Voice Hacks
Using smart speakers could also increase your vulnerability to voice hacks, a subset of identity theft in which someone obtains an audio recording of your voice and uses it to access your information. Once they have this recording, they use it to trick authentication systems into thinking they’re you. This hack is a potential way to get around smart speakers’ voice recognition capabilities.

Smart home speakers provide a potential goldmine of audio recordings that someone could use for voice hacking. If a bad actor manages to hack into the speaker or the cloud service where your recordings are stored, they could use them to break into various accounts of yours.

Storage of Your Data
The fact that some cloud service is storing these recordings may make users uncomfortable. These recordings may be used to personalize your experience, improve the smart assistant’s effectiveness, serve you ads or do a range of other things.

Luckily, you can delete these recordings if you’d like through your account settings. In addition, advocates have called for more transparency about how these companies use customer data.

Be Smart About Using Smart Speakers

All smart technology comes with security risks. That doesn’t necessarily mean we shouldn’t use it, but it does mean we should be careful about how we use it and take appropriate security measures.

If you choose to get a smart speaker, take the time to set up your security settings, and allow access only to people and companies you trust.


Chronicle launches VirusTotal Monitor to reduce false positives
21.6.18 securityaffairs  Security

Alphabet-owned cybersecurity firm Chronicle announced the launch of a new VirusTotal service that promises to reduce false positives.
The VirusTotal Monitor service allows developers to upload their application files to a private cloud store, where they are scanned every day by the anti-malware solutions of the antivirus vendors in VirusTotal.

Each time a product flags a file as malicious, VirusTotal notifies both the antivirus vendor and the developer.

Of course, files analyzed by the VirusTotal Monitor service will remain private and are not shared by the company with third parties.

The service implements a Google Drive-like interface that allows developers to upload their files, plus a dashboard to display the scan results. Both developers and AV companies can access the dashboard, and the service also provides APIs that let both parties integrate Monitor with their own tools.

“Enter VirusTotal Monitor. VirusTotal already runs a multi-antivirus service that aggregates the verdicts of over 70 antivirus engines to give users a second opinion about the maliciousness of the files that they check.” reads the announcement published by VirusTotal.

“For antivirus vendors this is a big win, as they can now have context about a file: who is the company behind it? when was it released? in which software suites is it found? What are the main file names with which it is distributed? For software developers it is an equally big win, as they can upload their creations to Monitor at pre-publish stage, to ensure a release without issues.”

VirusTotal Monitor

VirusTotal pointed out that the Monitor service is not a free pass to get any file whitelisted.

“Sometimes vendors will indeed decide to keep detections for certain software, however, by having contextual information about the author behind a given file, they can prioritize work and take better decisions, hopefully leading to a world with less false positives,” continues the announcement.

“The idea is to have a collection of known source software, then each antivirus can decide what kind of trust-based relationship they have with each software publisher.”

Are you interested in this service? Now you can request a trial period for VirusTotal Monitor.


New VirusTotal Service Aims to Reduce False Positives
20.6.18 securityweek Security

VirusTotal, which recently became part of Alphabet’s new cybersecurity company Chronicle, announced on Tuesday the launch of a new service designed to help software developers and security vendors reduce the number of false positive detections.

VirusTotal Monitor is a premium service that allows software developers to upload their application files to a private cloud store where they are scanned every day by the products of the more than 70 antivirus vendors in VirusTotal.

If a file is flagged as malicious, both the developer and the antivirus vendor are automatically notified.

Developers can upload their files using an interface similar to Google Drive, and both developers and AV companies are provided a dashboard where they can view results. In addition to the web interface, both parties can leverage APIs to integrate Monitor with their own tools.

VirusTotal Monitor

“For antivirus vendors this is a big win, as they can now have context about a file: who is the company behind it? when was it released? in which software suites is it found? What are the main file names with which it is distributed?” explained VirusTotal’s Emiliano Martinez. “For software developers it is an equally big win, as they can upload their creations to Monitor at pre-publish stage, to ensure a release without issues.”

VirusTotal highlighted that the uploaded files will not be shared with third parties, except for the antivirus vendors, which will get access to the files their products detect.

While it may seem that Monitor opens a door to abuse, VirusTotal pointed out that the new service is “not a free pass to get any file whitelisted.”

“Sometimes vendors will indeed decide to keep detections for certain software, however, by having contextual information about the author behind a given file, they can prioritize work and take better decisions, hopefully leading to a world with less false positives,” Martinez said. “The idea is to have a collection of known source software, then each antivirus can decide what kind of trust-based relationship they have with each software publisher.”

VirusTotal Monitor has been in pre-release testing and is now accepting its first users. Developers can request a trial period.


Google Won't Use Artificial Intelligence for Weapons
8.6.18 securityweek Security

Google announced Thursday it would not use artificial intelligence for weapons or to "cause or directly facilitate injury to people," as it unveiled a set of principles for these technologies.

Chief executive Sundar Pichai, in a blog post outlining the company's artificial intelligence policies, noted that even though Google won't use AI for weapons, "we will continue our work with governments and the military in many other areas" including cybersecurity, training, and search and rescue.

The news comes with Google facing pressure from employees and others over a contract with the US military, which the California tech giant said last week would not be renewed.

Pichai set out seven principles for Google's application of artificial intelligence, or advanced computing that can simulate intelligent human behavior.

He said Google is using AI "to help people tackle urgent problems" such as prediction of wildfires, helping farmers, diagnosing disease or preventing blindness.

"We recognize that such powerful technology raises equally powerful questions about its use," Pichai said in the blog.

"How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."

The chief executive said Google's AI programs would be designed for applications that are "socially beneficial" and "avoid creating or reinforcing unfair bias."

He said the principles also called for AI applications to be "built and tested for safety," to be "accountable to people" and to "incorporate privacy design principles."

Google will avoid the use of any technologies "that cause or are likely to cause overall harm," Pichai wrote.

That means steering clear of "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" and systems "that gather or use information for surveillance violating internationally accepted norms."

The move comes amid growing concerns that automated or robotic systems could be misused and spin out of control, leading to chaos.

Several technology firms have already agreed to the general principles of using artificial intelligence for good, but Google appeared to offer a more precise set of standards.

The company, which is already a member of the Partnership on Artificial Intelligence including dozens of tech firms committed to AI principles, had faced criticism for the contract with the Pentagon on Project Maven, which uses machine learning and engineering talent to distinguish people and objects in drone videos.

Faced with a petition signed by thousands of employees and criticism outside the company, Google indicated the $10 million contract would not be renewed, according to media reports.


FIFA public Wi-Fi guide: which host cities have the most secure networks?
8.6.18 Kaspersky Security
We all know how easy it is for users to connect to open Wi-Fi networks in public places. Well, it is equally straightforward for criminals to position themselves near poorly protected access points – where they can intercept network traffic and compromise user data.

A lack of essential traffic encryption for Wi-Fi networks where official and global activities are taking place – such as at locations around the forthcoming FIFA World Cup 18 – offers especially fertile ground for criminals.

With this in mind, can football fans feel digitally safe in host cities? How does the situation with Wi-Fi access differ from town to town? To answer these questions, we have analyzed existing reliable and unreliable access points in 11 FIFA World Cup host cities – Saransk, Samara, Nizhny Novgorod, Kazan, Volgograd, Moscow, Ekaterinburg, Sochi, Rostov, Kaliningrad, and Saint Petersburg.

The research is based on telemetry from Kaspersky Secure Connection, a solution that aims to secure users’ Wi-Fi connections and switch on a VPN when needed. Statistics were generated from users who voluntarily agreed to have their data collected. For the research, we only evaluated the security of public Wi-Fi spots. Even with relatively few public Wi-Fi spots in small towns, we still obtained a sufficient base for analysis – almost 32,000 Wi-Fi hotspots. While checking encryption and authentication algorithms, we counted the number of WPA2 and open networks, as well as their share among all the access points.
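The counting step described above can be sketched as follows (a minimal illustration with hypothetical field names and sample records; the actual telemetry schema is not published):

```python
# Sketch: classify each observed hotspot by its encryption/authentication
# type and compute each type's share among all access points.
# The "ssid"/"security" field names and the sample data are made up.
from collections import Counter

hotspots = [
    {"ssid": "CafeFreeWiFi", "security": "OPEN"},
    {"ssid": "HotelGuest",   "security": "WPA2"},
    {"ssid": "Airport",      "security": "WPA2"},
    {"ssid": "OldRouter",    "security": "WEP"},
]

counts = Counter(h["security"] for h in hotspots)
total = sum(counts.values())
shares = {sec: n / total for sec, n in counts.items()}
print(shares)  # → {'OPEN': 0.25, 'WPA2': 0.5, 'WEP': 0.25}
```

The same tally over the real dataset yields figures like the 22.4% share of unreliable hotspots reported below.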

Security of Wireless Networks in FIFA World Cup host cities
Using the methodology described above, we have evaluated the security of Wi-Fi access points in 11 FIFA World Cup 18 host cities.

Encryption types used in public Wi-Fi hotspots in FIFA World Cup host cities

Over a fifth (22.4%) of Wi-Fi hotspots in FIFA World Cup 18 host cities are unreliable networks that do not encrypt traffic. This means that criminals simply need to be located near an access point to intercept the traffic and get their hands on user data.

Around three quarters of all access points use encryption based on the Wi-Fi Protected Access (WPA/WPA2) protocol family, which is considered one of the most secure. The level of protection mostly depends on the settings, such as the strength of the password set by the hotspot owner; a sufficiently complex encryption key can take years to crack.
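The "years to crack" claim is easy to sanity-check with a back-of-the-envelope keyspace calculation. The guess rate below is an assumed figure for illustration, not a benchmark of any real cracking rig:

```python
# Rough estimate of exhaustive-search time for a passphrase, assuming
# an attacker who can test a fixed number of guesses per second offline.
GUESSES_PER_SECOND = 500_000  # assumed rate, for illustration only

def years_to_exhaust(alphabet_size: int, length: int) -> float:
    keyspace = alphabet_size ** length
    return keyspace / GUESSES_PER_SECOND / (3600 * 24 * 365)

# 8 lowercase letters (weak) vs. 12 mixed-case letters and digits (strong):
print(f"{years_to_exhaust(26, 8):.2f} years")   # weak: well under a year
print(f"{years_to_exhaust(62, 12):.2e} years")  # strong: astronomically long
```

The point is the exponential gap: adding characters and enlarging the alphabet moves the worst-case search time from hours to geological timescales.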

It should also be noted that even reliable networks, like WPA2, cannot automatically be considered totally secure. They remain susceptible to brute-force, dictionary, and key reinstallation attacks, for which a large number of tutorials and open-source tools are available online. Traffic from WPA-protected public access points can also be intercepted by an attacker who gets between the access point and the device at the beginning of the session.


The safest city (in terms of public Wi-Fi) turned out to be Saransk, with 72% of access points secured by WPA/WPA2 protocol encryption.

The top-three cities with the highest proportion of unsecured connections are Saint Petersburg (48% of Wi-Fi access points are unsecured), Kaliningrad (47%) and Rostov (44%).

Again, it should be noted that the results are relative: even a WPA2 connection in a cafe cannot be considered secure if the password is visible to everyone. Nevertheless, we believe that the methodology used represents the Wi-Fi hotspot security situation in the host cities with a fair degree of accuracy.

The results of this research show that the security of Wi-Fi connections in FIFA World Cup host cities varies. We therefore recommend that users follow some key safety rules.

Recommendations for Users
If you are going to visit any of the FIFA World Cup 18 host cities and use open Wi-Fi networks while you are there, remember to follow these simple rules to help protect your personal data:

Whenever possible, connect via a Virtual Private Network (VPN). With a VPN, encrypted traffic is transmitted over a protected tunnel, meaning that criminals won’t be able to read your data, even if they gain access to it. For example, the Kaspersky Secure Connection VPN solution can switch on automatically when a connection is not safe.
Do not trust networks that are not password-protected, or have easy-to-guess or easy-to-find passwords.
Even if a network requests a strong password, you should remain vigilant. Fraudsters can find out the network password at a coffee shop, for example, and then create a fake connection using the same password. This allows them to easily steal personal user data. You should only trust network names and passwords given to you by the employees of an establishment.
To maximize your protection, turn off your Wi-Fi connection whenever you are not using it. This will also save your battery life. We recommend you also disable automatic connections to existing Wi-Fi networks.
If you are not 100% sure that the wireless network you are using is secure, but you still need to connect to the Internet, try to limit yourself to basic user actions such as searching for information. You should refrain from entering your login details for social networks or mail services, and definitely do not perform any online banking operations or enter your bank card details anywhere. This will avoid situations where your sensitive data or passwords are intercepted and then used for malicious purposes later on.
To avoid becoming a cybercriminal target, you should enable the “Always use a secure connection” (HTTPS) option in your device settings. Enabling this option is recommended when visiting any websites you think may lack the necessary protection.
One example of a dedicated solution is the Secure Connection tool included in the latest versions of Kaspersky Internet Security and Kaspersky Total Security. This module protects users who are connected to Wi-Fi networks by providing them with a secure encrypted connection channel. Secure Connection can be launched manually or, depending on the settings, activated automatically when connecting to public Wi-Fi networks, when navigating to online banking and payment systems or online stores, and when communicating online (via mail services, social networks, etc.).


ALTR Emerges From Stealth With Blockchain-Based Data Security Solution
7.6.18 securityweek Security

Austin, Texas-based ALTR emerged from stealth mode on Wednesday with a blockchain-based data security platform and $15 million in funding.

ALTR announced the immediate availability of its product, which has been in development for nearly four years while the company operated in stealth mode.

Originally designed to serve as the public transactions ledger for the Bitcoin cryptocurrency, blockchain is a distributed database consisting of blocks that are linked and secured using cryptography. Companies have been increasingly using blockchain for purposes other than cryptocurrency transactions, including for identity verification and securing data and devices.

ALTR’s platform uses blockchain technology for secure data access and storage. Built on what the company calls ALTRchain, the solution allows organizations to monitor, access and store highly sensitive information.


The ALTR platform is designed to sit between data and applications, and it can be deployed without making any changes to existing software or hardware infrastructure. It offers support for all major database systems, including from Oracle, Microsoft and others.

The platform has three main components: ALTR Monitor, ALTR Govern, and ALTR Protect. ALTR Monitor provides intelligence on data access activities, creating an audit trail of blockchain-based log files.

ALTR Govern is designed for controlling how users access business applications. Organizations can create and apply rule-based locks and access thresholds in an effort to prevent breaches.

ALTR Protect is designed to protect data at rest. It decentralizes sensitive data and stores it across a private blockchain in an effort to protect it against unauthorized access in case any single node has been compromised.

The company also announced that it has opened access to its proprietary blockchain technology by making available its ChainAPI, which allows developers to add ALTRchain to their applications.

ALTR has raised $15 million in funding from private and institutional sources in the cybersecurity, financial services and IT sectors. The money will be used to extend the reach of the company’s platform and launch additional products based on ALTRchain.

ALTR told SecurityWeek that its platform has already been deployed at a healthcare organization, a mid-sized service provider that caters to both Fortune 1000 companies and government agencies, and a couple of firms in the financial services sector.


Thousands of organizations leak sensitive data via misconfigured Google Groups
6.6.18 securityaffairs Security

Security experts reported widespread Google Groups misconfiguration exposes sensitive information.
Administrators of organizations using Google Groups and G Suite must review their configuration to avoid the leakage of internal information.

Security researchers from Kenna Security have recently discovered that 31 percent of the 9,600 organizations analyzed are leaking sensitive e-mail information.

The list of affected entities also includes Fortune 500 companies, hospitals, universities and colleges, newspapers and television stations, and even US government agencies.

“Organizations utilizing G Suite are provided access to the Google Groups product, a web forum directly integrated with an organization’s mailing lists. Administrators may configure a Google Groups interface when creating a mailing list.” reads the blog post published by Kenna Security.

“Due to complexity in terminology and organization-wide vs group-specific permissions, it’s possible for list administrators to inadvertently expose email list contents. In practice, this affects a significant number of organizations”

The discovery is not new, back in 2017 experts discovered wrong configurations of G Suite that can lead to data leakage.

Unfortunately, since the first advisory published by experts at RedLock, many installations have continued to leak data. According to Kenna Security, the main reason is that Google Groups uses complex terminology and mixes organization-wide with group-specific permissions.

When a G Suite admin creates a Groups mailing list for specific recipients, a web interface for the list is also configured, available to users at https://groups.google.com.

Google Group privacy settings for individuals can be adjusted on both a domain and a per-group basis. In affected organizations, the Groups visibility setting is available by searching “Groups Visibility” after logging into https://admin.google.com and it is configured to “Public on the Internet”


To discover if an organization is affected, administrators can browse to the configuration page by logging into G Suite as an administrator and typing “Settings for Groups for Business” or simply using this direct link.

“In almost all cases – unless you’re explicitly using the Google Groups web interface – this should be set to “Private”.” continues the post.

“If publicly accessible, you may access your organization’s public listing at the following link: https://groups.google.com/a/[DOMAIN]/forum/#!forumsearch/”

Administrators should set the Google Group to private to protect internal information such as customer reviews, invoices payable, password recovery/reset e-mails, and more.
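A quick way to spot-check exposure is to request the public listing URL pattern quoted above. The sketch below is hedged: the function names are my own, and a 200 response merely suggests the "Public on the Internet" setting; real-world response handling may be more involved.

```python
# Sketch: probe whether a domain's Google Groups listing is reachable
# at the public URL pattern from the advisory. Function names are
# illustrative, not from any official tool.
import urllib.error
import urllib.request

def listing_url(domain: str) -> str:
    # URL pattern quoted in the post, minus the "#!forumsearch/" fragment
    # (fragments are never sent to the server anyway).
    return f"https://groups.google.com/a/{domain}/forum/"

def groups_publicly_listed(domain: str) -> bool:
    try:
        with urllib.request.urlopen(listing_url(domain), timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # e.g. 401/403/404: listing not publicly reachable

# Example (replace with your own G Suite domain):
# print(groups_publicly_listed("example.com"))
```

Only probe domains you administer; the authoritative check remains the "Settings for Groups for Business" page in the admin console.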

It is important to highlight that Google doesn’t consider configuration issues to be vulnerabilities. Experts recommend that administrators read the Google Groups documentation and set the sharing setting for “Outside this domain – access to groups” to “Private”.


HTTP Parameter Pollution Leads to reCAPTCHA Bypass
31.5.18 securityweek Security

Earlier this year, a security researcher discovered that it was possible to bypass Google’s reCAPTCHA via HTTP parameter pollution.

The issue, application and cloud security expert Andres Riancho says, can be exploited when a web application crafts the request to /recaptcha/api/siteverify in an insecure way. Exploitation allows an attacker to bypass the protection every time.

When a web application using reCAPTCHA challenges the user, “Google provides an image set and uses JavaScript code to show them in the browser,” the researcher notes.

After solving the challenge, the user clicks verify, which triggers an HTTP request to the web application, which in turn verifies the user’s response with a request to Google’s reCAPTCHA API.

The application authenticates itself and sends a {reCAPTCHA-generated-hash} to the API to query the response. If the user solved the challenge correctly, the API sends an "OK" that the web application receives, processes, and most likely grants the user access to the requested resource.

Riancho discovered that an HTTP parameter pollution in the web application could be used to bypass reCAPTCHA (the requirement, however, reduced the severity of the vulnerability).

“HTTP parameter pollution is almost everywhere: client-side and server-side, and the associated risk depends greatly on the context. In some specific cases it could lead to huge data breach, but in most cases it is a low risk finding,” Riancho explains.

He notes that it was possible to send two HTTP requests to Google’s service and receive the same response. The reCAPTCHA API would always use the first secret parameter on the request but ignore the second, an issue the researcher was able to exploit.

Additionally, Google is providing web developers interested in testing their web applications with a hard-coded site and secret key to disable reCAPTCHA verification in staging environments and perform their testing, and the bypass leverages this functionality as well.

“If the application was vulnerable to HTTP parameter pollution AND the URL was constructed by appending the response parameter before the secret then an attacker was able to bypass the reCAPTCHA verification,” the researcher notes.

Two requirements should be met for the vulnerability to be exploitable: the web application needs to have an HTTP parameter pollution flaw in the reCAPTCHA URL creation, and to create the URL with the response parameter first, and then the secret. Overall, only around 3% of reCAPTCHA implementations would be vulnerable.
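The parameter-pollution mechanics can be shown with a short sketch. The URL construction below mirrors the vulnerable pattern described in the write-up (response appended before secret, without URL-encoding); the key values are placeholders, not real reCAPTCHA keys:

```python
# Sketch of the vulnerable pattern: the user-controlled "response"
# value is concatenated into the siteverify URL BEFORE the secret,
# without encoding, so an attacker can inject a second "secret"
# parameter. Per the write-up, the API used the FIRST occurrence.
from urllib.parse import parse_qs, urlsplit

SECRET = "real-secret"  # placeholder, not a real key

def build_verify_url(user_response: str) -> str:
    # Vulnerable: no URL-encoding, response precedes secret.
    return ("https://www.google.com/recaptcha/api/siteverify"
            f"?response={user_response}&secret={SECRET}")

# Attacker submits a crafted "response" that smuggles in its own
# secret (e.g. the publicly documented test secret, which always
# verifies successfully):
evil = "anything&secret=TEST-SECRET"
qs = parse_qs(urlsplit(build_verify_url(evil)).query)
# If the API honors the FIRST "secret", it sees the attacker's value:
print(qs["secret"][0])  # → TEST-SECRET
```

Google's fix of rejecting requests with duplicate parameters closes exactly this first-occurrence ambiguity, without requiring vulnerable applications to patch their URL construction.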

Riancho points out that Google addressed the issue in the REST API by returning an error when the HTTP request to /recaptcha/api/siteverify contains two parameters with the same name.

“Fixing it this way they are protecting the applications which are vulnerable to the HTTP Parameter Pollution and the reCAPTCHA bypass, without requiring them to apply any patches,” the researcher notes.

The issue was reported to Google on January 29, and a patch was released on March 25. The search giant paid the researcher $500 for the discovery.


Chinese researchers from Tencent discovered exploitable flaws in several BMW models
23.5.18 securityaffairs Security

A team of security researchers from Chinese firm Tencent has discovered 14 security vulnerabilities in several BMW models.
Researchers from the Tencent Keen Security Lab have discovered 14 vulnerabilities affecting several BMW models, including BMW i Series, BMW X Series, BMW 3 Series, BMW 5 Series, and BMW 7 Series.

The team of experts conducted a year-long study between January 2017 and February 2018. They reported the issues to BMW, and after the company started rolling out security patches, the researchers published technical details of the flaws.

“we systematically performed an in-depth and comprehensive analysis of the hardware and software on Head Unit, Telematics Control Unit and Central Gateway Module of multiple BMW vehicles.” reads the report published by Tencent Keen Security Lab.

“Through mainly focusing on the various external attack surfaces of these units, we discovered that a remote targeted attack on multiple Internet-Connected BMW vehicles in a wide range of areas is feasible, via a set of remote attack surfaces (including GSM Communication, BMW Remote Service, BMW ConnectedDrive Service, UDS Remote Diagnosis, NGTP protocol, and Bluetooth protocol).”

According to the experts, the vulnerabilities affect cars produced since 2012. The white hat hackers focused their tests on the infotainment and telematics systems of the vehicles.

Eight of the vulnerabilities impact the infotainment system, four issues affect the telematics control unit (TCU), and two the central gateway module.


The TCU provides telephony services, accident assistance services, and implements remote controls of the doors and climate. The central gateway receives diagnostic messages from the TCU and the head unit and sends them to other Electronic Control Units (ECUs) on different CAN buses.

The experts discovered that an attacker could exploit the flaws, or chain some of them, to execute arbitrary code and take complete control of the affected component.

The experts demonstrated that a local attacker could hack BMW vehicles via a USB stick; in another attack scenario, the researchers illustrated a remote hack through a software-defined radio.

Remote attacks can be conducted via Bluetooth or via cellular networks, but a remote hack of a BMW car is very complex to carry out because the attacker would need to compromise a local GSM mobile network.


“Our research findings have proved that it is feasible to gain local and remote access to infotainment, T-Box components and UDS communication above certain speed of selected BMW vehicle modules and been able to gain control of the CAN buses with the execution of arbitrary, unauthorized diagnostic requests of BMW in-car systems remotely,” the researchers state.

BMW issued some security updates to the backend systems, it also rolled out over-the-air patches for the TCU. The company also developed firmware updates that will be made available to customers at dealerships.

Neither BMW nor Keen Lab have revealed the list of affected models.

BMW awarded the Keen Lab as the first winner of the BMW Group Digitalization and IT Research Award.

In July 2017, the same team of security researchers from Chinese firm Tencent demonstrated how to remotely hack a Tesla Model X.


Utimaco to Acquire Atalla Hardware Security Module Business From Micro Focus
21.5.18 securityweek  Security

Aachen, Germany-based firm Utimaco will acquire the Atalla hardware security module (HSM) and enterprise secure key manager (ESKM) lines from UK-based Micro Focus.

The deal was announced on Friday; financial details of the transaction were not disclosed. It is expected to complete by September 2018, subject to regulatory approval.

Both Utimaco and Atalla have been in the HSM business for around thirty years. Utimaco, the world's second largest supplier, has focused on general purpose HSMs sold via OEMs and the channel. Atalla has particular strengths in the financial services market, with access to top brand banking and financial services players, especially in the USA, UK and Asia.

"Both Utimaco and Atalla are pioneers in hardware security modules, the combination of which leads to an unrivalled wealth of experience and know-how," said Malte Pollmann, Utimaco’s CEO. "The acquisition of Atalla will mark a key milestone in the further implementation of our growth strategy. It is complementary in terms of product portfolio and regional footprint as well as the vertical markets we are addressing."

"As two of the leading pioneers in the hardware security modules business, Atalla and Utimaco are a perfect match, operating in complementary markets with aligned strengths that will help drive better alignment for customers and position Atalla for future growth," said John Delk, general manager of security for Micro Focus.

Utimaco says it will maintain the existing Atalla team and further invest at Atalla's Sunnyvale, CA, location.

HSMs are specially hardened devices used to house and protect digital keys and signatures. Atalla's HSM is a payments hardware security module for protecting sensitive data and associated keys for non-cash retail payment transactions, cardholder authentication, and cryptographic keys.

The ESKM line provides a centralized key management hardware-based solution for unifying and automating an organization’s encryption key controls by creating, protecting, serving, and auditing access to encryption keys.

Micro Focus acquired Atalla after HPE CEO Meg Whitman announced, in September 2016, that it would be spun out and then merged with Micro Focus.

Utimaco was acquired by Sophos in 2009. One year later, Sophos sold a majority interest to Apax Partners, and this was followed by a management buyout in 2013. Today, Utimaco's primary investors are EQT, PINOVA Capital and BIP Investment Partners S.A.