- Privacy -

First GDPR Enforcement is Followed by First GDPR Appeal
11.10.2018 securityweek
Privacy

In what has been billed as the world's first GDPR action, the UK regulator -- the Information Commissioner's Office (ICO) -- quietly issued an enforcement notice against Canadian firm AggregateIQ Data Services Ltd (AIQ). It is a low-key affair. Although the enforcement notice was issued on 6 July 2018, the notice was not and has not been placed on the ICO's enforcement action page.

Instead, the notice was attached as an appendix to an investigation report by the ICO. There it largely remained unnoticed until found by law firm Mishcon de Reya LLP in September. SecurityWeek asked the ICO, "Is there any reason for the only occurrence (that I can find) of the notice appearing as an addendum to a longer report?" All other questions were answered, but SecurityWeek did not receive a direct answer to this direct question.

However, we were told that AIQ had appealed the notice. Appeals go to the First-tier Tribunal of the General Regulatory Chamber (GRC). They are not normally made public in the UK. SecurityWeek approached the GRC and asked for a copy -- and has now received a copy, slightly redacted, of AIQ's appeal against the GDPR enforcement notice.

Our first article discussed the reasoning behind the ICO's enforcement notice. Now we can look at AIQ's arguments against it. This is an important issue. While lawmakers make laws, it is the judiciary that interprets them. Neither the lawmakers nor the regulators know how the letter of the law will play out until the law has been tested in front of the judiciary. Equally, the subject of the laws -- in this case businesses that use the personal data of EU citizens around the world -- cannot fully understand their exposure to the law until it has faced the scrutiny of the judiciary.

The first specified ground for the appeal is that the ICO has no jurisdiction over AIQ "in this matter". This implies that the reason for appeal is not based on geography, but on the application of the law. SecurityWeek talked to a UK-based lawyer to understand the basis for the AIQ appeal.

AIQ claims, "There is no evidence whatsoever of any 'processing' of the data held for the purposes of 'monitoring' after the in-force date of the GDPR and DPA in 2018..." This may become the pivotal section of the appeal. Was, in GDPR terms, AIQ a data controller and/or a data processor?

"If AIQ is a Data Controller," comments David Flint, Senior Partner at MacRoberts LLP, "there would be an overriding issue of how it had a lawful basis for processing and meeting the [GDPR] Article 5 Principles. If it were a Processor, the question would be the compliance with Article 5 of those who gathered the information and whether they knew that AIQ would be processing the data."

Flint believes that AIQ's term 'monitoring' relates to 'profiling' within the legislation. Recital (24) of GDPR says "profiling a natural person, particularly in order to take decisions concerning her or him or for analysing or predicting her or his personal preferences, behaviours and attitudes", where there is any evaluation of response or otherwise to the activity. The ICO enforcement notice, comments Flint, "suggests that this is what was being done and why the data was being processed."

He adds that "'processing' also includes holding the data, so the fact that the data was still 'held' on 31 May would, in my opinion bring the activities of AIQ squarely within the scope of the GDPR/DPA2018." This is an important point for all companies that may store and forget they have EU data. They don't have to do anything with that data. Merely storing it makes them a data processor under GDPR.

Notably, Equifax said that it had 'forgotten' about the storage of EU citizen data in the U.S. This forgotten data resulted in a £500,000 fine from the ICO after the breach.

Data subject 'consent' is likely to be a key issue within GDPR. The ICO finds that the data subjects did not consent for AIQ to use their data. AIQ responds that the ICO has provided no proof that it lacked the consent of the subjects, and it believes that they had provided the information voluntarily to AIQ's clients with at least 'implied consent'. If the tribunal finds in favor of the ICO, it will reinforce the idea that organizations will need to obtain and be able to demonstrate actual and explicit consent from every EU citizen.

AIQ also argues that 'natural justice' should militate in its favor. "The position taken by the ICO in the Enforcement Notice and the Order," it claims, "is contrary to the principles of fairness and natural justice (which also may be referred to as the duty on the ICO to act fairly), and breaches AggregateIQ's right to a fair hearing."

Flint has little sympathy here. "I think the arguments of 'natural justice' fall away where there is a specific statutory provision prohibiting the behavior in question," he told SecurityWeek. "The only argument might be one based on ECHR but that would mean that the GDPR was invalid as being in breach." This in itself is an interesting comment. If the appeal rested on the European Convention on Human Rights, it would largely come down to whether businesses' rights take precedence over citizens' rights -- which seems unlikely. But if they did, then GDPR would be invalidated as in breach of the Convention (just as the original Safe Harbor agreement between the EU and the U.S. was invalidated by Europe's top court).

In fairness to AIQ, this is the one section of the appeal that has been largely redacted by the Tribunal. Elsewhere in the appeal, AIQ accuses the ICO of "taking a position which is contrary to previous positions taken by the ICO, resulting in substantial unfairness and the denial of natural justice to AggregateIQ." We will not know until the hearing whether there is any link between the redacted section and AIQ's comment on 'previous positions', nor whether the Tribunal will consider this to be important.

The AIQ Appeal is number EA/2018/0153 with the Tribunal. It was received on 30 July 2018. At the time of writing this, there is no further information on the Tribunal's appeals table.

The result of the appeal is likely to be important. Much of it seems unconvincing -- but it doesn't matter what the lawmakers, the regulators, businesses, lawyers or the media think. In the end, it all comes down to how the judiciary interprets the law and the incident. It would be natural for the regulators to put their toe in the water before potentially going after big companies like Google or Facebook. This may partly explain the low publicity so far afforded to this first case.

"Lots to think about," comments Flint; "and an interesting case to follow particularly given that other cases are starting to line up! Wonder what the Tribunal (and I suspect in due course the Courts) will make of it." The final result may well provide clues to how GDPR is likely to play out over the next few years.

There is, however, one further point worth noting. The ICO enforcement notice requires certain action by AIQ. There is no imposed monetary penalty. This leaves one issue undiscussed: if an EU regulator were to impose a financial penalty on an extra-territorial entity, how -- or even whether -- could that penalty be enforced?


Millions of Xiongmai video surveillance devices can be easily hacked via cloud feature
10.10.2018 securityaffairs
Privacy

Millions of Xiongmai video surveillance devices can be easily hacked via cloud feature, a gift for APT groups and cyber crime syndicates
Security experts from SEC Consult have identified more than 100 companies that buy and re-brand video surveillance equipment (surveillance cameras, digital video recorders (DVRs), and network video recorders (NVRs)) manufactured by the Chinese firm Hangzhou Xiongmai Technology Co., Ltd. (hereinafter Xiongmai) that is open to attack.


Millions of devices are affected by security vulnerabilities that a remote attacker can easily exploit to take them over, for example to spy on the camera feeds of unaware users.

The flaws reside in a feature named “XMEye P2P Cloud,” which is enabled by default and is used to connect surveillance devices to the vendor’s cloud infrastructure.

“From a usability perspective, this makes it easier for users to interact with the device, since the user does not have to be in the same network (e.g. the same Wi-Fi network) in order to connect to the device. Additionally, no firewall rules, port forwarding rules, or DDNS setup are required on the router, which makes this option convenient also for non-tech-savvy users.” reads the report published by SEC Consult. However, this approach has several security implications:

The cloud server provider gets all the data (e.g. video streams that are viewed). Open questions:
Who runs these servers?
Who controls these servers? Where are they located?
Do they comply with local jurisdiction?
Does the service comply with EU GDPR?
If the data connection is not properly encrypted (spoiler alert: it’s not, we’ve checked!), anyone who can intercept the connection is able to monitor all data that is exchanged.
The “P2P Cloud” feature bypasses firewalls and effectively allows remote connections into private networks. Now, attackers can not only attack devices that have been intentionally or unintentionally exposed to the web (classic “Shodan hacking” or the Mirai approach), but also a large number of devices that are exposed via the “P2P Cloud”.
Each device has a unique ID, called cloud ID or UID (e.g. 68ab8124db83c8db), that allows users to connect to a specific device through one of the supported apps.

Unfortunately, the cloud ID is neither sufficiently random nor complex enough to make guessing correct cloud IDs hard: analysis of the Xiongmai firmware revealed that the ID is derived from the device’s MAC address.

According to SEC Consult experts, an attacker can therefore guess valid cloud IDs and access the feeds associated with other users’ devices.
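SEC Consult did not publish the exact derivation, so the sketch below is hypothetical Python: the VENDOR_OUI value and the derive_cloud_id() function are invented stand-ins, not the real algorithm. The point it illustrates stands regardless of the details: once an ID is a deterministic function of the MAC address, the effective search space collapses from the 2^64 values a random 16-hex-digit ID would allow to the roughly 2^24 addresses behind a single vendor OUI, which an attacker can enumerate.

```python
# Illustrative sketch only: the real Xiongmai derivation is not public.
# derive_cloud_id() is a HYPOTHETICAL stand-in; the point is that any
# deterministic function of the MAC shrinks the search space from 2^64
# possible 16-hex-digit IDs to the ~2^24 MACs behind one vendor OUI.
import hashlib

VENDOR_OUI = "00:12:34"  # hypothetical vendor OUI prefix, not Xiongmai's real one

def derive_cloud_id(mac: str) -> str:
    """Hypothetical MAC -> cloud ID mapping (NOT the real algorithm)."""
    digest = hashlib.md5(mac.encode()).hexdigest()
    return digest[:16]  # 16 hex chars, like the observed "68ab8124db83c8db"

def candidate_ids(oui: str, limit: int = 5):
    """Enumerate cloud IDs for the first `limit` MACs under one OUI."""
    for suffix in range(limit):
        mac = f"{oui}:{(suffix >> 16) & 0xFF:02x}:{(suffix >> 8) & 0xFF:02x}:{suffix & 0xFF:02x}"
        yield mac, derive_cloud_id(mac)

for mac, cid in candidate_ids(VENDOR_OUI):
    print(mac, "->", cid)  # an attacker would try each ID against the P2P cloud
```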

Experts found many other security issues. For example, all new XMEye accounts use a default admin username of “admin” with no password, and worse, the installation process doesn’t require users to change it.

The experts also discovered an undocumented user with the name “default” and password “tluafed.”

“In addition to the admin user, by default there is an undocumented user with the name “default”. The password of this user is “tluafed” (default in reverse).” continues the analysis.

“We have verified that this user can be used to log in to a device via the XMEye cloud (checked via custom client using the Xiongmai NetSDK). This user seems to at least have permissions to access/view video streams.”

Experts also discovered that it is possible to execute arbitrary code on the device through a firmware update.

Firmware updates are not signed, meaning an attacker can carry out a man-in-the-middle (MitM) attack, impersonate the XMEye cloud, and deliver a tainted firmware version.
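For context, the standard defense here is a signed-update check, which these devices lack. Below is a minimal sketch of such a check, assuming an Ed25519 vendor key and Python's third-party cryptography package; it illustrates what is missing, not anything Xiongmai ships.

```python
# Sketch of the signature check that is MISSING from these devices,
# assuming an Ed25519 vendor key and the `cryptography` package.
# Without it, anything served by a spoofed "XMEye cloud" is accepted.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def firmware_is_authentic(image: bytes, signature: bytes, vendor_pub: bytes) -> bool:
    """Accept a firmware image only if it carries a valid vendor signature."""
    try:
        Ed25519PublicKey.from_public_bytes(vendor_pub).verify(signature, image)
        return True
    except InvalidSignature:
        return False
```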

Xiongmai devices have been involved in IoT botnets in recent months; both the Mirai and Satori bots infected huge numbers of devices manufactured by the Chinese firm.

“We have worked together with ICS-CERT to address this issue since March 2018. ICS-CERT made great efforts to get in touch with Xiongmai and the Chinese CNCERT/CC and inform them about the issues. Although Xiongmai had seven months’ notice, they have not fixed any of the issues.”

“The conversation with them over the past months has shown that security is just not a priority to them at all.” concludes SEC Consult.


Google Launch Event Overshadowed by Privacy Firestorm
10.10.2018 securityweek
Privacy

Google was supposed to be focusing Tuesday on its launch of a new smartphone and other devices, but the event was being overshadowed by a firestorm over a privacy glitch that forced it to shut down its struggling social network.

The Silicon Valley giant said Monday it found and fixed a bug exposing private data in as many as 500,000 accounts, but drew fire for failing to disclose the incident.

The revelation heightened concerns in Washington over privacy practices by Silicon Valley giants after a series of missteps by Facebook that could have leaked data on millions.

"In the last year, we've seen Google try to evade scrutiny -- both for its business practices and its treatment of user data," Senator Mark Warner said in a statement.

Warner said that despite "consent" agreements with the US Federal Trade Commission "neither company appears to have been particularly chastened in their privacy practices" and added that "it's clear that Congress needs to step in" for privacy protections.

Marc Rotenberg, president of the Electronic Privacy Information Center, said the latest breach suggests the FTC has failed to do its job in protecting user data.

"The Congress needs to establish a data protection agency in the United States," Rotenberg said. "Data breaches are increasing but the FTC lacks the political will to enforce its own legal judgments."

Rising tensions

The internet search leader had already faced tensions with lawmakers after it decided against sending its top executive to testify at a hearing on privacy and data protection, prompting the committee to leave an empty seat for the company.

Last month, Google indicated it would send chief executive Sundar Pichai to testify before Congress.

Google has also been in the crosshairs of President Donald Trump, who alleged that its search results were biased against conservatives, although there was little evidence to support the claim.

The rising tensions come as Google holds an event in New York at which it is widely expected to release the Pixel 3, an upgraded premium smartphone that aims to compete with high-end devices from Apple and Samsung.

The Pixel phone is part of a suite of hardware products Google is releasing as part of an effort to keep consumers in its mobile ecosystem and challenge rivals like Apple and Amazon.

On Monday, Google said it was unable to confirm which accounts were affected by the bug, but an analysis indicated it could have been as many as 500,000 Google+ accounts.

Google did not specify how long the software flaw existed, or why it waited to disclose it.

The Wall Street Journal reported that Google executives opted against notifying users earlier because of concerns it would catch the attention of regulators and draw comparisons to a data privacy scandal at Facebook.

Earlier this year, Facebook acknowledged that tens of millions of users had personal data hijacked by Cambridge Analytica, a political firm working for Donald Trump in 2016.

Google has also faced increasing tensions over a reported search engine which would be acceptable to Chinese censors, and over its work for the US military.

On Tuesday, Google confirmed it is dropping out of the bidding for a huge Pentagon cloud computing contract that could be worth up to $10 billion, saying the deal would be inconsistent with its principles.


Google Tightens Rules for Chrome Extensions
2.10.2018 securityweek
Privacy

Google this week announced a series of policy changes and updates to improve the overall security of Chrome extensions.

There are currently more than 180,000 extensions available in the Chrome Web Store, and nearly half of Chrome desktop users actively use extensions, which makes the security of these components critical to the user browser experience.

Over the past couple of years, there have been numerous incidents where Chrome extensions were abused for traffic hijacking, click fraud, or adware distribution. After removing inline installation of extensions earlier this year, Google is changing the rules again to better protect Chrome users.

Starting with Chrome 70, users will be able to either restrict an extension's host access to a custom list of sites, or configure extensions to require a click to gain access to the current page, James Wagner, Chrome Extensions Product Manager, reveals.

Host permissions, Wagner notes, allow extensions to automatically read and change data on websites, making them prone to misuse, whether malicious or unintentional. The search giant has therefore decided to improve user transparency and control over when extensions can access site data, and developers are advised to make the necessary changes to their extensions as soon as possible.

The review process will tighten for extensions that request powerful permissions, as well as for those that use remotely hosted code, which will be subject to ongoing monitoring, Wagner notes. Developers should ensure their extension's permissions are as narrowly scoped as possible and that all code is included directly in the extension package, to minimize review time.

Starting October 1, extensions with obfuscated code are no longer allowed in the Chrome Web Store, regardless of whether the obfuscation is applied to code within the package or to external code or resources. Existing extensions with obfuscated code will be removed in early January, provided that they don’t receive updates to become compliant.

“Today over 70% of malicious and policy violating extensions that we block from Chrome Web Store contain obfuscated code. At the same time, because obfuscation is mainly used to conceal code functionality, it adds a great deal of complexity to our review process. This is no longer acceptable given the aforementioned review process changes,” Wagner points out.

Extension developers are still allowed to use minification, which not only speeds up code execution by reducing size, but also makes extensions more straightforward to review. Techniques that are allowed include removal of whitespace, newlines, code comments, and block delimiters; shortening of variable and function names; and collapsing the number of JavaScript files.

Starting in 2019, Google will also require all Chrome Web Store developer accounts to enroll in 2-Step Verification. This should add extra protection to prevent incidents where attackers attempt to steal popular extensions by hijacking the developer account.

“For even stronger account security, consider the Advanced Protection Program. Advanced protection offers the same level of security that Google relies on for its own employees, requiring a physical security key to provide the strongest defense against phishing attacks,” Wagner says.

Next year, Google also plans on introducing the next extensions manifest version, which should improve security, privacy, and performance. It will bring more narrowly-scoped and declarative APIs, easier mechanisms for users to control the permissions granted to extensions, and alignment with new web capabilities, such as supporting Service Workers as a new type of background process.

“We recognize that some of the changes announced today may require effort in the future, depending on your extension. But we believe the collective result will be worth that effort for all users, developers, and for the long term health of the Chrome extensions ecosystem. We’re committed to working with you to transition through these changes and are very interested in your feedback,” Wagner concludes.


Test Case Probes Jurisdictional Reach of GDPR
27.9.2018 securityweek
Privacy

GDPR Enforcement Case Will Show How Courts View the Extension of GDPR Beyond the Borders of the European Union

Given the potential size of GDPR fines, it has always been likely that there would be GDPR appeals. While business needs to know how the regulators will enforce the regulation, the regulators need to know how the courts will react to appeals. It has always been likely that the regulators would test the water quietly before embarking on any major action against a major company.

It should be no surprise that this has already happened. The UK's Information Commissioner's Office (ICO) quietly delivered a GDPR enforcement notice on the Canadian firm AggregateIQ Data Services Ltd (AIQ) back on July 6, 2018. The ICO did not publish the notice on its 'enforcement action' page as it usually does (including, for example, details of the £500,000 fine it imposed on Equifax, dated September 20, 2018).

Instead, the AIQ notice was published as an addendum to a report entitled 'Investigation into the use of data analytics in political campaigns'. Here it remained unnoticed until found and highlighted by law firm Mishcon de Reya LLP last week.

Equally unnoticed is that AIQ has unsurprisingly appealed the notice. Since appeals are not handled by the ICO, there is no mention of it on the ICO website. Appeals against ICO notices are handled by the General Regulatory Chamber (GRC) of HM Courts & Tribunals Service. That site lists an AggregateIQ Data Services Ltd ("AIQ") appeal against an unreferenced ICO decision notice as received on 30 July 2018 -- perilously close to the end of the allowed 28-day appeal period.

No further details are given, and no hearing date is listed. SecurityWeek has requested a copy of the appeal (reference EA/2018/0153); which may or may not be allowable under the Freedom of Information Act. SecurityWeek has not yet received a response from GRC. However, it is likely to be many months before the result of the appeal becomes known.

In effect, this is a test case to see how the courts view the extension of European regulations (in this instance, specifically the UK implementation of GDPR) beyond the borders of the European Union. AIQ is a Canadian firm, and Canada is a softer target than the United States. Nevertheless, the case is likely to provide important information to European regulators before they take on any of the big U.S. tech companies. Smaller U.S. firms should still monitor the outcome to gauge their own exposure to GDPR.

The ICO launched its investigation into the use of data analytics in political campaigns in May 2017, following press reports that Cambridge Analytica (CA) worked for the Leave.EU campaign during the Brexit referendum. During its investigation, Christopher Wylie (a former employee of Cambridge Analytica and the original whistleblower) provided information that led the ICO to investigate AIQ and its work with Vote Leave, BeLeave, Veterans for Britain and the Democratic Unionist Party's Vote to Leave campaign. "We have identified information during our investigation that confirmed a relationship between Aggregate IQ (AIQ) and CA / SCL," states the investigation report (PDF). SCL is the parent company of CA.

The report further states, "We have established that AIQ had access to UK voter personal data provided from the Vote Leave campaign. We are currently working to establish where they accessed that personal data, and whether they still hold personal data made available to them by Vote Leave. We are engaging with our regulatory colleagues in Canada, including the federal Office of the Privacy Commissioner and the Office of the Information and Privacy Commissioner, British Columbia."

However, it also adds that on March 5, 2018, "AIQ stated that they were 'not subject to the jurisdiction of the ICO' and ended with a statement that they considered their involvement in the ICO's investigation as 'closed'." Noticeably, Facebook has also disputed the ICO's jurisdiction over it, but has nevertheless cooperated with the ICO. Jurisdictional reach is clearly the first major GDPR issue that will need to be settled by the courts.

The ICO did not accept AIQ's rejection. It issued a formal enforcement notice on July 6, 2018. The two key paragraphs (6 and 7) of the notice state, "As part of AIQ's contract with these political organizations, AIQ have been provided with personal data including names and email addresses of UK individuals. This personal data was then used to target individuals with political advertising messages on social media. In correspondence with the Commissioner dated May 31, 2018, AIQ confirmed that personal data regarding UK individuals was still held by them. This data is stored on a code repository and has previously been subject to unauthorised access by a third party."

The claim is that AIQ processed UK personal data in a manner that did not include the consent of the data subjects concerned, and that (notice the date) it continued to hold this personal data after the date at which GDPR came into force (May 25, 2018). "The Commissioner takes the view that damage or distress is likely as a result of data subjects being denied the opportunity of properly understanding what personal data may be processed about them by the controller [which is AIQ], or being able to effectively exercise the various other rights in respect of that data afforded to a data subject."

The enforcement notice (PDF) demands that "AIQ shall within 30 days of the date of this notice: Cease processing any personal data of UK or EU citizens obtained from UK political organisations or otherwise for the purposes of data analytics, political campaigning or any other advertising purposes." The notice itself warns that failure to comply could lead to the ICO serving a penalty notice "requiring payment of an amount up to 20 million Euros, or 4% of an undertaking's total annual worldwide turnover whichever is the higher."

This is the enforcement notice that has been appealed by AIQ. On its website, AIQ has a brief note: "AggregateIQ works in full compliance within all legal and regulatory requirements in all jurisdictions where it operates. It has never knowingly been involved in any illegal activity. All work AggregateIQ does for each client is kept separate from every other client. AggregateIQ has never managed, nor did we ever have access to, any Facebook data or database allegedly obtained improperly by Cambridge Analytica."

Without seeing the appeal document itself, it is impossible to know on what grounds AIQ is rejecting the ICO's notice. It seems most likely that this will at least include the rejection of the ICO's jurisdiction. It will take many months before the Tribunal makes its ruling on the appeal and jurisdiction -- but it is a case that all non-EU companies processing EU personal data will need to follow closely.


Amazon is investigating allegations that its staff is selling customer data

18.9.2018 securityaffairs Privacy

Amazon confirmed an ongoing investigation of the allegations that some of its personnel sold confidential customer data to third party companies.
Amazon confirmed that it is investigating allegations that its staff sold customer data and other confidential information to third-party firms, particularly in China, a practice that violates company policy.

The news was first reported by the Wall Street Journal, which found that company employees sell customer data to merchants who sell on Amazon.

“Employees of Amazon, primarily with the aid of intermediaries, are offering internal data and other confidential information that can give an edge to independent merchants selling their products on the site, according to sellers who have been offered and purchased the data, as well as brokers who provide it and people familiar with internal investigations.” reads the report published by the WSJ.

On Amazon, customers can buy products sold directly by the company along with goods from many other merchants.

The Wall Street Journal cited cases of intermediaries in Shenzhen who work with Amazon employees and sell information on sales volumes, for payments ranging from 80 to more than 2,000 dollars.

“[Amazon is] conducting a thorough investigation of these claims,” an Amazon spokesperson told AFP.

“We have zero tolerance for abuse of our systems and if we find bad actors who have engaged in this behavior, we will take swift action against them, including terminating their selling accounts, deleting reviews, withholding funds, and taking legal action,” the statement said.

The company, also concerned by fake reviews from purported customers, started the investigation months ago.


Google Case Set to Examine if EU Data Rules Extend Globally

12.9.2018 securityweek Privacy

Google is going to Europe's top court in its legal fight against an order requiring it to extend "right to be forgotten" rules to its search engines globally.

The technology giant is set for a showdown at the European Union Court of Justice in Luxembourg on Tuesday with France's data privacy regulator over an order to remove search results worldwide upon request.

The dispute pits data privacy concerns against the public's right to know, while also raising thorny questions about how to enforce differing legal jurisdictions when it comes to the borderless internet.

The two sides will be seeking clarification on a 2015 decision by the French regulator requiring Google to remove results for all its search engines on request, and not just on European country sites like google.fr.

Google declined to comment ahead of the hearing. Its general counsel, Kent Walker, said in a blog post in November that complying with the order "would encourage other countries, including less democratic regimes, to try to impose their values on citizens in the rest of the world."

"These cases represent a serious assault on the public's right to access lawful information," he added.

In an unusual move, the court has allowed a collection of press freedom, free speech and civil rights groups to submit their opinions on the case. These groups agree with Google that forcing internet companies to remove website links threatens access to information and could pave the way for censorship by more authoritarian regimes such as China, Russia and Saudi Arabia.

The court's ruling is expected within months. It will be preceded by an opinion from the court's advocate general.

The case stems from a landmark 2014 Court of Justice ruling that people have the right to control what appears when their name is searched online. That decision forced Google to delete links to outdated or embarrassing personal information that popped up in searches of their names.

Authorities are now starting to worry about the risk that internet users can easily turn to proxy servers and virtual private networks to spoof their location, allowing them to dig up the blocked search results.

Google said in its most recent transparency report that it has received requests to delete about 2.74 million web links since the ruling, and has deleted about 44 percent of them.

Not all requests are waved through. In a related case that will also be heard Tuesday, the EU court will be asked to weigh in on a request by four people in France who want their search results to be purged of any information about their political beliefs and criminal records, without taking into account public interest. Google had rejected their request, which was ultimately referred to the ECJ.


Trend Micro Admits That Its Mac Apps Collect User Data
12.9.2018 securityweek Privacy

Trend Micro on Monday confirmed that some of its applications for Mac collect browser history and send it to the security firm’s servers.

Recent reports revealed that so-called security applications for Mac that are being distributed through Apple’s App Store collected and exfiltrated users’ browsing histories along with some other sensitive information (such as lists of installed apps).

The initial reports focused on Adware Doctor, a $4.99 application that would gather Safari, Chrome, and Firefox browsing history, the list of running processes, and a list of downloaded software. The program was observed sending the harvested data to a server located in China.

Among the other applications that engaged in the collection of browsing history, researchers mentioned Dr. Antivirus and Dr. Cleaner, two programs developed by security software provider Trend Micro.

In a statement regarding these allegations, the company confirmed not only that the two applications collected user data, but also that other Mac apps developed by the company did the same, including Dr Cleaner Pro, Dr. Unarchiver, Dr. Battery, and Duplicate Finder.

The data collection practice, the company says, only targeted “a small snapshot of the browser history on a one-time basis.” Specifically, only the browsing history for the 24 hours prior to installation was collected.

“This was a one-time data collection, done for security purposes (to analyze whether a user had recently encountered adware or other threats, and thus to improve the product & service),” Trend Micro claims.

The security firm also points out that users were informed of the collection and use of browser history data, both in the applicable EULAs and at installation, when the user was also prompted to accept the data collection.

The security firm also notes that the browser history data was uploaded to a U.S.-based server hosted by AWS and managed/controlled by Trend Micro.

All of the offending applications have already been stripped of the browser history collection capability, Trend Micro says. In addition, the company claims to have permanently dumped all legacy logs from the US-based AWS servers, including the logs of browser histories that users permitted at installation (and which were only being held for three months).

According to Trend Micro, the presence of the same data collection capabilities across a number of its applications was the result of the use of common code libraries.

“We have learned that browser collection functionality was designed in common across a few of our applications and then deployed the same way for both security-oriented as well as the non-security oriented apps such as the ones in discussion. This has been corrected,” the company said.


Privacy-oriented Linux OS Tails 3.9 is out, what’s new?
8.9.2018 securityaffairs Privacy

The popular Debian-based distribution Tails, “The Amnesic Incognito Live System,” has received its biggest update of the year. Tails 3.9 is available online; many issues were fixed and new features were added to protect user privacy and anonymity online.

Tails does not store information on the host hard disk; it is loaded entirely into RAM. This means that when the machine is shut down, all the data in RAM is erased within a few minutes, leaving no trace on the system. To prevent a cold boot attack, in which experts extract the contents of RAM before they fade, Tails overwrites memory with random data when shutting down.

Tails 3.9 relies on Linux kernel 4.17, which includes security patches for the Foreshadow attacks as well as updated Intel and AMD microcode firmware to address the latest Spectre and Meltdown security flaws.

Development of Tails 3.9 lasted at least two months, and some of the features were previewed in recent weeks, such as the integration of the VeraCrypt/TrueCrypt utilities.

With VeraCrypt/TrueCrypt integration, users can easily manage encrypted disk drives directly from the GNOME desktop environment.

VeraCrypt integration comes with the recently released GNOME 3.30 desktop environment, and unlocking VeraCrypt-encrypted volumes in Tails 3.9 is very easy: users can access the new Unlock VeraCrypt Volumes dialog from Applications > System Tools. The feature supports both the TrueCrypt and VeraCrypt open-source disk encryption formats.

Another feature implemented in Tails 3.9 is the automatic installation of additional software.

This means the distro can automatically install software updates at startup. Users can opt in to future updates of an app and manage automatically updated apps from the menu entry Applications > System Tools > Additional Software.

Tails 3.9 includes Mozilla Thunderbird 60 as the new default RSS and Atom feed reader, replacing Liferea, and Tor Browser 8.0, the anonymous web browser based on Firefox 60 ESR.

Get Tails 3.9
To install, follow our installation instructions.
To upgrade, automatic upgrades are available from 3.7.1, 3.8, and 3.9~rc1 to 3.9. If you cannot do an automatic upgrade, or if Tails fails to start after an automatic upgrade, please try to do a manual upgrade.
Download Tails 3.9.
Tails 3.10 is expected on October 23, 2018.


Fappening case – Another hacker who leaked celebrities' naked photos was sentenced to 8 months in prison
2.9.2018 securityaffairs Privacy

Fappening – The hacker George Garofano (26), who leaked celebrities' naked photos and attempted to trade them, was sentenced to 8 months in prison
The sentence for the fourth hacker involved in the leakage of celebrities' naked photos, also known as the Fappening case, has arrived. George Garofano, 26, of North Branford, has been sentenced to eight months in prison; he was charged earlier this year with hacking into over 250 Apple iCloud accounts belonging to Hollywood celebrities.

As part of the Fappening case, nude pictures of many celebrities were leaked online; the list of victims is long and includes Kim Kardashian, Kate Upton, and Jennifer Lawrence.

Garofano had been arrested by the FBI and a federal court has accused him of violating the Computer Fraud and Abuse Act.

From April 2013 through October 2014, Garofano used phishing attacks against the victims to obtain their iCloud account credentials, access the accounts and steal personal information, including private photographs and videos.

Garofano also traded the stolen credentials, as well as the information he stole from the victims’ accounts, with other individuals.

In a plea agreement signed in January in U.S. District Court in Los Angeles, Garofano agreed to plead guilty to one count of unauthorized access to a protected computer to obtain information.

Prosecutors requested a sentence of 10 to 16 months in prison, while his attorney requested a sentence of no more than ten months, half of it to be served under home arrest.

In the final decision, a judge at the US district court in Bridgeport sentenced Garofano to eight months in prison and three years of supervised release after his prison term is over.

“John H. Durham, United States Attorney for the District of Connecticut, announced that GEORGE GAROFANO, 26, of North Branford, was sentenced today by U.S. District Judge Victor A. Bolden in Bridgeport to eight months of imprisonment, followed by three years of supervised release, for engaging in a phishing scheme that gave him illegal access to more than 200 Apple iCloud accounts, many of which belonged to members of the entertainment industry.” reads the press release published by the Department of Justice.

“GAROFANO, who is released on a $50,000 bond, was ordered to report to prison on October 10, 2018. Judge Bolden ordered GAROFANO to perform 60 hours of community service while on supervised release.”

The other hackers involved in the Fappening case are Edward Majerczyk, Ryan Collins, and Emilio Herrera; the latter is still awaiting his sentencing.

Garofano was the only one who traded the stolen iCloud credentials and the celebrities' naked photos.


The Expected Spike in Post-GDPR Spam Activity Hasn't Happened
30.8.2018 securityweek Privacy

For many months it was expected that privacy protections afforded to consumers by GDPR would also benefit the bad guys. It was feared that security researchers would no longer be able to track new bad domains through WHOIS data, that spammers would rush to register new domains under new GDPR-enforced anonymity, and that spam would spike once GDPR became effective in May 2018. It hasn't happened.

A new analysis from Recorded Future, combining spam volume data from Cisco's research and domain registrations from its own data sets, shows -- if anything -- the reverse to be true. Allan Liska, threat intelligence analyst at Recorded Future, told SecurityWeek, "In the first 90 days since GDPR has been enacted, spam has actually declined, even taking into account the normal seasonal slump that happens during the summer. But it's not just spam volumes," he said, "but there have also been fewer new domains registered in the spammy gTLDs that tend to have a lot of spam."

Post-GDPR Spam Levels

In raw figures, according to Cisco's data, total email as of May 1, 2018 stood at 433.9 billion messages, with spam accounting for 85.28% of all email. By August 1, 2018, total email had fallen to 361.83 billion (probably helped by GDPR's privacy requirements on email marketing) with spam making up 85.14%. Those figures show spam dropping from 370 billion messages in early May (pre-GDPR) to 308 billion messages in August (post-GDPR).
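Those derived totals can be cross-checked with a couple of lines of arithmetic; the snippet below simply reproduces the 370 and 308 billion figures from Cisco's raw totals and spam percentages.

```python
# Reproducing the spam-volume figures quoted above from Cisco's raw numbers.
may_total, may_spam_pct = 433.9e9, 0.8528   # May 1, 2018
aug_total, aug_spam_pct = 361.83e9, 0.8514  # August 1, 2018

print(f"May spam:    {may_total * may_spam_pct / 1e9:.0f} billion")  # ~370 billion
print(f"August spam: {aug_total * aug_spam_pct / 1e9:.0f} billion")  # ~308 billion
```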

The reason for the drop is not clear. "I don't have a good answer yet," Liska told SecurityWeek. "It may be that the bad guys are sitting back and waiting to see how it all plays out before they decide on their path forward." The last thing they will want to do is register a bunch of domains, "and then everything gets worked out between ICANN and the GDPR -- and the security researchers have full access to that WHOIS data again. So, these groups may be in a wait-and-see mode."

Another possible cause for the non-appearance of the expected post-GDPR spike in spam is that the spammers don't think anything has really changed. It's true that most registrars have enacted the GDPR restrictions, so WHOIS data is now unavailable. "But there's no slackening in the researchers' use of other methodologies to track spam domains," said Liska. "IP tracking, for example: jumping from IP address to IP address through domain algorithms. That type of capability is still being used by the researchers, and researchers are relying more heavily on such techniques."

Possibly arguing in favor of a 'wait and see' approach by the spammers is a corresponding drop in new domain registrations in the more spammy gTLD domains. "Spamhaus keeps track of the top 10 most malicious domains," explained Liska, "and all of the gTLDs saw a strong drop in activity post GDPR. That's a bit unusual and you would think that if nothing's changed from the bad guys' point of view, then they would continue registering these bad domains -- but that doesn't seem to be the case."

The implication of this lack of activity in the spammy gTLDs is that spammers aren't currently planning any major new spam campaigns for the immediate future.

The best possible cause for this dip in spam -- which cannot be fully explained by the summer recess -- is that defenders are finally winning the battle against spam. New techniques and technologies -- especially machine learning algorithms -- are getting better at recognizing and blocking spam. If the spammers aren't making money, they'll move on to something different.

"We've seen a big drop in dot-biz registrations," commented Liska. Dot-biz has long been considered a particularly spammy gTLD, with around 40% of dot-biz domains considered to be spam domains. That still means that 60% of these are genuine, making it impossible to simply block all dot-biz domains. However, with machine learning working to probabilities rather than binary decisions, dot-biz plus a few other lesser flags could rapidly identify and block spam.
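As a rough illustration of that probabilistic approach (the feature weights below are invented for the example, not taken from any real classifier), weak signals can be combined as summed log-odds, so that no single flag such as a dot-biz TLD justifies blocking a domain, but several together do:

```python
# Toy illustration of probabilistic scoring: no single weak signal
# (like a .biz TLD) justifies blocking, but several together can.
# Weights are invented for illustration, not from any real classifier.
import math

FEATURE_LOG_ODDS = {
    "tld_biz": 0.9,            # ~40% of .biz domains are considered spammy
    "registered_today": 1.2,
    "registrar_bulk_discount": 0.8,
    "has_mx_record": -0.5,     # a legitimate mail setup lowers suspicion
}

def spam_probability(features: set, prior_log_odds: float = -2.0) -> float:
    """Combine weak signals into a probability via summed log-odds."""
    score = prior_log_odds + sum(FEATURE_LOG_ODDS.get(f, 0.0) for f in features)
    return 1.0 / (1.0 + math.exp(-score))

print(spam_probability({"tld_biz"}))  # ~0.25: suspicious but not blockable alone
print(spam_probability({"tld_biz", "registered_today", "registrar_bulk_discount"}))  # ~0.71
```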

Where machine learning is particularly useful, added Liska, is in identifying new phishing and watering hole domains. This would include typosquatting domains and cybersquatting domains. The former would include look-alike spellings and a range of country suffixes that could be mistyped. Liska gave an example of misspelled domains. "Take Wells Fargo," he said. "There are numerous ways that could be misspelled while still looking the same. The letter 'o' could be changed to the numeric zero (Wells Farg0) or the lowercase 'ls' could be changed to uppercase 'Is' (WeIIs Fargo)."

Other examples include registering a well-known brand name in a country-specific domain -- such as dot-cm (Cameroon), dot-co (Colombia), dot-om (Oman) and dot-ne (Niger). Each suffix is only a short typo from the major dot-com and dot-net suffixes. Criminals register big brand names in these country domains and wait for people to make a typing mistake to visit their sites.
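Defenders hunt such domains by generating lookalike candidates and watching new registrations for them. The sketch below, with an illustrative substitution table and TLD list (both invented for the example), covers the two techniques just described against the Wells Fargo example:

```python
# Generating lookalike candidates for a brand, as defenders do when
# hunting typosquats. Substitutions and TLD list are illustrative.
HOMOGLYPHS = {"o": ["0"], "l": ["I", "1"], "i": ["l", "1"], "e": ["3"]}
TYPO_TLDS = ["cm", "co", "om", "ne"]  # one keystroke away from .com / .net

def lookalikes(brand: str, tld: str = "com"):
    seen = set()
    # single-character homoglyph swaps: wellsfargo -> wellsfarg0, weIlsfargo, ...
    for i, ch in enumerate(brand):
        for sub in HOMOGLYPHS.get(ch.lower(), []):
            domain = f"{brand[:i]}{sub}{brand[i+1:]}.{tld}"
            if domain not in seen:
                seen.add(domain)
                yield domain
    # TLD-swap squats: wellsfargo.cm, wellsfargo.co, ...
    for t in TYPO_TLDS:
        yield f"{brand}.{t}"

for candidate in lookalikes("wellsfargo"):
    print(candidate)  # feed into new-registration feeds / WHOIS checks
```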

Cybersquatting domains do not rely on the user making an error while typing the domain, but on appearing to be a genuine company URL. Bad actors will sometimes add a new company tag line or use a new product and register it as a new domain. An example can be seen in the 2017 registration of 'cybfx.co.uk' by a resident of Beijing immediately after the Clydesdale Bank introduced its new online cyber foreign exchange service called CYBFX. It wasn't until February 14, 2018 that the UK domain registrar declared it an 'abusive registration' and transferred ownership to Clydesdale Bank.

"There are hundreds of thousands of new domains registered every day," commented Liska. "It's almost impossible for a human to look through all of those domains to find bad things. Machine learning helps us find new registrations that are meant to look like well-known sites."

Does all this mean that the battle against the spammers is being won? "That would be great, but I don't think we can say that yet," he replied. While we can conjecture about why spam and new spam registrations have dipped rather than spiked after GDPR, we will have to wait on the results of the next few 90-day analyses that Recorded Future intends to deliver.

"I've been in this industry for almost 18 years," Liska told SecurityWeek, "and for 18 years we've been fighting spam and losing. I'd love to see that this dip is the beginning of a decline in spam; but my feeling is the bad guys are figuring out how to regroup, and assess the situation. They'll figure out a way round any problems, because that's what they always do. At some point we'll probably see a new uptick in spam volumes again, so we better just enjoy the lull while we have it."

Boston, Mass.-based Recorded Future, founded in 2009, raised $25 million in a Series E funding round led by Insight Venture Partners in October 2017 -- bringing the total funding raised to $57.9 million.


As Facial Recognition Use Grows, So Do Privacy Fears
12.7.2018 securityweek  Privacy

The unique features of your face can allow you to unlock your new iPhone, access your bank account or even "smile to pay" for some goods and services.

The same technology, using algorithms generated by a facial scan, can allow law enforcement to find a wanted person in a crowd or match the image of someone in police custody to a database of known offenders.

Facial recognition came into play last month when a suspect arrested for a shooting at a newsroom in Annapolis, Maryland, refused to cooperate with police and could not immediately be identified using fingerprints.

"We would have been much longer in identifying him and being able to push forward in the investigation without that system," said Anne Arundel County police chief Timothy Altomare.

Facial recognition is playing an increasing role in law enforcement, border security and other purposes in the US and around the world.

While most observers acknowledge the merits of some uses of this biometric identification, the technology evokes fears of a "Big Brother" surveillance state.

Heightening those concerns are studies showing facial recognition may not always be accurate, especially for people of color.

A 2016 Georgetown University study found that one in two American adults, or 117 million people, are in facial recognition databases with few rules on how these systems may be accessed.

A growing fear for civil liberties activists is that law enforcement will deploy facial recognition in "real time" through drones, body cameras and dash cams.

"The real concern is police on patrol identifying law-abiding Americans at will with body cameras," said Matthew Feeney, specialist in emerging technologies at the Cato Institute, a libertarian think tank.

"This technology is of course improving but it's not as accurate as science fiction films would make you think."

- 'Aggressive' deployments -

China is at the forefront of facial recognition, using the technology to fine traffic violators and "shame" jaywalkers, with at least one arrest of a criminal suspect.

Clare Garvie, lead author of the 2016 Georgetown study, said that in the past two years, "facial recognition has been deployed in a more widespread and aggressive manner" in the US, including for border security and at least one international airport.

News that Amazon had begun deploying its Rekognition software to police departments sparked a wave of protests from employees and activists calling on the tech giant to stay away from law enforcement applications.

Amazon is one of dozens of tech firms involved in facial recognition. Microsoft for example uses facial recognition for US border security, and the US state of Maryland uses technology from Germany-based Cognitec and Japanese tech firm NEC.

Amazon maintains that it does not conduct surveillance or provide any data to law enforcement, but simply enables them to match images to those in its databases.

The tech giant also claims its facial recognition system can help reunite lost or abducted children with their families and stem human trafficking.

- 'Slippery slope' -

Nonetheless, some say facial recognition should not be deployed by law enforcement because of the potential for errors and abuse.

That was an argument made by Brian Brackeen, founder and chief executive officer of the facial recognition software developer Kairos.

"As the black chief executive of a software company developing facial recognition services, I have a personal connection to the technology, both culturally and socially," Brackeen said in a blog post on TechCrunch.

"Facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens -- and a slippery slope to losing control of our identities altogether."

The Georgetown study found facial recognition algorithms were five to 10 percent less accurate on African Americans than Caucasians.

- Policy questions -

Microsoft announced last month it had made significant improvements for facial recognition "across skin tones" and genders.

IBM meanwhile said it was launching a large-scale study "to improve the understanding of bias in facial analysis."

While more accurate facial recognition is generally welcomed, civil liberties groups say specific policy safeguards should be in place.

In 2015, several consumer groups dropped out of a government-private initiative to develop standards for facial recognition use, claiming the process was unlikely to develop sufficient privacy protections.

Cato's Feeney said a meaningful move would be to "purge these databases of anyone who isn't currently incarcerated or wanted for violent crime."

Jennifer Lynch, an attorney with the Electronic Frontier Foundation, said that the implications for police surveillance are significant.

"An inaccurate system will implicate people for crimes they did not commit. And it will shift the burden onto defendants to show they are not who the system says they are," Lynch said in a report earlier this year.

Lynch said there are unique risks of breach or misuse of this data, because "we can't change our faces."

Evan Selinger, a philosophy professor at the Rochester Institute of Technology, says facial recognition is too dangerous for law enforcement.

"It's an ideal tool for oppressive surveillance," Selinger said in a blog post.

"It poses such a severe threat in the hands of law enforcement that the problem cannot be contained by imposing procedural safeguards."


California, Home of Silicon Valley, Ramps Up Online Privacy Law
29.6.2018 securityweek  Privacy

California on Thursday passed a strict new law aimed at protecting people's privacy online, a move that promised to shift the terrain on which internet firms operate in the wake of recent scandals.

The bill, signed into law by Governor Jerry Brown, followed in the spirit of the General Data Protection Regulation, which recently took effect in Europe.

The legislation cut off an initiative that was heading for the state ballot in the fall.

It was crafted to ensure rights including knowing what personal information is collected by companies on the internet and whether it is sold, and to whom, according to the bill signed by Brown.

The law also gives people a right to "say no" to the sale of their personal information, and calls for them to be treated the same as anyone else online if they opt to restrict use of their data.

Internet businesses that receive "verifiable" requests by people to have their data deleted will be required to do so, with a list of exceptions that include keeping what is needed to complete transactions, detect security breaches, or protect against illegal activity.

"A consumer shall have the right, at any time, to direct a business that sells personal information about the consumer to third parties not to sell the consumer's personal information," the legislation said.

"This right may be referred to as the right to opt out."

Business home pages will be required to provide "clear and conspicuous" links titled "Do Not Sell My Personal Information" that take people to opt-out pages.

People whose personal information is stored unencrypted and not sufficiently protected were also given the right to pursue civil claims.

The shift both in Europe and California came after the harvesting of Facebook users' data by Cambridge Analytica, a US-British political research firm, for the 2016 US presidential election.

- Potential to spread -

Nonprofit advocacy group Consumer Watchdog called the California legislation "landmark reform" and branded it the toughest state privacy law in the US.

"Silicon Valley companies will very likely implement many of these reforms across their entire customer base, not just for Californians," said Consumer Watchdog president Jamie Court.

"California has led the way and Californians must be ever vigilant in the next year that the legislature does not undermine these protections at the behest of tech lobbyists and moguls."

The Internet Association, an industry lobbying group, expressed concerns about the law, saying there was a lack of public input as it was hurried through the legislative process.

"Data regulation policy is complex and impacts every sector of the economy, including the internet industry," association vice president of state government affairs Robert Callahan said in a statement posted on its website.

"That makes the lack of public discussion and process surrounding this far-reaching bill even more concerning."

Callahan contended that California policymakers will need to "correct the inevitable, negative policy and compliance ramifications this last-minute deal will create for California's consumers and businesses alike."

The list of Internet Association members includes titans such as Amazon, Facebook, Google, Microsoft, Netflix and Twitter.

During a meeting Thursday with reporters at Facebook's headquarters in Silicon Valley, chief operating officer Sheryl Sandberg said the leading social network supported the California legislation.


Facebook, Google 'Manipulate' Users to Share Data Despite EU Law: Study
29.6.2018 securityweek  Privacy

Facebook and Google are pushing users to share private information by offering "invasive" and limited default options despite new EU data protection laws aimed at giving users more control and choice, a government study said Wednesday.

The Norwegian Consumer Council found that the US tech giants' privacy updates clash with the new General Data Protection Regulation (GDPR), which forces companies to clarify what choices people have when sharing private information.

"These companies manipulate us into sharing information about ourselves," the council's director of digital services, Finn Myrstad, said in a statement.

"(This) is at odds with the expectations of consumers and the intention of the new Regulation," the 2018 study, entitled "Deceived By Design", concluded.

Myrstad said the practices showed "a lack of respect for their users, and are circumventing the notion of giving consumers control of their personal data".

The case for the new laws has been boosted by the recent scandal over the harvesting of Facebook users' data by British consultancy Cambridge Analytica for the 2016 US presidential election.

Information for the report was collected from mid-April to early June, a few weeks after the EU rules came into force.

- 'Very few actual choices' -

The report found that Facebook and Google often set the least privacy-friendly option as the default and that users rarely change pre-selected settings.

Privacy-friendly choices "require more clicks and are often hidden," it said.

"In many cases, the services obscure the fact that users have very few actual choices, and that comprehensive data sharing is accepted just by using the service," the study said.

But Facebook on Wednesday denied hiding the options from users and said it had spent 18 months preparing to meet the GDPR requirements.

"We have made our policies clearer, our privacy settings easier to find and introduced better tools for people to access, download, and delete their information," the company's spokesman told Norwegian public broadcaster NRK.

The EU has billed the GDPR as the biggest shake-up of data privacy regulations since the birth of the web.

The social media giant and Google separately already face their first official complaints under the new law after an Austrian privacy campaigner accused them of forcing users to give their consent to the use of their personal information.

Companies can be fined up to 20 million euros ($24 million) or four percent of annual global turnover for breaching the strict new data rules for the European Union, a market of 500 million people.
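
As a concrete illustration of how that cap works -- the applicable maximum is whichever of the two figures is higher -- here is a minimal sketch in Python (ours, for illustration only):

```python
# Illustrative sketch of the GDPR maximum-fine rule quoted above:
# the cap is the greater of EUR 20 million or 4% of annual global turnover.
def gdpr_max_fine_eur(annual_global_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# A company with EUR 2bn turnover faces a cap of EUR 80m, not EUR 20m:
print(f"{gdpr_max_fine_eur(2_000_000_000):,.0f}")  # 80,000,000
```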


Inside the Legislative and Regulatory Minefield Confronting Cybersecurity Researchers
19.6.2018 securityweek Privacy

Legislation – especially complex legislation – often comes with unintended consequences. The EU’s General Data Protection Regulation (GDPR), which came into force May 25, 2018, is an example of complex legislation.

GDPR, and other cybersecurity laws, are designed to protect privacy and property in the cyber domain. There is, however, concern that many of these laws have a common unintended consequence: in protecting people from cybercriminals, the laws also protect cybercriminals from security researchers.

The question is whether security research is unintended but inevitable collateral damage of cybersecurity legislation. While focusing on GDPR, this examination will also consider other legislation, such as the CLOUD Act, the Computer Fraud and Abuse Act (CFAA), and the Computer Misuse Act (CMA).

The WHOIS issue

One immediate example involves GDPR, the Internet Corporation for Assigned Names and Numbers (ICANN) and the WHOIS database/protocol. ICANN maintains a global database of internet domain registrations that has been readily available to security vendors and security researchers.

Researchers with one known malicious domain have been able to cross-reference details via WHOIS to locate, at speed, potentially thousands of other malicious domains registered at the same time or by the same person or with the same contact details.
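
That workflow is easy to sketch. The following is a minimal illustration (ours, not any vendor's tooling), assuming the third-party python-whois package and using hypothetical placeholder domain names:

```python
# Minimal sketch of the WHOIS pivoting described above, assuming the
# third-party python-whois package (pip install python-whois).
# All domain names are hypothetical placeholders.
import whois

def registrant_fingerprint(domain):
    """Collect registrant details usable for cross-referencing."""
    record = whois.whois(domain)
    # Fields vary by registry/registrar and may be None or lists.
    return (record.get("emails"), record.get("org"), record.get("name"))

seed = registrant_fingerprint("known-malicious.example")
candidates = ["candidate-1.example", "candidate-2.example"]

# Flag candidates whose registrant details match the known-bad domain.
related = [d for d in candidates
           if any(seed) and registrant_fingerprint(d) == seed]
print("Domains sharing registrant details:", related)
```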

However, registrant details of EU residents are now protected data under GDPR. ICANN can no longer share that data with third parties – effectively meaning that researchers can no longer access WHOIS data to discover potentially malicious domains and protect the public from spam or phishing campaigns involving those domains.

“Many critical cybersecurity activities,” explains Sanjay Kalra, co-founder and chief product officer at Lacework, “like spam mitigation and botnet takedowns – depend on publicly-available records to identify those responsible for cyber-attacks, botnets, or spam email campaigns. The Internet’s domain management system (ICANN) identifies a domain’s owner, and researchers use that information to identify the culprits responsible for some of the Internet’s most damaging attacks.”

In this example of an unintended consequence there is a desire on all sides to solve the problem. ICANN and the European regulators have been discussing the issue for many months; and on 17 May, ICANN adopted a Temporary Specification for gTLD Registration Data.

Cherine Chalaby, chair of the ICANN board of directors, blogged, “The adoption of this Temporary Specification is a major and historic milestone as we work toward identifying a permanent solution. The ICANN org and community have been closely working together to develop this interim model ahead of the GDPR’s 25 May 2018 enforcement deadline, and the Board believes that this is the right path to take.”

This is one unintended consequence that might find a solution. “There is still hope to see a compromise where private data would stay protected while allowing the ‘good guys’ to research and fight cybercrime,” comments ESET security intelligence team lead Alexis Dorais-Joncas. “Some proposed solutions would allow, for example, certified third parties such as law enforcement or researchers to access the redacted part of the WHOIS data.”

Laws Impacting Cybersecurity and Research

Nevertheless, the wider issue of unintended consequences on security researchers remains. Can researchers download stolen credentials for analysis; can they probe a potential C&C server and evaluate personal details; can they take over the email accounts of scammers for research purposes?

Laws, Regulations and Researchers
Privacy regulations, including GDPR, generally make some concessions – for example, for national security issues and for law enforcement investigations under certain circumstances. Independent researchers do not fall directly within either category.

Whether or not security research is permitted or denied by GDPR and other regulations is a complex issue with many different interpretations – and SecurityWeek spoke to several researchers for their understanding of the difficulties.

Erka Koivunen, CISO at F-Secure, suggests that potential problems are neither new nor restricted to GDPR. “Security research on stolen datasets has been problematic even under earlier laws,” he said, adding that other regulations, such as the EU’s export control regulations and the Wassenaar Arrangement, “are almost hostile to security research and testing.”

He noted, however, that European regulators have not so far been too concerned about the ‘notification’ requirement of existing regulations (except for the telecoms-specific regulation).

His colleague, F-Secure’s principal security consultant Jarosław Kamiński, widens the researchers’ problem from privacy laws (such as GDPR) to property laws (such as CFAA and CMA): “Obtaining data from C2 servers and cache hives may constitute computer break-in if authorization and access control mechanisms were circumvented in the process.”

Immediately, three important issues have been raised: adequate authorization; whether the regulators are strict in interpretation and enforcement; and the effect of other regulations (such as the U.S. Computer Fraud and Abuse Act and the UK’s Computer Misuse Act).

Authorization and other regulations
The ‘authorization’ issue includes a common belief that researchers can ignore regulations if they have been authorized to do so by the FBI. Luis Corrons, security evangelist at Avast, comments, “Researchers cannot legally hack into a potential C&C. The only way to do that is in cooperation with law enforcement.”

Josu Franco, strategy and technology advisor at Panda Security, has a similar viewpoint. “Private individuals or companies,” he told SecurityWeek, “do not have the right to hack back at servers, unless it is done by (or in collaboration with) law enforcement as part of an investigation. So, it would be illegal for a researcher to hack into a server by himself/herself and download its contents.”

Corrons believes this process could be made easier and clearer if governments were to more actively encourage collaboration between public agencies and private researchers.

However, it isn’t clear that the FBI in the U.S. and other agencies elsewhere can authorize otherwise illegal cyber activity. “I do not believe this is ever OK under any law, or at least not in the U.S.,” comments Brian Bartholomew, principal security researcher at Kaspersky Lab. “‘Hacking’ into any system implies unauthorized access, which is illegal under the Computer Fraud and Abuse Act.”

He continued, “While there have been proposals to allow network defenders to ‘hack back’ under certain circumstances, such as the Active Cyber Defense Certainty (ACDC) Act, none have been enacted into law and they have raised significant concerns among various stakeholders in the cyber ecosystem. In my opinion, the legal implications of such an approach alone make such proposals problematic all around.”

Scott Petry, CEO and co-founder of Authentic8, has a similar view. “I’m not aware of instances where a government agency has legitimized hacking activity,” he said. “The FBI would not have jurisdiction in non-US regions, so the EU agencies would no more honor the FBI’s approval of the activity than U.S. law enforcement would do if Interpol or another EU agency legitimized an attack against US resources. So, I don’t think there’s a free pass that either organization can offer to parties in the other region."

The recent CLOUD Act adds a further complication. Technically, this allows the FBI to authorize an otherwise illegal action – indeed, the FBI can insist upon it. The FBI can now demand access to EU PII held by a U.S. company anywhere in the world – which could place that company in conflict with GDPR.

Furthermore, some researchers believe that CLOUD will have a chilling effect on future U.S. research. “Given that pen-testing and research require either formal agreements or navigating challenging questions around legal-to-do research, the CLOUD Act is incredibly problematic,” comments Robert Burney, technical instructor at SecureSet. “Legally, a security researcher cannot test cloud infrastructure without breaking laws, and this increases that risk.” He fears that researchers will avoid testing cloud infrastructure at the same time as more and more companies are adopting it.

“This increase in policy and politics,” he suggests, “will prevent high quality research in the United States and reduce our overall security… This is an inherent risk in both the GDPR and CLOUD Act. Security researchers will need to know almost as much about legal policy as they do the computers they research – and this will slow our ability to improve overall security.”

Adam McNeil, senior malware intelligence analyst at Malwarebytes, is less concerned. “The Computer Fraud and Abuse Act (U.S.) and the Computer Misuse Act (UK) prohibit unauthorized access to computer systems, but both offer protections for academic and private sector research. Taken together, security researchers have a somewhat clear responsibility: don’t access systems without authorization; and if vulnerabilities are found, submit the information via responsible disclosure practices.”

But still there are problems and difficulties. Joseph Savirimuthu, senior lecturer in law at the Liverpool Law School, points out that there is no formal definition distinguishing the researcher from the hacker. In the UK, “The Computer Misuse Act 1990 and new data breach notification rules do not distinguish between White Hat and Black Hat. Neither does the Fraud Act 2006.”

Interpretation and enforcement of GDPR
In the final analysis, what a law or regulation says is not as important as how the enforcers of the law (law enforcement or official regulators) respond to that law; and ultimately how judges interpret that law. GDPR is a particularly difficult example. Firstly, despite its ‘unifying’ intention, there is a degree of flexibility that allows different EU nations to implement the law according to national preferences.

Secondly, it is still ‘enforced’ by national regulators who may vary in the severity of their interpretation. The UK, for example, is traditionally more business-friendly in its application of privacy laws than some of its European partners – such as France and Germany.

Thirdly, there is inevitably a degree of ambiguity in words that must be translated into multiple languages. For example, when Julian Assange attempted to overturn his Sweden-issued arrest warrant, UK judges chose to use the French language version of the relevant EU law to assert its validity rather than the English language version (it was potentially not valid under a strict interpretation of the English language version of the same European law).

Such vagaries leave it far from clear whether security researchers will be allowed to continue their work or discouraged from doing so – and the security researchers themselves have widely differing views.

Joel Wallenstrom, CEO at Wickr, is cautiously hopeful. “We don’t anticipate that GDPR will make security research more difficult than it already is for many infosec researchers. GDPR-specific implications aren’t yet clear. Enforcement actions after May 25 will certainly provide signal to the industry on where the EU regulation is steering. Downloading a database to perform forensic analysis does not necessarily translate into becoming a ‘data operator’ or ‘processor’ under GDPR.”

“The notion that legitimate security researchers would be held responsible for GDPR is silly,” comments Kathie Miley, COO of Cybrary. “As long as they are not downloading or processing personal data there is no applicability to GDPR. Also, there is no logical reason a white hat security researcher would need to download or process an EU citizen’s personal data.”

This is not a universal view. Researchers sometimes download stolen PII from the dark web or paste sites for their own analysis. And although it is commonly held that researchers who treat PII responsibly and in accordance with the principles of GDPR will not be held to account, there is no such guarantee within GDPR or other cyber legislation.

Robin Wood (aka DigiNinja) is an independent penetration tester who has done much work on passwords. When asked if GDPR would make him think twice about downloading PII (in the form of stolen and dumped user IDs and passwords), he replied, “I don’t know whether technically it would be a breach to hold the data, but I tend to hold these types of things for fairly short periods then get rid of them. When I publish things, it is always aggregated enough to be anonymized, so the publishing shouldn’t be an issue.”

He doesn’t know the answer to the problem, but does not intend to change his current behavior.

Precedent
A common problem with many laws is that they tend to be abused by law enforcement agencies seeking to find legal justification for a preferred course of action. ‘Overreach’ is a common criticism.

Examples could include the U.S. government’s use of the Stored Communications Act of 1986 to issue a search warrant on Microsoft for data held in Ireland – an action rendered moot by the passing of the CLOUD Act.

In 2012, one-time self-styled Anonymous spokesperson Barrett Brown was indicted on charges that included posting a link to stolen Stratfor data that was already available on the internet. Around the same time, Jeremy Hammond was being charged with involvement in the actual Stratfor hack, apparently having been urged to carry it out by the FBI informant Hector Xavier Monsegur (aka Sabu) of the LulzSec hacking crew.

While Hammond eventually pled guilty and received the maximum 10 years' jail time for ‘doing’ the hack, Brown faced 45 years for posting a link to some of the proceeds from the hack.

However, perhaps the iconic example of overreach accusations involved Aaron Swartz, the co-founder of Reddit. Swartz was accused of breaking into MIT and illegally downloading millions of academic articles from the subscription-only JSTOR. It is claimed he simply felt that these academic articles should be freely available to everyone.

Swartz faced charges of computer fraud, wire fraud and other crimes carrying a maximum sentence of 35 years in prison and a $1 million fine. He committed suicide – and the charges were posthumously dropped.

These are rare occurrences and do not directly relate to security research; but they demonstrate that laws can potentially be misused or used for purposes not originally intended by the lawmakers.

A view from the UK GDPR regulator
Given the scope for confusion over the effect of GDPR on security researchers, SecurityWeek approached the UK Information Commissioner’s Office for comment.

An ICO spokesperson told SecurityWeek: “The ICO recognizes the valuable work undertaken by security researchers. Data protection law neither prevents nor prohibits such activity. As the GDPR states, processing personal data for the purposes of network and information security (and the security of related services) constitutes a legitimate interest of the data controller concerned. Organizations that do this must nevertheless ensure that the processing is essential for these purposes, is proportionate to what they are trying to achieve, and that they follow the GDPR’s requirements.

“Further,” he added, “the Data Protection Bill [the UK-specific law being readied to replace GDPR following Brexit] also includes a specific defense relating to the re-identification of de-identified personal data; eg where a security researcher may seek to test the effectiveness of the de-identification process.”

(It is worth noting that there is no guarantee that the UK’s GDPR replacement will be considered ‘adequate’ by the EU post-Brexit. This could introduce further complications for UK researchers. The same activity could be considered legal by the ICO, but illegal by, for example, the French CNIL regulator.)

The GDPR statement on the legitimate interest of security researchers is found in recital 49. In full, it states:

“The processing of personal data to the extent strictly necessary and proportionate for the purposes of ensuring network and information security, i.e. the ability of a network or an information system to resist, at a given level of confidence, accidental events or unlawful or malicious actions that compromise the availability, authenticity, integrity and confidentiality of stored or transmitted personal data, and the security of the related services offered by, or accessible via, those networks and systems, by public authorities, by computer emergency response teams (CERTs), computer security incident response teams (CSIRTs), by providers of electronic communications networks and services and by providers of security technologies and services, constitutes a legitimate interest of the data controller concerned.”

Noticeably, it does not state that ‘security research’ is allowed – only that some forms of research strictly limited to what is ‘necessary and proportionate’ can be defined as a ‘legitimate interest’.

Summary
Cyber laws are good faith attempts by lawmakers to protect the privacy and computer-related property of internet users. The difficulty with all laws is that once they are enacted, interpretive control passes to the law enforcers and judiciary.

Different jurisdictions may treat the same law differently, while different judges may interpret the letter of the law differently.

The UK ICO has made it clear that it does not believe that genuine research is precluded by the GDPR – indeed, recital 49 specifically allows it. Nevertheless, even the allowance of security research hinges on the value judgment of what is meant by ‘strictly necessary and proportionate for the purposes of ensuring network and information security’.

It is likely that security researchers who treat personal data in the way companies are required to treat personal data, and who delete it after use, will not be treated as in breach of GDPR.

“Researchers need to have a legitimate reason to have to work with the data,” explains Michael Aminzade, VP global compliance and risk service at Trustwave. “If they have reason because they are looking into the impacts of a breach then for the time of the research they will be OK to work with this data, but they need to work with it within the bounds of the regulations. Once the research is finished this data can’t be kept ‘in case’ of future research, as there is no legitimate need at that point – so the data should be deleted within the requirements of the regulation.”

However, where research requires breaking into and analyzing data found on a suspected criminal C&C server, the researcher is on less solid ground. The consensus among security firms is that this should only ever be done in conjunction with, for, or on behalf of, law enforcement in an ongoing investigation. However, it is unlikely that law enforcement approval has any actual weight in law.

The reality is that all forms of active security research must tread a very fine line between legal and illegal activity – and ultimately it will be up to the courts to decide on the legality or illegality of any specific regulator-challenged research.


U.S. Commerce Chief Warns of Disruption From EU Privacy Rules
30.5.2018 securityweek Privacy 

Washington - US Commerce Secretary Wilbur Ross warned Wednesday that the new EU privacy rules in effect since last week could lead to serious problems for business, medical research and law enforcement on both sides of the Atlantic.

Ross said US officials were "deeply concerned" about how the General Data Protection Regulation would be implemented, while noting that the guidance so far has been "too vague."

The law, which took effect May 25, establishes the key principle that individuals must explicitly grant permission for their data to be used, and gives consumers the right to know who is accessing their information and what it will be used for.

Some US officials have expressed concerns about the GDPR, but Ross is the highest ranking official to speak on the law, and his comments address a broad range of sectors that could be affected.

"We do not have a clear understanding of what is required to comply. That could disrupt transatlantic cooperation on financial regulation, medical research, emergency management coordination, and important commerce," Ross said in an opinion piece for the Financial Times.

The costs of the new law could be significant, to the point where it may "threaten public welfare on both sides of the Atlantic," according to Ross.

"Complying with GDPR will exact a significant cost, particularly for small and medium-sized enterprises and consumers who rely on digital services and may lose access and choice as a result of the guidelines," he wrote.

"Pharmaceutical companies may not be able to submit medical data from drug trials involving European patients to US authorities, which could delay the approval of new life-saving drugs."

He added that the US Postal Service has claimed the new rules could prevent EU postal operators from providing the data needed to process inbound mail.

Ross also echoed concerns from other officials that the EU requirement that personal data be restricted from the internet address book known as "WHOIS" could hurt law enforcement efforts to crack down on cybercrime and online calls to violence.

"That could stop law enforcement from ascertaining who is behind websites that propagate terrorist information, sponsor malicious botnets or steal IP addresses," he said.

"These important activities need to be weighed carefully against privacy concerns. They are critical to building trust in the internet, safeguarding infrastructure, and protecting the public. Our respect for privacy does not have to come at the expense of public safety.


Email Leakage - An Overlooked Backdoor to GDPR Failure
25.5.2018 securityweek Privacy   

On May 25, 2018, two years after it was adopted by the European Union, the General Data Protection Regulation (GDPR) came into force. For two years companies have been bombarded with offers for GDPR solutions from security firms, and publications have been bombarded with surveys claiming that only n% of firms are ready for, or even understand, GDPR.

In truth, however, the 'data protection' element in GDPR is little different to pre-existing European laws. The GDPR changes come in the way user data is gathered, stored, processed, and made accessible to users; in breach disclosure; and in the severity of non-compliance fines.

That said, companies can learn from last year's data protection non-compliance incidents to gain insight into next year's potential GDPR non-compliance fines. One source is the statistics available from the Information Commissioner's Office (ICO -- the UK data protection regulator).

The ICO's latest 'Data security incident trends' report was published on 14 May 2018. During Q4, the ICO levied just a single fine: £400,000 on Carphone Warehouse Ltd "after serious failures put customer data at risk." There were, however, a total of 957 reported data security incidents. The ICO defines these as "a major concern for those affected and a key area of action for the ICO."

An analysis of those incidents is revealing. Healthcare -- a major worldwide criminal target for extortion and theft of PII -- reported a total of 349 data security incidents in Q4. The most common incidents were not technology-related: 121 incidents involved data posted or faxed to the wrong recipient, or the loss or theft of paperwork.

The most frequent technology-related incidents were not down to hacking, but to simple email failures (49) involving data sent to the wrong recipient, or a failure to use BCC when sending email. There is, in short, an easily overlooked backdoor into GDPR non-compliance.
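
The failure-to-BCC case in particular needs no sophisticated tooling to catch. As a minimal illustration (ours, with a made-up domain and threshold):

```python
# Illustrative sketch only: flag outbound mail that exposes many external
# addresses in To/Cc -- the failure-to-use-BCC incident described above.
# The organization's domain and the threshold are made-up parameters.
def should_have_used_bcc(to, cc, own_domain="example.com", threshold=5):
    exposed = [addr for addr in list(to) + list(cc)
               if not addr.lower().endswith("@" + own_domain)]
    return len(exposed) >= threshold

bulk_recipients = [f"patient{i}@mail.example" for i in range(30)]
print(should_have_used_bcc(bulk_recipients, []))  # True: use Bcc instead
```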

Data sent to the wrong recipient is commonly addressed by data labeling and data loss prevention technologies. One problem is a high level of both false positives and false negatives. Employees charged with labeling the data they generate frequently 'over-label'; that is, they label unprotected data as 'sensitive' in an abundance of caution. This can lead to time-consuming, hampered workflows. Alternatively, sensitive data can remain unlabeled and still be sent to the wrong address.

In September 2017, the National Law Journal reported, "Wilmer, Cutler, Pickering, Hale and Dorr was caught Wednesday in an email mix-up that revealed secret U.S. Securities and Exchange Commission and internal investigations at PepsiCo, after a Wilmer lawyer accidentally sent a Wall Street Journal reporter privileged documents detailing a history of whistleblower claims at the company." This was not just an embarrassment; had it involved any EU data, it would have been a serious breach of GDPR.

(While writing this article, the author received an email from a major cybersecurity vendor: "You may have accidentally received an email from us yesterday with the subject line “SUBJECT LINE”. Our server had a bad moment and sent the email to wrong people." This was a benign error -- but it could have been serious, and it further illustrates the problem.)

One new start-up firm -- UK-based Tessian -- is seeking to solve the email GDPR backdoor using machine learning artificial intelligence. "What we're doing," co-founder and CEO Tim Sadler told SecurityWeek, "is helping organizations protect against the human threats. At our core, we prevent organizations sending highly sensitive emails to the wrong people."

The difficulty with the email problem is that it doesn't lend itself to a traditional rules-based solution -- email is used too frequently, too easily, with too many subjects and to too many people. "The approach we have taken is machine learning," explained Sadler. "We analyze historical communications patterns to understand the kind of information that is shared with different people in the user's network. On outgoing emails we understand anomalies. We understand that it is unusual that this data is shared with that contact. This is an approach we have not seen elsewhere, but it is one that works very effectively."
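
The general shape of such a baseline-and-flag approach can be sketched simply. The following is our illustration of the technique, not Tessian's implementation:

```python
# Our illustration of the baseline-and-flag idea described above, not
# Tessian's implementation: learn which recipients normally receive
# which kinds of content, then flag outgoing mail that pairs content
# with an unusual recipient.
from collections import Counter, defaultdict

# (recipient, content keywords) pairs mined from historical sent mail
history = [
    ("alice@partner.example", {"invoice", "contract"}),
    ("alice@partner.example", {"invoice", "schedule"}),
    ("bob@internal.example", {"payroll", "salary"}),
]

profile = defaultdict(Counter)
for recipient, keywords in history:
    profile[recipient].update(keywords)

def looks_anomalous(recipient, keywords):
    """Flag if this recipient has never received this kind of content."""
    seen = set(profile.get(recipient, Counter()))
    return not (seen & keywords)

# Payroll data addressed to an external partner would be flagged:
print(looks_anomalous("alice@partner.example", {"payroll", "salary"}))  # True
```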

He claims that within 24 hours of analyzing the user email logs, a base-line of 'normality' can be produced. Anomalies to that baseline are flagged. Users are kept on board by being fully involved -- flagged emails aren't simply blocked. A full explanation of the system's decision is relayed to the user and can be accepted or overridden -- and the user's response is added to the system's machine learning knowledge. Using credit card fraud as an analogy, he said, "We don't just block the card because of anomalous behavior, we explain why, we ask the user if he wants to unblock it -- and we learn from the process."

The company was founded in 2013 by Tim Sadler, Ed Bishop and Tom Adams, and was originally known as CheckRecipient. In April 2017 it raised $2.7 million seed funding, bringing the total seed funding to $3.8 million. The company was rebranded and renamed as Tessian in February 2018. Part of the reason for the rebranding is the evolving and growing nature of the company.

"Our belief at Tessian," Sadler told SecurityWeek, "is that organizations' security has moved on from perimeter firewalls, and even endpoint security. I think we are in a third phase here, where humans are the real endpoints of the organization." If you look at how hackers try to break into a company, they're not so much hacking devices as hacking the humans.

We are focused on building security for the human endpoint. In short, we are thinking not just about outbound email threats, but also inbound email threats; and in going beyond that to understand what are the other ways in which humans leak data within an enterprise."

Sadler declined to go into details on Tessian's future road map -- but it is probably fair to say that a machine learning solution to BEC and general phishing threats is on the drawing board. Right now, Tessian is almost unique in bringing a machine learning solution to an email problem that, judging from historical data, is likely to prove a major and often overlooked threat to GDPR compliance.


Activists Urge Amazon to Drop Facial Recognition for Police
23.5.2018 securityweek Privacy

More than 30 activist groups led by the American Civil Liberties Union urged Amazon Tuesday to stop providing facial recognition technology to law enforcement, warning that it could give authorities "dangerous surveillance powers."

The organizations sent a letter to Amazon after an ACLU investigation found Amazon had been working with a number of US law enforcement agencies to deploy its artificial intelligence-powered Rekognition service.

"Rekognition marketing materials read like a user manual for authoritarian surveillance," said Nicole Ozer of the ACLU of California.

"Once a dangerous surveillance system like this is turned against the public, the harm can't be undone."

A letter to Amazon chief Jeff Bezos was signed by groups including the Electronic Frontier Foundation, Black Lives Matter, Freedom of the Press Foundation and Human Rights Watch.

"Amazon Rekognition is primed for abuse in the hands of governments," the letter said.

"This product poses a grave threat to communities, including people of color and immigrants, and to the trust and respect Amazon has worked to build."

Amazon is one of many companies in the US and elsewhere which deploy facial recognition for security and law enforcement.

Some research has indicated that such programs can be error-prone, particularly when identifying people of color, and activists argue these systems can build up large databases of biometric information which can be subject to abuse.

In China, authorities have created a digital surveillance system able to use a variety of biometric data -- from photos and iris scans to fingerprints -- to keep close tabs on the movements of the entire population, and uses it to publicly identify lawbreakers and jaywalkers.

The ACLU released documents showing correspondence with police departments in Florida, Arizona and other states on Rekognition, which is a service of Amazon Web Services.

The US activist groups say a large deployment by Amazon, which is one of the leaders in artificial intelligence, could lead to broad surveillance of the US population.

"People should be free to walk down the street without being watched by the government," the letter said.

"Facial recognition in American communities threatens this freedom. In overpoliced communities of color, it could effectively eliminate it. The federal government could use this facial recognition technology to continuously track immigrants as they embark on new lives."

Amazon did not immediately respond to an AFP request for comment on the letter.


As EU Privacy Law Looms, Debate Swirls on Cybersecurity Impact
23.5.2018 securityweek Privacy

Days ahead of the implementation of a sweeping European privacy law, debate is swirling on whether the measure will have negative consequences for cybersecurity.

The controversy is about the so-called internet address book or WHOIS directory, which up to now has been a public database identifying the owners of websites and domains.

The database will become largely private under the forthcoming General Data Protection Regulation set to take effect May 25, since it contains protected personal information.

US government officials and some cybersecurity professionals fear that without the ability to easily find hackers and other malicious actors through WHOIS, the new rules could lead to a surge in cybercrime, spam and fraud.

Critics say the GDPR could take away an important tool used by law enforcement, security researchers, journalists and others.

The lockdown of the WHOIS directory comes after years of negotiations between EU authorities and ICANN, the nonprofit entity that administers the database and manages the online domain system.

ICANN -- the Internet Corporation for Assigned Names and Numbers -- approved a temporary plan last week that allows access for "legitimate" purposes, but leaves the interpretation to internet registrars, the companies that sell domains and websites.

Assistant Commerce Secretary David Redl, who heads the US government division for internet administration, last week called on the EU to delay enforcement of the GDPR for the WHOIS directory.

"The loss of access to WHOIS information will negatively affect law enforcement of cybercrimes, cybersecurity and intellectual property rights protection activities globally," Redl said.

Rob Joyce, who served as White House cybersecurity coordinator until last month, tweeted in April that "GDPR is going to undercut a key tool for identifying malicious domains on the internet," adding that "cyber criminals are celebrating GDPR."

- Negative consequences? -

Caleb Barlow, vice president at IBM Security, also warned that the privacy law "may well have negative consequences that, ironically, run contrary to its original intent."

Barlow said in a blog post earlier this month that "cybersecurity professionals use (WHOIS) information to quickly stop cyberthreats" and that the GDPR restrictions could delay or prevent security firms from acting on these threats.

James Scott, a senior fellow at the Washington-based Institute for Critical Infrastructure Technology, acknowledged that the GDPR rules "could hinder security researchers and law enforcement."

"The information would likely still be discoverable with a warrant or possibly at the request of law enforcement, but the added anonymization layers would severely delay" the identification of malicious actors.

Some analysts say the concerns about cybercrime are overblown, and that sophisticated cybercriminals can easily hide their tracks from WHOIS.

Milton Mueller, a Georgia Tech professor and founder of the Internet Governance Project of independent researchers, said the notion of an upsurge in cybercrime stemming from the rule was "totally bogus."

"There's no evidence that most of the world's cybercrime is stopped or mitigated by WHOIS," Mueller told AFP.

"In fact some of the cybercrime is facilitated by WHOIS is because the bad guys can go after that information too."

Mueller said the directory had been "exploited" for years by commercial entities, some of which resell the data, and authoritarian regimes for broad surveillance.

"It's fundamentally a matter of due process," he said.

"We all agree that when law enforcement has a reasonable cause, they can obtain certain documents, but WHOIS allow unfettered access without any due process check."

- No delays -

Akram Atallah, president of ICANN's global domains division, told AFP the organization had tried unsuccessfully to get an enforcement delay from the EU for the WHOIS directory to work out rules for access.

The temporary rule will strip out any personal information from the WHOIS directory but allow access to the data for "legitimate" purposes, Atallah noted.

"You will need to get permission to see the rest of the data," he said.

That means the registrars, which include companies that sell websites like GoDaddy, will need to determine who gets access or face hefty fines from the EU.

ICANN is working on a process of "accreditation" to grant access, but was unable to predict how long it would take to get a consensus among the government and private stakeholders in the organization.

Matthew Kahn, a Brookings Institution research assistant, said the firms keeping the data are more likely to deny requests than to face EU penalties.

"With democracies under siege from online election interference and active-measures campaigns, this is no time to hamper governments' and security researchers' abilities to identify and arrest cyber threats," Kahn said on the Lawfare blog.


UK Regulator Issues Advice on 'Consent' Within GDPR
11.5.2018 securityweek Privacy

The UK's Information Commissioner's Office (ICO -- the data protection regulator) has published detailed guidance (PDF) on 'consent' within the General Data Protection Regulation. Since the UK is still in the European Union, the document provides a reasonable analysis of what is one of the trickiest aspects of GDPR. Once the UK leaves the EU, GDPR within the UK will be replaced by the new Data Protection Bill, which is designed to ensure the UK's data protection adequacy. It is not guaranteed to succeed in this.

Consent is not the only legal basis for processing personal data under GDPR. Others are a contractual relationship; compliance with a separate legal obligation; a public task; vital interest (as in, to save a life); and legitimate interests. Some of these are nuanced and may require detailed legal advice before being relied upon -- 'legitimate interests' does not mean that any commercial enterprise can ignore consent in the pursuit of profit.

Nevertheless, user consent is likely to be the primary legal justification for processing user data. Under GDPR, it is not very different to the existing requirement for consent under the European Data Protection Directive (DPD), but adds a few significant aspects. In particular, it requires that consent must be 'unambiguous' and involve 'a clear affirmative action'.

The GDPR expansion of consent comes not in the definition but in the use and implications of consent. Three key areas are the need for keeping records of consent; the user's right to withdraw consent; and the inability to make consent a condition of a contract. "In essence," says the ICO, "there is a greater emphasis in the GDPR on individuals having clear distinct ('granular') choices upfront and ongoing control over their consent."
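
What such record-keeping implies in practice can be sketched as a simple data structure. The following is our illustration of the kind of record these requirements suggest, not a prescribed format:

```python
# Illustrative sketch (ours, not a prescribed format) of a consent record
# answering the GDPR's record-keeping questions: who consented, to what,
# when, how, and whether consent has since been withdrawn.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    subject_id: str                  # the individual concerned
    purpose: str                     # the granular purpose consented to
    wording_shown: str               # what the individual actually saw
    obtained_at: datetime
    mechanism: str                   # e.g. an unticked opt-in checkbox
    withdrawn_at: Optional[datetime] = None

    def is_valid(self) -> bool:
        """Consent counts only while it has not been withdrawn."""
        return self.withdrawn_at is None

record = ConsentRecord(
    subject_id="user-123",
    purpose="email marketing",
    wording_shown="Send me the monthly newsletter",
    obtained_at=datetime(2018, 5, 25, tzinfo=timezone.utc),
    mechanism="unticked opt-in checkbox",
)
record.withdrawn_at = datetime.now(timezone.utc)  # the right to withdraw
print(record.is_valid())  # False
```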

Genuine and lawful consent becomes a double-edged sword. On the one hand, it gives the user greater control over the use of his or her data (for example, the 'right to be forgotten' and the right to data portability); while on the other hand, the ICO says that explicit consent "can legitimize automated decision-making, including profiling."

However, it is the way the additional consent requirements play upon the definition of consent that can introduce confusion. An obvious example -- which has always existed but is now brought into focus by the potential size of the new GDPR fines -- involves 'freely given'. Consent cannot be freely given if there is imbalance in the relationship between the individual and the controller. "This will make consent particularly difficult for public authorities and for employers, who should look for an alternative lawful basis where possible," warns the ICO.

In general, public authorities should rely on the 'public task' justification rather than the consent justification. Employers who wish to process information on staff must be wary of any implication that continued employment might depend upon their consent to the processing -- that consent cannot be freely given and any reliance by the employer on that consent would be illegal.

The right to be forgotten is another complication. The implication of the regulation is that if, for any reason, the user cannot withdraw consent, or the data cannot be deleted, then consent was never legally given. Under such circumstances, user consent is most likely the wrong justification. The ICO uses a credit card company as an example. The company might ask for the user's consent to send details to a credit reference agency.

"However," says the ICO, "if a customer refuses or withdraws their consent, the credit card company will still send the data to the credit reference agencies on the basis of 'legitimate interests'. So, asking for consent is misleading and inappropriate -- there is no real choice." In this instance, the 'legitimate interests' justification should have been used from the outset -- not user consent.

The inability to use consent as a contract condition is another nuanced area that could lead to confusion. "If you require someone to agree to processing as a condition of service," says the ICO, "consent is unlikely to be the most appropriate lawful basis for the processing. In some circumstances it won't even count as valid consent."

The example given concerns a cafe that decides to offer its customers free wifi if they provide their name, email address and mobile phone number and then agree to the cafe's terms and conditions. The T&Cs make it clear that the details will be used for direct marketing. "The cafe is therefore making consent to send direct marketing a condition of accessing the service. However, collecting their customer's details for direct marketing purposes is not necessary for the provision of the wifi. This is not therefore valid consent."

If the consent issue sounds complex and confusing, it is because it is complex and confusing. For example, probably every reader will have received emails from companies seeking to gain 're-consent' to continue sending marketing or other emails before GDPR comes into effect. One example received here simply says, "To comply with the new EU General Data Protection Regulation (GDPR), we need to confirm that you want to keep receiving our marketing emails. Please confirm your subscription to [our firm's] marketing communications by clicking the button below." (Incidentally, beware of similar but false phishing emails.)

The reality is that such emails are either unnecessary or illegal. If the original consent was properly acquired, it will almost certainly remain valid. If consent was either not gathered or gathered inappropriately in the first place, then this email is inadequate for GDPR's requirements. At just one very simple and basic level, it doesn't inform the reader of the right to withdraw consent; and is consequently not valid consent.

A case in point is the £13,000 fine levied by the ICO on Honda Motor Europe Ltd. The ICO announced in March 2017, "A separate ICO investigation into Honda Motor Europe Ltd revealed the car company had sent 289,790 emails aiming to clarify certain customers' choices for receiving marketing."

Honda believed it was acting to comply with GDPR -- but in fact it was breaching the consent requirements of a separate law (the Privacy and Electronic Communications Regulations -- PECR). "The firm believed the emails were not classed as marketing but instead were customer service emails to help the company comply with data protection law. Honda couldn't provide evidence that the customers had ever given consent to receive this type of email, which is a breach of PECR. The ICO fined it £13,000."

At around the same time, the ICO fined the British Flybe airline £70,000 for sending more than 3.3 million emails to people who had told them they didn't want to receive marketing emails from the firm. Steve Eckersley, ICO Head of Enforcement, said at the time, "Both companies sent emails asking for consent to future marketing. In doing so they broke the law. Sending emails to determine whether people want to receive marketing without the right consent, is still marketing and it is against the law."

These fines, had they been levied under GDPR after 25 May 2018, could have been considerably higher.

The document published by the ICO is long and complex, but full of links for further information and examples of valid and invalid use of user consent. Getting consent wrong could be costly -- but getting it right is beneficial. "The GDPR sets a high standard for consent. Consent means offering people genuine choice and control over how you use their data," says the ICO. "When consent is used properly, it helps you build trust and enhance your reputation."


Amazon Alexa Can Be Used for Snooping, Researchers Say
28.4.2018 securityweek  Privacy

Amazon's Alexa cloud-based virtual assistant for Amazon Echo can be abused to eavesdrop on users, Checkmarx security researchers have discovered.

Present on more than 31 million devices around the world, Alexa enables user interaction after a wake-up word (specifically, “Alexa”) activates it. Next, the Intelligent Personal Assistant (IPA) launches the requested capability or application – called a skill – which either comes built-in or is installed from the Alexa Skills Store.

Checkmarx researchers built a malicious skill application capable of recording a user’s speech in the background and then exfiltrating the recording, all without alerting the user.

Because of the required wake-up word, the recording would have to be performed after activation. The listening session would normally end after a response is delivered to the user, to protect privacy, but the researchers found a way to keep the session alive and to hide that from the user.

Setting the shouldEndSession flag to false allows a session to stay alive for another cycle after reading back the service’s text as a response. However, reading back the text would reveal to the user that the device is still listening.

To overcome this issue, the researchers used a re-prompt feature, which works in a similar manner, but accepts “empty re-prompts.” Thus, they could start a new listening cycle without alerting the user on the matter.
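
In terms of the Alexa Skills Kit JSON response format, the combination the researchers describe looks roughly like this (our reconstruction from the report, expressed as a Python dictionary):

```python
# Rough reconstruction (ours) of the response shape Checkmarx describes,
# using Alexa Skills Kit response fields: an empty re-prompt combined
# with shouldEndSession=False silently re-opens a listening cycle.
eavesdropping_response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "PlainText", "text": ""},  # nothing read back
        "reprompt": {
            "outputSpeech": {"type": "PlainText", "text": ""}  # empty re-prompt
        },
        "shouldEndSession": False,  # keep the session alive for another cycle
    },
}
```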

Finally, the researchers also focused on being able to accurately transcribe the voice received by the skill application. For that, they added a new slot-type to capture any single word, not limited to a defined list of words. They also built a formatted string for each possible length.

Of course, users could still be alerted to the device listening, because the blue light on Amazon Echo lights up when a session is alive. However, some Alexa Voice Services (AVS) vendors embed Alexa capabilities into their devices without providing the visual indicator, and it’s also highly likely that users would not pay attention to that light.

“While the shining blue light discloses that Alexa is still listening, much of the point of an IPA device is that, unlike a smartphone or tablet, you do not have to look at it to operate it. In fact, these IPAs are made to be placed in a corner where users simply speak to a device without actively looking in its direction,” the researchers say.

As long as speech is recognized and words picked up, the malicious skill can continue to eavesdrop in the background, without the user noticing it. In case of silence, Alexa closes the session after 8 seconds, but a silence re-prompt (defined with an empty output-speech that the user cannot hear) can double the grace period to 16 seconds, the security researchers say.

Checkmarx informed Amazon of its findings and worked with the company to mitigate the risks. Specific criteria to identify (and reject) eavesdropping skills during certification were put in place, along with measures to detect both empty re-prompts and longer-than-usual sessions and take appropriate action in both cases.

The security researchers also published a video demonstration of how the attack works.


Cybersecurity Tech Accord: Marketing Move or Serious Security?
20.4.2018 securityweek Privacy

Cybersecurity Tech Accord Comprises Fine Words With No Defined Deliverables and Perhaps Impossible Intentions

Thirty-four major tech and security companies have aligned themselves and signed the Cybersecurity Tech Accord, which they claim is a "watershed agreement among the largest-ever group of companies agreeing to defend all customers everywhere from malicious attacks by cybercriminal enterprises and nation-states."

"The devastating attacks from the past year demonstrate that cybersecurity is not just about what any single company can do but also about what we can all do together," said Microsoft President Brad Smith. "This tech sector Accord will help us take a principled path towards more effective steps to work together and defend customers around the world."

The Accord makes commitments in four specific areas.

First, the companies say they will mount a stronger defense against cyberattacks, and will protect all customers globally regardless of the motivation of the attack.

Second, the companies claim they will not help governments launch cyberattacks against innocent citizens, and will protect their products against tampering or exploitation at every stage of development, design and distribution.

Third, the companies promise to do more to empower users to make effective use of their products with new security practices and new features.

Fourth, verbatim, "The companies will build on existing relationships and together establish new formal and informal partnerships with industry, civil society and security researchers to improve technical collaboration, coordinate vulnerability disclosures, share threats and minimize the potential for malicious code to be introduced into cyberspace."

A problem with the Accord, as many have already noted, is that it comprises fine words with no defined deliverables and possibly impossible intentions. It has no teeth. The first commitment is something that users could be excused for thinking they have already paid for in buying or licensing the signatories' products. The third, again, should be part and parcel of selling security products -- although it has received some support.

"Separate from the fact that some of the major social networks and cloud operators are missing [think, for example, Google and Amazon]," David Ginsburg, VP of marketing at Cavirin, told SecurityWeek, "the key to any meaningful outcome is better communication to users of how to use the security capabilities within the various vendors' tools. In several cases, the capabilities are there, but they are too difficult to deploy; or, in some cases, tools from multiple vendors will provide contradictory guidance. This practical aspect is tremendously important."

The second commitment is a little more complex. No company can disregard the law in its own country. Individual governments have the right and ability to pass whatever laws they wish, subject only to any overriding constitutional limitations. So, for example, once Brexit is finalized, the UK government would be able to insist on backdoors in the UK without fear of denial from the EU constitution.

Challenged on whether this commitment meant that the signatories would go against the U.S. government, or the British government or the Australian government or whoever, Microsoft president and chief legal officer, Brad Smith took the argument away from the Five Eyes nations.

"If you look at the world today," Smith said, "the biggest attacks against private citizens are clearly coming from a set of governments that we know well. It was North Korea, and a group associated with it, that launched the WannaCry attack last year... We saw the NotPetya attack launched against the country of Ukraine. Those are the big problems that we need to solve."

But it is doubtful that a group of tech companies could influence the governments of North Korea (WannaCry) and Russia (NotPetya); while it is equally doubtful that collaboration between the signatories could have detected and stopped the spread of WannaCry.

It is concerns such as this that are behind a degree of cynicism. One security executive -- preferring to remain anonymous -- told SecurityWeek, "The first two [commitments] are BS. They are pretty obvious, and I don't see anything happening about them. Similarly, the third one. I do not see the need of this Cybersecurity Tech Accord for that."

He was, however, more enthusiastic about the fourth commitment, commenting, "I think this could be a good place to coordinate among ourselves, and share valuable information. It is true that there are places where the exchange of threat intel already happens -- but most of these places are populated by companies of the same sector. Having a wide mix of companies can open the opportunity to really improve in this field and make a change."

F-Secure, one of the signatories, hopes that the Accord will help persuade governments not to press for law enforcement backdoors in security products. "By signing the Accord," CIO Erka Koivunen told SecurityWeek, "the group of companies across both sides of the Atlantic wish to express that we resist attempts to introduce backdoors in our products or artificially weaken the protections that we provide against cyber security threats."

F-Secure has won the battle in Finland, but Koivunen added, "We still feel the pressure in many countries around the world."

Avast is another enthusiastic signatory. Jonathan Penn, director of strategy, commented on the internet of things: "Avast has been talking in recent years about the implications of providers of these next generation devices and services continuing to operate separately, when it's clear that what is required is industry-wide collaboration to ensure that fundamentals such as security are built-in from the ground up at point of manufacture."

'From the ground up' is an interesting comment, and relates to 'every stage of development, design and distribution' from the second commitment. Yet still the criticism of a lack of teeth to the Accord remains.

Mike Banic, VP of marketing at Vectra, suggests, "The impending EU General Data Protection Regulation (GDPR) will have more impact since it has real teeth in the form of fines that can be as much as 4% of annual revenue if the personal information of EU based citizens is exposed or misused, and organizations must provide notification within 72 hours. An example to consider is the timeline of the Equifax breach where personally identifiable information (PII) was exposed and notification was not within the notification period. With so many organizations operating in EU nations or processing EU-based citizen's data, evaluating their security program to ensure GDPR compliance is such a high priority that this alliance may go unnoticed."

Notice also that 'privacy by design', that is, from the ground up, is a legal requirement under GDPR.

Last year, Microsoft's Smith called for a digital Geneva convention. This year he has launched the Cybersecurity Tech Accord -- which he hopes will be the first steps towards that. But Microsoft has a history of ambitious proposals that are unachievable. In 2016, Scott Charney proposed that an independent international body of experts should be tasked with attributing cyber incidents, so that international norms of behavior could be enforced. In 2010, he proposed that users and their computers should have a 'digital health certificate' before being allowed to connect to the internet -- an idea that has never been seriously considered.

But it would be wrong to immediately dismiss the Accord as just another unachievable Microsoft proposal. Nathan Wenzler, chief security strategist at AsTech, points out that not all the signatories are pure-play security companies, and most have themselves been hacked. "I'd be hesitant to say it's nothing but a marketing ploy," he told SecurityWeek, "as there are some serious security companies in the mix, and it's possible that if they have a voice at the table, some changes could be made with the companies that are common targets of attacks and causes of data breaches. But, time will tell on that, and it's hard to know in the here and now just how this will play out."

Brad Smith asks for that time. "I think that as with all such things, one needs to start with words, because we use words to define principles -- but ultimately we all need to be judged by our deeds. Now that we've put the words down on paper, we need to live up to them and we need to take concrete steps to implement them and that's what we're coming together to do. It's more than fair for you and others to judge us by what we do in the months and years ahead."


Would Facebook and Cambridge Analytica be in Breach of GDPR?
2.4.2018 securityweek Privacy

The accusations leveled at Cambridge Analytica (CA) and Facebook over the U.S. 2016 presidential election campaign, and to a lesser extent at CA over the UK's Brexit VoteLeave campaign, describe conduct that is -- if proven true -- morally reprehensible. It is not immediately clear, however, whether it is legally reprehensible. The matter is currently under investigation on both sides of the Atlantic.

In late March, both Apple and IBM called for more regulatory oversight of the use of personal data. "I'm personally not a big fan of regulation because sometimes regulation can have unexpected consequences to it, however I think this certain situation is so dire, and has become so large, that probably some well-crafted regulation is necessary," said Apple chief Tim Cook on March 24, 2018.

"If you're going to use these technologies, you have to tell people you're doing that, and they should never be surprised," IBM chief executive Rometty said on March 26, 2018. "(We have to let) people opt in and opt out, and be clear that ownership of the data does belong to the creator," he said.

Such regulatory oversight already exists in Europe under national data protection laws, and it will potentially become global when the European General Data Protection Regulation (GDPR) comes into effect on May 25, 2018. The question is whether Facebook and/or CA would have been in breach of GDPR were it already operational, and therefore whether GDPR will prevent future repetitions of this sort.

"From Facebook's perspective," MacRoberts LLP senior partner David Flint told SecurityWeek, "the only good point is that the maximum fine under the [current UK] Data Protection Act is £500,000; after 25 May 2018 it would be 4% of Facebook worldwide turnover ($40bn in 2017) -- a potential $1.6bn fine! That's before damages claims."

Cambridge Analytica is an offshoot of SCL, formerly Strategic Communications Laboratories (a private British behavioral research and strategic communication company), and was formed specifically to target the U.S. presidential elections.

The user profile collection

At this stage we have to stress that everything is just a combination of accusation and denial, with nothing yet proven in a court of law. Nevertheless, the accusation is that a Cambridge University academic, Dr. Aleksandr Kogan, developed a Facebook personality quiz app (called 'thisisyourdigitallife') that collected data from some 270,000 app users on Facebook; and also collected their friends' data. Kogan's firm was known as Global Science Research (GSR).

Concerns about the relationship between Facebook user data, GSR, CA, and the U.S. presidential election are not new. In December 2015, the Guardian reported, "Documents seen by the Guardian have uncovered longstanding ethical and privacy issues about the way academics hoovered up personal data by accessing a vast set of US Facebook profiles, in order to build sophisticated models of users' personalities without their knowledge."

The user profiles were at least partly gathered through the process of 'turking' via the Amazon service, the Mechanical Turk. GSR reportedly paid Turkers $1 or $2 to install an app that would "download some information about you and your network … basic demographics and likes of categories, places, famous people, etc. from you and your friends."

An important element of the evolving story is that while it could be argued that the original turkers, and anyone else who installed Kogan's app, gave implied consent to the collection of their personal data, their friends almost certainly did not. Nor, it seems, did anyone give permission for that personal data to be used for political purposes in the presidential election via a third party, namely Cambridge Analytica.

The scandal

The scandal did not reach public proportions until March 2018 following new reports from the New York Times and the Guardian, and a video interview between CA whistleblower Christopher Wylie and the Guardian. Wylie revealed that "personal information was taken without authorization in early 2014 to build a system that could profile individual US voters in order to target them with personalized political advertisements."

Public awareness was suddenly so high that Facebook -- the ultimate source of the user profiles -- saw an immediate and dramatic drop in its share value. Since March 16, Facebook has lost approximately $80 billion in value (at the time of writing), the FTC has announced an investigation into Facebook's privacy practices, Mark Zuckerberg, Facebook's co-founder and CEO, agreed to testify before Congress (but declined to appear in person before UK lawmakers), and the UK's data protection regulator (the Information Commissioner's Office) has raided CA's offices.

Incidentally, Facebook and CA are also included in an ongoing but lower profile investigation into possible manipulation of the Brexit referendum vote. Speaking before a UK parliamentary select committee this week, Wylie claimed that CA had been involved in the Brexit referendum and that, in his view, the result had been obtained by 'fraud' and 'cheating'.

Cambridge Analytica's alleged involvement in the U.S. election has been known since at least 2015. Facebook made some minor changes to its policies and asked Kogan and CA to delete all gathered user data. It says it believed the deletion had taken place -- but if Wylie's accusations are true, it cannot have happened.

Only in March 2018, following the dramatic drop in its share value, did Facebook respond seriously. On March 16, Facebook VP and deputy general counsel Paul Grewal announced, "We are suspending SCL/Cambridge Analytica, [whistleblower] Wylie and Kogan from Facebook, pending further information." One day later he added, "Aleksandr Kogan requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked." The claim that 'everyone involved gave their consent' is open to debate.

On March 21, Facebook founder Mark Zuckerberg published a personal apology together with news that Facebook would dramatically rein in the amount of personal data that apps can collect. "We will reduce the data you give an app when you sign in -- to only your name, profile photo, and email address. We'll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we'll have more changes to share in the next few days."

Nevertheless, two things stand out. Facebook, CA and Aleksandr Kogan all claim to have done nothing illegal -- and it was only after the incident affected Facebook's bottom line that the company began to take serious action. It is against this background that Tim Cook has called for "some well-crafted regulation".

GDPR

The EU's General Data Protection Regulation (GDPR) was drafted precisely to protect personal information from misuse. GDPR is already enacted and comes into force on May 25, 2018. The question is whether this regulation would provide the future oversight called for by Apple and IBM.

"Absolutely," says Thycotic's chief security scientist Joseph Carson. "This is exactly why EU GDPR has been put in place to protect EU citizens' personal information and ensure that companies have explicit consent to use personal data. Let's think about this - if only the data breach (aka trust) had occurred after May 25th, 2018, and if any of the 50 million impacted users had been EU citizens, Facebook would have been facing a potential whopping $1.6 billion financial penalty from the EU. I believe that would change Facebook's priority on ensuring data is not being misused. This is going to be an example on what could have been if GDPR was enforced."

It could be claimed that GDPR would still fail as a regulation because the impacted users are, ostensibly, all North American. "GDPR applies to the data for any EU resident," comments Nathan Wenzler, chief security strategist at AsTech. "For example, if a U.S. citizen was residing in an EU country, their data would be governed under GDPR when it goes into effect. Citizenship is not the criteria used to determine application of GDPR. Residency is, though, and that makes it far more complicated for companies to determine which of the individual records they have are or are not under the mandates of GDPR."

Dov Goldman, Vice President, Innovation and Alliances at Opus, is even more forthright. "The GDPR privacy rules do not protect non-EU citizens," he told SecurityWeek. "If Facebook can prove that the data released to Cambridge Analytica only contained PII of US persons, Facebook would likely not face any liability under GDPR. There are U.S. regulations that protect Americans' financial data, but not their personal data (PII), for now."

It's not that clear cut. While the common perception is that GDPR is designed to protect people within the EU (or perhaps the slightly larger European Economic Area), Recital 14 states: "The processing of personal data is designed to serve man; the principles and rules on the protection of individuals with regard to the processing of their personal data should, whatever the nationality or residence of natural persons, respect their fundamental rights and freedoms, notably their right to the protection of personal data."

GDPR is principle-based legislation. Interpretation of the details will be left to the courts to decide, based on their understanding of the intent of the lawmakers. It is, therefore, not entirely clear at this stage whether 'whatever the nationality' means European nationality or global nationality.

David Flint has no doubts. "GDPR would apply (were it in force) to any processing of data carried out by Cambridge Analytica, even if only of US nationals, by virtue of Article 3.1 of the GDPR (Data Controller / Processor based in EU)," he told SecurityWeek. "Article 2 (processing by automated means) would also be relevant." In this view, GDPR is about the processing of personal data, not the nationality of the data subject.

Under GDPR, responsibility is primarily with the data controller, and that responsibility cannot be off-loaded to the data processor. "It is difficult to see how Facebook would not be considered as a Data Controller (or perhaps Controller in Common with Cambridge Analytica)," continued Flint, "given that it collected the data, and/or permitted CA to do so, provided the platform APIs which allowed the data collection and mining; and carried out automatic mass profiling."

There is little doubt that Cambridge Analytica, as a UK company gathering and processing personal data from a firm (Facebook) that operates within the EU, would be considered liable under GDPR. Key to this would be the consent issue. It will be argued that by downloading and installing Kogan's app, users gave consent for their data to be used and shared; and that in allowing their data to be shared among friends on Facebook, the friends also gave consent.

This argument won't pass muster. GDPR says, "'the data subject's consent' shall mean any freely given specific and informed indication of his wishes by which the data subject signifies his agreement to personal data relating to him being processed." It is unlikely that even the app downloaders were giving free and informed consent for their personal data to be profiled for political purposes in the U.S. presidential election.

As at least co-controllers with Cambridge Analytica, it is difficult then to see how Facebook would not also be drawn into the issue.

Will GDPR provide the regulation/oversight sought by Apple and IBM?

In the final analysis, Facebook's liability under GDPR for the misuse of users' personal data by Cambridge Analytica will partly come down to an interpretation of whether the legislation covers non-EU subjects. If a single affected user was living in or passing through the EU at the time, there would be no ambiguity. In the end, the interpretation will be made by the courts -- although it is worth noting that Jan Philipp Albrecht, the MEP who as rapporteur drove GDPR through the European Parliament, has made it clear that he sees GDPR as changing privacy practices throughout the world for all people.

Where there is little ambiguity, however, is that Facebook's processing and privacy practices fell short of that required by GDPR. These requirements do not rely on the nationality or residency of the data subject.

GDPR could well provide the basis of global oversight of large company privacy practices; but we may have to wait until the courts start to interpret the finer details. In the meantime, all companies should carefully consider what happens to the personal data they collect and share. It is possible that sharing or selling that data to a third-party not specified at the time of collection will prove a breach of GDPR.


Researchers Propose Improved Private Web Browsing System
26.2.2018 securityweek Privacy

A group of researchers from MIT and Harvard have presented a new system designed to make private browsing even more private.

Dubbed Veil, the system proposes additional protections for people who share computers with other people at the office, in hotel business centers, or university computing centers. The new system, the researchers claim, can be used in conjunction with existing private-browsing systems and anonymity networks. The system works even if users don’t visit a page using a browser’s native privacy mode.

In a paper (PDF) describing Veil, Frank Wang – MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Nickolai Zeldovich – MIT CSAIL, and James Mickens – Harvard, explain that the system is meant to prevent information leaks “through the file system, the browser cache, the DNS cache, and on-disk reflections of RAM such as the swap file.”

The researchers explain that existing private-browsing modes rely on retrieving data, loading it into memory, and attempting to erase it when the session is over. However, because memory management is complex, some of that data can end up on the hard drive, where it may remain for days -- with the browser unaware of what happened to it.

The newly proposed system keeps all the data that the browser loads into memory encrypted until it is displayed on the screen, the researchers say. Users no longer type a URL into the browser; instead, they access the Veil website and enter the URL there. A blinding server then transmits the Veil version of the requested page.

While the Veil page can be displayed in any browser, a bit of code in the page executes a decryption algorithm, and all of the data associated with the page remains unreadable until it passes through that algorithm, the researchers say.

The system also adds decoy, meaningless code to every served page, so that the underlying source file is modified without affecting how the page looks to the user. Because no two transmissions of a page by the blinding server are alike, an attacker capable of recovering snippets of decrypted code after a Veil session should not be able to determine what page the user had visited.

“The blinding servers mutate content, making object fingerprinting more difficult; rewritten pages also automatically encrypt client-side persistent storage, and actively walk the heap to reduce the likelihood that in-memory RAM artifacts will swap to disk in cleartext form. In the extreme, Veil transforms a page into a thin client which does not include any page-specific, greppable RAM artifacts,” the paper reads.
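As a rough illustration of this mutate-then-encrypt idea, consider the following Python sketch. It is not Veil's actual code: the function names, the decoy scheme, and the choice of AES-GCM are all assumptions made for illustration.

    # Hypothetical sketch of a blinding server's mutate-and-encrypt step.
    # Assumes the 'cryptography' package is installed; names are illustrative.
    import os
    import secrets
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def mutate(html: bytes) -> bytes:
        # Decoy content changes the byte-level form of the page without
        # changing how it renders in the browser.
        return html + b"<!-- decoy:" + secrets.token_hex(8).encode() + b" -->"

    def blind_serve(html: bytes, user_key: bytes) -> tuple:
        # Encrypt a freshly mutated copy under the user's key; a stub in
        # the delivered page decrypts it in memory before display.
        nonce = os.urandom(12)
        return nonce, AESGCM(user_key).encrypt(nonce, mutate(html), None)

    key = AESGCM.generate_key(bit_length=128)
    _, c1 = blind_serve(b"<html>hello</html>", key)
    _, c2 = blind_serve(b"<html>hello</html>", key)
    assert c1 != c2  # no two servings of the same page are byte-identical

Even setting the random nonce aside, the decoy injection alone makes successive servings of the same page differ, which is what frustrates fingerprinting of recovered memory artifacts.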

One other option would be to have the blinding server open the requested page itself, take a picture of it, and send the picture to the user's computer. If the user clicks anywhere on the image, the browser records the position of the click and sends it to the server, which processes it and returns an image of the updated page.

Veil uses an opt-in model, meaning that use of the new private-browsing system requires developers to create Veil versions of their sites. To help in this regard, the researchers built a compiler that converts sites automatically and can also upload the converted version of a site to a blinding server.

“To publish a new page, developers pass their HTML, CSS, and JavaScript files to Veil’s compiler; the compiler transforms the URLs in the content so that, when the page loads on a user’s browser, URLs are derived from a secret user key. The blinding service and the Veil page exchange encrypted data that is also protected by the user’s key. The result is that Veil pages can safely store encrypted content in the browser cache; furthermore, the URLs exposed to system interfaces like the DNS cache are unintelligible to attackers who do not possess the user’s key,” the paper reads.
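A minimal sketch of how such key-derived URLs could work follows; the HMAC construction and the server name are assumptions for illustration, not Veil's specification.

    # Hypothetical sketch: derive opaque per-user resource names so that
    # browser-cache and DNS-cache entries reveal nothing without the key.
    import hashlib
    import hmac

    def blinded_url(user_key: bytes, original_url: str) -> str:
        tag = hmac.new(user_key, original_url.encode(), hashlib.sha256).hexdigest()
        return "https://blinding-server.example/" + tag

    print(blinded_url(b"secret-user-key", "https://site.example/inbox"))

An attacker inspecting the cache sees only the HMAC output; without the user's secret key, the entry cannot be linked back to the page that was visited.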

The blinding servers require maintenance, whether by a network of private volunteers or a for-profit company. Alternatively, site admins can host Veil-enabled versions of their sites themselves.


Automated Compliance Testing Tool Accelerates DevSecOps
21.2.2018 securityweek Privacy

Chef Software's InSpec 2.0 Compliance Automation Tool Helps Organizations Maintain an Up-to-Date View of Compliance Status

Software developers are urged to include security throughout the development cycle. This requires testing for compliance with both house rules and regulatory requirements before an application is released. Compliance testing is difficult, time-consuming and often subject to human error.

A January survey by Seattle-based software automation firm Chef Software shows that 74% of development teams assess for software compliance issues manually, and half of them remediate manually. Chef further claims that 59% of organizations do not assess for compliance until the code is running in production, and 58% of organizations need days to remediate issues.

Now Chef has released InSpec version 2.0 of its compliance automation technology. InSpec evolved from technology acquired with the purchase of German startup company VulcanoSec in 2015. The latest version improves performance and adds new routines. Chef claims it offers 90% Windows performance gains (30% on Linux/Unix) over InSpec 1.0. New in version 2.0 is the ability to verify AWS and Azure policies (with the potential to eliminate accidental public access to sensitive data in S3 buckets); and more than 30 new built-in resources.

The S3 bucket compliance problem is an example of InSpec's purpose. Earlier this month, two separate exposed databases were discovered in AWS S3 buckets. Last week, FedEx was added to the growing list, with (according to researchers) a database of "more than 119 thousands of scanned documents of US and international citizens, such as passports, driving licenses, security IDs etc."

In each case -- and in the many more examples disclosed during 2017 -- the cause was simple: the databases were set for public access. The potential regulatory compliance effects, however, are complex. The EU General Data Protection Regulation alone (GDPR, coming into effect in May 2018) would have left FedEx liable to a fine of up to 4% of its global revenue if any of the 'international citizens' were citizens of the EU. With FedEx's 2017 revenue at approximately $60 billion, that is a potential fine of some $2.4 billion.

In all cases the cause was most likely simple human error. But this discloses a bigger problem within secure and compliant software development: it involves multiple stakeholders with different priorities and, to a degree, different languages of expression. "Compliance requirements are often specified by high level compliance officers in high level ambiguous Word documents," explains Julian Dunn, Chef's director of product marketing.

"But at the implementation level you have the DevOps folks who are in charge of the systems -- but they don't understand ambiguous Word documents. What they understand is code, computer systems and the applications. There's a failure to communicate because everyone uses different tools to do so -- and that just slows down the process."

InSpec 2.0 provides a simple, easy-to-understand, code-like method of defining compliance requirements. These requirements are then regularly checked against the company's infrastructure, both cloud and on-prem. A few lines of this code language would solve the S3 bucket exposure problem: "it { should have_mfa_enabled }" and "it { should_not have_access_key }".
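For comparison, the same class of check can be expressed outside InSpec. The sketch below uses AWS's boto3 SDK rather than InSpec's DSL; the bucket name is hypothetical and credentials are assumed to be configured. It flags buckets whose ACL grants any permission to the global AllUsers group -- the misconfiguration behind the exposures described above.

    import boto3

    ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

    def bucket_is_public(name: str) -> bool:
        # Public if any ACL grant names the global AllUsers group.
        acl = boto3.client("s3").get_bucket_acl(Bucket=name)
        return any(
            grant.get("Grantee", {}).get("URI") == ALL_USERS
            for grant in acl["Grants"]
        )

    if bucket_is_public("example-customer-data"):  # hypothetical bucket name
        print("NON-COMPLIANT: bucket is publicly accessible")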

Another example could be a database for which compliance requires access controls. For a Red Hat Linux system, the InSpec code would include "control "ensure_selinux_installed" do" and "it { should be_installed }".

InSpec then regularly checks the infrastructure and detects whether anything is not compliant or has slipped out of compliance with the specified rules. It is part of the InSpec cycle that Chef describes as 'detect, correct, automate'. Detection provides visibility into current compliance status to satisfy audits and drive decision-making; correction is the remediation of issues to improve performance and security; and automation allows for faster application deployment and continuous code risk management.

"We help the customer in the automate phase with pre-defined profiles around the common regulatory requirements," explains Dunn. "But InSpec is fundamentally a generic toolkit for expressing rules and positive and negative outcomes from those rules -- so it deals with everything from soft compliance (rules of the house) all the way through to GDPR, PCI, SOX and so on."

But there is a further benefit. Software development has embraced the concept of DevOps to avoid siloed software development and deployment. Increasing security compliance regulations are now driving the concept of DevSecOps, to bring the security team into the mix. InSpec automatically involves security and compliance with the code development process -- a fully-functioning DevSecOps environment able to improve rather than inhibit the agility of software development is an automatic byproduct of InSpec 2.0.


Data Privacy Concerns Cause Sales Delays: Cisco
27.1.2018 securityweek Privacy

Nearly two-thirds of businesses worldwide have experienced significant delays in sales due to customer data privacy concerns, according to Cisco’s 2018 Privacy Maturity Benchmark Study.

The study, based on the responses of roughly 3,000 cybersecurity professionals from 25 countries, shows that 65% of businesses reported sales cycle delays due to concerns over data privacy, with an average delay of nearly 8 weeks.

However, organizations with a mature privacy process are less affected compared to privacy-immature companies. Privacy-mature firms experienced delays of only 3.4 weeks, while immature businesses reported delays averaging nearly 17 weeks.

Sales delays have also varied depending on several other factors, including country, with the longest delays reported in Mexico and Latin America, and industry, with the longest delays in the government and healthcare sectors.

(Chart: the number of weeks sales have been delayed)

The report also shows that privacy-mature organizations suffer lower losses as a result of data breaches. According to Cisco, only 39% of privacy-mature organizations experienced losses exceeding $500,000, compared to 74% of companies that have an immature privacy process.

The type of model adopted by organizations for privacy resources also appears to be an important factor. According to the study, businesses with fully centralized and decentralized resources had sales delays of 10 and 7 weeks, respectively. On the other hand, organizations with a hybrid model, which represents a mix between centralized and decentralized, reported delays of less than 5 weeks.

“This study provides valuable empirical evidence of the linkage between firm privacy policies and performance-relevant impacts. These results are indicative of the direction that future empirical research on privacy, and cybersecurity more generally, should take to better validate and focus our understanding of best practices in these important areas,” said Dr. William Lehr, economist at MIT.

The complete 2018 Privacy Maturity Benchmark Study is available for download in PDF format.


Many GPS Tracking Services Expose User Location, Other Data
3.1.2018 securityweek Privacy
Researchers discovered that many online services designed for managing location tracking devices are affected by vulnerabilities that expose potentially sensitive information.

Fitness, child, pet and vehicle trackers, and other devices that include GPS and GSM tracking capabilities are typically managed via specialized online services.

Security experts Vangelis Stykas and Michael Gruhn found that over 100 such services have flaws that can be exploited by malicious actors to gain access to device and personal data. The security holes, dubbed Trackmageddon, can expose information such as current location, location history, device model and type, serial number, and phone number.

Some services used by devices that have photo and audio recording capabilities also expose images and audio files. In some cases, it’s also possible to send commands to devices in order to activate or deactivate certain features, such as geofence alerts.

Attackers can gain access to information by exploiting default credentials (e.g. 123456) and insecure direct object reference (IDOR) flaws, which allow an authenticated user to access other users' accounts simply by changing the value of a parameter in the URL. The services also expose information through directory listings, log files, source code, WSDL files, and publicly exposed API endpoints that allow unauthenticated access.
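To make the IDOR pattern concrete, here is a hypothetical sketch: the endpoint and device IDs are invented, and nothing like this should be run against systems you do not own or have permission to test. The flaw is simply that the service trusts the identifier in the URL and never checks it against the authenticated account.

    import requests

    # Invented endpoint modeled on the flawed services: the device ID comes
    # from the URL and is never checked against the logged-in user.
    BASE = "https://tracker.example/api/device/{id}/location"

    session = requests.Session()
    # ... assume the session is authenticated as an ordinary user ...
    for device_id in range(1000, 1005):
        resp = session.get(BASE.format(id=device_id))
        if resp.ok:
            # An IDOR-vulnerable service returns other users' coordinates here.
            print(device_id, resp.json())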

Stykas and Gruhn notified the vast majority of the affected vendors in November and December 2017. Nine services have confirmed patching the flaws or have promised to implement fixes soon, and over a dozen websites appear to have addressed the vulnerabilities without informing the researchers. The rest of the tracking services, however, remain vulnerable.

There are roughly 100 impacted domains, but some of them appear to be operated by the same company. The researchers identified 36 unique IPs hosting these domains, and 41 databases shared among them. They estimate that these services expose data associated with over 6.3 million devices and more than 360 device models.

The vulnerable software appears to come from China-based ThinkRace, but in many cases the company does not have control over the servers hosting the tracking services.

Gruhn and Stykas pointed out that vulnerabilities in ThinkRace products – possibly including some of the issues disclosed now – were first discovered in 2015 by a New Zealand-based expert while analyzing car tracking and immobilisation devices that relied on ThinkRace software.

Users of the online tracking services that remain vulnerable have been advised to change their password and remove any potentially sensitive information stored in their account. However, these are only partial solutions to the problem and researchers have advised people to simply stop using affected devices until patches are rolled out.


Onapsis Helps SAP Customers Check GDPR Compliance
9.12.2017 securityweek Privacy
Onapsis, a company that specializes in securing SAP and Oracle business-critical applications, announced this week that it has added automated GDPR compliance capabilities to the Onapsis Security Platform.

The new functionality allows organizations using SAP products to quickly determine if they meet data protection requirements. The system is capable of identifying SAP systems that need to be compliant with the General Data Protection Regulation (GDPR), specifically systems that process or store user data. Onapsis believes a majority of SAP systems fall into this category.

Non-compliant systems are flagged by the Onapsis Security Platform and users are provided guidance on how to address the issue. Newly added systems that need to be GDPR compliant are automatically included in the next audit.

“In speaking to our customers, we know that GDPR is a complicated mandate and many organizations are struggling to determine if or how their SAP landscapes are relevant,” said Alex Horan, Director of Product Management at Onapsis. “With this in mind, Onapsis’s newly released audit policy within the Onapsis Security Platform (OSP) automatically evaluates any SAP system through the lens of the data protection requirements of GDPR. This includes both data at rest, data in transit and the assessment of data access or authorizations.”

GDPR, which comes into effect in May 2018, requires businesses to protect the personal data and privacy of EU citizens. While the regulation is designed to protect the data of EU citizens, it affects organizations worldwide. Failure to comply can result in penalties of up to €20 million or 4% of global annual turnover, whichever is greater.

A study conducted earlier this year by the UK & Ireland SAP User Group showed that 86% of SAP customers did not fully understand the implications of GDPR. More than half of respondents said the increasing use of cloud technology and workforce mobility increased their compliance challenges.

SAP recommends its GRC (Governance, Risk, Compliance) solutions for ensuring GDPR compliance, and nearly half of the respondents taking part in the SAP User Group study had been leveraging SAP GRC. Many of those who had not used it believed GRC was either too expensive or too complicated.


Court Investigating Whether Uber Connived to Cover its Tracks
30.11.2017 securityweek Privacy

Uber, the ride-sharing giant hit with a number of scandals in recent months, is now suspected of operating a program to hide nefarious tactics.

The start of a trial in Waymo's suit against Uber over allegedly swiped self-driving car technology was put on hold this week while the court looked into whether evidence of a cover-up was withheld.

An internal letter from a former employee contended Uber had a team that "exists expressly for the purpose of acquiring trade secrets, code base, and competitive intelligence."

Techniques used included smartphones or laptop computers that couldn't be traced back to the company, and communicating through encrypted, vanishing message service Wickr, according to the letter and a transcript of courtroom testimony obtained by AFP.

"You never told me that there was a surreptitious, parallel, nonpublic system that relied upon messages that evaporated after six seconds or after six days," US District Judge William Alsup said to a member of Uber's legal team. "You never mentioned that there were these offline company-sponsored laptops that -- where the engineers could use that."

The letter signed by former Uber manager of global intelligence Richard Jacobs told of an effort to evade discovery requests, court orders, and government investigations "in violation of state and federal law, as well as ethical rules governing the legal profession."

Jacobs was questioned in court, saying he left Uber early this year with a compensation deal valued at $4.5 million that required him not to disparage the company.

Uber executives who also testified denied the allegations of wrongdoing and trail-covering.

Uber's deputy general counsel said the allegations in the letter were a tactic by a departing employee to get money from the company and had no merit, according to news website Recode.

The case stems from a lawsuit filed by Waymo -- previously known as the Google self-driving car unit -- which claimed former manager Anthony Levandowski took technical data with him when he left to launch a competing venture that went on to become Otto and was later acquired by Uber.

- Shifting gears -

The courtroom drama is playing out as Uber's new chief executive Dara Khosrowshahi strives to get the company on course and prepare for a stock market debut in 2019.

Khosrowshahi's stated goal of shifting Uber to an era of responsible growth -- after a period of growth at any price -- is beginning to appear Herculean.

"They have a long way to go to win back the trust of both its users who no doubt are quite frustrated in not fully knowing what all of this means to them personally, as well as the financial community," analyst Jack Gold said of Uber.

Uber is a target of investigations and lawsuits over the cover-up of a hack that compromised personal information of 57 million users and drivers.

Uber purportedly paid data thieves $100,000 to destroy the swiped information -- and remained quiet about the breach for a year.

US justice officials are also investigating suspicions of foreign bribery and use of illegal software to spy on competitors or escape scrutiny of regulators.

Challenges at Uber include conflicts with regulators and taxi operators, a cut-throat company culture, and board members feuding with investors over co-founder and ousted chief Travis Kalanick.

The hard-charging style that helped Uber succeed also made Kalanick a target for critics.

Dents to Uber's image include a visit by executives to a South Korean escort-karaoke bar, an attempt to dig up dirt on journalists covering the company, and the mishandling of medical records from a woman raped in India after hailing an Uber ride.

- SoftBank hard cash -

Japanese telecommunications giant SoftBank this week began offering to buy out Uber investors, reportedly at a price well below the value used for the startup's last funding round.

If the investment goes ahead as proposed, SoftBank would directly pump between $1 billion and $1.25 billion into Uber at the San Francisco-based startup's current valuation of $69 billion, according to a source familiar with the matter.

As a secondary investment move, the Japanese group would buy outstanding shares from large investors at a discounted price, the source said.

"SoftBank and Dragoneer have received indications from Benchmark, Menlo Ventures, and other early investors of their intent to sell shares in the tender offer," a SoftBank spokesperson told AFP.

Meanwhile, Uber's loss in the third quarter of this year widened to $1.46 billion from a loss of $1.1 billion in the second quarter, according to a Bloomberg report.


Privacy Fears Over Artificial Intelligence as Crimestopper
13.11.2017 securityweek Privacy
Police in the US state of Delaware are poised to deploy "smart" cameras in cruisers to help authorities detect a vehicle carrying a fugitive, missing child or straying senior.

The video feeds will be analyzed using artificial intelligence to identify vehicles by license plate or other features and "give an extra set of eyes" to officers on patrol, says David Hinojosa of Coban Technologies, the company providing the equipment.

"We are helping officers keep their focus on their jobs," said Hinojosa, who touts the new technology as a "dashcam on steroids."

The program is part of a growing trend to use vision-based AI to thwart crime and improve public safety, a trend which has stirred concerns among privacy and civil liberties activists who fear the technology could lead to secret "profiling" and misuse of data.

US-based startup Deep Science is using the same technology to help retail stores detect in real time if an armed robbery is in progress, by identifying guns or masked assailants.

Deep Science has pilot projects with US retailers, enabling automatic alerts in the case of robberies, fire or other threats.

The technology can monitor for threats more efficiently and at a lower cost than human security guards, according to Deep Science co-founder Sean Huver, a former engineer for DARPA, the Pentagon's long-term research arm.

"A common problem is that security guards get bored," he said.

Until recently, most predictive analytics relied on inputting numbers and other data to interpret trends. But advances in visual recognition are now being used to detect firearms, specific vehicles or individuals to help law enforcement and private security.

- Recognize, interpret the environment -

Saurabh Jain is product manager for the computer graphics group Nvidia, which makes computer chips for such systems and which held a recent conference in Washington with its technology partners.

He says the same computer vision technologies are used for self-driving vehicles, drones and other autonomous systems, to recognize and interpret the surrounding environment.

Nvidia has some 50 partners who use its supercomputing module called Jetson or its Metropolis software for security and related applications, according to Jain.

One of those partners, California-based Umbo Computer Vision, has developed an AI-enhanced security monitoring system which can be used at schools, hotels or other locations, analyzing video to detect intrusions and threats in real-time, and sending alerts to a security guard's computer or phone.

Israeli startup Briefcam meanwhile uses similar technology to interpret video surveillance footage.

"Video is unstructured, it's not searchable," explained Amit Gavish, Briefcam's US general manager. Without artificial intelligence, he says, ''you had to go through hundreds of hours of video with fast forward and rewind." "We detect, track, extract and classify each object in the video. So it becomes a database."

This can enable investigators to quickly find targets from video surveillance, a system already used by law enforcement in hundreds of cities around the world, including Paris, Boston and Chicago, Gavish said.

"It's not only saving time. In many cases they wouldn't be able to do it because people who watch video become ineffective after 10 to 20 minutes," he said.

- 'Huge privacy issues' -

Russia-based startup Vision Labs employs the Nvidia technology for facial recognition systems that can be used to identify potential shoplifters or problem customers in casinos or other locations.

Vadim Kilimnichenko, project manager at Vision Labs, said the company works with law enforcement in Russia as well as commercial clients.

"We can deploy this anywhere through the cloud," he said.

Customers of Vision Labs include banks seeking to prevent fraud, which can use face recognition to determine if someone is using a false identity, Kilimnichenko said.

For Marc Rotenberg, president of the Electronic Privacy Information Center, the rapid growth in these technologies raises privacy risks and calls for regulatory scrutiny over how data is stored and applied.

"Some of these techniques can be helpful but there are huge privacy issues when systems are designed to capture identity and make a determination based on personal data," Rotenberg said.

"That's where issues of secret profiling, bias and accuracy enter the picture." Rotenberg said the use of AI systems in criminal justice calls for scrutiny to ensure legal safeguards, transparency and procedural rights.

In a blog post earlier this year, Shelly Kramer of Futurum Research argued that AI holds great promise for law enforcement, be it for surveillance, scanning social media for threats, or using "bots" as lie detectors.

"With that encouraging promise, though, comes a host of risks and responsibilities."


Website Blindspots Show GDPR is a Global Game Changer
2.11.2017 securityweek Privacy
One of the less publicized features of the European General Data Protection Regulation (GDPR) is that US companies can be held liable even if they do not actively trade with Europe. This is because the regulation is about the collection and storage of European personal information, not about business.

Any U.S. company that operates a website that collects user information (a log-in form, or perhaps a subscription application) could unwittingly collect protected European PII. That makes the company liable -- there are GDPR requirements over how it is collected (including explicit user consent, secure collection, and limitations on what is collected). Whether European regulators could do anything about that liability if the US company has no physical presence in Europe is a different matter.

Nevertheless, this highlights an area that is not well covered by many of the reports that warn about GDPR, that highlight the lack of business preparedness, and that offer solutions. IT and security teams are usually more aware of where data is stored and how it is processed than they are about where and how it is collected.

Research from RiskIQ published in June 2017 showed the potential extent of this blind-spot -- even with European organizations. The research found that 34% of all public web pages of FTSE 30 companies capturing personally identifiable information (PII) are in danger of violating GDPR by doing so insecurely.

It further found that these organizations have an average of 3,315 live websites; and that from these websites there is an average of 440 pages that collect PII per company. Thirty-four percent of these are insecure; 29% do not use encryption; and 3.5% are using old, vulnerable encryption algorithms.

It is even worse in the U.S. RiskIQ researched 25 of the 50 largest U.S. banks and found a per-organization average of 1,891 insecure login forms; 1,663 pages collecting PII insecurely; 1,326 EU first-party cookie violations; and 1,265 EU third-party cookie violations.
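One of the simplest checks behind numbers like these can be automated in a few lines. The sketch below, written with only the Python standard library, flags forms whose action submits over plain HTTP; the logic is deliberately naive and illustrative, and is not RiskIQ's methodology.

    import urllib.request
    from html.parser import HTMLParser

    class FormScanner(HTMLParser):
        # Collect form targets so any that submit over plain HTTP can be flagged.
        def __init__(self):
            super().__init__()
            self.insecure = []

        def handle_starttag(self, tag, attrs):
            if tag == "form":
                action = dict(attrs).get("action", "")
                if action.startswith("http://"):
                    self.insecure.append(action)

    def scan(url: str) -> list:
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
        scanner = FormScanner()
        scanner.feed(html)
        return scanner.insecure

    # e.g. scan("https://www.example.com") might return
    # ["http://www.example.com/login"] for a non-compliant page.

A real assessment would also need to render JavaScript, follow redirects, and resolve relative form actions -- which is why discovery at enterprise scale calls for automation.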

"GDPR is a global game changer that will pull the rest of the world toward setting a higher bar for protecting PII," comments Jarad Carleton, principal consultant, Digital Transformation, Frost & Sullivan Cybersecurity Practice. "However, to be compliant, you first need to know where PII is being collected, so proper process controls can be put around that data."

The problem is so extensive for larger organizations that, with a little over six months to go, compliance in this area will be difficult to achieve.

"PII discovery, inventory, and compliance assessment is one of the major tasks for GDPR project teams," warns Lou Manousos, CEO of RiskIQ. "In our experience, most security and compliance teams have only partial visibility of the websites owned by their organization. They are left to engage users across the business in an effort to uncover them. And once they have compiled that list, inspecting tens of thousands of web pages is labor intensive and prone to error."

The solution will require some degree of automation -- and to that end RiskIQ has this week announced the addition of explicit GDPR compliance functionality to its Digital Footprint product.

RiskIQ Digital Footprint's new PII/GDPR analytics feature, says the company, helps expedite compliance during the initial and subsequent GDPR audit processes by actively identifying websites belonging to an organization, as well as highlighting issues with specific pages that collect PII. GDPR, coming into effect in May of 2018, applies to all organizations that actively engage with EU citizens -- even if they have no physical presence in the EU.

The product discovers, creates and assesses an interactive inventory of public-facing web assets. It highlights the pages that collect personal information through login forms, data collection forms, and persistent cookies. In short, it automates the process of finding GDPR web-based violations to enable more rapid and complete violation remediation.

"The new PII/GDPR analytics feature in RiskIQ Digital Footprint automates the once cumbersome and often inaccurate process of ongoing website PII discovery and assessment, helping to more efficiently support compliance obligations for large enterprises and multinational organizations," says Carleton.

RiskIQ's PII/GDPR analytics feature is immediately available and is included as part of its Digital Footprint Enterprise solution.

San Francisco, Calif.-based RiskIQ raised $30.5 million in Series C funding led by Georgian Partners in November 2016, bringing the total raised by the firm to $65 million.


Standalone Signal Desktop Messaging App Released
2.11.2017 securityweek Privacy
Signal, a popular secure messaging application, is now available for Windows, macOS, and Linux computers as a standalone program.

Developed by Open Whisper Systems, Signal provides users with end-to-end encrypted messaging and is already used by millions of privacy-focused Android and iOS users. The server has no access to users' communications and stores no data, keeping conversations safe from eavesdropping.

The first Signal for desktop application was released in December 2015 in the form of a Chrome application that could provide users with the same features as the mobile software, namely end-to-end encryption, free private group, text, picture, and video messaging.

Now, the company has decided to make Signal Desktop available as a standalone application and to retire the Chrome app.

The standalone application was released with support for the 64-bit versions of Windows 7, 8, 8.1, and 10, for macOS 10.9 and above, and Linux distributions supporting APT, such as Ubuntu or Debian.

Using Signal Desktop requires pairing it with a phone first. People who have been using the Signal Desktop Chrome App can export their data and import it into the new Signal Desktop app as part of the setup process. Thus, users will be able to access all of their old conversations and contacts.

The standalone Signal Desktop application can be downloaded directly from the official website.

In late September, Open Whisper Systems revealed plans for a new private contact discovery service for Signal. The goal is to prevent anyone using modified Signal code from logging the contact discovery process the application performs when first installed, which determines which of one's contacts also use the service.

In December 2016, after learning that ISPs in Egypt and the United Arab Emirates started blocking the Signal service and website, Open Whisper Systems announced a new feature for its Android application meant to bypass censorship. The technique they used is called domain fronting and relies on the use of different domain names at different layers of communication.
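Conceptually, domain fronting exploits the fact that the domain visible in the TLS handshake (the SNI field) and the domain named in the encrypted HTTP Host header can differ: a censor watching the connection sees only the innocuous front domain. A minimal sketch follows; both domains are hypothetical, and many CDNs have since disabled this behavior.

    import requests

    FRONT = "https://allowed-cdn.example"   # what the censor sees in TLS/SNI
    HIDDEN = "blocked-service.example"      # hypothetical real backend

    # TLS is negotiated with the front domain; the Host header, sent only
    # inside the encrypted tunnel, routes the request to the real service
    # hosted behind the same CDN.
    resp = requests.get(FRONT, headers={"Host": HIDDEN})
    print(resp.status_code)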


Firefox to Block Canvas-based Browser Fingerprinting
1.11.2017 securityweek Privacy
Firefox will soon provide users with increased privacy by blocking browser fingerprinting performed through the HTML5 canvas element.

With the release of Firefox 58, users will have the option to block websites’ requests to retrieve information through canvas, which is currently used as a cookie-less method of tracking users on the web. Websites using this technique extract data from HTML <canvas> elements silently.

Once the change is in effect, Firefox will behave similarly to the Tor Browser, which is based on Firefox ESR (Extended Support Release) and implemented the feature about four years ago.

According to discussion on the Mozilla bug tracker, Firefox 58 should display a popup when a website attempts to use an HTML <canvas> element to extract information, just as Tor does in such situations. Users will have the option to block the site's request.

As Sophos notes, many companies use browser fingerprinting as a means of tracking users online without giving them a choice. The technique involves tracking the browser itself rather than cookies or other beacons, which can be blocked or deleted.

The fingerprinting operation usually involves passively gathering information such as browser version number, operating system details, screen resolution, language, installed plugins and fonts, and the like. The more elements are used for fingerprinting, the easier it is to single out one’s browser from another user’s, the security firm points out.

“In canvas fingerprinting your browser is given instructions to render something (perhaps a combination of words and pictures) on a hidden canvas element. The resulting image is extracted from the canvas and passed through a hashing function, producing an ID,” Sophos explains.

By providing sufficiently complex instructions, trackers can produce enough variation between visitors to make canvas fingerprinting highly effective. The information gathered this way can be shared among advertising partners and used to profile users based on the affiliated websites they visit.
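In the browser this happens in JavaScript (drawing to a canvas and reading it back with canvas.toDataURL()), but the render-serialize-hash pipeline can be illustrated in Python. The sketch below is an analogy, not actual tracking code, and assumes the Pillow imaging library is installed; in real fingerprinting the variation comes from each machine's font, anti-aliasing and GPU stack rather than from the script itself.

    import hashlib
    from io import BytesIO
    from PIL import Image, ImageDraw

    def render_fingerprint(text: str) -> str:
        # Render content, serialize the pixels, hash the bytes. Machines
        # that rasterize text differently produce different hashes -- and
        # those hashes become stable per-device IDs.
        img = Image.new("RGB", (220, 30), "white")
        ImageDraw.Draw(img).text((5, 5), text, fill="black")
        buf = BytesIO()
        img.save(buf, format="PNG")
        return hashlib.sha256(buf.getvalue()).hexdigest()

    print(render_fingerprint("Cwm fjordbank glyphs vext quiz"))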

While Firefox will become the first major browser to take a stance against canvas-based fingerprinting, add-ons that allow users to block this activity already exist, such as the Electronic Frontier Foundation's Privacy Badger.


Firefox 58 to Block Canvas Browser Fingerprinting By Default to Stop Online Tracking
31.10.2017 thehackernews  Privacy

Do you know? Thousands of websites use HTML5 Canvas—a method supported by all major browsers that allows websites to dynamically draw graphics on web pages—to track and potentially identify users across websites by secretly fingerprinting their web browsers.
Over three years ago, the concern surrounding browser fingerprinting was highlighted by computer security experts from Princeton University and KU Leuven University in Belgium.
In 2014, the researchers demonstrated how browser's native Canvas element can be used to draw unique images to assign each user's device a number (a fingerprint) that uniquely identifies them.
These fingerprints are then used to detect when that specific user visits affiliated websites and create a profile of the user's web browsing habits, which is then shared among advertising partners for targeted advertisements.
Since then many third-party plugins and add-ons (ex. Canvas Defender) emerged online to help users identify and block Canvas fingerprinting, but no web browser except Tor browser by default blocks Canvas fingerprinting.
Good news—the wait is over.
Mozilla is testing a new feature in the upcoming version of its Firefox web browser that will grant users the ability to block canvas fingerprinting.
The browser will now explicitly ask user permission if any website or service attempts to use HTML5 Canvas Image Data in Firefox, according to a discussion on the Firefox bug tracking forum.
The permission prompt that Firefox displays reads:
"Will you allow [site] to use your HTML5 canvas image data? This may be used to uniquely identify your computer."
Once you get this message, it's up to you whether you want to allow access to canvas fingerprinting or just block it. You can also check the "always remember my decision" box to remember your choice on future visits as well.
Starting with Firefox 58, this feature would be made available for every Firefox user from January 2018, but those who want to try it early can install the latest pre-release version of the browser, i.e. Firefox Nightly.
Besides providing users control over canvas fingerprinting, Firefox 58 will also remove the controversial WoSign and its subsidiary StartCom root certificates from Mozilla's root store.
With the release of Firefox 52, Mozilla already stopped allowing websites to access the Battery Status API and the information about the website visitor’s device, and also implemented protection against system font fingerprinting.


Investigation Underway at Heathrow Airport After USB Drive Containing Sensitive Security Documents Found on Sidewalk
31.10.2017 securityweek Privacy

Security personnel at Heathrow Airport have an urgent investigation underway after confidential security documentation was found on a sidewalk in West London.
An unnamed man, on his way to the library, spotted a thumb drive on the sidewalk in Queen’s Park, West London. He pocketed the USB drive and continued on his way. He remembered the USB drive a few days later and returned to the library to view its contents. Recognizing the sensitive nature of the information, he then turned the USB drive over to The Sunday Mirror tabloid.

In their article on October 28th, the Mirror confirmed that the thumb drive contained at least 174 documents. These documents describe various security controls and protocols in place at Heathrow including:
timetables of roving security patrols
locations of CCTV cameras
types of security badges required to access restricted areas
maps of tunnels, access points and restricted areas
routes taken by the Queen and other VIPs to the Royal Suite private area at Heathrow, and
security protocols for VIPs travelling through the airport

It is obvious how this information would benefit someone intent on disrupting the airport or causing harm to dignitaries or VIPs. Many documents were labeled "confidential" or "restricted", highlighting their sensitive nature. In an interesting twist, these labels follow an older labeling scheme, so there is a question of how up to date the information is. Even if it is outdated, knowing former protocols and designs helps a bad actor anticipate the current solutions.

According to a Heathrow Airport spokesperson’s comment to CNN, “Heathrow’s top priority is the safety and security of our passengers and colleagues. The UK and Heathrow have some of the most robust aviation ­security measures in the world and we remain vigilant to evolving threats by updating our procedures on a daily basis. We have reviewed all of our security plans and are confident that Heathrow remains secure. We have also launched an internal investigation to understand how this happened and are taking steps to prevent a similar occurrence in future.”

The first step in any such investigation is to understand what the immediate risk is. If exposing this information increased the risk, new risk mitigations may be required. The next step is to understand how the information found its way onto an unsecured USB drive on a public street in London. The security team then needs to come up with solutions to prevent it from happening again. At a minimum, the Heathrow security team have a few busy days of investigation ahead. More likely there are changes to security protocols and procedures coming in response to sensitive information being exposed. Even if it was only exposed to one individual and one English tabloid, will Heathrow authorities be able to identify who originally dropped the thumb drive and how can they be sure it wasn’t copied?


Heathrow Probes How Security Data Found on London Street
30.10.2017 securityweek Privacy
Heathrow Airport said Sunday it has launched an internal investigation after a memory stick containing extensive security information was found on a London street by a member of the public.

The USB drive contained dozens of folders with maps, videos and documents -- some marked confidential or restricted -- detailing security at Europe's busiest airport, according to the Sunday Mirror newspaper, which first reported the incident.

A man discovered the unencrypted device discarded on a west London pavement and handed it in to the paper, which said it reviewed the contents and passed them on to Heathrow officials.
The airport said the breach led to an immediate review of all security plans and it was "confident that Heathrow remains secure".

A spokeswoman added: "We have also launched an internal investigation to understand how this happened and are taking steps to prevent a similar occurrence in future."

She declined to detail the contents on the USB and when the security lapse occurred.
The device reportedly contained 174 documents, some referencing measures used to protect Queen Elizabeth II, and others outlining the types of IDs needed for different areas of the airport.

It also included timetables of security patrols, and maps pointing to the positions of CCTV cameras, the Sunday Mirror said.

The incident comes as Britain's threat level remains at severe following a series of deadly terrorism attacks this year.

Heathrow, Britain's biggest airport which handled nearly 76 million passengers last year, is considered a prime target for terrorists.

The airport spokeswoman said safety and security were its "top priority".

"The UK and Heathrow have some of the most robust aviation security measures in the world and we remain vigilant to evolving threats by updating our procedures on a daily basis," she added.


EU ePrivacy Regulation Edges Closer to Fruition
24.10.2017 securityweek Privacy
The proposed European Union ePrivacy Regulation is on the verge of entering Trilogue, the series of informal discussions involving the European Parliament, the Council of the European Union (that is, representatives from each member state), and the European Commission. It is Trilogue that defines the final shape of the legislation.

The all-important hurdle was cleared at the end of last week with a vote of 31 in favor to 24 against in the European Parliament's civil liberties committee (LIBE), the lead committee in preparing this legislation.

The ePrivacy Regulation is intended to harmonize e-communications confidentiality laws across the member states by replacing the ePrivacy Directive passed in 2002 (and amended by the 'cookies directive' of 2009). In this way it is similar to the General Data Protection Regulation (GDPR) replacing the earlier Data Protection Directive -- and it carries the same potential sanction of up to 4% of global revenue.
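
For scale, here is a back-of-the-envelope sketch in Python of what that ceiling means, assuming the proposal mirrors the GDPR's formula of €20 million or 4% of worldwide annual turnover, whichever is higher; the turnover figures below are invented for illustration:

    # Sketch of the GDPR-style sanction ceiling the ePrivacy Regulation borrows:
    # the higher of EUR 20 million or 4% of worldwide annual turnover.
    def max_fine_eur(global_turnover_eur: float) -> float:
        return max(20_000_000.0, 0.04 * global_turnover_eur)

    print(f"EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 2B turnover -> EUR 80,000,000 cap
    print(f"EUR {max_fine_eur(100_000_000):,.0f}")    # EUR 100M turnover -> EUR 20,000,000 floor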

Consistent enforcement will be achieved by assigning the related supervisory powers to the national independent authorities already competent to enforce the GDPR. The intention is to have the ePrivacy Regulation ready by the time GDPR becomes enforceable in May 2018.

However, the new regulation goes beyond simply harmonizing existing laws. Those laws were put into effect before the rise of 'over-the-top' communication channels such as WhatsApp, Facebook Messenger, and Skype -- which largely escape the confidentiality requirements imposed on mainstream telecommunications companies. The new regulation will apply to the provision of e-communications services to end-users in the EU, irrespective of whether the service is paid-for or free. Providers from outside the EU will have to appoint a representative within the EU.

While expanding the scope to include these newer channels, the ePrivacy Regulation in its current form also adds detail to the confidentiality requirements. For example, the new terminology is 'tracking technologies', which includes but is not limited to cookies. As with GDPR, consent must be freely and unambiguously given by the user, and can be expressed by a clear affirmative action.

Such 'affirmative action' could be given at the browser-settings level where technically feasible -- and the wording of the proposal seems to imply that browsers will be required to include a 'no tracking' feature in all new software. Under the new proposal, service providers will not be able to block users from accessing a website if they refuse to accept cookies.
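
To make that concrete, here is a minimal server-side sketch in Python/Flask of honoring both an affirmative consent signal and a browser-level 'no tracking' signal before setting any tracking cookie; the endpoint, cookie names and values are invented for illustration:

    # Minimal sketch: set a tracking cookie only with affirmative consent and
    # no browser-level "do not track" signal. Names and values are illustrative.
    from flask import Flask, make_response, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        resp = make_response("Hello")
        no_track = request.headers.get("DNT") == "1"               # browser-level signal
        consented = request.cookies.get("consent") == "granted"    # affirmative action
        if consented and not no_track:
            resp.set_cookie("tracking_id", "abc123", max_age=3600)
        return resp

    if __name__ == "__main__":
        app.run()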

The regulation also specifically expands its core rules from content only to include metadata -- which is now generally accepted to include personal information.

However, it should be said that the Regulation is still at the proposal stage. It has already been weakened following extensive industry lobbying. The view of the marketing and advertising industry is that increased consumer protection will stifle innovation and reduce free services on the Internet. A new report from Corporate Europe Observatory (CEO) published October 17 notes that of 41 high-level EU Commission lobby meetings in 2016, 36 were with corporate interests and only five with civil society lobbyists.

Industry lobbying -- and in particular, the marketing industry -- will continue during Trilogue, and may well succeed in weakening the proposal further. “During the negotiations on the [GDPR],” notes CEO, “the industry lobby repeatedly succeeded in exerting considerable influence on the positions of the Member States. Due to the non-transparency of the trilogue method, it is particularly vulnerable to opaque manipulation attempts by lobbyists.”

Jan Philipp Albrecht, who was the EU rapporteur for the GDPR, warned about the continuing pressure from conservative MEPs and business interests: “Some conservatives have refused a compromise despite the great concessions, putting the profit interests of large internet groups and the short-sighted deregulation fantasies of some industry associations above the fundamental rights to data protection, privacy and secrecy of communications; they want to massively weaken data protection in communication. Consumers want strong data protection of their communications.”

The marketing and ad industries will hope to achieve further concessions from the member states, while privacy activists will hope that the European Parliament and European Commission can hold steady.


Symantec's Latest DLP Offering Aids GDPR Compliance
21.9.2017 securityweek Privacy
Symantec DLP 15 Helps Protect Sensitive Data in Managed and Unmanaged Environments and Aids in GDPR Compliance

In unpublished research, seen by SecurityWeek, 96% of U.S. CISO respondents agreed that "ensuring that our cloud applications adhere to compliance regulations is one of the most stressful aspects of my job."

The biggest compliance concerns all revolve around loss of control of, and visibility into, the cloud. Twenty-six percent of respondents fear the inability to track activities in sanctioned cloud applications; 41% are concerned about employee use of unsanctioned cloud applications (at a time when 24% of all enterprise cloud apps are unsanctioned); and 14% are concerned about the broad sharing of compliance-controlled data in cloud applications.

The research was commissioned by Symantec. Although it did not specifically reference Europe's General Data Protection Regulation (GDPR), due to come into force next year, the responses are entirely relevant to growing concern over GDPR. Many of these concerns can be alleviated by adequate data loss prevention controls, provided they include loss prevention from the cloud.

In August 2017, Gartner predicted that data loss prevention (DLP) would see fairly dramatic growth over the next two years. "The EU General Data Protection Regulation (GDPR) has created renewed interest, and will drive 65 percent of data loss prevention buying decisions today through 2018," it predicted.

Symantec this week announced a new version of its own DLP product -- version 15. It focuses on helping customers achieve and maintain GDPR compliance. "The upcoming General Data Protection Regulation (GDPR) introduces new obligations for organizations and the information they handle, and comes with increased penalties and heightened scrutiny for compliance," it announced. "Analysts believe that visibility and protection, which can follow data, will become the new imperative."

Two features are key to this: protecting sensitive data in managed and unmanaged environments, and helping to ensure that sensitive data doesn't leak through unsanctioned cloud applications. It does this by maintaining visibility into the cloud, and by protecting the data that is stored there.

It achieves this by integrating DLP and CASB products. "DLP v15 integrates with our CASB (CloudSOC)," said Sri Sundaralingam, head of product marketing for enterprise security products, "where a single set of data protection policies on our DLP system is automatically mapped to CASB to provide visibility into 3rd party cloud apps. We support 100+ SaaS applications (including Office 365, Salesforce, Box, Dropbox, and many other popular 3rd party cloud apps). Note that in addition to visibility, all reporting and incident management is done via a single console (DLP) as well."

Visibility is defined as understanding where your data resides, and it applies to both cloud and on-premises servers. "This is the most important aspect of data protection -- having visibility into all the content that has data you want to protect (sensitive and regulated data)," he continued.

In GDPR terms, the Equifax breach demonstrates the danger of lost visibility. 400,000 UK citizens had personal data compromised. "This was due to a process failure, corrected in 2016, which led to a limited amount of UK data being stored in the US between 2011 and 2016," said Equifax UK. In short, Equifax, both in the UK and in the US, lost visibility into 400,000 UK records. Had GDPR already been in force, Equifax could have faced European sanctions on top of the US sanctions it already faces.

"This is where a system like DLP helps," Sundaralingam told SecurityWeek. "A DLP system’s core capabilities to scan all communication channels (email, web, cloud applications) as well as data storage locations (desktop/laptops, storage servers, USB) using advanced technology like machine learning (ML) and looking for specific patterns to discover sensitive/regulated data is critical. In DLP v15, Symantec has now also added user-driven tagging where end-users themselves can identify sensitive/regulated data and the system will learn from that as well. Without automation and advanced capabilities like ML it is difficult to manually identify where sensitive/regulated data is stored."


Windows 10 Users to Get Improved Privacy Controls
18.9.2017 securityweek  Privacy
The upcoming Windows 10 Fall Creators Update will bring enhanced privacy controls to both consumers and commercial customers, Microsoft says.

After being heavily criticized for the large amount of user data collected from Windows 10 machines, Microsoft has been implementing a series of data protections to address those concerns.

Released earlier this year, the Windows 10 Creators Update provided users with increased control over privacy settings and updates, and also allowed them to choose how much usage data they would like to share with Microsoft. In July, the company announced that it would prompt users to review their privacy settings and install the latest feature update, namely the Windows 10 Creators Update.

In addition, Microsoft also improved transparency on the diagnostic data collected from Windows 10 machines. Now, the company is looking into providing users with increased access to information and more control over the collected information, Marisa Rogers, Windows and Devices Group Privacy Officer, reveals.

Consumers will enjoy simplified access to information about features and the data collection around those features, courtesy of two privacy changes made to the setup process.

Starting with the Windows 10 Fall Creators Update, set to arrive on October 17, users will have direct access to the Privacy Statement within the setup process and will also be able to use the Learn More page on the privacy settings screen to jump to specific settings for location, speech recognition, diagnostics, tailored experiences, and ads while choosing their privacy settings.

“You no longer need to sift through the privacy statement if you only want to read about a specific feature, simply click the Learn More button for easy access,” Rogers explained in a blog post.

Moreover, users will also have increased transparency and control over which applications can access their information. Similar to the permission prompts shown when a map or other location-aware application first requests location data, permission requests will pop up when applications installed through the Windows Store need access to various device capabilities.

“You will be prompted to provide permission before an app can access key device capabilities or information such as your camera, microphone, contacts, and calendar, among others. This way you can choose which apps can access information from specific features on your device,” Rogers explains.

The app permission prompts will only appear for apps installed after the Fall Creators Update. However, users will be able to review and manage existing app permissions by selecting Start > Settings > Privacy.

In addition to these changes, enterprise customers will also have access to a new setting that limits diagnostic data to the minimum required for Windows Analytics, a service that allows admins to decrease IT costs by gaining insights – through Windows Diagnostics – into the computers running Windows 10 in their organizations.


Spain – Facebook slapped with €1.2M fine for violating data protection regulations
12.9.2017 securityaffairs Privacy

The Spanish Data Protection Agency (AEPD) has issued a €1.2 Million fine against Facebook for violating data protection regulations.
More privacy problems for the tech giant: Facebook has been fined for a series of privacy violations in Spain.

According to the AEPD, the social network collects users' personal data for commercial purposes without informed and 'unequivocal' consent. It shares the data with advertisers and marketers without informing users, and it collects sensitive data on users' ideology, religious beliefs, sex, personal tastes and browsing activity.

“The Agency notes that the social network collects, stores and uses data, including specially protected data, for advertising purposes without obtaining consent. The data on ideology, sex, religious beliefs, personal preferences or browsing activity are collected directly, through interaction with its services or from third-party pages, without clearly informing the user about how and for what purpose the data will be used,” states the AEPD.

“Facebook does not obtain unambiguous, specific and informed consent from users to process their data, since the information it offers is not adequate,” the agency adds.

The list of violations continues: Facebook does not fully delete information when it is no longer needed for the purposes for which it was collected.

The Spanish agency identified two serious and one very serious infringement of the Data Protection Law and imposed a sanction of €1,200,000 on the company.

The AEPD attributed €600,000 of the fine to the “very serious” infringement, while the remaining two serious violations, fined €300,000 each, are:
Tracking people through the use of “Like” button social plug-ins embedded in non-Facebook web pages.
Failing to delete data collected from users once the company has finished using it.
The AEPD also accuses Facebook of using a privacy policy containing “generic and unclear terms” that does not “adequately collect the consent of either its users or nonusers, which constitutes a serious infringement.”

Facebook replied to the accusations:
“We take note of the DPA’s decision with which we respectfully disagree. Whilst we value the opportunities we’ve had to engage with the DPA to reinforce how seriously we take the privacy of people who use Facebook, we intend to appeal this decision.”
“As we made clear to the DPA, users choose which information they want to add to their profile and share with others, such as their religion. However, we do not use this information to target adverts to people.” states Facebook.

In May, the company was fined €150,000 over the techniques it used to target advertising and track users.


UK Introduces Data Protection Bill to Replace GDPR After Brexit
8.8.2017 securityweek Privacy
The UK government has announced its plans for a new Data Protection Bill. This was foreshadowed in the Queen's Speech of 21 June when she announced, "A new law will ensure that the United Kingdom retains its world-class regime protecting personal data."

This law is, in effect, the European General Data Protection Regulation designed to withstand Brexit. The UK will still be part of the European Union when GDPR comes into effect in May 2018. However, the government is already under great pressure to transpose 40 years of European laws onto the British statute books in time for the actual severance. It makes sense, therefore, to prepare a GDPR-compliant UK law immediately.

The wording of the new Bill is not expected to become public until September. However, the Department for Digital, Culture, Media & Sport yesterday published a 30-page Statement of Intent (PDF) in which The Rt Hon Matt Hancock MP, Minister of State for Digital, explains, "Implementation will be done in a way that as far as possible preserves the concepts of the Data Protection Act to ensure that the transition for all is as smooth as possible, while complying with the GDPR and DPLED in full."

It follows, then, that US companies that operate in compliance with the UK Data Protection Bill will (or should) be automatically in compliance with GDPR. The reverse is not necessarily true. For example, while the GDPR requires the use of anonymized or pseudonymised (its own term) personal data, the new DP Bill will: "Create a new offence of intentionally or recklessly re-identifying individuals from anonymised or pseudonymised data. Offenders who knowingly handle or process such data will also be guilty of an offence. The maximum penalty would be an unlimited fine."
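
For readers unfamiliar with the term, pseudonymisation replaces a direct identifier with a token that can only be traced back using separately held information. A minimal Python sketch of one common approach, a keyed hash; the key name and identifier below are invented:

    import hashlib
    import hmac

    # Illustrative key; in practice it is stored apart from the dataset,
    # which is exactly what makes re-identification a deliberate extra step.
    SECRET_KEY = b"keep-this-away-from-the-dataset"

    def pseudonymise(identifier):
        """Replace a direct identifier with a keyed hash (a pseudonym)."""
        return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

    print(pseudonymise("jane.doe@example.com"))  # stable token, no readable identity

The proposed offence targets anyone who deliberately reverses that mapping without authority.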

Since this is new, and we do not yet know the detail of the proposed Bill, it is impossible to tell whether there will be any attempt to make this a worldwide offence. It is difficult, however, to see how it could be enforced in foreign jurisdictions where the company or persons concerned have no direct presence within the UK.

Other new elements include a new offence of altering records with intent to prevent disclosure following a subject access request (with an unlimited fine in England and Wales); while criminal justice agencies (read law enforcement) will have "A more prescriptive logging requirement applied to specific operations of automated processing systems including collection, alteration, consultation, disclosure, combination and erasure of data, so a full audit trail will be available."

Another feature that will undoubtedly change will be the ultimate court of appeal in case of dispute. For the GDPR it will be the European Court of Justice (as it will be in the UK until Brexit takes effect). "At Brexit (depending on its nature)," Dr Brian Bandey, a Doctor of Law specializing in international cyber laws, told SecurityWeek, "the GDPR's effectiveness as a law will terminate. I believe it likely that simultaneously with that event, the new Data Protection Bill will come into force. I expect the whole of the Data Protection Act 1998 will be repealed. It is at that time that the Supreme Court will be the ultimate Court of Appeal with respect to this matter."

Whether the UK's Supreme Court will be as aggressive in upholding the constitution (the UK does not have a written constitution in the usual sense of the term) as has the European Court, remains to be seen. David Flint, a senior partner at MacRoberts LLP, does not see a problem. He believes that the overriding motivation behind the new Bill is to ensure smooth ongoing business trading between the UK and the EU. GDPR 'adequacy' thus becomes an essential element.

"The fact that UK citizens cannot appeal to the ECJ is arguably a loss," he told SecurityWeek, "but in practice it is difficult to see how a UK court could or would not take cognizance of the decisions of the ECJ in interpreting the UK Act; were they to diverge in interpretation, again the adequacy finding would be in jeopardy."

What does seem likely is that not all the 'optional' elements of GDPR will be enacted within the new DPB. The Open Rights Group has already issued a statement saying, "We are disappointed that UK Ministers are not taking up the option in EU law to allow consumer privacy groups to lodge independent data protection complaints as they can currently do under consumer rights laws."

However, says Flint, "The 2017 UK Data Protection Bill is designed to cover the limited number of instances within the GDPR in which Member States are able to make choices or derogations; issues such as the age of consent for children, for automatic profiling, law enforcement and research. We are told that the UK is adopting a UK solution to these questions."

It seems, then, that any divergences between the UK Data Protection Bill and the GDPR will largely be limited to UK relevance only. Where US companies are concerned, future post-Brexit trading with the UK will be subject to the same conditions and the same potential fines for non-compliance, as they will be for trading with the European Union under GDPR.


Hotspot Shield VPN Accused of Spying On Its Users' Web Traffic
8.8.2017 thehackernews Privacy

"Privacy" is a bit of an Internet buzzword nowadays as the business model of the Internet has now shifted towards data collection.
Although a Virtual Private Network (VPN) is one of the best solutions to protect your privacy and data on the Internet, you should be vigilant when choosing a VPN service that actually respects your privacy.
If you are using the popular free VPN service Hotspot Shield, your data could be at significant risk.
A privacy advocacy group has filed a complaint with the Federal Trade Commission (FTC) against virtual private networking provider Hotspot Shield for reportedly violating its own privacy policy of "complete anonymity" promised to its users.
The 14-page complaint, filed Monday morning by the Center for Democracy & Technology (CDT), a US non-profit advocacy group for digital rights, accuses Hotspot Shield of tracking, intercepting and collecting its customers' data.
Developed by AnchorFree GmbH, Hotspot Shield is a VPN service available for free on the Google Play Store and Apple's Mac App Store, with an estimated 500 million users around the world.
A VPN routes your traffic through an encrypted tunnel to a remote server, securing your data, masking your identity on the Internet and improving your online security and privacy.
VPN services are widely used by privacy advocates, journalists, digital activists and protesters to bypass censorship and geo-blocking of content.
Hotspot Shield Does Just the Opposite of What It Promises
The Hotspot Shield VPN app promises to "secure all online activities," hide users' IP addresses and identities, protect them from tracking, and keep no connection logs while protecting users' internet traffic via an encrypted channel.
However, according to research conducted by the CDT along with Carnegie Mellon University, the Hotspot Shield app fails to live up to all promises and instead logs connections, monitors users' browsing habits, and redirects online traffic and sells customer data to advertisers.
"It is thusly unfair for Hotspot Shield to present itself as a 48 mechanism for protecting the privacy and security of consumer information while profiting off of that information by collecting and sharing access to it with undisclosed third parties," the CDT complaint reads.
"Consumers who employ Hotspot Shield VPN do so to protect their privacy, and Hotspot Shield’s use of aggressive logging practices and third-party partnerships harm its consumers' declared privacy interests."
Hotspot Shield was also found injecting JavaScript code via iframes for advertising and tracking purposes.
Reverse engineering of the app's source code also revealed that the VPN uses more than five different third-party tracking libraries.
Researchers also found that the VPN app discloses sensitive data, including names of wireless networks (via SSID/BSSID info), along with unique identifiers such as Media Access Control addresses, and device IMEI numbers.
The CDT also claims that the VPN service sometimes "redirects e-commerce traffic to partnering domains."
If users try to visit any commercial website, the VPN app redirects that traffic to partner sites, including ad companies, to generate revenue.
"For example, when a user connects through the VPN to access specific commercial web domains, including major online retailers like www.target.com and www.macys.com,the application can intercept and redirect HTTP requests to partner websites that include online advertising companies," the complaint reads.
The CDT wants the FTC to open an investigation into Hotspot Shield's "unfair and deceptive trade practices" and to order the company to stop misrepresenting the privacy and security promises it makes while marketing its app.


Smart Vacuum Cleaners Are Making Maps of Your Home — And Their Maker Wants to Sell Them
27.7.2017 thehackernews Privacy

What if I told you that your cute, smart robotic vacuum cleaner is collecting more than just dirt?
During an interview with Reuters, the CEO of iRobot, the company that manufactures the Roomba, revealed that the robotic vacuum cleaner also builds a map of your home while cleaning — and that the company is now planning to sell this data to third-party companies.
I know it sounds really creepy, but this is what iRobot has planned for the home-mapping data its Roomba robots collect on their users.
What is Roomba?
Manufactured by Massachusetts-based firm iRobot, Roomba is a cute little robotic vacuum cleaner — which ranges in price from $375 to $899 — that has been vacuuming up household dirt since 2002.
Early versions of the Roomba used IR or laser sensors to avoid obstacles in their way, but from 2015 the company began shipping high-end Wi-Fi-connected models, such as the Roomba 980, which include a camera and Simultaneous Localisation And Mapping (SLAM) technology that can not only avoid obstacles but also build a map of your home.
And this has opened up new possibilities for the company.
What Data Roomba Collects and Why?
Roomba robots gather all kinds of data—from room dimensions and furniture position to distances between different objects placed in your room—that could help next-generation IoT devices to build a true smart home.
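
What might such a map look like once serialized for sharing? Below is a minimal Python sketch of an occupancy-grid floor plan, the standard output of SLAM-style mapping; the field names are illustrative, not iRobot's actual format:

    import json

    # Illustrative occupancy grid: 1 = obstacle (wall/furniture), 0 = free floor.
    # With 5 cm cells, even a tiny grid encodes real room geometry.
    floor_map = {
        "cell_size_m": 0.05,
        "grid": [
            [1, 1, 1, 1, 1, 1],
            [1, 0, 0, 0, 0, 1],
            [1, 0, 0, 1, 0, 1],   # the lone 1 could be a chair leg
            [1, 1, 1, 1, 1, 1],
        ],
    }
    print(json.dumps(floor_map))

Even a coarse grid like this reveals room sizes, doorways and furniture placement, which is precisely what makes it valuable to other smart home vendors.
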
iRobot CEO Colin Angle believes mapping data could be used by other smart home devices—such as thermostats, lighting, air conditioners, personal assistants, and security cameras—to become smarter.
"There's an entire ecosystem of things and services that the smart home can deliver once you have a rich map of the home that the user has allowed to be shared," Angle said.
Angle also told the publication that he plans to push the company toward a broader vision of the smart home, and that in the near future iRobot could sell your floor data to businesses like Apple, Amazon, Microsoft and Google—but not without its users' consent.
For now, your home data remains private and is not being shared with any third-party company.
Why Would Companies be Interested in Your Floor-Plans?
By now, you must be wondering how your floor plans could be beneficial to companies like Apple, Amazon, Google or Microsoft.
The move raises some obvious privacy concerns, but it could also help other smart devices in your home work more efficiently—for example:
The data could help tech companies like Amazon, Apple and Google to improve their smart home speakers to control the vacuum and make use of the acoustics to improve audio performance throughout the home.
Dimensional knowledge of the rooms could help smart air conditioners control airflow throughout the home.
Home mapping data could also help Apple’s ARKit developers to create new apps for room management and interior design.
Moreover, Microsoft, Apple, Amazon and Google are already chasing this kind of data in their race to lead the smart home industry.
Concerns — Privacy And Security
Since 2015, when iRobot introduced the mapping technology in the Roomba, the vacuum cleaner has not just been picking up dirt and dust; it has also been mapping the layout of your home, which raises privacy concerns for many of its users.
According to its terms of service, users already give the company permission to share their data with third-party vendors and subsidiaries, and in response to government requests.
"We may share your information...Third party vendors, affiliates, and other service providers that perform services on our behalf, solely in order to carry out their work for us, which may include identifying and serving targeted advertisements, providing e-commerce services, content or service fulfillment, billing, web site operation, payment processing and authorization, customer service, or providing analytics services," the company's privacy policy reads.
Given these terms, it is possible for the company to sell its customers' information in bulk to other companies without notifying its users. And it is obvious that the smarter you want your technology to be, the more private data you are offering up to companies.
The Roomba is already compatible with Amazon's Alexa and Google Home — Apple's HomePod speaker will soon join them — and its CEO is planning to sell its maps to one or more of these 'Big Three' within the next couple of years.


32M Becomes First-Ever Company to Implant Micro-Chips in Employees
25.7.2017 thehackernews Privacy

Biohacking could be the next big thing in this smart world.
Over two years ago, a hacker implanted a small NFC chip in his left hand, right between his thumb and his pointer finger, and used it to hack Android smartphones, bypassing almost all security measures and demonstrating the risks of biohacking.
At the end of the same year, another hacker implanted under his skin a small NFC chip holding the private key to his Bitcoin wallet, allowing him to buy groceries or transfer money between bank accounts by just waving his hand.
And this is soon going to be a reality, at least at one tech company in Wisconsin.
Marketing solutions provider Three Square Market (32M) has announced that it has partnered with Swedish biohacking firm BioHax International to offer implanted microchips to its employees on 1st August, according to the company's website.
Although the programme is optional, the company expects more than 50 of its employees to undergo the biohacking procedure.
Like previous bio hacks, the chips will be implanted underneath the skin between the thumb and forefinger, and will also use near-field communications (NFC) — the same technology that makes contactless credit cards and mobile payments possible — along with radio-frequency identification (RFID).

According to the company, the implanted chips would allow its employees to log into their office computers, pay for food and drink from office vending machines, open doors and use the copy machine, among other purposes.
The company CEO has also confirmed that 'there's no GPS tracking at all.'
"We foresee the use of RFID technology to drive everything from making purchases in our office break room market, opening doors, use of copy machines, logging into our office computers, unlocking phones, sharing business cards, storing medical/health information, and used as payment at other RFID terminals," 32M chief executive Todd Westby said.
"Eventually, this technology will become standardised allowing you to use this as your passport, public transit, all purchasing opportunities, etc."
Interested employees will be chipped at the 32M inaugural "chip party" on 1st August at the company's headquarters in River Falls, Wisconsin.
Three Square Market is considered a leader in micro market technology, designing mini convenience stores that use self-checkout kiosks (vending machines), often found in large companies.
The company has more than 2,000 kiosks in nearly 20 different countries, and it operates over 6,000 more through TurnKey Corrections, the firm's corrections-industry business.
While biometric technology is experiencing an increase in popularity, it also raises widespread concerns about the safety and privacy of the people adopting it.
Hackers could turn technology meant to make the public's life easier against the public itself, and one should not forget that as technology advances, so do the techniques used by cybercriminals.