Communicating Privacy and Security Research: A Tough Nut to Crack

Today at the Citizen Lab we released a new report on (yet more) privacy and security issues in UC Browser, accompanied by a new cartoon series, called Net Alert.

Our new UC Browser report, entitled “A Tough Nut to Crack,” and authored by Jeffrey Knockel, Adam Senft, and me, is our second close-up examination of UC Browser, by some estimates the second most popular mobile browser application in the world.  In our first analysis of UC Browser, undertaken in 2015, we discovered several major privacy and security vulnerabilities that would seriously expose users of UC Browser to surveillance and other privacy violations.  We were tipped off to look at UC Browser while going through some of the Edward Snowden disclosures, in which the NSA, CSE, and other SIGINT partners were patting themselves on the back for exploiting data leaks and faulty update security in UC Browser.  I wrote an op-ed at the time discussing the security tradeoffs involved in keeping knowledge of software flaws like this quiet, and why we need a broader public discussion about software vulnerability disclosure.

We decided to take a second look at UC Browser, this time led by Jeffrey Knockel.  By reverse engineering several versions of UC Browser, Jeffrey was able to determine the likely version referenced in the Snowden disclosure slides, which led the NSA to develop an XKeyscore plugin for UC Browser exploitation.  We also found that all versions of the browser we examined — Windows and Android — transmit personal user data with easily decryptable encryption, and that the Windows version does not properly secure its software update process, leaving it vulnerable to arbitrary code execution.  We disclosed our findings to Alibaba, the parent company, and report on their responses and fixes, such as they are, in an appendix to the report.
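To give a flavour of what “easily decryptable encryption” means in practice — this is a deliberately simplified sketch, not UC Browser’s actual scheme — consider an app that “encrypts” its traffic with a symmetric key hardcoded into the binary. Anyone who reverse engineers one copy of the app recovers the key and can decrypt every user’s traffic captured on the network:

```python
# Hypothetical illustration (not the actual UC Browser scheme): a fixed
# key shipped inside every copy of the app offers no real protection.
HARDCODED_KEY = b"s3cret"  # recoverable by anyone who disassembles the app

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    XOR is its own inverse, so the same function encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The app "encrypts" a query before sending it over the network...
plaintext = b"search=dissident+news&imei=123456"
ciphertext = xor_cipher(plaintext, HARDCODED_KEY)

# ...but an eavesdropper who extracted the same key from the binary
# decrypts the captured traffic trivially.
recovered = xor_cipher(ciphertext, HARDCODED_KEY)
assert recovered == plaintext
```

The real schemes our reports describe are more elaborate than a toy XOR, but the structural flaw is the same: when the secret needed to decrypt ships inside the client, network “encryption” is only an obfuscation layer.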

Communicating these risks to users is not always easy, as the details are very technical and can be confusing.  To help better communicate privacy and security research to a broader audience,  we co-timed the release of our new UC Browser report with the first in a series of cartoons and info-nuggets on digital security, called “Net Alert.”   The first Net Alert features two informative and funny cartoons by Hong Kong artist Jason Li, each of which tells a story about the risks of using UC Browser.  The Net Alert series also includes background information on digital security topics, like the risks of “man-in-the-middle” attacks and of using open WiFi networks.   (Net Alert is produced by Citizen Lab in collaboration with Open Effect and the University of New Mexico).  We will be producing more of these Net Alert cartoons and info-nuggets co-timed with future Citizen Lab reports.  Our hope is that by communicating privacy and digital security risks in a friendly and accessible way, more people will be inclined to take small steps to better protect themselves against exposure and learn more about the research we undertake.

The UC Browser report is but one in an ongoing research series on mobile privacy and security.  For those who are interested, we have also published a paper, which we are presenting this week at the 2016 USENIX Workshop on Free and Open Communications on the Internet (FOCI), that summarizes our technical analysis of the security and privacy vulnerabilities in three web browsers developed by China’s three biggest web companies: UC Browser, QQ Browser, and Baidu Browser, developed by UCWeb (owned by Alibaba), Tencent, and Baidu, respectively.

The Iranian Connection

Today, the Citizen Lab is publishing a new report, authored by the Citizen Lab’s John Scott-Railton, Bahr Abdulrazzak, Adam Hulcoop, Matt Brooks, and Katie Kleemola of Lookout, entitled “Group 5: Syria and the Iran Connection.”

The full report is here: https://citizenlab.org/2016/08/group5-syria/

Associated Press has an exclusive report here: http://bigstory.ap.org/article/6ab1ab75e89e480a9d12befd3fea4115/experts-iranian-link-attempted-hack-syrian-dissident

And I wrote an op-ed for the Washington Post about our report, which can be found here:  https://www.washingtonpost.com/posteverything/wp/2016/08/02/how-foreign-governments-spy-using-email-and-powerpoint/

This report describes an elaborately staged malware operation with targets in the Syrian opposition. We first discovered the operation in late 2015 when a prominent member of the Syrian opposition, Noura Al-Ameera, spotted a suspicious e-mail containing a PowerPoint slideshow purporting to show evidence of “Assad crimes.”  Rather than open it, Al-Ameera wisely forwarded it to us at the Citizen Lab for further analysis.  Upon investigation, we determined the PowerPoint was laden with spyware.

Following that initial lead, our researchers spent several months engaged in careful network analysis, reverse engineering, and mapping of the command and control infrastructure.  Although we were not able to make a positive attribution to a single government (a common issue in cyber espionage investigations), we were able to determine that behind the targeted attack on Noura Al-Ameera is a new espionage group operating out of Iranian Internet space, possibly a privateer and likely working for either the Syrian or Iranian governments (or both).

Citizen Lab has tracked four separate malware campaigns that have targeted the Syrian opposition since the early days of the conflict: Assad regime-linked malware groups, the Syrian Electronic Army, ISIS, and a group with ties to Lebanon. Our latest report adds one more threat actor to the list, which we name “Group5” (to reflect the four other known malware groups) with ties to Iran.

The report demonstrates yet again that civil society groups are persistently targeted by digital malware campaigns, and that their reliance on shared social media and digital mobilization tools can be a source of serious vulnerability when exploited by operators using clever social engineering methods.

On Research in the Public Internet

This post is cross posted from https://citizenlab.org/2016/07/research-interest/

On January 20, 2016, Netsweeper Inc., a Canadian Internet filtering technology service provider, filed a defamation suit with the Ontario Superior Court of Justice. The University of Toronto and I were named as the defendants. The lawsuit pertained to an October 2015 report of the Citizen Lab, “Information Controls during Military Operations: The case of Yemen during the 2015 political and armed conflict,” and related comments to the media. Netsweeper sought $3,000,000.00 in general damages; $500,000.00 in aggravated damages; and an “unascertained” amount for “special damages.”

On April 25, 2016, Netsweeper discontinued its claim in its entirety.

Between January 20, 2016 and today, we chose not to speak publicly about the lawsuit. Instead, we spent time preparing our statement of defence and other aspects of what we anticipated would be full legal proceedings.

Now that the claim has been discontinued it is a good opportunity to take stock of what happened, and make some general observations about the experience.

It should be pointed out that this is not the first time a company has contemplated legal action regarding the work of the Citizen Lab. Based on emails posted to Wikileaks from a breach of the company servers, we know that the Italian spyware vendor, Hacking Team, communicated with a law firm to evaluate whether to “hit [Citizen Lab] hard.” However, it is the first time that a company has gone so far as to begin litigation proceedings. I suspect it will not be the last.

Fortunately, Ontario has recognized the importance of protecting and encouraging speech on matters of public interest. Canada has historically proven a plaintiff-friendly environment for defamation cases, but on November 3, 2015, the legal landscape shifted in Ontario when a new law, the Protection of Public Participation Act (PPPA), came into force. It was specifically designed to deter “strategic litigation against public participation,” or SLAPP suits. The Act enumerates its purposes as:

(a) to encourage individuals to express themselves on matters of public interest;

(b) to promote broad participation in debates on matters of public interest;

(c) to discourage the use of litigation as a means of unduly limiting expression on matters of public interest; and

(d) to reduce the risk that participation by the public in debates on matters of public interest will be hampered by fear of legal action.

Under the Act, a judge may dismiss a defamation proceeding if “the person satisfies the judge that the proceeding arises from an expression made by the person that relates to a matter of public interest.” The Act allows for recovery of costs, and if, “in dismissing a proceeding under this section, the judge finds that the responding party brought the proceeding in bad faith or for an improper purpose, the judge may award the moving party such damages as the judge considers appropriate.”

In our view, the work of Citizen Lab to carefully document practices of Internet censorship, surveillance, and targeted digital attacks is precisely the sort of activity recognized as meriting special protection under the PPPA. Had our proceedings gone forward, we intended to exercise our rights under the Act and move to dismiss Netsweeper’s action.

Regardless of the status of the suit, we strenuously disagree with the claims made by Netsweeper, and stand firm in the conviction that my remarks to the media, and the report itself, are both clearly responsible communications on matters of public interest and fair comment as defined by the law.

One point bears underscoring: it is an indisputable fact that Citizen Lab tried to obtain and report Netsweeper’s side of the story. Indeed, we have always welcomed company engagement with us and the public at large in frank dialogue about issues of business and human rights. We sent a letter by email directly to Netsweeper on October 9, 2015. In that letter we informed Netsweeper of our findings, and presented a list of questions. We noted: “We plan to publish a report reflecting our research on October 20, 2015. We would appreciate a response to this letter from your company as soon as possible, which we commit to publish in full alongside our research report.”

Netsweeper never replied.

We expect that Citizen Lab research will continue to generate strong reaction from companies and other stakeholders that are the focus of our reports. The best way we can mitigate legal and other risk is to continue to do what we are doing: careful, responsible, peer-reviewed, evidence-based research. We will continue to investigate Netsweeper and other companies implicated in Internet censorship and surveillance, and we will continue to give those companies a chance to respond to our findings, and publish their responses, alongside our reports.

I come away from this experience profoundly appreciative of the skills of my staff and colleagues, and in particular Jakub Dalek, Sarah McKune, and Adam Senft, who assisted in the legal preparations.

Lastly, I am grateful to the University of Toronto for their support throughout this process. With corporate involvement in academia seemingly everywhere these days, it is tempting to get cynical about universities, and wonder whether corporate pressures will make university administrators lose sight of their core mission and purpose. After the experiences of the last few months, I feel optimistic about the possibilities of speaking truth to power with the protection of academic freedom that the University of Toronto has provided me.

Meanwhile, back to work on another Citizen Lab report.

The Week of Holding “Big Data” Accountable

The world of “Big Data,” “The Internet of Things,” or simply… “Cyberspace.”

Whatever we choose to call it, never in human history has something so profoundly consequential for so many people’s daily lives been unleashed in such a short period of time.  Certainly, the printing press, the telegraph, radio, and television were all extraordinary.  But what is going on now is truly unprecedented in its sudden, dramatic impact.  In the span of a few short years, billions of citizens the world over are immersing themselves in an entirely new communications environment — one that is changing not only how we think and behave but, more profoundly, how society as a whole is fundamentally structured.  Information that was previously stored in our office drawers, in locked closets, in our diaries, even in our minds, we are now transmitting to thousands of private companies and, by extension, to government agencies.

This world of Big Data is a supernova of billions of human interactions, habits, movements, thoughts, and desires, ripe to be harvested, analyzed, and then fed back to us, in turn, to predict and shape us.  It should come as no surprise, given the rate at which this transformation is occurring, that there will be unintended — and possibly even seriously detrimental — consequences for privacy, liberty, and security.

Evidence of these consequences is now beginning to accumulate.  First, there are privacy issues. Data breaches that expose the email and password credentials of tens of millions of people have become so routine that researchers are now describing them as “megabreaches.” Our research at the Citizen Lab has shown how numerous popular mobile applications used by hundreds of millions of people routinely leak sensitive user information, including, in some cases, the geolocation of the user, device ID and serial number information, and lists of nearby WiFi networks. We have discovered that some applications were so poorly secured that anyone with control of a network to which these applications connect (e.g., a WiFi hotspot) could easily spoof a software update to install spyware on an unwitting user’s device.

Poorly designed mobile applications, such as those we have examined, are a goldmine for criminals and spies, and yet we surround ourselves with them. Disclosures of former National Security Agency (NSA) contractor Edward Snowden have shown that state intelligence agencies routinely vacuum up information leaked by applications in this way, and use the data for mass surveillance.  And what they don’t acquire from leaky applications, they get directly from the companies through lawful requests.  The confluence of interests around commercial and state surveillance is where Big Data meets Big Brother.

Beyond privacy issues are those of security. For example, researchers have demonstrated how they could use remote WiFi connections to take over the controls of a smart car or even an airliner’s cockpit systems.  Others have shown proof-of-concept attacks against “smart home” systems that remotely cracked door lock codes, disabled vacation mode, and induced a fake fire alarm. Of course, what happens in the lab is but an omen of what’s to come in the real world. Several years ago, a computer virus called “Stuxnet,” reportedly developed by the US and Israel, was used to sabotage Iranian nuclear enrichment plants.  Dozens of countries are reportedly researching and stockpiling their own Stuxnet-like cyber weapons, which in turn is generating a huge commercial market for the hidden software flaws such weapons depend on. Perversely adding to the insecurities (as the FBI-Apple controversy showed us), some government agencies are, in fact, pressuring companies to weaken their systems by design to aid law enforcement and intelligence agencies.  As such insecurities mount, and as more and more of our critical infrastructure is networked, the Big Data environment in which we live may turn out to be a digital house of cards.

This past week, the Citizen Lab and our partners, Open Effect, produced several outputs and activities that related to concerns around privacy and security in the world of Big Data, including some that we hope can help mitigate some of these unintended consequences.

First, the Citizen Lab and Open Effect released a revamped version of the Access My Info tool, which allows Canadians to exercise their legal rights to ask companies about the data they collect on them, what they do with it, and with whom they share it.  I wrote an op-ed for the CBC about the tool, and there were several other media reports, including an interview by the CBC’s Metro Morning host Matt Galloway with Andrew Hilts of Citizen Lab and Open Effect.

Also, yesterday the CBC Ideas broadcast a special radio show on “Big Data Meets Big Brother,” in which I participated alongside Ann Cavoukian and Neil Desai, with Munk School director Stephen Toope moderating.  We discussed the balance between national security and privacy, and focused in on the limited oversight mechanisms that exist in Canada around security agencies, and especially the Communications Security Establishment (CSE).

Finally, Citizen Lab and Open Effect, as part of our Telecommunications Transparency Project, released a DIY Transparency Reporting Tool.  The tool is actually a software template that provides companies with a guide for developing transparency reports. To give some context for the tool, companies are increasingly encouraged to release public reports on the length of time client data is retained, how the data is used, and how often — and under what lawful authority — the data is shared with government agencies.  The DIY Transparency Reporting Tool is the flipside of the Access My Info project:  whereas the latter encourages consumers to ask companies and governments about what they do with our data, the Transparency Reporting Tool provides companies with an easy-to-use template to take the initiative to report that information to us.
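Purely as an illustration of the kind of aggregation a transparency report performs — the field names and figures below are invented, not the DIY tool’s actual schema — a report boils an internal log of government requests down to public tallies by lawful authority and outcome:

```python
# Hypothetical internal log of government data requests (invented data).
from collections import Counter

requests = [
    {"authority": "court order", "disclosed": True},
    {"authority": "court order", "disclosed": False},
    {"authority": "warrantless demand", "disclosed": True},
]

# Aggregate into the public-facing figures a transparency report publishes.
by_authority = Counter(r["authority"] for r in requests)
disclosed_count = sum(1 for r in requests if r["disclosed"])

report = {
    "requests_by_lawful_authority": dict(by_authority),
    "requests_resulting_in_disclosure": disclosed_count,
}
```

The point of publishing aggregates like these, rather than raw logs, is that the public learns how often and under what authority data flows to government agencies without any individual client’s records being exposed.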

The world of Big Data has come upon us like a hurricane, with most consumers bewildered by what is happening to the data they routinely give away.  Meanwhile, companies are reaping a harvest of highly personalized information to generate enormous profits, with very little public accountability around their conduct, or the design choices they make.  It’s time we encouraged consumers to “lift the lid” on the Big Data ecosystem right down to the algorithms that sort us and structure our choices, while simultaneously pressing companies to be more responsible stewards of our data.  Tools like “Access My Info” and the DIY Transparency Toolkit are a good first step.

A Stealth Falcon Quietly Snatches Its Twitter Prey

Today, the Citizen Lab is publishing a new report, entitled “Be Calm and (Don’t) Enable Macros: Malware Sent to UK Journalist Exposes New Threat Actor Targeting UAE Dissidents.” The report is authored by Citizen Lab senior researchers Bill Marczak and John Scott-Railton, and details an extensive and highly elaborate targeted digital attack campaign, which we call “Stealth Falcon.” While we have no “smoking gun” (typical for cyber espionage investigations), there is a lot of circumstantial evidence that strongly suggests the United Arab Emirates is responsible for Stealth Falcon.

The New York Times has an exclusive on the report, which can be found here http://www.nytimes.com/2016/05/30/technology/governments-turn-to-commercial-spyware-to-intimidate-dissidents.html?_r=0

Our full report is here: https://citizenlab.org/2016/05/stealth-falcon/

Journalists, activists — in fact, all of civil society — now depend on and have benefited from social media to conduct their campaigns and communicate with each other, and with confidential sources.  Yet that same dependence on social media has become a principal point of exposure and risk, exploited by criminals, intelligence agencies, and other adversaries determined to silence dissent. Our report offers a shocking exposé of just how elaborate and shifty these campaigns can be, and how serious the consequences are for those ensnared in them.

The Stealth Falcon case began when Rori Donaghy, a UK-based journalist and founder of the Emirates Center for Human Rights, received an email in November 2015 purporting to offer him a position on a human rights panel.  That email contained a malware-laden attachment from a phony organization. Donaghy has published extensively on abuses by the UAE government, including a series of articles based on leaked emails involving UAE government members.  Suspecting something was awry, Donaghy made the wise move of sharing the email with Citizen Lab researcher Bill Marczak.

Using a combination of reverse engineering, network scanning, and other highly intricate detective methods that are detailed in the report, Marczak (assisted by John Scott-Railton) unearthed a vast campaign of digital attacks aimed at UAE dissidents, organized primarily through fake Twitter accounts, phony websites, and spoofed emails.  The attacks appear to have had extremely serious consequences: many dissidents targeted, and presumably entrapped by Stealth Falcon, disappeared into the clutches of UAE authorities and were reportedly tortured.

The United Arab Emirates is an autocratic regime that governs with strict regulations and harsh punishments.  Human Rights Watch’s 2016 UAE country report documents arbitrary arrests and forcible disappearances of regime critics.  Amnesty International says that “torture and other ill-treatment of detainees was common” in UAE prisons.   It is one of those countries that has for a long time strictly censored the Internet using technology developed by Western companies; earlier Citizen Lab research found the services of a Canadian company, Netsweeper, are used by UAE ISPs to restrict access to content critical of the regime.  The UAE has purchased “lawful intercept” surveillance systems from the notorious FinFisher and Hacking Team intrusion software vendors, as we have documented in prior reports.  It is not yet clear whether what we call “Stealth Falcon” is something the UAE developed itself, or whether it’s part of some kind of commercial service.  Regardless, it is a nasty reminder of the way the harsh world of realpolitik actually manifests itself in cyberspace.

There are at least two broader lessons of the Stealth Falcon report. First, the careful, rigorous methods demonstrated by Bill Marczak and John Scott-Railton are exemplary of the power of applying structured research techniques drawn from engineering and computer science to issues of human rights.  We hope other university-based research groups are inspired by this mixed-methods approach, and emulate what we are doing around documenting targeted digital attacks.  The more this type of research is “normalized” in academia, the less likely abuses of the sort we are unearthing will go unnoticed.

Second, it is clear that autocratic regimes like the United Arab Emirates are now routinely finding ways to project their power through cyberspace by subverting the tools of social media to accomplish their sinister aims. Given that civil society is so deeply immersed in social media, it is imperative that they, and the companies that service them, urgently adapt to and mitigate these new threats. Doing so will require a more mature awareness of the risks that exist in cyberspace, a sense of what to be “on the lookout for” when it comes to those risks, and a willingness to adjust behaviour accordingly.  Although there were many victims of Stealth Falcon, Donaghy himself was not among them, thanks to his astute recognition that a pleasant but out-of-the-blue invitation seemed not quite right.

My conversation with Edward Snowden

Earlier this week, I was fortunate to have a lengthy conversation with Edward Snowden.  The chat was held at RightsCon and moderated by Access Now’s Amie Stepanovich, and it is archived at the RightsCon website here: https://www.youtube.com/watch?v=yGDqXokPGiE

We covered many topics, and I learned a great deal about Ed’s positions, as well as his eloquence and passion.  It is clear he has deeply held and sophisticated perspectives on security, rights, and freedom.  It is remarkable that the most important whistleblower in the history of intelligence also happens to be so thoughtful and articulate.

We spoke about the Internet rights community, and the challenges of extending the values of that community to the broader public in a context where big data and state surveillance are overwhelmingly dominating.  I made the case for the value of evidence-based, mixed methods University research of the sort that Citizen Lab does to bring transparency and support human rights advocacy.  I described the various fellowship opportunities, and even recommended Ed apply for one as a remote fellow. 🙂

We also spoke about the status of the Snowden disclosures moving forward.  It is clear Ed thought carefully about how best to avoid prejudice concerning the analysis of the documents. Handing them over to third parties makes sense.  But now, the documents are largely in the possession of a single media organization and the process around access to them for outside interested parties is opaque and lacking in explicit rules that we can all acknowledge.  Opening the entire cache up to the public, on the other hand, would be irresponsible since there is still sensitive information in them that could put lives at risk.

A different model I proposed is to create a respected international independent advisory board that would oversee and adjudicate applications to the archives from journalists and researchers. Ed responded that discussions had been held with a University about taking the documents, but the University was naturally concerned about the liabilities of handling them. But I believe that is confusing things. Here we need to separate the physical location of the documents from the process of how to get access to them.  It does not matter where the documents are archived — whether that be in one or several locations — as long as they are secure.  What matters more is the process by which decisions are made as to who gets access to them.  Right now, it’s a bit of a mystery and based largely on personal connections revolving around one or two journalists and a few editors of a private company.  Moving forward, that needs to change.  It’s a matter of global public interest.

Thanks to Access Now for archiving it here: https://www.youtube.com/watch?v=yGDqXokPGiE

Wup Woh: Security Issues with Another China-based Browser

“Once is happenstance. Twice is coincidence. The third time, it’s enemy action” – Ian Fleming, Goldfinger

The Citizen Lab is releasing a new report today authored by Jeffrey Knockel, Adam Senft, and me, entitled: “WUP! There It Is: Privacy and Security Issues in QQ Browser.”*   The report is a continuation of the research we have been doing on privacy and security issues in popular Asia-based applications, and in particular China-based mobile browsers. Previous Citizen Lab reports found major security and privacy issues in UC Browser and Baidu Browser.  We now find strikingly similar problems in a third Chinese application, QQ Browser.

As we detail at length in the report (based on Jeffrey Knockel’s reverse engineering and technical analysis), we find QQ Browser is collecting a lot of highly sensitive information about users (what a user is searching for and where they are located) and users’ devices (IMEI number, SIM card number, etc.) and then transmitting all of this data either completely unencrypted or in an easily decryptable format back to Tencent’s servers (Tencent is the company behind QQ).

We also identify a major vulnerability in the software update process that would allow any malicious actor to easily spoof the automatic browser update with malware and completely take over a user’s device.  In our report, we demonstrate this vulnerability by installing Angry Birds.  We could just as easily have installed spyware as a software update — and then turned on the microphone and camera, harvested user information, sent spoofed emails or instant messages from the device, or changed any of its security settings.
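The underlying flaw in vulnerabilities of this class is that the client installs whatever the network hands it. As a hedged sketch (the function names and digest scheme here are illustrative, not QQ Browser’s code or its vendor’s fix), the minimal defence is to refuse any update package whose contents don’t match an authenticity check obtained over a channel the attacker can’t tamper with, such as signed metadata fetched via HTTPS:

```python
# Illustrative sketch of unauthenticated vs. authenticated update installs.
# Names and the digest-pinning scheme are assumptions for the example;
# production updaters typically verify an asymmetric signature instead.
import hashlib
import hmac

def install_unverified(package: bytes) -> str:
    # Vulnerable pattern: no check at all, so anyone on the network path
    # (WiFi hotspot, ISP, etc.) can substitute their own package.
    return "installed"

def install_verified(package: bytes, expected_sha256: str) -> str:
    # Safer pattern: compare the package digest against a value delivered
    # over an authenticated channel, and refuse on mismatch.
    digest = hashlib.sha256(package).hexdigest()
    if not hmac.compare_digest(digest, expected_sha256):
        raise ValueError("update rejected: digest mismatch")
    return "installed"

genuine = b"genuine update bytes"
tampered = b"spyware posing as an update"
pinned = hashlib.sha256(genuine).hexdigest()

assert install_verified(genuine, pinned) == "installed"
try:
    install_verified(tampered, pinned)
except ValueError:
    pass  # the spoofed update is refused before it can run
```

With a check like this in place, a network attacker can still block an update, but can no longer silently swap in their own code, which is precisely the capability our Angry Birds demonstration exploited.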

The threats these privacy and security issues pose to users are numerous and troubling, especially in a context like China.  The insecure transmission of highly sensitive user data means that any actor with visibility at any point along the networks through which QQ Browser’s data passes (WiFi cafes, ISPs, telcos, etc.) could collect all of it and share it with anyone they want.  The software update vulnerability means that any of those same actors along any of those network paths could also trivially push a fake update to the device and take it over in the same way we did.  The collection and insecure transmission of very invasive persistent identifiers hard-baked into a user’s device (IMEI number, SIM card number, serial number) is a gold mine for law enforcement and SIGINT agencies, as clearly demonstrated in the Snowden disclosures, since they can use these device identifiers to track people as they move around — as most of us do — with devices in our pockets.

Most concerning of all, of course, is that these problems are situated in the context of China — a country with one of the world’s most extensive censorship and surveillance regimes; a country that compels all Internet companies, like Tencent, to turn over user data upon request to security services; a country that has recently passed a far-reaching anti-terrorism law that requires service providers to decrypt communications when the government asks; a country that is in the midst of a dramatic tightening of laws and regulations around social media use; and a country that routinely incarcerates, detains, or harasses human rights defenders, lawyers, activists, and others the regime deems subversive, both within mainland China and abroad.

Why is QQ collecting all of this highly invasive user data and transmitting it back to its servers in an insecure fashion? And, why are three of the most popular mobile browser applications in China all suffering from nearly identical problems?

As with UC Browser and Baidu Browser, we engaged in a responsible notification process with QQ’s security engineers (who only partially fixed the issues), and then sent detailed questions to the parent company, Tencent, answers to which we promised to publish in full alongside our report.  At the time of publication, however, Tencent had not replied to those questions.

Without those answers, we can only speculate.  It could be coincidence that the engineers are all following the same sloppy security and aggressive data collection practices.  Or it could be that sloppy security and aggressive data collection are the norm in the application development industry, and these engineers are just doing what’s normal.  But given the context in China described above, one cannot help but speculate that there is something else more nefarious going on.

Regardless of the reasons, the effect is the same: millions of users of these applications are exposed to serious, perhaps life-threatening, privacy violations and security risks.

Read the full report here: https://citizenlab.org/2016/03/privacy-security-issues-qq-browser/

Read the Washington Post story here: http://wpo.st/skzP1

Read the Wall Street Journal story here: http://on.wsj.com/1ohHbIy

*The title “WUP! There It Is” is a reference to the insecure transmission of user data sent by QQ Browser across the network, which they designate as “WUP” requests.

Shifting Tactics, Same Results: Users at Risk

Citizen Lab is releasing a new report today entitled, “Shifting Tactics: Tracking changes in years-long espionage campaign against Tibetans,” authored by Jakub Dalek, Masashi Crete-Nishihata, and John Scott-Railton.

Tibetans have long suffered persistent cyber espionage.  Being perceived as one of the political thorns in the side of the Chinese regime means that all those sophisticated digital spying campaigns we often hear about targeting companies and governments in the West — Tibetans have faced them too.  When it comes to cyber attacks, in other words, they have been canaries in the coal mine.

The report details the latest iteration in a long-running espionage campaign against the Tibetan community.  Using malware and emails shared with us by trusted partners in Tibetan communities, Citizen Lab researchers were able to track the evolution of attacker behaviour from the document-based malware attacks many are familiar with (“don’t click on that attachment, it might contain malware!”) to phishing attacks that draw on “inside” knowledge and try to trick users into entering credentials into cloud-based infrastructure, like Google Docs.

One interesting observation we make is that this shift in tactics maps onto changes in the Tibetans’ own security practices.  To protect themselves and their community, some years ago Tibetans began advocating against opening attachments (“Detach from Attachments”).  The attackers noticed, however, and altered their methods in turn.  The speed of this adaptation shows how difficult it is for communities like the Tibetans to stay safe online.

Once again, what we find hitting civil society overlaps with what the private sector has previously identified hitting its clients.  In this case, we connect the attack group’s infrastructure and techniques to a group previously identified by Palo Alto Networks, which they named Scarlet Mimic.  We add detail to the Palo Alto report about the command-and-control infrastructure and the targeting of victims.

The information vacuumed up by whoever is behind these attacks is sensitive, and in the hands of a well-resourced adversary like China could cause serious damage to the safety and security of individuals in Tibet and beyond. The extracted information could also be used in support of efforts to frustrate and isolate political groups in the Tibetan diaspora.

We conclude the report with several tips, tools, and tactics on how users can protect themselves against this type of attack.

The full report is here: https://citizenlab.org/2016/03/shifting-tactics

Update: Motherboard’s Lorenzo Franceschi-Bicchierai wrote up a great piece about it here: https://motherboard.vice.com/read/how-tibetans-are-fighting-back-against-chinese-hackers

Down on the Baidu

Today, the Citizen Lab is releasing a new report, “Baidu’s and Don’ts: Privacy and Security Issues in Baidu Browser.”

The report is the result of many weeks of careful analysis, led by Citizen Lab security researcher Jeffrey Knockel with co-authors Adam Senft and Sarah McKune, and is part of Citizen Lab’s ongoing interest in analyzing the privacy and security issues of popular mobile applications.

Reuters has an exclusive story on the report here: http://www.reuters.com/article/baidu-vulnerability-idUSL3N1613VI

The report takes a close look at Baidu Browser, a popular China-based mobile application that is available in Windows and Android versions. What we found was very troubling.

Baidu Browser collects and transmits a lot of personal user data back to Baidu servers, far more than we believe should be collected, and it does so either without encryption or with easily decryptable encryption. Data the Android version collects and transmits without any encryption includes a user’s GPS coordinates, search terms, and URLs visited; the user’s IMEI and nearby wireless networks are sent with easily decryptable encryption. Meanwhile, the Windows version sends search terms, the hard drive serial number, the network MAC address, the titles of all webpages visited, and the GPU model number.

That is a lot of fine-grained, personally identifiable information about what a user is doing, where they are located, and their device.  Hard drive serial number? Really? What does the manufacturer of a mobile browser application need with the hard drive serial number of your device? Sending all of that information in the clear is a big problem too, because it means anyone who operates any of the networks over which the communication travels (e.g., WiFi, cell, ISP, telco providers) can see and log it as well (more on that below).
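To make concrete why “easily decryptable encryption” offers so little protection, here is a minimal Python sketch, assuming (purely for illustration, not as Baidu’s actual scheme) that data is scrambled with a fixed key shipped inside the application binary. Anyone who extracts that key by reverse engineering the app can decrypt every user’s traffic:

```python
# Sketch: why "easily decryptable encryption" is barely better than plaintext.
# Assumption (illustrative only): data is XOR-scrambled with a fixed key that
# ships inside the application binary, recoverable by anyone who takes the
# app apart.

HARDCODED_KEY = b"app-secret"  # hypothetical key, extracted from the binary

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key (encryption == decryption)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# What the app transmits over the network:
payload = b"imei=867530901234566&query=tibet news"
ciphertext = xor_crypt(payload, HARDCODED_KEY)

# What any network observer holding the extracted key recovers:
recovered = xor_crypt(ciphertext, HARDCODED_KEY)
print(recovered.decode())
```

The key point is that a symmetric key baked into every copy of an app is effectively public: one reverse engineering effort defeats the “encryption” for all users at once.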

We also found that neither the Windows nor the Android version of Baidu Browser protects software updates with code signatures, meaning an in-path malicious actor could cause the application to download and execute arbitrary code.

What does that risk represent in real terms? Say you had Baidu Browser loaded on your mobile device and you connected to a WiFi hotspot controlled by a criminal, spy, or some other nefarious group, maybe at a conference hotel, a coffee shop, or an airport. Anyone with access to those networks would have been able to send malware to your phone disguised as a Baidu update, take over your phone, and do anything they wanted with it. (Thankfully, it appears Baidu has now fixed this issue following our security disclosure.)
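The fix is conceptually simple: authenticate update bytes before executing them.  Proper code signing verifies an asymmetric signature against a vendor public key shipped with the client; the stdlib-only Python sketch below stands in for that with a pinned SHA-256 digest (assumed, for illustration, to come from a trusted channel), but it makes the same point:

```python
# Sketch of the check an auto-updater needs before executing a download.
# Real code signing verifies an asymmetric signature against a vendor public
# key shipped with the client; this stdlib-only stand-in pins a SHA-256
# digest (assumed to be obtained over a trusted channel) to show the rule:
# authenticate the bytes, or an in-path attacker who tampers with the
# download gets arbitrary code execution.

import hashlib
import hmac

# Digest the client trusts (hypothetical; in practice shipped with the app
# or fetched over an authenticated channel).
EXPECTED_SHA256 = hashlib.sha256(b"legitimate update bytes").hexdigest()

def is_authentic(update_bytes: bytes) -> bool:
    """Return True only if the download matches the trusted digest."""
    digest = hashlib.sha256(update_bytes).hexdigest()
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(digest, EXPECTED_SHA256)

genuine = b"legitimate update bytes"
tampered = b"malware disguised as an update"

assert is_authentic(genuine)       # install proceeds
assert not is_authentic(tampered)  # in-path payload rejected, never executed
```

An updater that skips a check like this, as the vulnerable versions apparently did, will happily execute whatever bytes the network hands it.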

On a methodological level, the findings show the value of reverse engineering, a method that is under pressure as companies grow more litigious and copyright laws become more stringent about what individuals can do with devices and applications.  I have repeatedly argued that “lifting the lid” on the Internet is not only interesting from a research perspective, it is also a civic responsibility.  Of course, not everyone can “lift the lid” on the Internet.  It requires considerable skill of the sort Citizen Lab security researcher Jeffrey Knockel has, and which this report demonstrates.

After the last few reports where reverse engineering has figured prominently, I would like to propose a new rule: the more you take popular applications apart, the scarier the findings.

There are also some interesting lessons from the responsible disclosure process we undertook for this report (which is detailed in the report itself). We gave the company 45 days to address the issues, and then extended that deadline at their request. Baidu security engineers were very responsive, for the most part, and took our concerns very seriously.  We sent them questions prior to the report’s release, and Baidu’s International Communications Office sent back their reply, which we published here.

However, Baidu’s “fixes,” while correcting some critical problems, actually appear to have made some other things worse, and serious questions linger about why the company collects such highly invasive data about its users in the first place (questions it feels it cannot answer transparently).

Of course, the fact that Baidu Browser is made in China, where most of its users are, should raise alarm bells. China requires local companies like Baidu to retain and share user data without any meaningful due process, transparency, or public accountability.  Did Baidu build its browser to hoover up all of this personal information at the request of the Chinese authorities? Did it do so for commercial reasons? Or because of overzealous engineering choices?

In a way, it doesn’t matter. Whether poor design or surveillance by design, the effect is the same: users are at risk.

The report also illustrates a series of larger concerns related to the multiplication of applications, devices, and “things” that are connected to each other and the Internet, and which follow us around relentlessly.  Insofar as applications such as these leak personally identifiable information, they become attractive targets for state intelligence agencies and other threat actors.  We know this from the Snowden disclosures and from comments made by senior intelligence officials.  And you can bet that if the FVEYs see it this way, other lower-tier countries and threat actors eventually will too (if they are not already). Seemingly trivial bits of leaked data that connect back to users become a very convenient “hook” or “selector” for intelligence analysts. With that IMEI number or serial number in hand, an analyst can go back in time and make connections with other individuals, places, points of data, or events that can be seriously incriminating.  That may not matter to everyone who feels they have “nothing to hide” (although even then people should still worry about crime, identity theft, etc.), but it can affect high-risk users in life-threatening ways.
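To illustrate how a stable identifier works as a “selector,” here is a small Python sketch with entirely hypothetical log records; the leaked IMEI is the join key that links a device to places and queries across otherwise unrelated data sets:

```python
# Sketch: how a stable identifier like an IMEI works as a "selector".
# Hypothetical log records from two different interception points; the
# leaked IMEI is the join key that ties a device to locations and queries.

leaked_wifi_logs = [
    {"imei": "867530901234566", "network": "hotel_lobby", "ts": "2016-02-01"},
    {"imei": "355402091234567", "network": "airport",     "ts": "2016-02-02"},
]
leaked_search_logs = [
    {"imei": "867530901234566", "query": "visa application help"},
]

def link_by_selector(selector: str) -> list[dict]:
    """Collect every record, from any source, tied to one identifier."""
    return [r for r in leaked_wifi_logs + leaked_search_logs
            if r["imei"] == selector]

profile = link_by_selector("867530901234566")
# Two unrelated leaks now describe one person: where they were, and what
# they searched for.
print(profile)
```

Each leak on its own looks trivial; joined on the identifier, they compose a profile, which is exactly why analysts prize stable selectors.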

All of this research underscores a pretty scary scenario we’re heading into, illustrated by one of the most remarkable aspects of the findings.  We discovered that the software development kit at the heart of the Baidu Browser issues is repurposed and employed in thousands of other applications developed by Baidu and third parties.  Many of those applications are available on the Google Play Store outside of China, some have been installed hundreds of millions of times, and all of them contain the same flaws and send the same detailed information back to Baidu servers.  Yes, hundreds of millions of potentially affected users.

That means the collateral damage of the problems we identify goes well beyond Baidu Browser, and beyond China.  This finding is another reminder that flaws in small but important chunks of code can ripple far and wide through the ecosystem of interconnected applications and devices (e.g., the Heartbleed OpenSSL case).

Read the full report here: https://citizenlab.org/2016/02/privacy-security-issues-baidu-browser/

Fitness Tracker Applications — Leaky, Insecure, and a Sign of the Times

Last week, the Citizen Lab, in collaboration with Open Effect, released a new report, “Every Step You Fake: A comparative analysis of fitness tracker privacy and security.” The report contains primarily the background, overview, methods, and technical findings.  A subsequent report will include the policy and legal analysis that the team is presently completing.  Open Effect is a non-profit organization led by Citizen Lab research fellow Andrew Hilts; I presently serve on its board.  We work together on a variety of projects in the area of privacy and security, and we have more joint reports in the pipeline beyond the work on fitness trackers. (Open Effect and Citizen Lab also worked together on the Access My Info project.)

The “fitness tracker” topic may seem to be a bit of an outlier for us at the Citizen Lab, but lately we have become more and more interested in the privacy and security of mobile applications. Part of it has to do with the refinement of reverse engineering and other technical analysis methods that inform several Citizen Lab projects.  A much broader concern of ours is the privacy and security of the growing number of devices and applications that surround us in the so-called “Internet of Things” ecosystem.  Obviously, the implications for consumers of these devices and applications are important from a privacy and security point of view.  But personally speaking, I find it very compelling to try to see how security holes, vulnerabilities, and other unintentional flaws could be exploited by government threat actors, putting users at risk.  Having spent considerable time studying the Snowden disclosures, I have been struck by how seemingly trivial leaks of users’ data end up being routinely leveraged by SIGINT agencies.  A recent talk by the chief of the NSA’s TAO underscored this point well.  We leave a trail of digital droppings wherever we go, which in and of themselves may seem unimportant but which, when collated and analyzed together, can reveal a lot.

One of the other interesting components of this report was the responsible notification process we undertook, which is explained in the report. We notified the fitness tracker vendors whose products had security and privacy problems, and only a few of them got back to us, at least until journalists reached out to them.  Media strategy is important to creating positive outcomes from research, and this case illustrates that well.  (We gave an exclusive to CBC on the fitness tracker report for this reason.) For example, although Garmin did not respond to our initial responsible disclosure, they did after the report came out. The updated version of their application seems to suggest they have implemented some basic security protocols that were lacking (ht Ryan Budish), which is a positive outcome of the research.