WeChat: “One App, Two Systems”

The days when we interacted with the Internet as an undifferentiated network are long gone. The reality today is that what we communicate online is mediated by companies that own and operate the Internet services we use.  Social media in particular have become, for an increasing number of people, their windows on reality.  Whether, and in what ways, those windows might be distorted — by corporate practices or government directives — is thus a matter of significant public importance, and one that is not always easy to discern with the naked eye.

Take the case of WeChat — the most popular chat application in China, and the fourth largest in the world with 806 million monthly active users.  WeChat is more than just an instant messaging application. It is more like a lifestyle platform.  WeChat subscribers use the app not only to send text, voice, and video but to play games, make mobile payments, hail taxis, and more.

As with all other Internet services operating in China, however, WeChat must comply with extensive government regulations that require companies to police their networks and users, and share user data with security agencies upon request.  In numerous recent case-study reports, Citizen Lab research has found that many China-based applications follow these regulations by building hidden keyword censorship and surveillance into their applications.  WeChat is no exception, although with a twist.

Today, we are releasing a new report, entitled “One App, Two Systems: How WeChat uses one censorship policy in China and another internationally.”  For this report, we undertook several controlled experiments using combinations of China, Canada, and U.S. registered phone numbers and accounts to test for Internet censorship on WeChat’s platform.  What we found was quite surprising.

It turns out that there is substantial censorship on WeChat, but split along several dimensions.  There is keyword filtering for users registered with a mainland China phone number but not for those registering with an international number.  However, we also found that once a user has registered with a mainland China phone number, the censorship follows them around — even if they switch to an international phone number, or work, travel, or study abroad.  To give some context, there are roughly 50 million overseas Chinese people working and living abroad.  China’s “One App, Two Systems” keeps them under the control of China’s censorship regime no matter where they go. This extra-territorial application of information controls is unique, and certainly a disturbing precedent to set.

We also found censorship worked differently on the one-on-one versus the “group” chat systems.  The latter is a WeChat feature that allows chat groups of up to 500 users.  Our tests found censorship on the group chat system was more extensive, possibly motivated by the desire to restrict speech that might mobilize large groups of people into some kind of activism.  There is also censorship of WeChat’s web browser — but, again, mostly for China-registered users.

Finally, and most troubling, we found that WeChat no longer gives a notice to users about the blocking of chat messages.  In the past, users received a warning saying they couldn’t post a message because it “contains restricted words.” Now if you send a banned keyword, it simply doesn’t appear on the recipient’s screen. It’s like it never happened at all.  This type of “silent” censorship is highly unlikely to be noticed by either communicating party unless one of them thinks to double check (or researchers like us scrutinize it closely).
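This kind of silent filtering can only be confirmed by comparing transcripts from both ends of a conversation, which is essentially what our controlled experiments did. The sketch below illustrates that comparison logic; the function name and message logs are hypothetical placeholders, not our actual test harness:

```python
def find_silently_dropped(sent, received):
    """Return messages that the sender transmitted but the recipient
    never saw.  Silent censorship produces no error on the sender's
    side, so a transcript diff is the only reliable signal."""
    received_set = set(received)
    return [msg for msg in sent if msg not in received_set]

# Sender's log vs. recipient's log for the same conversation
sent = ["hello", "<sensitive keyword>", "how are you?"]
received = ["hello", "how are you?"]
print(find_silently_dropped(sent, received))  # ['<sensitive keyword>']
```

In practice each message would also carry account metadata (e.g., the registration country of sender and recipient) so that results can be split along the dimensions described above.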

By removing notice of censorship, WeChat sinks deeper into a dark hole of unaccountability to its users.

Research of this sort is essential because it helps pull back the curtain of obscurity that, unfortunately, pervades so much of our digital experiences.  As social media companies increasingly shape and control what users communicate — shape our realities — they affect our ability to exercise our rights to seek and impart information — to exercise our human rights.

China may offer the most extreme examples, as our series of reports on China-based applications has shown, but those examples are important to study as harbingers of a possible future.  To wit, as our report goes to publication, Facebook is reportedly developing a special censorship system to comply with China’s regulations, one that would “suppress posts from appearing in users’ news feeds.”  Along with WeChat’s “One App, Two Systems” model, the services these two social media giants are offering go a long way toward cementing a bifurcated, territorialized, and opaque Internet.

Read the full report here: https://citizenlab.org/2016/11/wechat-china-censorship-one-app-two-systems

What to do about “dual use” digital technologies?

(The following is my written testimony to the Senate Standing Committee on Human Rights – Canada. The hearing will take place November 30, 2016 at 11:30 AM EST and will be video webcast here.)*

Background

For over a decade, the Citizen Lab at the Munk School of Global Affairs, University of Toronto has researched and documented information controls that impact the openness and security of the Internet and threaten human rights. Our mission is to produce evidence-based research on cyber security issues that are associated with human rights concerns. We study how governments and the private sector censor the Internet, social media, or mobile applications.  We have done extensive reporting on targeted digital espionage on civil society.  We have produced detailed reports on the companies that sell sophisticated spyware, network monitoring, or other tools and document their abuse potential to raise corporate social responsibility concerns.  And we have undertaken extensive technical analysis of popular applications for hidden privacy and security risks. Our goal is to inform the public while meeting high standards of rigor through academic peer review.

Citizen Lab Research into Dual-Use Technologies

One area we are particularly concerned with is the development, sale and operation of so-called “dual-use” technologies that provide capabilities to surveil users or to censor online information at the country network level. These technologies are referred to as “dual-use” because, depending on how they are deployed, they may serve a legitimate and socially beneficial purpose, or, equally well, a purpose that undermines human rights.   

Our research on dual-use technologies has fallen into two categories — those that involve network traffic management, including deep packet inspection and content filtering, and those that involve technologies used for device intrusion for more targeted monitoring.  

The first category of our research concerns certain deep packet inspection (DPI) and Internet filtering technologies that private companies can use for traffic management, but which can also be used by Internet service providers (ISPs) to prevent entire populations from accessing politically sensitive information online and/or be used for mass surveillance. This category of research uses a combination of network measurement methods, technical interrogation tests, and other “fingerprinting” techniques to identify the presence on national networks of such technologies capable of surveillance and filtering, and, where possible, the company supplying the technology. In conducting such research, questions frequently arise regarding the corporate social responsibility practices of the companies developing and selling this technology, as several of our reports in this area have identified equipment and installations sold by companies to regimes with dubious human rights track records. Our research has spotlighted several companies — Blue Coat, Websense, Fortinet, and Netsweeper — that provide filtering and deep packet inspection systems to such rights-abusing countries.  Since Netsweeper is a Canadian headquartered company and has featured repeatedly in our research on this topic, I will provide more details about our findings with respect to them.
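To illustrate the “fingerprinting” step, the sketch below matches a suspected blockpage against vendor-specific patterns. The signature strings here are invented placeholders for illustration; real fingerprints are derived from observed blockpages, redirect URLs, and injected responses:

```python
import re

# Hypothetical vendor signatures for illustration only; real ones are
# built from observed blockpages, redirect paths, and response headers.
SIGNATURES = {
    "VendorA": re.compile(r"/filter/deny\.php", re.I),
    "VendorB": re.compile(r"access to this site is blocked", re.I),
}

def fingerprint_blockpage(response_text):
    """Return the list of vendors whose signature matches a suspected
    blockpage (HTTP headers plus body)."""
    return [vendor for vendor, pattern in SIGNATURES.items()
            if pattern.search(response_text)]

print(fingerprint_blockpage(
    "HTTP/1.1 302 Found\r\nLocation: http://203.0.113.1/filter/deny.php"))
```

The same matching logic, run over responses collected from vantage points inside a country, is what lets researchers attribute filtering behaviour to a specific product rather than just observing that a site is unreachable.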

Netsweeper, Inc. is a privately-owned technology company based in Waterloo, Ontario, Canada, whose primary offering is an Internet content filtering product and service. The company has customers ranging from educational institutions and corporations to national-level Internet Service Providers (ISPs) and telecommunications companies. Internet filtering is widely used on institutional networks, such as schools and libraries, and networks of private companies, to restrict access to a wide range of content. However, when such filtering systems are used to implement state-mandated Internet filtering at the national level, questions around human rights — specifically access to information and freedom of expression — are implicated.

Prior research by the OpenNet Initiative (2003–2013), an inter-university project of which the Citizen Lab was a founding partner, identified the existence of Netsweeper’s filtering technology on ISPs operating in the Middle East, including Qatar, the United Arab Emirates (UAE), Yemen, and Kuwait. Working on its own, Citizen Lab subsequently outlined evidence of Netsweeper’s products on the networks of Pakistan’s leading ISP, Pakistan Telecommunication Company Limited (PTCL), in a report published in 2013, and discussed their use to block the websites of independent media, and content on religion and human rights. In 2014, we reported that Netsweeper products were being used by three ISPs based in Somalia, and raised questions about the human rights implications of selling filtering technology in a failed state. In a report on information controls in Yemen in 2015, we examined the use of Netsweeper technology to filter critical political content, independent media websites, and all URLs belonging to the Israeli (.il) top-level domain in the context of an ongoing armed conflict in which the Houthi rebels had taken over the government and the country’s main ISPs.  Most recently, on September 21, 2016, we published a report identifying Netsweeper installations being used to block access to a range of political content on nine ISPs based in Bahrain, a country with a notoriously bad human rights record.

In some of these reports we included letters with questions that we sent to Netsweeper, and we offered to publish any response from the company in full. Aside from a defamation claim filed in January 2016, and subsequently discontinued in its entirety on April 25, 2016, Netsweeper has not responded to us.

The second category of research where we also apply the term “dual-use” concerns the use of malicious software — “malware” — billed as a tool for “lawful intercept,” e.g. zero-day exploits and remote access trojans that enable surveillance through a user’s device.  A “zero-day” — also known as an 0day — is an undisclosed computer software vulnerability.  Zero days can be precious commodities, and are traded and sold by black, grey, and legitimate market actors.  Law enforcement and intelligence agencies purchase and use zero days or other malware — typically packaged as part of a suite of “solutions” — to surreptitiously get inside a target’s device.  When used without proper safeguards, these tools (and the services that go along with them) can lead to significant human rights abuses.

Our work in this area typically begins with a “patient zero” — someone or some organization that has been targeted with a malware-laden email or link.  In the course of the last few years, we have documented numerous cases of human rights defenders and other civil society groups being targeted with advanced commercial spyware sold by companies like Italy-based Hacking Team, UK/Germany/Swiss-based Finfisher, and Israeli-based NSO Group.  Using network scanning techniques that employ digital fingerprinting for signatures belonging to the so-called “command and control” infrastructure used by this malware, we have also been able to map the proliferation of some of these systems to a large and growing global client base, many of which are governments that have notoriously bad records concerning human rights.
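The scanning step can be sketched as matching banners or certificate fields returned by Internet-wide scans against known command-and-control fingerprints. Everything below is illustrative; the fingerprint strings and scan data are invented for the sketch:

```python
def match_c2_servers(scan_results, fingerprints):
    """Map each scanned host to the C2 fingerprints its banner matches.

    scan_results: {ip: banner_or_cert_text} from an Internet-wide scan
    fingerprints: {family_name: distinctive_substring}
    """
    hits = {}
    for ip, banner in scan_results.items():
        for family, needle in fingerprints.items():
            if needle in banner:
                hits.setdefault(ip, []).append(family)
    return hits

scan = {
    "203.0.113.5": "HTTP/1.1 200 OK\r\nServer: Fake-C2-Server/2.1",
    "203.0.113.9": "HTTP/1.1 200 OK\r\nServer: nginx/1.10",
}
print(match_c2_servers(scan, {"SpywareFamilyX": "Fake-C2-Server"}))
```

Repeating such scans over time is what allows the proliferation of a spyware family to be mapped to a growing set of government clients.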

The data released by Citizen Lab from these projects has inspired legal and advocacy campaigns, formed much of the evidentiary basis for measures undertaken in multiple countries to control unregulated surveillance practices (e.g., 2013 modifications to the Wassenaar Arrangement), has inspired further disclosures and investigations regarding the use of spyware and filtering technologies, and has resulted in specific remediation in the form of software updates to entire consumer populations (e.g., patches to Apple’s OSX and iOS in the case of our “Million Dollar Dissident” report).

Nonetheless, our findings touch on only a small part of what is a very disturbing larger picture.  The market for dual-use technologies, particularly spyware, is growing rapidly. Government demand for these technologies may actually be increasing following the Snowden disclosures, which raised the bar on what is deemed de rigueur in digital surveillance, and ironically may have intensified competition around the sale of zero-day exploits, and methods for defeating increasingly pervasive end-to-end encryption and other defensive measures. For example, the U.K.’s proposed Investigatory Powers Bill, at the time of writing awaiting Royal Assent before becoming law, will authorize U.K. agencies to hack into targeted devices as well as “bulk networks” — meaning all devices associated with a particular geographic area.

Although Citizen Lab research has not to date identified a Canada-based vendor of commercial spyware selling to a rights-abusing country or being used to target human rights defenders, we know that companies selling this type of technology exist.  Furthermore, the growth of the spyware market, coupled with the other circumstances outlined above, suggests it is highly likely that a Canadian vendor will at some point in the not-too-distant future face the choice of whether or not to sell its technology and services to a rights-abusing country — if it has not already.  Indeed, it is worth pointing out that parts of a very controversial mass surveillance system implemented in Turkey by the US-based company Procera were reportedly outsourced to a Canadian software development company, Northforge, after engineers at Procera threatened to resign for fear of assisting President Erdogan’s draconian policies.

What is To Be Done?

Rectifying the abuse of dual-use technologies is not a simple matter, but it is one where the Government of Canada can play a constructive role. Effective solutions that encourage respect for human rights will depend on two key components: transparency of the market, and creation of an incentive structure to which private sector actors will respond.  

Transparency

The primary impediment to any progress regarding dual-use technologies of concern is the lack of transparency in the market. It is impossible for non-governmental entities to accurately gauge the scale and capabilities of the dual-use technology sector. While research such as that of the Citizen Lab and Privacy International has drawn attention to the problem and highlighted certain notorious companies, sources of research data and our capacity to undertake research are limited.  Meanwhile, new actors and technologies are regularly emerging or undergoing transformation as they change ownership, headquarters, or name. Many dual-use technology companies are not transparent about the full range of products and services they sell or their clients, and the sector as a whole is shrouded in secrecy.

With their proven potential for abuse, technologies that enable countrywide Internet filtering and digital surveillance merit increased scrutiny by the government and the public. It is telling that in many countries, government officials themselves are unable to obtain a complete picture of the technologies designed, manufactured, and serviced within their borders that could be used to suppress legitimate dissent or undermine other internationally-recognized human rights. Irrespective of whether the government chooses to regulate the sale of particular technologies, some form of mandated transparency in the market for filtering and surveillance tools is essential to addressing this information gap and informing good policy.

Mandated transparency could take a number of forms, but at a minimum will require “lawful intercept,” Internet filtering, and, possibly, DPI providers that offer their products and services in the marketplace to self-identify and report as a matter of public record. An analogous model may be found in the work of the United Nations Working Group on Mercenaries, which has drafted a proposed convention regarding regulation of private military and security companies (PMSCs). The convention envisions a general state registry of the PMSCs operating in a state’s jurisdiction, as part of a broader framework for oversight and accountability.

Transparency can emerge from research. It is noteworthy that the little we know about the abuse of dual-use technologies comes primarily from rigorous, evidence-based and interdisciplinary research of the sort Citizen Lab has done. As a complement to mandated transparency, the Government of Canada could encourage this type of mixed methods research into the dual-use technology market through research funding bodies like SSHRC and NSERC, and the Canada Research Chair program. It could also develop legislation specifically designed to provide safe harbor for security research undertaken in the public interest and incorporating responsible disclosure.

Incentivizing the Private Sector to Respect Human Rights

As the UN Guiding Principles on Business and Human Rights make clear, business enterprises have the responsibility to respect internationally-recognized human rights, in their own activities as well as activities linked to their operations, products or services. At present, however, there are few if any costs incurred by the companies that supply and service dual-use technologies when such technologies are used to violate human rights. Repeatedly we have seen that, when surveillance and filtering technologies are used against journalists, activists, and other peaceful actors, the companies involved treat the matter as “water off a duck’s back”: they assert that their products are provided for lawful purposes only, benefit society, and are beyond their control in the hands of their clients. They wait for the news cycle to pass. Many companies, particularly those that supply lawful intercept products, are further insulated by the secrecy surrounding intelligence and law enforcement work and the national security prerogatives of their clientele, most of whom lack oversight or public accountability themselves.

Yet it has become increasingly clear, as evidenced by Citizen Lab and other research, that while these technologies may be used to hunt criminals and terrorists or otherwise serve a legitimate security purpose, they are simultaneously deployed against regime critics, political opponents, and other non-violent actors with alarming frequency. Regimes that lack robust rule of law and due process while facing legitimation crises and domestic dissent simply do not distinguish among targets when leveraging the advanced technologies supplied by the private sector. It has come to light that private companies may even have detailed knowledge of attacks against civil society that are reliant on their products, as they participate in trouble-shooting delivery of malware and provide other forms of expertise to their clients. Companies, however, have managed to continue to grow and develop the sector without consequence by avoiding any form of engagement on the question of human rights.

Significant intervention is required to eliminate company expectations of immunity and prompt rights-based reform. In a forthcoming piece, my colleague Sarah McKune and I lay out several areas that we feel could help control the excesses of the commercial spyware market, by shifting the costs from the public to the spyware companies themselves, in order to generate changes in company risk-opportunity calculations, practices, and overall attitude. The drastic change in incentive structure necessary to curb the abuses of this industry will rely on a combination of (1) regulation and policy, and (2) access to remedy.

  (1) Regulation and policy

Export controls are a first step in the regulatory process. The Canadian government currently has in place export controls and regulations against the sale of certain types of technologies to certain foreign jurisdictions, including those relating to “IP network communications surveillance systems or equipment” and “intrusion software” (which correspond to a large degree to the Citizen Lab research outlined above). These two additions to the control lists were made in response to modifications made in 2013 to the Wassenaar Arrangement, of which Canada is a member. Canada has released statistics concerning 2015 export licenses, including those pertaining to intrusion software and IP network surveillance, which can be found here.  Although it is impossible to know which items in particular were granted licenses or what considerations were made in doing so, it is noteworthy that within the relevant category, 2202 license applications were granted, while only 2 were denied. Regardless, export controls by themselves are insufficient to address the human rights concerns associated with these items.

As various members of the Wassenaar Arrangement rolled out implementation of the 2013 controls at the national level, the challenges of relying on export controls to address the serious rights implications of dual-use technologies became evident. One key problem is designating the scope of the items to be controlled in an appropriate and predictable manner, avoiding both over- and under-inclusion. For example, with respect to items related to “intrusion software,” certain technologies anticipated to fall within the scope of the control are also used for legitimate security research. At the same time, the 2013 controls do not cover Internet filtering and other technologies with significant human rights implications. For example, companies that provide Internet traffic management under the term “Quality of Service” (QoS) are explicitly excluded from Wassenaar targeted items. Yet, while QoS technologies are certainly integral to the proper functioning of network traffic service delivery today, they can also be used to throttle traffic or certain protocols associated with specific applications. If used in contexts where the aim is to limit free expression, privacy, or access to information — as evidenced in a rising number of troubling country cases — then human rights considerations are certainly impacted.

Lastly, the Wassenaar Arrangement’s inclusion of the 2013 controls is now on uncertain ground: the United States has given notice that it intends to renegotiate the agreement following major criticisms put forward primarily by security researchers and the private sector. The U.S. decision to reopen negotiations on these Wassenaar controls will, in turn, almost certainly affect Canada’s obligations.

A second challenge lies in the export licensing process carried out at the national level. Even when a dual-use technology is subject to control, the licensing process must be properly calibrated to address the end users and end uses of concern from a human rights perspective. This accounting requires an ever-evolving assessment, combined with the political will to both curb access within a broad group of countries (some of which may be of strategic importance to Canada) and restrict the sales of domestic corporations. As we have witnessed, the post-2013 licensing processes surrounding spyware have left much to be desired: Italian authorities approved an initial grant of a “global authorization” to Hacking Team, which permitted it to export its spyware to destinations such as Kazakhstan; and the Israeli authorities gave approval to NSO Group to export sophisticated iOS zero-day exploits to the United Arab Emirates, where we discovered they were subsequently used against a peaceful dissident and other political targets.

For these and other reasons, export controls, while important, constitute only one means by which the Government of Canada can help constrain the abuse of dual-use technologies. In tailoring applicable export controls, Canada can certainly take a proactive stance on addressing the end users and end uses that pose human rights risks. At the same time, however, such efforts can be complemented by additional regulatory and policy measures. Measures worth exploring include:

  • Government procurement and export credit or assistance policies that require vendors of dual-use technologies to demonstrate company commitment to and record of human rights due diligence. Vendors that have engaged in fraudulent or illegal practices, or have supplied technology that has facilitated human rights abuses, should be ineligible for award of government contracts or support in any form.
  • Enhanced consumer protection laws and active efforts at consumer protection agencies to address the misuse of DPI, Internet filtering technology, and spyware against the public.
  • A regulatory framework for oversight and accountability specifically tailored to dual-use technologies. That proposed in the context of PMSCs, as noted above, offers a number of elements that could be considered for inclusion, such as enumerating prohibited activities; establishing requirements for training of personnel; assessing company compliance with domestic and international law; and investigating reports of violations.
  • Structured dialogue with companies and civil society regarding the establishment of industry self-regulation, which can be modeled on the International Code of Conduct for Private Security Service Providers and its multistakeholder association. Such a dialogue could include work on model contracts and best practices for “lawful intercept” and Internet filtering technology providers.

(2) Access to remedy

When dual-use technology companies provide products and services used to undermine human rights, or when they engage in practices that are fraudulent or illegal in relevant jurisdictions (e.g., practices that are violative of intellectual property, consumer protection, privacy, or computer crime laws), it is appropriate that those harmed by such activity may seek remedy against them. Canadian law could ensure that criminal or civil litigation is possible in such circumstances, including through the clear establishment of jurisdiction over actors that operate transnationally or may be state-linked. Exposure to liability for misconduct will be the primary motivating force behind any change in this sector.

The Government of Canada is a vocal supporter of Internet freedom and human rights, and is a member in all of the relevant international bodies in which such topics are discussed.

But the fact that Citizen Lab has documented at least seven countries whose national ISPs use or have used a Canadian company’s services to censor Internet content protected under internationally-recognized human rights agreements is an embarrassing black mark for all Canadians. While we have no evidence that a Canadian intrusion software, DPI, or IP monitoring vendor has sold its services to a rights-abusing country, that does not necessarily mean it has not happened, or will not happen in the future.  The Turkey-Procera case, outlined earlier, should certainly raise alarm bells.

By proactively addressing the regulation of dual-use technologies in the ways outlined above, the Government of Canada would align its actions with its words, and ensure that commercial decisions are not made without human rights concerns being addressed.

*The author gratefully acknowledges the input of Sarah McKune, Senior Legal Advisor, Citizen Lab, who assisted in the preparation and writing of this testimony, and John Scott Railton, Citizen Lab senior researcher, for comments and feedback.

Just Enough to Do the Job: Targeted Attacks on Tibetans

I am pleased to announce a new Citizen Lab report, entitled “It’s Parliamentary: KeyBoy and the targeting of the Tibetan Community.” The report is authored by the Citizen Lab’s Adam Hulcoop, Etienne Maynier, John Scott Railton, Masashi Crete-Nishihata, and Matt Brooks and can be found here: https://citizenlab.org/2016/11/parliament-keyboy/

In this report, the authors track a malware operation targeting members of the Tibetan Parliament in August and October 2016.  The operations involved highly targeted email lures with repurposed content and attachments that contained an updated version of a custom backdoor known as “KeyBoy.”

There are several noteworthy parts of this report:

First, this operation is another example of a threat actor using “just enough” technical sophistication to exploit a target.  A significant amount of resources go into a targeted espionage operation, from crafting of an exploit to its packaging and delivery to the intended target, to the command and control infrastructure, and more.  From the perspective of an operator, why risk burning some of these precious resources when something less sophisticated will do? Throughout the many years we have been studying targeted digital attacks on the Tibetan community, we have seen operators using the same old patched exploits because … well, they work.

Part of the reason these attacks work is that the communities in question typically do not have the resources or capabilities to protect their networks properly.  While the Tibetan diaspora has done a remarkable job educating their community about how to recognize a suspicious email or attachment and not open it (their Detach from Attachments campaign being one example), many still rely on unpatched operating systems and lack adequate digital security controls.  As Citizen Lab’s Adam Hulcoop remarked, “We found it striking that the operators made only the bare minimum technical changes to avoid antivirus detection, while sticking with ‘old day’ exploits that would not work on a patched and updated system.”

What goes for Tibetans holds true across the entire civil society space: NGOs are typically small, overstretched organizations; most have few resources to dedicate to doing digital security well.  As a consequence, operators of targeted espionage campaigns can hold their big weapons in reserve, put more of their effort into crafting enticing messages — into the social engineering part of the equation — while re-purposing older exploits like KeyBoy.  As Citizen Lab senior researcher John Scott Railton notes in a recent article, “Citizen Lab research has consistently found that although the overall technical sophistication of attacks is typically low, their social engineering sophistication is much higher.”  The reason is that civil society is “chronically under-resourced, often relying on unmanaged networks and endpoints, combined with extensive use of popular online platforms….[providing] a target-rich environment for attackers.”

The second noteworthy part of the report concerns the precision of the social engineering on the part of the operators. The attacks were remarkably well timed to maximize their effect on victims.  Just 15 hours after members of the Tibetan parliament received an email about an upcoming conference, they received another email with the same subject and attachment, this time crafted to exploit a vulnerability in Microsoft Office and deliver KeyBoy.  This level of targeting and re-use of a legitimate document sent only hours before shows how closely the Tibetans are watched by their adversaries, and how much effort the operators of such attacks put into the social engineering part of the targeted attack.  With such persistence and craftiness on the part of threat operators, it is no wonder civil society groups are facing an epidemic of this type of campaign.

Finally, the report demonstrates the value of trusted partnerships with targeted communities.  The Citizen Lab has worked with Tibetan communities for nearly a decade, and during that time we have learned a great deal from each other.  That they are willing to share samples of attacks like these with our researchers shows not only their determination to better protect themselves, but a recognition of the value of careful evidence-based research for their community.  By publishing this report, we hope that civil society, human rights defenders, and their sponsors and supporters can better understand the threat environment, and take steps to protect themselves.  

To that end, we are publishing extensive details and indicators of compromise in several appendices to the report, and hope other researchers will pick up where we left off.

Read the report here: https://citizenlab.org/2016/11/parliament-keyboy/

What Lies Beneath China’s Live-Streaming Apps?

Today, the Citizen Lab is releasing a new report, entitled: “Harmonized Histories? A year of fragmented censorship across Chinese live streaming platforms.”  The report is part of our NetAlert series, and can be found here.

Live-streaming media apps are extraordinarily popular in mainland China, used by millions.  Similar in functionality to the US-based, Twitter-owned streaming app Periscope (which is banned in China), China-based apps like YY, 9158, and Sina Show have become a major Internet craze.  Users of these apps share everything from karaoke and live poker matches to pop culture commentary and voyeuristic peeks into their private lives.  For example, Zhou Xiaohu, a 30-year-old construction worker from Inner Mongolia, films himself eating dinner and watching TV, while another live-streamer earns thousands of yuan taking viewers on tours of Japan’s red-light districts.

The apps are also big business opportunities, for both users and the companies that operate them.  Popular streamers receive virtual gifts from their fans, who can number in the hundreds of thousands for some of the most widely viewed. The streamers can exchange these virtual gifts for cash.  Some of them have become millionaires as a result. The platforms themselves are also highly lucrative, attracting venture capital and advertisement revenues.

Chinese authorities have taken notice of the exploding live-streaming universe, which is not surprising given their tight controls on free expression.  Occasionally streams veer into taboo topics, such as politics or pornography, which has resulted in greater scrutiny, fines, takedowns, and increased censorship.

To better understand how censorship on these platforms takes place, our researchers downloaded three of the most popular applications (YY, 9158, and Sina Show) and systematically reverse engineered them.  Doing so allowed us to extract the banned keyword lists hidden in the clients as they are regularly updated.  Between February 2015 and October 2016, we collected 19,464 unique keywords that triggered censorship in the chats associated with each application, which we then translated, analyzed, and categorized.
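The collection step can be sketched in a few lines. Everything below (the snapshot contents and the list format) is a hypothetical placeholder for illustration, since each client stores and updates its keyword list differently:

```python
# Hypothetical sketch: merging keyword-list snapshots pulled from
# successive client updates into one deduplicated set. The snapshot
# contents are illustrative placeholders, not actual extracted keywords.

def merge_keyword_snapshots(snapshots):
    """Union keyword lists from many client versions, dropping blanks and duplicates."""
    unique = set()
    for snapshot in snapshots:
        for keyword in snapshot:
            keyword = keyword.strip()
            if keyword:
                unique.add(keyword)
    return unique

snapshots = [
    ["keyword_a", "keyword_b"],      # e.g. one client, Feb 2015 update
    ["keyword_b", "keyword_c", ""],  # same client, a later update
    ["keyword_a", "keyword_d"],      # a different client
]
merged = merge_keyword_snapshots(snapshots)
print(len(merged))  # 4 unique keywords across snapshots
```

Repeating this over every update of every client for twenty months is what yields a cumulative count like the 19,464 unique keywords reported above.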

What we found is interesting for several reasons, and runs counter to claims put forth in a widely read study of China’s Internet censorship system authored by Gary King and colleagues and published in the American Political Science Review.  In that study, King and his colleagues conclude that China’s censors are not concerned with “posts with negative, even vitriolic, criticism of the state, its leaders, and its policies” and instead focus predominantly on “curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content.”  Their analysis gives the impression of a centralized and monolithic censorship system to which all Internet providers and companies strictly conform.

We found, on the other hand, that there is significant variation in blocking across the platforms.  This variation means that while the Chinese authorities may set general expectations of taboo or controversial topics to be avoided, what, exactly, to filter is left to the discretion of the companies themselves to implement.

We also found, contrary to King et al., that content they suggested was tolerated was in fact routinely censored by the live-streaming companies, albeit inconsistently across the platforms.  We also found all sorts of keywords targeted for filtering that had nothing to do with political directives, including live-streaming applications censoring posts related to their business competitors.

In other words, our research shows that the social media ecosystem in China — though definitely restricted for users — is more decentralized, variable, and chaotic than King and his colleagues claim. It confirms the role of intermediary liability in China that Rebecca MacKinnon has described as “self-discipline,” whereby companies are expected to police themselves and their users to ensure a “harmonious and healthy Internet.”  Ironically, that self-discipline often results in entirely different implementations of censorship on individual platforms, and a less than “harmonious” Internet experience as a result.

Our reverse engineering also discovered that YY — the most popular of the live-streaming apps, with over 844 million registered users — undertakes surveillance of users’ chats. When a censored keyword is entered by a user, a message is sent back to YY’s servers that includes the username of the sender, the username of the recipient, the keyword that triggered censorship, and the entire triggering message. Nearly a billion unwitting users’ chats are subject to hidden keyword surveillance!  Recall that in China companies are required to share user information with security agencies upon request, and Chinese citizens have been arrested based entirely on their online actions.  Recently, for example, one user posted an image of a police report of a person under investigation for downloading a VPN on his or her mobile phone.
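The behavior we observed can be illustrated with a minimal sketch; the function, keywords, and field names below are hypothetical stand-ins for what the client actually sends, though the report fields mirror those we saw in YY’s traffic:

```python
# Minimal sketch of keyword-triggered censorship plus surveillance:
# when a message matches a banned keyword, the client both blocks it
# and builds a report for the server. All names are hypothetical.

BANNED = {"banned_topic", "rival_platform"}

def check_message(sender, recipient, text):
    """Return (allowed, report); report is None when nothing matched."""
    hits = [kw for kw in BANNED if kw in text]
    if not hits:
        return True, None
    report = {
        "sender": sender,          # username of who sent the message
        "recipient": recipient,    # username of who would receive it
        "keyword": hits[0],        # the keyword that triggered censorship
        "message": text,           # the entire triggering message
    }
    return False, report

allowed, report = check_message("alice", "bob", "let's discuss banned_topic")
print(allowed, report["keyword"])  # False banned_topic
```

The point of the sketch is that the blocked message does not simply disappear: a copy, with both usernames attached, travels to the operator’s servers.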

On a more technical level, our research shows the value of careful reverse engineering for revealing information controls hidden from the view of the typical user.  The keyword lists we extracted and are publishing reveal exactly what content triggers censorship and surveillance, something that is known only to the decision makers within the companies themselves.  We see this type of research as critical to informing users of the ecosystem within which they communicate.

Sometimes what we find runs counter to conventional wisdom.  You don’t know what’s being censored if you can’t see the list of banned keywords. Opening these applications up allows us to see them from the inside out, in an unbiased way; other, more impressionistic scans can only infer what is on the lists.

What an “MRI of the Internet” Can Reveal: Netsweeper in Bahrain

I am pleased to announce a new Citizen Lab report: “Tender Confirmed, Rights At Risk: Verifying Netsweeper in Bahrain.”  The full report can be found here: https://citizenlab.org/2016/09/tender-confirmed-rights-risk-verifying-netsweeper-bahrain

Internet censorship is a major and growing human rights issue today. Access to content is restricted for users on social media, like Facebook, on mobile applications, and on search engines.  The most egregious form of censorship, however, is that which occurs at a national level for entire populations.  This type of censorship has been spreading for many years, and now has become normalized across numerous countries.

One of the Citizen Lab’s longest standing forms of research is the meticulous documentation of Internet censorship.  We were one of the founding partners of the OpenNet Initiative, which at one time documented Internet filtering and surveillance in more than 70 countries on an annual basis. We continue this research in the form of country case studies or analyses of information controls around specific events, like a civil war.

At the core of this research is a mixture of technical interrogation and network measurement methods, including in-country testing, remote scans of national networks, database queries, and large-area scans of the entire Internet.  One of the tools we use in this research is the scanner ZMap, which we run on high-speed computers to scan the entire public IPv4 address space in a matter of minutes.  Think of this technique as an MRI of the Internet.

A byproduct of these scans is the ability to identify equipment that is used to undertake Internet censorship and surveillance. Certain filtering systems have the equivalent of digital signatures, which we use when scanning the Internet. Searching for these signatures allows us to locate installations around the world. Doing so is useful in and of itself to shed light on what’s going on beneath the surface of the Internet. But it is also useful for raising awareness about the companies that are complicit in Internet censorship practices.
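Conceptually, the matching step looks something like the following sketch; the product names and signature strings are invented placeholders, not any vendor’s real fingerprint:

```python
# Illustrative sketch of signature-based device identification: match
# response banners collected by an Internet-wide scan against known
# fingerprints. The signatures below are placeholders for illustration,
# not real product fingerprints.

SIGNATURES = {
    "ExampleFilter": "Server: ExampleFilter/",
    "OtherProduct": "X-Blocked-By: OtherProduct",
}

def identify(banner):
    """Return the product names whose fingerprint appears in a banner."""
    return [name for name, sig in SIGNATURES.items() if sig in banner]

# A scan of port 80 yields millions of banners like this one:
banner = "HTTP/1.1 403 Forbidden\r\nServer: ExampleFilter/5.0\r\n\r\n"
print(identify(banner))  # ['ExampleFilter']
```

Run over every responding host on the Internet, a handful of such string matches is enough to map where a given filtering product is deployed, and on which ISPs.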

One of the companies that we have identified in this way is Netsweeper Inc., a Canadian company based in Waterloo, Ontario. We have identified Netsweeper installations being used to filter at the national level in Pakistan, Somalia, and Yemen, among other countries.  Our latest report, published today, locates live Netsweeper installations on nine ISPs in the Kingdom of Bahrain.

These findings are significant for several reasons.  Bahrain has one of the world’s worst records on human rights, particularly press and Internet freedoms.  For many years, Bahrain has restricted access to Internet content having to do with independent media, websites critical of the Kingdom, and content related to the Shia faith, which is heavily persecuted in Bahrain.

In January 2016, Bahrain issued a tender for bidding on a national-level Internet filtering system. Our findings are significant because we can confirm the presence of Netsweeper installations on Bahraini ISPs following the bid.

These findings are also noteworthy because Netsweeper filed, and then discontinued, a $3.5 million defamation suit against me and the University of Toronto following our prior report on Netsweeper in Yemen.  Our report published today is the first since the defamation suit was discontinued. As we have done with prior reports, we sent Netsweeper a letter, which can be found here, in which we lay out our findings, ask Netsweeper questions about its due diligence and corporate social responsibility policies, and offer to publish its response in full alongside our report. As of today, Netsweeper has not replied to that letter.

Lastly, the case is significant because Netsweeper is a Canadian company, and although the provision of Internet filtering services to a country like Bahrain is not in violation of any Canadian law per se, its technology is being used to suppress content deemed legitimate expression under international human rights law, which Canada explicitly supports.  All the more troubling, then, is the fact that Netsweeper has benefited, and will benefit in the future, from tangible support provided by both the Canadian and the Ontario governments at trade shows held in the Gulf region.  Canada’s Trade Commissioner says the government’s involvement at these trade shows includes assistance with “business-to-business meetings” and “networking events” as well as provision of a “pavilion/exhibit” — all of which is “offered free of charge to Canadian companies and organizations.”  While we have no evidence Canada went so far as to facilitate Netsweeper’s specific bid on Bahrain’s tender, the two governments certainly used Canadian taxpayer dollars to represent Netsweeper to interested clients in the region.

Should the government of Canada be promoting a company whose software is used to violate human rights and which offers services in direct contradiction to our stated foreign policy goals on cyberspace?   Perhaps a more harmonized approach would be to require companies like Netsweeper to have some explicit corporate social responsibility process in place.  Export controls could be established that restrict the sale of technology and services to countries that will use their product to infringe internationally-recognized human rights.  Taking these steps would help better synchronize Canada’s economic and human rights policies while also bringing the world of Internet filtering in line with widely recognized principles on how businesses should respect human rights.

Disarming a Cyber Mercenary, Patching Apple Zero Days

I am pleased to announce a new Citizen Lab report: “The Million Dollar Dissident: NSO Group’s iPhone Zero-Days used against a UAE Human Rights Defender,” authored by senior researchers Bill Marczak and John Scott Railton.

If you are one of the hundreds of millions of people who own an iPhone, today you will receive a critical security patch.  While updating your software, you should pause for a moment to thank human rights activist Ahmed Mansoor.

Mansoor is a citizen of the United Arab Emirates, and because he’s a human rights activist in an autocratic country his government views him as a menace.  For security researchers at the Citizen Lab, on the other hand, Mansoor’s unfortunate experiences are the gift that won’t stop giving.

Mansoor is an outspoken defender of human rights, civil liberties, and free expression in a country that routinely flouts them all. While he has been praised internationally for his efforts — in 2015, Mansoor was given the prestigious Martin Ennals Award for Human Rights Defenders — his government has responded with imprisonment, beatings, harassment, a travel ban…and persistent attempts to surreptitiously spy on his digital communications.

For example, in 2011 Mansoor was sent a PDF attachment loaded with sophisticated spyware manufactured by the British-German company Gamma Group.  Fortunately, he decided not to open it.

In 2012, he was targeted with more spyware, this time manufactured by an Italian company, Hacking Team.  His decision to share that sample with Citizen Lab researchers led to one of our first detailed reports on the commercial spyware trade.

And so earlier this month, when Mansoor received two unsolicited SMS messages on his iPhone 6 containing links about “secrets” concerning detainees in UAE prisons, he thought twice about clicking on them.  Instead, he forwarded them to us for analysis. It was a wise move. 

Citizen Lab researchers, working in collaboration with the security company Lookout, found that lurking behind those SMS messages was a series of “zero day” exploits (which we call “The Trident”) designed to take advantage of unpatched vulnerabilities in Mansoor’s iPhone. 

To say these exploits are rare is truly an understatement.  Apple is widely renowned for its security — just ask the FBI.  Exploits of its operating system sell for hundreds of thousands of dollars each.  One company that resells zero days paid $1 million for a single iOS exploit, while the FBI reportedly paid at least $1.3 million for the exploit used to get inside the San Bernardino device.  The attack on Mansoor employed not one but three separate zero day exploits.

Had he followed those links, Mansoor’s iPhone would have been turned into a sophisticated bugging device controlled by UAE security agencies. They would have been able to turn on his iPhone’s camera and microphone to record Mansoor and anything nearby, without his knowledge. They would have been able to log his emails and calls — even those that are encrypted end-to-end. And, of course, they would have been able to track his precise whereabouts.

Through careful, detailed network analysis, our team (led by Bill Marczak and John Scott-Railton) was able to positively link the infrastructure behind these exploits to an obscure company called “NSO Group”.

Don’t look for them online; NSO Group doesn’t have a website. It is an Israel-based “cyber war” company owned by an American private equity firm, Francisco Partners Management, and founded by alumni of the infamous Israeli signals intelligence agency, Unit 8200.  That unit is among the world’s most capable state cyber espionage agencies, and is allegedly responsible (along with the U.S. NSA) for the so-called “Stuxnet” cyber attack on Iran’s nuclear enrichment facilities.

In short: we uncovered an operation seemingly undertaken by the United Arab Emirates using the services and technologies of an Israeli “cyber war” company who used precious and very expensive zero day iOS exploits to get inside an internationally-renowned human rights defender’s iPhone.

That’s right: Not a terrorist. Not ISIL. A human rights defender.

(An important aside: we also were able to identify what we suspect are at least two other NSO Group-related targeted digital attack campaigns: one involving an investigative journalist in Mexico, and the other a tweet related to an opposition politician in Kenya).

Once we realized what we had uncovered, Citizen Lab and Lookout contacted Apple with a responsible disclosure concerning the zero days.   

Our full report is here.

Apple responded immediately, and we are releasing our report to coincide with their public release of the iOS 9.3.5 patch.

That a country would expend millions of dollars, and contract with one of the world’s most sophisticated cyber warfare units, to get inside the device of a single human rights defender is a shocking illustration of the serious nature of the problems affecting civil society in cyberspace.  This report should serve as a wake-up call that the silent epidemic of targeted digital attacks against civil society is a very real and escalating crisis of democracy and human rights.

What is to be done?  Clearly there is a major continuing problem with autocratic regimes abusing advanced interception technology to target largely defenceless civil society organizations and human rights defenders.  The one solution that has been proposed by some — export controls on items related to “intrusion software” — appears to have had no effect in curbing abuses. In fact, Israel has export controls in place ostensibly to prevent this very sort of abuse. But something obviously slipped through the cracks…

Maybe it is time to explore a different strategy — one that holds companies directly responsible for the abuse of their technologies.  It is interesting in this respect that NSO Group disguised some of its infrastructure as government, business, and civil society websites, including those of the International Committee of the Red Cross, Federal Express, YouTube, and Google Play.

Isn’t that fraud against the user? Or a trademark violation? If not considered so now, maybe it should be.

Meanwhile, please update your iPhone’s operating system, and while you’re doing it, spare a thought for Ahmed Mansoor.

All iPhone owners should update to the latest version of iOS immediately. If you’re unsure what version you’re running, you can check Settings > General > About > Version.

Communicating Privacy and Security Research: A Tough Nut to Crack

Today at the Citizen Lab we released a new report on (yet more) privacy and security issues in UC Browser, accompanied by a new cartoon series, called Net Alert.

Our new UC Browser report, entitled “A Tough Nut to Crack” and authored by Jeffrey Knockel, Adam Senft, and me, is our second close-up examination of UC Browser, by some estimates the second most popular mobile browser application in the world.  In our first analysis, undertaken in 2015, we discovered several major privacy and security vulnerabilities that would seriously expose users of UC Browser to surveillance and other privacy violations.  We were tipped off to look at UC Browser while going through some of the Edward Snowden disclosures, in which the NSA, CSE, and other SIGINT partners patted themselves on the back for exploiting data leaks and faulty update security in UC Browser.  I wrote an op-ed at the time discussing the security tradeoffs involved in keeping knowledge of software flaws like this quiet, and why we need a broader public discussion about software vulnerability disclosures.

We decided to take a second look at UC Browser, this time led by Jeffrey Knockel.  By reverse engineering several versions of UC Browser, Jeffrey was able to determine the likely version referenced in the Snowden disclosure slides, the one that led the NSA to develop an XKeyscore plugin for UC Browser exploitation.  We also found that all versions of the browser we examined — Windows and Android — transmit personal user data with easily decryptable encryption, and that the Windows version does not properly secure its software update process, leaving it vulnerable to arbitrary code execution.  We disclosed our findings to Alibaba, the parent company, and report back on its responses and fixes, such as they are, in an appendix to the report.
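To see why “easily decryptable encryption” offers no real protection, consider a toy scheme whose key is hardcoded in the client binary. The XOR cipher below is an illustrative assumption, not the algorithm UC Browser actually used:

```python
# Why hardcoded-key "encryption" is no protection: anyone who reverse
# engineers the shipped binary recovers the key and can decrypt every
# user's traffic. XOR with a fixed key stands in for whatever scheme
# the browser actually used (an assumption for illustration).

KEY = b"hardcoded"  # recoverable from the client binary itself

def xor_crypt(data, key=KEY):
    """Symmetric XOR: the same call encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"imei=0123456789&query=private"  # hypothetical leaked fields
ciphertext = xor_crypt(plaintext)
assert ciphertext != plaintext              # obscured on the wire...
assert xor_crypt(ciphertext) == plaintext   # ...but trivially reversed
```

Any passive observer who has extracted the key, whether a security researcher or a signals intelligence agency, can read the “encrypted” data as easily as plaintext.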

Communicating these risks to users is not always easy, as the details are highly technical and can be confusing.  To help communicate privacy and security research to a broader audience, we timed the release of our new UC Browser report to coincide with the first in a series of cartoons and info-nuggets on digital security, called “Net Alert.”  The first Net Alert features two informative and funny cartoons by Hong Kong artist Jason Li, each of which tells a story about the risks of using UC Browser.  The Net Alert series also includes background information on digital security topics, like the risks of “man-in-the-middle” attacks and of using open WiFi networks.  (Net Alert is produced by Citizen Lab in collaboration with Open Effect and the University of New Mexico.)  We will be producing more Net Alert cartoons and info-nuggets to accompany future Citizen Lab reports.  Our hope is that by communicating privacy and digital security risks in a friendly and accessible way, more people will be inclined to take small steps to better protect themselves and learn more about the research we undertake.

The UC Browser report is but one entry in an ongoing research series on mobile privacy and security.  For those who are interested, we have also published a FOCI paper, which we are presenting this week at the 2016 USENIX Workshop on Free and Open Communications on the Internet, summarizing our technical analysis of the security and privacy vulnerabilities in three web browsers developed by China’s three biggest web companies: UC Browser, QQ Browser, and Baidu Browser, developed by UCWeb (owned by Alibaba), Tencent, and Baidu, respectively.

The Iranian Connection

Today, the Citizen Lab is publishing a new report, authored by the Citizen Lab’s John Scott-Railton, Bahr Abdulrazzak, Adam Hulcoop, Matt Brooks, and Katie Kleemola of Lookout, entitled “Group 5: Syria and the Iran Connection.”

The full report is here: https://citizenlab.org/2016/08/group5-syria/

Associated Press has an exclusive report here: http://bigstory.ap.org/article/6ab1ab75e89e480a9d12befd3fea4115/experts-iranian-link-attempted-hack-syrian-dissident

And, I wrote an oped for the Washington Post about our report, which can be found here:  https://www.washingtonpost.com/posteverything/wp/2016/08/02/how-foreign-governments-spy-using-email-and-powerpoint/

This report describes an elaborately staged malware operation with targets in the Syrian opposition. We first discovered the operation in late 2015 when a prominent member of the Syrian opposition, Noura Al-Ameera, spotted a suspicious e-mail containing a PowerPoint slideshow purporting to show evidence of “Assad crimes.”  Rather than open it, Al-Ameera wisely forwarded it to us at the Citizen Lab for further analysis.  Upon investigation, we determined the PowerPoint was laden with spyware.

Following that initial lead, our researchers spent several months engaged in careful network analysis, reverse engineering, and mapping of the command and control infrastructure.  Although we were not able to make a positive attribution to a single government (a common issue in cyber espionage investigations), we were able to determine that behind the targeted attack on Noura Al-Ameera is a new espionage group operating out of Iranian Internet space, possibly a privateer and likely working for either the Syrian or Iranian governments (or both).

Citizen Lab has tracked four separate malware campaigns that have targeted the Syrian opposition since the early days of the conflict: Assad regime-linked malware groups, the Syrian Electronic Army, ISIS, and a group with ties to Lebanon. Our latest report adds one more threat actor to the list, which we name “Group5” (to reflect the four other known malware groups) with ties to Iran.

The report demonstrates yet again that civil society groups are persistently targeted by digital malware campaigns, and that their reliance on shared social media and digital mobilization tools can be a source of serious vulnerability when exploited by operators using clever social engineering methods.

On Research in the Public Internet

This post is cross posted from https://citizenlab.org/2016/07/research-interest/

On January 20, 2016, Netsweeper Inc., a Canadian Internet filtering technology service provider, filed a defamation suit with the Ontario Superior Court of Justice. The University of Toronto and I were named as defendants. The lawsuit pertained to an October 2015 report of the Citizen Lab, “Information Controls during Military Operations: The case of Yemen during the 2015 political and armed conflict,” and related comments to the media. Netsweeper sought $3,000,000.00 in general damages; $500,000.00 in aggravated damages; and an “unascertained” amount for “special damages.”

On April 25, 2016, Netsweeper discontinued its claim in its entirety.

Between January 20, 2016 and today, we chose not to speak publicly about the lawsuit. Instead, we spent time preparing our statement of defence and other aspects of what we anticipated would be full legal proceedings.

Now that the claim has been discontinued it is a good opportunity to take stock of what happened, and make some general observations about the experience.

It should be pointed out that this is not the first time a company has contemplated legal action regarding the work of the Citizen Lab. Based on emails posted to WikiLeaks from a breach of the company’s servers, we know that the Italian spyware vendor Hacking Team communicated with a law firm to evaluate whether to “hit [Citizen Lab] hard.” However, it is the first time that a company has gone so far as to begin litigation proceedings. I suspect it will not be the last.

Canada has historically proven a plaintiff-friendly environment for defamation cases. Fortunately, Ontario has recognized the importance of protecting and encouraging speech on matters of public interest: on November 3, 2015, the legal landscape in the province shifted when a new law called the Protection of Public Participation Act (PPPA) came into force. It was specifically designed to mitigate “strategic litigation against public participation,” or SLAPP suits. The Act enumerates its purposes as:

(a) to encourage individuals to express themselves on matters of public interest;

(b) to promote broad participation in debates on matters of public interest;

(c) to discourage the use of litigation as a means of unduly limiting expression on matters of public interest; and

(d) to reduce the risk that participation by the public in debates on matters of public interest will be hampered by fear of legal action.

Under the Act, a judge may dismiss a defamation proceeding if “the person satisfies the judge that the proceeding arises from an expression made by the person that relates to a matter of public interest.” The Act allows for recovery of costs, and if, “in dismissing a proceeding under this section, the judge finds that the responding party brought the proceeding in bad faith or for an improper purpose, the judge may award the moving party such damages as the judge considers appropriate.”

In our view, the work of Citizen Lab to carefully document practices of Internet censorship, surveillance, and targeted digital attacks is precisely the sort of activity recognized as meriting special protection under the PPPA. Had our proceedings gone forward, we intended to exercise our rights under the Act and move to dismiss Netsweeper’s action.

Regardless of the status of the suit, we strenuously disagree with the claims made by Netsweeper, and stand firm in the conviction that my remarks to the media, and the report itself, are both clearly responsible communications on matters of public interest and fair comment as defined by the law.

One point bears underscoring: it is an indisputable fact that Citizen Lab tried to obtain and report Netsweeper’s side of the story. Indeed, we have always welcomed company engagement with us and the public at large in frank dialogue about issues of business and human rights. We sent a letter by email directly to Netsweeper on October 9, 2015. In that letter we informed Netsweeper of our findings, and presented a list of questions. We noted: “We plan to publish a report reflecting our research on October 20, 2015. We would appreciate a response to this letter from your company as soon as possible, which we commit to publish in full alongside our research report.”

Netsweeper never replied.

We expect that Citizen Lab research will continue to generate strong reaction from companies and other stakeholders that are the focus of our reports. The best way we can mitigate legal and other risk is to continue to do what we are doing: careful, responsible, peer-reviewed, evidence-based research. We will continue to investigate Netsweeper and other companies implicated in Internet censorship and surveillance, and we will continue to give those companies a chance to respond to our findings, and publish their responses, alongside our reports.

I come away from this experience profoundly appreciative of the skills of my staff and colleagues, and in particular Jakub Dalek, Sarah McKune, and Adam Senft, who assisted in the legal preparations.

Lastly, I am grateful to the University of Toronto for their support throughout this process. With corporate involvement in academia seemingly everywhere these days, it is tempting to get cynical about universities, and wonder whether corporate pressures will make university administrators lose sight of their core mission and purpose. After the experiences of the last few months, I feel optimistic about the possibilities of speaking truth to power with the protection of academic freedom that the University of Toronto has provided me.

Meanwhile, back to work on another Citizen Lab report.

The Week of Holding “Big Data” Accountable

The world of “Big Data,” “The Internet of Things,” or simply… “Cyberspace.”

Whatever we choose to call it, never in human history has something so profoundly consequential for so many people’s daily lives been unleashed in such a short period of time.  Certainly, the printing press, the telegraph, radio, and television were all extraordinary.  But what is going on now is truly unprecedented in its sudden, dramatic impact.  In the span of a few short years, billions of citizens the world over are immersing themselves in an entirely new communications environment — one that is changing not only how we think and behave but, more profoundly, how society as a whole is fundamentally structured.  Information that previously was stored in our office drawers, in locked closets, in our diaries, even in our minds, we are now transmitting to thousands of private companies and, by extension, to government agencies.

This world of Big Data is a supernova of billions of human interactions, habits, movements, thoughts, and desires, ripe to be harvested, analyzed, and then fed back to us, in turn, to predict and shape us.  It should come as no surprise, given the rate at which this transformation is occurring, that there will be unintended — and possibly even seriously detrimental — consequences for privacy, liberty, and security.

Evidence of these consequences is now beginning to accumulate.  First, there are privacy issues. Data breaches that expose the email and password credentials of tens of millions of people have become so routine that researchers are now describing them as “megabreaches.” Our research at the Citizen Lab has shown that numerous popular mobile applications used by hundreds of millions of people routinely leak sensitive user information, including, in some cases, the user’s geolocation, device ID and serial number, and lists of nearby WiFi networks. We have discovered that some applications were so poorly secured that anyone with control of a network to which they connect (e.g., a WiFi hotspot) could easily spoof a software update to install spyware on an unwitting user’s device.
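The defence these applications lacked is authenticity checking: before installing an update, the app should verify that the payload actually came from the vendor rather than from whoever controls the network. As a minimal illustrative sketch (the function names and the pinned digest here are invented for illustration, not drawn from any real app; production systems use public-key code signing rather than a single pinned hash):

```python
import hashlib

# Digest the app ships with, pinned at build time (illustrative value).
PINNED_SHA256 = hashlib.sha256(b"genuine update payload").hexdigest()

def is_authentic_update(payload: bytes) -> bool:
    """Accept an update only if its SHA-256 digest matches the pinned value."""
    return hashlib.sha256(payload).hexdigest() == PINNED_SHA256

genuine = b"genuine update payload"
spoofed = b"attacker-supplied spyware"  # what a hostile hotspot could serve

assert is_authentic_update(genuine)        # real update passes
assert not is_authentic_update(spoofed)    # spoofed payload is rejected
```

An app that installs whatever bytes arrive over the network, with no check of this kind, hands update-installation privileges to anyone positioned between it and its server.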

Poorly designed mobile applications, such as those we have examined, are a goldmine for criminals and spies, and yet we surround ourselves with them. Disclosures by former National Security Agency (NSA) contractor Edward Snowden have shown that state intelligence agencies routinely vacuum up information leaked by applications in this way, and use the data for mass surveillance.  And what they don’t acquire from leaky applications, they get directly from the companies through lawful requests.  The confluence of interests around commercial and state surveillance is where Big Data meets Big Brother.

Beyond privacy issues are those of security. For example, researchers have demonstrated how they could use remote WiFi connections to take over the controls of a smart car or even an airliner’s cockpit systems.  Others have shown proof-of-concept attacks against “smart home” systems that remotely cracked door lock codes, disabled vacation mode, and induced a false fire alarm. Of course, what happens in the lab is but an omen of what’s to come in the real world. Several years ago, a computer virus called “Stuxnet,” reportedly developed by the US and Israel, was used to sabotage Iranian nuclear enrichment plants.  Dozens of countries are reportedly researching and stockpiling their own Stuxnet-like cyber weapons, which in turn is generating a huge commercial market for the hidden software flaws such weapons exploit. Perversely adding to these insecurities (as the FBI-Apple controversy showed us), some government agencies are, in fact, pressuring companies to weaken their systems by design to aid law enforcement and intelligence agencies.  As such insecurities mount, and as more and more of our critical infrastructure is networked, the Big Data environment in which we live may turn out to be a digital house of cards.

This past week, the Citizen Lab and our partners, Open Effect, produced several outputs and activities that related to concerns around privacy and security in the world of Big Data, including some that we hope can help mitigate some of these unintended consequences.

First, the Citizen Lab and Open Effect released a revamped version of the Access My Info tool, which allows Canadians to exercise their legal rights to ask companies about the data they collect on them, what they do with it, and with whom they share it.  I wrote an op-ed for the CBC about the tool, and there were several other media reports, including an interview by the CBC’s Metro Morning host Matt Galloway with Andrew Hilts of Citizen Lab and Open Effect.

Also, yesterday the CBC Ideas broadcast a special radio show on “Big Data Meets Big Brother,” in which I participated alongside Ann Cavoukian and Neil Desai, with Munk School director Stephen Toope moderating.  We discussed the balance between national security and privacy, and focused on the limited oversight mechanisms that exist in Canada around security agencies, especially the Communications Security Establishment (CSE).

Finally, Citizen Lab and Open Effect, as part of our Telecommunications Transparency Project, released a DIY Transparency Reporting Tool.  The tool is a software template that provides companies with a guide for developing transparency reports. To give some context for the tool, companies are increasingly encouraged to release public reports on how long client data is retained, how the data is used, and how often—and under what lawful authority—the data is shared with government agencies.  The DIY Transparency Reporting Tool is the flip side of the Access My Info project:  whereas the latter encourages consumers to ask companies and governments what they do with our data, the Transparency Reporting Tool provides companies with an easy-to-use template to take the initiative and report that information to us.

The world of Big Data has come upon us like a hurricane, with most consumers bewildered by what is happening to the data they routinely give away.  Meanwhile, companies are reaping a harvest of highly personalized information to generate enormous profits, with very little public accountability around their conduct or the design choices they make.  It’s time we encouraged consumers to “lift the lid” on the Big Data ecosystem, right down to the algorithms that sort us and structure our choices, while simultaneously pressing companies to be more responsible stewards of our data.  Tools like Access My Info and the DIY Transparency Reporting Tool are a good first step.