That whirring sound you hear at Orwell’s graveside is him spinning at the wondrous progression of his tale. Big Brother watches, but is now also watched by everyone, who hold the power of surveillance in their pockets.
Crowdsourced intelligence is the new promise as more uses of communication technologies are explored. It is a great power, almost a superpower. We must therefore be mindful of unintended consequences and guard against them.
In the years since Edward Snowden’s 2013 leaks pulled the veil of secrecy away from the NSA and its ‘Five Eyes’ partners, security-conscious journalists have turned to encrypted services like WhatsApp and Signal for communications with sources whose identities they want to protect. The growing availability of encryption was also hailed by activist communities as a win not just for privacy but for human rights, especially in countries where protest is frowned upon or even forbidden by authorities.
There are, of course, many people willing to accept some loss of privacy to prevent terrorist attacks or other criminal activity, often in the belief that such technology would never be used against them. However, the tools and methods exposed by Snowden and in later leaks like the ‘Vault 7’ CIA documents released by Wikileaks in 2017 should have led to an important question: what happens when they inevitably fall into the hands of authoritarian states or the non-state and criminal actors we’re always told they were designed to fight?
We seem to have found a partial answer to this question after the Pegasus Project, a consortium coordinated by the international journalism group Forbidden Stories that includes Amnesty International’s Security Lab and a number of media companies, brought new revelations to light about spyware called Pegasus that can be used against both iPhones and Android devices. It was developed by an Israeli firm, NSO Group, and has reportedly been used by dozens of states, including Saudi Arabia, Hungary, Mexico and India.
NSO Group began as a partnership of three Israelis, the initials of whose first names were used to create the name. In 2014, a majority stake in the company was sold to U.S.-based Francisco Partners, an investment firm focused on technology. Then, in 2019, two of NSO’s original partners, backed by a British firm called Novalpina Capital, purchased what had already become a much larger company doing business with over 40 countries. Considering the bad publicity and rumors of problems among Novalpina’s three partners, the company’s future now looks to be in doubt.
The company has advertised its software as an important tool in fighting terrorism and other crime. It also claims to be addressing humanitarian concerns, with a spokesperson saying in a press release, “NSO will continue its mission of saving lives, helping governments around the world prevent terror attacks, break up pedophilia, sex, and drug-trafficking rings, locate missing and kidnapped children, locate survivors trapped under collapsed buildings, and protect airspace against disruptive penetration by dangerous drones.”
The firm was already stirring controversy in 2016, when Citizen Lab, a research organization based at the University of Toronto, published an investigation into the use of Pegasus malware by the UAE, which it alleged had deployed the spyware against a well-known human rights activist, Ahmed Mansoor. The Pegasus Project, however, has shown how widely the company’s tools seem to have travelled.
The highest-profile alleged target to date is probably the president of France, Emmanuel Macron, one of whose phone numbers was among 50,000 “potential targets” leaked to the Pegasus Project. The list includes journalists, activists, politicians and unexpected targets like teachers and clergy. In an ironic twist, it’s been reported that a former French colony, Morocco, was using the NSO Group tool to surveil Macron and other French politicians. The Moroccan government has denied the accusation and has filed a defamation case in France.
The revelations provided by Citizen Lab and the Pegasus Project overlap with some of the most important news stories of the last few years. In the case of Saudi Arabia, the software was used to spy on a human rights activist living in Canada, Omar Abdulaziz, who was close to the Washington Post columnist Jamal Khashoggi. It’s been reported that the information gleaned might have played a role in the decision to brutally murder Khashoggi in the Saudi consulate in Istanbul, Turkey on October 2nd, 2018. The malicious software was also installed on his fiancée’s phone shortly after his death.
Abdulaziz and seven others have taken legal action against the company in courts in both Israel and Cyprus over allegations that NSO Group facilitated spying by various governments.
The company has also been sued by Facebook, which owns WhatsApp (a case that Microsoft, Alphabet and other large tech firms joined in a rare display of corporate solidarity). The Pegasus Project has since shown that, starting in 2019, a vulnerability in WhatsApp’s voice calling was used to install the surveillance tool on phones whether or not the user answered the call, which would come from an unknown number.
As explained this week by Mehab Qureshi of the Quint, what makes Pegasus somewhat more sophisticated than other malware is that it is a ‘zero-click install’: the exploit can be delivered to a device even if the user never follows a link or takes any other action.
Arguing before the United States Court of Appeals for the Ninth Circuit in late 2020, lawyers for Facebook said that allowing NSO Group to continue offering these services “…would lead to a proliferation of hacking technology, and in the foreseeable future, we will have more foreign governments with powerful and dangerous cyber-surveillance tools. That, in turn, means dramatically more opportunities for those tools to fall into the wrong hands and be used nefariously.”
NSO isn’t the only company doing this kind of work. Another well-known recent example is the appropriately named Spanish firm Undercover Global, which used more conventional means to spy on Julian Assange during his time in the Ecuadorian Embassy in London. The firm is said to have provided details of his private conversations with his lawyers to American intelligence, in violation of the publisher’s basic rights. Hiring Undercover Global might have been a way to create plausible deniability, but it’s interesting that U.S. authorities, who have such a large and sophisticated intelligence community, would feel the need to use a private contractor to spy on Assange. Still, this was an already established reality by the time Booz Allen Hamilton contractor Edward Snowden spilled some of the NSA’s secrets out into the world.
There has been, almost since the beginning of capitalism but intensifying over the past fifty years, a drive to privatize everything from overseas conflict to municipal parking. The Pegasus Project has done a great service in revealing that companies like NSO Group, despite their attempts to present themselves as humanitarians, will sell their wares to the most repressive regimes in the world.
For journalists and activists, what we’ve learned about Pegasus has created new doubts about security. As Azerbaijani journalist Khadija Ismayilova, whose phone was found to have been compromised, told Phineas Rueckert of Forbidden Stories: “I feel guilty for the messages I’ve sent. I feel guilty for the sources who sent me [information] thinking that some encrypted messaging ways are secure and they didn’t know that my phone is infected.”
*The Indian Express has an interesting article on the infrastructure and costs of deploying Pegasus that you can find here.
Artificial intelligence is advancing apace, getting better at identifying and apprehending the bad guys. Cameras and social media are everywhere, with more coming. The FBI has already identified and arrested hundreds of participants in the January 6th Capitol insurrection using facial recognition and other technologies. As the technology moves forward and is used more widely, people may begin to feel more secure that surveillance, especially in public areas, is being used to deter criminal activity and to bring lawbreakers to justice.
This documentary points to some of the ways this could all go bad. The threats are staggering: consistent misidentification of people, especially women and darker-skinned individuals, leading to bad outcomes; and AI being used, as in China, to track everyone’s movements and activity and assign a social score that could affect credit, job access, housing and free political expression.
There are significant, real-world benefits to having an accepted and recognized identity. That’s why the concept of a digital identity is being pursued around the world, from Australia to India. From airports to health records systems, technologists and policy makers with good intentions are digitizing our identities, making modern life more efficient and streamlined.
Governments seek to digitize their citizens in an effort to universalize government services, while the banking, travel, and insurance industries aim to create more seamless processes for their products and services. But this isn’t just about efficiency and market share. In places like Syria and Jordan, refugees are often displaced without an identity. Giving them proof of who they are can improve their settlement, financial security, and job prospects in foreign lands.
Brett Solomon is the executive director of Access Now, an NGO that defends and extends the digital rights of users at risk around the world. He is the founder of RightsCon, an annual global conference that addresses human rights in the digital age.
But as someone who has tracked the advantages and perils of technology for human rights over the past ten years, I am nevertheless convinced that digital ID, writ large, poses one of the gravest risks to human rights of any technology that we have encountered. Worse, we are rushing headlong into a future where new technologies will converge to make this risk much more severe.
For starters, we are building near-perfect facial recognition technology and other identifiers, from the human gait to breath to iris. Biometric databases are being set up in such a way that these individual identifiers are centralized, insecure, and opaque. Then there is the capacity for geo-location of identifiers—that is, the tracking of digital “you”—in real time. A constant feed of insecure data from the Internet of Things may well connect you (and your identity) to other identities and nodes on the network without your consent.
In addition, systems using artificial intelligence and machine learning are used to make decisions based on our identities. Those systems are often built on data that can reinforce bias and discrimination, and are wielded without sufficient transparency or human review. Ultimately, social credit systems, such as those that are currently being developed in China, will be based on digital ID, thereby enabling or disabling our full and free participation in society.
By developing these technologies in parallel with systems for a digital ID, we are not simply establishing an identity to access basic social services. Digital IDs will become necessary to function in a connected digital world. This has not escaped the attention of authoritarian regimes. Already, they are working to splinter the internet, collect and localize data, and impose regimes of surveillance and control. Digital ID systems, as they are being developed today, are ripe for exploitation and abuse, to the detriment of our freedoms and democracies.
We can make another choice. In the design and deployment of digital ID systems, we must advocate for the principles of data minimization, decentralization, consent, and limited access that reinforce our fundamental rights.
First, that means the use of a digital ID should not be mandated. We should have the option to say no to any demand that we have a digital ID, without prejudice or negative repercussions.
Our cybersecurity needs to be defended. The Aadhaar program, India’s national digital ID framework and the world’s largest, was recently shown to be compromised. For a digital ID system to work without becoming an easy target for hacking, it should be decentralized and otherwise adhere to recognized principles for good digital security. A single digital identity used for authentication across multiple contexts creates the potential for pervasive profiling. Likewise, our capacity for anonymity must be preserved.
Our data needs to be protected. Governments are data fiduciaries, and data protection authorities, non-governmental legal experts, and civil society should therefore be consulted in the administrative, legislative, and technical design of digital ID systems. In the case of Aadhaar, a recent ruling by the Supreme Court of India recognized the need for a robust data protection framework.
Transparency is essential. Without transparency, there is no accountability, and few pathways for remedy of human rights abuse.
Finally, access to our data by state authorities must be governed by relevant international legal standards, particularly the “Necessary and Proportionate” principles. Personal information provided for one purpose should not be made available for identification for law enforcement purposes, without being subject to these vital legal standards.
We cannot continue on the current path without stopping to build in necessary human rights protections to mitigate harm. Our civil liberties should be the foundation upon which digital ID technologies, platforms, and systems are being constructed. Otherwise, in the quest to create a digital identity for the benefit of many, our fundamental rights can and will crumble.