Why COVID-19 is a Crisis for Digital Rights

Adam Nieścioruk on Unsplash: street art – graffiti of a face mask on a wall during the Coronavirus (COVID-19) pandemic in Warsaw, Poland

The COVID-19 pandemic has triggered an equally urgent digital rights crisis.

New measures being hurried in to curb the spread of the virus, from “biosurveillance” and online tracking to censorship, are potentially as world-changing as the disease itself. These changes aren’t necessarily temporary, either: once in place, many of them can’t be undone.

That’s why activists, civil society and the courts must carefully scrutinise questionable new measures, and make sure that – even amid a global panic – states are complying with international human rights law.

Human rights watchdog Amnesty International recently commented that human rights restrictions are spreading almost as quickly as coronavirus itself. Indeed, the fast-paced nature of the pandemic response has empowered governments to rush through new policies with little to no legal oversight.

There has already been a widespread absence of transparency and regulation when it comes to the rollout of these emergency measures, with many falling far short of international human rights standards.

Tensions between protecting public health and upholding people’s basic rights and liberties are rising. While it is of course necessary to put in place safeguards to slow the spread of the virus, it’s absolutely vital that these measures are balanced and proportionate.

Unfortunately, this isn’t always proving to be the case.

The Rise of Biosurveillance

A panopticon world on a scale never seen before is quickly materialising.

“Biosurveillance” – the tracking of people’s movements, communications and health data – has already become a buzzword, used to describe certain worrying measures being deployed to contain the virus.


The means by which states, often aided by private companies, are monitoring their citizens are increasingly extensive: phone data, CCTV footage, temperature checkpoints, airline and railway bookings, credit card information, online shopping records, social media data, facial recognition, and sometimes even drones.

Private companies are exploiting the situation and offering rights-abusing products to states, purportedly to help them manage the impact of the pandemic. One Israeli spyware firm has developed a product it claims can track the spread of coronavirus by analysing two weeks’ worth of data from people’s personal phones, and subsequently matching it up with data about citizens’ movements obtained from national phone companies.
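
By way of illustration only – nothing public confirms how this particular product works – a crude version of such location-trace matching might look like the following Python sketch, in which every name, threshold and data structure is invented:

```python
# Hypothetical sketch of location-trace matching, for illustration only.
# All names, thresholds and data structures are invented; they do not
# describe any vendor's actual product.
import math
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Ping:
    user_id: str
    lat: float
    lon: float
    timestamp: datetime

def metres_apart(a: Ping, b: Ping) -> float:
    """Approximate ground distance via an equirectangular projection."""
    dlat = math.radians(b.lat - a.lat)
    dlon = math.radians(b.lon - a.lon) * math.cos(math.radians((a.lat + b.lat) / 2))
    return 6371000 * math.hypot(dlat, dlon)

def find_exposures(patient_trace, telecom_pings,
                   radius_m=10.0, window=timedelta(minutes=15)):
    """Flag every user whose pings were near the patient in space and time."""
    exposed = set()
    for p in patient_trace:          # the infected person's phone history
        for q in telecom_pings:      # bulk records from national phone companies
            if (q.user_id != p.user_id
                    and abs(q.timestamp - p.timestamp) <= window
                    and metres_apart(p, q) <= radius_m):
                exposed.add(q.user_id)
    return exposed
```

Even this toy version makes the privacy stakes plain: the matching only works if whoever runs it holds the entire population’s movement history.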

In some instances, citizens can also track each other’s movements, leading not only to vertical but also to horizontal sharing of sensitive medical data.

Not only are many of these measures unnecessary and disproportionately intrusive, they also give rise to secondary questions, such as: how secure is our data? How long will it be kept? Is there transparency around how it is obtained and processed? Is it being shared or repurposed, and if so, with whom?

Censorship and Misinformation

Censorship is becoming rife, with many arguing that a “censorship pandemic” is surging in step with COVID-19.

Oppressive regimes are rapidly adopting “fake news” laws. This is ostensibly to curb the spread of misinformation about the virus, but in practice, such legislation is often used to crack down on dissenting voices or otherwise suppress free speech. In Cambodia, for example, at least 17 people have already been arrested for sharing information about coronavirus.


At the same time, many states have themselves been accused of feeding disinformation to their citizens to create confusion, or of arresting those who criticise the government’s response.

In addition, some states have restricted free access to information on the virus, either by blocking access to health apps or by cutting off access to the internet altogether.

I, Friman, Wikipedia: an all-seeing, prison-like panopticon – inside one of the prison buildings at Presidio Modelo, Isla de la Juventud, Cuba

AI, Inequality and Control

The deployment of AI can have consequences for human rights at the best of times, but now, it’s regularly being adopted with minimal oversight and regulation.

AI and other machine learning technologies form the foundation of many surveillance and social control tools. Because of the pandemic, they are increasingly being relied upon to fight misinformation online and to process the huge increase in applications for emergency social protection, which are, naturally, more urgent than ever.

Prior to the COVID-19 outbreak, the digital rights field had consistently warned about the human rights implications of these inscrutable “black boxes”, including their biased and discriminatory effects. The adoption of such technologies without proper oversight or consultation should be resisted and challenged through the courts, not least because of their potential to exacerbate the inequalities already experienced by those hardest hit by the pandemic.

Eroding Human Rights

Many of the human rights-violating measures adopted to date have been taken outside the framework of proper derogations from applicable human rights instruments, which would ensure that emergency measures are temporary, limited and supervised.


Legislation is being adopted by decree, without clear time limitations, and technology is being deployed in a context where clear rules and regulations are absent.

This is of great concern for two main reasons.

First, this kind of “legislating through the back door” – introducing measures that are not necessarily temporary while bypassing proper democratic oversight and checks and balances – results in de facto authoritarian rule.

Second, if left unchecked and unchallenged, this could set a highly dangerous precedent for the future. This is the first pandemic we are experiencing at this scale – we are currently writing the playbook for global crises to come.

If it becomes clear that governments can use a global health emergency to introduce rights-infringing measures without being challenged, and without having to reverse those measures once the emergency passes, we will essentially be handing authoritarian regimes a blank cheque: they need only wait for the next pandemic to impose whatever measures they want.


Therefore, any and all measures that are not strictly necessary, sufficiently narrow in scope, and of a clearly defined temporary nature need to be challenged as a matter of urgency. If they are not, we will be unable to push back against what is otherwise a certain path towards a dystopian surveillance state.

Litigation: New Ways to Engage

In tandem with advocacy and policy efforts, we will need strategic litigation to challenge the most egregious measures through the court system. Going through the legislature alone will be too slow and, with public gatherings banned, public demonstrations will not be possible at scale.

The courts will need to adapt to the current situation – and are in the process of doing so – by offering new ways for litigants to engage. Courts are still hearing urgent matters, and questions concerning fundamental rights and our democratic system will fall within that remit. This has already been demonstrated by the first cases requesting oversight of government surveillance deployed in response to the pandemic.

These issues have never been more pressing, and it’s abundantly clear that action must be taken. The courts can be an important ally in safeguarding our digital rights, including in the current crisis, but we must give them the opportunity to play that role.

This blog post has been cross-posted from the Digital Freedom Fund blog.

Rebuilding the master’s house instead of repairing the cracks: why “diversity and inclusion” in the digital rights field is not enough

Paul Sableman, CC BY 2.0

Silicon Valley is not the only sector with a “white guy” problem: civil society struggles with this as well. Oddly, it wasn’t until I looked at the group photo taken at the Digital Freedom Fund’s first strategy meeting that I noticed it: everyone in the photo except for me was white. I had just founded a new organisation supporting strategic litigation on digital rights in Europe, and this had been our first field-wide strategy meeting, bringing together 32 key organisations working on this issue in the region. This was in 2018.

In 2019, the number of participants had increased to 48, but the picture in the group photo was still pretty pale, with my organisation’s own team accounting for half of the four exceptions to that colour palette. And while gender representation overall seemed fairly balanced, and there was a diverse range of nationalities present, some voices were noticeably absent from the room. For example, the overall impression of participants was that there was no one with a physical disability attending.* It was clear: something needed to change.

In all fairness, the participants themselves had clocked this as well –– the issue of decolonising the digital rights field gained significant traction in the conversations taking place over those two days in February. I have been trying to find good statistics on what is popularly referred to as “diversity and inclusion” (and sometimes as “diversity, equity and inclusion”; I have fallen into that trap myself in the past when speaking about technology’s ability to amplify society’s power structures), both in the human rights field more widely and in the digital rights field specifically, but have failed. Perhaps I was not looking in the right places; if so, please point me in the right direction. The situation is such, however, that one hardly needs statistics to conclude that something is seriously amiss in digital rights land. A look around just about any digital rights meeting in Europe will clearly demonstrate the dominance of white privilege, as does a scroll through the staff sections of digital rights organisations’ webpages. Admittedly, this is hardly a scientific method, but sometimes we need to call it as we see it.

This is an image many of us are used to, and have internalised to such an extent that I, too, as a person who does not fit that picture, took some time to wake up to it. But it clearly does not reflect the composition of our societies. What this leaves us with is a watchdog that will inevitably have too many blind spots to properly serve its function for all the communities it is supposed to look out for. To change that, focusing on “diversity and inclusion” is not enough. Rather than working on (token) representation, we need an intersectional approach that is ready to meet the challenges and threats to human rights in an increasingly digitising society – challenges and threats that often disproportionately affect marginalised groups. Marginalisation is not a state of being; it is something that is done to others by those in power. Therefore, we need to change the field, its systems and its power structures. In other words: we need a decolonising process for the field and its power structures rather than a solution focused on “including” those with disabilities, those from minority or indigenous groups, and the LGBTQI+ community in the existing ecosystem.

How do we do this? I don’t know. And I probably will never have a definitive answer to that question. What I do know is that the solution will not likely come from the digital rights field alone. It is perhaps trite to refer to Audre Lorde’s statement that “the master’s tools will never dismantle the master’s house” in this context, but if the current field had the answers and the willingness to deploy them, the field would look very different. Lorde’s words also have a lot to offer as a perspective on what we might gain from a decolonising process as opposed to “diversity and inclusion”. While the following quote focuses on the shortcomings of white feminism, it is a useful aid in helping us imagine what strengths a decolonised digital rights field might represent:

“Advocating the mere tolerance of difference between women is the grossest reformism. It is a total denial of the creative function of difference in our lives. Difference must be not merely tolerated, but seen as a fund of necessary polarities between which our creativity can spark like a dialectic. … Only within that interdependency of different strengths, acknowledged and equal, can the power to seek new ways of being in the world generate, as well as the courage and sustenance to act where there are no charters.”

The task of re-imagining and then rebuilding a new house for the digital rights field is clearly enormous. As digital rights are human rights and permeate all aspects of society, the field does not exist in isolation. Therefore, its issues cannot be solved in isolation either –– there are many moving parts, many of which will be beyond our reach as an organisation to tackle alone (and not just because DFF’s current geographical remit is Europe). But we need to start somewhere, and we need to get the process started with urgency. If we begin working within our sphere of influence and encourage others to do the same in other spaces, to join or to complement efforts, together we might just get very far.

My hope is that, in this process, we can learn from and build on the knowledge of others who have gone before us. Calls to decolonise the academic curriculum in the United Kingdom are growing ever louder, but are being met with resistance. Are there examples of settings in which a decolonising process has been successfully completed? In South Africa, the need to move away from the “able-bodied, hetero-normative, white” standard in the public interest legal services sector is referred to as “transformation”. And Whose Knowledge’s efforts to “radically re-imagine and re-design the internet” centre the knowledge of marginalised communities on the internet, looking not only at online resources such as Wikipedia, but also at digital infrastructure, privacy, surveillance and security. What are the lessons we can learn from those efforts and processes?

This is an open invitation to join us on this journey. Be our critical friend: share your views, critiques and ideas with us. What are successful examples of decolonising processes in other fields that the digital rights field could draw on? What does a decolonised digital rights field look like and what can it achieve? Who will be crucial allies in having this succeed? How can we ensure that those currently being marginalised lead in this effort? Share your views, help us think about this better, so we might start working on a solution that can catalyse structural change.

This post was cross-posted from the Digital Freedom Fund blog.

* As observation was the method used for this determination, it is difficult to comment on the representation of characteristics that are less visible, such as religion, socioeconomic background, sexual orientation, etc.

Digital rights are *all* human rights, not just civil and political

The UN Special Rapporteur on extreme poverty and human rights consults with the field

This post was co-authored with Jonathan McCully

Last week, following our strategy meeting, the Digital Freedom Fund hosted the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, for a one-day consultation in preparation for his upcoming thematic report on the rise of the “digital welfare state” and its implications for the human rights of poor and vulnerable individuals.


The consultation brought together 30 digital rights organisations from across Europe, who shared many examples of new technologies being deployed in the provision of various public services. Common themes emerged, from the increased use of risk indication scoring to identify welfare fraud, to welfare recipients being required to register for biometric identification cards, and the sharing of datasets between different public services and government departments.
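
To illustrate why participants singled out risk indication scoring, here is a deliberately simplified, hypothetical sketch of how such a score might be computed. The features, weights and threshold are all invented for this example; in real deployments they are typically undisclosed, which is a large part of the concern:

```python
# Hypothetical illustration of a welfare-fraud "risk indication score".
# Features, weights and threshold are invented for this sketch; real
# systems rarely disclose theirs, which is the core transparency concern.
import math

WEIGHTS = {                      # invented example weights
    "months_on_benefits": 0.03,
    "address_changes_last_year": 0.40,
    "household_size": -0.05,
    "previously_flagged": 1.20,
}
BIAS = -2.0
THRESHOLD = 0.5                  # above this, the applicant is flagged

def risk_score(applicant: dict) -> float:
    """Logistic score in (0, 1) from weighted applicant attributes."""
    z = BIAS + sum(w * applicant.get(feature, 0) for feature, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

applicant = {"months_on_benefits": 18, "address_changes_last_year": 2,
             "household_size": 4, "previously_flagged": 1}
if risk_score(applicant) > THRESHOLD:
    print("flagged for fraud investigation")  # no explanation owed to the applicant
```

The point of the sketch is what it leaves out: an applicant scored this way typically has no visibility into the weights, no way to contest an error in the inputs, and often no notice that a score was computed at all.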


At DFF, we subscribe to the mantra that “digital rights are human rights” and we define “digital rights” broadly as human rights applicable in the digital sphere. This consultation highlighted the true breadth of human rights issues that are engaged by the development, deployment, application and regulation of new technologies in numerous aspects of our lives. While many conversations on digital rights tend to centre around civil and political rights –– particularly the rights to freedom of expression and to privacy –– this consultation brought into sharp focus the impact new technologies can have on socio-economic rights such as the right to education, the right to housing, the right to health and, particularly relevant for this consultation, the right to social security.


The UN Special Mandates have already started delving into issues around automated decision-making in a broad spectrum of human rights contexts. In August last year, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression produced a detailed report on the influence of artificial intelligence on the global information environment. This follows on from thematic reports on the human rights implications of “killer robots” and “care robots” by the UN Special Rapporteur on extrajudicial, summary or arbitrary executions and the UN Special Rapporteur on the enjoyment of all human rights by older persons, respectively.


The UN Special Rapporteur on extreme poverty and human rights has similarly placed the examination of automated decision-making and its impact on human rights at the core of his work. This can already be seen from his reports following his country visits to the United States and United Kingdom. In December 2017, following his visit to the United States, he reported on the datafication of the homeless population through systems designed to match homeless people with homeless services (i.e. coordinated entry systems) and the increased use of risk-assessment tools in pre-trial release and custody decisions. More recently, following his visit to the United Kingdom, he criticised the increased automation of various aspects of the benefits system and the “gradual disappearance of the postwar British welfare state behind a webpage and an algorithm.” In these contexts, he observed that the poor are often the testing ground for the government’s introduction of new technologies.


The next report will build upon this important work, and we hope that the regional consultation held last week will provide useful input in this regard. Our strategy meeting presented a valuable opportunity to bring together leading digital rights minds who could provide the Special Rapporteur with an overview of the use of digital technologies in welfare systems across Europe and their impact. It was evident from the discussions that the digital welfare state raises serious human rights concerns: not only the right to social security, but also the right to privacy and data protection, the right to freedom of information, and the right to an effective remedy are engaged. As one participant observed, the digital welfare state seems to present welfare applicants with a trade-off: give up some of your civil and political rights in order to exercise some of your socio-economic rights.

It was clear from the room that participants were already exploring potential litigation strategies to push back against the digital welfare state, and we look forward to supporting them in this effort.

Cross-posted on the Digital Freedom Fund blog and Medium.

Digital rights are human rights

As the boundaries between our online and offline lives blur, is there really a distinction between “digital” and other human rights?


UN Photo | Eleanor Roosevelt, holding the Universal Declaration of Human Rights

What do we mean when we talk about “digital rights”? This is a fundamental question that influences the Digital Freedom Fund’s strategy as we define the parameters for supporting the work of activists and litigators in Europe.

A quick search online yields a variety of definitions, most of which focus on the relationship between human beings, computers, networks and devices. Some of the narrower ones focus on the issue of copyright exclusively.

As our lives are further digitised, does this approach to defining the term make sense?

In many ways, we already live in the sci-fi future we once imagined. The internet of things is here. Our food is kept cold in what we used to call a fridge, but what is now a computer that also has the ability to freeze things. Our mobile devices are the main way in which we communicate with our colleagues, family and loved ones, and what happens on social media is alleged to have a significant impact on elections. Our data are being collected by governments and corporations alike. In all of these contexts, our basic human rights – our rights to freedom of expression, freedom of assembly, privacy, and the like – are implicated. If there ever was a dividing line between “digital” rights and human rights, it has blurred to the point of irrelevance.

In line with the reality of our time, at DFF we work with a broad definition of digital rights for our grantmaking and field support activities. We consider digital rights to be human rights as applicable in the digital sphere: human rights in both physically constructed spaces, such as infrastructure and devices, and in spaces that are virtually constructed, like our online identities and communities.

If digital rights are human rights, then why use a different term? The label “digital rights” merely serves to pinpoint the sphere in which we are exercising our fundamental rights and freedoms. To draw concrete attention to an issue, using a term that expresses the context can help with framing and highlighting the issue in a compact manner. With our digital rights under threat on many fronts, this is important. Just as it was important, in 1995, for Hillary Clinton to state at the UN Fourth World Conference on Women in Beijing that “human rights are women’s rights, and women’s rights are human rights,” and for President Obama in 2016 to stress that LGBT rights are human rights, we should all be aware that digital rights are human rights, too. And they need to be protected.

As we further engage with the digital rights community in Europe, we look forward to supporting their important human rights work and highlighting their successes in this space. Part of that mission also includes creating broader understanding that digital rights are indeed human rights. We hope you will join us in sharing that message.

This article has been cross-posted on the Digital Freedom Fund blog. To follow DFF’s work and be notified when we launch, sign up for our newsletter and follow us on Twitter.