Rebuilding the master’s house instead of repairing the cracks: why “diversity and inclusion” in the digital rights field is not enough

Paul Sableman, CC BY 2.0

Silicon Valley is not the only sector with a “white guy” problem: civil society struggles with this as well. Oddly, it wasn’t until I looked at the group photo taken at the Digital Freedom Fund’s first strategy meeting that I noticed it: everyone in the photo except for me was white. I had just founded a new organisation supporting strategic litigation on digital rights in Europe, and this had been our first field-wide strategy meeting, bringing together 32 key organisations working on this issue in the region. This was in 2018. In 2019, the number of participants had increased to 48, but the group photo was still pretty pale, with my organisation’s own team accounting for two of the four exceptions to that colour palette. And while gender representation overall seemed fairly balanced, and there was a diverse range of nationalities present, some voices were noticeably absent from the room. For example, the overall impression of participants was that no one with a physical disability was attending.* It was clear: something needed to change.

In all fairness, the participants themselves had clocked this as well –– the issue of decolonising the digital rights field gained significant traction in the conversations over those two days in February. I have been trying to find good statistics on what is popularly referred to as “diversity and inclusion” (and sometimes as “diversity, equity and inclusion”; I have fallen into that trap myself in the past when speaking about technology’s ability to amplify society’s power structures), both in the human rights field more widely and in the digital rights field specifically, but have failed. Perhaps I was not looking in the right places; if so, please point me in the right direction. The situation is such, however, that one hardly needs statistics to conclude that something is seriously amiss in digital rights land. A look around just about any digital rights meeting in Europe clearly demonstrates the dominance of white privilege, as does a scroll through the staff pages of digital rights organisations’ websites. Admittedly, this is hardly a scientific method, but sometimes we need to call it as we see it.

This is an image many of us are used to, and have internalised to such an extent that even I, a person who does not fit that picture, took some time to wake up to it. But it clearly does not reflect the composition of our societies. What this leaves us with is a watchdog that will inevitably have too many blind spots to properly serve its function for all the communities it is supposed to look out for. To change that, focusing on “diversity and inclusion” is not enough. Rather than working on (token) representation, we need an intersectional approach that is ready to meet the challenges and threats to human rights in an increasingly digitising society –– challenges and threats that often disproportionately affect marginalised groups. Marginalisation is not a state of being; it is something that is done to others by those in power. Therefore, we need to change the field, its systems and its power structures. In other words: we need a decolonising process for the field and its power structures rather than a solution focused on “including” people with disabilities, people from minority or indigenous groups, and the LGBTQI+ community in the existing ecosystem.

How do we do this? I don’t know. And I probably never will have a definitive answer to that question. What I do know is that the solution is unlikely to come from the digital rights field alone. It is perhaps trite to refer to Audre Lorde’s statement that “the master’s tools will never dismantle the master’s house” in this context, but if the current field had the answers and the willingness to deploy them, the field would look very different. Lorde’s words also have a lot to offer as a perspective on what we might gain from a decolonising process as opposed to “diversity and inclusion”. While the following quote focuses on the shortcomings of white feminism, it is a useful aid in helping us imagine what strengths a decolonised digital rights field might have:

“Advocating the mere tolerance of difference between women is the grossest reformism. It is a total denial of the creative function of difference in our lives. Difference must be not merely tolerated, but seen as a fund of necessary polarities between which our creativity can spark like a dialectic. … Only within that interdependency of different strengths, acknowledged and equal, can the power to seek new ways of being in the world generate, as well as the courage and sustenance to act where there are no charters.”

The task of re-imagining and then rebuilding a new house for the digital rights field is clearly enormous. As digital rights are human rights and permeate all aspects of society, the field does not exist in isolation. Its issues, therefore, cannot be solved in isolation either –– there are many moving parts, some of which will be beyond our reach as an organisation to tackle alone (and not just because DFF’s current geographical remit is Europe). But we need to start somewhere, and we need to get the process started with urgency. If we begin working within our sphere of influence and encourage others to do the same in other spaces, to join or complement efforts, together we might just get very far.

My hope is that, in this process, we can learn from and build on the knowledge of others who have gone before us. Calls to decolonise the academic curriculum in the United Kingdom are growing louder, but are being met with resistance. Are there examples of settings in which a decolonising process has been successfully completed? In South Africa, the need to move away from the “able-bodied, hetero-normative, white” standard in the public interest legal services sector is referred to as “transformation”. And efforts to “radically re-imagine and re-design the internet” from Whose Knowledge? centre the knowledge of marginalised communities on the internet, looking not only at online resources such as Wikipedia, but also at digital infrastructure, privacy, surveillance and security. What lessons can we learn from those efforts and processes?

This is an open invitation to join us on this journey. Be our critical friend: share your views, critiques and ideas with us. What are successful examples of decolonising processes in other fields that the digital rights field could draw on? What does a decolonised digital rights field look like, and what can it achieve? Who will be the crucial allies in making this effort succeed? How can we ensure that those currently being marginalised lead this effort? Share your views and help us think about this better, so we might start working on a solution that can catalyse structural change.

This post was cross-posted from the Digital Freedom Fund blog

* As observation was the method used for this determination, it is difficult to comment on less visible forms of representation, such as religion, socioeconomic background, sexual orientation, etc.

Digital rights are *all* human rights, not just civil and political

The UN Special Rapporteur on extreme poverty and human rights consults with the field

This post was co-authored with Jonathan McCully

Last week, following our strategy meeting, the Digital Freedom Fund hosted the UN Special Rapporteur on extreme poverty and human rights, Professor Philip Alston, for a one-day consultation in preparation for his upcoming thematic report on the rise of the “digital welfare state” and its implications for the human rights of poor and vulnerable individuals.

The consultation brought together 30 digital rights organisations from across Europe, which shared many examples of new technologies being deployed in the provision of public services. Common themes emerged, from the increased use of risk indication scoring to identify welfare fraud, to the mandating of welfare recipients to register for biometric identification cards, and the sharing of datasets between different public services and government departments.

At DFF, we subscribe to the mantra that “digital rights are human rights” and we define “digital rights” broadly as human rights applicable in the digital sphere. This consultation highlighted the true breadth of human rights issues that are engaged by the development, deployment, application and regulation of new technologies in numerous aspects of our lives. While many conversations on digital rights tend to centre around civil and political rights –– particularly the rights to freedom of expression and to privacy –– this consultation brought into sharp focus the impact new technologies can have on socio-economic rights such as the right to education, the right to housing, the right to health and, particularly relevant for this consultation, the right to social security.

The UN Special Mandates have already started delving into issues around automated decision-making in a broad spectrum of human rights contexts. In August last year, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression produced a detailed report on the influence of artificial intelligence on the global information environment. This follows on from thematic reports on the human rights implications of “killer robots” and “care robots” by the UN Special Rapporteur on extrajudicial, summary or arbitrary executions and the UN Special Rapporteur on the enjoyment of all human rights by older persons, respectively.

The UN Special Rapporteur on extreme poverty and human rights has similarly placed the examination of automated decision-making and its impact on human rights at the core of his work. This can already be seen from his reports following his country visits to the United States and United Kingdom. In December 2017, following his visit to the United States, he reported on the datafication of the homeless population through systems designed to match homeless people with homeless services (i.e. coordinated entry systems) and the increased use of risk-assessment tools in pre-trial release and custody decisions. More recently, following his visit to the United Kingdom, he criticised the increased automation of various aspects of the benefits system and the “gradual disappearance of the postwar British welfare state behind a webpage and an algorithm.” In these contexts, he observed that the poor are often the testing ground for the government’s introduction of new technologies.

The next report will build upon this important work, and we hope that the regional consultation held last week will provide useful input in this regard. Our strategy meeting presented a great opportunity to bring together sharp digital rights minds who could give the Special Rapporteur an overview of the use of digital technologies in welfare systems across Europe and their impact. It was evident from the discussions that the digital welfare state raises serious human rights concerns: not only the right to social security, but also the right to privacy and data protection, the right to freedom of information, and the right to an effective remedy are engaged. As one participant observed, the digital welfare state seems to present welfare applicants with a trade-off: give up some of your civil and political rights in order to exercise some of your socio-economic rights.

It was clear from the room that participants were already exploring potential litigation strategies to push back against the digital welfare state, and we look forward to supporting them in this effort.

Cross-posted on the Digital Freedom Fund blog and Medium.

Digital rights are human rights

As the boundaries between our online and offline lives blur, is there really a distinction between “digital” and other human rights?

UN Photo | Eleanor Roosevelt, holding the Universal Declaration of Human Rights

What do we mean when we talk about “digital rights”? This is a fundamental question that influences the Digital Freedom Fund’s strategy as we define the parameters for supporting the work of activists and litigators in Europe.

A quick search online yields a variety of definitions, most of which focus on the relationship between human beings, computers, networks and devices. Some of the narrower ones focus on the issue of copyright exclusively.

As our lives are further digitised, does this approach to defining the term still make sense?

In many ways, we already live in the sci-fi future we once imagined. The internet of things is here. Our food is kept cold in what we used to call a fridge but is now a computer that can also freeze things. Our mobile devices are the main way we communicate with our colleagues, family and loved ones, and what happens on social media is alleged to have a significant impact on elections. Our data are collected by governments and corporations alike. In all of these contexts, our basic human rights – our rights to freedom of expression, freedom of assembly, privacy, and the like – are implicated. If there ever was a dividing line between “digital” rights and other human rights, it has blurred to the point of irrelevance.

In line with the reality of our time, at DFF we work with a broad definition of digital rights for our grantmaking and field support activities. We consider digital rights to be human rights as applicable in the digital sphere. That is human rights in both physically constructed spaces, such as infrastructure and devices, and in spaces that are virtually constructed, like our online identities and communities.

If digital rights are human rights, then why use a different term? The label “digital rights” simply pinpoints the sphere in which we exercise our fundamental rights and freedoms. Using a term that expresses the context helps frame and highlight an issue in a compact manner, which matters when our digital rights are under threat on many fronts. Just as it was important, in 1995, for Hillary Clinton to state at the UN Fourth World Conference on Women in Beijing that “human rights are women’s rights, and women’s rights are human rights,” and for President Obama in 2016 to stress that LGBT rights are human rights, we should all be aware that digital rights are human rights, too. And they need to be protected.

As we further engage with the digital rights community in Europe, we look forward to supporting their important human rights work and highlighting their successes in this space. Part of that mission also includes creating broader understanding that digital rights are indeed human rights. We hope you will join us in sharing that message.

This article has been cross-posted on the Digital Freedom Fund blog. To follow DFF’s work and be notified when we launch, sign up for our newsletter and follow us on Twitter.