Rebuilding the master’s house instead of repairing the cracks: why “diversity and inclusion” in the digital rights field is not enough

Paul Sableman, CC BY 2.0

Silicon Valley is not the only sector with a “white guy” problem: civil society struggles with this as well. Oddly, it wasn’t until I looked at the group photo taken at the Digital Freedom Fund’s first strategy meeting that I noticed it: everyone in the photo except for me was white. I had just founded a new organisation supporting strategic litigation on digital rights in Europe, and this had been our first field-wide strategic meeting, bringing together 32 key organisations working on this issue in the region. This was in 2018. In 2019, the number of participants had increased to 48, but the picture in the group photo was still pretty pale, with the team of my organisation accounting for half of the four exceptions to that colour palette. And while gender representation overall seemed fairly balanced, and there was a diverse range of nationalities present, some voices were noticeably absent from the room. For example, the overall impression of participants was that there was no one with a physical disability attending.* It was clear: something needed to change.

In all fairness, the participants themselves had clocked this as well –– the issue of decolonising the digital rights field had significant traction in the conversations taking place in the course of those two days in February. I have been trying to find good statistics on what is popularly referred to as “diversity and inclusion” (and sometimes as “diversity, equity and inclusion”; I have fallen into that trap myself in the past when speaking about technology’s ability to amplify society’s power structures), both in the human rights field more widely and the digital rights field specifically, but failed. Perhaps I was not looking in the right places; if so, please point me in the right direction. The situation is such, however, that one hardly needs statistics to conclude that something is seriously amiss in digital rights land. A look around just about any digital rights meeting in Europe will clearly demonstrate the dominance of white privilege, as does a scroll through the staff sections of digital rights organisations’ webpages. Admittedly, this is hardly a scientific method, but sometimes we need to call it as we see it. 

This is an image many of us are used to, and have internalised to such an extent that I, too, as a person who does not fit that picture, took some time to wake up to it. But it clearly does not reflect the composition of our societies. What this leaves us with is a watchdog that will inevitably have too many blind spots to properly serve its function for all the communities it is supposed to look out for. To change that, focusing on “diversity and inclusion” is not enough. Rather than working on (token) representation, we need an intersectional approach that is ready to meet the challenges and threats to human rights in an increasingly digitising society – challenges and threats that often disproportionately affect marginalised groups. Marginalisation is not a state of being; it is something that is done to others by those in power. Therefore, we need to change the field, its systems and its power structures. In other words: we need a decolonising process for the field and its power structures rather than a solution focused on “including” those with disabilities, from minority or indigenous groups, and the LGBTQI+ community in the existing ecosystem.

How do we do this? I don’t know. And I probably will never have a definitive answer to that question. What I do know is that the solution is unlikely to come from the digital rights field alone. It is perhaps trite to refer to Audre Lorde’s statement that “the master’s tools will never dismantle the master’s house” in this context, but if the current field had the answers and the willingness to deploy them, the field would look very different. Lorde’s words also have a lot to offer as a perspective on what we might gain from a decolonising process as opposed to “diversity and inclusion”. While the following quote focuses on the shortcomings of white feminism, it is a useful aid in helping us imagine what strengths a decolonised digital rights field might represent:

“Advocating the mere tolerance of difference between women is the grossest reformism. It is a total denial of the creative function of difference in our lives. Difference must be not merely tolerated, but seen as a fund of necessary polarities between which our creativity can spark like a dialectic. … Only within that interdependency of different strengths, acknowledged and equal, can the power to seek new ways of being in the world generate, as well as the courage and sustenance to act where there are no charters.”

The task of re-imagining and then rebuilding a new house for the digital rights field is clearly enormous. As digital rights are human rights and permeate all aspects of society, the field does not exist in isolation. Therefore, its issues cannot be solved in isolation either –– there are many moving parts, many of which will be beyond our reach as an organisation to tackle alone (and not just because DFF’s current geographical remit is Europe). But we need to start somewhere, and we need to get the process started with urgency. If we begin working within our sphere of influence and encourage others to do the same in other spaces, to join or to complement efforts, together we might just get very far.

My hope is that, in this process, we can learn from and build on the knowledge of others who have gone before us. Calls to decolonise the academic curriculum in the United Kingdom are growing ever louder, but are being met with resistance. Are there examples of settings in which a decolonising process has been successfully completed? In South Africa, the need to move away from the “able-bodied, hetero-normative, white” standard in the public interest legal services sector is referred to as “transformation”. And efforts by Whose Knowledge to “radically re-imagine and re-design the internet” centre the knowledge of marginalised communities on the internet, looking not only at online resources such as Wikipedia, but also at digital infrastructure, privacy, surveillance and security. What lessons can we learn from those efforts and processes?

This is an open invitation to join us on this journey. Be our critical friend: share your views, critiques and ideas with us. What are successful examples of decolonising processes in other fields that the digital rights field could draw on? What does a decolonised digital rights field look like and what can it achieve? Who will be the crucial allies in making this succeed? How can we ensure that those currently being marginalised lead this effort? Share your views and help us think about this better, so we might start working on a solution that can catalyse structural change.

This post was cross-posted from the Digital Freedom Fund blog

* As observation was the method used for this determination, it is difficult to comment on representation that is less visible than other categories such as religion, socioeconomic background, sexual orientation, etc.

Reflecting on the Australian Feminist Law Journal special issue, ‘Gender, War, and Technology: Peace and Armed Conflict in the Twenty-First Century’

The nexus between war and technology has developed alongside the rapid expansion of military might and spending evident in recent decades. Militaries have advanced their weapon systems and, in theory, saved civilian and military lives in the process. Weapons are now more accurate, theoretically cause less destruction to surrounding infrastructure, and require less time to deploy. Drones, for instance, can target ‘hostiles’ from miles away, allowing the operator never to come into physical contact with the violence of war. Specialty ‘armour’ can better protect soldiers and, by distributing weight, make their job more efficient. Soldiers (both men and women) will therefore likely become less exhausted from carrying out common tasks and so, it is alleged, be clearer of mind when making key decisions on the battlefield. But are these all welcome achievements? And are individuals to accept these achievements at face value?

Alongside the development of these military technologies there has been a push from scholars to recognise that technology, war, and law are not the only sites of intersection. Gender, as a starting point for scholarship on war and technology, and as a tool to investigate the ways in which technology is used, understood, and imagined within military and legal structures and in war, offers an analysis that questions the pre-existing biases in international law and in feminist spaces. Using gender as a method for examination, alongside feminist legal scholarship, expands the way military technologies are understood as influencing human lives both on and off the battlefield. This type of analysis disrupts the use of gender to justify and make palatable new military technologies. The Australian Feminist Law Journal’s special issue entitled ‘Gender, War, and Technology: Peace and Armed Conflict in the Twenty-First Century’ (Volume 44, Issue 1, 2018) has tackled key issues and questions that emanate precisely from the link between the concepts of ‘gender, war, and technology’, which editors Jones, Kendall, and Otomo draw out through their own writing and the perspectives of the various contributing authors.

The following thoughts/questions, which developed while reading this issue, speak to the critiques advanced within these articles and to the developments that the issue’s engagement with these topics has generated. As this contribution suggests, intersectional issues remain ever present within new technological advances, which begs the question: who are the programmers? If the desire for and use of technology to gain military advantage comes from a place of primarily white, Western, heteronormative, masculine, and secure socio-economic status, then does the method of technological advancement and deployment become defined along similar identities? Does the use of such technology change command structures, whereby the weapon becomes ‘in charge’?

Digital rights are human rights

As the boundaries between our online and offline lives blur, is there really a distinction between “digital” and other human rights?

UN Photo | Eleanor Roosevelt, holding the Universal Declaration of Human Rights

What do we mean when we talk about “digital rights”? This is a fundamental question that influences the Digital Freedom Fund’s strategy as we define the parameters for supporting the work of activists and litigators in Europe.

A quick search online yields a variety of definitions, most of which focus on the relationship between human beings, computers, networks and devices. Some of the narrower ones focus on the issue of copyright exclusively.

As our lives are digitalised further, does this approach to defining the term make sense?

In many ways, we already live in the sci-fi future we once imagined. The internet of things is here. Our food is kept cold in what we used to call a fridge, but what is now a computer that also has the ability to freeze things. The main way in which we communicate with our colleagues, family and loved ones is through our mobile devices, and what happens on social media is alleged to have a significant impact on elections. Our data are being collected by governments and corporations alike. In all of these contexts, our basic human rights – our rights to freedom of expression, freedom of assembly, privacy, and the like – are implicated. If there ever was a dividing line between “digital” rights and human rights, it has blurred to the point of irrelevance.

In line with the reality of our time, at DFF we work with a broad definition of digital rights for our grantmaking and field support activities. We consider digital rights to be human rights as applicable in the digital sphere – that is, human rights in both physically constructed spaces, such as infrastructure and devices, and in virtually constructed spaces, like our online identities and communities.

If digital rights are human rights, then why use a different term? The label “digital rights” merely serves to pinpoint the sphere in which we are exercising our fundamental rights and freedoms. To draw concrete attention to an issue, using a term that expresses the context can help with framing and highlighting the issue in a compact manner. With our digital rights under threat on many fronts, this is important. Just as it was important, in 1995, for Hillary Clinton to state at the UN Fourth World Conference on Women in Beijing that “human rights are women’s rights, and women’s rights are human rights,” and for President Obama in 2016 to stress that LGBT rights are human rights, we should all be aware that digital rights are human rights, too. And they need to be protected.

As we further engage with the digital rights community in Europe, we look forward to supporting their important human rights work and highlighting their successes in this space. Part of that mission also includes creating a broader understanding that digital rights are indeed human rights. We hope you will join us in sharing that message.

This article has been cross-posted on the Digital Freedom Fund blog. To follow DFF’s work and be notified when we launch, sign up for our newsletter and follow us on Twitter.

Go On! KPBS Dead Reckoning: War, Crime & Justice From WWII to the War on Terror

Go On! makes note of interesting conferences, lectures, and similar events.

KPBS and ILG’s own Prof. Naomi Roht-Arriaza present “Dead Reckoning”, a three-hour documentary series on PBS that follows war crimes investigators and prosecutors as they pursue some of the world’s most notorious criminals, notably Adolf Eichmann, Saddam Hussein, Radovan Karadzic, Charles Taylor, and Efraín Ríos Montt. The first episode, “The General’s Ghost”, airs Tuesday, March 28, 2017, at 8 PM on KPBS TV. Click here for details.

‘Fake news’ highlights much bigger problems at play

Hardly a day goes by without another story on fake news. With the excessive coverage dedicated to it globally, you would think it is something new. But ‘fake news’ is not new and the ways we try to combat it only highlight our inadequacies in dealing with much bigger problems.

As the US Presidential Election progressed, public fixation on the term grew, and so did ambitions to combat it. In Germany, one suggested approach has been to legislate against it, forcing social media companies to delete fake news posts or face 500,000 EUR fines. Sweden also threatened to initiate legal action against Facebook unless it started cracking down on fake news.

That might sound appealing to some. By simply outlawing fake content, we could have a news ecosystem where the information published is guaranteed to be true. As it turns out, legislating against fake news is a really bad idea. Several countries tried it back when it was called ‘false news’, a label which has for years served as a handy pretext for many a despot seeking to silence the opposition.

The main problem with legislating against fake news is that definitions of what constitutes fake (or false) news will generally be overly broad, leaving them open to interpretation and abuse by authorities. This puts at risk the challenging of viewpoints, which lies at the heart of a democratic society. They know this in Zambia, where a national court declared the country’s false news law unconstitutional in 2014. And they know it in Canada, Uganda, Zimbabwe, and the United States, where supreme courts have all held that false news provisions are incompatible with the right to freedom of expression.

A softer approach to combatting fake news was announced by Facebook in December last year. It makes use of third-party fact checking organizations, which will look into user-submitted reports of fake news. This is part of a package of other projects including tackling news illiteracy and improving the skills of journalists. Whether it will be successful is hard to say, but Facebook’s initiatives certainly represent a more constructive approach than simply banning fake news. Unfortunately, they are still merely a band-aid on a much bigger ailment: people’s lack of trust. As it turns out, labeling fake news stories as fake is unlikely to stop people from believing they are true. Why? Because people do not trust the ‘experts’ who make this call for them.

And why should they? In January, the European Union task force East StratCom warned that Russia is seeking to influence the outcome of several key elections in Europe this year with ‘enormous, far-reaching (…) disinformation campaigns.’ The 2,500 fake news stories uncovered by the task force range from conspiracy theories over who shot down Flight MH17 over Ukraine to claims that Sweden had banned Christmas lights for religious reasons and that the EU was planning to ban snowmen as “racist”. By spreading vast amounts of conflicting messages, these disinformatzya campaigns seek to persuade audiences that there are so many versions of events that it is impossible to find the truth, impossible to find information one can really trust. The point is to pollute the news ecosystem, to make readers question everything, and to undermine the very notion of truth itself.

In the digital age, we communicate on platforms that resemble medieval marketplaces: everyone is shouting and no one seems able to find common ground with those across the aisle. Photo: Francis McKee, CC BY 2.0

People’s difficulty in trusting information is a much bigger problem than fake news. It is also a central premise of the digital age, as the “Gutenberg Parenthesis” theory highlights: the theory argues that the digital age partly represents a return to medieval ways of communicating, before Gutenberg’s movable type facilitated easy printing and revolutionised the world. The printed word carried an authority that oral communication did not possess. But then the internet happened, and we are now communicating through platforms that resemble marketplaces where everyone is shouting, and where those who want to undermine their opponents can simply hire an army of trolls to do the work for them.

Labelling content as fake news may help some to navigate the news ecosystem, but it represents a shallow response to much larger underlying problems. Legislating against fake news may make the controversy disappear for a moment, but it has a potentially chilling effect on freedom of expression. Neither approach will help people figure out whom or what to trust. There are no easy or quick fixes, but if the ambition is to address fake news in all its forms, we need to focus on the underlying issues rather than prescribing symptomatic treatment. That will require us to go beyond scratching the surface and confront the deeper problems of our own bias and our inability to reach across the aisle and find common ground with the people we disagree with.

This post was co-authored by Andreas Reventlow, Programme Development and Digital Freedom Advisor at International Media Support who works with journalists and human rights defenders to promote standards of professional journalism, digital security and internet freedom. It has been cross-posted from the Berkman Klein Center collection on Medium.

Technology for Accountability Lab MOOC

The Program on Liberation Technology (LibTech) at Stanford’s Center on Democracy, Development and the Rule of Law, together with the National Democratic Institute (NDI), is proud to launch a free massive open online course dubbed the “Technology for Accountability Lab”.

The course is geared toward global democracy activists, software developers and other stakeholders who want to conceptualize, plan and implement technological tools and advocacy strategies to improve transparency by opening up political and governmental processes.

This 10-week course – which starts on August 9, 2016 – will feature video lectures by Stanford professors Terry Winograd and Larry Diamond, as well as lecturers from NDI, Transparency International, Sunlight Foundation, Creative Commons, ProPublica, and other experts.

To learn more about the course and register, visit the course link. Please share this announcement widely with interested participants and professional networks (#TFALAB).

Go On! University of Essex Human Rights Summer School (early enrolment discounts available now)

Following on from a successful second year, the Human Rights Centre at the University of Essex is offering its five-day summer school on Human Rights Research Methods from 27 June to 1 July 2015. Additionally, the Human Rights Centre is offering a second week (4-5 July) of thematic modules on two contemporary and cutting-edge issues in human rights:

An international team of experts, including leading human rights academics and practitioners, will deliver the teaching sessions. These are essential courses for postgraduate students, academics, lawyers, those working in civil society and international organisations and, importantly, those holding positions in government, including diplomats and civil servants. The thematic modules are run in conjunction with the International Centre on Human Rights & Drug Policy and the Human Rights Centre.

Courses will be held at the University of Essex campus in Wivenhoe Park, an hour train ride from central London.  An early booking discount on the published course fee rates is available now.

A full course programme, including enrolment details, is available here. We hope to see you at Essex this summer!