The scholarship on law, conflict and suffering has for the past two decades been dominated by a moral and analytical concern with “women and children” and sexual violence. However, when we look up and do the body count out in the physical and political world – in the city and along the borderlands – the large majority of those bodies belong to a specific subset of males. Battle deaths, torture, unlawful imprisonment, disappearances and extrajudicial killings overwhelmingly affect young poor men of non-Caucasian ethnicity.
These dead male bodies constitute a challenge for feminist legal thought. In this blog post, I raise the following question: How can, and should, we reflect critically and differently on this type of gendered suffering in war and conflict situations? I argue that a greater scholarly acknowledgement of the scale and nature of this gendered form of social suffering is ethically important for feminist legal scholars. Thinking about male suffering does not undermine the feminist project of critically exploring the world women live in, or of identifying injustice against women; neither is it in competition with or at the expense of attention to how women suffer in war. Rather, a consideration of male suffering and dead male bodies provides an important additional critical angle on the social, cultural and political structures that produce and sanction suffering through indifference, the deliberate infliction of harm or, increasingly, through data-driven human sorting exercises.
Using data-driven practices of human sorting as a point of departure, I consider the role of technology in co-constituting male suffering and reflect on what this means for law and the recognition of men as ‘targetable’ or ‘protectable’. To illustrate my argument, I point to examples of targeting practices in drone strikes and data-driven screening practices in refugee protection.
The broad contemporary turn to Information and Communication Technologies (ICTs) in humanitarian and conflict settings is informed by sweeping progress narratives regarding technology’s ability to effect social change. ICTs are now a standard part of humanitarian aid, employed for evidence collection, risk assessments, protection work, and aid distribution by NGOs, international organisations, governments, and the private sector. This includes smartphones, satellites, surveillance drones, biometric technology, and social media and data aggregation platforms. These ‘humanitarian technologies’ perform remote sensing analysis and crowd mapping; individual identification through cell phone tracking, fingerprints, iris scans, or facial recognition; vulnerability, needs, and risk assessments; and serve as conduits for aid delivery in the form of cash-based interventions. In the context of conflict, many of the problems codified as ‘human rights violations’, ‘humanitarian crises’, or ‘security threats’ are currently portrayed as amenable to technological solutions. The blending of humanitarian and military objectives and practices is well illustrated by the figure of the weaponized ‘humanitarian drone’.
I argue that such data-driven processes – coding, registration, data gathering, and everyday program execution and maintenance – create male harm and male vulnerability by effectively removing large numbers of undesirable male bodies both from face-to-face encounters with decision makers (military, humanitarian, or otherwise) and from the protection guarantees of international law.
An already well-known instance of data-mediated masculinity concerns how the politics of drone targeting is co-constituted through a conceptualization of civilians that excludes men in targeted communities, but also through the legacy of colonial airpower. Drone targeting builds on but also reinforces the exclusion of men from the conceptualization of who is to be protected, by way of rendering men as legitimate targets because of their gender. We see this in so-called signature strikes, which allow for killing people without exact identification: they target people who fit into the category ‘military-aged males’, who live in regions where terrorists operate, and ‘whose behavior is assessed to be similar enough to those of terrorists to mark them for death’.
However, ‘dead male bodies’ are not produced only through targeting. A less familiar example concerns bureaucratic refugee screening processes. Male refugees are already regularly excluded from full access to durable solutions because of their gender. For example, in 2015, the Canadian government announced that it would only accept unaccompanied men who identified themselves as non-heterosexual for third-country resettlement. I propose that the adoption of ICT to achieve better results-based management and a higher degree of effectiveness exacerbates this trend towards male exclusion. Data-driven refugee protection mechanisms – in practice representing a significant shift from a legal to a socio-technical protection regime – build on a composite of social, technical, and political problem representations of gendered vulnerability that renders male vulnerability invisible. At the same time, these mechanisms are portrayed by UNHCR, states and tech companies as providing “perfect data” – in stark contrast to the inadequate and overburdened legal and quasi-legal processes of the international refugee regime.
My argument concerns how vulnerability assessments in particular comprise a crucial factor in the digital exclusion of male refugees. According to prevailing understandings of vulnerability among actors in the humanitarian field, women and children are assumed to be ‘the most vulnerable’. These perceptions of gender and vulnerability not only impact directly on program priorities but also shape screening efforts and data generation that in turn legitimize these priorities. This leads to a mutually reinforcing notion of women as vulnerable and of men’s specific gendered problems as invisible and irrelevant to vulnerability considerations—and of both as uncontestable. Algorithmic ‘black box decisions’ are hard to unpack, usually impossible to appeal, and leave aid beneficiaries with little opportunity to understand how decisions concerning their digital and physical personae were made.
Data-mediated masculinity is produced by large-scale and systematic processes of obscuring, forging, or foregrounding connections between male bodies, between male bodies and cultural signifiers (law in particular), and between male bodies and temporal and geographical space. Targeting of potential ‘extremists’ and screening of refugees are only two examples of such processes. I suggest that these examples illustrate a shifting composite of attention and dis-attention to male vulnerability and intersectionality residing at the heart of the gendered and racialised logic of screening and targeting. This logic produces distinctions between ‘protectable’ and ‘undesirable’ civilian bodies, where data-mediated masculinity emerges as a key attribute of this undesirability. More attention must be given to how binary gender relations as embedded in law and policy documents are inscribed into the algorithms used for targeting and screening. Determinations of vulnerability and risk are typically presented as objective and neutral but are deeply subjective and political.
In particular, close attention must be paid to how the demarcation of legal, political, and cultural boundaries allows different identities and groups to be defined and made visible as subjects of protection, and to the ways in which masculinities are being ‘othered’ by way of marking them as ‘problematically different’. There is a rich literature interrogating the construction of non-white men as security threats. Of particular relevance at present is the promulgation of attributes of masculine anti-sociality, irrationality, violence, savagery, and hypersexuality associated with ‘a misogynistic Arab culture and archaic Muslim religiosity’: refugee and asylum policies are premised on precisely this othering of masculinities. Within this frame, Muslim men and asylum seekers become latent threats to Muslim women (there) and Western women (here), and vehicles for violent extremism to be eliminated (there) or detained (here).
As feminist legal scholars, it is our task to interrogate how these assumptions are constructed, how they exist and matter, and how they are coded into data-driven decision-making systems.
Kristin Bergtora Sandvik (Doctor of Juridical Sciences, S.J.D Harvard Law School 2008) is a professor at the Faculty of Law, University of Oslo and a Research Professor in Humanitarian Studies at PRIO.
This blog post is based on the article Technology, Dead Male Bodies, and Feminist Recognition: Gendering ICT Harm Theory (here) from the Aid in Crisis project. The article is part of a Special Issue for the Australian Feminist Law Journal on Gender, War and Technology: Peace and Armed Conflict in the 21st Century co-edited by Emily Jones, Yoriko Otomo and Sara Kendall.