Discussions on how intersectional feminist theory can help inform AI practices and highlight issues in contemporary technological spaces
As part of Peckham Digital Festival 2024, the first “Feminist AI Manifesto Writing” workshop took place. In an era where technology increasingly shapes our lives and our societies, I wanted to run this workshop to help us address how these digital tools are developed and who they serve. Technology often feels omnipresent; but how it is made, who benefits, and who does not, are still important questions to unpack, research and critique.
I began the workshop by giving a brief introduction to what I’ve been thinking about in this area. In particular, this introduction focused on some key themes:
* Where is the ‘Intelligence’ in AI?
* How does using disembodied and unsituated datasets lead to bias in AI systems?
* How can we make demands of software developers and larger corporations that would enact a different landscape for AI?
I’d like to outline some of the thoughts that guided the workshop, particularly around where we might need to further unpack the interdependence of systems and selves in contemporary AI practices.
When discussing artificial intelligence, ideas around what ‘intelligence’ might mean are often predicated on a particular view of intelligence itself: a viewpoint that can and does easily exclude a range of people, whether through exclusion from the datasets that AI algorithms are trained on, or exclusion from the design of the systems themselves (current global estimates are that around 22% of AI professionals are female (Howard and Isbell, 2020), and around 67% of tenured professors in computer science are white (Zhang et al., 2021)).
The word ‘intelligence’ has a problematic history, notably in the development of measurements of intelligence (IQ tests), which was wrapped up in the racist ideology of the eugenics movement (Reddy, 2007). Eugenic validation of existing race and class hierarchies functioned tautologically: privileged ethnic groups were considered innately talented and biologically advanced. Throughout the early 1900s, eugenicists attempted to devise supposedly objective methods of measuring and quantifying intelligence to substantiate these claims. They struggled for years to produce compelling results, until the advent of Alfred Binet's intelligence scale in 1905 gave rise to standardised intelligence testing, colloquially known as IQ testing. This so-called ‘objective’ methodology was then used as a way to segregate the ‘feeble-minded’.
Furthermore, the role of objectivity in epistemological relations between educational knowledge and the object of study serves, in part, to conceal existing power structures. Intelligence often relies on knowing some ‘objective’ truth; however, the boundaries between objective and subjective truth are not always clear. Academic and activist feminist inquiry has long tried to come to terms with what is meant by this seemingly inescapable ‘objectivism’, and has often concluded that objectivity is far fuzzier, more partial and more situated than it first appears.
An often-cited definition of intelligence, from the Oxford Languages dictionary, describes it as “the ability to acquire and apply knowledge and skills”. If much of what we know about intelligence exists under this guise of objectivity, then what can we consider to be “knowledge and skills”, and what does it mean to “acquire and apply” them? Intelligence, so defined, generally implies a separation of mind from body. The dictionary definition already illustrates this: as if the skills of bodily action sit outside the knowledge acquired in the mind.
However, the separation between mind and body is not as clear-cut as this definition might suggest. In reality, intelligence is deeply embodied, meaning that our cognitive processes are intertwined with our physical experiences and actions. The idea that knowledge and skills can be isolated in the mind, distinct from the body's influence, overlooks how our physical interactions with the world shape our understanding and abilities. To truly grasp the concept of intelligence, we must consider it as an integrated function of both mind and body, where acquiring and applying knowledge cannot be disentangled from the physical contexts in which they occur.
Many artificial intelligence algorithms deal with data in some form or another. Data is the raw material from which AI systems derive patterns, make decisions, and learn. Without data, which is most often about or made by humans, these systems could not learn, and thus could not gain the knowledge required to function.
Although these systems are informed by data, the knowledge they obtain is, in most cases, explicit knowledge (easily articulated, codified and transferable) rather than tacit knowledge (personal, context-specific, hard to formalise).
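To make this distinction concrete, here is a minimal, hypothetical sketch in Python (the fields and values are my own illustration, not drawn from any real dataset) of what a typical training example captures, and what it leaves out:

```python
# A hypothetical training example, as most AI pipelines would see it:
# only explicit knowledge survives, codified into fields and labels.
explicit_example = {
    "image_id": 48213,
    "label": "kitchen",             # a single codified category
    "caption": "a person cooking",  # a short, articulable description
}

# The tacit knowledge surrounding the same moment never makes it in:
# who is cooking and for whom, the learned skill in their hands, the
# family history behind the recipe. None of this is easily articulated
# as a field, so none of it reaches the model.
```

Everything such a model will ever ‘know’ about this scene is what could be codified; the personal and contextual remainder is simply absent.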
Donna Haraway coined the term ‘situated knowledges’ in her 1988 essay “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective”. The term was born of a specific situation: “in scientific and technological, late-industrial, militarised, racist, and male-dominant societies… in the belly of the monster, in the United States in the late 1980s”. Haraway conceives of knowers as situated in particular relations to what is known and to other knowers: what is known, and how it is known, reflects the situation and perspective of the knower.
If we know that artificial intelligence is dependent on data, and that data is commonly understood as “factual information (such as measurements or statistics) used as a basis for reasoning, discussion, or calculation”, herein lies a problem: understanding data as fact, or as zeros and ones, flattens their constructed, situated, and timely aspects. Consequently, the concept of data “remains categorically different from—and in a sense opposed to—the very idea of process” (Markham, 2013). Yet even the most immediate data collection is the result of decisions made by researchers, computer analysts, and platform stakeholders.
If, instead, we do not see data as disembodied and placeless, and choose to adopt a feminist and situated approach: what might such a situated dataset for AI look like?
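One tentative answer, sketched below, is a record that carries its situation with it: who collected it and from what position, when and under what conditions, with whose consent, and with which absences acknowledged. This is a speculative illustration in Python, not an existing standard; every field name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SituatedRecord:
    """A speculative sketch of a 'situated' dataset record, where a
    datum travels together with the context of its own making."""
    content: str              # the datum itself (e.g. a transcript path)
    collected_by: str         # who gathered it, and from what position
    collected_when: str       # when: data is timely, not timeless
    collection_context: str   # where and under what conditions
    consent: bool             # did the subject agree to this use?
    known_gaps: list          # who or what is absent from this record

record = SituatedRecord(
    content="interview_042.txt",
    collected_by="community researcher, South London",
    collected_when="2024-06",
    collection_context="public workshop, transcript reviewed by participants",
    consent=True,
    known_gaps=["no participants over 65", "English speakers only"],
)
```

The particular fields matter less than the principle: partiality and provenance become first-class parts of the data, rather than metadata discarded before training.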
There have been plenty of dystopian and weird use cases for AI in the past few years. From deepfake pornography to AI systems that reinforce existing biases to the misuse of personal data, there are many growing concerns, and they disproportionately affect marginalised groups. GitHub user daviddao has collated some of these in their repository ‘Awful AI’, to give an overview of these dystopian practices.
So you might agree that the current world we are building with AI isn’t an ideal one. The people currently controlling the progress of these systems are often those with capitalist ideologies. These systems are trained on people but are not for the people. They emulate us, and in some cases now even replace us. Yet they depend on us to produce the data from which they learn.
AI now functions as a mirror for contemporary societal issues. If most current AI techniques are fed corpora of contemporary data (e.g. images, music) scraped from the internet, then perhaps we can think of them as a mapping of our contemporary and somewhat broken society. The internet can be treated as a metaphor for, or manifestation of, what Jung calls the “collective unconscious” (Jung, 2014). Common archetypes have begun to emerge through the use of artificial intelligence algorithms. These archetypes can veer towards caricature: strong visual figures like Donald Trump, Mario, or Ronald McDonald are vivid and clear for everyone, while abstract and esoteric concepts like “gnosis” or “democracy” are less so. To open new vistas onto alternative paradigms of society, the datasets and training should be emblematic of the world we wish to create, not the one we wish to grow beyond.
Movements such as Explainable AI, Wilding AI, Black in AI and Feminist AI are looking to reclaim AI for the people. My hope for this project is to join them by creating a manifesto for more equitable uses of AI, moving away from the current trajectory.
Writing a Feminist AI Manifesto might be useful for this task, as it could help us articulate collective demands on developers and larger corporations, and sketch the more equitable landscape for AI we want to see.
Reflecting on the workshop, one thing I’d like to look into more is how to define a code of conduct for a workshop like this. When discussing feminist principles in the context of AI and society, topics can easily venture into territory that could be distressing for participants. Our own lived experiences of marginalisation and oppression vary from person to person, and may depend on many intersecting factors.