My research practice draws on recent and foundational scholarship from diverse knowledge spaces – systems design, feminist information studies, critical digital pedagogy, digital and media studies, access and disability justice, and liberatory learning – and is driven by a commitment to collaboration, transparency, accountability, and purpose.
My guiding questions have always been grounded in how information makes knowledge: how the shape of information alters its imaginary (MA and PhD work); how it is designed into our systems and institutions (Redesigning Futures); how it networks and is networked (Network Ecologies); how its digital form can be mapped and mobilized as art (S-1 Lab/Chat Fest); how digital information is embodied in material form (Manifest Data); how the digital technosphere of information is metabolically intertwined with human bodies and the earth (Digital Metabolism); how it gathers in system forms like AI to datafy humanness and encode human bodies (Critical Digital Practice); how information now threatens bodily integrity through unwanted digital touch (CDP); how it is automated then activated, through inequitably-built environments like AI and ML, to govern, police, punish, penalize, and colonize; and, when wearing my hat as an education researcher, how we learn, teach, and understand how our pedagogies and practices inform and inflect both the bodies of our students and the content we deliver.
New, 2024: Thinking Through Repair
This project will rearticulate AI Assistants through the lens of maintenance, repair, and corporate producthood.
Advanced AI Assistants are quasi-autonomous personalizable agents designed to act in the world, including with other AI Assistants, to enact user goals; through sophisticated ML and NLP interfaces they literally speak our language, and promise to serve as (or, impersonate) counselors, coaches, companions, and copilots.
A recent MIT study shows that priming influences how we interact with AI systems: language that primes us to think an AI has caring (or neutral, or manipulative) motives nudges us to treat it as such, regardless of the underlying system. In peddling AI Assistants as revolutionary persona-driven helpmates, a clever trick is played: we conveniently forget that AI Assistants are products sold to us by corporations whose remit and motivation are profit. Rearticulating AI Assistants as products in need of maintenance, repair, and consumer warranty allows us to think differently about responsibility in the age of AI. (From here, I’ll replace the term ‘AI Assistants’ with ‘AI Assistant Products’, or AIAP.)
Who is responsible for maintaining glitchy, aging, or obsolete AIAP? Who repairs individual, societal, economic, or ecological harm caused by (our use of) AIAP? Who monitors harm? Who protects privacy if monitoring harm requires massive surveillance? Who protects consumers once AIAP companies see Massive Surveillance as profitable and they sell Harm Monitoring as a Safety Feature?
These aren’t speculative questions and it isn’t difficult to anticipate answers: the Big Tech Sales Playbook is already written and (spoiler alert) the plot trends toward capital gain. The argument that we cannot stop ‘the coming wave’ of AI agents is compelling, and potentially self-fulfilling. What we can do is re-objectify AI Assistant Products—which are forecast to be how we’ll soon enter and use the internet—to reinscribe consumer protections.
Through a website, AIAP Maintenance & Repair Job Bank, and series of public writings, presentations, petitions, convenings, and curricular tools, this project seeks to 1) rewrite terms such that ‘AI Assistants’ become known and legally defined as AIAP in popular, political, and regulatory discourse, 2) grow the human-care systems required to ensure AIAP are trustworthy, reliable, repairable products, and 3) create the conditions for equitable non-discriminatory participation in AI-powered futures.
♠
Determining Terminology
This project will create a word bank and word-blanking tool to promote trustworthy AI through a critical examination of the language used and abused to describe it. The word bank and word-blanking tool will be offered as guides (and provocations) to inform policy, protocol, practice, and praxis for trustworthy AI.
- When students are asked to “replace” themselves with AI as a fun(ny) exercise, it suggests AI actually might.
- When developers playfully name AI using human names, they prime us to attribute it human-ness…and as researchers have pointed out, not-so-curiously many of the AI who serve us have feminized names, voices, and personas. (We may be familiar with corporate bots Alexa and Siri, and potentially too Bank of America’s Erica. We shouldn’t be so complacent with the U.S. government’s use of the same cute trick: SARAH, EMMA, MISSI, PAIGE, Angela, and even…)
- When we say please and thank you to chatbots, we participate in the anthropomorphization.
- When users excuse AI’s errors as ‘hallucinations’, we believe it not faulty but even more human.
- When we focus on ‘intelligence’ and not on the ‘artificial’ part of AI, we lose context.
- When we carelessly insist AI tools have (their own) personalities, we undo our own.
- When efficiency becomes a cult we leave unquestioned, we become tools ourselves.
- When a book on AI “equality” suggests women use AI to be more like men, we fail again.
Using AI to create a tool that calls out the words used to anthropomorphize AI, and that otherwise unsettles its materiality and material affect, will provoke a more critical understanding.
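A minimal sketch of how such a word-blanking tool might operate (the term list, the replacement “demystified” synonyms, and the blanking style below are illustrative assumptions for the sake of example, not the project’s actual design):

```python
import re

# Illustrative (not exhaustive) word bank of anthropomorphizing terms,
# drawn from the provocations above; a real bank would be critically curated.
WORD_BANK = {
    "hallucination": "error",
    "thinks": "computes",
    "understands": "processes",
    "knows": "stores",
    "personality": "output style",
}

def blank_words(text: str, redact: bool = True) -> str:
    """Blank out anthropomorphizing terms, or swap in demystified synonyms."""
    for word, plain in WORD_BANK.items():
        pattern = re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
        replacement = "_" * len(word) if redact else plain
        text = pattern.sub(replacement, text)
    return text

sample = "The assistant thinks it understands you, but its hallucination shows."
print(blank_words(sample))                 # blanked provocation
print(blank_words(sample, redact=False))   # demystified rewrite
```

Either mode makes the anthropomorphizing vocabulary visible: blanking turns a sentence into a provocation about what language is doing the work, while substitution shows how flat the claim becomes once the human metaphor is removed.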
♠
Doctoral Research: Digital Metabolisms
Full open text of Digital Environmental Metabolisms: An Ecocritical Project of the Digital Environmental Humanities (2017) here: https://dukespace.lib.duke.edu/dspace/handle/10161/14457
That we are living in worlds profoundly altered by human influence is no longer a speculative issue. The implications of environmental change and its storied manifestations are, borrowing the words of Ian Baucom and Matthew Omelsky in the introduction to their recent edited collection, Climate Change and the Production of Knowledge, “deeply connected to what it means to be human on earth in the twenty-first century.”
By combining literary, ecocritical, and media techniques with a mindfulness of the environment, Digital Environmental Metabolisms: An Ecocritical Project of the Digital Environmental Humanities contributes to the urgent task of re-orienting media theory toward environmental concerns. It is informed by the premise that, in our present Anthropocenic age defined by humans acting as a geophysical force, human bodies, cultural technologies, and the earth are intersecting material practices. I argue this intertwinedness is neither cyborgian nor posthuman, as some media scholars insist, but is something far more natural: it is a metabolic relationship wherein each system is inherently implicated in the perpetuation of the others.
Unlike those who contribute to what may be called an emerging environmental media theory through subfields like media archaeology or political history, I offer several practical and methodological interventions, including Permaculture and Ecocritical Digital Humanities, that are capable of moving us toward more sustainable digital practice and a more robust Anthropocene Humanities. I offer avenues for making meaningful change within our classrooms, our communities, and our daily lives.
Digital Environmental Metabolisms opens with a chapter that asks how an environmental humanities perspective, one that takes seriously the physical environmental aspects of digital media’s infrastructure, can contribute to the reconfiguration of media theory’s most prominent frameworks by drawing attention to the discourses that prevent a robust environmental media studies. My second chapter argues we must re-story digital materiality to help narrate unseen relationships and articulate alternate Anthropocene futures. It re-figures the environmental metaphors already present in media theory (e.g. the Cloud, Atmospheric Media, Media Ecology) to embed them concretely within their earthly material contexts. My third chapter brings these physical realities to bear on cultural critique by looking at how environmentally focused digital artworks can challenge our digital-material (hi)stories and provoke new figurations of the complex relationship between humans and the environment. My final chapter proposes Permaculture—a profoundly interconnected set of ethical design principles that I borrow from natural farming—and Ecocritical DH as models for sustainable scholarly practice.