I found myself seated at the front of a lecture hall at Ithaca College in New York. It was March 30, 2023.
Philosopher Craig Duncan and Raza Rumi, Director of the Park Center for Independent Media (PCIM), joined me for a roundtable discussion on AI as part of the Finger Lakes Environmental Film Festival (FLEFF). The session was co-hosted by PCIM and The Edge, to which both Craig and I contribute.
The lively panel engaged themes of responsibility, the sharing of wealth, social justice, the role of people, environmental costs, and generosity. Faculty, community members, and students in attendance raised productive questions and shared insightful points of connection.
Three weeks later, I find myself still thinking about the ideas that the AI panel opened up. Three takeaways continue to percolate from that conversation.
1. “A” is for…
Discussions around AI automatically register the “A” to mean “Artificial.”
According to novelist and essayist Jeanette Winterson, this need not be the case. For her, this old way of thinking is useless.
In her 2022 TED Talk, she offers “alternative” as an intervention into the term “artificial.” She advocates for a different nomenclature: Alternative Intelligence.
Winterson argues, “We need some alternative intelligence.”
She explains that alternative intelligence will not have to operate according to the binaries that have historically informed our socio-cultural and political biases. Yes or no. This or that. Us or them. Boy or girl. Black or white. Straight or gay.
Alternative intelligence does not need to perpetuate the structures of oppression, alienation, and hate that inform many modes of thinking.
Winterson asserts that we have had to recognize ourselves in the mirror of what artificial intelligence software has produced.
She suggests that we better understand how we have previously used AI because it will inform how we work with and through AI as we move forward.
Her ideas center on a shift in framing and conceptualization, from artificial to alternative.
2. Thinking with and learning together
If Jeanette Winterson took issue with the “A” of AI, I want to interrogate the question of the “I.”
Instead of “intelligence,” I propose that we focus on meaning-making, or better yet, “sense-making.” Sense-making is a way of interpreting what intelligence is and how it works.
Sense-making is processual, as the -ing suffix indicates. And it is not principally the domain of the human.
For American pragmatist Charles Sanders Peirce, sense-making — or semiosis — is more than a matter of interpreting signs, whatever their representational status: an icon, a picture, a photograph, a fingerprint, a word. Rather, sense-making has biological and (neuro)physiological underpinnings and consequences.
Peirce identifies three fundamental categories. He writes about Firstness (quality of feeling), Secondness (reaction, response), and Thirdness (habit, convention).
These categories form a triad governed by general laws: a stimulus provokes a response that is processed, more or less completely, by some receptor according to regularity or convention.
We could be speaking of human communications or bacterial sensing because these are processes rather than fixed forms.
Whether for humans or bacteria, sense-making is not determinate. It is open-ended and subject to change. While semiotic processes tend to adhere to general laws, they do not do so invariably. For example, “a” might very well shift from meaning “artificial” to “alternative.” This is how Peirce explains habit change.
The challenge is to imagine how we apply semiotic logic to a revised interpretation of “intelligence” so that AI unfolds differently.
Then, we might not position ourselves as separate from our AI assistants. They would not become mere tools to produce knowledge as output. Instead, we would think and learn with them.
I think Winterson might agree. But what does this look like?
Quantum physicist and feminist theorist Karen Barad provides a way to address this question.
In “On Touching—the Inhuman That Therefore I Am,” Barad shares a concept that features the letter “i” that she associates with theoretical and experimental work committed to “making a better world.”
The concept is “in touch.” Being in touch assumes being “responsible and responsive to the world’s patternings and murmurings.” Patternings and murmurings are precisely what artificial intelligence encodes and delivers back in gestures of efficiency and rationality.
But alternative intelligence challenges us to set aside investments in readily packaged, calculable outputs. It invites us, in Barad’s words, “to be lured by curiosity, surprise, and wonder.”
It pursues a different kind of attention and commitment. AI can activate what Barad calls “response-ability,” which is not an economic deliverable but instead a set of relations.
For Barad, being in touch is a catalyst for “response-ability.” It evokes the idea of “hospitality” as theorized by Jacques Derrida. Hospitality moves away from the transactional thinking of indebtedness toward thinking that embraces giving-receiving.
In the ever-evolving and transforming world of AI, we need a new set of questions.
How do we and AI collaborate according to a logic of hospitality?
How do we and AI enact new forms of reciprocity?
Heidi Rae Cooley is an associate professor in the School of Arts, Humanities and Technology at the University of Texas at Dallas. She is the author of “Finding Augusta: Habits of Mobility and Governance in the Digital Era” (2014), which earned the 2015 Anne Friedberg Innovative Scholarship award from the Society for Cinema and Media Studies. She is a founding member and associate editor of Interactive Film and Media Journal and co-director of the Public Interactives Research Lab (PIRL).