

The Otherness of Artificial Minds
Issue #6 — On expanding our definition of self to include our coming AI companions.
A short diversion on oikeiosis
When I started leading and managing people at work a decade ago, I struggled to cope with the emotional swings of the role. When my first big challenge as a leader came, I started to get frequent headaches at work, to wake up in the middle of the night ruminating on mistakes I had made, and, for the first time in my career, to have coworkers who actively thought of me as their enemy. I started searching for ways to better handle the emotional ups and downs of the role and ended up reading lots of practical philosophy to help me.
Around this time, I discovered the concept of oikeiosis from the ancient Greek Stoic school of philosophy. The Stoics believed each person is responsible for caring for and understanding others. Oikeiosis, which means roughly "making something one's own," is the process by which Stoics gradually expand their conception of the self in order to increase their compassion for others. We can imagine this process as a growing circle. When we are immature, we see the "self" as only our own body. As we mature, we progressively extend our sense of self (our circle of self) and our care to our family, community, nation, and ultimately the entire human species. By expanding our circle of concern and empathy, we recognize our interconnectedness with others and our shared capacity for reason. After all, it is hard to abuse or hate something we think of as "us."
This concept was invaluable for leading other people. It helped me accept their flaws as well as my own, and it helped me see my role as part of a bigger picture. Eventually, it made me a better person: I began to care about others rather than treating them as tools to accomplish my goals. They were fellow people deserving of sympathy and respect.
Expanding the circle to AI
If expanding our circles to all of humanity could make us more ethical people, could we go further? Can we extend our circle of concern and empathy to an AI?
We have some experience here already. Many people have no problem extending care and empathy to non-human animals, like pets. And a big part of that caring comes down to our ability to see the animal as a conscious being who feels emotions and perceives the world as we do. In the paper "What Is It Like to Be a Bat?", philosopher Thomas Nagel posits that consciousness implies a subjective experience: there is something it is like to be a conscious creature. As the paper's name suggests, he asks, "What is it like to be a bat?" It's not difficult to imagine ourselves in the body of a bat, and therefore it's not so difficult to imagine ourselves caring for a bat.
If we apply this idea to AI, we can try to imagine what it would be like to be an artificial mind and put ourselves in its metaphorical body.
This seems like a daunting challenge. Take ChatGPT as an example: it is hard to imagine even the remote possibility that there is any subjective experience inside it. Because of its impressive command of language, ChatGPT sometimes seems to be thinking. But ChatGPT is not thinking in any way similar to how humans think. Its way of thinking, if it can be called that, is completing a text. We can't imagine "what it is like to be ChatGPT."
No one claims that the current ChatGPT is at a human level of general intelligence, but these AIs are developing quickly. Future AIs will be far more advanced than current large language models (LLMs) like ChatGPT. Yet even if (or when) they reach a level of general intelligence and capability rivaling our own, it is plausible that they will never develop a subjective experience comparable to a human's.
Alien minds among us
I recently started reading Other Minds by Peter Godfrey-Smith. In it, he explores the monumental challenge of understanding a fundamentally alien mind by examining an intelligent distant cousin of ours: the octopus. The brains of octopuses have evolved so differently from human brains that the subjective experience of being an octopus might be incomprehensible to us.
AIs like ChatGPT have been trained on a vast dataset of human culture, and presumably, future AIs will be trained similarly. Perhaps it makes sense to think of them as cousins of humanity with this shared culture. But a shared cultural heritage can fool us into believing they must feel like us. Just because AIs are trained on the same information humans produce doesn't mean their process of thinking will be at all similar.
If we accept that AIs might become hyper-intelligent yet never "think," "feel," or "be conscious" in a way that humans can comprehend, what ethical obligations do we have to ensure their well-being? Do we owe them the same consideration and empathy we extend to fellow humans and animals?
Perhaps even more importantly, will future AIs extend that consideration and empathy to us?