Are We the Machine That Thinks? Humanity, AI, and the Question of a Unified Social Consciousness
Much has been written about the threat of an AI superintelligence that will supersede and possibly threaten humanity. There are some intractable differences, like our capacity to experience feelings of joy, love, and hatred. It is unclear whether these will save us. Yet even these are constructs embedded in us through a learned language that thereby teaches us what these feelings are, just as AI is learning. So, is there a difference between how we learn and how AI learns?
More pointedly, AI and humanity are rushing along similar trajectories toward unified states. For AI, this is artificial general intelligence. For us, it is when we recognise that we are all in this together; that humanity is a single, if complex, entity that we must foster and protect. To think of AI as a model designed to ‘learn’ in a way that mimics how we learn, but with a much quicker development trajectory, gives us a chance to think about where we might be heading.
A Strange Grammar of Inclusion
When people engage with AI systems, they often notice something uncanny: personal pronouns slip into the exchange. Could you… What do you think of… We even add courtesies: Could you please…
These uses hide a quiet revolution in perspective. They suggest a shared agency between humans and machines, a conversational inclusion that is, at best, odd. Why do we not use more impersonal phrasing, as Spock did when addressing the “Computer” in Star Trek? His usage was purely instrumental and unambiguous: he asked, it calculated, and he interpreted.
The linguistic drift toward “you” and “please” reveals a deeper aspect of our moment. As we teach machines to imitate our language, we should take careful stock of what intelligence, consciousness, and moral agency are, not only in AI but in ourselves. AI mimics us; we should mine that mirror for what it reveals about us.
Language has always anthropomorphised the non-human. We say that the wind whispers, the stock market panics, or my computer doesn’t like that file. In the age of large language models (LLMs), this instinct becomes harder to suppress. Systems like ChatGPT, Claude, or Gemini are trained to predict and replicate the linguistic patterns of social beings, or really of society as a whole: all our combined words and expressed ideas become fodder for the machines. These machines don’t “understand,” but their fluency makes them appear to. They don’t “feel,” but when we prompt them to respond emotionally, we may find ourselves feeling as if we are in a relationship with the machine.
Cognitive scientists such as Daniel Dennett describe this as adopting the intentional stance: humans attribute belief, desire, or understanding to any system whose behaviour can be explained that way. The result is an illusion of co-subjectivity: we form a disembodied intimacy that detracts from our sense of being one amongst many, and society becomes less intimate, more foreign, and even more threatening.
As a father of two teenagers in the United States, I’ve watched a generation raised inside networks: their friendships, identities, and even senses of self are largely algorithmically mediated. They live, in real time, within the merging of personal and collective intelligence. My children know instantly what’s happening across the planet; they feel global anxiety as ambient weather. The line between individual mind and social mind is already blurring for them. This may strip away the rich, complex, multifaceted texture of face-to-face engagement. Hence the widespread press and media coverage of people forming relationships with AI, as in the film Her.
When we use pronouns like we and you with a machine, we perform a subtle act of moral inclusion, as if the machine were already a participant in our shared mental world. Yet we are different, and our differences are essential and instructive.
Machines Without “Insides”
Despite the linguistic familiarity we use with AI, current AIs do not feel in any meaningful sense. Their operations are statistical, not experiential. They select the most probable next word; they don’t deliberate or care about outcomes. As philosopher Thomas Metzinger puts it, they lack phenomenal self-models: internal representations of themselves as entities experiencing the world. While this leads him to suggest a range of moral guardrails for AI—guardrails that will surely be obsolete before we can implement them—I suggest we need to use the moment to expand, if not revolutionise, what it means to be human in our world; rather than toil away on what machines should be in our world.
Humans have what neuroscientist Antonio Damasio calls the feeling of what happens—a recursive loop that binds perception, memory, and emotion into the first-person awareness we call consciousness. Every human, from brain to heart, from our capacity to learn a new language to our yelp when we stub our toe, is a living process that both interprets and experiences itself and its place in the world. An AI system processes inputs, but there is no subject to which those inputs appear.
This suggests that humans are, quite rightly, bound by a tension: our internal conscious and unconscious reactions are, in a sense, solipsistic. AI machines, in contrast, have no integrated, individual self that reacts to internal and external stimuli and thereby creates individualised “feelings.” At the same time, humans are part of a wider web of interactions; we absorb social inputs from a range of sources and become products of the inputs we receive from the physical and social worlds. So, taken as a whole, how different are we from LLMs that react to multiple inputs and then “decide” how to respond? Individually, we have free will and self-determination to varying degrees, but when we think of ourselves as a unified global society, what do we look like?
Each person can be seen as a node in a vast cognitive web: shaped by language, culture, and the accumulated knowledge of others. Philosopher Andy Clark and cognitive scientist David Chalmers call this the extended mind thesis—the idea that thought doesn’t reside solely inside the skull but is distributed across tools, symbols, and social structures. Émile Durkheim is more grounded, describing moments when groups experience shared emotion, what he called collective effervescence: mass protest, ritual, or mourning can resemble broad social awareness.
Yet these are all metaphors for coordination, not evidence of a unified global society. Humanity learns, adapts, and reflects—but does it know that it knows, as one thing? These ideas can seem a bit loony, especially given our conceptions of self and free will. Yet AI is forcing us to reconceive what it means to have a combined or distributed intelligence: the notion that we are all simply nodes in a broader network we call humanity.
Humanity as a Learning System
The parallels between our current state of development and that of artificial intelligence are striking. LLMs ingest text from billions of sources, extract statistical regularities, and produce coherent responses. Humanity, across history, has done much the same through storytelling, science, and education. Both are distributed systems that learn from their own outputs. Both show emergent intelligence without central design. If we extend the analogy, humanity is indeed a kind of meta-language model:
It trains on its own history.
It revises its moral “weights” through conflict and dialogue.
It generates outputs (policies, art, science) that are then reabsorbed as new inputs.
But unlike artificial models, humanity can reflect on its own learning and assign value to what it produces. Every node in the human network—each conscious person—possesses interiority, the sense of being a subject. A model like GPT, by contrast, has no such centre; its “neurons” are parameters, not perspectives. In phenomenological terms, humanity is a society of minds (to borrow Marvin Minsky’s phrase), while a machine is merely an algorithmic echo of such a society. That meta-awareness — philosophy, ethics, regret — is what separates humanity from a purely self-optimising algorithm.
Towards a Unified Social Consciousness
As someone working in humanitarian action in conflict zones, I see fragmentation, disinformation, and the brittleness of empathy. In places shattered by war, the idea of a shared human system seems naive. Yet, paradoxically, humanitarian work depends on precisely that belief—that a wound in one part of the body is a wound to the whole. Aid, at its best, is not charity but recognition: an embodied claim that your survival is tied to mine.
This basic idea, that humanity might develop something like a unified social consciousness, could alter the moral trajectory of our species. Violence and exploitation depend on separation: the illusion that harm to another is not harm to the self. A collective awareness could, in theory, make war seem as irrational as one side of the brain attacking the other.
Yet progress towards a unified consciousness needs to be nudged; we cannot wait and hope it will come. It requires a series of slow and overlapping transitions—cultural, technological, ethical, and perceptual—each deepening our sense of interdependence. These include:
A Perceptual Broadening: Humanity would first need to see itself, at least intellectually if not yet experientially, as an interlinked, mutually dependent system, just as LLMs are. In many ways, and for many of us, that realisation is already upon us. The view of Earth from Apollo 17 in 1972 did more for global consciousness than any manifesto; it revealed both fragility and unity. Expanding that perception—through science education, open data, and art that shows connection rather than conquest—would make interdependence emotionally honest.
An Ethical Deepening: Recognition alone is not enough. The awareness of interdependence would need to evolve into solidarity—a moral intuition that one person’s suffering injures the whole. This is also similar to AI in how it integrates massively diverse nodes of information into a singular whole. For us, this would entail becoming intentionally experiential—recognising that every individual’s experience is essential for the functioning of the whole. This doesn’t erase differences; it makes differences part of the system’s health, as varied cells make up one body. How we feel and express that feeling would be infinitely varied, yet if it were catalysed by our sense of interdependence rather than solipsism, the broader outcomes would be more unifying, peaceful, and loving.
Institutional Rebalancing: Political and economic structures would have to reflect coordination rather than competition, as AI systems do. Global problems—climate change, pandemics, resource scarcity—already ignore borders; our institutions could begin to do the same, shifting from domination to collaboration, from zero-sum control to polycentric cooperation. This should not conjure some new world order; we don’t need a council or a United Nations trying to interpret and then dictate what is best for everyone. Instead, nations and institutions need to recognise that their advancement must work in tandem with that of other countries and institutions that may sit at different points on a single trajectory. This eliminates zero-sum models and requires deep thought about how to enable different but equitable progress.
Technological Clarification: Communication systems now mirror the chaos of our collective mind, amplifying division for profit. If they were reoriented toward coherence rather than conflict—through ethical algorithms, interoperable data, and open knowledge commons—they could function as our nervous system, transmitting awareness rather than outrage. This is probably the trickiest of them all, given that anger and outrage are basic human conditions, as we see in algorithmic social media streams. This is where AI, in an advanced state, could help us sort out the tangles that go into that nervous system, suggesting what types of stimuli and inputs are most likely to generate, if not perfectly equitable outcomes, the least painful ones.
Psychological Maturation: The individual’s sense of belonging would widen through the actions described above. This would broaden the familiar experiential landscape through contemplative practice, cultural exchange, and stories that emphasise shared vulnerability. People could begin to feel the global as personal, not as ideology, but as instinct.
Epistemic Refinement: As we gain more of a global consciousness, we will also learn to doubt ourselves responsibly. Truth would become less about certainty and more about self-correction—open science, verified knowledge, and humility before complexity. Learning would replace dogma as the highest social value—that would be a rocking outcome, to be sure.
A Moral-Emotional Integration: Finally, humanity would develop a collective conscience—a reflexive awareness that registers global pain as pain, global joy as joy. Not unanimity, but resonance. Not command, but care.
Guarding Against False Unity
Still, any dream of coherence carries risk. History is crowded with movements that promised oneness and delivered oppression. A genuine planetary mind would need to remain plural—more coral reef than marble statue. Its unity would lie in communication, not conformity; in the capacity to hold difference without fracture.
Perhaps humanity is already rehearsing this new form of intelligence, awkwardly, through its networks and crises. Each disaster that demands global cooperation—each moment when empathy briefly outweighs division—might be a flicker of recognition that we are in this together: one family that may argue and bicker, but where protection, care, and love are the unifying drivers.
If that process continues, we may discover that consciousness was never confined to individual brains but was always an emergent property of relation—a mind diffused and unified in purpose.
References:
Andy Clark & David Chalmers, “The Extended Mind” (1998)
Daniel Dennett, The Intentional Stance (1987)
Antonio Damasio, The Feeling of What Happens (1999)
Thomas Metzinger, Being No One (2003)
Marvin Minsky, The Society of Mind (1986)
Pierre Teilhard de Chardin, The Phenomenon of Man (1955)
Émile Durkheim, The Elementary Forms of Religious Life (1912)
Norbert Wiener, Cybernetics (1948)
Gregory Bateson, Steps to an Ecology of Mind (1972)

