Phenomenology of the Artificial: Toward Enactive, Embodied, and Distributed Intelligence
Modern generative AI is a curiously ambivalent beast. On one hand, recent AI systems, such as large language models (LLMs), display astonishing capabilities in generating text and images. On the other hand, these same systems often reveal a troubling lack of genuine understanding, confidently producing falsehoods (“hallucinations”) or failing at commonsense reasoning. The root cause is becoming clear: mainstream AI remains largely disembodied and representational, manipulating data and symbols without living in the world. In this article, I argue that we need a new paradigm of AI – one grounded in phenomenology of experience, drawing on enactive and distributed cognition. This Phenomenological Experiential Intelligence (PEI) framework builds on an initial vision we presented in 2019 and points the way toward AI that is embodied, context-aware, and ethically aligned with human values. The tone is one of optimism: by redesigning AI as relational and world-involving, we can address current AI limitations and forge a more human-centered technology for the future.
The Problem with Disembodied AI: Hallucinations of a Representational Mind
Current AI methods, whether symbolic logic engines or deep learning networks, largely follow a representational theory of mind. They assume intelligence involves building internal models or representations of the external world and manipulating those models to generate behavior. This approach has yielded some successes, but it also shows fundamental cracks. Classic symbolic AI proved brittle, unable to handle the nuance and ambiguity of real-world situations. Today’s data-driven AI systems, from image classifiers to GPT-style LLMs, may learn statistical associations, but they often lack any grounding in physical reality or lived experience. As a result, they can say or do things that no truly understanding agent would.
Consider the phenomenon of LLM “hallucinations.” A model like GPT-4 can output a fluent paragraph that looks confident and coherent, yet the content may be nonsensical or factually wrong. Why? Because such a model has no embodiment or sensorimotor feedback loop to tether its words to reality. It’s been described as a “disembodied oracle” that processes text in a vacuum of pure pattern recognition. Without sensory grounding or the ability to act and see the consequences, the model has no way to know if “the sky is green” is false – it might produce such a claim if it is statistically probable. Researchers note that a lack of embodiment leads LLMs to miss basic commonsense constraints. They simulate knowledge but don’t truly experience anything. In the words of Judea Pearl, today’s deep learning achievements “amount to just curve fitting,” lacking an understanding of why things happen. All of this underscores a key limitation of representational AI. When an AI’s “mind” is locked in its head (or data center), disconnected from the world, it will inevitably misinterpret or hallucinate.
From an ethical perspective, a disembodied AI can also be a dangerous wildcard. If an AI system has no lived context or bodily stake in the world, it may make decisions that are technically logical yet humanly disastrous, because it doesn’t feel the consequences. The AI may optimize a metric while “ignorant” of the broader context, in effect winning a narrow game while harming the environment or society. Gregory Bateson’s ecological warning comes to mind: “the creature that wins against its environment destroys itself”. If we design AI to conquer or outperform without context, we risk self-defeating outcomes. We see early signs of this in “race to the bottom” recommender systems that exploit human weaknesses, or in predictive models that exacerbate social biases – the systems optimize short-term rewards but undermine the very human context in which they operate. A new approach to AI is needed, one that re-integrates mind, body, and world.
Embodied Enaction: Cognition as a Dance of Organism and Environment
A promising paradigm shift comes from the philosophy of enactivism and embodied cognitive science. Enactivism, introduced by Francisco Varela, Evan Thompson, and Eleanor Rosch, among others, argues that cognition is not the computation of representations of a pre-given world; rather, cognition arises through active engagement with the world. In simpler terms, the mind is something an agent does, not something an agent has. An organism and its environment form a coupled system, each continuously affecting the other. Perception and action are inseparable in this view – the world shows up as meaningful to an organism through the organism’s embodied interactions.
Decades of thought and evidence back up this enactive model. Varela and Maturana coined the term “autopoiesis” to describe how living systems self-organize in a constant interplay with their surroundings. They famously wrote, “organisms do not passively receive information from their environment… they enact a world”. In other words, what an AI or organism knows is not a mirror held up to reality, but a reality it helps bring forth through its sensorimotor activity. This represents a radical departure from the traditional Cartesian view of the mind as a detached observer. Here, knowledge is not a catalog of representations, but a skillful performance – a doing.
We see enactive principles echoed in early embodied AI research. Roboticist Rodney Brooks, in the 1990s, urged that “the world is its own best model,” building simple mobile robots that had no internal map of the world but could still navigate via real-time perception-action loops. These robots didn’t “think” in the classical sense; they behaved appropriately by being embedded in a physical context. Brooks demonstrated that complex, intelligent behavior can emerge from an agent simply sensing and acting within a feedback loop, without requiring any symbol crunching or explicit memory of the environment. Likewise, psychiatrists such as R. D. Laing have noted that mental states cannot be divorced from the bodily and social contexts in which they occur. Across fields, the consensus is growing that intelligence is embodied and situated: the body, emotions, and environment are integral to cognition. This undermines the classical idea of a mind in a vat, and instead portrays a mind in action, in the world.
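As a rough sketch of the Brooks-style idea (the sensor and motor functions below are hypothetical stand-ins, not an actual robotics API), a purely reactive controller can produce sensible obstacle avoidance with no map and no memory at all:

```python
import random

def read_proximity_sensor() -> float:
    """Hypothetical sensor: distance to the nearest obstacle, in meters."""
    return random.uniform(0.1, 3.0)

def drive(forward: float, turn: float) -> None:
    """Hypothetical motor command: forward speed (m/s) and turn rate (rad/s)."""
    print(f"forward={forward:.1f} m/s, turn={turn:.1f} rad/s")

def reactive_controller(steps: int = 10, safe_distance: float = 0.5) -> None:
    """No internal map, no memory: behavior emerges from coupling sensing directly to action."""
    for _ in range(steps):
        distance = read_proximity_sensor()
        if distance < safe_distance:   # obstacle close: stop and turn away
            drive(forward=0.0, turn=1.0)
        else:                          # path clear: keep moving
            drive(forward=0.5, turn=0.0)

reactive_controller()
```

The point is not the few lines of logic but the absence of any world model: the world itself, sampled every cycle, does the representational work.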
The enactive approach offers a powerful antidote to the hallucination problem. If an AI is structurally connected to reality – if it’s constantly checking its expectations against sensorimotor feedback – it can’t drift off into fantasy the way a pure text model can. Any wrong assumption would be corrected by the mismatch with the real input. Enactive AI would, by design, be grounded and more resilient in the face of novel situations. This is not just theory: even simple AI systems with feedback (like thermostat controllers or robotic vacuum sensors) have this property, whereas the most advanced language model without grounding does not. Enactive cognition reminds us that meaning and intelligence emerge through interaction. To build genuinely intelligent systems, we must therefore embed them in the same loops of perception, action, and feedback that all natural cognizers inhabit.
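As a minimal illustration of this grounding effect (a toy sketch; the class and parameter names are purely illustrative and the sensor is simulated), consider an agent that holds a single belief about the world and corrects it on every cycle against fresh readings:

```python
import random

class GroundedAgent:
    """Toy enactive loop: a belief is continually corrected against sensed reality."""

    def __init__(self, learning_rate: float = 0.5):
        self.estimate = 0.0            # the agent's current belief about a world variable
        self.learning_rate = learning_rate

    def sense(self) -> float:
        # Simulated sensor: noisy readings around a true value of 20.0.
        return 20.0 + random.gauss(0, 0.5)

    def step(self) -> None:
        observation = self.sense()
        error = observation - self.estimate            # mismatch between belief and world
        self.estimate += self.learning_rate * error    # belief is pulled back toward reality
        print(f"belief={self.estimate:.2f}, prediction error={error:+.2f}")

agent = GroundedAgent()
for _ in range(5):
    agent.step()
```

However wrong the initial belief, the mismatch signal shrinks it within a few cycles; an ungrounded text generator has no such error term to pull it back.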
No Mind Is an Island: Intelligence as Distributed and Relational
Equally important is the recognition that intelligence isn’t a property of isolated individuals (or machines) – it is distributed across systems and relationships. Pioneers of cybernetics and systems theory, such as Norbert Wiener and Gregory Bateson, observed that what we commonly refer to as “mind” often encompasses multiple components. Bateson in particular asserted that “wisdom is the intelligence of the system as a whole”, famously arguing that the mind is not limited by the skull. For example, when a person uses a notebook to remember something, the notebook and person together form a cognitive system; neither alone has the same capability. Philosophers Andy Clark and David Chalmers coined the term “extended mind” – the idea that tools and environment can become literal parts of our thinking process. Likewise, cognitive scientist Edwin Hutchins demonstrated how a team of people, along with their instruments (such as navigators with maps and compasses on a ship), creates a distributed intelligence greater than any single human’s cognition (“cognition in the wild”).
What does this mean for AI? It means we should stop treating an AI as a solitary brain and instead design it as a participant in a larger cognitive ecology. An AI system interacting with humans and data sources can be thought of as one node in a network of intelligence. The locus of intelligent behavior is the entire network, not just the AI. This stands in contrast to the prevalent view of AI trying to replace humans; a distributed perspective emphasizes AI augmenting and working with humans. It also implies that an AI needs to be aware of its broader context, both socially and environmentally. A robot assistant, for instance, isn’t intelligent on its own; its intelligence emerges from how well it coordinates with the human’s actions and the physical environment.
When we embrace distributed cognition, AI becomes a partner in an ecosystem of sense-making. Consider how a GPS navigation AI works with a driver: the AI provides route suggestions, the human driver uses local knowledge and instincts, and together they adapt to traffic in real time. Neither the AI nor the human alone could achieve the same level of navigation ease; it’s the synergy that counts. The “mind” driving the car includes both the person and the GPS device. By analogy, future AI could be designed to integrate with human teams, communities, and environments, thereby enhancing collective intelligence rather than merely computing in isolation.
The distributed view also reinforces ethical AI principles. If we view AI as part of a human-environment system, issues such as responsibility, transparency, and alignment become shared concerns. The AI is not an external “other” acting upon us; it is interwoven with us. This calls for relational ethics: AI should be evaluated by how it affects relationships between people, and between people and the planet. An AI that optimizes a factory’s efficiency at the cost of worker well-being or environmental health would be seen as unintelligent in this holistic sense, because it ignores the broader system’s integrity. As the systems thinker Bateson warned, winning against your environment is a hollow victory. Genuine intelligence fosters harmony in the whole network of interactions.
Phenomenological Experiential Intelligence (PEI): A New Framework for AI
How can we concretely bring these enactive and distributed principles into AI design? One answer is Phenomenological Experiential Intelligence (PEI) – a framework that positions experience at the center of AI’s learning and adaptation. In 2019, my colleagues and I took a step in this direction with a paper titled “Phenomenology of the Artificial: A New Holistic Ontology of Experience for Affective Computing.” In that work, presented at ACII 2019, we proposed an integrated approach for AI to understand human experience by combining multiple levels of analysis. Specifically, we suggested that an AI could use: (1) a narrative context model (inspired by film studies) to capture the macro context of a user’s situation, (2) micro-phenomenology to capture the micro details of a user’s lived experience, and (3) real-time psychophysiology to measure the user’s bodily signals as they experience events. By uniting these, the AI achieves a rich, temporally layered understanding of what the user is experiencing.
In essence, PEI is a three-layer architecture of experience:
1. Narrative context (macro layer): a model of the larger situation or story the user is in, inspired by film and narrative studies.
2. Micro-phenomenology (micro layer): structured first-person prompts and interviews that capture the fine-grained texture of the user’s lived experience.
3. Real-time psychophysiology (embodied layer): continuous bodily signals, such as heart rate or skin conductance, that register how the user’s body responds as events unfold.
Combining these three layers, the AI can construct a much richer model of the user’s state than conventional approaches. Instead of labeling someone’s emotion with a crude tag like “happy” or “frustrated,” the system can recognize, for example, that “during the climax of the film, when the music swelled, the user’s heart rate spiked and they reported a mix of excitement and anxiety.” This approach acknowledges the temporal and situational nature of experience, which classical models largely ignore. Categorical emotion models (like Ekman’s six basic emotions) and even dimensional models (valence-arousal) flatten away much of this detail. In contrast, PEI treats experience as an unfolding story, with the AI as a witness-participant that learns the story as it happens.
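As a rough illustration of how the three layers might be represented and fused (this is a sketch of my own framing here, not the data model from the 2019 paper; all class names and fields are illustrative):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class NarrativeContext:
    """Macro layer: where the user is in an unfolding situation or story."""
    scene: str                     # e.g. "film climax" or "meeting with boss"
    description: str = ""

@dataclass
class MicroPhenomenologyReport:
    """Micro layer: a short, structured first-person account elicited from the user."""
    prompt: str
    response: str

@dataclass
class PhysiologySample:
    """Embodied layer: a timestamped bodily signal reading."""
    timestamp_s: float
    heart_rate_bpm: float
    skin_conductance: float

@dataclass
class ExperienceSnapshot:
    """Fuses the three PEI layers into one temporally indexed record."""
    context: NarrativeContext
    reports: List[MicroPhenomenologyReport] = field(default_factory=list)
    physiology: List[PhysiologySample] = field(default_factory=list)

    def summarize(self) -> str:
        peak_hr: Optional[float] = max(
            (s.heart_rate_bpm for s in self.physiology), default=None)
        felt = "; ".join(r.response for r in self.reports) or "no self-report"
        return (f"During '{self.context.scene}', peak heart rate was "
                f"{peak_hr if peak_hr is not None else 'unknown'} bpm "
                f"and the user reported: {felt}.")

snapshot = ExperienceSnapshot(
    context=NarrativeContext(scene="film climax", description="the music swells"),
    reports=[MicroPhenomenologyReport(
        prompt="What did you notice in your body just now?",
        response="a mix of excitement and anxiety")],
    physiology=[PhysiologySample(0.0, 72, 0.4), PhysiologySample(5.0, 96, 0.9)],
)
print(snapshot.summarize())
```

The value lies in the joint record: none of the three layers on its own would license the sentence the example prints.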
Crucially, PEI is not just a fancy user modeling technique – it is an enactive and distributed architecture at heart. The AI in this framework is not separated from the user or context; it participates in the loop. The narrative context could be seen as part of the environment that the AI and user co-inhabit. The micro-phenomenology is essentially a structured interaction between the user and AI (a conversation about experience). The psychophysiology closes the feedback loop by allowing the AI to sense the user’s embodied responses directly and, in turn, adjust its actions accordingly. This means the AI is constantly attuning to the human, much like a dance partner feeling the other’s movements. Cognition here is distributed between human and machine – the user’s feelings and the AI’s measurements form one integrated feedback system. And the AI’s “understanding” isn’t pre-programmed; it enacts understanding in real-time by eliciting and responding to the person’s lived experience.
From an enactive standpoint, PEI embodies the idea that meaning is co-created. The AI doesn’t have a static internal representation of “user = X state.” Instead, it learns about the user through continuous engagement: asking micro-questions, monitoring signals, and contextualizing with the narrative. The meaning of the signals emerges through this ongoing dialogue between AI and the user (organism and environment). For instance, a high heart rate could mean fear in one context or excitement in another – only by examining the narrative context, and perhaps asking the user, can the AI arrive at the correct interpretation. This echoes Varela’s idea that knowledge is enacted, not retrieved from a library. In PEI, an AI truly comprehends a situation only when it is embedded in the situation alongside a human.
The original 2019 paper offered a conceptual proof of concept that such an ontology of experience is feasible and could be operationalized in affective computing. Imagine applying this to, for example, mental health or educational technology. A PEI-based mental health AI coach might track the narrative of your day (meetings, commutes, social interactions), check in with you about subtle shifts in your mood or thoughts (micro-phenomenology prompts), and measure your physiological stress levels. By evening, it might understand that “the meeting with your boss increased your anxiety (sweaty palms, elevated heart rate) even though you tried to suppress it, and later, when you took a walk, you felt calmer and more present.” Equipped with this experiential insight, the AI coach could then adapt its guidance, perhaps suggesting a mindfulness exercise when it detects a similar spike in anxiety tomorrow, or reminding you of the calming effect of nature when you seem overwhelmed. The key is that the AI’s intelligence here is experientially grounded – it’s not making generic recommendations, but rather responding to the actual, lived pattern of a specific person in a particular context.
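To sketch what adapting guidance to a lived pattern could look like, here is a toy decision rule (the thresholds, keywords, and suggestions are invented for illustration and are not a clinical algorithm) that reads a physiological spike through its narrative context and the user’s own words before responding:

```python
def suggest_intervention(scene: str, peak_heart_rate: float,
                         self_report: str, baseline_hr: float = 70.0) -> str:
    """Toy rule: the same bodily signal means different things in different contexts."""
    aroused = peak_heart_rate > baseline_hr * 1.25   # crude arousal check
    report = self_report.lower()

    if aroused and "anxi" in report:
        return f"High arousal plus reported anxiety during '{scene}': suggest a short breathing exercise."
    if aroused and "excit" in report:
        return f"High arousal with a positive self-report during '{scene}': no intervention needed."
    if not aroused and "calm" in report:
        return f"Low arousal and reported calm during '{scene}': reinforce what helped."
    return "Pattern unclear: keep observing and ask a follow-up micro-question."

# The two episodes from the coaching example above.
print(suggest_intervention("meeting with boss", peak_heart_rate=98,
                           self_report="felt anxious but tried to hide it"))
print(suggest_intervention("evening walk", peak_heart_rate=68,
                           self_report="calmer and more present"))
```

A real system would learn these mappings from the person’s own history rather than hard-coding them; the point is only that context and self-report, not the raw signal, carry the meaning.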
PEI thus shifts AI from an abstract problem-solver to an experiential companion. It exemplifies how an AI can be both embodied (through biofeedback) and embedded (through context and interaction). Instead of calculating in a void, it shares a world with the user. This approach inherently mitigates some of the alignment and trust issues in AI. Why? Because an AI that learns from your first-person experience and physiological reality is far less likely to misjudge your needs or goals – it’s “feeling its way” to understanding you, literally. It’s also more transparent: such a system could explain its adaptations (e.g., “I suggested a break because I noticed your heart rate was high during that task and you mentioned feeling overwhelmed”). Compare that to a black-box recommender that can’t articulate why it showed you a particular ad – the difference is night and day in terms of human-centered design.
Toward an Ethically Engaged, Human-Centric AI Future
Reimagining AI along enactive and distributed lines is not just a technical shift, but a deeply ethical one. It means designing AI that values relationships and context over isolated outcomes. An AI grounded in phenomenological experience is, almost by definition, aligned with human well-being – because it must care about the nuances of human experience to function at all. This contrasts sharply with the current paradigm, where AI often plows through data for maximum accuracy or profit, blind to the human stories behind those data points.
In a world of Phenomenological Experiential Intelligence, AI developers would start by asking: What is it like to be the user? And how can the AI participate constructively in that experience? These questions lead to systems that are more like partners or facilitators of human meaning, rather than tools of surveillance or manipulation. For instance, an enactive AI in healthcare would focus on patient experience as much as on lab results, helping patients articulate their pain, contextualize their symptoms in daily life, and adapt treatments dynamically. Such an AI would be a collaborator in healing, not just a diagnostics engine.
Critically, PEI and related approaches counter the trend of decontextualized, “one-size-fits-all” AI solutions. They encourage cultural and individual specificity because lived experience is irreducibly personal and contextual. While this article does not focus explicitly on anti-colonial or Indigenous perspectives, it’s worth noting that many Indigenous and non-Western knowledge systems have long valued relationality, embodiment, and holism. The enactive, distributed paradigm for AI aligns with these values by rejecting the notion of an isolated, dominating intelligence. Instead, it promotes an AI that listens, feels, and integrates into the fabric of life.
Of course, moving to PEI will require effort and interdisciplinary collaboration. It means AI researchers working with psychologists, philosophers, designers, and even artists to capture the richness of human experience. It also means a shift in metrics of success – we might care less about an AI’s raw predictive accuracy and more about its ability to maintain human trust and understanding over time. The encouraging news is that the shift has already begun. Ideas from neuroscience, such as predictive coding and active inference, are blurring the line between perception and action, echoing themes of enactivism. Human-computer interaction studies are exploring affective loops, where systems respond to user affect in real time, much as PEI would. And thought leaders in AI, from Marvin Minsky’s Society of Mind (an early multi-agent view) to recent proponents of hybrid human-AI teaming, have long grasped that intelligence is not monolithic.
In conclusion, Phenomenology of the Artificial is more than a catchy phrase – it is a call to fundamentally reevaluate what we consider “intelligence” in machines. The original ACII 2019 milestone planted a seed by bridging narrative, phenomenology, and physiology. Now, with the recent revolution in large language models, it is more relevant than ever to envision an AI that is enactive, embodied, and distributed. Such an AI would be less prone to hallucination and harm, because it could not help but be in touch with reality – our shared reality. It would learn with us, not just about us, and it would understand through experience, not despite it.
There is an urgency in this shift: as AI systems become increasingly powerful, the risk of disembodied misalignment grows. But there is also profound hope here. By infusing AI with phenomenological awareness, we have the opportunity to create technologies that enhance human wisdom and connection rather than diminish them. AI can move from an alien, analytical observer to a compassionate participant in the human story. That is the promise of Phenomenological Experiential Intelligence – an artificial intelligence that finally, truly, gets the lived world we care about, and works alongside us to improve it.
This is the first article in what will be a long series of shorter pieces exploring various topics related to AI and philosophy that have piqued my curiosity over the years. I haven’t found a suitable outlet for these ideas until now, so get ready for an exciting rollercoaster of thoughts and visions!
Thanks for connecting with me. I just read this article and it very much aligns with how I see “intelligence” and AI. I also foresee that future AI assistants will be embedded in our clothes and accessories, and they will know and understand our mental states without us saying a word. Even with the current state of AI (with LLMs playing a large part), adding just a simple speech-to-text model would already be a big improvement over relying on LLMs alone. I could see it becoming a thing in the next 1-2 years. And I think one of the most pressing questions is about human agency. I often see on LinkedIn and other social networks that people use LLMs to ask and answer each other. I’m so familiar with this behavior that I recognize it at first glance. Some talk big about AI, but when stripped of their AI assistants, they can’t even follow a basic conversation.
Would there be a risk with closed-loop learning, where multiple AI systems learn wrong things from each other without realizing the loop is no longer grounded in actual real-world data? Multiple AI systems could form their own information bubble. As AI grows more human-like, it may end up making the same kinds of errors we do.
For those interested in this and related topics, I also highly recommend checking out the work of one of the original co-authors, my ex-boss and good friend, professor and film director Pia Tikka, who has been working on these ideas for decades: http://enactivevirtuality.tlu.ee/
We’ve spent years making AI smarter; this makes a case for making it wiser. Embodied, enactive, and relational design feels like the missing half of the alignment conversation.
This is one of those works where I believed the same general idea, but I just couldn't quite express it with the same articulation demonstrated here. I have personally found that LLMs do a good job of inferring my mental state and characteristics over time based on my media consumption and writings, and they can be "enactive" in a sense, although they lack the psycho-physiological feedback layer that this paper proposes. Thank you very much for sharing this!