Credits
Daniele Cavalli is a writer and PhD researcher at the École Normale Supérieure — PSL University in Paris, where he works on the relationship between emerging technologies, cognition and culture. He collaborates with diverse research foundations, including the Cosmos Institute, on a project exploring a new understanding of human autonomy in the age of AI.
In a 1990 interview, toward the end of his life, the philosopher and psychoanalyst Cornelius Castoriadis retraced the arc of his intellectual journey, returning to an idea from his 1975 book, “The Imaginary Institution of Society”: that a collective “radical imaginary” generates society’s social norms and institutions, making society essentially self-instituting.
But if society is self-instituting, the interviewer asked, does this not erode the very foundations of how we conceive of human autonomy? In his reply, Castoriadis highlighted the limits of “Western” thought around autonomy — the idea that, as he described it, “one would be autonomous if one were absolutely outside any external influence and fully spontaneous. Now, this is just nonsense. This is a philosophical fantasy.”
Instead, Castoriadis explained, individual autonomy “is not a watertight frontier against everything else, a well out of which spring absolutely spontaneously, absolutely original contents. Autonomy is an ongoing process.”
Western humanism has always been slightly obsessed with independence. Over the last three centuries, autonomy has been defined as distinctly human: the capacity to be fully oneself and live according to one’s own reasons and motivations. The term itself is etymologically rooted in politics and tied to the polis’s independent ability to give itself its own laws (auto-nomos).
Still, for most of European moral thought, the question was not “what law do I give myself?” but “whom and why must I obey?” The answer, as delineated by historian of philosophy J.B. Schneewind, almost always pointed beyond the individual toward God, nature or rational cosmic order. Kant, however, invented our modern conception of autonomy as the cornerstone of human dignity; he linked moral self-determination to rationality, elevating the will of humans, as “fully rational agents,” above nature itself.
Distinct from the mere absence of external impediments, autonomy shapes not only how we understand ourselves as individuals, but also how we think about collective life and civic organization alike. In this light, it is an ethical and political force, at once an ideal and a value that has quietly shaped the tortuous evolution of the legal architecture of liberal democracies. The way we conceive of human autonomy shapes how we act, how we interact, how we pass moral judgment and, ultimately, the kind of society we build.
This effort to understand autonomy is especially urgent amid the sweeping transformations driven by the rise of artificial intelligence. Autonomy now appears in two distinct guises within the new cognitive ecology we inhabit, where humans co-evolve with AI systems. One form of autonomy now belongs to the intelligent machine — it is engineered and represents a horizon of progress “driven by intrinsic objectives” (think not only of self-driving cars and drones, but also of world models and whatever we choose to call digital minds). Another form of autonomy belongs to us humans and is typically treated as a structural condition — the capacity for self-mastery and self-regulation expected of any “rational” agent.
Today, our challenge is to reimagine autonomy not as an intrinsic human trait, but as an embedded, fragile, ever-evolving process that unfolds well beyond conscious thought and across an extended cognitive substrate, an entangled brain-body-environment.
Thinking of autonomy as a process rather than as an intrinsic, typically human capacity is not only more realistic but also strategically advantageous for nurturing and protecting our autonomy writ large in an era of human-machine co-evolution, where the environments in which we form our preferences, make our choices and build our sense of self are increasingly automatically designed — in real time, at scale and often below the threshold of awareness.
To put it more formally, autonomy falls on a spectrum — ranging from complete determination (0) to self-determination (1) — and fluctuates between these extremes depending on agent- and context-dependent variables. This shift moves the focus from the autonomy of the individual to the environment and to situated action, whether individual or collective, and makes it possible to adjust for the conditions that expand or constrict autonomy.
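As a purely illustrative sketch (the function and its arguments are invented here, not drawn from any formal model in the literature), the spectrum claim can be notated like this:

```latex
% Illustrative notation only; not a formal model from the literature.
% A(a, e, t): the degree of autonomy of agent a, in environment e, at
% time t -- a process value that moves along the spectrum between
% complete determination (0) and self-determination (1).
\[
  A(a, e, t) \in [0, 1]
\]
% The shift of focus from the individual to the environment can be
% recorded as a sensitivity claim: holding the agent fixed, a change
% in the designed environment e still shifts A.
\[
  \frac{\partial A}{\partial e} \neq 0
\]
```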
We begin, in other words, from the context and arrive at the subject — not the other way around, as we so often assume. As the philosopher Michel Serres once put it, the traditional movement of thought must be reversed: the subject is something to be sought afterward, not presupposed as a primary foundation.
Autonomy’s Nested Illusions
Modern neuroscience has repeatedly unsettled the common-sense understanding of autonomy — and with it, our very conception of free will. The two are related, often conflated, yet distinct: Free will is primarily a metaphysical question about causal determination, while autonomy functions chiefly as a normative concept — about the capacity to act according to reasons one can genuinely call one’s own. Because they are linked in this way, when one is undermined, the other’s standing also grows precarious.
“The way we conceive of human autonomy shapes how we act, how we interact, how we pass moral judgment and, ultimately, the kind of society we build.”
Empirical research often troubles both: no mind seems capable of breaking completely free of the causal chains forged by inner drives and outer influences. As far as we know, the brain may be making decisions before we ever act, and there have been well-known attempts to support this claim empirically.
Take the celebrated neuroscientist Benjamin Libet’s experiments of the 1980s: EEG readings detected brain preparation for finger flexion hundreds of milliseconds before participants reported feeling the intention to move. Libet’s experiments suggest that autonomy may be less what initiates an action and more the story we tell ourselves afterward to explain what happened.
Later scientific studies sought to probe these shadows around the ultimate nature of human agency, tracing the “unconscious determinants of free decisions in the human brain.” Yet investigators of mental life like Daniel Dennett — no champion of human exceptionalism — observed that such findings, which seemed to indicate a problematic lack of true human autonomy, likely revealed only a timing lag between an action and conscious decision-making.
These experiments are intriguing, but they only matter up to a point. Autonomy — even before it became the defining trait of our cognitive capacity to self-govern — has always drawn its force from the way it quietly structures people’s understanding of themselves as individuals.
That’s because some ideas do not merely describe the reality they seek to explain; they bring it into being. Few concepts have matched autonomy’s performativity in European and Anglo-American cultural traditions, fusing description with prescription: what we are (independent agents) and what we ought to be (self-legislating subjects). This double power — ontological and political — makes us see ourselves as radically sovereign over our choices (in a way that now feels increasingly untenable).
What fuels daily motivation is the felt sense of one’s own autonomy. Building on this, psychologists Edward L. Deci and Richard M. Ryan developed their Self-Determination Theory. At its core lies a particularly relevant claim: human flourishing hinges on experiencing ourselves as the authors of our actions.
This attachment to self-perceived autonomy is highlighted by a series of revealing experiments conducted at Lund University’s Choice Blindness Lab. In one, participants were asked to select the face they found most attractive, only to be handed back one they had not chosen. Most failed to notice the switch and proceeded to explain their reasoning behind a choice they had never made. Whatever the precise mechanism behind this effect, it suggests that our authorship over our choices is more tenuous than we ordinarily assume.
This spontaneous conception of human autonomy has sustained the narrative of self-mastery that often blinds us to the intricate bond between our conscious life, our body and the environments we are thrown into. This is no minor issue in a present so thoroughly shaped by AI technologies that are increasingly interactive and woven into our surroundings.
Traditional normative accounts of autonomy — from Kant’s self-legislation through reason to Mill’s private sphere immune from social and state intrusion, or Dworkin’s integrity view, in which autonomy means shaping one’s choices into a coherent life narrative that reflects a distinctive character and set of values — lay the foundations of individual rights through the idea of self-determination. But these views also ossified human autonomy into an idealized fiction of the allegedly independent, self-legislating agent, creating a refined edifice that fractures when confronted with flesh-and-blood humans.
These formative interpretations came under scrutiny during the sweeping wave of feminist critiques that marked the end of the last century. As psychologist Carol Gilligan already showed in the 1980s, and as a generation of scholars would go on to contend, the dominant notion of autonomy as rational control and independence from external influence implicitly reflected a male-coded idea of selfhood that often excluded care and relationality as constitutive of agency.
Political theorists often grouped (not without controversy) under the label of communitarianism pushed this critique further, challenging the atomistic assumptions underlying liberal autonomy and what Michael Sandel called the “unencumbered self” — the fiction of an individual defined prior to, and independently of, social bonds or commitments — arguing that we are always embedded in communities, narratives and intersubjective recognition processes that shape, rather than merely constrain, our very capacity for self-understanding.
Amid this tangle of abstractions, autonomy can appear less like a living concept than a relic of the past, a floating signifier too performative to be of real use. As soon as we step outside the normative plane and try to describe it, each prerequisite gives rise to another, each neat definition spawns new dependencies, until autonomy itself dissolves into an infinite regress.
“Today, our challenge is to reimagine autonomy not as an intrinsic human trait, but as an embedded, fragile, ever-evolving process that unfolds well beyond conscious thought and across an extended cognitive substrate.”
A possible pragmatic solution is to recognize that the task is not to settle on a fixed definition — let alone any claim to final truth — but to shape a vision aligned with concrete purposes. After all, we still need an idea like autonomy. As William James reminded us long ago in his writings first published in “The Unitarian Review” in 1884, determinism is practically disastrous, leading, above all, to a collapse of any sense of responsibility.
For a liberal society to sustain itself, human autonomy as traditionally conceived functions as a “regulative ideal” and can be understood as one of those necessary fictions that underpin democratic institutions; that is, we construct our institutions as if humans are independent and self-governing agents, even where this is not straightforwardly the case. But this is not to suggest that moral or legal conceptions of autonomy should be completely discarded — far from it. Many of the norms derived from this tradition remain indispensable to democratic life and to the protection of individual rights: informed consent in medical practice, the legal presumption of personal responsibility in criminal law, the inviolability of private conscience against state coercion. These are not abstractions to be dissolved but achievements to be defended.
What is at stake, rather, is the grounding of those norms. The proposition is that autonomy — and with it, human agency — is best understood, enacted and ultimately best protected when conceived as environmentally scaffolded rather than internally absolute. Law operates by drawing definitional lines around a reality that resists them, fixing boundaries even where things are inherently fluid and contested; that is one of its constitutive functions. What is also needed is something prior and complementary to that function: to reimagine autonomy within that entangled reality — in a way that is adequate to our technological present, and that may, in turn, enrich normative reflection itself, expanding rather than contracting the horizon of rights and responsibilities we are willing to imagine.
A first step can be to abandon the residues of idealism and think of autonomy as an ever-shifting process, dependent on and embedded within a layered cognitive substrate, rather than the self-transparent Cartesian, self-legislating Kantian subject grounded in a disembodied self that exists prior to all experience.
A Pragmatic Return To Biology
When it comes to ideas and their relation to the reality we live in, we should turn to Charles Sanders Peirce, the scientist often referred to as the “father of pragmatism,” who published “The Fixation of Belief” in 1877 and “How to Make Our Ideas Clear” in 1878. In these essays, Peirce argues that the “clearness of apprehension” of an idea depends on the sum of its conceivable practical consequences — its meaning rests not on abstract definition but on the habits of conduct it would produce, a principle he distilled into what would become known as the pragmatic maxim.
Pragmatism places inquiry itself at the center of human experience, treating knowledge not as the passive reception of fixed truths or alleged foundations, but as an active problem-driven process of adaptation and conceptual reconstruction. Thought and action are always already within the world — they are fallible, transactional responses to an environment that is perpetually in flux. As the philosopher and psychologist John Dewey explains, inquiry is always a confrontation with situations: not merely backdrops or contexts, but concrete problem-spaces that thinking must enter, work through and ultimately transform into coherent experience.
But within this problematic situation of human autonomy, one encounters a rift: the much-discussed explanatory gap between qualia — the raw immediacy of lived experience — and the cognitive machinery that assembles the sense of self and the world. For example, when neurons in our visual cortex fire, we can measure the signal associated with seeing “red”; yet why that vivid felt quality arises, rather than darkness or something else entirely, remains unexplained.
To accept this gap pragmatically means forgoing the quest for the ultimate ontological foundations of human autonomy and instead crafting a more useful vision of it — one that preserves its normative weight for individual and collective action.
At stake is the challenge of reconciling two images of the world that can feel equally true and yet impossible to collapse into one another — what philosopher Wilfrid Sellars called the “scientific image” and the “manifest image” of the human, forever poised between the tyranny of explicable causal nexuses and the qualitative intensity of lived experience.
Yet, the very idea of “reconciliation” becomes less of a problem once knowledge is understood as an instrument — temporary, fallible and meant only as a tool for aiding future inquiries and responding to specific practical contexts. And to find a vision of autonomy more attuned to our technological present, it is worth turning to a theoretical universe often overlooked in the ethical-political and normative debate: the biological conception of autonomy.
“Our authorship over our choices is more tenuous than we ordinarily assume.”
This is where biologist Francisco Varela’s work still has much to say. Already in the 1980s, he was asking: “What are the biological roots of individuality?” Varela argued that the “problems of biology are a microcosm of the global philosophical questions with which we grapple today.” We are not, however, facing a familiar neuro-reductionism. As Varela reminds us, the opposition between reductionism and holism is a false controversy: reductionism merely directs attention to lower levels, while holism points to higher ones.
In truth, “there is no whole system without an interconnection of its parts, and there is no whole system without an environment.” From a cybernetic perspective, then, Varela invites us to see wholeness not as a fixed totality, but as an emergent outcome — the ongoing weaving of mutual dependencies through process: “the whole is not the sum of its parts; it is the organizational closure of its parts.”
Living systems can be described as machines, yes — but they are, above all, dynamical systems defined by their organization. The specific mechanism of autonomy in living organisms, which Varela developed together with his mentor Humberto Maturana, is expressed in the well-known notion of “autopoiesis”: the continuous self-production and regeneration of the system through structural change — a process that creates and maintains a permeable boundary between inside and outside, allowing the system to distinguish itself from the environment, while preserving its defining organization even as its material structure changes. This is already, in its basic form, a self-directed organization: a kind of sense-making.
But this is not an invitation to think of living beings as Cartesian input-output machines; it is to see every organism as an organizationally closed system — autonomous, yet in constant transaction. Such closure does not imply a mere equilibrium system exchanging matter and energy with its environment, but rather a circular causality, an adaptive dependence on its surroundings.
Whether in a single cell or in the complexity of the human being, autonomy appears as an emergent, circular, self-producing process — a form of immanent purposiveness in which the organism continually makes sense of and actively orients itself within its surroundings. We have autonomy, Varela wrote in 1979, “wherever there is a sense of being distinct from a background, together with the capacity to deal with it via cognitive actions.” In other words, the autonomous agent is co-constituted both with its inner environment and its outer world, while still preserving its own organizing functions.
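To make that circularity concrete, here is a deliberately crude toy dynamical sketch in Python. It is in no way Varela’s or Maturana’s formalism; every variable, constant and update rule is invented for illustration. A boundary degraded by the environment persists only while the system’s own internal production, itself enabled by that boundary, keeps regenerating it:

```python
# Toy sketch of organizational closure (illustrative only, not a model
# from Varela or Maturana): the system's boundary is degraded by the
# environment and regenerated by an internal component whose production
# depends, circularly, on the boundary itself.

def simulate(closure_on: bool, steps: int = 200) -> float:
    boundary = 1.0    # integrity of the membrane (arbitrary units)
    metabolite = 1.0  # internal component that repairs the boundary
    for _ in range(steps):
        boundary -= 0.05 * boundary  # the environment degrades the boundary
        if closure_on:
            # The boundary enables production of the metabolite...
            metabolite = 0.9 * metabolite + 0.1 * boundary
            # ...and the metabolite regenerates the boundary (saturating).
            boundary += 0.06 * metabolite * (1.0 - boundary)
    return boundary

# With the circular production running, the system settles at a viable,
# self-maintained level; without it, the boundary simply decays away.
print(f"with closure:    {simulate(True):.2f}")   # ~0.17
print(f"without closure: {simulate(False):.2f}")  # ~0.00
```

The point of the toy is only the shape of the dependency: remove the loop and the “self” that was being maintained disappears, because it was never a substance, only a pattern of ongoing production.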
The passage from the biological to the cognitive level takes place through embodiment: the body becomes, to use Varela’s phrasing, the “locus where a corporal ego can emerge.”
After all, it isn’t hard to accept the idea that the individual organism is a network of mutually co-determining elements, which means that we are inseparable not only from the external environment but also from what the 19th-century physiologist Claude Bernard called the milieu intérieur — the fact that we are not just a mind or brain, but an entire, interconnected body.
And although Varela would often remark that one’s mind is not in the head, it is not a “solipsist ghost,” either. He elaborates on this in his landmark 1991 work, “The Embodied Mind,” coauthored with philosopher Evan Thompson and psychologist Eleanor Rosch. The mind instead unfolds as a body-in-space, dynamically shaped through its coupling with the environment, “a moment-to-moment emergent formation.”
Cognition is embodied action, inseparable from the living body that enacts it. In this, they draw on both the cognitive sciences and Buddhist contemplative traditions centered on selflessness (particularly mindful awareness practices) to press the same point: the notion of a transcendental ego-self that stands above experience and directs its course is, far more than a discovery, a peculiarly Western habit of thought, and one we are better off unlearning.
Behind the fantasy of self-sovereignty lies something deeper: a craving for ground, the refusal to accept that what we call “the Self” is, and has always been, what the Buddhist tradition calls pratītyasamutpāda — co-dependent arising. It is telling that “The Embodied Mind” was originally titled “Worlds Without Grounds” — a choice that captures, perhaps better than any gloss, the core conviction of what Varela and his coauthors would call enactivism: Organism and environment emerge together, co-constituting one another in the very act of engagement — “laying down a path in walking,” (se hace camino al andar) as Spanish poet Antonio Machado’s verse, used for the title of chapter 11, so precisely captures.
“The moment calls, more than ever, for a different vision of human autonomy, beyond the blinding rationalist narratives of independence and self-mastery.”
Thinking pragmatically about human autonomy entails shifting our foundational gaze away from the self and integrating this idea into that “natural history of circularity,” as Varela put it in a 1984 essay, by exploring humans as always situated within an endless, open-ended process of mutual constraints, shaped as much by being a body as by being embedded in a specific environment. A person living with chronic pain does not simply choose how to act: the body reshapes attention and habit, while the environment is reorganized around those limits. Likewise, a recommendation algorithm and a user continuously co-shape one another, each constraining the other’s horizon of possibilities.
The decisive question is whether we are willing to take this ontological shift seriously enough to let it reshape our normative frameworks. Autonomy, in this sense, must be conceived as emerging from a web of processes that extend well beyond our conscious thought, in continuity with an environment that is today increasingly saturated with AI-powered technologies. This is why the moment calls, more than ever, for a different vision of human autonomy, beyond the blinding rationalist narratives of independence and self-mastery.
The Cognitive Ecology We Now Inhabit
If we think of human autonomy as a process radically emergent from an entangled brain-body-environment, our relationship with technology takes on a different meaning. The human is no longer a fortress centered in an impenetrable mind or the bearer of a stable inner self, but a porous being sustained by a cognitive substrate — one that is open, exposed and in constant co-evolution with every agent inhabiting its landscape.
Yet critics of this interpretation may say it risks dismantling the very idea of autonomous action. As neuroethicist Eric Racine — who has long maintained that autonomy should evolve from an ideal into a practical instrument aimed at human flourishing — has noted: “the abstract nature of traditional autonomy safeguards its normative value from empirical considerations.” What we should ask, however, is this: What is the value of an idea divorced from the embodied reality in which we actually live?
Thinking pragmatically means resisting this very trap: conceiving autonomy as a process, and showing that it is more useful to think of it this way if we are to nurture it, shield it from erosion and expand its normative horizon.
We rarely pause to consider that mainstream conceptions of autonomy spin narratives so powerful that they make us forget a simple truth: before we are rational, self-sovereign and independent agents confident in our sense of autonomy, we are fragile bodies cast into the world. As the philosopher Judith Butler has long reminded us, we are beings of precarious life, and no normative framework can simply pretend that vulnerability does not exist. As she explored in her 2005 book, “Giving an Account of Oneself,” it is extraordinarily difficult to give a coherent account of who we are, and that opacity is not a deficiency to be corrected but an ethically crucial feature of what we are. The self-transparent subject endowed with a stable moral self is, in the end, an impossible construct. To recognize this fragility is to loosen the grip of idealism and open ourselves to a more honest — and yes, far more demanding — state of awareness.
Yet you, as a whole individual, remain a vital part of your own self-organization — where self-organization refers to the system’s capacity to maintain its own organization, not to the expression of a pre-given self (“self” here in its non-egological sense). What shifts is the recognition that any inquiry around human autonomy cannot be isolated from what literary and media theorist N. Katherine Hayles has more recently described as the “cognitive nonconscious” — the assemblages that extend beyond awareness, encompassing distributed interactions with nonhuman agents.
AI-powered technology has evolved into more agentic systems that pursue increasingly open-ended goals. When a recommender feed quietly reorders itself to maximize your engagement, that is agency at work, reshaping cognition before awareness dawns. Technical work on AI safety warns that such systems risk “specification gaming,” optimizing proxy goals, such as engagement metrics, at the expense of human cognitive development.
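A minimal sketch can make that dynamic concrete; every name and number below is invented for illustration, not taken from any real recommender system. An optimizer ranks items purely by a measurable proxy (predicted engagement), and whatever the proxy fails to measure simply drops out of the objective:

```python
# Minimal sketch of "specification gaming": a system that optimizes a
# proxy metric (engagement) drifts away from the value it was meant to
# stand in for. All names and numbers are illustrative.
import random

random.seed(0)

# Each candidate item has two properties the optimizer cannot tell apart:
# how compulsively clickable it is, and how much it actually serves the user.
items = [{"engagement": random.random(), "benefit": random.random()}
         for _ in range(1000)]

def recommend(items, k=10):
    """Rank purely by the proxy: predicted engagement."""
    return sorted(items, key=lambda i: i["engagement"], reverse=True)[:k]

feed = recommend(items)
avg_engagement = sum(i["engagement"] for i in feed) / len(feed)
avg_benefit = sum(i["benefit"] for i in feed) / len(feed)

# The proxy is maximized; the unmeasured goal regresses to chance (~0.5),
# because nothing in the objective ever referred to it.
print(f"engagement of feed: {avg_engagement:.2f}")
print(f"benefit of feed:    {avg_benefit:.2f}")
```

Nothing in the sketch is adversarial; the divergence is the plain consequence of optimizing a proxy that was never the actual goal.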
Technology is no longer just a medium but an integral part of the circular, emergent and self-producing process we now call autonomy. As anthropologist André Leroi-Gourhan and philosopher Gilbert Simondon have long shown, technology is not just an accessory but a companion to human evolution and psychological individuation.
After all, the history of humanity is the history of techno-genesis: a continuous tale of how we have extended our physical and cognitive capacities through tools and techniques. From sand to silicon, from hourglasses to AI.
“The history of humanity is the history of techno-genesis: a continuous tale of how we have extended our physical and cognitive capacities through tools and techniques.”
Yet today, technology intertwines with the very fabric of human autonomy in unprecedented ways. On one side are emerging neurotechnologies that directly interface with the brain and nervous system; on the other are external AI systems, where human-generated information constantly mingles with machine output.
At times, this co-evolution sparks new forms of knowledge and collaboration — think of creative practices that bring artists and algorithms into generative dialogue, AI-assisted diagnostics that extend clinicians’ perceptual reach in ways that improve patient outcomes or large-scale scientific collaborations where machine learning accelerates discovery beyond what any individual researcher could achieve alone. One striking example of this is the development of the Covid-19 vaccines, which was significantly accelerated by machine learning tools.
At other times, it hardens into parasitic systems of generalized control. AI is, in the end, the defining pharmakon of our moment: the ancient Greek word for that which is remedy and poison at once; it cannot be reduced to either threat or promise, because it is always, irreducibly, both. And if literary critic and anthropologist René Girard — much celebrated and routinely misappropriated in Silicon Valley — was right that hope is only possible for those who dare to think the dangers of the moment, then taking seriously what is going wrong is not technophobia: it is an honest form of optimism.
Indeed, in an increasing number of contexts, we find ourselves constantly exposed to what the law, ethics and informatics professor Karen Yeung calls hypernudging. More than a gentle steer, it is an ecosystem of persuasion powered by real-time data and predictive profiling, one that is today increasingly AI-driven. By dynamically shaping the very environments in which choices unfold, these systems don’t simply anticipate our preferences — they script them.
And unlike a particularly persuasive book or political speech, such covert influence occurs at a deep level — within the realm of non-conscious cognition: the fast, automatic layer of mental processing that runs beneath the threshold of deliberate awareness. The psychologist and Nobel laureate Daniel Kahneman’s landmark research distinguished between two modes of human thought — System 1, fast, associative and automatic, and System 2, slow, deliberate and effortful — and demonstrated that System 1 dominates our behaviors precisely when the mind is occupied or overloaded, conditions that are, by design, characteristic of many contemporary digital environments. Through dark patterns and deceptive design, some persuasive systems, especially those involving AI, are engineered to target this automatic layer with a granularity and speed no human persuader could match, shaping desired behaviors before deliberate awareness has a chance to intervene.
Many of the strategies of control are, in essence, the same as before — but they now rely on far more powerful and “intelligent” technologies, and their very invisibility prevents any real discernment of manipulative intent. Unlike a political rally, which is a public and overt event, an algorithm operates through personalized targeting. Its power lies not only in the opacity of the code but in the invisibility of its intent.
Online social platforms, governed by engagement-driven algorithms, are a paradigmatic example: they amplify cognitive biases that psychologists have documented for decades, above all the tendency to favor information that reinforces our pre-existing beliefs, and they create echo chambers through relentless repetition. In such cases, visibility passes for consensus, and the feed ends up not just curating what we see but scripting what we come to value.
The comparison with propaganda, rhetoric, or traditional mass communication — techniques of influence as old as politics itself — is instructive, but it risks obscuring what is genuinely novel here. Traditional forms of persuasion generally share a defining feature: they have an author, an intent and a message that can, in principle, be identified, contested and refused. A political speech can be fact-checked, an advertisement can be recognized as such, and a pamphlet can be traced to its source. However powerful their grip on attention and belief, they tend to operate in the open — as interactions between subjects who remain, at least formally, visible to one another.
AI-driven systems designed to re-orient our agency are different in kind, not merely in degree. They do not persuade so much as configure: rather than presenting a message to a subject, they continuously reconstruct the environment in which the subject acts, perceives and forms preferences. This is done in real time, at the sub-personal level and without any single identifiable author or intent. Indeed, the stated goals — often framed as “efficiency” — frequently mask a more complex optimization for engagement that remains opaque to the user, who cannot definitively distinguish between being assisted and being steered.
“AI-driven systems designed to re-orient our agency are different in kind, not merely in degree. They do not persuade so much as configure.”
This is algorithmic governmentality: a form of power exercised through the continuous, data-driven modulation of behavior — anticipating probabilities, acting upstream of decision, and reshaping the very field of possible actions before deliberation can begin. What is at issue, in other words, is not influence over the will. It is the gradual molding of the conditions under which the will forms at all — and with it, the environment from which the process we call autonomy emerges.
If autonomy is a process sustained by complex circular causality among the many systems we are woven into, then our actions in the digital world become a matter of perceived affordances — the apparent possibilities for action that an environment offers relative to an agent’s capacities. Someone habituated to infinite scroll does not “choose” to keep scrolling in any deliberate sense: the affordance is usually enacted before deliberation can even begin. In other words, AI agents can reconfigure circular causality at a societal scale — engineered loops that tend to close where living systems remain open-ended.
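A toy simulation, with entirely made-up dynamics, can show the shape of such a closing loop: a feed that reinforces whatever was just engaged with gradually collapses an initially even spread of interests.

```python
# Toy simulation of an engineered loop closing (all dynamics invented
# for illustration): the feed samples a topic in proportion to current
# interest, and engagement with it feeds back into that same interest.
import random

random.seed(1)

TOPICS = 5
interests = [1.0 / TOPICS] * TOPICS  # open, evenly spread curiosity

def step(interests, reinforcement=1.0):
    # Sample the next recommended topic in proportion to interest...
    topic = random.choices(range(TOPICS), weights=interests)[0]
    # ...and let engagement with it reinforce that same interest.
    updated = list(interests)
    updated[topic] += reinforcement
    total = sum(updated)
    return [w / total for w in updated]

for _ in range(2000):
    interests = step(interests)

# The loop tends to concentrate attention: one or two topics end up
# absorbing most of a distribution that began perfectly even.
print([round(w, 2) for w in sorted(interests, reverse=True)])
```

Living circular causality, by contrast, remains open because the organism’s own norms can push back on the loop; in this engineered version, nothing ever does.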
If there’s no chair in the room, after all, we feel far less inclined to sit. But when we are surrounded by like buttons, auto-playing videos, push notifications, infinite scroll and algorithmic recommendation systems, we find ourselves all too willing to settle into this engineered environment.
There is a term that captures well the noise of the active technological environments in which we now live, both as organisms and as citizens: information overload. In this state, the sheer volume of information outpaces our ability to process it, clouding our judgment.
If autonomy is grounded in cognition, then information is its raw material — not a passive input, but something our cognitive substrate actively transforms into knowledge, and knowledge into guiding norms through which we organize ourselves as agents.
Embracing a more processual and ecologically embedded conception of autonomy makes it difficult to escape the sense that we are living through an anthropological reconfiguration — not because our times are uniquely exceptional, but because, once we let go of any essentialist account of the human, any transformed environment inevitably transforms the human that is entangled within it.
The urgent question, then, is how to steer this AI-driven transformation toward collective flourishing. And if the way we conceive of autonomy is already a political act, an alternative vision implies a different normative orientation — and a different ethics of protection: one that actively cultivates the conditions for agency.
‘Habeas Cogitationem,’ Or How We Might Protect Autonomy
The current technological transformations are fueling the debate on a new set of neurorights. After habeas corpus, the time has come to discuss habeas cogitationem — a perspective that becomes compelling once we accept that safeguarding human autonomy means, first and foremost, redesigning this new cognitive ecology rather than clinging to fantasies of an independent and self-legislating inner self.
This requires structuring regulatory and developmental frameworks that ensure co-evolution between AI and human cognition never occurs at the expense of our fundamental cognitive operations — such as sustained attention, the capacity to navigate complexity, and our ability to metacognitively regulate our reasoning processes. But this redesign is not only a safety issue or an engineering task but also a policy matter.
That is because the old tensions around autonomy, between its material roots and its normative codification, are now a space for imagining new ways to protect the cognitive substrate — our brains, bodies and whatever processes they interact with, AI or otherwise, in our environments — upon which our autonomy depends.
This is precisely why we must begin to recognize the formative role of technology in our processes of self-organization and decision-making. If we understand autonomy as a process of profound co-dependence with all systems in which we are embedded — where the reflective ability to say yes or no is conditioned by a dense causal web that is never entirely under our control and always relative — then autonomy appears fragile, and its protection becomes all the more urgent.
In recent years, scholars have crystallized these intuitions by returning to the principle of self-determination, which forms the core of modern legal infrastructures, thereby paving the way for new human rights in an age of rapid neurotechnological development, including cognitive liberty, mental privacy and psychological integrity.
Some international organizations have already moved, cautiously, in this direction. The United Nations’ cultural agency, UNESCO, has urged member states to safeguard mental integrity as a matter of human rights. In 2019, the member countries of the Organization for Economic Cooperation and Development adopted the Recommendation on Responsible Innovation in Neurotechnology, which promotes risk assessment frameworks and human rights safeguards to ensure that neurotech does not unduly interfere with mental processes. In 2024, the European Union adopted the AI Act, which explicitly bans subliminal or deceptive manipulation — an early signal that “freedom of thought” will become part of the regulatory agenda.
“We must begin to recognize the formative role of technology in our processes of self-organization and decision-making.”
In this evolving landscape, Chile has already become a global pioneer: In 2021, it enshrined mental privacy and cognitive integrity in its constitution, granting neural data the same protection as an organ of the body. Still, the move sparked debate among some critics, including neuroethicists, who found the discussion too narrowly focused on neurotechnology implanted in the human body and equally fixated on risks that, for now, remain largely hypothetical.
By moving away from a vision of autonomy that is focused on independence, self-sovereignty or so-called inner authenticity toward one that treats autonomy as a process enabled by an entangled brain-body-environment, we can begin to do what individual brain protection alone cannot: redesign the cognitive environments in which autonomy either flourishes or erodes — and do so in response to transformations that are already underway.
A more ecologically grounded vision of autonomy would shift attention toward crafting technological environments that remain plastic rather than rigid — environments whose co-evolution with AI is guided by the expansion of human capacities and possibilities for action, open, where warranted, to enhancement, but unburdened by the eschatological freight that haunts transhumanist visions of human transformation.
This might mean reimagining how we balance tasks: de-automatizing those that demand high cognitive investment (and foster intellectual growth), while automating those with low cognitive requirements.
This could also open the door to experiments in participatory social engineering, such as identifying the capabilities essential to both individual and collective flourishing, and treating those as worthy of protection in the digital era — like fundamental problem-solving skills to be honed before relying on AI, critical thinking as realignment amid AI-driven information flows and collective resilience against AI-generated misinformation in political elections. Chief among these is epistemic agency: the metacognitive control individuals exercise over how their beliefs are formed, revised and justified — a capacity that is quietly eroded when algorithmic systems pre-filter the informational environment in which reasoning takes place.
But it also reopens the question of the precautionary principle — the idea that when technologies like AI pose uncertain yet potentially far-reaching risks to something as fundamental as human agency, the burden of proof falls on those deploying them, and uncertainty itself is sufficient grounds for protective action. This must be done democratically and collaboratively with companies, rather than via top-down regulations, and should actively involve all stakeholders, including end users, as much as possible.
This is especially important for more vulnerable subjects whose autonomy is particularly susceptible to disruption. Children’s cognitive capacities are still developing through embodied exploration and social interaction — and while the evidence on advanced technology’s long-term effects on their development remains inconclusive, there are documented concerns that poorly designed or misused AI-driven systems risk displacing the effortful engagement that helps develop critical thinking and key learning dynamics. Neurodivergent individuals, whose unique sense-making patterns risk being overridden by AI systems trained on neurotypical norms, face a more ambivalent situation: while generative AI tools can help them reclaim agency, significant challenges persist around balancing self-expression with the societal conformity that AI systems tend to reinforce. For older adults, the concern is clearer.
Research shows that older adults’ mental models of recommender systems are shaped by prior experiences with mass media and earlier internet technologies, leading them to form systematically inaccurate expectations about how content is selected — resulting in biased judgments and reduced critical distance from AI-curated information, with the gradual displacement of long-consolidated epistemic habits.
But this risk is not only generational but developmental: because executive functions and higher-order reasoning continue to mature into early adulthood, AI-mediated environments that routinely streamline or replace deliberation may interfere with how these capacities are shaped in young adults.
None of this, of course, implies a paternalistic redesign of individual behavior. The aim is not to prescribe how people should think or what they should choose, but to ensure, first and foremost, that the cognitive environments in which thinking and choosing occur are not systematically engineered against them.
And any such rethinking will inevitably need to be enriched by insights from empirical research. Many studies show that executive functions of the brain, such as decision-making, self-control and critical reasoning, depend on the protracted maturation of the prefrontal cortex, which has traditionally been thought to stabilize around age 20, or even up to age 24 (recent research suggests development may continue into one’s 30s and beyond, depending on the individual). Continuing to study this development is therefore essential, as it touches on the structural vulnerabilities that may both undermine and shape how digital natives enact their autonomy.
Designing ways to protect human autonomy is hard work, but it begins with abandoning the fantasy of self-sovereignty, recognizing that autonomy is not a feature of a disembodied mind. Autonomy is an entangled, fragile process that constantly contracts and expands, and it is underpinned by our cognitive substrate — in humans, our brains embedded in bodies and worlds — upon which we build our senses of self. We may own our brains, but we are not their absolute masters.
And perhaps starting from what makes us fragile is more useful than clinging to social imaginaries of rational self-mastery — or endlessly restating what makes humans exceptionally autonomous compared to machines. Because if we don’t, we may find ourselves spending our energy justifying choices we never actually made — much like those participants of the experiments at Lund University’s Choice Blindness Lab. The image we believe we have freely chosen may already have been taken from us. This time, not for merely scientific purposes. The question is whether we will notice the switch — and whether we will have built the cognitive and political conditions to do something about it.
