Transforming Agency. On the mode of existence of Large Language Models

XE Barandiaran, LS Almendros - arXiv preprint arXiv:2407.10735, 2024 - arxiv.org
This paper investigates the ontological characterization of Large Language Models (LLMs) like ChatGPT. Between inflationary and deflationary accounts, we pay special attention to their status as agents. This requires explaining in detail the architecture, processing, and training procedures that enable LLMs to display their capacities, and the extensions used to turn LLMs into agent-like systems. After a systematic analysis we conclude that an LLM fails to meet the necessary and sufficient conditions for autonomous agency in the light of embodied theories of mind: the individuality condition (it is not the product of its own activity and is not even directly affected by it), the normativity condition (it does not generate its own norms or goals), and, partially, the interactional asymmetry condition (it is not the origin and sustained source of its interaction with the environment). If not agents, then what are LLMs? We argue that ChatGPT should be characterized as an interlocutor or linguistic automaton, a library-that-talks, devoid of (autonomous) agency, yet capable of engaging performatively in non-purposeful but purpose-structured and purpose-bounded tasks. When interacting with humans, a "ghostly" component of the human-machine interaction makes it possible to enact genuine conversational experiences with LLMs. Despite their lack of sensorimotor and biological embodiment, LLMs' textual embodiment (the training corpus) and resource-hungry computational embodiment significantly transform existing forms of human agency. Beyond assisted and extended agency, the LLM-human coupling can produce midtended forms of agency, closer to the production of intentional agency than to the extended instrumentality of any previous technology.