Anthropocentric bias and the possibility of artificial cognition

R Millière, C Rathkopf - arXiv preprint arXiv:2407.03859, 2024 - arxiv.org
Evaluating the cognitive capacities of large language models (LLMs) requires overcoming not only anthropomorphic but also anthropocentric biases. This article identifies two types of anthropocentric bias that have been neglected: overlooking how auxiliary factors can impede LLM performance despite competence (Type-I), and dismissing LLM mechanistic strategies that differ from those of humans as not genuinely competent (Type-II). Mitigating these biases necessitates an empirically-driven, iterative approach to mapping cognitive tasks to LLM-specific capacities and mechanisms, which can be done by supplementing carefully designed behavioral experiments with mechanistic studies.