Safer not to know? Shaping liability law and policy to incentivize adoption of predictive AI technologies in the food system

Front Artif Intell. 2023 Dec 8;6:1298604. doi: 10.3389/frai.2023.1298604. eCollection 2023.

Abstract

Governments, researchers, and developers emphasize creating "trustworthy AI," defined as AI that prevents bias, ensures data privacy, and generates reliable results that perform as expected. In some cases, however, problems arise not when AI is technologically untrustworthy, but precisely when it is trustworthy. This article focuses on such problems in the food system. AI technologies facilitate the generation of masses of data that may illuminate existing food-safety and employee-safety risks. These systems may collect incidental data that could be used, or may be designed specifically, to assess and manage such risks. The predictions and knowledge that these data and technologies generate may increase company liability and expense, discouraging adoption of predictive technologies. Such problems may extend beyond the food system to other industries. Drawing on interviews and the literature, this article discusses the resulting vulnerabilities to liability and obstacles to technology adoption, arguing that "trustworthy AI" cannot be achieved through technology alone but requires social, cultural, and political as well as technical cooperation. Implications for law and further research are also discussed.

Keywords: AI ethics; business; economics; knowledge; liability; machine learning; regulation; technology adoption.

Grants and funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This work was supported by AFRI Competitive Grant no. 2020-67021-32855/project accession no. 1024262 from the USDA National Institute of Food and Agriculture. Partial support was received from the Cornell Institute for Digital Agriculture (CIDA).