Investigating generative AI models and detection techniques: impacts of tokenization and dataset size on identification of AI-generated text

Front Artif Intell. 2024 Nov 19:7:1469197. doi: 10.3389/frai.2024.1469197. eCollection 2024.

Abstract

Generative AI models, including ChatGPT, Gemini, and Claude, are increasingly significant in enhancing K-12 education, offering support across various disciplines. These models provide sample answers for humanities prompts, solve mathematical equations, and brainstorm novel ideas. Despite their educational value, ethical concerns have emerged regarding their potential to mislead students into copying answers directly from AI when completing assignments, assessments, or research papers. Current detectors, such as GPT-Zero, struggle to identify modified AI-generated texts and show reduced reliability on writing by English as a Second Language learners. This study investigates the detection of academic cheating with generative AI in high-stakes writing assessments. Classical machine learning models, including logistic regression, XGBoost, and support vector machines, are used to distinguish between AI-generated and student-written essays. Additionally, large language models, including BERT, RoBERTa, and Electra, are examined and compared to the traditional machine learning models. The analysis focuses on prompt 1 from the ASAP Kaggle competition. To evaluate the effectiveness of various detection methods and generative AI models, we include ChatGPT, Claude, and Gemini in their base, pro, and latest versions. Furthermore, we examine the impact of paraphrasing tools such as GPT-Humanizer and QuillBot and introduce a new method of using synonym information to detect humanized AI texts. Finally, the relationship between dataset size and model performance is explored to inform data collection in future research.
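To illustrate the intuition behind the synonym-based approach mentioned above, the toy sketch below scores a text by how often common words appear to have been replaced by rarer synonyms, a pattern paraphrasing ("humanizing") tools can leave behind. The mini-lexicon and scoring function are hypothetical placeholders, not the paper's actual method or lexicon.

```python
# Illustrative sketch only: the paper's synonym-based detector is not
# specified here. RARE_SYNONYMS is a hypothetical mini-lexicon mapping
# rarer synonyms to the common words they often replace.
RARE_SYNONYMS = {
    "utilize": "use",
    "commence": "begin",
    "endeavor": "try",
    "ascertain": "find",
    "terminate": "end",
}

def synonym_swap_score(text: str) -> float:
    """Fraction of tokens that are rare synonyms of common words."""
    tokens = [t.strip(".,;:!?").lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in RARE_SYNONYMS)
    return hits / len(tokens)

# A higher score suggests the text may have passed through a paraphraser.
humanized = "We endeavor to utilize the tool and commence the essay."
plain = "We try to use the tool and begin the essay."
print(synonym_swap_score(humanized) > synonym_swap_score(plain))  # True
```

In practice such a score would be one feature among many fed to a classifier, rather than a standalone decision rule.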

Keywords: ChatGPT; Claude; generative artificial intelligence (GenAI); machine learning; natural language processing; text classification; writing assessment.

Grants and funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.