As the technical capabilities of Artificial Intelligence (AI) advance, and its widespread availability grows, the use of AI in research will become more prevalent. The regulatory landscape surrounding the responsible use of AI is in its infancy, which may create uncertainty regarding appropriate ways to implement AI in research. Utah State University strongly encourages faculty to consider the following when determining if AI use is appropriate for their research:
- Confidentiality – Researchers should not input any confidential, proprietary, or restricted (e.g., human subjects) data into a generative AI tool. Significant questions involving data privacy, ownership, and access when using generative AI tools warrant caution.
- Reliability – Carefully review AI-generated data or information before incorporating it into your research. Content generated by an AI tool may be outdated, inaccurate, or biased.
- Plagiarism – AI tools may not provide proper citation to source materials. Researchers are responsible for verifying and giving appropriate credit for another person’s ideas, processes, results, or words. Plagiarism falls under the definition of Research Misconduct, and allegations may lead to inquiries or investigations.
- Publications – Publishers have yet to reach consensus on whether AI-developed content is acceptable for publication. Some journals prohibit AI-generated text, while others allow AI use so long as the author provides a disclosure in the article. Researchers should verify whether a publisher has any AI restrictions and adjust proposed articles accordingly.
- Understanding Expectations – Before implementing an AI tool, researchers should verify that the sponsor funding the research has not imposed any restrictions or limitations on AI use. Similarly, faculty should communicate clearly with co-investigators, subawardees, and collaborators to ensure all parties share the same understanding of how AI will be incorporated into the project and what restrictions apply.
Federal Agency Guidance
- National Science Foundation – On December 14, 2023, NSF released Notice to research community: Use of generative artificial intelligence technology in the NSF merit review process. The two biggest takeaways from the notice are:
- NSF reviewers are prohibited from uploading any content from proposals, review information, or related records to non-approved generative AI tools.
- Proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used and how it was used to develop their proposal.
- National Institutes of Health – On June 23, 2023, NIH released The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process. The key takeaway from the notice is:
- NIH prohibits its scientific peer reviewers from using natural language processors, large language models, or other generative AI technologies to analyze and formulate peer review critiques for grant applications and R&D contract proposals.
USU will share guidance and updates on this website as they are received from federal agencies and other reputable sources.