
Faculty Toolkit

Academic Integrity

As it works today, ChatGPT can produce confident writing but lacks substance. It can write an excellent paragraph but not an excellent paper. If a student were to submit work written by ChatGPT, how would you know? Let's look at some key issues with AI-generated research papers.


Our students are developing their scholarly voices and their own writing styles. ChatGPT writes with a confidence that many of our students are still building. If you have already read a student's authentic writing and that student submits work from ChatGPT, you will most likely know, or at least suspect, immediately that it is not their voice.


If you ask ChatGPT to provide three scholarly sources on a topic, it will return what appears to be a list of articles. Try to find those articles and you will see that the titles are (mostly) titles of real articles, the authors are (mostly) real people who publish scholarship on the topic, and the journals are real journals. But the citations it provides simply pull those elements from different places and put them together; the cited articles themselves do not exist.

In short, ChatGPT has learned that a scholarly citation consists of a title, author(s), journal, and publication year, but it doesn't understand how those elements are connected, so it pulls them from its training data individually rather than as a whole.

You can quickly check scholarly sources using Google Scholar. 
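If you want to check several citations at once, you can also query a bibliographic index programmatically. The sketch below uses Crossref's public REST API (api.crossref.org); the function names and the idea of eyeballing the top matches are illustrative choices, not a standard workflow, and a title that returns no close match still warrants a manual Google Scholar search before concluding it is fabricated.

```python
# Minimal sketch: look up a cited title in Crossref's index to see whether
# a real record with matching authors and journal exists.
# Crossref's public works API is real; helper names here are illustrative.
import json
import urllib.parse
import urllib.request

def crossref_query_url(title, rows=3):
    """Build a Crossref works query URL for a citation title."""
    params = urllib.parse.urlencode({"query.title": title, "rows": rows})
    return f"https://api.crossref.org/works?{params}"

def lookup_citation(title):
    """Fetch candidate matches; returns (title, first author, year) tuples
    so you can compare them against the citation the student submitted."""
    with urllib.request.urlopen(crossref_query_url(title)) as resp:
        items = json.load(resp)["message"]["items"]
    results = []
    for item in items:
        authors = item.get("author", [])
        first = authors[0].get("family", "?") if authors else "?"
        date_parts = item.get("issued", {}).get("date-parts") or [[None]]
        results.append((item.get("title", ["?"])[0], first, date_parts[0][0]))
    return results
```

Comparing the returned author and journal against the citation in hand is the key step: as noted above, a fabricated citation often pairs a real title with the wrong authors or journal.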

See what happened when I asked ChatGPT for three scholarly sources. All of these article titles are real, but they are not written by the listed authors or published in the listed journals.


Since ChatGPT writes so believably, it's easy to take the content it generates at face value. ChatGPT is very good at structure and style but struggles with the actual information that is the heart of a research assignment. "The words it produces are statistically plausible, but it is not creating assessments, judgments, behaviors or meaning because it has no internal model or understanding of the topics it writes about. GPT-3 lacks common sense and the ability to reason abstractly. When faced with prompts it has yet to be trained for, it quickly starts fabricating information, making errors or becomes incoherent."


A researcher asked ChatGPT a question about an infamous person 


A librarian prompted it to answer a question using a specific article


I prompted ChatGPT to analyze an article that does not exist