To investigate how ChatGPT performs on university assessments compared with students, Talal Rahwan and Yasir Zaki invited faculty members who taught 32 different courses at New York University Abu Dhabi (NYUAD) to each provide three student submissions for ten assessment questions that they had set. ChatGPT was then asked to produce three sets of answers to the same ten questions, which were assessed alongside the student-written answers by three graders who were unaware of the source of each answer. The ChatGPT-generated answers achieved an average grade similar to or higher than that of students in 9 of the 32 courses. Only in mathematics and economics courses did students consistently outperform ChatGPT. ChatGPT outperformed students most markedly in the 'Introduction to Public Policy' course, where its average grade was 9.56 compared with 4.39 for students.
The authors also surveyed 1,601 individuals from Brazil, India, Japan, the US, and the UK (including at least 200 students and 100 educators from each country) about whether ChatGPT could be used to assist with university assignments. Of the students surveyed, 74 percent indicated that they would use ChatGPT in their work. In contrast, educators in all countries underestimated the proportion of students who planned to use ChatGPT, and 70 percent of educators reported that they would treat its use as plagiarism.
Finally, the authors report that two tools for identifying AI-generated text, GPTZero and AI Text Classifier, misclassified the ChatGPT-generated answers from this research as human-written 32 percent and 49 percent of the time, respectively.
Together, these findings offer insights that could inform policies on the use of AI tools in educational settings.
Ibrahim H, Liu F, Asim R, Battu B, Benabderrahmane S, Alhafni B, Adnan W, Alhanai T, AlShebli B, Baghdadi R, Bélanger JJ, Beretta E, Celik K, Chaqfeh M, Daqaq MF, Bernoussi ZE, Fougnie D, Garcia de Soto B, Gandolfi A, Gyorgy A, Habash N, Harris JA, Kaufman A, Kirousis L, Kocak K, Lee K, Lee SS, Malik S, Maniatakos M, Melcher D, Mourad A, Park M, Rasras M, Reuben A, Zantout D, Gleason NW, Makovi K, Rahwan T, Zaki Y.
Perception, performance, and detectability of conversational artificial intelligence across 32 university courses.
Sci Rep. 2023 Aug 24;13(1):12187. doi: 10.1038/s41598-023-38964-3.