AI on smartphones makes it difficult to catch students cheating, and social media makes students less likely to report it.
Students at Princeton University are about to have an experience no Princeton student has had since the late 1800s: taking exams under the watch of a proctor. The change stems from concerns about the proliferation of AI-enabled cheating among students.
The change takes effect July 1. Exams taken after that date will be proctored, the formal term for being supervised to ensure academic integrity during testing. Proctoring can take many forms, including cameras, microphones and screen-sharing software. Princeton's approach is to have human instructors observe students during exams and report infractions to the student-led Honor Committee for discipline.
The move to proctoring was requested by both professors and students. According to Princeton, students worried that cheating with generative AI is too easy because it can take place on personal devices like smartphones, making it harder to detect and report as the school's honor system requires. The Ivy League school also notes that reports have become less frequent and are often anonymous because of potential retaliation on social media, in the form of doxxing or other harassment.
“This has made it difficult for the Honor Committee and the Office of the Dean of Undergraduate Students to act on concerns, even when there is significant buzz or outrage about allegedly egregious violations,” Michael Gordin, dean of the college at Princeton, said in the policy proposal outlining the changes. “If students are the only ones present in the examination room and do not wish to come forward, there is no check against misconduct during assessments.”
A 2025 survey of Princeton students found that about 30% admitted to cheating. Despite that rise, the school found “no significant increase in the number of cases brought before the Honor Committee.”
Princeton's administration, including the Committee on Examinations and Standing, voted unanimously in April to reinstate proctoring. It is the largest change to the university's honor system since its introduction in 1893, when the code was adopted specifically to end proctoring at the school. Students are still required to adhere to the Honor Code and will be asked to attest during exams that they have not violated it.
An evolving fight against AI cheating
Princeton is one of several schools making major changes in response to students' use of AI. In 2024, Duke University stopped assigning numerical scores to applicant essays in the admissions process. Christoph Guttentag, Duke's dean of undergraduate admissions, noted that essays were once a way to better understand applicants, but with the rise of AI, the university can no longer assume an essay accurately reflects the applicant's own writing. Duke still assigns numerical scores to other categories, such as curriculum strength, extracurricular activities and test scores.
The return to surveillance at Princeton is also consistent with what researchers are seeing in higher education.
“Our research shows that students already face significant uncertainty about when and how AI use is acceptable, and that this uncertainty is generating real tensions around academic integrity,” said Jennifer Rubin, a senior researcher at the education research organization Foundry10. “Princeton’s decision reflects a broader trend we are seeing across education: institutions are turning to increased oversight when existing standards appear inadequate.”
Rubin notes that proctors can relieve “some of the immediate pressure” when it comes to cheating with AI, but that more will need to be done to effectively navigate AI given its almost ubiquitous availability.
This is already happening across academia. Many schools have adopted AI-detection tools and strict rules on AI use. It's common for schools to allow students limited use of AI for tasks such as correcting grammar and spelling in essays or brainstorming, while specifying that asking AI to write essays or create other work constitutes plagiarism.
AI policies have also reached lower grade levels, with nearly half of all teachers in grades 6-12 reporting regular use of AI-detection tools (PDF).