The Professor's Guide to Spotting AI-Generated Text: Beyond the Plagiarism Checker
- DR. SCOTT STRONG
- Jun 11, 2025
- 6 min read

Background: Becoming a Human AI Detector
As educators, we are responsible for upholding academic integrity among students. Throughout my academic career, I have taught over 300 graduate and undergraduate courses as adjunct faculty. My full-time job is at a Fortune 100 company, where I lead Learning and Development for a 10,000-person Global Technology business unit that manages its own large language model and associated AI tools.
Recently, a significant portion of my teaching role has evolved into acting as a human AI detector, because AI writing tools present new challenges that evade traditional plagiarism checkers. These tools generate original content rather than copying it, but the resulting work can lack the critical element of human intellectual effort. The number of students I flag for AI use at my university has risen from one per term to three or four, and the trend is still climbing. Based on my experience in both academia and industry, I want to share how you can identify whether work is AI-generated.
Note: In all my classes, I inform students that while I support using AI for research and generating citations, it should not be used for writing tasks.
A real-life example:
Recently, I reviewed a student submission on leadership styles. It was clean, well-organized, and factually correct. Yet, it felt hollow. The analysis I performed on that text serves as a perfect case study for identifying the digital ghostwriter in our students' work. Here’s what to look for when you get that uncanny feeling that you're reading a paper no human wrote.

1. The "Too Perfect, Too Generic" Problem
The first red flag is often the voice—or lack thereof. AI-generated text is typically sanitized of any true personality. It's the elevator music of prose: pleasant, inoffensive, and utterly forgettable.
Look for: Flawless but bland language. The writing is grammatically perfect, but there are no unique turns of phrase, no interesting sentence structures, and no personal style.
In the leadership paper: The descriptions of "Autocratic" or "Servant" leadership were correct, but they read like encyclopedia entries. A student genuinely grappling with the material might say, "My last boss was a classic autocratic leader, which made our team efficient but also fearful." The AI simply states, "Productivity through delegation, clear and direct communication."
2. A Rigid and Predictable Structure
AI models are trained on patterns, and they excel at reproducing them. This often results in a paper that is so perfectly structured it feels inorganic.
Look for: An unvarying, templated format. Each section is a mirror image of the last.
In the leadership paper: Every single leadership style was broken down into the exact same format: an introduction, a bulleted list of characteristics, and a "PROS/CONS" section. Human writers, even when following an outline, will vary their transitions and sentence flow. An AI will rigidly adhere to the template, creating a repetitive, robotic rhythm.
3. The Absence of True Synthesis or Argument
This is perhaps the most crucial tell for an academic setting. AI compiles information; it does not synthesize it. It can present facts, but it cannot connect them to a larger, original argument or offer a nuanced perspective.
Look for: A collection of facts without a "so what?" factor. The paper tells you what something is but never truly explores why it matters in a specific context or what the author thinks about it.
In the leadership paper: The text listed eight leadership styles and ended with a generic "How to choose" section. There was no overarching thesis. A student paper should argue a point, such as "While multiple leadership styles have merit, transformational leadership is uniquely suited to the challenges of the modern tech industry because..." The AI-provided text makes no such claims.
4. The Tell-Tale "AI-isms"
As you read more AI text, you begin to notice recurring stylistic tics. These are phrases and formatting choices that are common hallmarks of AI generation.
Look for: Overly formal or cliché transitional phrases like "It is important to note," "In conclusion," or "Delving into..." The text also often relies heavily on bullet points to organize information rather than weaving it into narrative paragraphs. The inclusion of a vaguely cited statistic, like the "Indeed survey" in the example, is a classic AI move to feign authority without providing a specific, verifiable source.
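To show how mechanical some of these tics are, here is a minimal sketch of a phrase scanner. The phrase list is my own illustrative assumption, not a validated detector, and a tool like this can only surface candidates for the human judgment the rest of this article describes:

```python
# A toy scanner for common "AI-ism" phrases.
# The phrase list below is an illustrative assumption, not a vetted
# detector; matches are a prompt for closer reading, never proof.
AI_ISMS = [
    "it is important to note",
    "in conclusion",
    "delving into",
    "in today's fast-paced world",
]

def flag_ai_isms(text: str) -> dict:
    """Return each hallmark phrase found, with its occurrence count."""
    lowered = text.lower()
    counts = {phrase: lowered.count(phrase) for phrase in AI_ISMS}
    return {phrase: n for phrase, n in counts.items() if n > 0}

sample = ("It is important to note that leadership matters. "
          "In conclusion, delving into these styles is useful.")
print(flag_ai_isms(sample))
# {'it is important to note': 1, 'in conclusion': 1, 'delving into': 1}
```

Even a crude count like this makes the point: these phrases cluster so predictably in generated text that a reader who knows the list will start spotting them on sight.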
From Detection to Confirmation: Actionable Strategies
If your instincts are raising red flags, here are a few pedagogical strategies to confirm your suspicions and, more importantly, recenter the learning process.
The Five-Minute Oral Defense: The simplest and most effective tool. Ask the student a few basic questions about their paper. "You mentioned 'servant leadership.' Can you give me an example from your own experience or a current event?" or "Which of these styles do you think is the least effective, and why?" A student who did the work can speak about it. One who copied and pasted from a chatbot will likely be unable to elaborate beyond the text on the page.
Request the Research Trail: Ask to see their sources or outline. A student who conducted genuine research will have browser histories, saved articles, or notes. An AI user will have nothing to show, as their "research" consisted of a single prompt.
Design AI-Resistant Assignments: The ultimate solution is to evolve our assignments. Instead of asking for a generic summary of "leadership styles," require students to:
Analyze a specific case study from class.
Incorporate feedback from a peer-review session.
Connect the topic to their personal experiences or career goals.
Critique a specific reading, rather than a broad concept.
Our goal should not be to simply "catch" students, but to uphold the value of critical thinking and original expression. By learning to spot AI-generated text, we can better guide our students back to the imperfect, and infinitely more valuable, process of learning how to think for themselves.
Full disclosure: I used AI to check my grammar and verify citations. As a professor, I believe in transparency. I mention Ethan Mollick's book in the citations; I haven't read it yet, but I attended his live session when my organization hired him, and I found him a forward thinker worth considering as a source. I encourage my students to be transparent about their process and to demonstrate honesty in their work, so I try to model the same behavior: this full disclosure, imperfections and all, is itself an example of how a writer can establish the authenticity of their writing.
Additionally, I have observed the use of AI in the corporate world, and being able to detect it is giving me a competitive advantage. Detecting AI use amid corporate pretense may be the new skill for advancing in organizations, a topic I will discuss in a future article.
Reference List: Foundational Concepts and Further Reading
On AI in Education and Academic Integrity
This section includes sources that discuss the broad challenges and pedagogical shifts required by the integration of AI in academic settings.
Mollick, E. (2024). Co-Intelligence: Living and working with AI. Penguin Publishing Group.
This book is a foundational text for understanding how to work collaboratively with AI, providing a framework for the kind of pedagogical adaptation mentioned in the article's conclusion.
Eaton, L., & Guerin, C. (2023). Contract cheating and AI: A battle of wits and integrity. In T. Bretag (Ed.), A Research Agenda for Academic Integrity. Edward Elgar Publishing.
This type of scholarship directly addresses the academic integrity challenges, providing context for why professors need to be able to identify non-original work.
Vanderbilt University Center for Teaching. (2024). AI and the Future of Teaching & Learning. Vanderbilt University. Retrieved from https://cft.vanderbilt.edu/guides-sub-pages/ai-and-the-future-of-teaching-learning/
Institutional guides like this one from Vanderbilt's CFT are excellent resources that often pioneer the practical strategies discussed, such as redesigning assignments to be more AI-resistant.
On the Characteristics of AI-Generated Text
These sources delve into the linguistic and structural patterns that distinguish AI-generated text from human writing, forming the basis for the "red flags" in the article.
Guo, B., Zhang, X., Wang, Z., Jiang, M., Nie, J., Ding, Y., Yue, J., & Wu, Y. (2023). How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. arXiv preprint arXiv:2301.07597.
Technical analyses like this one provide empirical evidence for the stylistic differences (e.g., lower perplexity, more generic phrasing) between human and AI writing that can be perceived intuitively as a lack of voice or personality.
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Jurafsky, D. (2023). GPT detectors are biased against non-native English writers. arXiv preprint arXiv:2304.02819.
This paper is crucial as it highlights the limitations and potential biases of detection software, reinforcing the article's argument for relying on qualitative, pedagogical methods over purely technical ones.