Universities across the globe are struggling with a question that emerged suddenly and refuses to go away: how do you maintain academic integrity when students can generate entire essays using artificial intelligence in seconds? The plagiarism detection tools that institutions have relied on for two decades were not designed for this moment, and many educators are discovering that the AI detection software they adopted in a panic often accuses innocent students while missing actual violations.
A doctoral candidate at the University of the Potomac in Washington, DC, thinks the education sector has been asking the wrong question. Paul Showemimo, an EdTech innovator with a track record of building technology that works under difficult conditions, is developing an approach that sidesteps the detection problem entirely. His Multi-Modal Verification Framework does not try to determine whether AI wrote something. Instead, it asks whether the student who submitted the work actually understands it.
The concept is straightforward. Students submit written work as usual, but they also participate in brief oral verification sessions where they explain what they wrote. A student who genuinely engaged with the material can discuss it coherently. A student who simply copied AI-generated text without comprehension cannot. The framework is currently being tested through pilot projects at institutions in the United States, the United Kingdom, and Africa, with faculty members providing feedback that shapes its ongoing development.
Showemimo comes to this problem with unusual credentials. Before beginning his doctoral research, he founded Eklipse Technologies in Nigeria, where he built a School Management Platform that grew to serve more than 350 schools across six African countries. The platform, known as S5, managed records for over 125,000 students in Nigeria, Zimbabwe, Kenya, Uganda, Rwanda, and Cameroon. In 2019, a major Nigerian government technology partner acquired it.
What distinguished that platform from competitors was its focus on solving practical problems that other developers ignored. Schools across Africa typically operate with unreliable internet connectivity and constrained budgets. Most school management software at the time charged schools for every text message sent to parents, a model that could cost institutions over a million naira annually just for basic attendance notifications. Showemimo designed S5 to function offline and replaced those expensive text messages with free app notifications. The innovations proved so effective that competitors eventually copied both features.
That experience building technology for resource-constrained environments now informs his approach to academic integrity. Most proposed solutions to the AI problem assume institutions have small class sizes, abundant staff time, and generous technology budgets. Showemimo recognizes that the vast majority of colleges and universities do not operate under those conditions. Any framework that cannot scale to large lecture courses or work at institutions with limited resources will simply be ignored, no matter how theoretically sound it may be.
The fundamental challenge, as the EdTech researcher sees it, is that detection-based approaches are fighting a losing battle. Every advance in AI detection software is quickly matched or exceeded by improvements in AI generation tools. Institutions that invest heavily in detection technology today may find those tools obsolete within months. The arms race favors the AI developers, not the educators.
Verification-based approaches offer a more sustainable path forward precisely because they do not depend on technological superiority. The ability to explain complex ideas in conversation requires genuine understanding, something that cannot be faked simply by using better software. As AI tools become more sophisticated, the fundamental test remains unchanged: does the student understand what they submitted?
The approach also addresses a fairness problem that has emerged as institutions rush to adopt AI detection tools. These programs frequently generate false positives, flagging work that students wrote themselves. International students whose English follows non-standard patterns, students with distinctive writing styles, and students who happen to phrase ideas in ways that resemble AI output all face the risk of unfair accusation. A verification system gives honest students a straightforward way to demonstrate their knowledge rather than forcing them to prove a negative.
Showemimo holds an MBA from Hult International Business School in Boston and is currently pursuing his Doctor of Business Administration at the University of the Potomac. He is an IEEE Senior Member. His research focuses specifically on how universities can adapt their academic integrity systems to function in an environment where AI writing tools are ubiquitous and constantly improving.
The pilot projects testing his Multi-Modal Verification Framework are generating data about what works in practice versus theory. Faculty members report on how long verification sessions actually take, what types of questions most effectively assess student understanding, and how students respond to the process. That feedback loop allows the framework to evolve based on real classroom conditions rather than abstract principles.
“Everyone is asking how to catch students using AI. I think that is the wrong question,” Showemimo explains. “AI tools will only get better and harder to detect. The right question is, how do we verify that students actually learned something? If we focus on that, it does not matter what tools they used along the way.”
That perspective represents a significant shift from how most institutions have approached the problem. Rather than treating AI as a cheating tool to be blocked and detected, the verification framework accepts AI as a permanent feature of the educational landscape and focuses instead on ensuring that students develop genuine understanding regardless of what tools they use in the process.
The broader question facing higher education is whether existing academic integrity systems can adapt quickly enough to meet a challenge that emerged seemingly overnight. The plagiarism detection infrastructure that institutions built over twenty years assumed a relatively stable technological environment. That assumption no longer holds, and universities are searching for approaches that will remain viable even as AI capabilities continue to advance at an accelerating pace.