AI in lecture halls – no longer a question of if but of how

By Prof. Dr. Till Krause
Artificial intelligence does homework, answers exam questions, and structures arguments. Schools and universities are scrambling for ways to prevent this – and frequently asking the wrong question in the process, says Dr. Till Krause, a professor of media and communication at the Landshut University of Applied Sciences.

In February 2026, I briefly thought I’d ended up in a science fiction movie: Judges at the Kassel Administrative Court had to rule on a question that just a few years ago would have sounded like fantasy: Was a text written by a human being – or unlawfully produced by a machine? Specifically, the case involved a bachelor’s thesis in information technology at the University of Kassel that the examiners had graded as failed. The reason: serious deception through the prohibited use of artificial intelligence. The student sued, but the court upheld the failing grade. As early as 2023, TU Munich rejected an applicant because their essay was suspiciously perfect: too fluent, too impeccable, too ChatGPT.

The expert
© Priscillia_Grubo

Dr. Till Krause is a professor of media and communication at the Landshut University of Applied Sciences and a multiple award-winning author. As the founder of Hypeshift Media GmbH, he advises businesses and publishing houses on dealing with artificial intelligence. In his seminars, students practice making their use of AI transparent. Some of the ideas in this text he batted back and forth with the most reliable intern he’s ever had: the chatbot Claude.

So, in the future, do we as teachers need to say goodbye to homework, essays, and other written assignments because everything is written by AI anyway? Almost none of today’s students do their academic work without the help of artificial intelligence. Summarizing texts, assisting with research, structuring ideas – all that has long since become routine. According to a survey by the Higher Education Policy Institute, 92 percent of British university students regularly use AI tools for studying; globally, the figure is 86 percent.

How many students at schools and universities have their entire homework done by AI is a question on which hardly any reliable research exists so far. And that takes us right to the core of the problem, because a project paper is never just about the text that gets evaluated in the end. It’s about the process that leads to the text: finding the research question, reviewing the literature, breaking a big subject down into manageable portions that can be answered individually. In the end, the text is only the proverbial tip of the iceberg – the visible part above the water. The parts underneath – the thinking, searching, doubting, rethinking – are the actual achievement.

Knowledge is no longer a rare asset

But what if, in the age of AI, we’re up against new icebergs? Icebergs that consist only of the visible tip? For teachers, this raises questions we should have asked ourselves much earlier, because our way of conducting exams is still based on the assumption that knowledge is a scarce asset. That assumption used to be correct – in the days when the word “lecture” still had a literal meaning. There’s a small 14th-century painting by Laurentius de Voltolina, today on display at Berlin’s Kupferstichkabinett (Museum of Prints and Drawings). It depicts a lecture hall: a teacher sits on a raised podium with students in front of him – some attentive, some whispering, and one who has fallen asleep. If that sleepy student were to wake up today, hundreds of years later, in a modern lecture hall, he would presumably know immediately where he was, because the principle of the university has changed so little: someone stands at the front, and the rest listen (more or less). The way it’s always been.

This 14th-century painting shows that the principle of the university has hardly changed in hundreds of years. Someone stands at the front, and the students listen (more or less). © Wikimedia Commons

The difference between then and now: Today we have no lack of information but a surplus of it – just a click away. And now generative AI has arrived, which not only reproduces knowledge but edits it, structures it, and translates it into understandable language. In this kind of reality, it’s less about having facts in our heads and more about sorting, curating, and making meaningful use of existing knowledge.

The likely response of many educational institutions is better detection tools to distinguish human from machine writing. But that’s a race teachers are almost certain to lose. There are now freelancers who “de-AI-ify” AI texts so they sound more human. Are such services even necessary, though? AI detection software is notoriously unreliable:

94 percent

of AI-written exam answers go undetected, according to a UK study.

The University of Copenhagen drew remarkably clear conclusions: It deliberately chose not to use AI detection software – too many false positives, too many wrongly accused students. Instead, students are trained to handle AI, using it as a tool rather than a shortcut. “Employers expect our graduates to master the latest technologies,” says Stefan Nordgaard, one of the people responsible for the university’s AI strategy. In open exam formats, AI is permitted as a matter of principle, albeit with a duty to declare it. Transparency instead of control.

Transparency protects against cheating

In my seminars at the Landshut University of Applied Sciences, I take a similar approach. Students submitting homework to me must declare their use of AI, just as they would any other source. At the end of each project, there’s an appendix listing which AI models were used, which specific prompts were entered, and for what purpose – just like a reference list. The experiment isn’t finished, and we’re still learning ourselves how to evaluate this meaningfully. But the basic idea is clear: people who create transparency about how they worked with AI on an academic paper are not cheating.

“Students submitting homework to me must declare their use of AI, just as they would any other source.”


Leading universities worldwide are handling AI use in similar ways. In July 2025, Oxford University issued a binding policy for all exams: AI is permitted where teachers expressly allow it – and must then be declared. Unauthorized use counts as academic misconduct, the same as plagiarism. ETH Zurich has formulated precise rules around transparency, responsibility, and fairness: Anyone using AI must indicate which tool they used, for which part of their work, and to what extent. Moreover, copyright-protected, private, or confidential content may not, as a rule, be fed into commercial AI systems – unless expressly permitted. Everywhere, the question is no longer “Has someone used AI?” but “How – and does that person understand what they’re doing?”

The reason: those who consistently outsource their thinking risk something that’s hard to measure but easy to observe – the loss of their own voice.

A study by the Max Planck Institute for Human Development (MPIB) analyzed 740,000 hours of material and came to the following result: People are increasingly using the same vocabulary as AI systems. In an article in the NZZ, journalist Adrian Lobe calls this the “McDonaldization of language” and describes what it produces: “AI-ish,” a watered-down, uniform language lacking any individuality. Since 2023, the English verb “to delve” has appeared massively more often in academic abstracts, as have “showcasing” and “notably” – typical ChatGPT vocabulary infiltrating real research.

“As teachers, wasn’t our original mission to teach students to find their own voice? To show how new knowledge is gained by thinking, experimenting, and critiquing? No human being has ever become smarter by not thinking.”

Prof. Dr. Till Krause

Besides, to evaluate AI texts in the first place, one needs textual expertise. With AI, the editorial skills that journalists used to practice above all – reviewing, evaluating, and editing material – are becoming more important. Today, everyone needs those skills to avoid getting lost in the flood of nonsense, disinformation, memes, and, of course, plenty of excellent science-based information (which, thanks to AI, is easier to find than ever before!).

Oral exams are becoming increasingly important

The crucial question anyone who teaches should ask themselves is: “What do I really want to impart?” In my media ethics seminar, for instance, I could ask my students to name three concepts of responsibility ethics. That mainly tests memorized factual knowledge that any AI could supply on demand. I prefer the following approach: “Here’s a specific problem. How should we deal with AI-generated images in journalism? Solve the problem with an ethics model of your choice. Let me take part in your thought process.” And then a discussion ensues.

“One of the most practical answers to the AI challenge is also one of the oldest: the conversation – in other words, the oral exam. Even Socrates knew that those who have truly understood a subject will show it in dialogue.”

Prof. Dr. Till Krause

In the working world, much is settled through spoken communication. In real time, without a delete key. That’s why oral exam formats should play a bigger role at schools and universities – for instance, through random defenses of written work, or pitches that put the core idea in a nutshell and hold up under critical questioning. Formats like these quickly reveal whether someone has truly mastered their subject – or merely handed in a prettily formatted text that no one bothered to think about.

Yet having said this – with all due caution – we shouldn’t forget the opportunities AI offers. The complaint that a new technology will dull people’s minds has by now become folklore. The printing press allegedly undermined critical thinking. Radio was supposed to destroy the culture of reading. Even writing was once seen as a threat to memory and intellect. And yet we’re still reading, thinking, and writing. In case of doubt, those who amplify their own skills with AI will write better texts, have more original ideas, and think more clearly than those who forgo the tools entirely.

I explain this to my students using the Sullenberger case. On January 15, 2009, a catastrophe nearly occurs in the sky above New York. Shortly after the takeoff of an Airbus A320, a flock of Canada geese flies into both engines, causing total engine failure. The pilot, Chesley Sullenberger, has only moments to save the aircraft. No handbook, no autopilot. What he does have: some 20,000 hours of hands-on flying under his belt. He lands the aircraft on the Hudson River – a so-called ditching maneuver, complicated and risky. All 155 people on board survive.

Modern airliners are controlled by autopilot most of the time. That is sensible and safe. But when the autopilot can no longer handle the situation, a human being must take control – immediately and on their own. And that won’t work if you’ve never learned to fly yourself.

The same applies to AI at universities and schools. Anyone who outsources their thinking without ever having searched, doubted, or rewritten anything themselves will eventually face a problem to which AI has no answer – the flock of geese in the engine. Consequently, the educational institutions of the future will be places where the question “What do you know?” is asked less often than “What can you do with what you know?” Places where people experiment with artificial intelligence and get to know its enormous potential as well as its limitations. And ultimately come out smarter.

If Laurentius de Voltolina were to paint his lecture hall picture today, it might look different: The scholar would no longer stand alone at the lectern but be engaged in dialogue. The students would no longer listen passively – or sleep – but take active part in an intellectual discourse. And somewhere in the background, subtly present, would be AI – a tool, not a substitute for one’s own thinking. That is how young people could be made fit for the latest technologies. And be prepared for the moment on the job when they must solve a problem independently and spontaneously, without any help. Even if it’s safe to assume it won’t be a flock of geese.