AI in Education

Beyond the Panel:

What We Talked (and Didn’t Talk) About on AI in Education at FEBS 2025

We had a lively and thought-provoking panel on AI and education at the 49th FEBS Congress in İstanbul. The session was chaired by Ly Villo and Loredano Pollegioni, with Manuel João Costa and Robert Harris facilitating, and I was honoured to join fellow panellist Sana Ahmed for the discussion. Together with an engaged and enthusiastic audience, we explored how generative AI is being integrated into molecular life sciences education. We identified challenges and opportunities, and exchanged tips and best practices.

The discussion leaned more optimistic (perhaps a bit too optimistic?), so, for once, I didn’t need to play my usual role as the cautious optimist. Of course, given the broad scope and limited time, not every aspect could be covered. In this post, I’d like to highlight a few key takeaways and share my perspective on some challenges that didn’t make it into the discussion.


Novel Challenges Require Challenging Norms

One of the most frequently raised concerns was cognitive offloading (students becoming overly dependent on AI tools), which can lead to reduced critical thinking, weaker information retention, and a shallower understanding of course material. There’s preliminary evidence supporting this trend (1), especially when students use GenAI to complete assignments without engaging meaningfully with the task.

Another widely shared worry was academic dishonesty: the use of GenAI to bypass assessments or generate answers that misrepresent students’ own understanding. While these are not new issues in education, GenAI introduces new layers of complexity that we’re still learning how to navigate.

Like many digital technologies before it, GenAI can support learning if used well. While it has features that are uniquely powerful (and uniquely risky), it’s still, at its core, a tool. That’s why our focus should remain on sound educational design. You may not be able to fully AI-proof your assessments, but with strong pedagogical design, you might not need to.

Supervised in-class activities using GenAI instead of take-home assignments, a focus on the process of learning over time rather than on the end product, an emphasis on higher-order thinking over recall, and an environment where students reflect on their own learning process… Practices like these are more resilient than technical safeguards.


A useful starting point for building a pedagogy-based strategy for GenAI integration is the PICRAT framework (2). It offers a simple way to reflect on two key dimensions of technology use in the classroom:

  1. What are students doing with technology?
    Are they being Passive, Interactive, or Creative?
  2. What is the teacher’s relationship to technology?
    Is it being used to Replace, Amplify, or Transform traditional teaching?

The goal, then, is to design learning experiences where students use GenAI interactively and creatively, actively engaging with both the task at hand and their own learning process. Teachers, in turn, can use GenAI not just to replace materials like presentations or assessments, but to develop experiences that amplify or even transform their teaching.

GenAI doesn’t fix or ruin education; it amplifies what’s already there. In teaching approaches that prioritize personalized, timely feedback, it can be an effective and practical ally. But in more traditional models focused on memorization and written output as proof of learning, GenAI can feel disruptive, even counterproductive. When we shift our perspective and begin to see that assessment “is” learning (3), and not just a way to measure it, GenAI will open up new possibilities (Shout out to Steve!).

Another challenge we briefly touched on was the stigmatization of AI use, where users are sometimes seen as lazy or dishonest. This often stems from a broader negative image of GenAI products. For many, AI still represents the fast and sloppy way to produce something that looks finished but lacks real depth. You’re probably thinking of distorted six-fingered images, or superficial but professional-sounding texts overloaded with certain “AI-preferred” phrases. And yes, this kind of “AI slop” is increasingly flooding the internet (4). But it doesn’t reflect the full potential of these tools. As before, GenAI tends to amplify what already exists, whether that’s genuine expertise or a lack of substance.

However, the stigma persists. As I write this piece, ChatGPT is helping me stay organized, maintain clarity, and fine-tune grammar. But that raises a question: how should I disclose that? Should I say this post was “created with AI assistance” or “developed using AI”? And how will that be perceived?

I can’t help but wonder whether some readers will see this disclosure as a weakness, as if the work is somehow less authentic or valuable because it involved AI. It may give the impression of being automatically generated without human agency. That kind of pre-emptive judgment can create pressure to hide AI use altogether. And that’s precisely one of the emerging risks in education: when students or educators feel they must conceal their use of AI to avoid stigma or misunderstanding. This is where open communication becomes essential. We need shared norms and clear expectations about how we talk about the role of AI in our work.

During the panel, an audience member mentioned a recent case at Northeastern University, where a student demanded a tuition refund after discovering that their professor had used ChatGPT and other tools to generate lecture materials (5). At the time, I understood the case as reflecting a broader perception issue: that AI-assisted work is not “real” work, that it’s somehow less legitimate, or even a form of cheating. I initially assumed the objection was tied to low-effort or sloppy use of GenAI. But when I looked into it more closely after the event, it became clear that the core concern was about transparency and fairness. The student felt there was a double standard: they were prohibited from using AI, while their professor used it without disclosure. Cases like this remind us that the real issue isn’t just whether AI is used, but how fairly, openly, and consistently it’s integrated into educational practice.

One of the related challenges raised by the audience was the Liar’s Dividend: the idea that as AI-generated content becomes more widespread, it becomes easier to dismiss real work as fake. In educational settings, this can mean that genuine student work is wrongly flagged as AI-generated, or that actual misconduct is denied under the cover of doubt. This is exacerbated by misplaced confidence in detection tools, or by the belief that one can simply “tell” whether something was written by AI from a few telltale words or image distortions, an illusion shaped by the visible flaws of low-quality AI-generated content. In truth, we’re moving toward a future where AI-assisted work is increasingly indistinguishable from human output. The Liar’s Dividend is a growing threat, and we must begin preparing for a reality where simulacra become hyperreality (for these terms, see 6) with the ease of a mouse click.

Unlike the Liar’s Dividend, the stigmatization of AI use feels more like a short-term issue, one that will likely fade as the technology becomes ubiquitous. New and better models are being developed at an astonishing pace. (Remember: the one you’re using right now is the worst you’ll ever use.) AI tools are increasingly embedded into everyday software, including word processors, slide makers, and learning management systems. Soon, nearly every app will have some form of AI integration. For now, many of these integrations are superficial, driven more by hype than utility, but over time they will (hopefully) become more meaningful. As that happens, producing “AI-free” work may become impractical, irrelevant, and perhaps impossible. This shift brings new, frightening challenges for education. When lecture slides, assignments, feedback comments, essays, and theses are partially generated by AI, questions arise around authorship, ownership, data privacy, personal identity, and the humanity (the human component) of the end products. The stigma may soon disappear, but more complex issues are already taking its place.

To be perfectly honest, our current policies, which are based solely on telling students how to use or not use AI, are no longer enough. Without structural change or real enforcement, these rules rely entirely on voluntary compliance, which is difficult to verify and easy to ignore, creating an enforcement illusion (7).

What we need is systematic, sustainable support: a machine that can generate evidence-based, adaptable, and practical solutions to the evolving challenges of GenAI in education. The good news? That machine already exists. It’s us, when we come together in spaces like this panel to share, question, and reflect.

But to move forward, we need more than panels. We need structured, faculty- or university-level opportunities for ongoing, open dialogue, spaces where all learning partners can exchange insights, challenge assumptions, and grow together. We must prioritize sound pedagogy, surface both good practices and failures, and treat GenAI integration as a shared responsibility.

On that note, I highly recommend exploring The AI Forge, a platform for educators navigating AI integration, mentioned during the panel by Nigel Francis, one of its co-creators alongside David Smith.

Conversations like this remind me how important it is to keep learning together. I'm grateful to the organizers and participants of the 49th FEBS Congress for creating the space to do just that.

References

(1) Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006

(2) Kimmons, R., Graham, C. R., & West, R. E. (2020). The PICRAT Model for Technology Integration in Teacher Preparation. Educational Technology Research and Development.

(3) Rutherford, S., Pritchard, C., & Francis, N. (2024). Assessment IS Learning: Developing a Student-Centred Approach for Assessment in Higher Education. FEBS Open Bio, 15(1), 21–34. https://doi.org/10.1002/2211-5463.13921

(4) Oliver, J. (2025, June 23). AI Slop [Video]. Last Week Tonight with John Oliver. HBO. https://www.youtube.com/watch?v=TWpg1RmzAbc

(5) ET Bureau. (2025, May 14). Student demands tuition refund after spotting professor’s use of ChatGPT in course material. The Economic Times. https://m.economictimes.com/magazines/panache/caught-red-handed-using-ai-student-demands-tuition-fee-refund-after-spotting-chatgpt-generated-content-in-professors-notes/articleshow/121214622.cms

(6) Baudrillard, J. (1994). Simulacra and Simulation (S. F. Glaser, Trans.). University of Michigan Press. (Original work published 1981)

(7) Corbin, T., Roe, J., & Furze, A. (2024). The Challenge of GenAI to Assessment Validity [Preprint]. https://doi.org/10.5281/zenodo.10787241
