SaferAI provides a 2-hour remote professional development workshop for Australian teachers on AI-safety in the classroom.
Students don't know how to use AI safely, yet they are using it to help them write their assessments. This puts their cognitive development at risk, which directly affects their future success.
Teachers don't know how to manage and integrate AI safely into their teaching and learning practices. This not only puts their students at risk but also undermines their professional responsibility to keep students safe.
Students and teachers are using AI in ways that may be illegal when it comes to handling sensitive and private data. This puts the school at risk of breaching Australian law.
Teachers will learn how to design assessments that remain valid and defensible in the presence of generative AI. This includes prioritising process evidence, in-class and supervised tasks, and assessment formats that require explanation, application, and reasoning rather than recall. Teachers are guided on setting clear AI-use rules and declarations so assessment integrity is maintained without reliance on detection tools.
Teachers will learn how to use AI in ways that comply with Australian law when handling student data. This includes understanding privacy obligations, consent, data storage and overseas disclosure risks, and why personal or sensitive information must never be entered into public AI tools.
Teachers will learn to identify common forms of AI misuse, including outsourcing thinking and over-reliance on AI-generated explanations. Teachers will learn how these patterns disrupt attention, memory formation, critical thinking, and a student’s sense of ownership over their work.
Teachers will learn how to clearly and confidently communicate with students and parents about what AI misuse looks like and why it matters. This includes explaining the cognitive, academic, and wellbeing risks of inappropriate AI use in plain language, without fear-based messaging. Teachers are guided on setting clear expectations, addressing concerns, and building shared understanding so AI use is transparent, consistent, and learning-focused.
Teachers will learn how to identify and respond when students use AI for therapy, emotional support, or relationship advice. Teachers will explore why these uses are unsafe, the risks they pose to student wellbeing, and how to set clear boundaries without stigma. Teachers are guided on appropriate conversations, referral pathways, and redirecting students toward safe, human support systems.
Teachers will gain a clear, non-technical understanding of how generative AI tools like ChatGPT produce responses, including how they predict language rather than “understand” information. Teachers will explore why AI can sound confident while being wrong, how bias and hallucinations occur, and what this means for reliability in learning and assessment.