Helping Australian Schools Use AI Safely and Responsibly

SaferAI provides a 2-hour remote professional development workshop for Australian teachers on AI safety in the classroom.

Learn More

Start Here

The Problem:

If you are a teacher or school leader, you are likely experiencing the following problems:

Problem #1

Students don't know how to use AI safely but are using it to help them write their assessments. This puts their cognitive development at risk, which directly impacts their future success.

Problem #2

Teachers don't know how to manage and integrate AI safely into their teaching and learning practices. This puts their students at risk and undermines their professional responsibility to keep students safe.

Problem #3

Students and teachers are using AI in ways that may be illegal when it comes to handling sensitive and private data. This puts the school at risk of breaching Australian law.

Our Program:

SaferAI delivers a 2-hour online professional development workshop for Australian teachers. The workshop helps teachers understand the cognitive risks of unguided AI use and how to use AI legally and safely in Australian classrooms. Teachers leave with practical strategies to design AI-resilient learning and assessment and to clearly communicate AI safety expectations to students and parents.

Understanding Generative AI: How It Works and Its Limitations

Teachers will gain a clear, non-technical understanding of how generative AI tools like ChatGPT produce responses, including how they predict language rather than “understand” information. Teachers explore why AI can sound confident while being wrong, how bias and hallucinations occur, and what this means for reliability in learning and assessment.

Designing AI-Resilient Assessments and Maintaining Integrity

Teachers will learn how to design assessments that remain valid and defensible in the presence of generative AI. This includes prioritising process evidence, in-class and supervised tasks, and assessment formats that require explanation, application, and reasoning rather than recall. Teachers are guided on setting clear AI-use rules and declarations so assessment integrity is maintained without reliance on detection tools.

Legal and Ethical AI Use: Protecting Student Data

Teachers will learn how to use AI in ways that comply with Australian law when handling student data. This includes understanding privacy obligations, consent, data storage and overseas disclosure risks, and why personal or sensitive information must never be entered into public AI tools.

Recognising and Addressing AI Misuse in Student Learning

Teachers will learn to identify common forms of AI misuse, including outsourcing thinking and over-reliance on AI-generated explanations. Teachers will learn how these patterns disrupt attention, memory formation, critical thinking, and a student’s sense of ownership over their work.

Communicating AI Safety: Engaging Students and Parents

Teachers will learn how to clearly and confidently communicate with students and parents about what AI misuse looks like and why it matters. This includes explaining the cognitive, academic, and wellbeing risks of inappropriate AI use in plain language, without fear-based messaging. Teachers are guided on setting clear expectations, addressing concerns, and building shared understanding so AI use is transparent, consistent, and learning-focused.

Supporting Student Wellbeing: Responding to AI Use for Therapy and Relationships

Teachers will learn how to identify and respond when students use AI for therapy, emotional support, or relationship advice. Teachers will explore why these uses are unsafe, the risks they pose to student wellbeing, and how to set clear boundaries without stigma. Teachers are guided on appropriate conversations, referral pathways, and redirecting students toward safe, human support systems.