At the University, we define generative AI as a rapidly evolving class of computer algorithms capable of creating digital content – including text, images, video, music and computer code.
These algorithms work by deriving patterns from large sets of training data, which become encoded into predictive mathematical models – a process commonly referred to as ‘learning’. These models do not keep a copy of the data they were trained on; rather, they generate novel content entirely from the patterns they encode. People can then use AI interfaces such as ChatGPT, Copilot, Gemini, Claude or MidJourney to input ‘prompts’ – typically, instructions in plain language – that make generative AI models produce outputs.
There are many AI-powered applications and tools that can be beneficial to your studies, but there are also situations in which using AI tools may not be appropriate. Using AI responsibly involves using these tools ethically, understanding their limitations, and maintaining a balance between technology and traditional approaches to learning. It’s important for students to develop critical thinking, written and verbal communication, and other skills, and to conduct independent research, rather than relying solely on AI tools.
Learning to use artificial intelligence (AI) tools productively and responsibly is an important part of developing digital literacy. We want to ensure that you have the skills and knowledge to adapt and thrive in a changing world.
At the same time, it's important to understand when use of AI is unethical, inappropriate, or breaches the University’s rules about academic integrity. Submitting assessments that aren't your original work – including work produced by AI – may constitute a breach of academic integrity.
For general learning purposes, as opposed to assessments, you are permitted to use generative AI tools, as long as you follow all University policies when doing so, including the Academic Integrity Policy 2022 (pdf, 376KB), the Acceptable Use of ICT Resources Policy 2019 (pdf, 258KB) and the Student Charter 2020 (pdf, 221KB).
The AI in Education Canvas site contains a number of ideas for how to use AI to improve your learning.
From Semester 2, 2025, the University has implemented a ‘two-lane’ approach to assessments.
From Semester 2, 2025, students are generally not permitted to use AI in secure (supervised) tasks, such as exams and tests, unless the Unit of Study coordinator has given express permission in the Unit Outline. Using AI when not allowed could amount to a breach of academic integrity for which you could be investigated.
For open (unsupervised) assessments, students will be able to use AI, and need to appropriately acknowledge its use – provided they do so, this would not be a breach of academic integrity.
Whether you can use AI or not for a particular assessment will be detailed in your unit of study outline. Go to the ‘Assessment’ section and look for the ‘Use of AI’ column. The options are:
To find out more, visit using artificial intelligence tools responsibly in your studies and assessments.
If you are permitted to use automated writing tools or generative AI by your unit coordinator, you will need to include a statement in your submitted work explaining:
If your unit coordinator has made any additional stipulations about how they want the class to acknowledge AI use, you should also follow these rules.
Turnitin has a tool for detecting the use of artificial intelligence in student work. If a marker or teacher suspects that part or all of your assessment has been generated using AI technology, and its use was not permitted or not acknowledged, the Turnitin AI detection tool may be used to evaluate the situation. It’s important to note that the AI detector score would not be the only evidence relied upon in an academic integrity case; it would be considered alongside other relevant evidence.
If you have further questions about how to use AI responsibly and productively, visit the Canvas site for AI in Education or contact the Office of Educational Integrity by emailing educational.integrity@sydney.edu.au.