We've updated our assessment policies to address the opportunities and challenges presented by new technologies, including generative artificial intelligence (AI).
Our goal is to make assessment and feedback more meaningful, supportive and fair, while maintaining academic integrity and the value of your University of Sydney degree.
There are many AI-powered applications and tools that can help with your studies, but there are also situations in which using AI tools may not be appropriate. Using AI responsibly means using these tools ethically, understanding their limitations, and maintaining a balance between technology and traditional approaches to learning.
It’s important for you to develop critical thinking, written and verbal communication, and other skills, and to conduct independent research, rather than relying solely on AI tools.
At the University, we define generative AI as a rapidly evolving class of computer algorithms capable of creating digital content – including text, images, video, music and computer code.
These algorithms work by deriving patterns from large sets of training data that become encoded into predictive mathematical models, a process commonly referred to as ‘learning’. These models do not keep a copy of the data they were trained on, but rather generate novel content entirely from the patterns they encode. People can then use AI interfaces like ChatGPT, Copilot, Gemini, Claude or Midjourney to input ‘prompts’ – typically, instructions in plain language – to make generative AI models produce outputs.
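The ‘learning’ described above can be illustrated with a deliberately tiny sketch. The example below is a toy bigram model – nothing like the scale or architecture of real generative AI systems – but it shows the same principle: patterns are extracted from training text, and novel sequences are then generated from those patterns rather than copied verbatim from the source.

```python
import random

# Toy illustration only (real generative AI is vastly more complex):
# a bigram model "learns" by counting which word follows which in the
# training text, then generates new text from those learned patterns.

def train_bigrams(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Generate a new word sequence by sampling from learned patterns."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # no learned continuation for this word
        out.append(rng.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The generated sentence follows the statistical patterns of the training text without being a stored copy of it – the same basic idea, at a miniature scale, behind the predictive models described above.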
Your unit outline includes a "Use of AI" column to indicate whether or not you can use AI for each assessment.
The options within this column are:
You can use AI in open (unsupervised) assessments. These assessments support the development of disciplinary knowledge, as well as AI literacy. As such, you're encouraged to use AI tools productively and responsibly as part of your learning experience.
Your educators will guide you on the most appropriate types of AI use.
When you submit an open assessment, you'll need to appropriately acknowledge AI use so as not to breach academic integrity.
You generally can't use AI in secure (supervised) assessments, unless the unit coordinator has given express permission in the unit outline. Using AI when not allowed may amount to a breach of academic integrity and could lead to an investigation.
Your unit outline will have information about using AI for these assessments.
Misusing AI can breach the Academic Integrity Policy 2022. Examples of misuse include:
If you use AI in your assessments, you're required to acknowledge it – this includes acknowledging any tools that use generative AI, such as translation tools, paraphrasing tools or referencing tools. You are not required to acknowledge tools used for word processing, or which only correct basic spelling and grammar.
Failing to provide acknowledgement or any other misuse can lead to a breach of the Academic Integrity Policy 2022.
If you're permitted to use automated writing tools or generative AI as per the unit outline, you will need to include a statement in your submitted work explaining:
If your unit coordinator has made any additional stipulations about how they want the class to acknowledge AI use, you should also follow these rules. For example, they may require you to submit a log of the AI inputs and outputs used during the preparation of your assessment.
I acknowledge the use of [AI tool + version, publisher and URL] to generate [summary of what was generated].
I acknowledge the use of [AI tool + version, publisher and URL] to summarise [x source], which I then [describe the action taken].
I input the prompt [write the prompt in quotation marks or italics] into [AI tool + version, publisher and URL] which produced the output provided in Appendix A [attach an Appendix at the end of the assessment]. I then [describe how you used any part of the output in your assessment].
Turnitin has a tool for detecting the use of artificial intelligence in student work. If a marker or teacher suspects that part or all of your assessment has been generated using AI when its use was not permitted, or was not acknowledged, the Turnitin AI detection tool may be used to help evaluate the situation.
It’s important to note that the AI detector score will not be the only evidence relied upon in an academic integrity case; it will be considered alongside other relevant evidence.
If you're unsure whether AI use is appropriate in your coursework unit, talk to your unit coordinator or contact the Office of Educational Integrity (OEI) at educational.integrity@sydney.edu.au.