
Research projects

Browse our latest research projects


We are dedicated to advancing responsible AI development and implementation through multi-disciplinary research, fostering public trust, and informing policy decisions in complex regulatory environments.

Our research themes

Learn about our research objectives and themes

Towards sovereign AI: Developing large models for the public sector and society

In an era dominated by proprietary AI systems, this groundbreaking research sets out to develop sovereign, publicly governed large AI models tailored to Australia’s public sector and societal values. Led by a senior research team spanning law, engineering, education, and social sciences, this initiative positions The University of Sydney at the forefront of ethical AI innovation. 



Inspired by Switzerland’s open public LLM, our research aims to create models that reflect Australia’s civic diversity, including Indigenous languages, and uphold democratic principles. It lays the foundation for a national AI capability that is inclusive, transparent, and accountable. 

Transforming youth AI literacy in the age of social media

As the Australian government bans social media access for children and adolescents under 16, our research offers an inclusive alternative. Rather than relying on restrictive policies, the team advocates for youth-led initiatives that combine education, policy, and lived experience to address the real harms associated with social media and AI-driven misinformation. 

Led by researchers from four disciplines across The University of Sydney, this initiative establishes a youth-led research program aimed at empowering young people to identify key challenges and propose solutions for improving digital literacy and civic engagement. The project brings together experts in design, media, health, law, and engineering to co-create strategies that reflect the realities of young people’s online lives. 

Co-designing trusted AI for inclusive pain management in disability health contexts

This research tackles a critical gap in digital health: the exclusion of people with disabilities from AI-enabled healthcare innovation. Led by a multidisciplinary team spanning business, medicine, and engineering, the initiative focuses on co-designing NociTrack, an AI-powered, non-verbal pain assessment tool. The technology uses machine learning and vital signs to predict pain in patients who cannot communicate verbally, including those with intellectual disabilities.

The distinction of this research lies in the commitment to inclusive design, whereby clinicians, caregivers, engineers, and people with lived experience are actively shaping the app’s interface, consent flows, and governance framework. By embedding ethical and participatory principles into technical development, this initiative is redefining trust and equity in digital health.

Cross-comparative GenAI studies: Underspheres within Southeast Asia

Generative AI (GenAI) is transforming creative industries across Southeast Asia, but how do these changes play out in different cultural and regulatory landscapes? Our research builds on the innovative theoretical concept of “underspheres” to further examine how large language model products are reappropriated by creative communities for potentially harmful purposes and information disorder.

Drawing on expertise in media, communication and political science, the team investigates how GenAI intersects with cultural production, misinformation, and policy across diverse Southeast Asian contexts. This research offers a nuanced, regionally grounded perspective, exploring AI’s global impact. 

Screen 'truth' and the synthetic moving image: Risks, benefits and implications of text-to-video AI

Text-to-video Generative AI is revolutionising the screen and communications industries. Tools like Sora, Veo 3, and Runway now allow users to produce convincing, fully synthetic moving images, without cameras, crews, or prior filmmaking experience. This democratisation of production is reshaping creative workflows across film, media, science, defence, and education. 

This interdisciplinary research brings together leading researchers in film, media, law, and digital humanities to critically examine the risks, benefits, and broader implications of text-to-video AI. The team investigates how these technologies are transforming aesthetic practices, disrupting evidentiary norms, and challenging existing copyright and personality rights frameworks. Their work is grounded in a screen industries perspective, with a focus on how creators, consumers, and policymakers engage with synthetic video content. 

A joint educator-industry-academia workshop on student creativity in the age of AI

Creativity is a cornerstone of professional success across disciplines, from science and design to law and health. Yet Generative AI, with its tendency to produce “most probable” outputs, challenges traditional notions of originality and assessment.

This research initiative convenes a dynamic workshop that brings together educators, industry leaders, and academic researchers to explore how student creativity can be nurtured in an AI-saturated world. It addresses pressing questions such as: how do we teach and assess creativity when AI can generate content on demand, and how do we prepare students for workplaces that demand daily innovation? By fostering dialogue across sectors, this project aims to shape future-ready education that embraces both human ingenuity and technological disruption.

  • Associate Professor Kaz Grace, School of Architecture, Design and Planning
  • Associate Professor Jen Scott Curwood, School of Education and Social Work
  • Dr Samuel Gillespie, School of Architecture, Design and Planning

Fostering preservice teachers' creative and distributed agency in using generative AI

As Generative AI rapidly enters classrooms, its role in shaping creativity and equity in education is under scrutiny. This research explores how preservice teachers can develop the creative and distributed agency needed to use GenAI not just efficiently, but meaningfully, especially in designing inclusive learning environments that embrace student diversity. 

Led by researchers from the Sydney School of Education and Social Work, the initiative responds to growing concerns about GenAI’s impact on creative confidence and over-reliance on automated content. Rather than viewing GenAI as a shortcut, our research positions it as a catalyst for pedagogical transformation. The project investigates how future educators can take initiative, reframe problems, and express ideas in ways that foster equitable learning through thoughtful engagement with AI tools. 

Social agent-based simulation in social sciences research

Social sciences research is entering a new frontier with the development of an AI-driven simulation platform powered by Generative AI (GenAI) to model complex human interactions. Through role-play, this platform creates autonomous agents, each with distinct personality traits and subject to environmental stimuli and societal rules, that interact within virtual social environments.

The platform’s versatility opens transformative possibilities across multiple domains. It can simulate the spread of information and misinformation, helping researchers understand what makes content trustworthy and how narratives evolve. Beyond academic inquiry, the platform has practical applications for policymakers and educators. It can be used to test policy interventions, anticipate societal responses, and explore sensitive issues such as radicalisation, discrimination, and youth behaviour in digital spaces. 
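The kind of information-spread simulation described above can be illustrated with a deliberately simplified, toy agent-based sketch. Everything below (the `Agent` class, the `susceptibility` parameter, and the `simulate` loop) is an illustrative assumption, not the platform's actual design; a GenAI-backed version would replace the simple adoption probability with LLM role-play.

```python
import random

class Agent:
    """A toy social agent with a fixed susceptibility to adopting a claim.

    Note: 'susceptibility' is an illustrative stand-in for the richer
    personality traits and rules the platform described above would model.
    """
    def __init__(self, name, susceptibility):
        self.name = name
        self.susceptibility = susceptibility  # chance of adopting a heard claim
        self.believes = False

def step(agents, rng):
    """One round: each current believer shares the claim with one random
    peer, who adopts it with probability equal to their susceptibility."""
    believers = [a for a in agents if a.believes]
    for speaker in believers:
        listener = rng.choice([a for a in agents if a is not speaker])
        if not listener.believes and rng.random() < listener.susceptibility:
            listener.believes = True

def simulate(n_agents=20, rounds=10, seed=0):
    """Seed one believer, run the rounds, and record how many agents
    believe the claim after each round."""
    rng = random.Random(seed)
    agents = [Agent(f"agent{i}", rng.uniform(0.1, 0.9)) for i in range(n_agents)]
    agents[0].believes = True  # the claim starts with a single agent
    history = []
    for _ in range(rounds):
        step(agents, rng)
        history.append(sum(a.believes for a in agents))
    return history
```

Running `simulate()` returns a round-by-round count of believers, the kind of trajectory researchers could inspect to see how quickly a narrative saturates a population under different susceptibility profiles.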

More projects

The Mediated Trust research program brings together leading researchers and experts to investigate the relationship between social trust, media, AI and digital communication.

Significant crises of trust have been identified in social, political and economic institutions throughout the world. This research program identifies communication media, including digital and social media as well as news and journalism, as playing a central role both in causing distrust and in countering crises of trust.

In 2023 Professor Terry Flew was awarded a five-year ARC Laureate Fellowship to establish the Mediated Trust program.

Visit website: https://mediated-trust-arts.sydney.edu.au/

The International Digital Policy Observatory (IDPO) is a publicly accessible, real-time database with enhanced analytical tools that tracks developments in digital, internet, and emerging technology regulation across multiple countries.

The purpose of this infrastructure is to shape innovative policy search and analysis techniques concerning multifaceted regulation, policies, and governance across our digital society and economy.

Through the development of such an enabling infrastructure, policy makers, industry stakeholders, and civil society advocacy groups will be able to draw upon a ‘common pool’ resource to better understand and respond to international trends in the tech policy arena. The IDPO seeks to place Australia at the forefront of policy and regulatory debates globally.

The rapid introduction of artificial intelligence into education is occurring with inadequate policy support. Additionally, there is a lack of stakeholder input into decisions about the use of AI in education. Utilising social science and data science approaches, this project aims to democratise policy about AI in education by building tools to monitor policies and by developing collaborative policy-making methods. The expected outcomes include publicly available policy resources to anticipate, and respond to, the role of AI in education, as well as participatory frameworks for policy making. The benefits include informed stakeholder engagement and concrete policy recommendations that are globally relevant and adaptable to the Australian context.

Visit website: https://education-futures-studio.org/

The ‘Generative AI in Education’ project is funded by the NSW Teachers Federation and led by a team from the Education Futures Studio at the University of Sydney, the University of New South Wales, and Queensland University of Technology. It aims to co-create practical guidance for schools about generative AI in education with Teachers Federation members across NSW, using in-person and online collaboration methods. Activities include an online survey to capture teachers’ insights and needs on AI usage in education, followed by a workshop to jointly develop the necessary guidance. The result will be a set of comprehensive guidelines distributed among NSW TF members to support the ethical and efficient use of AI in educational settings.

Visit website: https://education-futures-studio.org/

Funded by the Policy Challenge Grants of the James Martin Institute for Public Policy, the ‘Governing AI Education and Equity Together’ project aims to assist policymakers and stakeholders in anticipating and responding to the educational opportunities and inequalities that arise from the use of AI-enabled technologies. A diverse array of stakeholders will be actively involved in the process of generating and evaluating policy ideas about the growing EdTech ecosystem in New South Wales. This ecosystem spans student, teacher, system, and infrastructure-related technologies that shape differential access, usage skills, and outcomes.

Visit website: https://education-futures-studio.org/projects/

Contact us