No control over how data is used

Dr Jonathan Kummerfeld from the School of Computer Science in the Faculty of Engineering said this was a prudent move, as there is currently no control over how data given to the chatbot is used. Dr Kummerfeld works on AI and natural language processing, with a particular focus on systems for collaboration between people and AI models.

“At the same time, it is important to note that Australia can benefit from the scientific discoveries underpinning DeepSeek. We can use those innovations to build our own systems, without the risks of using the chatbot service,” says Dr Kummerfeld.

DeepSeek being open-source poses unique challenge

Dr Suranga Seneviratne, a privacy and cybersecurity expert, says that, as with TikTok, this decision is not surprising given the potential risks.

“Large Language Models (LLMs) introduce well-known concerns, including data privacy, confidentiality, and the possibility of containing backdoors. Also, despite significant recent advancements, LLMs can still hallucinate, meaning their outputs must be verified in critical settings.

“Beyond the AI itself, the app’s access to user data, such as the clipboard, can pose additional risks. A unique challenge arises from DeepSeek being open-source; while the original company controls the official web and app versions, anyone can host their own instance. This makes a complete ban challenging to enforce, though in this case, the risk may be considered low,” says Dr Seneviratne, from the School of Computer Science, Faculty of Engineering.


More user caution needed with AI tools

Dr Armin Chitizadeh, an expert in AI ethics at the School of Computer Science, Faculty of Engineering, says people need to be more cautious when dealing with any GenAI tools.

“The emergence of GenAI tools has introduced many issues, and I’m glad that DeepSeek’s arrival has created a wave of concern.

“The first concern is that in the race to create the fastest and best AI, companies might cut corners on how customer data is stored, with insufficient time spent on protecting it from malicious actors. 

“The second concern is that people now tend to blindly trust AI-generated content. AI can intentionally or unintentionally hallucinate or provide false answers, yet people trust it as if it comes from reliable sources. 

“The third, and possibly most crucial, concern is that AI can easily reason and draw important conclusions from seemingly insignificant user data. Users might provide data to GenAI tools, assuming it’s not valuable. However, AI can connect the dots and reach important conclusions. This newly inferred information is then in the hands of the GenAI tool owner, who has full control over its use and sale.”


Impact on AI industry the real story

Professor Kai Riemer is a Professor of Information Technology and Organisation at the University of Sydney Business School and Director of Sydney Executive Plus, where he teaches an executive education course on AI fluency.

“This generates a lot of interest because it’s AI and China, but it’s just prudent data security. It doesn’t matter whether it’s China or any other country: government data should not be housed offshore and outside of secure Australian systems,” says Professor Riemer.

“The real story here is how these open-source platforms appear to be reverse engineering the breakthroughs made by pioneers such as OpenAI and impacting the AI industry business model. I liken products such as DeepSeek and Meta AI to MP3s: they didn’t invent recorded music, but this clever compression technique had a huge impact on how we access it.”

Ban a pre-emptive measure to protect national security

Professor Uri Gal is a Professor of Business Information Systems at the University of Sydney Business School. His research focuses on the organisational and ethical aspects of digital technologies, including data security.

“Government agencies typically manage highly sensitive information, and there are worries that DeepSeek’s extensive collection of data – such as device details, usage metrics, and personal identifiers – could expose confidential information to vulnerabilities if accessed or stored outside Australian borders. Although the open-source nature of the model offers transparency regarding its code, it does not guarantee that user data is handled solely within Australia or according to local privacy standards. This risk of cross-border data access is a key factor behind the ban. 
 
“Beyond government applications, generative AIs like DeepSeek pose additional risks to the public. These include the potential spread of misinformation, unintentional biases in outputs, and the risk of privacy breaches if personal data is inadvertently exposed or misused. 

“Moreover, the scale and automation of such systems can lead to accountability challenges, which could complicate efforts to trace and rectify erroneous or harmful content. The ban can thus be seen as a pre-emptive measure aimed at protecting national security and public trust until robust data protection safeguards are established.”

Media contact

Ivy Shih

Media Adviser (Engineering)
