5 Emerging AI Threats Australian Cyber Pros Must Watch in 2025

Australian cybersecurity professionals can expect threat actors to exploit artificial intelligence to diversify tactics and scale the volume of cyberattacks targeting organisations in 2025, according to security tech firm Infoblox.

Last year, cyber teams across APAC saw the first signs of AI being used to commit crimes such as financial fraud, and some have linked AI to a DDoS attack on Australia’s financial services sector.

This year, Australia’s cyber defenders can expect AI to be used for a new breed of cyberattacks:

  • AI cloning: AI could be used to create synthetic audio voices to commit financial fraud.
  • AI deepfakes: Convincing fake videos could lure victims to click or provide their details.
  • AI-powered chatbots: AI chatbots could become part of complex phishing campaigns.
  • AI-enhanced malware: Criminals could use LLMs to generate mutated variants of existing malware code.
  • Jailbreaking AI: Threat actors will use “dark” AI models without safeguards.

Infoblox’s Bart Lenaerts-Bergmans told Australian defenders in a webinar that they can expect an increase in the frequency and sophistication of attacks because more threat actors now have access to AI tools and techniques.

1. AI for cloning

Adversaries can use generative AI tools to create synthetic audio content that sounds realistic. The cloning process, which can be done quickly, leverages data available in the public domain, such as an audio interview, to train an AI model and generate a cloned voice.

SEE: Australian government proposes mandatory guardrails for AI

Lenaerts-Bergmans said cloned voices can exhibit only minor differences in intonation or pacing compared to the original voice. Adversaries can combine cloned voices with other tactics, such as spoofed email domains, to appear legitimate and facilitate financial fraud.
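
Because cloned voices are often paired with spoofed email domains, one quick defensive check is whether a sender’s domain publishes a DMARC policy. The minimal Python sketch below assumes the dnspython library and a hypothetical domain; a missing record is a warning sign, not proof of spoofing.

```python
# A minimal defensive sketch, assuming the dnspython library
# (pip install dnspython). It checks whether a sender's domain publishes
# a DMARC policy; a missing record is a warning sign, not proof of spoofing.
# The domain below is purely illustrative.
from typing import Optional

import dns.resolver

def dmarc_policy(domain: str) -> Optional[str]:
    """Return the domain's DMARC TXT record, or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("utf-8", errors="replace")
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None

if __name__ == "__main__":
    policy = dmarc_policy("example.com")  # hypothetical sender domain
    print(policy or "No DMARC record: treat mail from this domain with caution")
```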

2. AI deepfakes

Criminals can use AI to create realistic deepfake videos of high-profile individuals, which they can use to lure victims into cryptocurrency scams or other malicious activities. Synthetic content of this kind lets attackers social engineer and defraud victims more effectively.

Infoblox referenced deepfake videos of Elon Musk uploaded to YouTube accounts with millions of subscribers. QR codes embedded in the videos directed many viewers to malicious crypto sites and scams. It took YouTube 12 hours to remove the videos.

3. AI-powered chatbots

Adversaries have begun using automated conversational agents, or AI chatbots, to build trust with victims and ultimately scam them. The technique mimics how an enterprise might combine human-driven interaction with an AI chatbot to engage and “convert” a prospect.

One example of crypto fraud involves attackers using SMS to build relationships before incorporating AI chatbot elements to advance their scheme and gain access to a crypto wallet. Infoblox noted that warning signs of these scams include suspicious phone numbers and poorly designed language models that repeat answers or use inconsistent language.
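
One of those warning signs, repeated answers, is simple to screen for. The sketch below is a minimal, illustrative heuristic (the threshold and sample replies are assumptions, not Infoblox’s method) that flags conversations where consecutive replies are near-duplicates.

```python
# A minimal, illustrative heuristic for one warning sign Infoblox mentions:
# a scam "chatbot" that repeats near-identical answers. The threshold and
# sample replies are assumptions, not a production detector.
from difflib import SequenceMatcher

def looks_repetitive(replies: list[str], threshold: float = 0.9) -> bool:
    """Flag conversations where consecutive replies are near-duplicates."""
    return any(
        SequenceMatcher(None, prev, curr).ratio() >= threshold
        for prev, curr in zip(replies, replies[1:])
    )

replies = [
    "I can help you grow your investment safely!",
    "I can help you grow your investment safely.",
    "Just send the deposit and I will activate your wallet.",
]
print(looks_repetitive(replies))  # True: the first two replies nearly match
```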

4. AI-enhanced malware

Criminals can now use LLMs to automatically rewrite and mutate existing malware to bypass security controls, making it more difficult for defenders to detect and mitigate. The rewrite-and-rescan cycle can be repeated until the mutated code no longer triggers any detections.

SEE: The alarming state of Australian data breaches in 2024

For example, a JavaScript framework used in drive-by download attacks could be fed to an LLM, which then modifies the code by renaming variables, inserting filler code, or removing whitespace to bypass typical signature-based detection.
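
To see why this defeats hash-based indicators, consider the benign sketch below (the JavaScript strings are harmless stand-ins, not malware): renaming one variable and stripping whitespace produces an entirely different file hash, so a blocklist of known-bad hashes never matches the variant.

```python
# A benign sketch showing why trivial, AI-assisted rewrites defeat
# hash-based detection. The "code" strings are harmless stand-ins.
import hashlib

original = "var payload = fetch(url); run(payload);"
mutated = "var p=fetch(url);run(p);"  # variable renamed, whitespace removed

for label, code in (("original", original), ("mutated", mutated)):
    digest = hashlib.sha256(code.encode()).hexdigest()
    print(f"{label}: {digest[:16]}...")
# The digests share nothing, so a blocklist of known-bad file hashes
# never matches the mutated variant.
```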

5. Jailbreaking AI

Criminals are bypassing the safeguards of mainstream LLMs like ChatGPT or Microsoft Copilot to generate malicious content at will. These “jailbroken” and purpose-built dark AI models already include the likes of FraudGPT, WormGPT, and DarkBERT, which have no built-in legal or ethical guardrails.

Lenaerts-Bergmans explained that cybercriminals can use these AI models to generate malicious content on demand, such as creating phishing pages or emails that mimic legitimate services. Some are available on the dark web for just $100 per month.

Expect detection and response capabilities to become less effective

Lenaerts-Bergmans said AI threats may leave security teams with intelligence gaps: existing tactical indicators such as file hashes may become effectively ephemeral, since every AI-mutated variant produces a new hash.

He said “detection and response capabilities will drop in effectiveness” as AI tools are adopted.

Infoblox said detecting malicious infrastructure at the DNS level allows cyber teams to gather intelligence earlier in the cybercriminal’s workflow, potentially stopping threats before they escalate into an active attack.
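
As a minimal illustration of that idea (the domains and blocklist entries below are hypothetical, not Infoblox’s product), a DNS-level control compares each observed query against threat intelligence before any payload is ever fetched:

```python
# A minimal sketch of DNS-level inspection: each observed DNS query is
# compared against a threat-intelligence blocklist before any payload is
# delivered. The domains and blocklist entries are hypothetical.
BLOCKLIST = {"malicious-crypto-site.example", "dark-llm-panel.example"}

def inspect_query(qname: str) -> str:
    """Return a verdict for a single observed DNS query name."""
    domain = qname.rstrip(".").lower()
    if domain in BLOCKLIST:
        return f"BLOCK {domain}: matches threat intelligence"
    return f"allow {domain}"

for query in ("cdn.example.com.", "malicious-crypto-site.example."):
    print(inspect_query(query))
```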
