Latest News

Shadow AI use in the workplace

Written by Gadens | 10 March 2026

As AI tools become ever more available and accessible, their use in the workplace is inevitable. Increasingly, however, these tools are being used without the formal endorsement or knowledge of the employer (‘shadow AI’).

Many employers have been caught unprepared for the implications of shadow AI use in the workplace, particularly when it comes to managing confidentiality, privacy, the generation and ownership of intellectual property, the impact on employee duties such as honesty, and the policy frameworks needed to respond.

Some of the emerging risk areas for employers are:

  1. Protecting confidential information and proprietary knowledge – AI tools often require users to share information so the tool can learn, develop reasoning and then perform a task. However, the information uploaded may be private, commercially confidential or sensitive, and its use, even for a work-related task, could result in the inadvertent disclosure of that information, with serious legal consequences, particularly in sectors where privacy and confidentiality are critical.

  2. Managing performance, productivity and breaches of trust – Does your workplace ban the use of AI in the generation of work product? If so, how do you enforce it? If not, is some AI use permitted? Has that been communicated to employees along with appropriate guardrails to protect the employer?

The absence of clear policies, guidance and prohibitions may lead to employees unknowingly breaching confidentiality and privacy obligations, with employers potentially being held responsible for that breach.

Employers may also face challenges enforcing performance standards when there is a policy vacuum in the workplace. While it may be fair to sanction an employee who passes off AI-generated work product as their own, grey areas about what AI use is and is not permitted can undermine an employer’s ability to defend disciplinary action.

Recent cases highlight the importance of risk management steps as the use of AI becomes more common at work. In one case, an employee’s dismissal was found to be unfair after it emerged that a communication generated by the employer using an AI tool had informed the employee of a final restructure decision before consultation had taken place.[1] In another, a Victorian lawyer lost his practising certificate and legal practice after he caused a list of authorities containing fictitious, AI-generated cases to be handed to the court in the midst of a proceeding.[2]

It is important for HR teams to plan for and address the risks associated with employee use of AI in the workplace. This is likely to be a rapidly changing environment, and continual monitoring and adaptation will be the key to keeping up.

For a wrap up of Australia’s evolving AI governance landscape, see Gadens’ website.

Amanda Junkeer, Partner

Diana Diaz, Partner

[1] Hayley Lord v Millet Hospitality Geelong Pty Ltd [2025] FWC 2740

[2] Dayal [2024] FedCFamC2F 1166