Many employers have been caught unprepared for the implications of ‘shadow’ AI use in the workplace, particularly when it comes to managing confidentiality, privacy, the generation and ownership of intellectual property, the impact on employee duties such as honesty and the policy frameworks needed to respond.
Some of the emerging risk areas for employers are:
The absence of clear policies, guidance and prohibitions may lead to employees unknowingly breaching confidentiality and privacy obligations, with employers potentially being held responsible for that breach.
Employers may face challenges enforcing performance standards when there is a policy vacuum in the workplace. While it may be fair to sanction an employee who passes off AI-generated work product as their own, grey areas around permitted and prohibited uses of AI can undermine an employer’s ability to defend disciplinary action.
Recent cases highlight the importance of risk management steps as the use of AI becomes more common at work. In one case, an employee’s dismissal was found to be unfair after a communication, generated by the employer using an AI tool, informed the employee of a final restructure decision before consultation had taken place.[1] In another, a Victorian lawyer lost his practising certificate and legal practice after he caused a list of cases, which included fictitious, AI-generated citations, to be handed to a Court in the midst of a matter.[2]
It is important for HR teams to plan for and address the risks associated with employee use of AI in the workplace. This is likely to be a rapidly changing environment, and continual monitoring and adaptation will be the key to keeping up.
For a wrap-up of Australia’s evolving AI governance landscape, see Gadens’ website.
Amanda Junkeer, Partner
Diana Diaz, Partner
[1] Hayley Lord v Millet Hospitality Geelong Pty Ltd [2025] FWC 2740
[2] Dayal [2024] FedCFamC2F 1166