Australia: New AI safety "guardrails" and a targeted approach to high-risk settings
28 November 2024
The Australian Government aims to ensure that the use of artificial intelligence (AI) systems in high-risk settings is safe and reliable, while allowing use in low-risk settings to continue largely unimpeded.
In September 2024, Australia released a Voluntary AI Safety Standard and consulted on proposed new AI laws through a proposal paper on introducing mandatory guardrails for AI in high-risk settings. New laws are not expected to be passed this year, but following the consultation the Government is expected to prepare a response and take steps to implement the mandatory guardrails.
In specific industry sectors, Australian regulators are currently using enforceable industry codes and standards tailored to specific AI risks, and coming data privacy reforms will introduce requirements relating to automated decision-making and other obligations that will affect the use of AI.
The Voluntary AI Safety Standard includes 10 "guardrails" with specific requirements around accountability and governance measures, risk management, security and data governance, testing, human oversight, user transparency, contestability, supply chain transparency, and record keeping. An additional guardrail requires broad stakeholder engagement and assessment.
The initial Voluntary AI Safety Standard focuses more on organisations deploying AI – the next version will include additional and more complex guidance for AI developers. Recognising that Australian businesses tend to rely on third party AI systems, the standard includes specific procurement guidance, including recommendations about which of the guardrails should be reflected in contractual provisions agreed between the AI developer and the AI deployer.
While the standard is voluntary, it sets expectations for what may be included in future legislation and contains guardrails that are closely aligned to the proposed mandatory guardrails for high-risk use cases – which means that implementing the voluntary standard early will help organisations adapt to coming mandatory requirements.
The September 2024 proposal paper also sought views on how mandatory guardrails for high-risk AI should be legislated: as new economy-wide legislation (like the EU AI Act or Canada's AIDA), as "framework" legislation that could be implemented through other laws, or by directly amending existing laws. Regulator powers, enforcement mechanisms and penalty regimes will depend on which approach is adopted.
Automated decision-making and the use of AI within government have been a focus in Australia after the Royal Commission into the Robodebt Scheme recommended wide-ranging reforms.
The Australian Government released a national framework for the assurance of AI in government in June 2024 and has committed to being an 'exemplar' for the safe and responsible adoption of AI, a commitment set out in its policy for the responsible use of AI in Government.
There is an increasing trend in Australia to address specific societal concerns with enforceable industry codes and standards. Australia's eSafety Commissioner has already used powers to register mandatory industry codes and standards under Australia's Online Safety Act to address the risk that generative AI might be used to produce child sexual exploitation or pro-terror materials.
Under the Designated Internet Services Industry Standard, websites or apps that use generative AI must implement controls or processes either to reduce the risk of generating such material or to detect, remove or deter it, and must continuously improve safety. Further obligations apply to distributors or marketplaces of generative AI. Similarly, last year's Search Engine Services Code requires ongoing improvement of machine learning algorithms and models to limit exposure to similar materials in search results.
AI issues will be an important part of Australia's ongoing law reform agenda and regulatory priorities. The Australian Government has flagged a number of areas of law that will be reviewed in parallel to consider the impact of AI developments, for example health-specific laws, consumer laws, copyright law, automated decision-making frameworks for government, privacy reforms, and the issuing of statements of expectations for future regulation. Work is already under way, with a consultation in October 2024 examining how Australian consumer law handles (or should handle) current and emerging AI-enabled services, including issues such as consumer remedies and the distribution of liability among manufacturers and suppliers. However, it is not guaranteed that AI reforms will be passed before the next federal election is called. The timing of Australia's next federal election is flexible within the parliamentary term, with the latest possible date being 17 May 2025.
Despite a slow shift to adopt GDPR concepts, Australia's privacy laws are both unique and in a state of flux – businesses operating directly or indirectly in Australia, or interacting with Australian data, need to understand which business operations are covered by Australian privacy laws, map Australian obligations into their compliance frameworks, and take steps now to adapt to coming reforms.
This publication is a joint publication from Ashurst Australia and Ashurst Risk Advisory Pty Ltd, which are part of the Ashurst Group.
The Ashurst Group comprises Ashurst LLP, Ashurst Australia and their respective affiliates (including independent local partnerships, companies or other entities) which are authorised to use the name "Ashurst" or describe themselves as being affiliated with Ashurst. Some members of the Ashurst Group are limited liability entities.
The services provided by Ashurst Risk Advisory Pty Ltd do not constitute legal services or legal advice, and are not provided by Australian legal practitioners in that capacity. The laws and regulations which govern the provision of legal services in the relevant jurisdiction do not apply to the provision of non-legal services.
For more information about the Ashurst Group, which Ashurst Group entity operates in a particular country and the services offered, please visit www.ashurst.com
This material is current as at 12 September 2024 but does not take into account any developments to the law after that date. It is not intended to be a comprehensive review of all developments in the law and in practice, or to cover all aspects of those referred to, and does not constitute legal advice. The information provided is general in nature, and does not take into account and is not intended to apply to any specific issues or circumstances. Readers should take independent legal advice. No part of this publication may be reproduced by any process without prior written permission from Ashurst. While we use reasonable skill and care in the preparation of this material, we accept no liability for use of and reliance upon it by any person.