Privacy risks for AI and ADM in an evolving regulatory ecosystem
07 May 2024
To support innovation and prevent disruption to critical business processes that leverage AI and ADM, organisations need risk-based identification and assessment of those processes, and a controls management framework that drives transparency, accountability and security.
This involves asking the right questions about where and how these technologies are deployed, and then deploying the right tools in your risk management framework to drive transparency, accountability and security.
Growing regulatory focus on AI and ADM has been a hallmark of the past year, with further regulation expected to be introduced.
In relation to ADM, the Royal Commission into the Robodebt Scheme has been a wake-up call, prompting government agencies to carefully consider their decision-making processes and reminding organisations of the significant impact that the use of ADM can have on critical services.
Just last week, the Attorney-General announced that changes to the Privacy Act are expected to be introduced in August 2024. A distinct focus on driving safety in the use of these technologies can also be seen in the Australian Government's interim response to the 'Safe and Responsible AI' consultation, which sets out a range of proposed responses.
Australia's privacy laws do not currently include specific rules for AI and ADM. This does not mean that no rules apply: the use of these technologies continues to be regulated by the general obligations that apply to all methods of data handling. Existing requirements under the Privacy Act require organisations to handle personal information with transparency, take accountability for their actions and ensure the security of personal information – the use of AI and ADM needs to be viewed through this lens.
However, in addition to existing requirements, the expected Privacy Act reforms will introduce new requirements for substantially automated decisions that have a 'legal or similarly significant effect'.
These requirements, and the 'legal or similarly significant effect' test, are similar to existing requirements under the GDPR.
In addition, the government's proposed 'fair and reasonable' test will mean organisations must also demonstrate how they have weighed the impacts on individuals, and the public interest in protecting privacy, against the organisation's interest in carrying out specific activities involving personal information, including the deployment of AI and ADM using personal information.
Picking up on this year's Privacy Awareness Week themes of transparency, accountability and security, we share some key tools to help you improve these elements in your AI and ADM deployments.
Leaders responsible for privacy risk within organisations often find themselves unaware of where automation has been implemented, and lack a structured process to identify those deployments and risk assess them against regulatory obligations.
ADM and AI are commonly deployed in areas including:
These questions should leave you with a 'to do' list – such as implementing processes to review AI or ADM deployments that have not been risk rated or assessed.
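To make that 'to do' list concrete, the sketch below shows one way a review of unassessed deployments might be automated against a central inventory. It is a minimal illustration only: the inventory fields, deployment names and rating values are hypothetical, and a real register would sit within your broader risk management tooling.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Deployment:
    """One AI/ADM use case recorded in an organisation-wide inventory."""
    name: str
    business_unit: str
    handles_personal_info: bool
    risk_rating: Optional[str]  # e.g. "low" / "medium" / "high"; None if never assessed

def review_todo(inventory: list[Deployment]) -> list[Deployment]:
    """Flag deployments that have never been risk rated or assessed."""
    return [d for d in inventory if d.risk_rating is None]

inventory = [
    Deployment("credit-decisioning", "Lending", True, "high"),
    Deployment("support-chatbot", "Customer Service", True, None),
    Deployment("invoice-matching", "Finance", False, None),
]

for d in review_todo(inventory):
    print(f"TO DO: risk assess '{d.name}' ({d.business_unit})")
```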
The Privacy Act mandates transparency from organisations in their data handling practices and requires privacy policies and collection notices to be clear and accurate. Upcoming reforms are set to reinforce this by requiring that notices are also concise and understandable.
Current privacy regulations require individuals to be provided with clear, targeted collection notices that explain how their data will be used as a result of specific interactions or transactions involving personal information. Without appropriate education and governance, project teams can mistakenly assume that general organisational privacy policies suffice. A robust privacy risk management framework ensures that these notices are designed to build trust and minimise regulatory risk, and that they are regularly updated to reflect evolving data handling practices.
As AI and ADM are integrated into critical business processes, organisations often explore innovative uses of data, which can lead to 'purpose creep'. When data cannot be de-identified, many organisations rely on consent for these secondary uses. To avoid compliance issues, it is crucial that consent management is not siloed: teams responsible for AI/ADM development must share a consistent understanding of the consent status of data subjects with other business units, through a single, reliable source of truth for consent that is integrated with business rules to de-identify or destroy data where consent is withdrawn.
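As a minimal sketch of what such a single source of truth might look like, the example below uses an in-memory register with a withdrawal callback. The class, its method names and the callback-based business rule are illustrative assumptions, not a prescribed design; a production system would use durable storage and audited, asynchronous propagation.

```python
from datetime import datetime, timezone

class ConsentRegister:
    """A single source of truth for consent status, shared across business units."""

    def __init__(self, on_withdrawal):
        # subject id -> purpose -> time consent was granted
        self._consents: dict[str, dict[str, datetime]] = {}
        self._on_withdrawal = on_withdrawal  # callback into downstream data stores

    def grant(self, subject_id: str, purpose: str) -> None:
        self._consents.setdefault(subject_id, {})[purpose] = datetime.now(timezone.utc)

    def has_consent(self, subject_id: str, purpose: str) -> bool:
        # Checked by every team, including AI/ADM developers, before a secondary use.
        return purpose in self._consents.get(subject_id, {})

    def withdraw(self, subject_id: str, purpose: str) -> None:
        self._consents.get(subject_id, {}).pop(purpose, None)
        # Integrated business rule: propagate the withdrawal so downstream
        # systems can de-identify or destroy data held for this purpose.
        self._on_withdrawal(subject_id, purpose)

# Usage: the callback stands in for real de-identification or deletion jobs.
register = ConsentRegister(on_withdrawal=lambda s, p: print(f"de-identify {s} data for '{p}'"))
register.grant("subject-42", "model-training")
print(register.has_consent("subject-42", "model-training"))  # True
register.withdraw("subject-42", "model-training")
print(register.has_consent("subject-42", "model-training"))  # False
```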
Improving organisational accountability, so that the risks posed by AI and ADM remain manageable, requires robust privacy governance structures.
Organisations must take reasonable steps to implement practices, procedures and systems that ensure compliance with privacy obligations. This distinct requirement means that organisations without adequate privacy risk management processes in place can be in breach of their privacy obligations even if no privacy incident or data breach has occurred.
Appointing a privacy risk owner in your organisation is essential to manage the privacy risks associated with AI/ADM deployments in critical business operations. It is also crucial that privacy risk management is integrated into broader risk and internal audit programs. Additionally, establishing a Privacy Management Committee enhances interdisciplinary collaboration in large organisations, facilitating a cohesive approach to privacy risk management.
Accountability for AI and ADM processes requires that each process be mapped, documented, risk rated and understood. This means conducting risk assessments that consider a wide set of obligations, including privacy law, competition law, intellectual property law, sector-specific regulation, contractual obligations, internal standards and stakeholder expectations. These assessments must be updated as the processes, data or outputs change.
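One lightweight way to detect that an assessment has gone stale is to record a fingerprint of what was assessed and compare it whenever the process, data or outputs change. The sketch below is a minimal illustration under that assumption; the record fields, rating values and hashing approach are hypothetical.

```python
from dataclasses import dataclass
from hashlib import sha256

def fingerprint(process_spec: str, data_sources: list[str], outputs: list[str]) -> str:
    """A stable fingerprint of the process, data and outputs at assessment time."""
    material = "|".join([process_spec, *sorted(data_sources), *sorted(outputs)])
    return sha256(material.encode()).hexdigest()

@dataclass
class ProcessAssessment:
    """One mapped, documented and risk-rated AI/ADM process."""
    process_name: str
    obligations_reviewed: list[str]  # e.g. ["privacy", "competition", "IP", "contractual"]
    risk_rating: str                 # e.g. "low" / "medium" / "high"
    assessed_fingerprint: str        # what the rating was based on

def needs_reassessment(assessment: ProcessAssessment, current_fingerprint: str) -> bool:
    """Reassess whenever the process, its data or its outputs have changed."""
    return assessment.assessed_fingerprint != current_fingerprint

# Usage: recompute the fingerprint on each change and compare.
original = fingerprint("loan-approval-v1", ["bureau-data"], ["approve/decline"])
assessment = ProcessAssessment("loan-approval", ["privacy", "contractual"], "high", original)
current = fingerprint("loan-approval-v2", ["bureau-data", "open-banking"], ["approve/decline"])
print(needs_reassessment(assessment, current))  # True: the process and data changed
```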
To discover higher-risk uses of AI and ADM for assessment, focus first on:
Without visibility into where and how high-risk AI and ADM processes are deployed, organisations will struggle to meet privacy and other regulatory obligations.
It is crucial to implement digital training materials tailored for teams that develop, use or rely on the results of AI and ADM systems handling personal information. This training should clearly explain how privacy obligations specifically relate to their work with AI and ADM systems, making these concepts easily applicable in their daily activities.
Consider risk-focussed training or 'top up' reminders as part of project kick-offs, role induction or project milestones, and embed ongoing staff enablement in your post-deployment maintenance and compliance strategy.
Internal policies and standards lay the groundwork for fostering a culture of privacy and accountability but often lack detailed guidance for staff who deal with AI/ADM systems daily. Implementing privacy-specific Standard Operating Procedures (SOPs) that offer detailed instructions for applying privacy principles to high-risk activities is crucial for bridging this gap.
Privacy laws already require organisations to take reasonable steps to protect personal information, but coming reforms will clarify that this includes both technical and organisational measures.
The 'reasonable steps' an organisation must take to protect personal information will depend on the circumstances, and will evolve alongside personal information handling practices and the cyber-attack and defence environment. What is considered 'reasonable' will also be informed by the proliferation of guidance, advisories, alerts and standards – key parts of Australia's 2023-30 Cyber Security Strategy.
Organisations need to understand and properly manage the datasets that they expose to their own and their third-party providers' AI models. The benefits of AI and ADM deployments, such as frictionless automation, can also allow bad actors to exploit process or logic errors, vulnerabilities or unexpected system behaviours rapidly, repeatedly and at volume.
While it is not possible to prevent every threat, organisations can develop and evolve threat models that describe, and respond to, the more likely and more harmful threats, informed by current trends and an understanding of the vulnerabilities and attack vectors most relevant to AI and ADM systems.
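By way of a hypothetical sketch, a threat model entry might score likelihood against harm so that the most likely and most harmful threats are addressed first. The threats listed and the likelihood-times-harm scoring are illustrative assumptions following a common risk-matrix convention, not content from this article.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One entry in an evolving threat model for an AI/ADM system."""
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    harm: int        # 1 (minor) to 5 (severe)

    @property
    def priority(self) -> int:
        # A common risk-matrix convention: likelihood multiplied by harm.
        return self.likelihood * self.harm

threats = [
    Threat("prompt injection via free-text input", likelihood=4, harm=3),
    Threat("at-volume abuse of an automated approval flow", likelihood=3, harm=5),
    Threat("training-data poisoning through a third-party feed", likelihood=2, harm=5),
]

# Respond to the more likely and more harmful threats first.
for t in sorted(threats, key=lambda t: t.priority, reverse=True):
    print(f"[{t.priority:>2}] {t.description}")
```

Scoring conventions vary between organisations; what matters is that the model is revisited as trends, vulnerabilities and attack vectors change.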
AI and ADM deployments often depend on complex digital supply chains, including AI suppliers, apps, resellers, data providers, model training services, data centres and secure data transmission services. Regulators and customers have been clear: the reputational and regulatory risks associated with third-party providers cannot be outsourced. It is crucial to assess and manage these risks from the beginning.
Effective due diligence should be conducted early in the tendering process to evaluate the privacy and cybersecurity maturity of service providers, and their ability to manage risks and meet regulatory requirements. Customers and suppliers alike will need to understand the specific Australian context: fragmented approaches to regulation across jurisdictions mean it will not be sufficient for suppliers simply to attest to compliance in another jurisdiction.
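As an illustration, a due diligence questionnaire can be held as structured data so that unresolved items are easy to surface during tendering. The categories and questions below are illustrative assumptions, not an exhaustive or prescribed checklist.

```python
# Hypothetical checklist: the categories and questions are illustrative only.
CHECKLIST = {
    "privacy": [
        "Does the supplier handle Australian personal information in line with the APPs?",
        "Can the supplier support access, correction and deletion requests?",
    ],
    "security": [
        "Does the supplier hold a current, relevant security certification?",
        "Are incident notification timeframes contractually defined?",
    ],
    "jurisdiction": [
        "Has compliance been assessed against Australian requirements specifically, "
        "rather than merely attested for another jurisdiction?",
    ],
}

def open_items(answers: dict[str, bool]) -> list[str]:
    """Return every checklist question not yet answered 'yes'."""
    return [q for qs in CHECKLIST.values() for q in qs if not answers.get(q, False)]

# Usage: record answers as they come in during tendering.
answers = {"Are incident notification timeframes contractually defined?": True}
for q in open_items(answers):
    print("OPEN:", q)
```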
Organisations must integrate their critical suppliers into their operational resilience, risk management, privacy management, and incident response plans, ensuring the clear delineation of responsibilities.
To learn more about what you can do today to drive transparency, accountability and security in AI and ADM, meet regulatory expectations, and prepare for the impending Privacy Act reforms, please reach out to the key contacts below.
Authors: Geoff McGrath (Partner), Chris Baker (Partner, Risk Advisory), John Macpherson (Partner, Risk Advisory), John Moore (Director, Risk Advisory), Leon Franklin (Director, Risk Advisory), Andrew Hilton (Expertise Counsel), Michael Turner (Executive, Ashurst Risk Advisory) and Patil Sevagian (Specialist, Ashurst Risk Advisory).
We draw on Ashurst's combined legal and risk advisory expertise to help organisations keep pace with the evolving Privacy Act reforms and the actions they can take to position themselves for success.
This publication is a joint publication from Ashurst Australia and Ashurst Risk Advisory Pty Ltd, which are part of the Ashurst Group.
The Ashurst Group comprises Ashurst LLP, Ashurst Australia and their respective affiliates (including independent local partnerships, companies or other entities) which are authorised to use the name "Ashurst" or describe themselves as being affiliated with Ashurst. Some members of the Ashurst Group are limited liability entities.
The services provided by Ashurst Risk Advisory Pty Ltd do not constitute legal services or legal advice, and are not provided by Australian legal practitioners in that capacity. The laws and regulations which govern the provision of legal services in the relevant jurisdiction do not apply to the provision of non-legal services.
For more information about the Ashurst Group, which Ashurst Group entity operates in a particular country and the services offered, please visit www.ashurst.com
This material is current as at 7 May 2024 but does not take into account any developments to the law after that date. It is not intended to be a comprehensive review of all developments in the law and in practice, or to cover all aspects of those referred to, and does not constitute legal advice. The information provided is general in nature, and does not take into account and is not intended to apply to any specific issues or circumstances. Readers should take independent legal advice. No part of this publication may be reproduced by any process without prior written permission from Ashurst. While we use reasonable skill and care in the preparation of this material, we accept no liability for use of and reliance upon it by any person.