Business Insight

The Financial Services AI Series – Balancing the benefits and the risks: how to scale your AI capabilities whilst protecting your data


This article is the third in a series examining a range of legal and risk areas impacted by artificial intelligence (AI) and machine learning (ML). It focuses on the data challenges financial services organisations currently face in embedding and scaling 'traditional' AI, on the emergence of generative AI use cases, and on how these developments affect data governance and data protection. It also sets out what organisations should be considering in these areas in order to prepare for adopting the technology and to maximise the benefits it can bring.

    Machine learning use cases in financial services are well developed, with many organisations having established and leveraged this capability over a number of years.

The recent emergence of generative AI means it will be a key differentiator for early adopters. Over time, however, it will become more prevalent as use cases grow in number and sophistication and the technology matures and becomes embedded within the financial services industry. To date, a number of organisations have, rightly, taken a conservative approach to these new technologies, either restricting their use completely or strictly prohibiting the use of personal data within their use cases. However, as competitive pressure to adopt these technologies grows, organisations need to start thinking about how they can do so in a safe and controlled manner.

The use of generative AI presents a new set of risks that have not previously needed to be considered, owing to the way the technology operates. AI and ML themselves have been around for decades, but recent advances in generative AI have opened new doors for innovation in the workplace and, with them, created new data protection and data governance considerations that must underpin the use of these technologies.

    Rhiannon Webster

    Rhiannon Webster is a partner in our digital economy transactions practice. Rhiannon has 15 years' experience advising on information law both in the UK and on international projects.

    E: rhiannon.webster@ashurst.com

    The AI maturity curve

There are vast differences in maturity across organisations when it comes to AI, from both an adoption and a risk management perspective. Certain financial services institutions are already utilising AI and complex ML models, such as the dynamic pricing models used by insurance companies or the fraud detection and transaction monitoring employed by banks.

Those with well-developed data science capabilities have implemented AI governance frameworks and assigned clear accountabilities across different parts of the business, including the Chief Data Office and the technology, legal and risk functions. Some financial services institutions have already started their generative AI journey, either developing their own models or relying on third-party providers to accelerate their capabilities.

Given the rapid development and mass availability of generative AI, the challenge facing these organisations is determining whether current data governance and data protection practices are sufficient to deliver favourable outcomes for the organisation whilst protecting the rights of individuals. For others, the journey into the world of AI is only beginning, and now is the right time to be thinking about AI governance frameworks in order to accelerate and scale in a compliant, safe and secure manner.

    Matthew Worsfold

    Matthew Worsfold is a partner and the Data & Analytics practice lead at Ashurst Risk Advisory (ARA) having joined the firm as a founding member of ARA in June 2020.

    E: matthew.worsfold@ashurst.com

    Generative AI and the changing data risk landscape

Generative AI presents a changed and heightened set of risks for the financial services industry, mainly because it differs fundamentally from existing ML and AI technologies. With generative AI, training data sets are no longer curated by the organisation, and organisations no longer have full oversight and control over how the model is built and configured. In addition, generative AI operates on significantly larger datasets, and defining, monitoring and governing its use is far more complex given how easily users can access it. This fundamentally changes how AI risk management needs to be approached, especially from a data governance and data protection standpoint.

Regulatory oversight and guidelines for generative AI in the UK

Within the UK, regulators are keeping a close eye out for scenarios where generative AI is deployed without the right controls in place. The Information Commissioner's Office ("ICO") has not been shy in producing guidance on AI. It is now nearly ten years since the ICO published its Big Data, AI and Machine Learning guidance. Today the ICO offers three comprehensive resources in this area: its guidance on AI and data protection; a separate publication, produced with the Alan Turing Institute, on Explaining Decisions Made with AI; and an AI and Data Protection Risk Toolkit providing practical support for organisations assessing the risks their own AI systems pose to individuals' rights and freedoms.

In addition, in April this year, in response to the launch of ChatGPT, the ICO published eight questions that developers and users need to ask about generative AI. The ICO is clear that there are no shortcuts for those grappling with how to roll out generative AI in compliance with data protection laws; for each use case, organisations need to consider:

1. your lawful basis for processing the personal data;
2. your role as a controller, joint controller or processor;
3. how you will ensure transparency of the processing;
4. how you will mitigate security risks;
5. how you will limit unnecessary processing;
6. how you will comply with individual rights requests;
7. whether you will use AI to make decisions about individuals; and
8. the completion of a data protection impact assessment.

    Bradley Rice

    Bradley Rice is a partner in our financial regulation practice. He specialises in all aspects of financial services regulation. In particular, he acts for some of the largest fund managers advising on the Alternative Investment Fund Managers Directive (AIFMD) and collective investment scheme (CIS) issues.

    E: bradley.rice@ashurst.com

The ICO has shown that it is willing to take action against organisations in this area. In October, the ICO issued Snap, Inc and Snap Group Limited (Snap) with a preliminary enforcement notice over a potential failure to properly assess the privacy risks posed by Snap's generative AI chatbot 'My AI'.

The UK financial regulators have also not ignored the recent developments and euphoria. The Bank of England, Prudential Regulation Authority and Financial Conduct Authority recently published a Feedback Statement to their October 2022 discussion paper on how AI may affect their respective objectives for the supervision of financial services firms. Among the key findings was a desire to keep regulation flexible so it can develop as the technology advances, with calls for ongoing industry engagement. Most respondents said specific definitions and regulatory rules were unnecessary, since existing regulatory requirements should ensure proper governance, oversight and testing of AI use cases.

We share this view. AI is not new to financial services: we have had algorithmic trading, systematic trading strategies, robo-advisers, copy trading and everything in between for years. Regulators have long been technology neutral, too. The same risks should be subject to the same regulations, and AI is no different. It is true that AI presents additional considerations and risks but, aside from a handful of novel issues, the existing regulatory framework should be more than sufficient for now.

    Prime Minister Rishi Sunak also reportedly said that the UK will not rush to regulate AI but wishes to encourage innovation and make the UK a global leader in AI. This approach, should it hold true, would be in stark contrast to the EU, which is pushing through its regulation on AI.


However the legal and regulatory arena develops, one thing is certain: this is a challenging area for financial services organisations, as the use of AI has moved out of the hands of data science teams and into the hands of any employee with an internet connection. This makes it difficult to ensure that existing constructs for governing privacy risk, such as mandatory data protection impact assessments, are complied with, and that legal advice is obtained when required.

    Lee Doyle

    Lee Doyle is global co-chair of the bank industry and a partner in our finance practice specialising in key risk solutions for our banking clients.

    E: lee.doyle@ashurst.com

    Data governance and data protection as the foundation

Governance of AI is underpinned by how effectively an organisation manages the inherent risks associated with AI and ML. Given the sheer volume of data required to train models, there are heightened risks around data, and the advent of generative AI only heightens these further. Financial services firms in particular need to pay close attention, given the significant amount of data they both produce and consume, particularly about their customers. As with any form of ML and AI, whether 'traditional' or generative, organisations are expected to be able to answer questions around the transparency, traceability, quality and potential bias of the data that feeds a model.

For many financial services organisations that have come out the other side of BCBS 239, there will be existing data governance frameworks and arrangements that can be leveraged to help address many of these challenges. It is vital that these frameworks not only cater for the risks around AI, but are also applied across the AI development lifecycle.

As part of these arrangements, data protection risks need to be front and centre when developing AI use cases: organisations, and particularly the technical teams within them, cannot forget that they are accountable for complying with data protection laws.

    Continued relevance of Data Protection Impact Assessments (DPIAs)

Not all uses of AI in the financial services industry will involve types of data processing likely to impact individuals' rights and freedoms. However, organisations must still be able to demonstrate that governance processes are in place to assess and mitigate any data protection risks. In the vast majority of cases, the use of AI will involve a type of processing likely to result in a high risk to individuals' rights and freedoms, and will therefore trigger the legal requirement to undertake a DPIA. This assessment needs to be made on a case-by-case basis. The ICO's AI toolkit is designed to provide further practical support to organisations in reducing the risks their own AI systems pose to individuals' rights and freedoms.

The iterative and fast-paced nature of AI technologies, including generative AI, means that DPIAs should be reviewed regularly and reassessed where the nature, scope, context or purpose of the processing, or the risks posed to individuals, change for any reason. Put simply, data protection needs to be front of mind, and data protection stakeholders need to be brought along on the AI development journey from start to finish, from idea generation to implementation, given the way AI use cases and solutions tend to morph as they are developed.

    Transparency and trust

Generative AI only magnifies the issues financial services organisations will face in deploying the technology. The complexity and opacity of generative AI models make it much harder for organisations to manage the data risks AI presents, stemming from the difficulty of truly understanding how a model is built and where its training data is sourced from.

These risks are compounded compared with traditional AI techniques, as organisations are forced to rely on third-party large language models (LLMs), given the cost and volume of training data required to build their own. Many organisations are therefore either augmenting foundation models trained by third-party providers or using those models out of the box. This raises concerns over the lack of transparency from vendors about models and training data, which may lead to unfair, discriminatory or biased outcomes, or may mean models have been trained on individuals' data without their consent. For organisations that are not building their own in-house generative AI models but have enabled users to interface with technologies such as ChatGPT and Bard, a different type of risk emerges: many do not have a clear line of sight of the information being entered into these tools and instead rely on policies to govern their use.

    Multi-faceted teams are required to govern AI technologies

The management and governance of this new technology cannot simply be the responsibility of those involved in developing and deploying AI capabilities, who typically sit in data science or engineering teams. A comprehensive governance council should be established at both senior leadership level and at the operational level where the work is executed. Senior management within financial services organisations, such as the Chief Data Office, and other stakeholders, such as Data Protection Officers, are also accountable for understanding and addressing privacy risks appropriately and promptly. This ensures the organisation is equipped with the knowledge to govern any new AI use cases and to manage existing projects.

    For now, financial services organisations need to ensure that foundational data governance frameworks and data protection protocols have been implemented effectively and are operating to manage the data risks presented during the AI development lifecycle.


Whilst many organisations are treading carefully when it comes to generative AI, over time we will no doubt see the evolution of customer-focused use cases. Getting the foundations of data governance and data protection right today means organisations will be well placed to face the challenges generative AI will pose in the years to come.

    This publication is a joint publication from Ashurst Australia and Ashurst Risk Advisory Pty Ltd, which are part of the Ashurst Group.

    The Ashurst Group comprises Ashurst LLP, Ashurst Australia and their respective affiliates (including independent local partnerships, companies or other entities) which are authorised to use the name "Ashurst" or describe themselves as being affiliated with Ashurst. Some members of the Ashurst Group are limited liability entities.

    Ashurst Australia (ABN 75 304 286 095) is a general partnership constituted under the laws of the Australian Capital Territory.

    Ashurst Risk Advisory Pty Ltd is a proprietary company registered in Australia and trading under ABN 74 996 309 133.

    The services provided by Ashurst Risk Advisory Pty Ltd do not constitute legal services or legal advice, and are not provided by Australian legal practitioners in that capacity. The laws and regulations which govern the provision of legal services in the relevant jurisdiction do not apply to the provision of non-legal services.

    For more information about the Ashurst Group, which Ashurst Group entity operates in a particular country and the services offered, please visit www.ashurst.com

    This material is current as at 15 November 2023 but does not take into account any developments to the law after that date. It is not intended to be a comprehensive review of all developments in the law and in practice, or to cover all aspects of those referred to, and does not constitute legal advice. The information provided is general in nature, and does not take into account and is not intended to apply to any specific issues or circumstances. Readers should take independent legal advice. No part of this publication may be reproduced by any process without prior written permission from Ashurst. While we use reasonable skill and care in the preparation of this material, we accept no liability for use of and reliance upon it by any person.


