Following the publication of their joint discussion paper on artificial intelligence and machine learning in 2022, the Bank of England, the PRA and the FCA have published a feedback statement.
This is against a backdrop of increasing regulatory scrutiny of the use of AI at the national and international level, which financial services firms should be watching closely. The feedback statement is part of the supervisory authorities' wider investigation into AI, which includes the AI Public Private Forum (a final report was published in February 2022). The Government recently hosted an AI Safety Summit following the publication of its AI National Strategy and its July 2022 policy paper. At the international level, the EU's proposed Regulation on AI is currently making its way through the legislative process, while U.S. President Joe Biden recently signed an Executive Order on the safe use of AI. G7 leaders have also agreed on International Guiding Principles on AI and a voluntary Code of Conduct for AI developers. It is clear that momentum is gathering in this area.
While AI is transforming financial services, the key takeaway from this discussion paper is that regulatory changes may be necessary to facilitate this safely, especially to protect consumers.
The 2022 Discussion Paper
The discussion paper acknowledged the potential for AI and machine learning to transform financial services, enabling firms to improve the products and services available to consumers. It highlighted novel challenges and regulatory risks, and sought to deepen the conversation about how this technology will affect supervision in the UK finance industry. Areas discussed included:
- How AI is used in financial services, focusing on:
- Adoption in established firms, as well as newer fintech or insurtech companies;
- Use in wider material business areas (from AML, credit and regulatory capital modelling in banking to orders, execution and trading signals for investment managers); and
- The use of "Complex AI techniques" and data sets.
- The benefits and risks for financial services firms, looking at the use of data, the performance of AI models and governance.
- The existing regulatory rules, examining legal requirements and guidance applicable to AI such as:
- The FCA's consumer protection rules (notably Principles 6, 7 and 9 of the FCA's Principles for Business, as well as the Consumer Duty);
- SYSC 4.1.1 and Rule 2.1 of the General Organisational Requirements Part of the PRA Rulebook, which impose requirements for robust governance arrangements and a clear organisational structure;
- The regulations around data, including the MiFID Org Regulation and its requirement for investment firms to store records in a way that prevents them being manipulated or altered; and
- The SMCR framework and senior management accountability and responsibility, noting that the Chief Operations function (SMF24) is normally responsible for technology systems.
Why the BoE, FCA and PRA are interested in AI
The discussion paper addressed the following areas of risk and benefit posed by AI adoption, mapped to the supervisory authorities' remits:
- Consumer protection (FCA): While AI's ability to profile customers and their preferences has obvious business benefits, it also presents risks, such as biased profiling, exploitation of consumer vulnerabilities, price discrimination and the exclusion of customer groups.
- Competition (FCA): The benefits AI systems may bring to competition in areas such as Open Banking are balanced against the risk of AI facilitating collusive behaviour between firms by rapidly responding to a competitor's price changes.
- Safety and soundness (PRA and FCA): AI's greater predictive and analytical power can deliver operational efficiency and better identification of risks, but complex models are less stable and more prone to drifting off course.
- Insurance policyholder protection (PRA and FCA): AI's benefits for data processing and decision-making in underwriting and claims are balanced against the risk of models operating on outdated data, producing inaccuracies or drifting beyond their original parameters.
- Financial stability and market integrity (BoE and FCA): The efficiencies created by AI can lead to a more efficient financial system overall, and AI also allows supervisors to model macro market dynamics. Its risks include encouraging pro-cyclical behaviour and crashes, especially given the use of homogeneous third-party models across competing firms.
The October 2023 Industry Feedback Statement
This Feedback Statement follows the same themes as the discussion paper, bringing together the industry's responses to its more specific questions. Although it is a useful snapshot of the industry's views, it does not include policy proposals and expressly discourages readers from drawing conclusions about how the supervisory authorities will design or implement AI policy.
Key points from Industry's Responses
- The focus of supervision: Most respondents thought the risks of discrimination, exploitation and lack of transparency should be a priority for supervisors, with some calling for clarification on what bias and fairness under the Equality Act and the FCA's Consumer Duty mean in the context of AI.
- A definition of AI: Respondents generally felt that a fixed definition of AI would not be helpful, as it could quickly become outdated and create incentives for regulatory arbitrage. A sectoral definition could also conflict with the technology-neutral approach favoured by regulators.
- Evolving technology: Given AI's constantly evolving nature, respondents thought that regulators should aim for live regulatory guidance on AI. A number of existing guidelines would be relevant to the regulation of AI, notably those relating to operational resilience and outsourcing, including the PRA's Supervisory Statements on operational resilience and outsourcing (SS1/21 and SS2/21, for example) and the FCA's Policy Statement on operational resilience (PS21/3).
- Alignment between regulators: Given that AI is a cross-cutting technology, respondents called for more coordination and alignment between domestic and international regulators, for example between UK regulation and the EU AI Act.
- Data: Regulation on data is not clear or coherent. Respondents called for an industry-wide standard of data quality, which the current regulation lacks. In addition, some argued that there are areas of data regulation that are insufficient for monitoring and controlling the risks associated with AI models. A number of respondents stated that the interaction between AI and the GDPR was hard to navigate, particularly on automated decision-making, data localisation, and data protection and privacy rights.
- Third party providers: Generally, respondents identified the need for greater oversight of third party technology providers, noting the relevance of the BoE's other 2022 Discussion Paper on third party providers.
- Governance: Overall, respondents did not support the creation of a Prescribed Responsibility or Senior Management Function for AI, believing that existing structures such as the SMCR are sufficient to address AI risk and that firms need "local owners" to retain accountability over AI. Further guidance on how to interpret the "reasonable steps" element of the SMCR in an AI context would be helpful. Joined-up thinking across business units and functions, especially data management and model risk management, is needed to minimise AI risks.
- Model risk management principles: Model risk management is being used by some firms to manage and minimise AI risk (see PRA CP6/22), but there are areas that could be bolstered or clarified. The statement acknowledged that the metrics for measuring the benefits and risks of AI are not fixed and depend on the specific use case.
What happens next
Although the Feedback Statement does not represent a defined policy approach to AI, we will be monitoring any future output closely. The regulators have not yet indicated how they will respond to this discussion.
To speak to us further about the future of AI regulation in financial services, please contact us.