
Finance Regulatory Update: Monsters in the deep: AI and your firm


    The BoE has published a speech given by Jonathan Hall, external member of the Financial Policy Committee, on the impact of AI developments on financial stability. The speech focuses on a subset of AI called deep learning (a form of machine learning in which neural networks are trained on large amounts of data). Mr Hall discusses areas such as model failure and model misspecification, arguing that the financial market can be seen as an information processing system. The speech will be of interest to market intermediaries considering AI and ML, as it repeats many of the key themes and concerns previously raised by regulators in relation to appropriate governance and oversight arrangements for the sign-off, deployment and use of AI, as well as the need to have adequate systems in place for monitoring output.

    What's the current situation?

    The increased use of electronic trading platforms and the growing availability of data have led firms to consider the use of AI and ML in trading. According to the speech, performance management concerns mean that many in the market currently use ML mainly in supervised learning scenarios rather than neural networks. However, Mr Hall considers a scenario in which neural networks could become deep trading agents, selecting and executing trading strategies, and the implications of this for market stability. Mr Hall highlights two main risks:

    • deep trading agents could lead to an increasingly brittle and highly correlated financial market; and 
    • the misalignment of the incentives of deep trading agents with those of regulators and the public good.

    Mr Hall examines two types of trading algorithm, affectionately named Val and Flo: a value trading algorithm (an algorithm attempting to profit from an understanding of moves in fair value) and a flow-analysis trading algorithm (an algorithm attempting to understand and benefit from supply-demand dynamics). In relation to deep value trading algorithms, Mr Hall notes that volatile or unpredictable behaviour can result if there are flaws in the training process, while sudden changes in the environment can induce failure. Achieving both predictability and adaptability in a trading algorithm usually requires either extensive training or other techniques to ensure predictability. In the immediate term, therefore, internal stop limits and external human oversight (including kill switches) are needed to mitigate unwanted volatility in trading algorithms, just as with the equivalent risks on a human trading desk. Managers implementing highly complex trading engines must also have an understanding of the model that goes beyond a single, simplified interpretation.
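
    By way of illustration only, the sketch below shows one way such controls might be wired around an opaque trading model: an internal position cap, an automatic stop-loss limit and an external human kill switch. All names and parameters are hypothetical; this is a minimal sketch of the control pattern the speech describes, not a description of any firm's actual arrangements.

    ```python
    from dataclasses import dataclass

    @dataclass
    class RiskLimits:
        max_position: float  # hard cap on absolute position size
        stop_loss: float     # cumulative loss at which trading halts

    class GuardedTrader:
        """Thin risk wrapper around an opaque trading model (hypothetical)."""

        def __init__(self, model, limits: RiskLimits):
            self.model = model          # the (possibly deep) trading algorithm
            self.limits = limits
            self.cumulative_pnl = 0.0
            self.halted = False         # set by stop limits or the kill switch

        def kill_switch(self):
            """External human override: halt all trading immediately."""
            self.halted = True

        def propose_position(self, market_state) -> float:
            if self.halted:
                return 0.0              # take no new risk once halted
            position = self.model(market_state)
            # Internal stop limit: clamp the model's output to the position cap.
            cap = self.limits.max_position
            return max(-cap, min(cap, position))

        def record_pnl(self, pnl: float):
            self.cumulative_pnl += pnl
            # Automatic stop-loss: halt on excessive cumulative losses.
            if self.cumulative_pnl <= -self.limits.stop_loss:
                self.halted = True

    if __name__ == "__main__":
        trader = GuardedTrader(model=lambda state: 5.0,  # toy model: always long 5
                               limits=RiskLimits(max_position=1.0, stop_loss=100.0))
        print(trader.propose_position(market_state=None))  # clamped to 1.0
    ```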

    In respect of flow-analysis trading, Mr Hall argues that the algorithm could recognise the potential for outsized profits that market instability offers and might be incentivised to amplify shocks (he argues that this could be addressed by training the algorithm to respect rules or a constitution).
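
    As a purely hypothetical sketch of what "training to respect rules" might mean in practice, one common approach is to penalise rule breaches directly in the training objective, so that shock-amplifying behaviour which breaches the rulebook scores poorly during training. The penalty weight below is an assumption for illustration.

    ```python
    # Hypothetical illustration of "training to respect rules": penalise rule
    # breaches directly in the training reward, so that a policy which amplifies
    # shocks in breach of the rulebook is unattractive to the learner.
    RULE_PENALTY = 10.0  # assumed weight; in practice tuned and validated

    def shaped_reward(pnl: float, n_rule_breaches: int) -> float:
        """Training reward: raw P&L minus a fixed penalty per rule breach."""
        return pnl - RULE_PENALTY * n_rule_breaches
    ```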

    So what could be done? 

    According to Mr Hall, there are three main areas of focus going forward:

    • Training, monitoring and control: Deep trading algorithms need to be trained extensively, tested in multi-agent sandbox environments and constrained by risk and stop-loss limits. Managers need to monitor output for signs of unusual or erratic behaviour. The FPC needs to understand and monitor the stability implications of any changes in the market ecosystem.
    • Alignment with regulations: Deep trading algorithms need to be trained in a manner that aligns with the regulatory rulebook. Training needs to be updated to keep pace with identified divergences between regulatory intent and the algorithms' reaction function, while trading managers need to keep reinforcing rules to ensure that they are not forgotten.
    • Stress testing: Stress scenarios need to be based on adversarial techniques rather than on the assumption that neural networks will behave in a smooth manner. Stress tests will need to capture the reaction function of deep trading algorithms, as well as check performance and solvency (a minimal sketch of such an adversarial search follows this list).
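
    The sketch below illustrates, under stated assumptions, what an adversarial stress search might look like: rather than assuming smooth behaviour, it randomly perturbs a baseline scenario within a shock budget and keeps whichever perturbation produces the worst loss for a toy trading agent. The agent, the scenario encoding and the budget are hypothetical stand-ins for illustration.

    ```python
    import numpy as np

    # Toy stand-in for a trained deep trading agent: maps a vector of market
    # signals to a position in [-1, 1]. Hypothetical, for illustration only.
    def trading_agent(signals: np.ndarray) -> float:
        return float(np.tanh(signals.sum()))

    def adversarial_stress(agent, baseline: np.ndarray, budget: float,
                           n_trials: int = 10_000, seed: int = 0):
        """Random-search adversarial stress: perturb the baseline scenario
        within a shock budget and keep the worst-loss perturbation found."""
        rng = np.random.default_rng(seed)
        worst_loss, worst_scenario = 0.0, baseline
        for _ in range(n_trials):
            scenario = baseline + rng.uniform(-budget, budget, size=baseline.shape)
            position = agent(scenario[:-1])  # agent reacts to shocked signals
            loss = -position * scenario[-1]  # last element: realised return
            if loss > worst_loss:
                worst_loss, worst_scenario = loss, scenario
        return worst_loss, worst_scenario

    if __name__ == "__main__":
        baseline = np.zeros(5)  # four signals plus one realised return
        loss, scenario = adversarial_stress(trading_agent, baseline, budget=0.05)
        print(f"worst loss found: {loss:.4f}")
    ```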

    How does this fit in with AI developments so far?

    AI has been on the regulatory radar internationally and at the UK level for some time. The EU has introduced a regulatory framework for AI (see our briefing here). UK regulators published a Feedback Statement (FS2/23) in November 2023 (see our briefing here) to follow up on their joint 2022 discussion paper (DP5/22). More recently, the BoE and the FCA published their approaches to applying the Government's AI Regulatory principles: (1) safety, security, and robustness; (2) appropriate transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress.  

    The PRA and FCA take a technology-neutral approach to regulation, but have flagged key areas of risk and use in the financial services sector (e.g. for trading/brokering clients, in relation to market surveillance and trading bot software). Accountability and governance considerations have focused on the allocation of responsibility under the SMCR (SMF 24 in most cases, as opposed to a dedicated SMF for AI). In its update, the FCA also sets out how its regulatory approach to consumer protection, particularly the Consumer Duty, is relevant to the fairness principle.

    At the international level, IOSCO guidance on AI and ML has stressed the importance of adequate testing and monitoring of algorithms to validate the results of an AI and ML technique on a continuous basis, as well as having appropriate controls in place.

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.
