Car Crash or New Superhighway? – AI, Audits, Accountability, and the FCA

A recent speech from the Chief Executive of the FCA has outlined its direction of travel on Big Tech and AI regulation. But with the FCA itself being audited on its effectiveness as a ‘digital regulator’, what does this mean for organizations and accountability?

25 July 2023 8 mins read
By Jay Hampshire

In brief:

  • The Financial Conduct Authority’s Chief Executive, Nikhil Rathi, recently gave a speech on the regulator’s approach to Big Tech and Artificial Intelligence
  • While the FCA aspires to be a ‘digital regulator’, an upcoming audit will assess its suitability for that role
  • With the FCA proposing the use of existing regulation to oversee AI and Big Tech, how can organizations balance innovation and accountability?

In the age of ‘digital Darwinism’, adaptability is integral to survival, and that is especially true of regulation. With technology evolving rapidly, the burden on regulators to stay in step with innovation is growing. From Artificial Intelligence (AI) to ephemeral messaging channels to cryptocurrency, regulators are having to act fast to ensure they set out effective, enforceable frameworks that mitigate emerging risks. But with innovation only getting faster, is emerging technology a new superhighway, or a car crash waiting to happen?

Enter the FCA

On 12 July 2023, Nikhil Rathi, the Chief Executive of the UK’s Financial Conduct Authority (FCA), gave a speech that outlined the regulator’s ‘emerging regulatory approach to Big Tech and Artificial Intelligence’.

The speech states that “Big Tech’s role as the gatekeepers of data in financial services will be under increased scrutiny,” acknowledging the “data-sharing asymmetry between Big Tech firms and financial services firms”. With Big Tech firms holding so much data on individuals, including “browsing data, biometrics, and social media”, there is huge potential for that data to be used to influence behavior and create biases. As cases like the collapse of Silicon Valley Bank have illustrated, digital channels and data sources like social media can open up huge avenues of risk.

The speech highlights this data disparity between Big Tech and finance firms:

“Coupled with anonymised financial transaction data, over time this could result in a longitudinal data set that could not be rivalled by that held by a financial services firm, and it will be a data set that could cover many countries and demographics.”

Financial services firms have to ‘play by the rules’ when it comes to data, staying compliant with national and international regulations on what data they hold, how and where they hold it, and how it can be used. Many within the space will feel they are at a considerable competitive disadvantage to Big Tech firms, and will be wary of the risks this data monopoly could present to the market and to consumers. Rathi’s speech acknowledges these concerns and sets out the FCA’s direction of potential regulatory travel:

“We have announced a call for further input on the role of Big Tech firms as gatekeepers of data … considering the risks that Big Tech may pose to operational resilience in payments, retail services and financial infrastructure … but we need to test further whether the entrenched power of Big Tech could also introduce significant risks to market functioning.”

Thinking outside the Sandbox

Interestingly, the FCA’s speech was broadly positive about the opportunities presented by AI. Rathi highlighted that the regulator is embracing AI in-house:

“Internally, the FCA has developed its supervision technology. We are using AI methods for firm segmentation, the monitoring of portfolios, and to identify risky behaviours.”

He also discussed the FCA’s ‘Digital Sandbox’, a compliant testing environment designed to provide a ‘safe space’ for cooperation and innovation:

“This summer [we] have established our Digital Sandbox to be the first of its kind used by any global regulator, using real transaction, social media, and other synthetic data to support … innovations to develop safely … we welcome the government’s call for the UK to be the global hub of AI regulation, and will open our AI sandbox to firms wanting to test the latest innovations.”

While the sandbox enables AI experimentation to take place in a monitored, compliance-forward environment, the speech sets out a ‘wait and see’ approach to establishing specific AI-focused regulation, focusing on enabling innovation:

“We are open to innovation and testing the boundaries before deciding whether and what new regulations are needed … we will only intervene with new rules or guidance where necessary.”

Rathi proposed that existing legislation and frameworks may be applied to emerging technologies, forgoing the need to create new, specific regulations:

“The Senior Managers & Certification Regime (SMCR) also gives us a clear framework to respond to innovations in AI. This makes clear that senior managers are ultimately accountable for the activities of the firm.”

The latter half of that statement raises the question of accountability for AI, something that features earlier in the speech:

“We still have questions to answer about where accountability should sit – with users, with the firms or with the AI developers?”

Establishing who is responsible for an AI tool or system, especially if that system might go wrong or produce unexpected outcomes – and unexpected risks – will be critical in building effective means of regulation and enforcement. If regulators apply regimes like the SMCR to emerging technologies, senior stakeholders and managers will need to have full oversight and explainability of their AI tools – because ignorance is no defense.

Who watches the watchmen?

Staying on the theme of accountability, the National Audit Office (NAO) has scheduled an audit of the FCA for late 2023/early 2024. The NAO’s reasoning for the audit – only the second of the regulator, the first coming in 2014, just after the FCA’s creation – mirrors the content of Rathi’s speech:

“Recently, significant changes have been introduced or proposed to the FCA’s regulation of the sector … externally, technological innovations such as cryptoassets and artificial intelligence provide challenges and opportunities for regulation of financial services.”

The audit will assess the FCA’s adaptability and suitability, given the proposed change of, and extension to, its remit as it moves towards becoming a ‘digital regulator’.

To its credit, the FCA is sanguine in its response:

“We welcome the National Audit Office’s review of the FCA and how we adapt to change, as it could help us to ensure we continue to meet our objectives. We have a clear strategy in place about who we want the FCA to be, and we are well underway to achieving that.”

Although the FCA is sure to be cooperative and outwardly positive about the audit, board minutes acknowledge that the process is “likely to be resource intensive” – which may well be putting things mildly.

Expert insight: Rob Mason, Director of Regulatory Intelligence, Global Relay

“It’s a fair challenge for the FCA to be audited, and it will ensure its approach is valid as its remit expands to cover new topics and emerging technologies. But from my experience – from both sides of the boardroom table – being scrutinized by another body will not be painless, and an audit never finds nothing wrong. Whatever the NAO finds will be made public, and with the FCA seeking to become the ‘digital regulatory leader’, it will hope the results of this audit might validate those credentials.

“It seems likely that the main focus of this audit will be around the FCA’s Big Tech and AI agenda. It’s an almost impossible position to regulate AI and Big Tech, and both these areas carry a wealth of regulatory risks. While the timelines might not support Rathi’s speech drawing the NAO’s attention by itself, there’s definitely a connection to be made between the FCA’s proposed direction of travel and this audit.

“What the FCA seems to intend is to leverage existing regulation against emerging technologies. Legislation similar to the SMCR regime should make senior managers and stakeholders nervous when it comes to AI, explainability, and accountability. If the regime requires that management are ultimately responsible for the activities of the firm, there will be a scramble to do more due diligence on AI tools that are already in use.

“Currently, many won’t be able to explain how or why the AI models their organization is using work. Machine Learning and AI algorithms learn ‘on the job’, reacting to changes in the market and making decisions from those inputs. But they can’t have morality encoded into them, and could end up engaging in practices human traders would know to avoid. When it comes to regulations and compliance, if something looks like spoofing, and cannot be obviously mitigated, it is spoofing. If senior managers can’t explain why their model is producing that result, they’re going to be held accountable as if they intended that result all along. Because if something looks like a duck, and quacks like a duck …

“The regulators will be looking for answers to the question ‘what do you do when AI goes wrong?’ and will expect firms to have that answer ready. While regulators can implement strategies like the FCA’s digital sandbox, they need to have a solid understanding of where existing regulation goes far enough, and where new measures may need to be implemented. Otherwise, given the rate of technological innovation, they run the risk of becoming ‘bicycles chasing Ferraris’.”

When the regulator comes knocking, having all of your relevant data archived and accessible can make all the difference between audit heaven and audit hell. With over two decades of experience in empowering complete, comprehensive data collection and secure, compliant archiving, we’re here to help you make sure you’re in the driving seat.