With Bill Gates having proclaimed that “the age of AI has begun”, governmental and regulatory bodies across geographies are examining the current and future impacts artificial intelligence (AI) may have across multiple industries – and what legislation needs to be implemented to ensure it can be used compliantly and without adverse effects.
We have already seen entities like Italy’s data protection authority – the Garante Per La Protezione Dei Dati Personali (GPDP) – take steps to ban generative AI on the grounds that it collects and processes personal data unlawfully. Meanwhile, the U.K. government has released a policy paper summarising its view that “given the pace at which AI technologies and risks emerge, and the scale of the opportunities at stake … there is no time to waste” when it comes to understanding what regulatory measures are required around AI, and how best to implement them.
In January 2023, the U.S. National Institute of Standards and Technology (NIST) released its AI Risk Management Framework, a voluntary framework to help organizations manage the potential risks of AI. We’ve also seen moves from the United Nations, with UN Secretary-General António Guterres backing calls from those within the AI space itself to establish an international AI watchdog and code of conduct. The potential risks associated with AI are a hot-button issue, and the discussion around them doesn’t look set to die down anytime soon.
Interestingly, despite the rising usage of AI technologies within the finance sector, a dedicated regulatory framework around AI and machine learning (ML) in finance has yet to be put in place. However, there is a keen awareness from across the space that innovation may be outpacing regulation, and that regulators need to act fast or risk being left behind.
On June 13, 2023, the Securities and Exchange Commission (SEC) announced its semi-annual rule-writing agenda, which included dozens of proposed regulatory plans and timescales. The agenda included several propositions around future regulation of AI and ML approaches within finance, securities, and trading. While not a fixed overall framework of regulation or legislation, this serves as something of a warning shot, a statement of intent that regulation around AI is coming – and it could be coming “as soon as October”.
You can’t spell ‘Chair Gary Gensler’ without ‘AI’
On the same day, Gary Gensler, Chair of the SEC, released an accompanying statement around the rule-writing agenda. He summarised that the agenda supports the SEC’s mission to “protect investors, maintain fair, orderly, and efficient markets, and facilitate capital formation.”
Those keeping an eye on AI and the SEC will know that Gensler has repeatedly and explicitly spoken out about artificial intelligence and its impact on the financial industry, as well as its status as a regulatory priority for the agency. Gensler has previously:
- Stated that he believes AI and the use of predictive data analytics may prove to be “more transformative than the internet itself”
- Voiced concerns that the proliferation of AI poses a “systemic risk” and could be a major component in future financial system “fragility”
- Acknowledged that “Artificial intelligence and predictive data analytics are transforming so much of our economy” and that “finance is no exception”
In two separate testimonies before the U.S. House of Representatives and the Senate Committee on Banking, Housing, and Urban Affairs, Gensler highlighted the potential for conflicts in the use of predictive data analytics that would see brokers “optimizing for their own interests”, and requested that staff make recommendations to the Commission on how these conflicts could be addressed.
The call for regulation is coming from inside the house
While regulation is clearly at the forefront of the SEC’s agenda, in the absence of any fixed frameworks we are left with organizations taking a risk-based approach to AI – with examples including the use of ChatGPT being swiftly barred by the compliance teams of many financial institutions. This ‘common sense’-driven approach is not being undertaken due to specific concerns around AI, but is the result of best-practice approaches already used by most banks and financial organizations: managing new technologies and assessing their risk is longstanding practice for in-house compliance and IT teams. Indeed, JP Morgan’s move to restrict employees using ChatGPT was not triggered by a specific incident of AI risk, but was tied to existing controls around third-party software.
Interestingly, the SEC’s own Investor Advisory Committee (IAC) submitted a joint consensus letter to Gensler himself on April 6, 2023. The letter acknowledges that “there is a lot of promise for the future of AI in the investment industry”, but also that there “have been serious blind spots brought to light with algorithms in other industries”. The IAC said that “as advisory firms obtain and mine more data, it is imperative they also are following clear best practices from regulators, which includes the SEC.”
The IAC has expressed that there is a clear need for the SEC to act on constructing a regulatory framework on AI within the finance space:
“Advisory firms should have a robust risk management and governance framework to ensure that AI is used in the best interest of investors and without bias. It is imperative that the SEC ensure the enforceability and monitor for compliance with any future best practices guidance or regulation provided to advisory firms. Many advisers are aware and eager to comply with these best practices, and the SEC has an opportunity to provide clear and enforceable guardrails for advisers that can increase the confidence of the American public in investment advisers’ use of technology.”
Pressure is mounting both internally and externally for the SEC to provide a clear regulatory framework for the use of AI and related technologies. While regulating for the unknown takes time, some organizations may argue that the burden of risk-without-regulation should not be theirs to bear.
The future of regulation is now
While we may have to wait a little longer for legislative clarity around the form and scope of AI regulations for the finance sector, and although concerns about the technology’s impacts continue to be raised, Gensler has maintained a broadly positive outlook:
“The future of generative AI in the finance industry holds great promise, but it must be accompanied by a robust regulatory framework. By fostering innovation while safeguarding market integrity, we can harness the potential of generative AI to create a more efficient and resilient financial ecosystem. To safeguard the integrity and stability of financial markets, regulatory oversight must adapt and evolve.”
Actually, Gary Gensler didn’t say any of that. ChatGPT did – in under five seconds – when given a prompt to write a comment in the style of Gensler on the future of generative AI regulation in the finance sector.
What Gensler did say, as part of his statement on the rule-writing agenda, was:
“Technology, markets, and business models constantly change. Thus, the nature of the SEC’s work must evolve as the markets we oversee evolve. In every generation since President Franklin Roosevelt’s, our Commission has updated its ruleset to meet the challenges of a new hour.”
While effective regulations and compliance regimens take far longer than five seconds to implement comprehensively, some in the space may feel that they are left to ‘fill in the blanks’ until regulations are established. While waiting for the SEC and other regulators to provide clarity, organizations would do well to work with the right partners to understand how to keep pace with innovation, stay compliant, and mitigate risk in the ‘age of AI’.