Generative AI: revolution or risk?

AI innovation continues to grow. We take a look at recent developments and speak to Global Relay’s Chief Data Scientist about the key concerns for compliance and finance.

27 March 2023 · 5 mins read
by Jennie Clarke

Amid turbulent market activity following the collapse of Silicon Valley Bank, the progress of generative artificial intelligence (generative AI) continues. Nothing will stifle innovation, it seems, not even when tech-focused lenders fail.

In this week’s AI developments, Google launched its rival to ChatGPT – Bard. Bard is the product of a more considered approach from Google (in comparison to OpenAI, at least) and is limited to registered users aged 18 or over.

In the same week, technology luminary Bill Gates announced that “The Age of AI has begun”. Writing in his blog, GatesNotes, Gates says that the recent activity around AI chatbots such as ChatGPT and Bard marks the second of two demonstrations of technology that has struck him as “revolutionary”. Recent developments in artificial intelligence, he says, are “the most important advance in technology since the graphical user interface”.

As well as discussing the future benefits of AI – from healthcare to education – Gates acknowledges that AI in its current format is not without risks. He notes that AI doesn’t yet understand context well enough to provide accurate results every time – especially if the user is looking for advice or recommendations. There are also issues of “AI giving wrong answers to math problems because they struggle with abstract reasoning”. These issues, he thinks, will be “fixed in less than two years and possibly faster”. Further ahead, Gates notes that AI could “possibly…run out of control”. But that is not a pressing issue.

What is a pressing issue, especially for financial institutions, is the complex set of compliance implications raised by these constantly evolving, easily accessible AI chatbots.

With that in mind, we sat down with Global Relay’s Chief Data Scientist, Matt Barrow, to get his take.

Matt sets out three often-overlooked areas on which firms should be focusing:

Matt Barrow, Chief Data Scientist

1. Environmental concerns

Generative AI applications like ChatGPT appear to the non-technical user as a cool search-like bot. Under the hood, though, is a massive model with hundreds of billions of parameters that requires substantial hardware infrastructure to serve each request. This infrastructure needs vast amounts of energy to operate, which leads to carbon emissions – and an environmental impact.

Wired UK recently reported that training ChatGPT alone generated 550 tons of carbon emissions. How big will this number get as this technology proliferates, and will technology providers commit to consuming power from environmentally friendly sources?

I do wonder whether end users of resource-intensive technology like generative AI would think twice about using it if they had to pay for a carbon offset credit before executing their question or search. As the technology advances, it’s vital that operators of this hardware develop a sustainable power source, or the environment will pay the price.
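To make that offset idea concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (energy per query, grid carbon intensity, offset price) is a hypothetical assumption for illustration; providers do not publish real per-query numbers for services like ChatGPT or Bard.

```python
# Back-of-envelope estimate of a per-query carbon offset charge.
# All constants below are hypothetical assumptions for illustration only.

ENERGY_PER_QUERY_KWH = 0.003       # assumed energy used to serve one query
GRID_INTENSITY_KG_PER_KWH = 0.4    # assumed grid carbon intensity (kg CO2e per kWh)
OFFSET_PRICE_USD_PER_TONNE = 20.0  # assumed price of one carbon offset credit

def offset_cost_usd(queries: int) -> float:
    """Cost of offsetting the estimated emissions from a number of queries."""
    emissions_tonnes = queries * ENERGY_PER_QUERY_KWH * GRID_INTENSITY_KG_PER_KWH / 1000
    return emissions_tonnes * OFFSET_PRICE_USD_PER_TONNE

if __name__ == "__main__":
    # A single query costs a fraction of a cent; a billion queries a day does not.
    print(f"1 query:       ${offset_cost_usd(1):.6f}")
    print(f"1 billion/day: ${offset_cost_usd(10**9):,.2f} per day")
```

Under these assumptions the charge is negligible for an individual user but adds up quickly at web scale, which is the crux of the question about who ultimately pays for the environmental cost.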

2. Moral and ethical concerns

Generative AI will impact humans in many different ways – some good and some bad. Its ability to create synthetic data for training models should help ensure that nobody’s privacy is compromised, and models should start to suffer from less bias.

But ChatGPT also poses risks to people’s livelihoods, particularly in content creation fields such as journalism and software development, as well as search-intensive industries such as law and research.

A concern here is that, as this technology proliferates, a much broader spectrum of training data will be added to these models. If these models are trained on live data in real time, these professions will experience a high degree of change, which could expand into wide-reaching societal change. Generative AI has already transformed many industries, potentially taking many jobs with it.

3. Financial concerns

Many companies will find themselves at a crossroads: they will have to choose between embracing costly technological change driven by generative AI, or risking losing out to companies that offer or adopt this technology.

Take search as an example of an AI arms race that has already begun. The majority of software applications have some form of search capability, and the vast majority of these have not yet embraced generative AI. The big tech companies are leading the way by embedding this technology, and we will all experience the far superior search capabilities it enables very soon.

As these features become normalized among consumers, all other companies will be forced to compete. They will be faced with a stark choice: pay up to buy or build, or risk going out of business. The question is, what will be the financial impact on all the companies out there that can’t or won’t pay up?

Global Relay has built AI-enabled solutions engineered to seamlessly integrate throughout the Global Relay ecosystem of archive, collaboration, and compliance applications. Our clients use AI to enhance business efficacy across surveillance, behavioral analytics, customer support, and more. Our approach to innovation is to build it right, the first time – so that all our products are robust and secure.

If you’d like to know more about our AI-enabled products, speak to the team or visit our product page.
