“Responsible experimentation”: will Boston’s AI blueprint lay the foundation for future approaches?

The City of Boston has unveiled new guidelines for generative AI, which some are touting as a blueprint for future approaches. We take a look at what the new guidelines mean, and how compliance teams can navigate AI risks safely.

26 May 2023 6 min read
By Jennie Clarke
Written by humans

In brief:

  • After initial calls to ban generative AI, or to pause its training, there now appears to be a change of tack afoot.
  • The City of Boston has published a new “responsible experimentation” approach which encourages employees to use ChatGPT for work purposes, within limits.
  • The new approach is being touted as a potential blueprint that could be adopted the world over.
  • Despite this, compliance concerns around data privacy and copyright infringement persist.
  • Compliance teams should proceed with caution, and ensure all risks are considered before adopting “responsible experimentation”. We explore how.

“The only thing we have to fear is fear itself.”

So said Franklin D. Roosevelt in his 1933 inaugural address, delivered during the Great Depression that followed the 1929 Wall Street crash.

In 2023, this quote rings true for myriad reasons, not least owing to significant and fast-moving advancements in artificial intelligence (AI), specifically generative AI such as ChatGPT.

Over the past few months, we’ve witnessed swathes of fear-inducing messaging surrounding generative AI. First, it was banned by a host of financial institutions. Then, Bill Gates proclaimed that the “age of AI has begun”.

A week later, luminaries including Elon Musk signed an open letter calling for the training of powerful generative AI systems to be paused “immediately” for at least six months, to allow time to understand, predict, and reliably control the technology. Countries and U.S. states heeded the warning, issuing bans and restrictions.

However, in the week beginning May 15, 2023, attitudes started to shift.

From fear to foray

On May 18, New York City Schools Chancellor David C. Banks noted that the initial move to ban ChatGPT from New York schools was “knee jerk” and “overlooked the potential of generative AI to support students and teachers”. Following a period of investigation and discussion, New York City Public Schools are now changing their approach and encouraging educators and students to “learn about and explore this game-changing technology”.

More significantly, on May 19, City of Boston chief information officer Santiago Garces unveiled guidelines encouraging city officials to use generative AI to “understand their potential”. The City of Boston’s new approach is one of “responsible experimentation”, and sets out sample use cases for such tools: writing memos, job descriptions, and summaries, or translating complex documents into plain language so that they are more easily accessible to others.

While the guidelines broadly encourage the use of AI for work purposes, they are clear in setting out key principles for its use and in impressing upon readers the importance of fact-checking.

Boston’s approach makes it clear that users of generative AI will still be responsible for the outcomes that are delivered:

“For example,” it says, “if autocorrect unintentionally changes a word – we are still responsible for the text. Technology enables our work, it does not excuse our judgment nor our accountability”.

This is a key lesson for the compliance team – for generative AI and beyond. If something goes wrong – whether owing to the use of technology or otherwise – the regulator will still pin the blame on your firm. The technology bears no burden for non-compliance.

A blueprint for future use?

Following these recent announcements, many have touted Boston’s new policy as a potential precedent to be adopted by other cities, states, or governments. Of course, many will argue that the only way to learn the benefits and limitations of new technology is to use it.

Banning technology seldom works, and fear of the unknown may be more damaging than an informed understanding of a powerful tool.

Proceed with caution

This is not, however, a rallying cry to throw open the doors and welcome generative AI with open arms. Instead, it is a call for inquisitive and cautious adoption.

The data protection implications of generative AI such as ChatGPT remain unclear, with Italy, France, Spain, and Germany all investigating the technology’s GDPR compliance. Scrutiny is in fact now so severe that the European Union has formed a task force to harmonize investigative efforts.

As well as data concerns, copyright issues are fast coming to the fore – with rumors that numerous private parties are preparing to challenge generative AI systems that create images, writing, and music in the same style as their work (only those living under a rock will have avoided seeing some AI-generated scene “in the style of a Wes Anderson movie”).

A compliant approach

Firms should think carefully about how they use generative AI in this period of “responsible experimentation” – and, indeed, about whether to use it at all.

Ask yourself: would you share this information with the outside world? If the answer is no (for instance, if the input contains confidential or customer-related information), do not proceed.

Perhaps start with the notion that anything you enter may one day become publicly accessible.
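
One way to put that principle into practice is a pre-submission check that scans a prompt for obvious markers of confidential or customer data before anything leaves the firm. The sketch below is illustrative only: the patterns, the `screen_prompt` helper, and the commented-out `send_to_generative_ai` call are hypothetical stand-ins, not a production-grade data loss prevention filter.

```python
import re

# Illustrative patterns only -- a real deployment would draw on the firm's
# own data-classification and DLP rules, not this short list (assumption).
CONFIDENTIAL_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card/account number": re.compile(r"\b\d{12,19}\b"),
    "internal marker": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt.

    An empty list means nothing obviously sensitive was detected --
    which is not the same as the prompt being safe to share.
    """
    return [name for name, pattern in CONFIDENTIAL_PATTERNS.items()
            if pattern.search(prompt)]

def submit_if_clean(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Mirrors the guidance above: if you would not share it with
        # the outside world, do not proceed.
        raise ValueError(f"Prompt blocked: possible confidential data: {findings}")
    # send_to_generative_ai(prompt)  # hypothetical call to the firm's approved tool
```

The design point is the gate itself rather than the pattern list: because the check runs before any data leaves the firm’s perimeter, a false positive costs a moment’s review, while a miss is exactly the disclosure the guidance warns against.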

Looking ahead, firms should be exploring how generative AI is – or may be – used within their organization. From here, you should establish policies, procedures, and controls dedicated to the use of AI for both in-house and external projects and communications. Ensure your employees have been trained on what is expected of them. And implement adequate supervision and monitoring to ensure that any permitted use is compliant.
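
As a sketch of what that supervision and monitoring could look like in practice, the wrapper below records every prompt and response to an audit log before handing the output back for human review. The `call_model` parameter and the log destination are hypothetical stand-ins for a firm’s approved client and archiving systems.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit trail -- in practice this would feed the firm's
# archiving and surveillance systems rather than a local file (assumption).
audit_log = logging.getLogger("genai_audit")
audit_log.addHandler(logging.FileHandler("genai_usage.log"))
audit_log.setLevel(logging.INFO)

def supervised_call(user: str, prompt: str, call_model) -> str:
    """Invoke a generative AI tool and record the exchange for later review.

    `call_model` is a placeholder for whatever approved client the firm
    uses; the response is returned for a human to verify, never acted on
    automatically, so accountability stays with the employee.
    """
    response = call_model(prompt)
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    }))
    return response
```

Logging both sides of the exchange gives compliance teams the same review surface they already have for email or chat, making any permitted use auditable rather than taken on trust.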

Allowing employees to use generative AI may lead to increased efficiencies and more effective work processes, but if the expectations and limitations are not made clear, the risk may outweigh the reward.

Global Relay has built AI-enabled solutions engineered to seamlessly integrate throughout the Global Relay ecosystem of archive, collaboration, and compliance applications. Our clients use AI to enhance business efficacy across surveillance, behavioral analytics, customer support, and more. Our approach to innovation is to build it right, the first time – so that all our products are robust and secure.

If you’d like to know more about our AI-enabled products, speak to the team or visit our product page.
