The UK Government, AI and regulation: why not the FRC?

Post #76

March 6, 2024

Claire Bodanis

Claire comments on a gap in the UK Government’s strategy for developing a pro-innovation approach to AI regulation – namely, the lack of focus on corporate reporting. But fear not: our research in partnership with Imperial College London and Insig AI should help fill it!

Last week, I had the great excitement of reading the UK Government’s response paper to last year’s consultation, A pro-innovation approach to AI regulation (and yes, I did look up our name in the list of contributors!).

I was pleased to find that the Government’s overall approach to regulation remains in line with that proposed in their initial consultation: namely, to empower existing regulators to deal with the risks and opportunities of AI in their own domains, supported by a central function, rather than to create a whole new regulatory body to deal with AI across the board. This chimes with my view that the approach most likely to achieve a useful outcome is to regulate the use of AI, rather than trying to regulate AI itself – not least because the risks and opportunities vary so vastly depending on what type of AI you’re talking about, and the context in which it’s used. I was also pleased to read about the Government’s commitment to accurate information, and the importance of verifiable sources.

With this in mind, I turned eagerly to the regulatory framework section, which states that the Government “has written to a number of regulators impacted [sic*] by AI to ask them to publish an update outlining their strategic approach to AI by 30 April”. It goes on to explain that this should cover:

  • An outline of the steps they are taking in line with the expectations set out in the white paper.

  • Analysis of AI-related risks in the sectors and activities they regulate and the actions they are taking to address these.

  • An explanation of their current capability to address AI as compared with their assessment of requirements, and the actions they are taking to ensure they have the right structures and skills in place.

  • A forward look of plans and activities over the coming 12 months.

Imagine my disappointment, then, when I saw that the list of regulators deemed sufficiently ‘impacted’ by AI included Ofcom, the Financial Conduct Authority, and the Competition and Markets Authority, amongst others – but not one of the most important for UK corporate reporting, i.e. the Financial Reporting Council, or FRC.

You may say I’m not entirely free from bias here. After all, corporate reporting is my pet subject, so I’m naturally inclined to think it’s incredibly important. However, I would argue that the subject matter of the Government’s paper naturally points towards the FRC as an essential contributor. It states that: “The government is committed to ensuring that people have access to accurate information and is supporting all efforts to promote verifiable sources to tackle the spread of false or misleading information. AI technologies are increasingly able to provide individuals with cheap ways to generate realistic content that can falsely portray people and events. Similarly, AI may increase volumes of unintentionally false, biased, or harmful content. This may drive negative public perceptions of information quality and lower overall trust in information sources.”

These are precisely the kinds of issues that alerted me to the potential risks posed to corporate reporting by the unfettered use of AI – in particular, the risk of indiscriminately using large language models of the ChatGPT type to create narrative reporting. That’s what prompted me to conduct my initial research last summer, with corporates and investors, into the use of AI in reporting, which resulted in the publication last November of my guidance for Boards and management on approach and disclosure.

And why is the truthfulness and accuracy of corporate reporting so important, beyond my own personal interest? Quite simply because it is the bedrock of the global system of capital markets, which underpins the economic stability of many societies today. Corporate reporting contains the information upon which investment decisions are based. To me, then, it is no less essential a “channel for trustworthy and verifiable information” than the journalism cited in the Government’s paper as being particularly at risk.

So now what? In excellent news, as I announced on LinkedIn a couple of weeks ago, the lovely MSc team at Imperial College London have presented their plan for our broader research project with corporates, investors and the wider reporting ‘ecosystem’ to investigate the responsible use of AI in corporate reporting. We’re now looking at how to put that into practice over the next couple of months. The output will build on my initial guidance to support companies in developing their own responsible approach to reporting, while also offering the Government and our regulators insights into the practical application of AI in corporate reporting. We will be working hard to encourage both Government and regulators to take these insights – gathered from the market – into account.

In the meantime, I urge all those involved in regulating reporting – not least the FCA, the FRC and the UK Government – to remember just how critical this information is, and to keep it at the top of their list when they consider the risks and benefits of adopting AI.

* In our view, unnecessary verbification!