Post #66
April 5, 2023
Claire Bodanis
Claire calls on all those involved in UK reporting regulation to prohibit the use of AI in reporting – right now.
There’s a brilliant scene in Amazing Grace, the 2006 film about the British anti-slavery movement led by William Wilberforce. After years of failing to overcome vested economic interests to get the abolition bill passed in Parliament, a lawyer comes up with a clever ruse, presented as a boring technicality of maritime law, that would have the effect of making the slave trade impossible. With none of the opposition realising its implications, the bill is set to pass without a murmur.
In the film things don’t quite go to plan, but the premise is a good one (and, despite the dramatic licence, it’s a film well worth watching!). Why it’s so relevant now is that it gave me the seeds of an idea that I announced at our quarterly TMIL webinar last week. An idea that I repeat here: I am hereby making a call to all those with regulatory oversight of corporate reporting to act urgently to bring in new reporting regulation, as fast as they possibly can, to prohibit the use of AI technologies – ChatGPT and the like – in creating reporting.
This may come as a surprise, given my February blog in which, while I expressed my alarm at large language model AI, I also explained why I didn’t think it would put me and my annual report writing colleagues out of business just yet. And I still don’t think it will.
As I said in that blog, a chatbot works by ‘scraping’ the internet for publicly available information, and the source material we use to write corporate reports is a) not in the public domain and thus not available for scraping, and b) not held internally within a company in a place or form that even an internal chatbot could gather and write from. Not to mention that the key to everything we write is what we hear during our interviews with the Board and senior execs. And so the ability of chatbots to write corporate reports is, in my view, going to come down to two things: first, companies’ ability to bring all of their internal information together in an easily ‘scrapable’ system; and second, management’s willingness to share their views indiscriminately with that system. At least for the foreseeable future, then, I’m not worried about losing all our business to this new machine.
So why am I calling for regulation to prohibit its use in reporting, if I don’t think it’ll be used to write an annual report any time soon? It’s all in that phrase ‘creating reporting’. What I’m talking about is the systems that companies use internally to generate the information and the sources that we as writers then use in developing the annual report.
At present, those sources are created by human beings. By people who have thought about them; people who understand the difference between truth and falsehood; people who can be held to account for what they have provided. If information is created by a machine, how can it be true? Who is responsible for it? Who can be held accountable?
Truth and accountability are the bedrock of reporting, and, I would argue, the bedrock of our entire system of capital markets, which ultimately rely on the information produced by companies to determine their worth. It is absolutely essential, then, that those sources are truthful, and that people – human beings – can be held accountable for them.
Last week, I received an email that made me realise that anyone who cares about truth needs to act fast. This email was from something called chatbot99 and it invited me to buy its new product, which would write my emails and LinkedIn posts for me. No regulation, no safeguards, no requirement to say ‘this was written by AI, not by Claire Bodanis’ – nothing.
I’m sure you’ve all been reading about the potential that large language model AI has to change how we work, what we do, and, as I said in my February blog, how we think. And this massive change is seemingly just being allowed to happen, without any requirement for any sort of testing or accountability.
Imagine the uproar if GSK, say, or AstraZeneca, were allowed to issue a new drug that had the power to rewire our brains with no testing whatsoever. That is what these chatbots have the potential to do – and indeed are already doing. And so far, no one in power seems to have the appetite to put the brakes on this technology; not in the UK at least, although I was heartened by Italy’s stance last week. The UK government’s response has been pitiful, to say the least.
Perhaps, taking the charitable view, that’s because it all seems so huge, so inevitable, so pervasive, that the task seems too gargantuan. We can’t just ban it altogether, since it’s already out there: so the thinking goes, let’s just not do anything, and perhaps human beings will adapt as we’ve always done and all will be OK. Definitely the approach of the proverbial ostrich.
This is where, for me, the inspiration of Amazing Grace comes in. I’m always bemoaning the unintended consequences of new reporting regulation, and how our regulators never seem to consider what it all means in practice, which quite often runs contrary to the overall spirit of reporting – namely to tell a truthful story, clearly, to all stakeholders. But here, we have a chance to turn those ‘unintended consequences’ to our advantage and make them our real purpose, just as they were for our abolitionist forefathers.
By prohibiting the use of AI in creating reporting, our regulators can delay its much wider adoption – at least until such time as its true impacts have been properly tested, regulated and accounted for. The depth of information required in corporate reporting today means that it touches every part of a business, from top to bottom. If companies know that they can’t use AI to generate content and sources for reporting, they’ll think far more carefully before embracing it fully and irrevocably into their systems out of some blinkered notion of productivity and efficiency.
What’s more, where we steal a march on our forefathers as portrayed in Amazing Grace is that the stated intention of my proposed reporting regulation, far from being a boring technicality, is an entirely worthy end in itself, even without the wider benefits. Prohibiting AI from being used in creating reporting would safeguard a principle already firmly embedded in our regulatory system: that of providing truthful, reliable information.
I urge you all, therefore, to join me in supporting the prohibition of AI’s use in reporting – fighting to hold onto truth and to protect everything that corporate reporting stands for.