Post #74
December 6, 2023
Claire Bodanis
Claire discusses the launch of her guidance for Boards and management on the responsible use of AI in corporate reporting. All blogs and documents referenced can be found in the new AI campaign section of our website.
Every Saturday, my husband and I meet for lunch at our local Jewish bakery, Karma Bread*, in South End Green. The deal is, I roll out of bed just in time to get there and place our order before the kitchen closes – giving David time to navigate the vagaries of the London Overground’s weekend service and get back from his morning kickboxing class in Shoreditch. Last Saturday, as I squeaked in with less than five minutes to spare, I overheard Sacha (who runs Karma with her genius baker sister, Tami) talking to an old, rather scruffy-looking chap sitting at a table by the door. ‘Is your room warm?’ she asked him. It occurred to me that he might be homeless, so I managed to have a quiet word with Sacha and offered to buy him lunch. ‘Oh don’t worry,’ she said, ‘that’s very kind, and yes he is homeless, but we look after him.’
As I was sitting waiting for David to arrive, I found myself glazing over the newspaper and instead pondering on the kind of business that gives a table (one of just eight inside the shop) over to a homeless person at the busiest time of the week. No one batted an eyelid, by the way – there were no sidelong glances from other customers, nor any subtle moving away of chairs. It was just what’s done by these good people in this small corner of North London.
As we contemplate the advent of artificial intelligence into our lives in a far bigger way than anyone imagined even 12 months ago, and with COP28 in full swing, it strikes me that stories like this are more important than ever. Because they remind us that what really matters is other people – our impact on them, their impact on us – and the kind of world we want to live in. And I doubt you’d find many who’d choose the state of the world as it is right now.
It was this kind of (perhaps rather grandiose) musing that was uppermost in my mind when generative AI, chatbots like ChatGPT in particular, exploded into our consciousness early in 2023. I set out my concerns in my February blog, What happens when we outsource the power to think? That led swiftly to considering the particular risks of AI in my own field, and to my early attempts to persuade our regulators to prohibit, for the time being, the use of large language model systems, or LLMs (of the chatbot type), in reporting, for fear such use would compromise reporting’s overall purpose: to build a relationship of trust with investors and other stakeholders through truthful, accurate, clear reporting that people believe because it tells an honest, engaging story. You can follow what I’m now calling my AI campaign – which now has its own page on our website – in my April, May and July blogs.
So why, when only a few months ago I was calling for a ban on the use of LLMs in reporting, did I hold a webinar yesterday to mark the official launch of my guidance on the responsible use of AI in corporate reporting?
If it’s not too obvious a point to make, AI moves incredibly fast. And over the years I’ve realised that to achieve anything useful we need to deal with the world as it is, not as we wish it to be. When I started my AI campaign, the only LLMs available were external ones such as ChatGPT – and who would dream of putting confidential reporting information into a public chatbot? (Too many, in fact, but that’s another story.) Now, however, it won’t be long before every company has its own internal chatbot, which will no doubt be used for all sorts of things, including some aspects of the reporting process.
And so, with the help of the 40+ corporates, investors, advisors and other interested parties who took part in my AI in reporting research (to whom I owe my abiding thanks), I changed my mind. I realised that there is simply no point trying to prohibit the use of a technology that’s going to be widely used anyway, and which might even have a positive contribution to make if, as they all said, the right guardrails were put in place.
In the absence of any official guidance, and with no indication that any might be coming soon, I therefore developed my own. Officially launched at my webinar yesterday, it’s based on the concerns and issues raised by my generous research participants. Aside from the necessary methodology and feedback from the research, the guidance itself amounts to three simple pages covering two sections:
How to approach using AI in reporting (two pages), which includes a set of questions Boards and management need to ask themselves to develop a policy on the responsible use of AI in reporting, plus some practical steps on how to do it; and
Particularly called for by investors – how to approach disclosure (one page), which includes what a good disclosure statement should cover, plus some suggestions of where to include that statement in the annual report.
The key question at the heart of this guidance, though, which all companies need to work through for themselves, is this: how would the introduction of AI support, or detract from, the ultimate purpose of reporting I mentioned earlier?
Aside from truthfulness and accuracy, there’s an important concept here that I believe is essential for reporting – namely relationships. If you think about it, companies are collections of individual human beings, who together are building relationships through communication (reporting) with other human beings who are making decisions about that company – to invest in it, work for it, work with it, support it, or not – based on the information and the story they find in the annual report. Crucially, the annual report as a whole is supposed to represent the views and opinion of the Board and management – the directors. That’s where the ‘relationship of trust’ ultimately resides. And, while there are sections of the annual report that are statements of compliance (where arguably, use of AI might be helpful), it’s important to make sure that matters of opinion truly are the opinion of the directors. As one focus group participant put it: what value would stakeholders see in a report written by a bot?
And so, while I’ve changed my mind about AI, I must confess that my heart has not yet caught up. At a high level, the question I’m really asking is: how will AI enhance us as human beings, how will it make us more connected, better able to create the world we want to live in, a world where a homeless man can be welcomed into the warmth as a fellow human being? And it’s this question that I hope everyone, whatever their field, will ask themselves when adopting AI, especially as we enter the Christmas season of goodwill to all.
On behalf of everyone at Falcon Windsor, I wish you all a very peaceful Christmas, and comfort for those who mourn.
* Karma Bread, 13 South End Road, Hampstead, London NW3 2PT, UK