Practical perspectives on reporting #22 – A responsible approach to using AI in corporate reporting: Claire Bodanis launches much-needed guidance for Boards and management

By Tamara O’Brien, TMIL’s roving reporter 

There was a moment on Saturday night, as I hurried through the silent, pitch-black graveyard in the freezing mist, when I doubted everything. The time and date. The way ahead. The wisdom of doing this, alone, when I could be tucked up on the sofa in my nice heated throw watching Strictly.

It was only my faith in my own double-checking, my knowledge of a familiar if unseen path, and my abiding love for Handel’s Messiah, that propelled me to the gloomy entrance of St Swithun’s. And once through those huge oak doors, then the modern glass security ones, my faith was vindicated. The church was ablaze, abuzz, every pew filled.

I didn’t expect to have any difficulty finding a seat, but it really was rammed and the performance was due to start any minute. Mercifully, a trio of older ladies let me into their side pew, where I had a pretty good view and, hallelujah, found myself next to the heating pipe. Part One went its sublime way; then wine in the vestry; then Part Two commenced, blasting off with the majestic ‘Surely He hath borne our griefs’.

And blow me if two of the merciful ladies didn’t get their phones out.

The blessings and curses of digital technology were the hot topic of today’s webinar, which marked the launch of Claire’s guidance for Boards on using AI in corporate reporting. Joining her were Fiona Cuttell, Assistant Company Secretary and Head of Non-Financial Reporting at FTSE 100 Haleon, who took part in the research; Charlotte Lush of ShareAction, representing the investor view; and Neil Murphy, CEO of FTSE 250 Bytes Technology Group (BTG), which sells AI systems to corporates and the public sector.

Now if you’re anything like me, you might describe yourself as tech-neutral; neither early adopter nor Luddite. More of a wait-and-see, go-with-the-majority, wait-till-it-gets-cheaper-then-use-it-as-best-suits-me kind of person. And I don’t necessarily see anything wrong with that.

Except it’s dawned on me, rather late in the day, that going with the flow is simply not an option with AI. It behoves each of us to examine, and keep examining, our view of a technology that has such awesome constructive and destructive potential.

This new awareness was sparked by human agency (Claire’s red-flag-waving); fed by human thought (reading published articles on AI, for and against); and given meaning by personal experience (the careless wrecking of something of intangible value to me[1]). So now I understand Claire’s urgency, and clearly, so did our panellists.

Claire opened the discussion with a quick résumé of the guidance itself: how it came about, who helped inform it and what it contains. And a reminder, lest we forget, of corporate reporting’s ultimate purpose – to build a relationship of trust with investors and other stakeholders through truthful, accurate, clear reporting that people believe because it tells an honest, engaging story. We can judge AI on how near to, or far from, that goal it takes us.

As a CoSec and governance professional, Fiona observed that senior management must have oversight of any AI-generated text, because it’s only as good as its inputs; and given companies’ natural desire to deliver mostly good news, there’s bound to be a positivity bias in what AI ingests. She also picked up on Claire’s concern about the next generation of corporate reporters. Without direct experience of information-gathering, how will they learn to interpret, develop narratives, spot what’s not being said? She did however acknowledge AI’s usefulness as a ‘decluttering’ tool, in deciding what to disclose in the AR from a mass of information.

Charlotte, who uses data and research to help investors better understand social issues, saw similar data-crunching benefits for investors grappling with multiple ESG reporting standards. Also, in the investor world, it seems there’s no such thing as too much information: ‘The analysis and consolidation of huge, unstructured data sets will uncover meaningful information investors wouldn’t have been able to access before, which is great.’ But it all comes with the risk of inaccurate information seeding the wider body of data, and spreading weed-like into information that feeds through to investors. Guidance like Claire’s is essential in minimising this risk. 

Neil gave the ‘pro’ argument for AI: that for businesses trying to optimise performance and minimise operating costs, it’s fantastic, transformative. It’s also unstoppable and hard to police – so any hope of using AI ethically must begin with a strong company culture. That means sky-high levels of honesty, openness and transparency, from the top down. There must also be guardrails around data security, compliance and governance, but like any regulation, they’ll take a long time to get right. In any case, BTG won’t be using AI to write their annual report, because what matters most to them is authenticity. They understand their business best, and what they’ve been through during the year. And producing the AR is a collective effort they value, because it deepens that understanding.

Neil’s last point about authenticity, which Fiona and Claire also touched on, made me think about a key aspect of this campaign: the importance of human expression. Recently I read a review in the New Statesman by Peter Williams, in which he says language is not just a vehicle for information but an aesthetic tool. “It is not often you say this in life,” he muses, “but what is really needed… is some poetry to demonstrate what language can do.” To paraphrase him further: nuance, irony, localism, humour, particularity and poetry are what make writing human. I say we need more of all of them in our corporate reporting, because that’s how readers will distinguish management’s true thoughts, ideas and opinions from bland, AI-generated truisms.

The discussion prompted a deluge of questions from the audience, and they didn’t hold back. Is there a difference between using AI to create a report, and using it to crunch large volumes of data? With AI advancing at warp speed, what chance do companies have of getting its use in reporting right? And who’s thinking of the auditors? Full answers to these questions are in the webinar recording. And to the many questions there just wasn’t enough time to cover, Claire has responded with a note to all registered for the webinar.

So, where does this leave us? Perhaps with the thought that the best use of AI in reporting must be to support its ultimate purpose, of connecting people, in whatever manner suits each company. One suggestion: in helping us with the heavy lifting of data management, AI might just give us more time to focus on our uniquely human skills of interpretation and communication. Here’s hoping.

Whichever way the wind blows, may you find comfort in your own familiar paths, seen and unseen, in this world’s winter.

[1] In case you’re wondering… after 10 minutes of fruitless head-turning and stage-sighing, I did quietly ask my neighbour to turn her phone off. (The other lady had already done so.) I knew what she was going to say – that she was just following the libretto – but I explained that the constant on-and-off of the phone’s light was distracting. Poor lady looked absolutely crestfallen. I shared my printed lyrics with her, but we both had a slightly miserable and awkward time of it thereafter.