The Women’s Insurance Network September webinar – Digital dreams vs AI nightmares: can we regulate to innovate?

By Sophie McBain

The question under debate at The Women’s Insurance Network’s (TWIN) September webinar on Monday 18 September was “Will AI be a friend or a foe in financial services?” But, as the chair later pointed out, the inevitability of AI’s encroachment on our work lives means a more realistic question would perhaps be “AI: firm friend or that annoying, awful relative you’re forced to tolerate?”

On the more optimistic side of this three-panellist session were two women from tech companies with first-hand experience of how the judicious use of AI can boost profitability in the financial services sector, by helping to speed up complex data analysis or streamline customer services. (TWIN operates under the Chatham House Rule, so I won’t name the panellists.) A proprietary large language model (LLM) system can, for instance, be trained to analyse a customer’s insurance claim and determine at speed whether a specific accident is covered by their policy; it can then compile a summary email, complete with links, to be reviewed by a human adviser before sending, saving hours of human labour.
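For the technically curious, a workflow like the one described might look something like the minimal sketch below. It is purely illustrative: the function names, the stubbed-out model call and the prompt wording are my own assumptions, not anything the panellists shared, and a real system would plug in a proprietary LLM and genuine policy documents.

```python
# Hypothetical sketch only: an LLM drafts a coverage verdict and a reply email,
# but nothing is sent until a human adviser reviews and approves the draft.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftResponse:
    claim_id: str
    covered: Optional[bool]   # None = the model couldn't decide; escalate to a human
    draft_email: str
    approved: bool = False    # only a human reviewer flips this to True

def call_llm(prompt: str) -> str:
    """Stand-in for the proprietary LLM call; returns a canned answer here."""
    return "COVERED\nDear customer, your claim for the accident on 3 May is covered..."

def triage_claim(claim_id: str, claim_text: str, policy_text: str) -> DraftResponse:
    """Ask the model whether the claim is covered and draft a reply for human review."""
    prompt = (
        "Policy wording:\n" + policy_text +
        "\n\nClaim description:\n" + claim_text +
        "\n\nReply with COVERED or NOT COVERED on the first line, then a draft email."
    )
    verdict, _, email = call_llm(prompt).partition("\n")
    verdict = verdict.strip().upper()
    covered = True if verdict == "COVERED" else False if verdict == "NOT COVERED" else None
    return DraftResponse(claim_id=claim_id, covered=covered, draft_email=email.strip())

if __name__ == "__main__":
    draft = triage_claim("CLM-001", "Rear-end collision on 3 May", "Motor policy wording ...")
    print(draft.covered)
    print(draft.draft_email)  # a human adviser reviews and edits this before anything is sent
```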

Both stressed how important it is, however, to ensure that AI results are always reviewed and monitored by a human, whose feedback will in turn train and improve the AI. They call this keeping the human in the loop. (“The human in the loop” would incidentally make a great indie band name – any takers?!) Keeping humans in the loop also ensures that workers are empowered, rather than displaced, by tech.
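Again purely as an illustration (the field names and the simple CSV log are my own assumptions, not the panellists’ design), the “loop” part can be as modest as recording the reviewer’s verdict on every draft, so that corrections feed back into evaluating and retraining the model:

```python
# Hypothetical sketch: log the human reviewer's decision on each AI draft so the
# feedback can later be used to evaluate and improve the model.
import csv
from datetime import datetime, timezone

def record_review(claim_id: str, model_said_covered: bool, reviewer_agrees: bool,
                  edited_email: str, log_path: str = "review_log.csv") -> None:
    """Append one review decision to a simple CSV feedback log."""
    with open(log_path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            claim_id,
            model_said_covered,
            reviewer_agrees,   # a disagreement is exactly the correction the model learns from
            edited_email,
        ])

# Example: the adviser overturns the model's verdict and rewrites the email.
# record_review("CLM-001", model_said_covered=True, reviewer_agrees=False,
#               edited_email="Dear customer, unfortunately clause 4.2 excludes ...")
```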

A survey earlier this year revealed companies’ top concerns about using AI. These included the risk of inaccuracies, cybersecurity breaches, intellectual property infringement, regulatory non-compliance and the lack of ‘explainability’. Explainability is a really useful concept: often we do not know exactly how AI reaches the answers it does, something that might suddenly become very important when, for example, someone wants to appeal a decision made by a machine. No one wants to hear “computer says no”, and no company wants to be unable to challenge or defend decisions made by an algorithm.

These worries provided a neat segue to Claire’s contribution because, as regular readers of this blog will know, she comes down on the side of AI – at least LLMs used for drafting – as foe, or at best that annoying, unavoidable relative, especially when it comes to corporate reporting. She argued that the core purpose of corporate reporting, namely to build a relationship of trust between a company and its stakeholders by telling a truthful story, simply cannot be outsourced or automated.

That’s partly because AI of the LLM variety cannot be trusted to tell the truth – although it’s disturbingly good at creating plausible-sounding falsehoods, as two American lawyers learned the hard way when they used ChatGPT to write a legal brief and were caught and fined because the AI program had invented the cases and citations. On top of this, AI can undermine relationships and accountability in corporate reporting: investors and other stakeholders need to know – and deserve to know – that a director’s message is truly his or her view, not the machinations of an LLM. There are also potential knock-on implications for directors’ overall accountability for reporting if LLMs are used. Claire’s broader concern, however, was the potential erosion of the thought process itself: the more we rely on machines to do our thinking for us, the less able we may become to think for ourselves.

Claire explained that she’d submitted proposals on regulating the use of AI in corporate reporting to the recent government consultation A pro-innovation approach to AI regulation, and has been conducting focus groups with representatives from FTSE 100 and FTSE 250 companies, auditors and investors to debate the pros and cons of her proposals. In summary, these are that companies should not use LLMs to write the narrative sections of reports, and that if AI is used for data collection and analysis in source material, this should be disclosed. She acknowledged that, while it’s unlikely a proposal like this would ever make it through, what really matters is getting the issue onto the agenda with companies and regulators. In short, she sees AI as comparable to prescription medicine: powerful and beneficial when used in the right contexts, but dangerous in others, meaning that its use should be carefully regulated. You can read her submission to the government here.

For all the points of difference, there were areas of common ground. Unsurprisingly, everyone agreed that it is vital to develop clear codes of ethics for the use of AI (including being clear about the very different technologies the term encompasses) and to put in place safety mechanisms to prevent it from being misused. The speakers mentioned initiatives such as the Frontier Model Forum, an AI safety group founded by the four major players in generative AI (Google, OpenAI, Microsoft and Anthropic): evidence, one hopes, that they take their social responsibilities seriously.  

The panellists all made clear that whether you welcome or fear the fast-approaching AI revolution, the one thing you cannot afford to do is ignore it. Even if you haven’t yet used AI for your job, your competitors, colleagues or younger employees may be using it already. In other words, if you don’t think AI will affect you, you haven’t thought about it enough. One major, and very current, risk is that an employee will unwittingly breach a company’s privacy and cybersecurity policy by inputting sensitive data into a public LLM such as ChatGPT. So, if your company doesn’t yet have a policy in place for how AI can – and can’t – be used at work, it probably needs one.

On which note, the panel were prompted to conclude with a few tips. One speaker recommended experimenting with AI, beginning with a few short courses on LinkedIn Learning. Another was currently reading, and very much enjoying, Mustafa Suleyman’s The Coming Wave (I too have heard very good things about this book!). Claire also suggested Digital Empires by Anu Bradford, a must-read for anyone interested in AI regulation, and cautioned that while there may be good stuff on LinkedIn, there’s ‘a lot of dross as well’, so it’s essential to consider the authorship of any information or training about AI. Personally, she’d recommend the Financial Times and New Scientist for unbiased coverage.

For anyone now inspired to start an AI book club, or who just likes their bedtime reading to deliver a powerful shot of sheer terror, I’ve just finished reading Kashmir Hill’s Your Face Belongs to Us. It’s a gripping account of the secretive facial recognition start-up Clearview AI that asks whether privacy can survive the AI era – so do add that to your reading list.

After all, the panel discussion made one thing very clear – knowledge is power, and if you want to avoid being outwitted, or wrongfooted, by AI, you need to start swotting up.