And The Rest Is Leadership 6th October '25

Helping Leaders Translate AI Into The Context Of Their Organisations.

🌟 Editor's Note
Welcome to the bi-weekly newsletter focused on the AI topics that leaders need to know about. In this AI age, it’s not knowledge of AI tools that sets you apart, but how well you can integrate them into the context of your business.
That requires a focus on your people, and on helping them through the change, above any AI product you can buy.

Featuring

  • Three Things That Matter Most

  • In Case You Missed It

  • Tools, Podcasts, Products or Toys We’re Currently Playing With

Smart Industries Are Quietly Scaling GPTs While Everyone Else Misreads the Trend 

The growth of GPT usage is staggering. In the 12 months from June ‘24 to June ‘25, messages to ChatGPT grew from 451 million to 2.6 billion per day.
For context, that’s roughly 30,000 messages being sent every second. Another way to think about it: by the time you’ve finished reading this sentence, another 150,000 ChatGPT messages will have been sent.
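For anyone who wants to sanity-check those figures, the arithmetic is simple. A minimal back-of-envelope sketch in Python, assuming messages are spread roughly evenly across the day and a reading time of about five seconds for that sentence:

```python
# Back-of-envelope check of the headline figures above
messages_per_day = 2.6e9          # ~2.6 billion ChatGPT messages per day (June '25)
seconds_per_day = 24 * 60 * 60    # 86,400 seconds in a day

per_second = messages_per_day / seconds_per_day
print(f"{per_second:,.0f} messages per second")  # ~30,093

reading_time_seconds = 5          # assumed reading time for one sentence
print(f"{per_second * reading_time_seconds:,.0f} messages per sentence")  # ~150,463
```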

A new OpenAI paper, written in conjunction with the American universities Duke and Harvard, shows an apparent decline in the use of ChatGPT for work purposes, whilst use for personal reasons soars. The headline finding: as of July 2025, more than 70% of ChatGPT consumer queries were unrelated to work. That is a stark contrast to just a year ago, when nearly 50% of all messages were work-related.

However, the drop in the proportion of work messages is misleading. Excluded from this data are messages sent within enterprise GPTs. Whilst exact figures on the use of the enterprise version of ChatGPT are not clear, OpenAI have recently reported more than 3 million business users (up from 1 million a year ago), and some estimates now place paying business accounts at around 5 million.

Within the paper, we also see that highly paid professional and technical occupations are more likely to use ChatGPT for work.

*Source: Chatterji et al. (2025), How People Use ChatGPT, NBER Working Paper No. 34255.

Takeaways For Leaders:

Leaders scanning this latest adoption data might think enthusiasm is waning. It’s not. What looks like a drop in “work-related ChatGPT use” captures only free consumer activity and individuals on entry-level subscriptions; it misses the rapid rise of enterprise GPTs embedded inside companies.

Smart industries are moving first—formalising prompt workflows, securing data access, and building private AI copilots. The danger isn’t under-use; it’s leaders mistaking a blind spot for a slowdown.

Hallucinations Aren’t Going Away

The issue of GPTs hallucinating (aka getting things wrong or making things up) is very real, and will likely never go away. There is evidence that the issue is growing over time, with later models hallucinating more than earlier ones. Refusals to provide an answer (what used to happen when a GPT was unsure) have dropped drastically over the past couple of years, meaning that more wrong answers are returned.

The challenge is that there is no standard approach to measuring this problem: some research reports hallucination rates in the single digits, whilst other studies see rates as high as 60%. Regardless of the research method or business area examined, everyone agrees that the rate of hallucination is not zero, and we are now learning that it probably never will be.

The technical explanation of the maths behind how answers are generated goes some way to explaining why wrong information can be returned. Despite these explanations, sceptics point to the underlying business incentives connected to this issue. If companies like OpenAI clamp down too hard on hallucinations (for example, forcing ChatGPT to refuse to answer more often), the product’s utility will shrink dramatically. Users expect generative value, not constant refusals, and may seek out a more obliging competitor if their GPT of choice doesn’t return answers every time.

Takeaways For Leaders:

Don’t treat GPT outputs as gospel (or let your teams do so). Validation of answers, especially in sensitive areas, is essential. Whilst techniques such as careful prompting may help, human-in-the-loop checks are non-negotiable. Be aware of pockets in your teams that can be too trusting of GPT information: are the most digitally native people in your organisation also the most trusting? If so, partnering them with naturally sceptical colleagues can help.

It is wise to favour models that admit uncertainty over those that confidently make things up. Over time, monitoring hallucination rates by domain (e.g. internal data, regulatory context, product specs) will give you a greater handle on the areas to watch most closely; a minimal sketch of what that could look like follows below.
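If you want to make that monitoring concrete, here is a minimal sketch in Python. The `HallucinationTracker` class and the domain names are purely illustrative assumptions, not a reference to any real tool; the idea is simply to tally human-reviewed answers per domain:

```python
from collections import defaultdict

class HallucinationTracker:
    """Tallies human-reviewed GPT answers per domain (illustrative sketch)."""

    def __init__(self) -> None:
        self.reviewed = defaultdict(int)  # answers a human has checked, per domain
        self.flagged = defaultdict(int)   # answers found to be wrong or invented

    def record(self, domain: str, hallucinated: bool) -> None:
        self.reviewed[domain] += 1
        if hallucinated:
            self.flagged[domain] += 1

    def rates(self) -> dict[str, float]:
        # Hallucination rate per domain = flagged / reviewed
        return {d: self.flagged[d] / self.reviewed[d] for d in self.reviewed}

# Example: spot-checks logged by reviewers over a week
tracker = HallucinationTracker()
tracker.record("internal data", hallucinated=False)
tracker.record("regulatory context", hallucinated=True)
tracker.record("regulatory context", hallucinated=False)
print(tracker.rates())  # {'internal data': 0.0, 'regulatory context': 0.5}
```

Even a lightweight tally like this makes it obvious which domains need the closest human oversight.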

Hallucination is, to a certain extent, inherent: “making things up” is part of what generative models do. The ideal therefore is not zero hallucination, but smart tolerance and transparency.

How To Use AI To Improve Meetings

Amongst the hype, fear and doomsday articles about AI replacing jobs, smart organisations are shifting their focus to how AI augments, rather than replaces, human collaboration.


Harvard Business Review shared some simple examples of how to use AI for better meetings.

Three areas HBR called out:

  1. Using AI for preparation:

    Surfacing relevant data or suggesting key issues and discussion points before the meeting begins.

  2. Using AI as a ‘seat at the table’:

    Having one participant responsible for prompting AI during the discussion, giving it a specific role: for instance, providing an alternative voice such as a challenger, or helping to craft a more compelling narrative for the meeting’s output.

  3. All participants using AI:

    Not all the way through the meeting, but breaking out individually in short bursts to use AI for tasks like idea generation, or reflecting on what a decision could mean for their part of the organisation.

Takeaways For Leaders:

There are the simple things that many teams think of, like note-taking and distribution, but to stop there is like having a six-speed Ferrari and driving it in first gear.

Teams can use AI for so much more. One person using AI in isolation is not as powerful as a few of the team partnering with it together. And as the recent research from Stanford and P&G showed, teams using AI together are way happier than either individuals using AI or teams without it. And happier teams are more productive and more profitable.

That said, use of AI needs direction: help teams decide deliberately which meetings will benefit from AI support (e.g. strategic vs operational). Don’t sprinkle AI everywhere. Coach your team to question, critique, and correct AI output: teams must treat it as a co-thinker, not a saviour. Lastly, whilst AI-driven prompts and simulations can help sharpen the agenda and output, leave space for unscripted breakthroughs; meetings require human interaction first and foremost.

🔥 In Case You Missed It…

AI Actor Tilly Norwood Outrages Other Actors

A creation from Dutch actor and comedian Eline Van der Velden has caused waves amongst ‘real’ actors. Norwood’s Instagram page and website include headshots from filming tests and an AI-generated advertisement apparently aimed at the UK’s BBC, suggesting that AI-generated content would be much better than re-running old programmes.

The pictures and videos are a great way of seeing how far AI has come in the past couple of years, so much so that it is upsetting many big names of the screen, with Emily Blunt calling it ‘terrifying’ and Natasha Lyonne calling for anyone who works with Tilly to be boycotted.

Perhaps the reaction of other actors can be seen as a parallel for individuals in other industries who see AI as a threat, not an enhancement, and shut it down rather than look at how it could enhance what they do. AI influencers have amassed huge followings (Lu do Magalu, for instance, has 8.2m Instagram followers) and lucrative endorsement deals. The world seems to have embraced AI influencers, and AI actors are maybe not far behind.

🏆 Tools, Podcasts, Products Or Toys We’re Playing With This Week

AI Weirdness

Janelle Shane, an AI researcher and humourist, writes an excellent blog on how the machine learning algorithms behind AI go wrong. Taking simple examples such as labelling animals, naming paint colours or generating recipes, she demonstrates many of AI’s limitations very simply, often to hilarious effect.

Every October, Janelle uses neural networks to generate daily drawing prompts, which followers then use AI to try to create. Her ‘Botober’ list this year shows how older, smaller neural networks ‘think’, and the impact of limited training data.

Whilst light-hearted, her work underlines a very real challenge in AI: expecting it to be correct all the time, and trusting it to work without close control, is going to lead to problems. AI ‘workslop’ is already putting pressure on organisations where lazy AI practices have taken hold. Don’t become one of them!

Did You Know? 

Your desk chair is a symbol of power — literally.

The word “chairman” comes from medieval England, where the person who sat in the only chair at gatherings was presumed to be in charge. Everyone else sat on benches or the floor.

Till next time,

And The Rest Is Leadership