And The Rest Is Leadership 15th September '25

Helping Leaders Translate AI Into The Context Of Their Organisations

🌟 Editor's Note
Welcome to the bi-weekly newsletter that focuses on the AI topics leaders need to know about. In this AI age, it's not your knowledge of AI tools that sets you apart, but how well you can integrate them into the context of your business.
That requires focusing on your people and helping them through the change, more than on any AI product you can buy.

Featuring

  • Three Things That Matter Most

  • In Case You Missed It

  • Tools, Podcasts, Products or Toys We’re Currently Playing With

Quick links

95% Of AI Initiatives Fail, Says MIT?
Google: Hold My Beer…..

*All images in this week’s newsletter were created using last newsletter’s suggested tool, Nano Banana.

Regular readers of this newsletter will have seen the recent MIT study that caught the headlines with its claim that 95% of AI initiatives were not showing impact. Google's new research, released last week, tells a very different story.

What Was The Google Study?

A survey of 3,466 senior leaders, focused on enterprises with $10m+ in revenue and a balanced industry spread (retail, finance, healthcare, manufacturing, etc.). Respondents were mostly C-level executives or senior leaders.

What Did The Study Find?
  • 74% of companies already see ROI from gen AI.

  • 88% of early adopters of agentic AI see ROI.

  • ROI shows up mainly in productivity, customer experience, and sales/marketing.

Why The Discrepancy?  
  • Firstly, the sample sizes were very different: the MIT study was built on interviews with 153 people across 52 companies, plus 300 case reviews.

  • Google, by contrast, surveyed 3,400+ executives, mostly C-level or senior leaders, at enterprises with $10m+ in revenue and a spread across industries.

  • MIT's study focused more on small and mid-sized companies, which tend to be more resource-constrained.

  • Lastly, the measures of ROI differ: Google counts any measurable productivity or CX gain as ROI, whereas the MIT study demanded P&L-level impact (revenue, profit, headcount savings).

What Should Leaders Take Away?

Alarmist headlines about AI failing, and sales-spun headlines about AI solving every company problem, will continue. MIT's study outlined a divide in which only a few companies succeed; Google's painted a far more optimistic picture of widespread ROI from AI.

The MIT study's findings - that there are implementation and learning gaps - are valid for many organisations, large and small. But for leaders with resources to apply, and the ability to drive the change management required to take advantage of AI, the Google study offers good evidence of significant upside. Google's framing of the 'agentic shift' as the path to AI ROI is a clear sign that investing in understanding agents and agentic AI is becoming essential. Caution is still advised, however: as the MIT report suggested, deployments that deliver ROI need capabilities that many organisations currently lack - memory, learning, contextualisation and so on.
Building out an agentic framework is made to sound easy by the companies selling solutions in this space. In truth, what sits below the surface (infrastructure, governance and the like), together with leadership of the change, is where the ROI of AI will be made or lost.

The organisation NewsGuard provides transparent tools for finding reliable information online. Its recent audit includes a named ranking of specific chatbots, covering ChatGPT-5, You.com's Smart Assistant, xAI's Grok, Inflection's Pi, Mistral's le Chat, Microsoft Copilot, Meta AI, Anthropic's Claude, Google Gemini, and Perplexity.

*35% of the images generated for this newsletter were a result of false returns to our prompts.

The headline rates are alarming: over the past year, the rate at which this group repeats false information has risen. The findings are relevant to any business using AI chat tools or generative models, especially for customer-facing or news/information tasks. If an AI is used to draft content, answer queries, summarise news and so on, these kinds of error rates matter.

Also notable is the reduction in the number of queries where these tools decline to answer. In the push to retain relevance, LLMs may be prioritising returning an answer at any cost.

Should You Be Worried?

A 35% rate of false returns on news topics is certainly not trivial, and it reflects that AI is not yet reliable enough to be deployed without oversight in certain domains. For many business uses, however, it is not necessarily a doomsday signal.

If your use of AI is bounded, domain-specific, or you can put guardrails in place (review, human oversight, selective domain use), then this is a caution signal rather than a stop sign.

Whilst the audit focused on controversial news topics, the findings are a good reminder of how essential it is, when relying on AI, to have systems for review and validation, and of how keeping a human in the loop can help mitigate these kinds of errors.
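For leaders who want a concrete picture of what such a guardrail can look like, here is a minimal sketch of a human-in-the-loop review gate: AI-drafted answers only go out automatically when they sit inside an approved, bounded domain and clear a confidence threshold; everything else is held for a human reviewer. The domain list, threshold, and function names (route_ai_draft, queue_for_human_review, publish) are illustrative assumptions, not a reference to any specific product or the NewsGuard audit itself.

```python
# Minimal sketch of a human-in-the-loop guardrail for AI-drafted content.
# All domains, thresholds, and function names here are illustrative assumptions.

APPROVED_DOMAINS = {"billing", "shipping", "product_specs"}  # bounded, low-risk topics
CONFIDENCE_THRESHOLD = 0.85  # below this, a human always reviews the draft


def route_ai_draft(draft: str, domain: str, confidence: float) -> str:
    """Decide whether an AI-drafted answer can be published or needs human review."""
    if domain not in APPROVED_DOMAINS:
        return queue_for_human_review(draft, reason="outside approved domains")
    if confidence < CONFIDENCE_THRESHOLD:
        return queue_for_human_review(draft, reason="low model confidence")
    return publish(draft)


def queue_for_human_review(draft: str, reason: str) -> str:
    # In a real system this would write to a ticketing or review queue.
    print(f"Held for review ({reason}): {draft[:60]}")
    return "held"


def publish(draft: str) -> str:
    # In a real system this would post the answer to the customer channel.
    print(f"Published: {draft[:60]}")
    return "published"


if __name__ == "__main__":
    route_ai_draft("Your refund was issued on 12 September.", "billing", 0.93)
    route_ai_draft("Here is my take on today's election news...", "news", 0.97)
```

The point of the sketch is not the code itself but the design choice it illustrates: bounded domains plus an explicit review path turn a 35% error rate on open-ended topics into a manageable risk on the narrow tasks you actually automate.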

Key Takeaway From NewsGuard's 1-Year AI Audit Progress Report

Anthropic’s $1.5Bn Settlement For Authors

Anthropic, the owner of the Claude chatbot, has agreed to pay $1.5bn to settle a class action lawsuit brought by authors who claimed that their work was stolen to train its AI models.
A US judge originally ruled that using the books to train AI did not violate copyright law, but ordered Anthropic to stand trial over its use of pirated material, which prompted the company to make the settlement offer to the authors. The ruling was among the first to decide how LLMs can use existing material; the company had amassed more than 7 million pirated books in a central library.

🔥 In Case You Missed It…

Baseball Team Puts All Live Decisions In The Hands Of AI

The Oakland Ballers went into their game on the 6th of September with the human coach ceding all decisions to AI. Everything from the line-up to in-game calls was decided by the AI, with the human coach simply implementing its choices.

The AI's decisions went well, and the team won the game. It should be noted that the AI was learning from a side already enjoying a good level of success, which helped lay the foundations for the win. After the game, the club announced on its Instagram page that the AI coach would be fired. But given that the human coach lost the following weekend, it remains to be seen what the future holds for the Oakland Ballers' AI coach.

🏆 Tools, Podcasts, Products Or Toys We’re Playing With This Week

There’s An AI For That (TAAFT)

Alongside a daily round-up of AI news, the TAAFT email shares 10 AI tools every day. Some are business tools, some are efficiency aids, and some cater to interests and hobbies. The sheer volume and breadth of the tools being developed is worth a look, and giving the Top 10 a daily browse will likely surface at least one useful tool a week to incorporate into your toolkit.

The link to their site is here - https://theresanaiforthat.com/ - but the daily email is far easier to navigate than the site itself, so signing up is the better route.

Did You Know? 

NASA’s Voyager spacecraft still runs on 1970s assembly code.

Voyager 1 and 2 are both more than 12 billion miles from Earth, yet NASA engineers still upload new instructions in hand-written assembly code dating from the late 1970s.

Till next time,

And The Rest Is Leadership