And The Rest Is Leadership: Putting AI In Context
15th February '26
Helping Leaders Translate AI Into The Context Of Their Organisations.

🌟 Editor's Note
Welcome to the bi-weekly newsletter focusing on the AI topics that leaders need to know about. In this AI age, it’s not your knowledge of AI tools that sets you apart, but how well you can integrate them into the context of your business.
This requires focusing on your people and helping them through the change, above any AI product you can buy.
Featuring
Three Things That Matter Most
In Case You Missed It
Tools, Podcasts, Products or Toys We’re Currently Playing With

IBM Triples Entry Level Hires | Would You Be Happy Without AI? 40% Say Yes | Is Gen AI Reducing or Increasing Work?
IBM Triples Entry Level Hires
In amongst all the headlines of organisations scrapping entry-level jobs due to AI, IBM have announced they are going the other way and tripling entry-level hires. “And yes, it's for all these jobs that we're being told AI can do,” announced IBM's chief human resources officer, Nickle LaMoreaux, at an AI conference this week.
This is a significant moment. AI is being treated by some as a magnifier of their existing team's work as it augments and enhances productivity, and by others as a tool for cost reduction (i.e. I can get rid of some people and AI will replace them).
An important thing to observe is that these entry-level roles are not the same jobs as they were 2-3 years ago. IBM's announcement clearly acknowledges that entry-level software developers don't need to spend as much time coding, so the job description now expects them to spend more time with customers.
Takeaways for Leaders
There’s a clear longer-term rationale at play here that leaders should observe: cutting entry-level roles has a long-term impact. Staff turnover isn't going to magically go away, and without hiring and training from entry level, organisations will be left to hire from other companies, an expensive process in both recruitment costs and, potentially, cultural impact.
Some organisations seem to be treating the advent of AI as an opportunity to save money by cutting entry-level roles. Other companies are thinking about the longer-term impact: without entry-level people hired now, who becomes the middle managers and future leaders of your organisation?
In much the same way that the battleground between the huge tech companies is being played out today in AI investment, every organisation should be planning its recruitment strategy for the age of AI right now (and for longer than the next six to nine months!).
IBM stands out as a company that has seen, and thrived through, many previous tech transformations over its 100+ year history. I wouldn't bet against IBM being right about a trend yet again.
Would You Be Happy Without AI?
What Section’s Report Reveals
Section, an AI implementation and training firm, just surveyed 5,000 knowledge workers from organisations with 1,000+ employees and the results should make every leader pause. They found that three years after ChatGPT launched, 70% of workers are what Section calls "AI experimenters", using AI for basic tasks such as summarising meeting notes and rewriting emails. Barely anyone has progressed beyond basic prompting in the last six months.
Other headlines include:
- Only 3% have moved past experimentation to qualify as "practitioners" or "experts"
- 25% say they save literally no time with AI
- 40% would be fine never using it again
Departmentally, 54% of engineers don't use AI for code, 87% of product managers don't use it for prototypes, and 56% of marketers don't use it for first drafts.
The Leadership Blind Spot
There is a large gap between executive direction and successful on-the-ground implementation. 75% of executives are excited about AI and believe deployments are succeeding. 63% have AI policies, 50% provide tools, 44% offer training.
Most companies seem to be focusing on AI access, safety, and basic prompting. They give people an LLM, explain the guardrails, maybe teach them to write a good prompt. The survey found that employees with training score 1.5x better on proficiency. But "better" still means 40/100. They're still experimenters, not practitioners. Why? Because training doesn’t focus on transforming actual workflows. Prompts and guardrails are the foundation, but they don't close the gap between usage and value.
Takeaways For Leaders
Many companies are making the right directional investments in AI. But investment in access and policies isn't translating into individual capability and value creation. The gap isn't technological. It's organisational. And until leaders recognise that most of their workforce is stuck in the "experimenter" phase - doing basic tasks that generate minimal ROI - they'll keep wondering why their AI investments aren't paying off.
Leaders need to move past measuring adoption and start measuring proficiency. The problem is use-case discovery and intelligent application of AI, not prompt engineering. For greater adoption, people need concrete examples of AI transforming their specific workflows. The question isn't "Do we have AI?" It's "Can our people actually use it to transform their work?"
Section AI Proficiency Report Can Be Found Here
New Research: Gen AI is Intensifying Work, Not Reducing It

Much of the conversation around AI at work focuses on adoption. Leaders worry about getting employees to use it. The assumption is simple: if AI reduces effort, work should get easier.
An in-progress qualitative field study by scholars at UC Berkeley’s Haas School of Business, conducted in a 200-person tech company, is showing that generative AI intensifies work rather than reducing it. Researchers observed how generative AI changed everyday work. AI use was voluntary, not mandated. Yet employees consistently worked faster, took on broader responsibilities, multitasked more, and extended work into more hours of the day.
The authors argue that without deliberate organisational norms, AI does not compress work; it causes workload creep. Despite our ambitions of freeing up time, AI is actually expanding job scopes, increasing multitasking, and eroding natural work boundaries. The result is a short-term productivity surge paired with longer hours, higher cognitive load, and burnout risk, even when AI use is voluntary and intrinsically motivating.
Takeaways For Leaders
Asking employees to self-regulate AI use is ineffective. Productivity gains are real, but they are not neutral: they change expectations. Short-term output gains can mask long-term burnout and decision-quality risks. Employees may feel productive, but that doesn’t mean work is becoming lighter. As job scopes get rewritten to take AI into account, leaders need to be aware of this research and avoid the pitfalls of burnout.
🔥 In Case You Missed It…
OpenAI’s GPT-5.3 Codex Spark, Anthropic’s Claude Opus 4.6, and ByteDance’s Seedance video generation models all launched in the past week. Each is a substantial improvement on previous offerings.
OpenAI’s GPT-5.3 Codex Spark is the latest step in the Codex line, optimized for real-time coding workflows.
Anthropic’s Claude Opus 4.6 pushes the frontier of large-context reasoning and productivity workflows. It integrates more advanced agentic capabilities, allowing complex multi-step tasks and deep reasoning across large documents and codebases.
ByteDance’s Seedance 2.0 represents a leap in video generation. Unlike earlier models that produced short clips with limited control, Seedance 2.0 accepts text, images, audio, and video inputs and can generate cinematic-quality clips with synchronized audio and motion effects, supporting directors’ control over lighting and camera movements. Its quality and versatility have already sparked viral attention and industry debate.
These releases show a trend that will define 2026’s AI landscape: models are no longer just bigger. They are becoming specialised for real-world applications such as coding and media creation.
🏆 Tools, Podcasts, Products Or Toys We’re Playing With This Week
Claude Code, Co-Work and Chrome Browser Extension
The battle between Anthropic's Claude and OpenAI's ChatGPT is not just about the ads and the X.com slurs. It is being played out in the increasingly impressive models that both are releasing.
If you have found yourself becoming wedded to ChatGPT, trying Claude is a big eye-opener. In the hands of even a person with cursory technical ability and a mind to follow instructions, Claude will rebuild websites and research and produce documents in astonishing detail. In the hands of an experienced builder, Claude’s latest model will now write code at a level that astonishes even the best engineers.
Claude Code and Co-Work are both available under a paid subscription which, compared to ChatGPT, is not cheap. But with rumours of OpenAI planning a $2,000/month and even a $20,000/month subscription for future products, getting your work augmented and enhanced by this current version of Claude may feel like a very good investment.
Did You Know?
Wi-Fi was accidentally invented while searching for exploding black holes.
Till next time,
