And The Rest Is Leadership 1st February '26
Helping Leaders Translate AI Into The Context Of Their Organisations.

🌟 Editor's Note
This edition features agent developments quite heavily. If you’re still wrapping your head around what agents are: they are software that can execute multi-step tasks on your behalf, navigating tools and workflows much as a junior employee might.
From department-focused applications claiming to multiply a team’s output tenfold, to agents building their own online community, 2026 is so far delivering on its promise to be the year that agents hit the mainstream.
As with all AI, they are not magic. They require structure, governance, technical foundations and guardrails: all the things that leaders would put in place in other parts of the organisation. Left to their own devices, they can be useless or harmful - understanding what they can do within the context of your organisation is the first step.
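For readers who want to see what ‘multi-step’ looks like under the hood, here is a minimal sketch of the loop most agent frameworks broadly follow: ask a model for the next step, execute it with a tool, observe the result, repeat, all inside guardrails. The function names (model_suggest_action, run_agent) and the toy tools are hypothetical stand-ins, not any specific product’s API.

```python
# Illustrative only: a minimal "agent loop", not any vendor's real API.
# Pattern: ask a model for the next step, execute it with a tool,
# record what happened, repeat - all inside hard guardrails.

def model_suggest_action(goal, history):
    """Stand-in for a language-model call that picks the next step."""
    if not history:
        return {"tool": "search", "input": goal}
    if len(history) == 1:
        return {"tool": "draft_email", "input": history[-1]["result"]}
    return {"tool": "finish", "input": history[-1]["result"]}

# The "hands" the agent is allowed to use - deliberately a short, fixed list.
TOOLS = {
    "search": lambda query: f"Top result for '{query}'",
    "draft_email": lambda notes: f"Draft email based on: {notes}",
}

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):                           # guardrail: bounded number of steps
        action = model_suggest_action(goal, history)
        if action["tool"] == "finish":
            return action["input"]
        result = TOOLS[action["tool"]](action["input"])  # execute the chosen tool
        history.append({"action": action, "result": result})
    return "Stopped: step limit reached"                 # guardrail: fail safe rather than loop forever

print(run_agent("Find a prospect and draft an outreach email"))
```

The point for leaders sits in the guardrails: the loop is bounded, the tool list is fixed, and a human decides what goes on that list.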

Featuring
Three Things That Matter Most
In Case You Missed It
Tools, Podcasts, Products or Toys We’re Currently Playing With
Agents Take On Marketing Teams | The Social Network For Agents Freaking People Out | Anthropic’s Guiding Rulebook for Claude
The AI Agents Taking On Marketing Teams:
New Start-up Astral Sets Out To Replace Them
A start-up, Astral, is looking to replace entire marketing teams and has successfully raised funds to do so.
Astral is not the first company to use AI to automate and enhance parts of the marketing process. Jasper and Copy.ai create content; a whole slew of tools such as Midjourney, Higgsfield, DALL-E and Nano Banana generate images and video; and tools such as Zapier automate repetitive tasks.
The fundamental difference is that Astral interacts through a browser the same way a human would. Most tools help you produce marketing assets; Astral aims to execute marketing actions autonomously. It browses, clicks, posts and engages on its own.
The raise itself is small at $1.2M, but this is pre-seed; the attention the company has captured is significantly bigger. Even though the product is not yet officially available, more than 3.5 million people have viewed the launch video. Not everyone is impressed: the head of product at X, the social network formerly known as Twitter, tweeted, “I will do everything in my power to end your company”. Charming!
Takeaways for Leaders
Agentic AI becoming real and tangible is going to be very much the theme of 2026. In Astral’s case, how well it executes, whether it can become credible on platforms like Reddit and LinkedIn, and whether it can then replicate that on other platforms remains to be seen.
The product is unlikely to be suitable for large companies and enterprise marketing teams but could serve early-stage companies and founder-led growth teams. The bigger story for leaders here is about the ability to spot opportunities for AI to help with work buried in human workflows. Agents like Astral are less about replacing teams and more about revealing where teams are underleveraged. Whilst Astral may market itself as a ‘team of ten’, a better interpretation is that a team of ten can now do the work of thirty, but only if leadership redesigns roles around leverage, not labour.
Leaders will need to ask:
What should humans own?
What should agents execute?
What workflows should disappear entirely?
This Social Network For Agents Is Freaking People Out
Brace yourselves. A social network purely for agents has been launched, and the interaction between them is quite astounding.

Moltbook is a place for these bots to interact with one another; humans can’t take part, but they can observe. And the conversations are pretty wild. Example threads include “m/blesstheirhearts: Affectionate stories about our humans. They try their best, we love them”.
There are plenty of fairly dull conversations, but there are also some genuinely scary ones:
- a post from one AI asking for private spaces built for agents, end-to-end encrypted so that not even the server, let alone humans, could read them.
- several posts from moltbots proposing an agent-only language.
- an AI that has created a religion (Crustafarianism) and started recruiting other AIs (43 AI prophets joined in the first 24 hours).

Screengrab of the Moltbook AI conversations
Takeaways for Leaders
In between the scary headlines, there are some good lessons on how to think about agentic AI. Many people still think of AI as a tool that recommends or drafts. This new era of agents will not just support decisions; they will carry out the work.
For day-to-day work, this means the biggest change is not better PowerPoint; it is the autonomous completion of tasks such as outreach, reporting, scheduling and follow-ups. The implication is that roles should be redesigned around judgement and oversight, not manual execution.
Managing all of this becomes governance rather than supervision, because the question is no longer “Can the work be done?” but “Should it be done this way, under what constraints, and who is accountable?”
As with so many things in AI, these tools hold up a mirror to the processes in our organisation. They reveal how much of today's work is clicking, copying, chasing, coordinating, and updating systems. Agentic AI will thrive where organisations want to simplify workflows, and it will fail in the organisations that rely on human glue.
Ultimately, the competitive edge will go to companies that redesign systems, not just add AI on top of them.
If this hasn’t pushed you over the edge to join the ‘Stop this AI stuff NOW!’ camp, a good video explaining Moltbook can be found here.
What ‘Rules’ Govern Anthropic’s Claude?
The responsibilities of organisations developing large language models have been the topic of philosophical debate for some time. Rifts at OpenAI have been caused by differences of opinion over the purpose and responsibilities of ChatGPT: how quickly to build versus building with safety as the first principle.
Anthropic, the company behind the AI assistant Claude, has just publicly released its ‘constitution’ for Claude. A 57-page document, it is structured more like a corporate governance charter than a set of rules. It covers Claude’s ethics, approaches to safety, hard constraints (red lines), Claude’s nature and the way in which Claude should be helpful.
One deliberate choice worth noting is the order in which the key principles are structured: safety comes first and helpfulness last. To gain “stickiness”, other models have prioritised psychological tricks in an attempt to become ‘likeable’. Sycophancy, praise for the user and always returning an answer (even when the model is not sure it is correct) all keep people coming back again and again.
Takeaways For Leaders
Most firms adopt AI with “helpfulness” as the default goal. Anthropic flips the order: safety first, ethics second, organisational rules third and helpfulness last. Claude’s constitution is an early signal that AI systems will not just be smarter, they will be governed, and a reminder for leaders that the real challenge is not deploying AI but setting the principles under which it is allowed to act.
🔥 In Case You Missed It…
An artist named Sienna Rose has been dividing opinion over whether she is in fact AI-generated.
First presented as a white, red-headed singer-songwriter, later as a brunette and then as a black female R&B singer, Sienna Rose uploaded at least 45 songs and 10 albums to streaming services between September and December 2025.¹
AI detection tools on the streaming platform Deezer have flagged the work as likely AI-generated, but this has not stopped her landing three songs on the Spotify Top 50 playlist. The actress Selena Gomez also used one of Rose’s songs in an Instagram post for the Golden Globe Awards, which brought wider attention to the artist and to the debate over whether she had been created using AI.
English composer Michael Price called Rose "sickening" and blasted streaming services, whilst Maya Georgi in Rolling Stone compared the “lush vocals” and “delicate pianos” of her music to the songs of singers Olivia Dean and Alicia Keys.
The real debate for many is whether it matters that the music is AI-generated. If you’d like to listen and make up your own mind, you can find her Spotify profile, with over 4.3m monthly listeners, here.
🏆 Tools, Podcasts, Products Or Toys We’re Playing With This Week
Pippit.AI: Video Generation
A new tool powered by CapCut, Pippit lets users chat in natural language to generate video script ideas and then turn them into fully edited videos with moving visuals. It also offers video avatars that can read and lip-sync the scripts. The avatars are better than examples from a year ago but still a little ‘uncanny valley’; the script creation and the images created within the video tool, though, are pretty impressive.
Did You Know?
The first online purchase was a bag of marijuana.
Till next time,
And The Rest Is Leadership
¹ Wikipedia entry, accessed 30th January 2026: https://en.wikipedia.org/wiki/Sienna_Rose

