Welcome

Late because of a public holiday, but hopefully just as packed with juicy news as usual.
The FT covers new research that confirms just how much more persuasive AI can be compared to humans. There's also some revealing data showing AI isn't quite as damaging to the environment as some claim. And it will become even more sustainable as nuclear-powered AI comes online - Rolls-Royce claims its SMR capability could make it the UK's largest firm. Disclaimer: I worked with Rolls-Royce to lobby for the government to support it building its SMRs.
Shocking new research says nearly seven in 10 employees are having to pay for AI tools themselves. If true, this is not only dangerous but means employers are missing out on the biggest benefit of AI, which is far better used across teams than as a personal tool.
There's a fascinating case study showing how the UK's largest publisher uses AI to repurpose content to fit the house style of its 120 brands including The Mirror, The Express and OK! Magazine.
The Law Commission has published a thought-provoking paper, Artificial Intelligence and the Law, which gives corporate affairs professionals plenty to think about. A paper from the University of Dundee shows how universities, far from discouraging AI use, are encouraging academics and students to use AI responsibly.
Last but not least, an article by Microsoft's public sector head, who shares how Microsoft and Copilot are being used to transform and improve public sector services in the UK.
Image courtesy of Reach plc.
AI
Is AI more persuasive than a human? 💳
New research shows AI models from OpenAI, Meta, xAI and Alibaba can make people change their political views after less than 10 minutes of conversation. This isn't the first study that shows AI is far more persuasive than humans.
Two of the most interesting aspects of the research are that it tested multiple AI models, and that the secret of their success was a customised dataset of more than 50,000 conversations on divisive political topics, such as NHS funding or asylum system reform.
The FT frames it almost entirely as a risk that politics can be manipulated, but doesn't consider the flip side of how it could be used to combat disinformation and misinformation that's already out there.
There's too often a mistaken belief that politicians are far better at communications than they really are. The vast majority of mainstream backbench MPs and local councillors have little or no professional communications training. They struggle to combat disinformation and misinformation, and can even reinforce myths through misguided attempts to disprove them.
If AI is more persuasive, then they could use it to improve how they listen to and communicate with local residents. They can listen at scale and analyse the results. They can craft better responses to the concerns of local people. If an AI chatbot is better at convincing a voter not to believe disinformation or misinformation, is that really a bad thing?

AI is a sustainability horror show. Or is it?
When we run AI workshops for clients, one of the most frequent questions and concerns is about sustainability. But is AI really as bad as some people want us to believe? Until now, the lack of credible data made that an impossible question to answer.
Google has just published data that will surprise those complaining about AI's energy use. Apparently, a typical Gemini prompt uses about as much energy as 8-10 seconds of streaming Netflix. As an example of how technology constantly advances, it also uses the same energy as a single Google search did in 2008.
Professor Ethan Mollick has shared Google's data but also highlights it is in line with data from other sources. Data from Google (PDF), OpenAI and independent sources show a standard prompt uses roughly 0.0003 kWh of energy.
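The Netflix comparison is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, assuming the ~0.0003 kWh per-prompt figure quoted above and a commonly cited estimate of roughly 0.08 kWh per hour of HD streaming (the streaming figure is my assumption, not from the article):

```python
# Back-of-envelope check of the "seconds of Netflix" comparison.
PROMPT_KWH = 0.0003            # energy per standard AI prompt (figure above)
STREAMING_KWH_PER_HOUR = 0.08  # assumed energy per hour of HD streaming

seconds_of_streaming = PROMPT_KWH / STREAMING_KWH_PER_HOUR * 3600
print(f"One prompt is roughly {seconds_of_streaming:.0f} seconds of streaming")
```

That lands at around 13-14 seconds - the same ballpark as Google's 8-10 second claim, with the gap down to which per-prompt and streaming estimates you pick.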
He acknowledges water is more complicated as it varies more - ranging from 0.25ml to 5ml+ per prompt. But without comparable water use data it's impossible to make valid data-driven decisions about use, especially with such a big variation. Do data centres use more water than UK water companies lose through leaks in infrastructure they've failed to maintain or fix? How does it compare to water used to water lawns, clean roads, wash windows and other non-essential uses?
We now appear to know enough to say sustainability is an important issue to consider about AI. But that statement also applies to everything else we do. It's also a problem that is rapidly being solved. SMRs (mini nuclear power stations) powering data centres are just one way - see the story on Rolls-Royce's SMRs.
AI-generated image of a data centre fuelled by a coal-fired power station. Interestingly, Microsoft 365 Copilot rejected various prompts as violating its safety protocols, so this one is from Google Gemini.

Ex-OpenAI executive appointed as UK prime minister’s new AI adviser
Jade Leung, the chief technology officer of the AI Security Institute, has been appointed as Keir Starmer's new AI adviser. More significant than her current role is her former role as governance lead at ChatGPT company OpenAI. Her appointment has caused some controversy, with critics questioning whether it is another example of the UK government being too close to big US tech companies.
It isn't a full-time position as she will split her time between the AI Security Institute and advising both Keir Starmer as prime minister and Peter Kyle in his role as secretary of state for science, innovation, and technology.

Nuclear-powered AI could make Rolls-Royce UK's biggest firm
AI could make Rolls-Royce the UK's biggest firm, says its CEO. Mini-nuclear power stations (SMRs) are probably the best way to provide the energy needed by AI data centres. A few years ago, I worked with Rolls-Royce lobbying for SMRs. AI wasn't one of our examples of what a single SMR could power.
Since I curated this story Rolls-Royce has announced it has signed a deal with Swedish energy company Vattenfall to provide it with SMRs.
The visual mock-ups of the SMRs might look AI-generated, but they are real mock-ups - I was involved in commissioning them, long before the AI advances of recent years.
New report on AI and local government
As an AI aficionado and political government geek I'm in nerd heaven reading this report from the Tony Blair Institute for Global Change. It's about how Shanghai is using AI to design the city of the future, but more interestingly it links to a fascinating paper - Governing in the Age of AI: Reimagining Local Government. Lots of food for thought and not just for local government and the public sector.
Law Commission publishes Artificial Intelligence and the Law discussion paper
The Law Commission has just published a discussion paper on Artificial Intelligence and the Law. It is a high-level overview aimed at raising awareness and discussion of the legal challenges posed by AI, but doesn't go as far as proposing reforms.
Here are five legal challenges corporate affairs teams should be tracking:
Opacity, the "black box" problem - AI systems often produce outputs without being transparent about how they reach them, which can lead to trust and accountability issues.
Autonomy and adaptiveness - AI systems can act independently, sometimes unpredictably, which raises questions about liability and governance.
Training and data use - Questions over copyright infringement and bias in training datasets as well as ownership of AI generated content.
Disinformation and deepfakes - AI can be used to create "convincing false narratives", impacting reputation and public trust.
Legal personality for AI - The paper explores whether AI systems should be granted legal personality.
All of these are issues that corporate affairs professionals need to address urgently by ensuring they have robust AI policies and governance, and by reviewing and improving their corporate communications strategies and plans.
What can comms learn from AI use in higher education?
AI's impact on education is profound. How do students use it? How do teachers use it? What is best practice?
This updated guidance from the University of Dundee takes a mature approach by acknowledging the importance of academic integrity and looking at an empowering rather than restrictive approach to AI. Many of its recommendations and principles transfer easily to corporate communications:
- Universities cannot realistically ban these tools; instead, they must equip staff and students with skills to use them ethically and effectively.
- AI detection tools don't work, and even if they did, would it matter? As the guidance says: "We need to consider what we are actually trying to detect as AI-assisted writing is becoming the norm."
- Staff and students should disclose if they use GenAI.
- Students cannot be compelled to use GenAI tools requiring personal data.
- Ethical concerns include: data privacy and exploitation, copyright infringement, misinformation and hallucinations, cognitive bias and lack of inclusivity, and environmental impact.
- Positive ideas for how AI can be used, such as possibility engine, Socratic opponent, collaboration coach, co-designer, exploratorium and storyteller.
- Staff are encouraged to experiment with GenAI to enhance teaching content and learning design.
Disclaimer - M365 Copilot was used to help summarise this paper.
Nearly seven in 10 employees pay for their own AI tool at work
Another day, another report on AI use. How much more evidence does it take before comms professionals start to see the necessity of getting to grips with AI?
This one finds that four in five employees are using AI in their day-to-day roles, with 36% of employees feeling that AI use is essential to their productivity. Shockingly, 66% of respondents claim to have to pay for AI workplace tools out of their own pockets! This is alarming as it creates serious risks around data security, as well as meaning most companies miss out on the biggest benefit of AI, which is integrating it within teams to use their own data as a knowledge base.
Ragan's first recommendation is rather bizarre and dangerous - allow IT to write the policies. It does caveat this by saying "comms to shape them for an employee audience", but it's still terrible advice.
IT is one of the worst places to start when creating an AI policy, as AI is primarily about people and culture, not technology. Instead of letting IT lead, it needs a cross-functional lead such as corporate communications or even legal. It needs to be collaborative, with corporate affairs involving IT, HR, operations, legal, finance and every part of the organisation. The AI policy needs to cover social licence and how AI's use will impact trust, reputation and relationships with all stakeholders, internal and external.
Ragan does go on to make two useful recommendations - creating an accessible guide on AI use at work, and involving employees in both creating policy and deciding how AI can be used.
If you need help with any of these things let us know.
Case studies
How newsrooms are using AI
An illuminating round-up of how AI is being used in newsrooms around the world. An important factor is using internal data and curated knowledge sources to reduce hallucinations and improve reliability. This is something we advise clients on, as AI's greatest power can be unlocking existing potential. Technically it is known as retrieval-augmented generation, or RAG.
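The RAG pattern is simpler than the jargon suggests: retrieve the most relevant passages from a curated knowledge base, then make the model answer from those passages rather than its own memory. A minimal sketch, assuming a toy keyword-overlap retriever standing in for the vector search real newsrooms use, and with the model call itself omitted (all names here are illustrative, not any newsroom's actual system):

```python
import re

# Toy curated knowledge base - real systems hold thousands of vetted documents.
KNOWLEDGE_BASE = [
    "Reach plc publishes more than 120 brands including The Mirror.",
    "RAG grounds model answers in retrieved source documents.",
    "Curated internal data reduces hallucinations and improves reliability.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared-word count with the query (toy retriever)."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Assemble a grounded prompt; the actual model call is omitted."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return f"Answer using ONLY these sources:\n{context}\n\nQuestion: {query}"

query = "How does RAG reduce hallucinations?"
print(build_prompt(query, retrieve(query, KNOWLEDGE_BASE)))
```

The instruction to answer "ONLY" from the retrieved sources is what curbs hallucination: the model is steered towards the organisation's own vetted content instead of improvising.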

How Reach plc's Guten AI bot helps it deliver impactful journalism
From contacts I already knew quite a bit about Reach plc's Guten, so I was fascinated to read this in-depth article published on the Amazon AWS Blogs site. Reach is the largest commercial national and regional news publisher in the United Kingdom and Ireland, with more than 120 well-known brands such as The Mirror, The Express, Daily Record, OK! Magazine, Manchester Evening News and Daily Star.
It built its Guten AI system primarily to help it repurpose content at scale. Last year it supported 1.8 billion page views, and currently it supports 25% of Reach's published content. A simplified explanation of its main purpose is it takes content, from either Reach publications or external sources such as newswires, and repurposes it to the house style of each individual brand or publication.
It's a great example of how AI is impacting journalism, publishing and corporate communications. Corporate communications teams can use a similar approach to repurpose content for different channels, formats, audiences, markets and countries. It enables highly customised content that maintains a consistent brand voice.
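The core of the repurposing approach can be sketched in a few lines: one source article, one prompt template per house style. This is purely illustrative - not Reach's actual Guten pipeline - and the brand style descriptions are my own placeholders:

```python
# Illustrative sketch only: repurposing one source article to each
# brand's house style via a per-brand prompt template.
HOUSE_STYLES = {
    "The Mirror": "punchy tabloid tone, short sentences",
    "OK! Magazine": "celebrity-focused, conversational tone",
}

def repurpose_prompt(article: str, brand: str) -> str:
    """Build a restyling prompt for the given brand; facts must not change."""
    style = HOUSE_STYLES[brand]
    return (f"Rewrite the article below in the house style of {brand} "
            f"({style}). Keep every fact unchanged.\n\n{article}")

# One prompt per brand, ready to send to whichever model is in use.
prompts = {brand: repurpose_prompt("Source article text...", brand)
           for brand in HOUSE_STYLES}
```

The same template-per-audience idea carries straight over to corporate communications: swap brands for channels, formats or markets.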

How AI is transforming the UK's public sector
Great interview on Microsoft's Stories website with Amanda Sleight, Microsoft UK’s new head of public sector. She explains how Microsoft AI technologies are helping UK government improve public sector services and reset the relationship between the state and the people it serves.
We've worked with, and are working with, several public sector bodies on helping them to understand and implement AI. Copilot is usually the de facto choice as it offers the best privacy, security and integration with existing data and processes.
Research and reports
PoliMonitor Bluesky Briefing 2025
Most MPs are still on X/Twitter, but now 40% are embracing Bluesky and some have even abandoned X/Twitter entirely. There is a strong political divide. Can you guess what it is? You should be able to, given that one of the most frequent criticisms of X is that it has become a far-right sewer full of racism and misogyny. You have to register to download the report.