Welcome

Toot, toot!
In this issue of PR Futurist, we share a BBC investigation into why people smuggle AI into work, a new AI ethics charter for internal comms, Perplexity's free 'deep research' as an alternative to OpenAI's at $200 per month, and the new European AI assistant app from Mistral.
It's also time to blow our own trumpet, as Stuart has been named one of the top 100 most influential PR tech leaders in the world. If you want to hear him speak about AI for PR and communications, there's still time to book a place at the PRCA's Matchmaker NewBizFest25, where he's speaking about 'AI – A Game Changer for New Biz'. Or you could book your own in-house workshop for your team. Maybe at your lunch and learn?
We also cover a law firm called Dumb and Dumber (not really, but given their lack of AI knowledge, it could be) and a case study from the Australian government that purports to be a Copilot fail, but actually proves its value. There's also a new news channel created entirely by AI, and a look at what the BBC is up to with AI.
Listen to PR Futurist
News
🙏 Please help us improve PR Futurist
Please will you take 1–2 minutes to help us improve PR Futurist? We already know readers like it as we get open rates of 45–50% (the record is 56%) compared to an industry average of 29%. We want to see if we can make it even more useful and interesting for people. Thanks! 🙏
Tim Bailey

Stuart Bruce named as one of the top 100 most influential PR tech leaders in the world
Purposeful Relations co-founder Stuart Bruce has been named as one of the most influential PR tech leaders in the world in a list of the top innovators who are “leading the charge in PR technology… and redefining the industry in 2025.”
The annual Propel 100: The Most Influential PR Tech Leaders in the World 2025 recognises the professionals who are driving innovation, transforming PR with technology, and redefining how communications teams operate in a digital-first world.
Karen Marshall

Speaking at PRCA Matchmaker NewBizFest25
Purposeful Relations’ co-founder Stuart Bruce will present a session on ‘AI – A Game Changer for New Biz’ at the PRCA's first Matchmaker NewBizFest25 conference on 6 March.
"A fast-paced, insight-packed day designed to supercharge agency growth and new business success" is what's promised at the PRCA Matchmaker NewBizFest25.
There is still time to book your place.
Karen Marshall
AI

BBC explores why people smuggle AI into work
Our Global CommTech Report revealed 66% of PR professionals are using shadow AI: personal AI tools used at work without permission. The BBC explores why they are doing it and what the implications are for employers.
The article highlights why bans on AI massively increase risks for companies, as they force employees into using potentially risky tools rather than safer and more effective alternatives. This is why it is critical to have an effective AI policy, and why a generic 'one size fits all' approach won't work.
AI-generated image.
Stuart Bruce
UK government AI playbook
The UK government has just launched its AI Playbook to provide departments and public sector organisations with accessible technical guidance on the safe and effective use of AI. We are incorporating it into our work with public sector clients and analysing how it can apply to the private sector. I summarised it on LinkedIn. Take a look and add your comments.
Stuart Bruce

Ethics charter for internal communications
The Institute of Internal Communications (IoIC) has published its AI Ethics Charter for the ethical use of AI "to sustain professional standards in internal communication as AI adoption goes mainstream."
There's nothing radically different about the charter compared to those already published by other professional and trade bodies.
Like all sets of principles, it doesn't tell you practically what to do, so it is no substitute for a robust AI policy and training. It does, however, provide an excellent checklist for your approach to AI.
The section about transparency is interesting, as last year the IoIC published research that said a third of employees "would not at all trust a message from their CEO that was developed with AI." I made the point by asking how many would trust a message from their CEO that was developed by IC professionals instead of the CEO.
The principles don't say IC professionals need to disclose the use of AI in that case. In fact, the "Be trustworthy and transparent" section doesn't say much about transparency at all; it's mainly about the importance of human oversight and fact-checking.
It implies that it's not always necessary to be transparent about the use of AI, as it says "When using synthetic media in the course of our work, we will always be able to explain why we chose to do so."
The other notable section is on sustainability, where it says: "the carbon emissions of all AI-generated content will be measured and monitored." This is a perfect example of principles not telling you how to do it. I'd seriously question if this is even possible in any meaningful or practical way.
I'd be interested to know if any of the people on the IoIC task force that created the principles can explain how comms professionals are meant to measure and monitor the carbon emissions of AI-generated content. And why just "AI-generated content" and not all the other uses of AI for comms, which are actually far bigger than content creation?
You can download a copy of the IoIC Charter without needing to be a member.
Thanks to Dr Kevin Ruck of PR Academy for sharing this.
Stuart Bruce
Get in touch
Getting the most from Microsoft Copilot
If you are piloting or using Microsoft Copilot, let Purposeful Relations help you maximise the benefits and minimise the risks. We are specialists in AI implementation for communications teams, with extensive client experience of Copilot. Contact us for a conversation so we can share our Copilot case studies.
CommTech tools

Perplexity shows the best things in AI can be free
Everyone in the AI geek world is talking about 'Deep Research'. This is where you ask an AI assistant to search the web, look up sources and then create a detailed report about what it has found. The sort of thing that would take you several hours or even days.
Google Gemini offers it for $20 a month. OpenAI's $20-a-month tier gives you only 10 queries a month.
Now Perplexity will do your deep research for free. Even free users can run Deep Research "a few times a day". Paid users ($20 a month) can run unlimited Deep Research queries.
I've shared a video on LinkedIn showing how you can create an in-depth profile of a journalist in less than three minutes. The example is Jonathan Calvert, editor of the Insight investigative team at The Sunday Times.
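If you'd rather script this kind of research than use the app, here is a minimal sketch, assuming Perplexity exposes deep research through its OpenAI-compatible chat completions endpoint. The endpoint, model identifier and environment variable name are assumptions for illustration, not details confirmed by this update, which covers the consumer app.

```python
# A minimal sketch of scripting a deep-research query against Perplexity's
# OpenAI-compatible chat completions API. The endpoint, model name and key
# variable are assumptions for illustration, not confirmed product details.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint
API_KEY = os.environ["PERPLEXITY_API_KEY"]  # hypothetical key name

payload = {
    "model": "sonar-deep-research",  # assumed model identifier
    "messages": [
        {
            "role": "user",
            "content": (
                "Research recent reporting by a named journalist and produce "
                "a sourced profile: beat, notable investigations, and any "
                "context relevant to a PR pitch."
            ),
        }
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=600,  # deep research can take minutes, not seconds
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```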
Thanks to Jesper Anderson for spotting this update to Perplexity.
Stuart Bruce

European AI company Mistral launches iOS and Android AI assistant apps
Mistral, the European AI large language model company, isn't as well known as it should be. Hopefully, that might start to change now that its Le Chat AI assistant has an improved web interface and new mobile apps for Android and iOS. It's a viable alternative to better-known AI assistants like ChatGPT and Copilot.
Even its free tier offers "limited access to Mistral's highest-performing models". The higher-paid tiers protect your data by automatically excluding it from being used to train the model.
If you've not tried Mistral, it's worth giving Le Chat a spin on the web or by downloading one of the apps.
Stuart Bruce
Case studies
Dumb and Dumber - law firm restricts AI because it finds 'significant' staff use
Okay, maybe the law firm isn't really called Dumb and Dumber. But if I were a client of international commercial law firm Hill Dickinson, I'd be more than a little concerned. The chief technology officer of Hill Dickinson has just warned staff about the use of AI tools. So far, so good.
What's alarming is that it's a commercial law firm and it has taken until now to notice that hundreds of its staff are using unauthorised AI tools thousands of times a day! This apparently includes DeepSeek, the new Chinese AI that stores data on servers in China and has T&Cs that clearly state you don't have any privacy protection.
"The firm said much of the usage was not in line with its AI policy, and going forward the firm would only allow staff to access the tools via a request process."
Seriously? What sort of half-baked AI policy did it have that staff already couldn't access AI tools via a request process?
Our research shows that just 40% of PR professionals have an AI policy in their workplace. This case highlights that even those that do might not have an effective one.
Last year Sir Geoffrey Vos, Master of the Rolls and the head of civil justice in England and Wales, said it was unethical for lawyers NOT to use AI. Using Sir Geoffrey Vos's rationale, the same applies to PR and communications professionals.

How a 'failed' Copilot trial actually proves how effective Copilot is
Where to start with this article about a Microsoft 365 Copilot trial for the Australian government Treasury department? The Register, in the typical snarky style we love it for, implies the trial was a failure as "government staff rated Microsoft's AI less useful than expected."
Now let's break it down and see what it really means.
It was only a 14-week trial, so not long enough for Copilot to become a daily habit and be used extensively.
The trial concluded Copilot was useful, just "less useful than they hoped it would be, as it was applicable to fewer workloads than they hoped it would be." Well, as the trial was with volunteers, that's not surprising. They are likely to have been early adopters who've succumbed to the AI hype, or who have been using personal AI and expect the same at work.
The article makes no reference to whether Copilot Studio was used to tackle specific "workloads" where people had hoped to use it. Creating custom AI agents for specific 'workloads' can make a significant difference to how useful AI is.
The most astounding sentence is "Treasury thinks it probably set unrealistically high expectations before the trial, and noted that participants often suggested extra training would be valuable."
No **** Sherlock! Give people an advanced tool without a framework for how it can be applied, and then be surprised they can't use it without adequate training.
After all this, "the report finds that if Copilot saves 13 minutes a week for mid-level workers, it will pay for itself." Most studies show two to four hours saved per week. But if Copilot pays for itself in just 13 minutes, and the real time saving is even higher, the business case becomes even more powerful.
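To see why 13 minutes is such a low bar, here is the break-even arithmetic as a minimal sketch. The licence price and staff cost are illustrative assumptions, not figures from the Treasury report.

```python
# Back-of-envelope break-even for a Copilot licence: how many minutes per
# week must it save before it pays for itself? All inputs are illustrative
# assumptions, not figures from the Treasury report.
LICENCE_COST_PER_MONTH = 30.0   # assumed licence price per user (USD)
LOADED_HOURLY_COST = 50.0       # assumed fully loaded staff cost per hour (USD)
WEEKS_PER_MONTH = 52 / 12

weekly_licence_cost = LICENCE_COST_PER_MONTH / WEEKS_PER_MONTH
breakeven_minutes = weekly_licence_cost / LOADED_HOURLY_COST * 60

print(f"Break-even: {breakeven_minutes:.0f} minutes saved per week")
# Roughly 8 minutes with these inputs, so a 13-minute saving already clears
# the bar, and two to four hours a week clears it many times over.
```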
It sounds like a trial that was set up to fail. Especially when you have statements like the Treasury saying an unanticipated benefit of Copilot was "to contribute to accessibility and inclusion for neurodivergent and part-time staff, or those experiencing medical conditions that require time off work."
This is a known benefit of Copilot and AI assistants. How can it be a proper trial if a known benefit is unknown to the people running the trial?
It goes on to say: "Treasury’s learnings from the pilot include more careful selection of staff who use Copilot, the need for more consideration of necessary training on how to use AI."
No **** Sherlock.
The big takeaway from the Australian government Treasury report is that despite it being a badly run trial, it still found Copilot provided some benefits and paid for itself.
Just imagine what a well-run trial would have resulted in.
Wait, you don't have to imagine, as there are hundreds of case studies of exactly that: commercial companies and public sector bodies that trialled it (properly) with 300 people and are now up to 2,000 users because the results were so positive.
Hat-tip to Alan Morrison for spotting this and asking me what I thought.
Stuart Bruce

How the BBC is using generative AI
The BBC's use of AI is guided by three principles:
- always act in the best interests of the public
- always prioritise talent and creativity
- always be open and transparent with audiences when we use AI to support content-making
Like many principles, they are very motherhood and apple pie, but what do they mean in practice?
Some of the uses the BBC has successfully trialled include:
- Adding subtitles to some programmes
- Generating short-form animated sequences (instead of static images) to promote programmes on BBC Sounds
- Translation to publish more quickly across the world
The most interesting use is for BBC Sport. BBC local radio does hundreds of live commentaries of English Football League matches. It is using AI tools to quickly create transcripts of the commentaries and highlight key match moments, like goals or red cards. After being checked by journalists, these are published as live text commentaries on the BBC Sport app.
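As a rough illustration of that human-in-the-loop workflow, here is a minimal sketch of the transcribe-flag-review steps. The sample transcript and keyword rules are invented for illustration; the BBC has not published its implementation.

```python
# A minimal sketch of the transcribe-flag-review pipeline described above.
# The sample transcript and keyword rules are invented for illustration;
# this is not the BBC's actual implementation.
import re

# In the real workflow a speech-to-text model produces this; here it's a stub.
transcript_segments = [
    ("14:03", "Corner swung in and it's a goal for the home side!"),
    ("14:27", "A late tackle there, and the referee shows a red card."),
    ("14:41", "Play resumes in midfield with neither side on top."),
]

KEY_MOMENTS = {
    "goal": re.compile(r"\bgoal\b", re.IGNORECASE),
    "red card": re.compile(r"\bred card\b", re.IGNORECASE),
}

def flag_key_moments(segments):
    """Return candidate highlights for a journalist to check before publishing."""
    flagged = []
    for timestamp, text in segments:
        for label, pattern in KEY_MOMENTS.items():
            if pattern.search(text):
                flagged.append({"time": timestamp, "event": label, "text": text})
    return flagged

# Journalists review this list before it appears as live text on the app.
for moment in flag_key_moments(transcript_segments):
    print(f"[{moment['time']}] {moment['event'].upper()}: {moment['text']}")
```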
It's also continuing to develop its own Large Language Models – trained on the BBC’s own content and data, so outputs reflect its values and tone.
AI is working at the BBC because it's investing heavily in training staff in AI tools like Microsoft Copilot, Adobe Firefly and GitHub Copilot.
Stuart Bruce

Channel 1 is a new news channel where stories are scripted, edited and presented by AI
While the BBC is taking a cautious approach to AI, Channel 1 is being bolder. It is a new rolling news channel with a difference: its stories are scripted, edited, and presented by AI. The Guardian visited its creators in Los Angeles to learn more... and audition for a role.
Stuart Bruce
A TV ad with no actors, locations or sets
Take a look at this video that was created using Google DeepMind's Veo 2 AI.
Wow or what?
Stuart Bruce
Shorts

How AI influencers are targeting German elections
They look appealing and spread right-wing disinformation: AI-generated influencers are posting political content on social media. Who is behind them, and what exactly is their goal?
New research shows CEOs embrace AI, but big barriers remain
The research by Cisco reveals 97% of CEOs plan AI integration, but only 1.7% feel fully prepared. The biggest barrier is a lack of skills and knowledge. The main way CEOs are getting ready for AI is by improving AI education.
Guardian Media Group signs strategic partnership with OpenAI
The bluster between publishers and big tech companies is all about positioning. Ultimately, deals will be done; it's about what they get out of the deal. The Guardian Media Group has announced a strategic partnership with OpenAI. It means attributed Guardian content will appear in ChatGPT, and Guardian staff will be using ChatGPT Enterprise.
Reach reporters warned not to link to unauthorised sites
Reporters at Reach have been warned they could be sacked if they add links to third-party commercial websites in articles without permission.
HMRC develops algorithm to crack down on misuse of government logo
Internal software engineers working for HMRC have developed a machine learning tool which identifies and helps disable websites charging people for services offered free via the UK government’s own site.
Thanks to Matt Rogerson, Director of Global Public Policy & Platform Strategy at the FT for spotting this story.
European AI allies unveil LLM alternative to Big Tech, DeepSeek
As China's DeepSeek threatens to dismantle Silicon Valley's AI monopoly, OpenEuroLLM has launched as an alternative to tech's global order.