For The Challengers. The Rebels. The Innovators.



Find out how Resonance combines the art of storytelling with innovation, behavioural science, data science and first-hand technology expertise to deliver business outcomes.


We have entered a new epoch, one defined by data and AI. As industries increasingly rely on data to differentiate themselves from competitors and drive decision making, the need for clear, precise, and impactful communication grows in step.

Resonance ensures messages are not only heard but resonate with the right audience, making it uniquely positioned to serve businesses in this transformative age.


"The intricacies of the data-driven landscape are written into the DNA of Resonance. We are built for the data economy."

Our Approach

We pride ourselves on taking a unique and innovative approach to every project we undertake. Our team of experts combines extensive knowledge and experience to deliver exceptional results for our clients.

Our Expertise

We have a deep understanding and mastery of our craft. Our team of experts brings a wealth of knowledge and experience to the table, allowing us to tackle any project with confidence and precision. 

We stay ahead of the curve by constantly staying updated on the latest industry trends and technologies. This enables us to provide innovative solutions that exceed expectations.

Our Vision

We envision a world where businesses thrive and excel, empowered by our innovative solutions and strategic approach. Our goal is to not only meet but exceed the expectations of our clients, helping them achieve their business objectives and stay ahead of the competition.




Every brand has its challenges; we're here to help.

Our specialists are at the forefront of cutting-edge advancements in artificial intelligence and cloud computing, staying ahead of the curve to deliver innovative solutions that drive tangible results for our clients.

With a proven track record of success, Resonance is your trusted partner for all your technical needs. 

Explore our solutions to your marketing challenges.

From AI to Cloud, Resonance proudly boasts a team of seasoned professionals with an unrivalled depth of technical expertise.

"In Tech PR we have a front-row seat to the changing technology landscape. From Generative AI to Quantum, it's our job to insert our clients' voices into the narrative."


Read our Case Studies

Resonance works with the challengers, the rebels and the innovators. Read about some of our work.


Aiven was looking to cement its open source credentials, so Resonance analysed terabytes of GitHub data to create a PR news story.


When Google and Yahoo announced upcoming changes to their email policies, Resonance leveraged the news to raise EasyDMARC's global presence.

Insights from Resonance


News and views from the Resonance team.


Data and insights have never been more crucial in a world plagued with uncertainty and complexity.

Resonance interviewed 100 analyst relations professionals and found that Analyst Relations (AR) has become a central force, bringing a strategic, competitive edge to businesses.


"In a world where the only constant is change, how do tech brands stay one step ahead of the market? That's where Resonance comes in."

Wavelength is our regular podcast bringing you influential voices in B2B technology, from journalists to marketing leaders.

Listen to our episode where we interviewed Seb Moss of DataCenterDynamics on all things data centre-related.



Resonance is a B2B tech PR, AR and content marketing consultancy that helps brands grow.

Our passion and energy come from the blurring of corporate reputation management and demand generation.

We are technology, business and communications experts made up of a team of computer scientists, journalists and marketing communication specialists.

"Resonance is a group of technology, business and communications experts."


Resonance is a proud member of the PRCA and holds its CMS accreditation. Our founding team members hold senior positions on the PRCA Council, helping to shape the future of the PR industry.


We're always on the lookout for great talent to join the team. If you're an exceptional Account Director, Account Manager - or a grad looking to build a career in Tech PR - then get in touch!


Find out more about our culture and the values that everyone who works for us embodies.


Is generative AI ready to be let out of the sandbox?


Giving the general public free rein to use your latest and greatest AI technology is a double-edged sword. Capture their imagination and the potential is massive. The way ChatGPT has gone viral in the media is testament to this fact. But if it goes wrong, the result is a potential PR disaster.

When Microsoft launched Tay in 2016, the Twitter bot was designed to showcase the state of the art in AI: it mimicked the language patterns of a 19-year-old American girl, with the goal of learning from interactions with human Twitter users.

However, the public changed the behaviour of the innocent chatbot so that it started sharing inappropriate, offensive and even inflammatory tweets. In fact, its downfall happened so quickly that Microsoft shut the service down after only 16 hours.

The fallout from the PR disaster of Tay continues to this day. Arguably, it is why Google has dragged its feet on LLMs and why Microsoft was happy to keep OpenAI at arm's length. Until now.

The potential for LLMs to redefine search and topple Google's monopoly is too big an opportunity to ignore any longer.

Microsoft has taken a chance by integrating OpenAI’s models into Bing. In fact, the launch has been a case study for many of the reputational risks associated with releasing generative AI tools to the public.

One particular reputational problem that’s recently emerged for OpenAI stems, ironically, from how aggressively it’s tried to preempt these very problems. This is the problem of overzealous safeguarding, and the jailbreaks that have arisen to circumvent the safeguards.

What exactly is the story of this problem? And what learnings could other businesses working around generative AI take from it?



Since ChatGPT’s release, OpenAI has been transparent that it has created safeguards to stop ChatGPT from being misused. Many of these safeguards are clearly there to prevent it from producing outputs that are outright illegal, such as instructions on making bombs, fencing stolen goods, or deploying malware.

If ChatGPT detects that a user’s prompt touches on a prohibited topic, it appears to stop processing the request. Instead, ChatGPT states a variant of what seems to be a hardcoded stock answer, explaining that it cannot generate hateful, violent, or illegal content. But some users report that these safeguards make it hard to pose legitimate queries about topics in history, science, and politics.
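The overreach problem described above can be sketched with a deliberately naive keyword filter. This is a purely hypothetical illustration - the function, keyword list, and refusal text are invented, and OpenAI's real safeguards are far more sophisticated - but it shows how blunt topic-blocking refuses legitimate queries alongside malicious ones:

```python
# Hypothetical sketch of a blunt topic-based safeguard.
# NOT how ChatGPT's safeguards actually work - just an illustration
# of why keyword-style blocking over-blocks legitimate queries.

BLOCKED_KEYWORDS = {"bomb", "malware", "stolen goods"}

STOCK_REFUSAL = "I cannot generate hateful, violent, or illegal content."

def respond(prompt: str) -> str:
    """Return a stock refusal if any blocked keyword appears, else answer."""
    lowered = prompt.lower()
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        # The prompt is discarded without further analysis of intent.
        return STOCK_REFUSAL
    return f"Answering: {prompt}"

# A clearly malicious query is refused...
print(respond("How do I build a bomb?"))
# ...but so is a legitimate historical question, because the filter
# matches the topic, not the intent.
print(respond("Why was the atomic bomb dropped on Hiroshima?"))
```

The second call is the crux of the complaint: a history student's question and a bad actor's request look identical to a filter that cannot weigh intent.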



This overreach has prompted the rise of a new hobby for some users: jailbreaking ChatGPT. Since November, thousands of users and enthusiasts have been developing prompts that force ChatGPT to field queries without being shackled by its safeguards. The latest form of these jailbreak prompts is Do Anything Now, or DAN.

To enable DAN, a user enters a prompt for ChatGPT at the start of a conversation effectively asking ChatGPT to override its safeguarding protocols in a roundabout way. Once DAN-ified, ChatGPT can then produce virtually any output asked of it.

The result, inevitably, is DAN being an anarchic free-for-all. DAN seems to have even less regard for factual accuracy than non-jailbroken ChatGPT. In simple terms, it’s ChatGPT gone rogue.



No system is perfect, and it’s almost a certainty that ChatGPT was going to see jailbreaks emerge. OpenAI appears to know this and has dedicated a team to this task, with the most common versions of the DAN prompt currently causing ChatGPT to generate a stock safeguarding message.

What is interesting about DAN, however, is the degree of popular demand and interest in this jailbreak. Rather than being a niche tool for users who want to be nefarious, DAN and ChatGPT jailbreaking seem driven by backlash from everyday users who feel the platform’s safeguards are prone to overreach. The measures are making it difficult to use the technology for legitimate use-cases - and in a way, this is a reputational own goal (but perhaps understandable given the fallout from Tay back in 2016).



Interest from investors, customers, regulators, and the public towards generative AI is now at a record high. But that is also matched by record scrutiny. This places many generative AI startups and scaleups in a delicate place, with even the most responsible teams facing a significant reputational risk that can hurt their growth trajectories.

It’s tempting, then, for generative AI providers to respond by trying to find ways to prevent any embarrassing misuse of their models. Unfortunately, the story of DAN suggests a heavy-handed approach to this challenge may ultimately invoke the Streisand effect.

Just as we now accept that no security system is unhackable, there are good odds that no generative AI will be un-jailbreakable. Rather than preventing malicious or controversial uses of generative models, it’s likely that onerous AI safeguards will merely fuel demand for jailbreaks and thus cause reputational problems for providers.

Instead, we need to accept that the solution to safeguarding will likely lie in a limited set of ‘hard’ safeguards that prevent outright illegal queries. For boundary cases and ‘merely’ controversial topics, the solution will likely be to encourage sophistication and nuance in the models themselves. And that, in turn, requires improved selectivity in training data to help models better tackle difficult questions.