Brand reputation in the generative AI era: trust is everyone's job

AI is now deeply embedded in everyday life, yet many brands are still working out how to use it effectively.

Generative AI has moved beyond being a productivity tool: in today’s environment, it can actively shape or damage a brand’s reputation.

As PRWeek recently highlighted, we are seeing an AI search shift, where people increasingly encounter AI-generated summaries of a brand rather than engaging directly with owned or trusted channels.

In this context, LinkedIn, Reddit and Wikipedia now sit alongside owned sites and earned media as primary reference sources, meaning a brand’s external footprint will directly determine how AI describes it.

Go onto an AI tool you have access to, such as ChatGPT or Gemini, and ask it for information about your brand – see what comes up!

If that summary is wrong, out of date, or missing context, your brand reputation is at risk. Your brand knows which information is accurate; the person doing the AI search may not. The results an AI tool surfaces can influence customers, journalists, investors and future hires.

So, how can brands in Ireland get ahead of this?

  • Decide who is allowed to speak for your brand and be the human in the loop

List every place AI might be used to communicate externally for your brand, such as customer service, social media replies, Wikipedia, LinkedIn, media Q&As or senior executive posts.

Set one rule: anything public needs a named person responsible for it. AI can help draft and summarise, but it should not be the final voice without an authorised person’s sign-off.

  • Set practical rules for AI use, not just a policy document, and treat data going into AI as a potential reputation issue

A broad AI policy won’t protect you on its own. What works is a short set of practical rules teams can follow under pressure: which tools are approved; what information must never be entered (personal data, or confidential information such as commercially sensitive plans and live issues); when disclosure is required; who checks work before it goes out (an authorised person); and what extra checks apply to high-risk content (financial results, legal matters, health and safety, sensitive incidents).

Keep the rules short and make them easy for everyone in the company to find.

  • Watch how AI summaries are shaping what people think of your brand

If AI assistants and search summaries are how people first meet your brand, errors can repeat and become believed. Keep your own channels accurate and consistent, including your website, investor information and social media profiles, and update FAQs regularly.

Generative AI can boost speed, insight and creativity, but reputation is a trust relationship, and we are all still adapting to this shift in real time.

The brands that come out strongest will be the ones that move fast. Fake content spreads quickly, so brand teams need to move faster: monitoring daily, spotting suspicious AI output early, verifying and correcting information where possible through the trusted sources AI draws on, and acting proactively to protect and strengthen reputation.

Dawn Burke, Managing Director, Corporate and Reputation, and Michelle McCoy, Managing Director, Digital