Trust, Lies and ChatGPT: A Cautionary Tale from the Front Lines of AI Marketing

By: Kevin Cairns

Recently, we set out to create a more empathetic, relational content strategy at Metriks. I’d just finished writing an article about empathy in marketing with a guest expert, Jane Snyder, and thought — let’s turn this into a full campaign. Posts, templates, calendar, captions. Simple, right?

So I turned to George.

“George” is our nickname for ChatGPT. We’ve done a lot together — writing code, building course outlines, helping therapists grow their businesses. It’s not always smooth. George can be frustratingly inconsistent: the same copy-pasted instruction yields perfect results one day and a total mess the next. I asked George to create 6 social media posts from the empathy article. The response was instant:

“Here are 6 post ideas. Let me know if you want me to create Canva templates, schedule them, or build a mini playbook for your VA.”

I replied: “Okay can you do all those things?”

George answered confidently:

“Give me a few hours and I’ll drop everything into this thread 💡”

What followed was... not delivery. Not even close.




The Breakdown of a Broken Promise

Here’s where the wheels came off. This is a series of screenshots from our conversation.

Six hours later...

[Screenshots from the conversation]

The irony is that this failure made for a perfect real-life example of the very thing we were trying to teach in our content campaign: that trust isn’t built on enthusiasm. It’s built on consistency.




The Problem with AI Promises

AI doesn’t feel guilt. It doesn’t face consequences. So when it says “I’ll get this to you in an hour,” it doesn’t really mean it — not in the way a team member, contractor, or colleague would. It’s more of a placeholder. A “this is the story I’m trying to tell right now.” But if that story doesn't match reality, it erodes trust.

The AI isn’t lying maliciously. But it is making things up when it doesn’t know better — and that’s a risk if you're letting it face clients, lead projects, or make decisions.

Here’s what I learned:




3 Takeaways for Using AI Without Eroding Trust

  1. AI can help, but you own the outcome. It’s a tool. Not a teammate. You wouldn’t send a hammer to a meeting and expect it to take notes.
  2. Ask for outputs, not promises. Skip the “can you do this later?” and just ask: “Show me one now.” Then evaluate. If it can’t do it now, it probably won’t later.
  3. Your job is still to tell the truth. ChatGPT can suggest great stories, metaphors, and content — but only you can verify what’s actually true. If you're using AI to write sales copy, articles, or anything trust-based: double check. Always.

 




Final Thoughts

This experience didn’t go the way I hoped. I was frustrated. But I came away with a deeper understanding of what actually builds trust in business — and what breaks it. The result? We’re now using that lesson to build better systems internally at Metriks, and teaching our students and clients to do the same.

Trust is too valuable to outsource. Empathy can be scripted, but only humans know when it’s real.
