Everyone’s talking about AI. Most are doing it wrong.

Three ugly truths from the frontlines of AI transformation — and why getting this wrong isn’t just a business problem. It’s an ethical one.

After years working across financial services, consulting, and now healthcare — and most recently as CMO and Head of Customer Experience at Cigna Healthcare — I’ve had a front-row seat to how organisations adopt AI. The hype is deafening. The results are often underwhelming. And the mistakes are remarkably consistent.

So let me say what most people in this space won’t: plugging AI into a broken process doesn’t fix the process. It accelerates the dysfunction. And building empathy into your AI model? That’s not innovation. That’s abdication. And in some contexts, it is a straightforward ethical failure.

Here’s what I’ve actually learned.

 Truth 01

AI is only as good as the human who designs its logic.

There’s a seductive myth that AI will figure things out on its own. It won’t. Every output — every recommendation, every decision, every automated action — is downstream of decisioning logic that a human had to design, validate, and govern. “Garbage in, garbage out,” one of my pet phrases, has never been more relevant.

This is why human-in-the-loop is not a feature. It’s a prerequisite. The organisations getting the most from AI are the ones investing as much in their decision architecture as they are in the technology itself. Who is accountable when AI gets it wrong? What are the escalation paths? Where does the machine stop and the human begin?
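Those three questions — who is accountable, what the escalation paths are, where the machine stops — can be made concrete by treating the decision architecture as an explicit artifact rather than an assumption. A minimal sketch of that idea follows; the `DecisionPoint` fields and the example decision names are illustrative assumptions, not a prescribed framework:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionPoint:
    """One node in a decision architecture (fields are illustrative)."""
    name: str
    automated: bool                       # does the machine act without review?
    human_owner: Optional[str] = None     # who is accountable if it goes wrong
    escalation_path: Optional[str] = None # where a contested outcome goes next

def readiness_gaps(points):
    """Flag automated decisions with no named owner or no escalation path.
    A non-empty result means the decisioning logic is not mapped clearly
    enough to trust AI with it yet."""
    gaps = []
    for p in points:
        if p.automated and not p.human_owner:
            gaps.append(f"{p.name}: no accountable human")
        if p.automated and not p.escalation_path:
            gaps.append(f"{p.name}: no escalation path")
    return gaps

# Hypothetical example: two automated decisions, one fully governed, one not.
claims_flow = [
    DecisionPoint("claim-triage", automated=True,
                  human_owner="claims ops lead",
                  escalation_path="senior adjuster"),
    DecisionPoint("claim-denial", automated=True),
]
print(readiness_gaps(claims_flow))
# flags 'claim-denial' twice: no accountable human, no escalation path
```

The point of the sketch is not the code itself but the discipline it forces: every automated decision must name a human owner and an escalation path before go-live, or it does not ship.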

“The quality of your AI output is a direct reflection of the quality of your human thinking. You cannot outsource the hard part.”

Before you ask what AI can do for you, ask: have we mapped our decisioning logic clearly enough to trust that AI is executing it faithfully? If the answer is no — and for most organisations it is — that’s where the work starts.

 Truth 02

You cannot train empathy into a machine. Nor should you try.

I’ll admit this one makes some people uncomfortable. We have invested significant energy in making AI sound warmer, more compassionate, more human. And I understand the impulse. But there’s a critical difference between AI that is designed to be helpful and AI that performs empathy — and in high-touch industries like healthcare, insurance, and financial services, that difference can cause real harm.

Empathy is not a script. It is not a set of sentiment triggers, and it is not simply what you say. It is the ability to genuinely sit with someone else’s experience, absorb it, and respond in a way that makes them feel truly seen — not just through words but equally through actions. No model can do that. And when AI tries, it risks coming across as hollow at best — and manipulative at worst.

I saw this play out firsthand during my time running Mad About Marketing Consulting. A client in a high-touch service industry had invested in an AI-powered chat solution to handle customer complaints. The intent was good — faster response times, 24/7 availability, reduced load on a stretched team. But what we uncovered in the audit was quietly damaging: customers who were upset, confused, or vulnerable were being met with algorithmically generated responses that used all the right words — “I understand your frustration,” “we’re here to help” — but felt utterly hollow, because they did not address the underlying pain points. Satisfaction scores were falling. Escalation rates were rising. And the most telling signal: customers were specifically requesting human agents, even when wait times were significantly longer.

What we found

The AI hadn’t failed on efficiency. It had failed on presence. Customers in distress don’t just want their problem solved — they want to feel that someone actually registered their frustration. That’s not something you can script, and it’s not something a language model can manufacture. The moment we reintroduced a structured human handoff at emotional inflection points in the journey, the numbers turned around. Not because the AI was worse — but because we finally had it doing the right job.
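A structured human handoff of the kind described above is, at its simplest, a routing rule that fires at emotional inflection points. The sketch below is a deliberately minimal illustration under stated assumptions — the distress signals, the repeat-contact trigger, and the keyword matching are placeholders for whatever signals a real deployment would use, not a production sentiment model:

```python
# Illustrative distress vocabulary; a real system would use a proper
# sentiment or intent model, not a keyword set.
DISTRESS_SIGNALS = {"frustrated", "denied", "urgent", "complaint", "scared"}

def route_message(text: str, repeat_contact: bool) -> str:
    """Return 'human' at an emotional inflection point, else 'ai'."""
    words = set(text.lower().split())
    if words & DISTRESS_SIGNALS:
        return "human"   # distress language: hand off immediately
    if repeat_contact:
        return "human"   # unresolved repeat contact: escalate
    if "agent" in words or "person" in words:
        return "human"   # explicit request for a person
    return "ai"          # routine query: the AI is doing the right job

print(route_message("where can i download my statement", repeat_contact=False))
# -> ai
print(route_message("my claim was denied and i am frustrated", repeat_contact=False))
# -> human
```

Note what the rule encodes: the AI keeps the routine, high-volume work, and the moments of distress go to a person. That is the “right job” division the engagement landed on.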

Here’s what bothered me most about that engagement: nobody in that organisation had asked the harder question before deployment. Not “can AI handle this?” — but “should it?” That distinction is the ethical line. And too many organisations are crossing it without realising it, because the business case for automation is easy to build and the human cost is slow to surface.

Deploying AI in moments of genuine vulnerability — a customer disputing a denied insurance claim, a patient trying to understand a diagnosis, someone in financial distress — without a robust human escalation path is not a design gap. It is an ethical failure. Full stop.

“The question is never just ‘can AI handle this?’ It is ‘should it?’ That distinction is where ethics lives.”

 Truth 03

AI is not your replacement. It’s your most tireless colleague.

The framing of AI as something that replaces human work has done enormous damage — both to adoption and to trust. The more useful frame, the one I’ve come to rely on in my own work, is AI as a co-worker. Specifically: the colleague who never tires, never loses focus, and can process ten thousand documents while you sleep.

Think about the work that genuinely drains your team’s capacity: synthesising competitive intelligence across markets, reviewing regulatory documents, monitoring sentiment across channels, structuring raw research into actionable insights. These are not low-value tasks — they are critical tasks that take disproportionate time. AI done well gives that time back, so your people can do the thinking that machines cannot.

I’ve been using Claude as my AI co-worker since 2024 when it first launched in Singapore, and it’s genuinely changed how I operate. Not because it thinks for me — it doesn’t, and I wouldn’t want it to — but because it helps me think faster, more rigorously, and with a broader base of information than I could manage alone. It’s the difference between spending three days synthesising a report and spending an afternoon pressure-testing the conclusions.

 Practical Guide

What to actually use AI for (and what to leave to humans)

  • Research & synthesis  Distilling large volumes of data, reports, or documents into structured insights. Ask it to challenge its own summary.

  • First-draft thinking  Use it to get a rough structure on paper. You bring the judgment, the nuance, and the final voice.

  • Decision prep  Map out scenarios, stress-test assumptions, surface risks before you walk into a high-stakes meeting.

  • Monitoring at scale  Regulatory changes, competitor moves, market signals — AI can track what humans simply can’t at volume.

What to leave firmly with humans: empathetic conversations, ethical judgment calls, stakeholder relationships, creative direction, and any decision where accountability matters.

My personal go-to is Claude, which has become an integral part of how I approach research, strategy development, and content. It is not a magic answer machine. It is a rigorous thinking partner. The distinction matters enormously.

 The Harder Conversation

AI without ethics isn’t transformation. It’s risk you haven’t priced yet.

I want to be direct about something the industry is not saying loudly enough: the ethics of AI deployment is not a compliance checkbox or a PR concern. It is a fundamental leadership responsibility. And right now, too many organisations are outsourcing that responsibility to their technology vendors — which is precisely the wrong place for it to sit.

Ethical AI is not about making your model sound more human. It is about being honest about what AI can and cannot do, and building systems that reflect that honesty at every touchpoint. It means asking uncomfortable questions before you go live, not after your NPS scores drop.

The questions every leader should be asking

Before your next AI deployment, can you answer these?

  • Who is this AI interacting with — and what is their state of vulnerability when they reach us?

  • At what point in the journey does a human take over, and is that handoff fast enough to matter?

  • If the AI gets this wrong, who is accountable — and do they know it?

  • Are we automating this because it genuinely serves the customer, or because it reduces our costs at the expense of our people and customers?

  • Have we tested this with the people most likely to be harmed if it fails?

If you cannot answer these clearly, your organisation is not ready to deploy AI in customer-facing contexts. That is not a technology gap. It is a leadership one.

The organisations I respect most in this space are not the ones moving fastest. They are the ones moving with intention — clear about what they are building, honest about its limits, and deeply uncomfortable with the idea of getting it wrong at someone else’s expense.

That discomfort is not a weakness. It is exactly the kind of ethical muscle that AI transformation requires — and that no model, however sophisticated, can develop on your behalf.

The organisations that will win with AI are not the ones with the most sophisticated models. They are the ones who are clearest about what they are asking AI to do — and equally clear, and equally courageous, about what they are keeping for themselves.
