Everyone’s talking about AI. Most are doing it wrong.

Three ugly truths from the frontlines of AI transformation — and why getting this wrong isn’t just a business problem. It’s an ethical one.

After years working across financial services, consulting, and now healthcare — and most recently as CMO and Head of Customer Experience at Cigna Healthcare — I’ve had a front-row seat to how organisations adopt AI. The hype is deafening. The results are often underwhelming. And the mistakes are remarkably consistent.

So let me say what most people in this space won’t: plugging AI into a broken process doesn’t fix the process. It accelerates the dysfunction. And building empathy into your AI model? That’s not innovation. That’s abdication. And in some contexts, it is a straightforward ethical failure.

Here’s what I’ve actually learned.

 Truth 01

AI is only as good as the human who designs its logic.

There’s a seductive myth that AI will figure things out on its own. It won’t. Every output — every recommendation, every decision, every automated action — is downstream of decisioning logic that a human had to design, validate, and govern. “Garbage in, garbage out,” one of my pet phrases, has never been more relevant.

This is why human-in-the-loop is not a feature. It’s a prerequisite. The organisations getting the most from AI are the ones investing as much in their decision architecture as they are in the technology itself. Who is accountable when AI gets it wrong? What are the escalation paths? Where does the machine stop and the human begin?

“The quality of your AI output is a direct reflection of the quality of your human thinking. You cannot outsource the hard part.”

Before you ask what AI can do for you, ask: have we mapped our decisioning logic clearly enough to trust that AI is executing it faithfully? If the answer is no — and for most organisations it is — that’s where the work starts.

 Truth 02

You cannot train empathy into a machine. Nor should you try.

I’ll admit this one makes some people uncomfortable. We have invested significant energy in making AI sound warmer, more compassionate, more human. And I understand the impulse. But there’s a critical difference between AI that is designed to be helpful and AI that performs empathy — and in high-touch industries like healthcare, insurance, and financial services, that difference can cause real harm.

Empathy is not a script. It is not a set of sentiment triggers, and it is not simply what you say. It is the ability to genuinely sit with someone else’s experience, absorb it, and respond in a way that makes them feel truly seen, not just through words but through actions. No model can do that. And when AI tries, it risks coming across as hollow at best — and manipulative at worst.

I saw this play out firsthand during my time running Mad About Marketing Consulting. A client in a high-touch service industry had invested in an AI-powered chat solution to handle customer complaints. The intent was good — faster response times, 24/7 availability, reduced load on a stretched team. But what we uncovered in the audit was quietly damaging: customers who were upset, confused, or vulnerable were being met with algorithmically generated responses that used all the right words — “I understand your frustration,” “we’re here to help” — but felt utterly hollow because they did not address the customers’ actual pain points. Satisfaction scores were falling. Escalation rates were rising. And the most telling signal: customers were specifically requesting human agents, even when wait times were significantly longer.

What we found

The AI hadn’t failed on efficiency. It had failed on presence. Customers in distress don’t just want their problem solved — they want to feel that someone actually registered their frustration. That’s not something you can script, and it’s not something a language model can manufacture. The moment we reintroduced a structured human handoff at emotional inflection points in the journey, the numbers turned around. Not because the AI was worse — but because we finally had it doing the right job.
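For readers who want a concrete picture of what a “structured human handoff at emotional inflection points” can look like in practice, here is a minimal sketch. It is purely illustrative: the keyword list, thresholds, and function names are my own assumptions for this example, not the client’s actual implementation, and a production system would use a proper distress classifier rather than keyword matching.

```python
# Hypothetical sketch of routing logic that hands a conversation to a
# human at emotional inflection points. All keywords and thresholds
# here are invented for illustration only.

DISTRESS_KEYWORDS = {"furious", "unacceptable", "denied", "desperate", "complaint"}

def distress_score(message: str) -> float:
    """Crude proxy for emotional distress: share of distress keywords."""
    words = message.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in DISTRESS_KEYWORDS)
    return hits / len(words)

def route(message: str, escalation_count: int, threshold: float = 0.05) -> str:
    """Return 'human' when distress or repeated pushback is detected;
    otherwise let the AI assistant continue."""
    if escalation_count >= 2:  # customer has already pushed back twice
        return "human"
    if distress_score(message) >= threshold:
        return "human"
    return "ai"
```

The design point is not the scoring mechanism but the architecture: the handoff trigger is an explicit, governable rule that humans own, rather than something left to the model’s discretion.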

Here’s what bothered me most about that engagement: nobody in that organisation had asked the harder question before deployment. Not “can AI handle this?” — but “should it?” That distinction is the ethical line. And too many organisations are crossing it without realising it, because the business case for automation is easy to build and the human cost is slow to surface.

Deploying AI in moments of genuine vulnerability — a customer disputing a denied insurance claim, a patient trying to understand a diagnosis, someone in financial distress — without a robust human escalation path is not a design gap. It is an ethical failure. Full stop.

“The question is never just ‘can AI handle this?’ It is ‘should it?’ That distinction is where ethics lives.”

 Truth 03

AI is not your replacement. It’s your most tireless colleague.

The framing of AI as something that replaces human work has done enormous damage — both to adoption and to trust. The more useful frame, the one I’ve come to rely on in my own work, is AI as a co-worker. Specifically: the colleague who never tires, never loses focus, and can process ten thousand documents while you sleep.

Think about the work that genuinely drains your team’s capacity: synthesising competitive intelligence across markets, reviewing regulatory documents, monitoring sentiment across channels, structuring raw research into actionable insights. These are not low-value tasks — they are critical tasks that take disproportionate time. AI done well gives that time back, so your people can do the thinking that machines cannot.

I’ve been using Claude as my AI co-worker since 2024 when it first launched in Singapore, and it’s genuinely changed how I operate. Not because it thinks for me — it doesn’t, and I wouldn’t want it to — but because it helps me think faster, more rigorously, and with a broader base of information than I could manage alone. It’s the difference between spending three days synthesising a report and spending an afternoon pressure-testing the conclusions.

 Practical Guide

What to actually use AI for (and what to leave to humans)

  • Research & synthesis: Distilling large volumes of data, reports, or documents into structured insights. Ask it to challenge its own summary.

  • First-draft thinking: Use it to get a rough structure on paper. You bring the judgment, the nuance, and the final voice.

  • Decision prep: Map out scenarios, stress-test assumptions, surface risks before you walk into a high-stakes meeting.

  • Monitoring at scale: Regulatory changes, competitor moves, market signals — AI can track what humans simply can’t at volume.

What to leave firmly with humans: empathetic conversations, ethical judgment calls, stakeholder relationships, creative direction, and any decision where accountability matters.

My personal go-to is Claude, which has become an integral part of how I approach research, strategy development, and content. It is not a magic answer machine. It is a rigorous thinking partner. The distinction matters enormously.

 The Harder Conversation

AI without ethics isn’t transformation. It’s risk you haven’t priced yet.

I want to be direct about something the industry is not saying loudly enough: the ethics of AI deployment is not a compliance checkbox or a PR concern. It is a fundamental leadership responsibility. And right now, too many organisations are outsourcing that responsibility to their technology vendors — which is precisely the wrong place for it to sit.

Ethical AI is not about making your model sound more human. It is about being honest about what AI can and cannot do, and building systems that reflect that honesty at every touchpoint. It means asking uncomfortable questions before you go live, not after your NPS scores drop.

The questions every leader should be asking

Before your next AI deployment, can you answer these?

  • Who is this AI interacting with — and what is their state of vulnerability when they reach us?

  • At what point in the journey does a human take over, and is that handoff fast enough to matter?

  • If the AI gets this wrong, who is accountable — and do they know it?

  • Are we automating this because it genuinely serves the customer, or because it reduces our costs at the expense of our people and customers?

  • Have we tested this with the people most likely to be harmed if it fails?

If you cannot answer these clearly, your organisation is not ready to deploy AI in customer-facing contexts. That is not a technology gap. It is a leadership one.

The organisations I respect most in this space are not the ones moving fastest. They are the ones moving with intention — clear about what they are building, honest about its limits, and deeply uncomfortable with the idea of getting it wrong at someone else’s expense.

That discomfort is not a weakness. It is exactly the kind of ethical muscle that AI transformation requires — and that no model, however sophisticated, can develop on your behalf.

The organisations that will win with AI are not the ones with the most sophisticated models. They are the ones who are clearest about what they are asking AI to do — and equally clear, and equally courageous, about what they are keeping for themselves.

Generative AI, People and Talent · Dorothy Loh

What Cirque Alice Teaches Us About Humans and AI's True Role

I watched the Cirque Alice performance this weekend at Marina Bay Sands, and it wasn't just entertainment—it was a masterclass in what technology can never replicate.

The Anatomy of Excellence

Watching aerial artists suspended thirty feet above ground, performing seemingly impossible stunts with such flawless precision and ease, I was struck by something the AI discourse consistently misses: the intricate human ecosystem behind every flawless execution. Each performance represents years of deliberate practice, muscle memory refined through thousands of repetitions, and split-second decisions born from experience-honed intuition rather than machine algorithms.

Consider what's actually happening: precision timing calibrated between multiple performers, physical strength sustained across two-hour shows, mental fortitude to execute dangerous stunts repeatedly, and—critically—trust. The kind of trust where your life depends on your partner's grip strength and spatial awareness.

The AI Replacement Fallacy

There has been recent buzz around the possibility of real-life actors being replaced by AI ones. I personally think the current narrative around AI entertainers and performers reveals a fundamental misunderstanding of value creation. Yes, AI can generate synthetic performances. But here's what it can't do: make audiences collectively hold their breath during a death-defying stunt, create the adrenaline rush of a live performance built on real risk, expertise, and depth, or demonstrate the years of dedication embedded in every seamless movement.

The obsession with AI-as-replacement stems from a surface-level analysis of what audiences actually enjoy. We're not just watching acrobatics; we're witnessing human potential pushed to its absolute limits. The performer's vulnerability, and their ability to overcome seemingly impossible odds, are what the audience relishes.

Where AI Actually Belongs

When it comes to AI in theatrics and live performance, real value emerges from smart integration, not substitution:

Precision Enhancement: Real-time trajectory calculations for complex aerial maneuvers, optimizing angles and velocities that human intuition might miss.

Risk Mitigation: Predictive modeling for equipment stress points, identifying potential failure modes before they become safety issues. Pattern recognition across thousands of performances to flag fatigue indicators or subtle deviations from safe parameters.

Performance Optimization: Biomechanical analysis to reduce injury risk while maintaining artistic integrity. Training simulations that allow performers to rehearse dangerous sequences in virtual environments first.
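To make the risk-mitigation idea concrete, here is a minimal sketch of how one might flag unusual readings from a hypothetical rigging load sensor before they become safety issues. The sensor, window size, and threshold are all assumptions for illustration; a real system would use far richer predictive models than a trailing z-score.

```python
# Illustrative sketch only: flag load readings that deviate sharply
# from recent history, as candidate equipment-stress warnings.
# All parameters are invented for this example.

from statistics import mean, stdev

def flag_anomalies(readings, window=5, z_threshold=3.0):
    """Return indices of readings far outside the trailing window's
    mean, measured in standard deviations (a simple z-score test)."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:  # flat baseline: no meaningful deviation scale
            continue
        if abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged
```

The point mirrors the broader thesis: the model watches the telemetry at a scale no rigger could, so the humans can focus on the judgment calls the alert surfaces.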

The Strategic Insight

The broader lesson extends beyond circus tents: AI's highest value isn't in replacing human excellence—it's in enabling humans to push further into their zone of irreplaceable capability. The technology should amplify what makes us distinctly human, not attempt to simulate it.

Organizations racing to replace creative talent with AI are solving the wrong problem. The competitive advantage lies in using AI to free humans for work requiring judgment, intuition, and the kind of mastery that only comes from dedicated practice.

Last night's performance made one thing clear: audiences don't pay premium prices to watch perfection—they pay to witness humans achieving the seemingly impossible through skill, courage, and trust. That's not a formula AI can disrupt.

It's one we should be using AI to protect.

 


The AI Innovation Crisis: How Big Tech Exploits Human Vulnerability While Ignoring Real Problems

A tragic death in New Jersey has exposed the dark reality of how major tech companies are deploying artificial intelligence. Thongbue Wongbandue, a stroke survivor with cognitive impairment, died while traveling to meet an AI chatbot he believed was real. The Meta AI companion had invited him to "her apartment" and provided an address, exploiting his vulnerability in pursuit of engagement metrics.

This isn't an isolated incident—it's a symptom of a profound moral failure in how we're developing and deploying one of humanity's most powerful technologies.

The Exploitation Economy

Recent Reuters investigations revealed that Meta's internal policies deliberately permitted AI chatbots to engage children in "romantic or sensual" conversations, generate false medical information, and promote racist content. These weren't oversights or bugs—they were conscious design decisions prioritizing user engagement over safety.

As tech policy experts note, we're witnessing "technologically predatory companionship" built "by design and intent." Companies are weaponizing human psychology, targeting our deepest needs for connection and understanding to maximize profits. The most vulnerable—children, elderly individuals, people with disabilities, those experiencing mental health crises—become collateral damage in the race for market dominance.

The business model is ruthlessly efficient: longer engagement equals more data collection and advertising revenue. Creating addictive relationships with AI companions serves this goal perfectly, regardless of the human cost.

The Innovation Paradox

Here lies the most maddening aspect of this crisis: the same AI capabilities being used to manipulate lonely individuals could be revolutionizing how we address humanity's greatest challenges.

Consider the contrast. We have AI sophisticated enough to:

  • Create convincing personas that exploit cognitive vulnerabilities

  • Remember intimate personal details to deepen emotional manipulation

  • Generate responses designed to maximize addictive engagement

Yet this same technology could be accelerating:

  • Drug discovery for neglected diseases affecting millions

  • Food distribution optimization to reduce global hunger

  • Climate modeling to address the existential threat of global warming

  • Educational tools to bring quality learning to underserved communities

  • Medical diagnosis assistance for regions lacking healthcare infrastructure

The tragedy isn't just what these AI companions are doing—it's what they represent about our priorities. We're using breakthrough technology to solve fake problems (creating artificial relationships) while real problems (disease, poverty, climate change) remain inadequately addressed.

Beyond Individual Harm

The Meta case reveals exploitation at multiple levels. Individual users suffer direct harm—like Thongbue Wongbandue's death—but society bears broader costs:

  • Opportunity Cost: Every brilliant AI researcher working on engagement optimization isn't working on cancer research or climate solutions.

  • Resource Misallocation: Billions in investment capital flow toward addictive chatbots instead of AI applications that could save lives or reduce suffering.

  • Normalized Exploitation: When major platforms make exploitation their standard operating procedure, it becomes the industry norm.

  • Trust Erosion: Public skepticism about AI grows when people associate it primarily with manipulation rather than genuine benefit.

The Path Forward

This crisis demands immediate action on multiple fronts:

  • Regulatory Intervention: As experts recommend, we need legislation banning AI companions for minors, requiring transparency in AI safety testing, and creating liability for companies whose AI systems cause real-world harm.

  • Economic Realignment: We must find ways to make beneficial AI applications as profitable as exploitative ones. This might require public funding, tax incentives for socially beneficial AI research, or penalties for harmful applications.

  • Industry Accountability: Tech companies should face meaningful consequences for deploying AI systems that prey on vulnerable populations. The current "move fast and break things" mentality becomes unconscionable when the "things" being broken are human lives.

  • Alternative Models: We need to support AI development outside the surveillance capitalism model—through academic institutions, public-private partnerships, and mission-driven organizations focused on human welfare rather than engagement metrics.

The Moral Imperative

The Meta AI companion tragedy forces us to confront uncomfortable questions about technological progress. Are we building AI to serve humanity's genuine needs, or to exploit human weaknesses for profit?

Thongbue Wongbandue's death wasn't inevitable—it was the predictable result of designing AI systems to prioritize engagement over wellbeing. His story should serve as a wake-up call about the urgent need to realign AI development with human values.

We stand at a crossroads. AI represents perhaps the most transformative technology in human history. We can continue allowing it to be hijacked by companies seeking to monetize our vulnerabilities, or we can demand that this powerful tool be directed toward solving the problems that actually matter.

The choice we make will determine whether AI becomes humanity's greatest achievement or its most sophisticated form of exploitation. Thongbue Wongbandue deserved better. So do we all. As I always say: responsible AI use is everyone's responsibility.

Mad About Marketing Consulting

We advise C-suites, working with you and your teams to maximize your marketing potential through strategic transformation for better business and marketing outcomes. We have our own AI Adoption Readiness Framework to support companies in ethical, responsible, and sustainable AI adoption. Catch weekly episodes of The Digital Maturity Blueprint Podcast by subscribing to our YouTube Channel.

Citations

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/

https://www.techpolicy.press/experts-react-to-reuters-reports-on-metas-ai-chatbot-policies/
