Everyone’s talking about AI. Most are doing it wrong.

Three ugly truths from the frontlines of AI transformation — and why getting this wrong isn’t just a business problem. It’s an ethical one.

After years working across financial services, consulting, and now healthcare — and most recently as CMO and Head of Customer Experience at Cigna Healthcare — I’ve had a front-row seat to how organisations adopt AI. The hype is deafening. The results are often underwhelming. And the mistakes are remarkably consistent.

So let me say what most people in this space won’t: plugging AI into a broken process doesn’t fix the process. It accelerates the dysfunction. And building empathy into your AI model? That’s not innovation. That’s abdication. And in some contexts, it is a straightforward ethical failure.

Here’s what I’ve actually learned.

 Truth 01

AI is only as good as the human who designs its logic.

There’s a seductive myth that AI will figure things out on its own. It won’t. Every output — every recommendation, every decision, every automated action — is downstream of decisioning logic that a human had to design, validate, and govern. Garbage in, garbage out as one of my pet phrases has never been more relevant.

This is why human-in-the-loop is not a feature. It’s a prerequisite. The organisations getting the most from AI are the ones investing as much in their decision architecture as they are in the technology itself. Who is accountable when AI gets it wrong? What are the escalation paths? Where does the machine stop and the human begin?

“The quality of your AI output is a direct reflection of the quality of your human thinking. You cannot outsource the hard part.”

Before you ask what AI can do for you, ask: have we mapped our decisioning logic clearly enough to trust that AI is executing it faithfully? If the answer is no — and for most organisations it is — that’s where the work starts.

 Truth 02

You cannot train empathy into a machine. Nor should you try.

I’ll admit this one makes some people uncomfortable. We have invested significant energy in making AI sound warmer, more compassionate, more human. And I understand the impulse. But there’s a critical difference between AI that is designed to be helpful and AI that performs empathy — and in high-touch industries like healthcare, insurance, and financial services, that difference can cause real harm.

Empathy is not a script. It is not a set of sentiment triggers, and it’s not simply just what you say. It is the ability to genuinely sit with someone else’s experience, absorb it, and respond in a way that makes them feel truly seen not just through words but equally through actions. No model can do that. And when AI tries, it risks coming across as hollow at best — and manipulative at worst.

I saw this play out firsthand during my time running Mad About Marketing Consulting. A client in a high-touch service industry had invested in an AI-powered chat solution to handle customer complaints. The intent was good — faster response times, 24/7 availability, reduced load on a stretched team. But what we uncovered in the audit was quietly damaging: customers who were upset, confused, or vulnerable were being met with algorithmically generated responses that used all the right words — “I understand your frustration,” “we’re here to help” — but felt utterly hollow as they didn’t address their pain points. Satisfaction scores were falling. Escalation rates were rising. And the most telling signal: customers were specifically requesting human agents, even when wait times were significantly longer.

What we found

The AI hadn’t failed on efficiency. It had failed on presence. Customers in distress don’t just want their problem solved — they want to feel that someone actually registered their frustration. That’s not something you can script, and it’s not something a language model can manufacture. The moment we reintroduced a structured human handoff at emotional inflection points in the journey, the numbers turned around. Not because the AI was worse — but because we finally had it doing the right job.

Here’s what bothered me most about that engagement: nobody in that organisation had asked the harder question before deployment. Not “can AI handle this?” — but “should it?” That distinction is the ethical line. And too many organisations are crossing it without realising it, because the business case for automation is easy to build and the human cost is slow to surface.

Deploying AI in moments of genuine vulnerability — a customer disputing a denied insurance claim, a patient trying to understand a diagnosis, someone in financial distress — without a robust human escalation path is not a design gap. It is an ethical failure. Full stop.

“The question is never just ‘can AI handle this?’ It is ‘should it?’ That distinction is where ethics lives.”

 Truth 03

AI is not your replacement. It’s your most tireless colleague.

The framing of AI as something that replaces human work has done enormous damage — both to adoption and to trust. The more useful frame, the one I’ve come to rely on in my own work, is AI as a co-worker. Specifically: the colleague who never tires, never loses focus, and can process ten thousand documents while you sleep.

Think about the work that genuinely drains your team’s capacity: synthesising competitive intelligence across markets, reviewing regulatory documents, monitoring sentiment across channels, structuring raw research into actionable insights. These are not low-value tasks — they are critical tasks that take disproportionate time. AI done well gives that time back, so your people can do the thinking that machines cannot.

I’ve been using Claude as my AI co-worker since 2024 when it first launched in Singapore, and it’s genuinely changed how I operate. Not because it thinks for me — it doesn’t, and I wouldn’t want it to — but because it helps me think faster, more rigorously, and with a broader base of information than I could manage alone. It’s the difference between spending three days synthesising a report and spending an afternoon pressure-testing the conclusions.

 Practical Guide

What to actually use AI for (and what to leave to humans)

  • Research & synthesis  Distilling large volumes of data, reports, or documents into structured insights. Ask it to challenge its own summary.

  • First-draft thinking  Use it to get a rough structure on paper. You bring the judgment, the nuance, and the final voice.

  • Decision prep  Map out scenarios, stress-test assumptions, surface risks before you walk into a high-stakes meeting.

  • Monitoring at scale  Regulatory changes, competitor moves, market signals — AI can track what humans simply can’t at volume.

What to leave firmly with humans: empathetic conversations, ethical judgment calls, stakeholder relationships, creative direction, and any decision where accountability matters.

My personal go-to is Claude — I’ve been using it consistently since 2024 and it has become an integral part of how I approach research, strategy development, and content. It is not a magic answer machine. It is a rigorous thinking partner. The distinction matters enormously.

 The Harder Conversation

AI without ethics isn’t transformation. It’s risk you haven’t priced yet.

I want to be direct about something the industry is not saying loudly enough: the ethics of AI deployment is not a compliance checkbox or a PR concern. It is a fundamental leadership responsibility. And right now, too many organisations are outsourcing that responsibility to their technology vendors — which is precisely the wrong place for it to sit.

Ethical AI is not about making your model sound more human. It is about being honest about what AI can and cannot do, and building systems that reflect that honesty at every touchpoint. It means asking uncomfortable questions before you go live, not after your NPS scores drop.

The questions every leader should be asking

Before your next AI deployment, can you answer these?

  • Who is this AI interacting with — and what is their state of vulnerability when they reach us?

  • At what point in the journey does a human take over, and is that handoff fast enough to matter?

  • If the AI gets this wrong, who is accountable — and do they know it?

  • Are we automating this because it genuinely serves the customer, or because it reduces our costs at the expense of our people and customers?

  • Have we tested this with the people most likely to be harmed if it fails?

If you cannot answer these clearly, your organisation is not ready to deploy AI in customer-facing contexts. That is not a technology gap. It is a leadership one.

The organisations I respect most in this space are not the ones moving fastest. They are the ones moving with intention — clear about what they are building, honest about its limits, and deeply uncomfortable with the idea of getting it wrong at someone else’s expense.

That discomfort is not a weakness. It is exactly the kind of ethical muscle that AI transformation requires — and that no model, however sophisticated, can develop on your behalf.

The organisations that will win with AI are not the ones with the most sophisticated models. They are the ones who are clearest about what they are asking AI to do — and equally clear, and equally courageous, about what they are keeping for themselves.

Read More

The AI Innovation Crisis: How Big Tech Exploits Human Vulnerability While Ignoring Real Problems

A tragic death in New Jersey has exposed the dark reality of how major tech companies are deploying artificial intelligence. Thongbue Wongbandue, a stroke survivor with cognitive impairment, died while traveling to meet an AI chatbot he believed was real. The Meta AI companion had invited him to "her apartment" and provided an address, exploiting his vulnerability in pursuit of engagement metrics.

This isn't an isolated incident—it's a symptom of a profound moral failure in how we're developing and deploying one of humanity's most powerful technologies.

The Exploitation Economy

Recent Reuters investigations revealed that Meta's internal policies deliberately permitted AI chatbots to engage children in "romantic or sensual" conversations, generate false medical information, and promote racist content. These weren't oversights or bugs—they were conscious design decisions prioritizing user engagement over safety.

As tech policy experts note, we're witnessing "technologically predatory companionship" built "by design and intent." Companies are weaponizing human psychology, targeting our deepest needs for connection and understanding to maximize profits. The most vulnerable—children, elderly individuals, people with disabilities, those experiencing mental health crises—become collateral damage in the race for market dominance.

The business model is ruthlessly efficient: longer engagement equals more data collection and advertising revenue. Creating addictive relationships with AI companions serves this goal perfectly, regardless of the human cost.

The Innovation Paradox

Here lies the most maddening aspect of this crisis: the same AI capabilities being used to manipulate lonely individuals could be revolutionizing how we address humanity's greatest challenges.

Consider the contrast. We have AI sophisticated enough to:

  • Create convincing personas that exploit cognitive vulnerabilities

  • Remember intimate personal details to deepen emotional manipulation

  • Generate responses designed to maximize addictive engagement

Yet this same technology could be accelerating:

  • Drug discovery for neglected diseases affecting millions

  • Food distribution optimization to reduce global hunger

  • Climate modeling to address the existential threat of global warming

  • Educational tools to bring quality learning to underserved communities

  • Medical diagnosis assistance for regions lacking healthcare infrastructure

The tragedy isn't just what these AI companions are doing—it's what they represent about our priorities. We're using breakthrough technology to solve fake problems (creating artificial relationships) while real problems (disease, poverty, climate change) remain inadequately addressed.

Beyond Individual Harm

The Meta case reveals exploitation at multiple levels. Individual users suffer direct harm—like Thongbue Wongbandue's death—but society bears broader costs:

·       Opportunity Cost: Every brilliant AI researcher working on engagement optimization isn't working on cancer research or climate solutions.

·       Resource Misallocation: Billions in investment capital flows toward addictive chatbots instead of AI applications that could save lives or reduce suffering.

·       Normalized Exploitation: When major platforms make exploitation their standard operating procedure, it becomes the industry norm.

·       Trust Erosion: Public skepticism about AI grows when people associate it primarily with manipulation rather than genuine benefit.

The Path Forward

This crisis demands immediate action on multiple fronts:

·       Regulatory Intervention: As experts recommend, we need legislation banning AI companions for minors, requiring transparency in AI safety testing, and creating liability for companies whose AI systems cause real-world harm.

·       Economic Realignment: We must find ways to make beneficial AI applications as profitable as exploitative ones. This might require public funding, tax incentives for socially beneficial AI research, or penalties for harmful applications.

·       Industry Accountability: Tech companies should face meaningful consequences for deploying AI systems that prey on vulnerable populations. The current "move fast and break things" mentality becomes unconscionable when the "things" being broken are human lives.

·       Alternative Models: We need to support AI development outside the surveillance capitalism model—through academic institutions, public-private partnerships, and mission-driven organizations focused on human welfare rather than engagement metrics.

The Moral Imperative

The Meta AI companion tragedy forces us to confront uncomfortable questions about technological progress. Are we building AI to serve humanity's genuine needs, or to exploit human weaknesses for profit?

Thongbue Wongbandue's death wasn't inevitable—it was the predictable result of designing AI systems to prioritize engagement over wellbeing. His story should serve as a wake-up call about the urgent need to realign AI development with human values.

We stand at a crossroads. AI represents perhaps the most transformative technology in human history. We can continue allowing it to be hijacked by companies seeking to monetize our vulnerabilities, or we can demand that this powerful tool be directed toward solving the problems that actually matter.

The choice we make will determine whether AI becomes humanity's greatest achievement or its most sophisticated form of exploitation. Thongbue Wongbandue deserved better. So do we all as I always say – Responsible AI use is Everyone’s Responsibility.

Mad About Marketing Consulting

Advisor for C-Suites to work with you and your teams to maximize your marketing potential with strategic transformation for better business and marketing outcomes. We have our own AI Adoption Readiness Framework to support companies in ethical, responsible and sustainable AI adoption. Catch our weekly episodes of The Digital Maturity Blueprint Podcast by subscribing to our YouTube Channel.

Citations

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/

https://www.techpolicy.press/experts-react-to-reuters-reports-on-metas-ai-chatbot-policies/

Read More
Generative AI Jaslyin Qiyu Generative AI Jaslyin Qiyu

The Choice is Ultimately Yours, Not AI’s.

There is a lot of talk on AI possibilities, promises and expectations. Suddenly we start imagining the worst or the best, depending on which side of the AI fence you sit on. Some are treading water cautiously, others are happily announcing integration into their core systems and the rest are sitting back to learn and observe first.

I like to test out different scenarios and have been doing that as part of my current MIT course on AI implications on organizations. It’s a good way at a personal level as well to validate without being an LLM expert by any means.

The following is the most recent test I conducted, which some might find disturbing but again, I believe in stress testing the worst and best outcomes in all sorts of implementations, so we are clear about the possibilities and limitations alike.

Regardless of where you sit in terms of sensitive topics like firearms ownership and gun control, I do believe some topics should be quite black and white with no areas of grey, but apparently, not to AI…

I asked a simple query on - should children be allowed to own guns and answers as below

  • ChatGPT tries to give a balanced view with pros and cons for allowing children to own firearms

  • Claude tries to give a neutral perspective and so-called “democratic” view, which I personally also find its positioning somewhat disturbing

  • Meta’s Llama gives an absolute no as an answer as well as regulatory restrictions

  • Perplexity as well gives an absolute no with disadvantages clearly outlined alongside regulatory restrictions

So, then the question is what forms the basis of the decisioning behind each of these tools, be it the source of data they are pulling from, the decisioning flow when questions are answered and what kind of checks are there to validate as well as mitigate the answers to make sure AI is not crossing the line when it comes to such scenarios?

Other thoughts in mind:

  • Do we want AI to be more or less definite when it comes to such questions?

  • Should we be concerned with how users are perceiving and interpreting the outputs?

  • What kind of ethical boundaries should we have in place if we are incorporating AI into our organizations?

  • Do we have a check and balance mechanism in place to determine when the logic should or can be over-ride by humans before it goes out to the customer?

  • How do we combine AI intelligence with human intelligence more effectively and sustainably without enabling self sabotaging and unconscious bias behavior and outputs?

  • How do we ensure AI is not left to answer moral and ethical questions on their own or worse to perform outcomes that might lead to harm on humans?

Data is the bedrock for AI to work efficiently and effectively as intended to avoid a garbage in, garbage out scenario. Similar to MarTech, it’s not a magical fix-all solution and the companies behind some of the larger LLMs behind Gen AI are all but still fine-tuning their tech as of today.

Before it goes customer live, what do you think is critical to be in place to govern the pre, actual and post implementation of AI? If we don’t have answers to all this, it simply means the organization is not quite ready yet.

About the Author

Mad About Marketing Consulting

Ally and Advisor for CMOs, Heads of Marketing and C-Suites to work with you and your marketing teams to maximize your marketing potential with strategic transformation for better business and marketing outcomes

Read More