@dmasin.bsky.social
🫵 Your turn!

What do you see as the biggest challenge in building AI agents for real-world customer support?

Reply below or DM us. I'd love to hear your thoughts 💭
January 7, 2025 at 11:55 AM
🌟 Our vision: SUPERHUMAN AI SUPPORT

AI agents that:
🧓 Grasp intent like seasoned humans.
💬 Handle multi-turn conversations effortlessly.
📖 Use historical context to *learn* and adapt.

It’s not just about answering—it's about delivering MAGIC ✨✨
January 7, 2025 at 11:55 AM
🤖 So, how do we bridge the gap?

AI must move beyond "semantic similarity" matching and adopt reasoning approaches to handle (see the sketch after this list):

1️⃣ Incomplete/misleading inputs.
2️⃣ Implicit, experiential knowledge.
3️⃣ Complex real-world scenarios.

We’re only scratching the surface. 🛠️
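
To make "semantic similarity" concrete, here's a minimal sketch of retrieval by similarity alone. TF-IDF stands in for the neural embeddings a real RAG stack would use, and the knowledge-base articles are made up:

```python
# Minimal sketch: retrieval by similarity alone. TF-IDF stands in for the
# neural embeddings a real RAG stack would use; the KB articles are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb = [
    "How to request a hotel refund",
    "Fees for international card payments",
    "What to do if a transaction is declined",
]
query = "My card is broken"  # misleading input: the card is fine, a payment was declined

vec = TfidfVectorizer().fit(kb + [query])
scores = cosine_similarity(vec.transform([query]), vec.transform(kb))[0]
print(kb[scores.argmax()])  # likely the card-fees article, on surface overlap with "card"
```

Nothing in that pipeline ever questions whether the stated problem is the real one; it just ranks articles by overlap.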
January 7, 2025 at 11:55 AM
🔄 MULTI-TURN conversations = the real world

Support isn’t a single Q&A. It’s multi-turn, flowing through phases:
1️⃣ Understand true intent.
2️⃣ Clarify missing details.
3️⃣ Provide answers & adapt.

RAG struggles here—it wasn’t designed for these dynamic, evolving exchanges.
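
One hedged sketch of what tracking those phases explicitly could look like. Every intent, slot, and helper below is invented for illustration:

```python
# Hedged sketch: the three phases as explicit dialogue state. Every intent,
# slot, and helper below is invented for illustration.
from enum import Enum, auto

class Phase(Enum):
    UNDERSTAND = auto()  # 1. work out what the customer actually wants
    CLARIFY = auto()     # 2. fill in missing details
    RESOLVE = auto()     # 3. answer and adapt

# Toy stand-ins; a real agent would use an LLM or a trained classifier here.
def extract_intent(msg: str) -> str:
    return "refund_inquiry" if "refund" in msg.lower() else "unknown"

def missing_details(state: dict) -> list[str]:
    return [] if "booking_id" in state else ["your booking reference"]

def advance(phase: Phase, state: dict, msg: str) -> tuple[Phase, str]:
    if phase is Phase.UNDERSTAND:
        state["intent"] = extract_intent(msg)
        return Phase.CLARIFY, "Got it, let me check a couple of details."
    if phase is Phase.CLARIFY:
        gaps = missing_details(state)
        if gaps:
            return Phase.CLARIFY, f"Could you share {gaps[0]}?"
        return Phase.RESOLVE, "Thanks, that's everything I need."
    return phase, f"Here's what I found about your {state['intent']}."

state: dict = {}
phase = Phase.UNDERSTAND
phase, reply = advance(phase, state, "Where is my refund?")
print(reply)  # Got it, let me check a couple of details.
phase, reply = advance(phase, state, "It was booked last week")
print(reply)  # Could you share your booking reference?
```

The point isn't this particular state machine; it's that the conversation's phase is tracked at all, which a single retrieve-then-answer pass never does.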
January 7, 2025 at 11:55 AM
🌀 Complex scenarios require reasoning

Imagine this: “I paid £1000 for a hotel, £300 was to be refunded. I added £100 for a restaurant bill but got only £150 back.”

A human sees the issue at once: a £300 refund minus the £100 restaurant charge means £200 was due, yet only £150 arrived, £50 short. A RAG agent? Likely lost in irrelevant details.
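
Spelled out, the mental arithmetic looks like this (amounts from the example above; a trivial sketch, not a real agent):

```python
# Toy reconstruction of the reasoning a human does implicitly.
refund_due = 300        # £ agreed hotel refund
restaurant_bill = 100   # £ later added to the bill
received = 150          # £ actually paid back

expected = refund_due - restaurant_bill   # 300 - 100 = 200
shortfall = expected - received           # 200 - 150 = 50
print(f"Expected £{expected}, received £{received}: £{shortfall} missing")
```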
January 7, 2025 at 11:55 AM
🕵️‍♀️ Human intuition is invaluable
Human agents lean on years of experience, spotting patterns no knowledge base could fully document, e.g. “symptom X often leads to outcome Y.”
RAG lacks this shortcut and struggles with troubleshooting. The knowledge base can’t keep up with EVERYTHING!
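
A toy sketch of how such experiential shortcuts could be made explicit and consulted before any knowledge-base lookup. The patterns and causes here are invented:

```python
# Toy sketch: tacit "symptom -> likely cause" shortcuts made explicit and
# consulted before any knowledge-base lookup. Patterns and causes invented.
HEURISTICS = {
    "declined abroad": "travel flag not set on the account",
    "duplicate charge": "pending authorization, usually drops off in a few days",
}

def triage(symptom: str) -> str | None:
    """Return an experiential guess, or None to fall back to retrieval."""
    for pattern, likely_cause in HEURISTICS.items():
        if pattern in symptom.lower():
            return likely_cause
    return None

print(triage("I see a duplicate charge on my statement"))
```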
January 7, 2025 at 11:55 AM
🧠 Taking things @ face value = trouble
A customer might say “Card is broken” when the transaction was declined, or “Somebody took my $” when they forgot about a charge. Humans detect these subtleties. RAG systems? They respond LITERALLY, which can lead to irrelevant or even harmful replies.
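
One possible mitigation, sketched very loosely: before retrieving anything, ask a model to hypothesize what's really going on. This shows only the prompt construction; the model call itself is omitted:

```python
# Loose sketch: before retrieving anything, ask a model what is *really*
# going on. Prompt construction only; the model call itself is omitted.
def reframe_prompt(complaint: str) -> str:
    return (
        f"A customer says: {complaint!r}\n"
        "List the most likely underlying issues (e.g. a declined "
        "transaction, a forgotten subscription charge), then the one "
        "clarifying question to ask before answering."
    )

print(reframe_prompt("Card is broken"))
```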
January 7, 2025 at 11:55 AM
🤔 Understanding intent matters
Humans don’t just answer questions, they identify WHAT'S MISSING from a customer query. If a user asks “Why can’t I pay?”, a human agent instinctively knows to ask for clarification. RAG agents? They rely on semantic similarity ALONE and miss the nuance.
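
A minimal sketch of detecting what's missing, assuming a hand-written slot schema per intent (all names here are made up):

```python
# Minimal sketch, assuming a hand-written slot schema per intent: surface
# WHAT'S MISSING from the query instead of matching it to the nearest article.
REQUIRED_SLOTS = {
    "payment_failure": ["payment method", "exact error message", "merchant"],
}

def clarifying_questions(intent: str, known: dict) -> list[str]:
    missing = [s for s in REQUIRED_SLOTS.get(intent, []) if s not in known]
    return [f"Can you tell me the {s}?" for s in missing]

# "Why can't I pay?" carries the intent but none of the details:
print(clarifying_questions("payment_failure", known={}))
```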
January 7, 2025 at 11:55 AM