Levels of AI Agents in Customer Support
At supportunicorn.com, we see the landscape of customer support AI not as a single solution, but as a hierarchy of levels.
One level isn’t necessarily "better" than another. The right level for your business depends on three factors:
- Control: How much oversight you need during the handoff to a human.
- Security: How your data is accessed and stored.
- Testability: How easily you can predict the output.
Here is how we categorize the current state of Support AI.
L1: No AI (The Baseline)
This is traditional support. All requests are routed directly to a human agent.
- Pros: You have 100% control over the interaction.
- Cons: No automation or scalability. This is essentially the pre-AI era standard.
L2: Basic Prompting
At this level, you use an LLM with a system prompt (instructions) to handle basic interactions. You can set a tone of voice or create a "persona" to gather initial information.
- The Limitation: It relies entirely on the training data of the model. If the model’s knowledge cutoff was last year, it won't know about your updates today. It creates a conversational interface but lacks company-specific intelligence.
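As a rough sketch of what L2 looks like in practice: a persona in the system prompt wrapped around a single chat call, and nothing else. The OpenAI client, model name, and persona text below are illustrative choices, not a prescription.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# At L2, the system prompt is the entire "brain" -- no company data is attached.
SYSTEM_PROMPT = (
    "You are a friendly support assistant for an online store. "
    "Greet the customer, collect their order number and email address, "
    "and keep every reply under three sentences."
)

def reply(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(reply("Hi, my package hasn't arrived yet."))
```

Whatever this bot says about shipping times or return policies comes straight from the model's training data, which is exactly the limitation described above.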
L3: Prompt with Context (The Context Window)
Here, you inject specific company data (like a PDF or a text document) directly into the prompt. The AI now has context about what your company does and how it works.
- The Limitation: You are limited by the model's context window. If you stuff too much data in, the LLM gets confused (the "lost in the middle" phenomenon). You can only add as much data as the window allows.
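A minimal sketch of L3, assuming a single company_faq.txt file: the document is pasted into the system prompt verbatim, and a crude size check stands in for a proper token count.

```python
from openai import OpenAI

client = OpenAI()

# Load the company knowledge document and inject it into the prompt verbatim.
with open("company_faq.txt", encoding="utf-8") as f:
    company_context = f.read()

# Crude guardrail: the whole document has to fit in the context window.
# (A real implementation would count tokens with a tokenizer, not characters.)
MAX_CONTEXT_CHARS = 100_000
if len(company_context) > MAX_CONTEXT_CHARS:
    raise ValueError("Document too large for a single prompt -- time for L4 (RAG).")

def reply(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Answer using only the company information below.\n\n"
                + company_context,
            },
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```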
L4: Prompt with Large Data (RAG)
When your data exceeds the context window, you move to Level 4. We implement Retrieval-Augmented Generation (RAG).
- How it works: When a user asks a question, we first search your database to find the specific chunk of text that contains the answer. We only send that chunk to the AI.
- Optimization: The first result isn't always right. To ensure accuracy, we often use re-ranking models to verify which data chunk is actually the "source of truth" before generating an answer (sketched below).
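Here is the retrieve-then-generate loop in its simplest form. The three hard-coded chunks, the embedding model, and the plain cosine-similarity search are placeholders; in production the chunks live in a vector database and a re-ranking model scores the top candidates before one is chosen.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

# Placeholder knowledge base -- in production these chunks live in a vector database.
chunks = [
    "Returns are accepted within 30 days of delivery with the original receipt.",
    "Standard shipping takes 3-5 business days.",
    "Premium support is available 24/7 for Enterprise customers.",
]

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

chunk_vectors = embed(chunks)  # computed once, ahead of time

def answer(question: str) -> str:
    # 1. Retrieve: find the chunk closest to the question (cosine similarity).
    q = embed([question])[0]
    scores = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    best_chunk = chunks[int(scores.argmax())]
    # A re-ranking model would go here, re-scoring the top-k candidates.

    # 2. Generate: send only the retrieved chunk, never the whole knowledge base.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only this excerpt:\n" + best_chunk},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```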
L5: Dynamic Data (Tool Use)
Static knowledge (L4) isn't enough for questions like "Where is my order?"
- The Shift: This level introduces Tools. The agent connects to your backend or third-party APIs (like Shopify or Salesforce).
- The Result: The AI can fetch real-time, user-specific data and relay it back to the customer instantly. It moves from "knowing" to "checking" (see the sketch below).
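A stripped-down sketch of tool use with OpenAI-style function calling. The get_order_status function is a hypothetical stand-in for a real Shopify or Salesforce lookup; the tool schema and model name are illustrative.

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical backend call -- in practice this would hit Shopify, Salesforce, etc.
def get_order_status(order_id: str) -> dict:
    return {"order_id": order_id, "status": "shipped", "eta": "2 days"}

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the live status of a customer order.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

def reply(user_message: str) -> str:
    messages = [
        {"role": "system", "content": "You are a support agent. Use tools for live order data."},
        {"role": "user", "content": user_message},
    ]
    first = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=TOOLS)
    msg = first.choices[0].message

    if msg.tool_calls:  # the model chose to "check" instead of guessing
        call = msg.tool_calls[0]
        result = get_order_status(**json.loads(call.function.arguments))
        messages += [
            msg,
            {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)},
        ]
        second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return second.choices[0].message.content

    return msg.content

print(reply("Where is my order #1042?"))
```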
L6: Workflows (The Agentic Flow)
Sometimes, you need the AI to follow a Standard Operating Procedure (SOP), not just chat. This is where Workflows come in.
- The Mechanism: This combines the probabilistic nature of LLMs with deterministic logic (Graph-based modules).
- Example: A return request comes in, so the flow checks the delivery date. If it is more than 30 days old, reject the return; otherwise, generate a return label (a minimal sketch follows below).
- Warning: Because LLMs are involved, even "deterministic" flows need heavy testing.
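Here is that return-request SOP as a plain-Python sketch. The LLM's share of the work (pulling the order and delivery date out of the customer's message) is stubbed out so the deterministic branch can be tested on its own; graph frameworks such as LangGraph formalize the same idea.

```python
from datetime import date, timedelta

RETURN_WINDOW = timedelta(days=30)

# Step 1 (probabilistic): in the real flow, an LLM extracts these fields from the
# customer's message. Stubbed here so the deterministic part is testable in isolation.
def extract_return_request(message: str) -> dict:
    return {"order_id": "1042", "delivery_date": date(2024, 5, 2)}

# Step 2 (deterministic): the SOP branch -- no model involved, so the outcome is predictable.
def decide(delivery_date: date, today: date) -> str:
    if today - delivery_date > RETURN_WINDOW:
        return "reject"
    return "generate_label"

# Step 3: each outcome maps to a fixed action.
def handle_return(message: str, today: date) -> str:
    request = extract_return_request(message)
    if decide(request["delivery_date"], today) == "reject":
        return f"Order {request['order_id']} is outside the 30-day return window."
    return f"Return label created for order {request['order_id']}."

print(handle_return("I want to return my headphones.", today=date(2024, 6, 15)))
```

The branch itself is trivial to test; the part that needs the heavy testing mentioned above is the extraction step, because the model can misread a date or an order number.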
Level N: The Holy Grail Agent
This is the theoretical future where an autonomous agent handles all customer support without oversight.
My take: We aren't there yet. I strongly believe that not every customer issue can be solved by a machine. There is a nuance to human emotion and complex edge cases that still require a "human in the loop" or a seamless co-pilot approach.
At supportunicorn.com, we build for reality—optimizing L1 through L6 while keeping the human handoff seamless.