Tutorials

Tutorial 1: Customer Support Agent

Build a customer support agent that learns from past tickets.

Build a support agent that:

  • Remembers conversation context
  • Learns from past resolutions
  • Improves over time

The Core Idea

A stateless chatbot forgets everything between messages. With Rice, your agent maintains context and learns from every interaction.

Memory Type     | What It Does
--------------- | ------------------------------------
Working Memory  | Holds the current conversation
Episodic Memory | Stores past tickets and resolutions

Setup

mkdir support-agent && cd support-agent
npm init -y
npm install git+https://github.com/rice-ai-hq/slate.git#subdirectory=clients/node openai dotenv

Create a .env file:

SLATE_ADDRESS=grpc.your-instance-id.slate.tryrice.com:80
SLATE_TOKEN=your-token
OPENAI_API_KEY=your-key

The Agent

Here's the complete agent in one file:

// agent.ts
import "dotenv/config";
import OpenAI from "openai";
import { CortexClient } from "slate-client";

const slate = new CortexClient(
  process.env.SLATE_ADDRESS!,
  process.env.SLATE_TOKEN!,
  "support-agent"  // run_id for isolation
);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function handleMessage(customerId: string, message: string) {
  // 1. Store the incoming message
  await slate.focus(JSON.stringify({
    type: "message",
    role: "customer",
    customerId,
    content: message,
    time: new Date().toISOString()
  }));

  // 2. Find similar past issues
  const pastIssues = await slate.reminisce(message, 3);

  // 3. Get current conversation context
  const context = await slate.drift();

  // 4. Build prompt with memory
  const prompt = buildPrompt(message, context, pastIssues);

  // 5. Get LLM response
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }]
  });

  const response = completion.choices[0].message.content!;

  // 6. Store agent response
  await slate.focus(JSON.stringify({
    type: "message",
    role: "agent",
    customerId,
    content: response,
    time: new Date().toISOString()
  }));

  // 7. Learn from this interaction
  await slate.commit(message, response, {
    action: "support_response",
    reasoning: "Handled customer query"
  });

  return response;
}

function buildPrompt(message: string, context: any[], pastIssues: any[]) {
  let prompt = `You are a helpful support agent.\n\n`;

  // Add past similar issues
  if (pastIssues.length > 0) {
    prompt += `SIMILAR PAST ISSUES:\n`;
    pastIssues.slice(0, 3).forEach((issue, i) => {
      prompt += `${i + 1}. Customer: ${issue.input}\n   Response: ${issue.outcome}\n\n`;
    });
  }

  // Add conversation history
  const messages = context.filter(c => c.type === "message");
  if (messages.length > 0) {
    prompt += `CONVERSATION:\n`;
    messages.forEach(m => {
      prompt += `${m.role.toUpperCase()}: ${m.content}\n`;
    });
  }

  prompt += `\nCURRENT MESSAGE: ${message}\n\nRespond helpfully:`;

  return prompt;
}

// Test it
async function main() {
  console.log("Customer: I was charged twice this month!");
  const response = await handleMessage("cust_123", "I was charged twice this month!");
  console.log("Agent:", response);
}

main().catch(console.error);

What's Happening

Step 1-2: Remember and Recall

// Store current message
await slate.focus(JSON.stringify({ type: "message", content: message }));

// Find similar past issues
const pastIssues = await slate.reminisce(message, 3);

When a customer says "I was charged twice", Episodic Memory searches for similar past complaints. If you resolved this issue before, that resolution appears in pastIssues.

Step 3: Get Context

const context = await slate.drift();

Working Memory returns the current conversation, sorted by relevance. Recent messages appear first.
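Long-running conversations can return more context than comfortably fits in a prompt. A minimal sketch of a history cap, assuming `drift()` returns parsed entries shaped like the objects stored via `focus()` above (`{ type, role, content, time }`) — adjust the field names if your client returns raw JSON strings:

```typescript
interface MemoryEntry {
  type: string;
  role?: string;
  content?: string;
  time?: string; // ISO timestamp, as written by focus() above
}

// Keep only the most recent `limit` messages, oldest first,
// so the prompt stays bounded as the conversation grows.
function recentMessages(context: MemoryEntry[], limit = 10): MemoryEntry[] {
  return context
    .filter((c) => c.type === "message")
    .sort((a, b) => (a.time ?? "").localeCompare(b.time ?? ""))
    .slice(-limit);
}
```

You could drop this into `buildPrompt` in place of the plain `filter` to bound prompt size.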

Step 4-5: Generate Response

The prompt includes:

  • Past similar issues (few-shot examples)
  • Current conversation history
  • The new message

The LLM has full context to generate a good response.

Step 6-7: Learn

await slate.commit(message, response, { action: "support_response", reasoning: "..." });

This interaction is now stored in Episodic Memory. Next time a customer has a similar issue, this resolution will be retrieved.


The Learning Loop

Each interaction makes the agent smarter:

  1. Customer asks about billing issue
  2. Agent searches past billing resolutions (initially empty)
  3. Agent responds based on instructions
  4. Interaction is saved to Episodic Memory
  5. Next billing issue retrieves this resolution
  6. Agent uses past success as a template

After 100 billing issues, your agent has 100 examples of what worked.
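The loop above can be sketched with a toy in-memory stand-in for Episodic Memory. Slate performs retrieval server-side with semantic search; here simple word overlap fakes "similarity", and the function names (`commitEpisode`, `reminisceEpisodes`) are illustrative, not part of the Slate API:

```typescript
type Episode = { input: string; outcome: string };

const episodes: Episode[] = [];

// Step 4 of the loop: save the interaction.
function commitEpisode(input: string, outcome: string): void {
  episodes.push({ input, outcome });
}

// Steps 2 and 5 of the loop: retrieve the k closest past interactions.
// Word overlap stands in for real semantic search.
function reminisceEpisodes(query: string, k: number): Episode[] {
  const words = new Set(query.toLowerCase().split(/\W+/));
  return episodes
    .map((e) => ({
      e,
      score: e.input.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length,
    }))
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.e);
}
```

The first billing question retrieves nothing; once its resolution is committed, the next similar question retrieves it as a template — the same mechanics Slate runs at scale.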


Adding Refund Logic

For deterministic operations like refunds, don't rely on the LLM:

function calculateRefund(daysSincePurchase: number, amount: number): { eligible: boolean; refund: number } {
  if (daysSincePurchase <= 30) {
    return { eligible: true, refund: amount };
  } else if (daysSincePurchase <= 60) {
    return { eligible: true, refund: amount * 0.5 };
  }
  return { eligible: false, refund: 0 };
}

// In your handler (a naive keyword trigger -- use proper intent
// detection in production):
if (message.toLowerCase().includes("refund")) {
  const result = calculateRefund(15, 99.99); // days since purchase, amount
  if (result.eligible) {
    // Process refund
  }
}
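A policy like this is easy to verify with a quick table-driven check covering all three tiers (the function is repeated here so the snippet runs standalone):

```typescript
function calculateRefund(daysSincePurchase: number, amount: number): { eligible: boolean; refund: number } {
  if (daysSincePurchase <= 30) return { eligible: true, refund: amount };       // full refund window
  if (daysSincePurchase <= 60) return { eligible: true, refund: amount * 0.5 }; // half refund window
  return { eligible: false, refund: 0 };                                        // past 60 days
}

// One case per tier: [days, amount, expected eligible, expected refund]
const cases: Array<[number, number, boolean, number]> = [
  [15, 99.99, true, 99.99], // within 30 days: full refund
  [45, 100, true, 50],      // 31-60 days: half refund
  [90, 100, false, 0],      // past 60 days: no refund
];
for (const [days, amount, eligible, refund] of cases) {
  const r = calculateRefund(days, amount);
  console.assert(r.eligible === eligible && r.refund === refund);
}
```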

Testing

async function test() {
  // First interaction
  await handleMessage("cust_1", "I was charged twice!");
  
  // Later, different customer, similar issue
  await handleMessage("cust_2", "Double charge on my account");
  // Agent now has the first resolution as context
}

Summary

The agent loop is simple:

  1. Focus: Store the incoming message
  2. Reminisce: Find similar past issues
  3. Drift: Get conversation context
  4. Generate: LLM creates response with full context
  5. Commit: Save interaction for future learning

That's a learning support agent in under 100 lines.

Next: Research Assistant - Build an agent that synthesizes information across sessions.