Build a support agent that:
- remembers the current conversation
- recalls similar past tickets and how they were resolved
- learns from every interaction
A stateless chatbot forgets everything between messages. With Rice, your agent maintains context and learns from every interaction.
| Memory Type | What It Does |
|---|---|
| Working Memory | Holds the current conversation |
| Episodic Memory | Stores past tickets and resolutions |
```bash
mkdir support-agent && cd support-agent
npm init -y
npm install git+https://github.com/rice-ai-hq/slate.git#subdirectory=clients/node openai dotenv
```
Create a .env file:
```
SLATE_ADDRESS=grpc.your-instance-id.slate.tryrice.com:80
SLATE_TOKEN=your-token
OPENAI_API_KEY=your-key
```
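It helps to fail fast if any of these variables is missing, rather than passing `undefined` into the client constructor. A minimal sketch (the `requireEnv` helper is ours, not part of Slate):

```typescript
// Throw a descriptive error instead of silently constructing
// a client with an undefined address or token.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Example with a placeholder value (your real address comes from .env):
process.env.SLATE_ADDRESS = "grpc.example.slate.tryrice.com:80";
const address = requireEnv("SLATE_ADDRESS");
```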
Here's the complete agent in one file:
```typescript
// agent.ts
import "dotenv/config";
import OpenAI from "openai";
import { CortexClient } from "slate-client";

const slate = new CortexClient(
  process.env.SLATE_ADDRESS!,
  process.env.SLATE_TOKEN!,
  "support-agent" // run_id for isolation
);

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function handleMessage(customerId: string, message: string) {
  // 1. Store the incoming message
  await slate.focus(JSON.stringify({
    type: "message",
    role: "customer",
    customerId,
    content: message,
    time: new Date().toISOString()
  }));

  // 2. Find similar past issues
  const pastIssues = await slate.reminisce(message, 3);

  // 3. Get current conversation context
  const context = await slate.drift();

  // 4. Build prompt with memory
  const prompt = buildPrompt(message, context, pastIssues);

  // 5. Get LLM response
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [{ role: "user", content: prompt }]
  });
  const response = completion.choices[0].message.content!;

  // 6. Store agent response
  await slate.focus(JSON.stringify({
    type: "message",
    role: "agent",
    customerId,
    content: response,
    time: new Date().toISOString()
  }));

  // 7. Learn from this interaction
  await slate.commit(message, response, {
    action: "support_response",
    reasoning: "Handled customer query"
  });

  return response;
}

function buildPrompt(message: string, context: any[], pastIssues: any[]) {
  let prompt = `You are a helpful support agent.\n\n`;

  // Add past similar issues
  if (pastIssues.length > 0) {
    prompt += `SIMILAR PAST ISSUES:\n`;
    pastIssues.slice(0, 3).forEach((issue, i) => {
      prompt += `${i + 1}. Customer: ${issue.input}\n   Response: ${issue.outcome}\n\n`;
    });
  }

  // Add conversation history
  const messages = context.filter(c => c.type === "message");
  if (messages.length > 0) {
    prompt += `CONVERSATION:\n`;
    messages.forEach(m => {
      prompt += `${m.role.toUpperCase()}: ${m.content}\n`;
    });
  }

  prompt += `\nCURRENT MESSAGE: ${message}\n\nRespond helpfully:`;
  return prompt;
}

// Test it
async function main() {
  console.log("Customer: I was charged twice this month!");
  const response = await handleMessage("cust_123", "I was charged twice this month!");
  console.log("Agent:", response);
}

main().catch(console.error);
```
Steps 1-2: Remember and Recall

```typescript
// Store current message
await slate.focus(JSON.stringify({ type: "message", content: message }));

// Find similar past issues
const pastIssues = await slate.reminisce(message, 3);
```
When a customer says "I was charged twice", Episodic Memory searches for similar past complaints. If you resolved this issue before, that resolution appears in `pastIssues`.
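The exact return shape comes from Slate; based on how `buildPrompt` reads it, each recalled episode carries the original `input` and the `outcome` that resolved it. A self-contained sketch with invented records standing in for real `reminisce` results:

```typescript
// Hypothetical recalled episodes, mirroring the `input`/`outcome`
// fields that buildPrompt reads. Real records come from slate.reminisce.
const pastIssues = [
  { input: "I was charged twice!", outcome: "Refunded the duplicate charge." },
  { input: "Double charge on my card", outcome: "Escalated to billing." },
];

// Format them the same way the agent's prompt does.
const section = pastIssues
  .map((issue, i) => `${i + 1}. Customer: ${issue.input}\n   Response: ${issue.outcome}`)
  .join("\n");
```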
Step 3: Get Context
```typescript
const context = await slate.drift();
```
Working Memory returns the current conversation, sorted by relevance. Recent messages appear first.
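Working Memory can hold records of any type you've focused, so the agent filters for messages before building the prompt. A sketch with mock drift output (the record shapes below are illustrative, based on what this tutorial stores with `focus`):

```typescript
// Mock of what drift() might return: mixed record types.
// The real shape depends on what you stored with focus().
const context = [
  { type: "message", role: "customer", content: "I was charged twice this month!" },
  { type: "note", content: "customer is on the Pro plan" },
  { type: "message", role: "agent", content: "Let me look into that for you." },
];

// Keep only conversation messages, as buildPrompt does.
const messages = context.filter((c) => c.type === "message");
```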
Steps 4-5: Generate Response

The prompt includes:
- similar past issues, with the responses that resolved them
- the conversation so far
- the current message

With all three sections in view, the LLM has full context to generate a good response.
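Concretely, for the duplicate-charge example the assembled prompt looks roughly like this (the recalled issue here is invented for illustration):

```
You are a helpful support agent.

SIMILAR PAST ISSUES:
1. Customer: I was billed two times for one order
   Response: Refunded the duplicate charge and confirmed by email.

CONVERSATION:
CUSTOMER: I was charged twice this month!

CURRENT MESSAGE: I was charged twice this month!

Respond helpfully:
```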
Steps 6-7: Learn

```typescript
await slate.commit(message, response, { action: "support_response", reasoning: "..." });
```
This interaction is now stored in Episodic Memory. Next time a customer has a similar issue, this resolution will be retrieved.
Each interaction makes the agent smarter: commit stores the input/outcome pair, and reminisce surfaces it the next time a similar message arrives. After 100 billing issues, your agent has 100 examples of what worked.
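To see why this compounds, here is a toy simulation of the store-then-recall loop. This is not Slate's implementation (its retrieval is semantic, not keyword-based); an in-memory array with naive word-overlap scoring stands in for commit and reminisce:

```typescript
// Toy episodic store: commit() appends, reminisce() ranks by
// naive keyword overlap with the query.
type Episode = { input: string; outcome: string };
const episodes: Episode[] = [];

function commit(input: string, outcome: string) {
  episodes.push({ input, outcome });
}

function reminisce(query: string, k: number): Episode[] {
  const words = new Set(query.toLowerCase().split(/\W+/));
  return episodes
    .map((e) => ({
      e,
      score: e.input.toLowerCase().split(/\W+/).filter((w) => words.has(w)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((s) => s.e);
}

// The first customer's resolution becomes context for the next similar query.
commit("I was charged twice this month!", "Refunded the duplicate charge.");
commit("How do I reset my password?", "Sent a password reset link.");
const recalled = reminisce("Why was I charged twice?", 1);
```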
For deterministic operations like refunds, don't rely on the LLM:
```typescript
function calculateRefund(daysSincePurchase: number, amount: number): { eligible: boolean; refund: number } {
  if (daysSincePurchase <= 30) {
    return { eligible: true, refund: amount };
  } else if (daysSincePurchase <= 60) {
    return { eligible: true, refund: amount * 0.5 };
  }
  return { eligible: false, refund: 0 };
}

// In your handler:
if (message.includes("refund")) {
  const result = calculateRefund(15, 99.99);
  if (result.eligible) {
    // Process refund
  }
}
```
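Because the policy is pure arithmetic, its tiers are easy to verify directly. A quick check of the boundaries (the 30/60-day thresholds come from the function above):

```typescript
// Same policy as above: full refund within 30 days,
// half refund within 60, nothing after that.
function calculateRefund(daysSincePurchase: number, amount: number): { eligible: boolean; refund: number } {
  if (daysSincePurchase <= 30) return { eligible: true, refund: amount };
  if (daysSincePurchase <= 60) return { eligible: true, refund: amount * 0.5 };
  return { eligible: false, refund: 0 };
}

const withinMonth = calculateRefund(30, 100); // { eligible: true, refund: 100 }
const secondMonth = calculateRefund(31, 100); // { eligible: true, refund: 50 }
const tooLate = calculateRefund(61, 100);     // { eligible: false, refund: 0 }
```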
```typescript
async function test() {
  // First interaction
  await handleMessage("cust_1", "I was charged twice!");

  // Later, different customer, similar issue
  await handleMessage("cust_2", "Double charge on my account");
  // Agent now has the first resolution as context
}
```
The agent loop is simple: store the incoming message, recall similar past episodes, pull the current conversation, prompt the LLM, store the reply, and commit what was learned. That's a learning support agent in under 100 lines.