Tutorial 2: Research Assistant

Build a research assistant that remembers your interests and synthesizes knowledge.

Build an assistant that:

  • Remembers your research interests
  • Learns what sources you find useful
  • Improves recommendations over time

The Core Idea

A stateless research tool treats every query as new. With Rice, your assistant builds a profile of your interests and remembers what worked.

Memory Type        What It Does
Working Memory     Current research session notes
Episodic Memory    Past research queries and useful findings
Semantic Memory    Facts and knowledge you've collected
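The client calls used throughout this tutorial map onto these memory types as follows (a summary of the sections below, not the full API):

```python
# Which tutorial call reads/writes which memory type.
# (Semantic Memory has no dedicated call in this tutorial; save_fact()
# later approximates it with a tagged Working Memory note.)
MEMORY_MAP = {
    "Working Memory":  {"write": "focus",  "read": "drift"},
    "Episodic Memory": {"write": "commit", "read": "reminisce"},
}

for memory, ops in MEMORY_MAP.items():
    print(f"{memory}: write with {ops['write']}(), read with {ops['read']}()")
```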

Setup

mkdir research-assistant && cd research-assistant
pip install git+https://github.com/rice-ai-hq/slate.git#subdirectory=clients/python openai python-dotenv

Create a .env file:

SLATE_ADDRESS=grpc.your-instance-id.slate.tryrice.com:80
SLATE_TOKEN=your-token
OPENAI_API_KEY=your-key

The Assistant

# assistant.py
import os
import json
from dotenv import load_dotenv
from openai import OpenAI
from slate_client import CortexClient

load_dotenv()

slate = CortexClient(
    address=os.getenv("SLATE_ADDRESS"),
    token=os.getenv("SLATE_TOKEN"),
    run_id="research-assistant"  # Session isolation
)
openai = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))


def research(query: str) -> str:
    # 1. Find past related research
    past_research = slate.reminisce(query, limit=3).traces
    
    # 2. Get current session notes
    session = slate.drift().items
    
    # 3. Build context-aware prompt
    prompt = build_prompt(query, past_research, session)
    
    # 4. Generate synthesis
    response = openai.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    synthesis = response.choices[0].message.content
    
    # 5. Save to current session
    slate.focus(json.dumps({
        "type": "research_note",
        "query": query,
        "synthesis": synthesis[:200]  # Store summary
    }))
    
    # 6. Learn from this research
    slate.commit(
        query,  # input
        synthesis[:500],  # outcome
        action="research",
        reasoning="User researched this topic"
    )
    
    return synthesis


def build_prompt(query: str, past_research: list, session: list) -> str:
    prompt = "You are a research assistant.\n\n"
    
    # Add past related research
    if past_research:
        prompt += "RELATED PAST RESEARCH:\n"
        for item in past_research[:3]:
            prompt += f"- Query: {item.input}\n  Finding: {item.outcome[:200]}...\n\n"
    
    # Add current session context (focus() stored JSON strings, so parse them back)
    notes = []
    for s in session:
        try:
            data = json.loads(s) if isinstance(s, str) else s
        except json.JSONDecodeError:
            continue
        if isinstance(data, dict) and data.get("type") == "research_note":
            notes.append(data)
    if notes:
        prompt += "CURRENT SESSION NOTES:\n"
        for note in notes[-5:]:
            prompt += f"- {note.get('query')}: {note.get('synthesis', '')[:100]}...\n"
        prompt += "\n"
    
    prompt += f"NEW QUERY: {query}\n\n"
    prompt += "Provide a helpful synthesis. Reference past research if relevant."
    
    return prompt


# Example usage
if __name__ == "__main__":
    # First query
    print("Query: What is hyperdimensional computing?")
    result = research("What is hyperdimensional computing?")
    print(f"Response: {result[:300]}...\n")
    
    # Follow-up (assistant now has context)
    print("Query: How does it compare to neural networks?")
    result = research("How does it compare to neural networks?")
    print(f"Response: {result[:300]}...")

What's Happening

Finding Past Research

past_research = slate.reminisce(query, limit=3).traces

When you ask about "hyperdimensional computing", Episodic Memory searches for related past queries. If you researched vector databases last week, that context appears.

Session Continuity

session = slate.drift().items

Working Memory keeps your current session notes. Follow-up questions have context from earlier in the session.

Learning

slate.commit(query, synthesis, action="research", reasoning="...")

Every query is saved. Your assistant learns your interests over time.


Building User Profiles

The assistant naturally builds a profile of your interests:

def get_user_interests() -> list:
    # Search for all past research
    all_research = slate.reminisce("research topics interests", limit=10).traces
    
    # Collect the topics of past queries
    topics = [r.input for r in all_research]
    return topics

After a few sessions, reminisce() returns your most common research areas first.
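To surface recurring themes rather than raw query strings, you could tally keywords across past inputs. This is a plain-Python sketch; the tokenization and stopword list are illustrative, not part of the Slate API:

```python
from collections import Counter

def top_themes(queries: list, n: int = 3) -> list:
    """Tally keyword frequency across past research queries."""
    stopwords = {"what", "is", "how", "do", "does", "the", "a", "an", "to", "of"}
    words = []
    for q in queries:
        words += [w.strip("?.,").lower() for w in q.split()]
    counts = Counter(w for w in words if w and w not in stopwords)
    return [word for word, _ in counts.most_common(n)]

history = [
    "What is quantum computing?",
    "How do qubits work?",
    "Quantum error correction",
]
print(top_themes(history))  # "quantum" ranks first
```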


Adding Knowledge Storage

Store facts using Semantic Memory:

def save_fact(topic: str, fact: str):
    """Save a permanent fact for future reference"""
    # Placeholder: this would use the Semantic Memory API;
    # for now we tag a Working Memory note as permanent
    slate.focus(json.dumps({
        "type": "fact",
        "topic": topic,
        "content": fact,
        "permanent": True
    }))


def research_with_facts(query: str) -> str:
    # Get stored facts on this topic (notes were stored as JSON strings)
    items = slate.drift().items
    facts = []
    for item in items:
        data = json.loads(item) if isinstance(item, str) else item
        if isinstance(data, dict) and data.get("type") == "fact":
            facts.append(data)
    
    # Include facts in prompt
    # ...
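To finish research_with_facts, the filtered facts need to be rendered into the prompt. A hypothetical helper (the section format and field names are illustrative) might look like:

```python
def format_facts(facts: list) -> str:
    """Render stored fact dicts as a prompt section."""
    if not facts:
        return ""
    lines = ["KNOWN FACTS:"]
    for f in facts:
        lines.append(f"- {f.get('topic')}: {f.get('content')}")
    return "\n".join(lines) + "\n\n"

facts = [{"type": "fact", "topic": "HDC",
          "content": "Represents symbols as high-dimensional vectors"}]
print(format_facts(facts))
```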

Multi-Session Example

Session 1

research("What is quantum computing?")
research("How do qubits work?")
# Both saved to Episodic Memory

Session 2 (next day)

research("Quantum error correction")
# Past research on quantum topics is retrieved
# Assistant connects to previous learning

Session 3 (next week)

research("Compare quantum and classical computing")
# Full context from all previous sessions

Because every session reuses the same run_id, the assistant carries its memory forward across all of them.


Summary

The research loop:

  1. Reminisce: Find related past research
  2. Drift: Get current session notes
  3. Generate: LLM synthesizes with full context
  4. Focus: Save to current session
  5. Commit: Learn for future sessions

Your assistant gets smarter with every query.
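To see just the order of operations, the loop can be exercised with stand-ins for the client and the LLM. Everything below is a stub for illustration; only the call order mirrors the tutorial:

```python
class StubMemory:
    """Minimal stand-in for the Slate client (illustration only)."""
    def __init__(self):
        self.session, self.traces = [], []
    def reminisce(self, query, limit=3):
        return self.traces[:limit]            # 1. related past research
    def drift(self):
        return self.session                   # 2. current session notes
    def focus(self, note):
        self.session.append(note)             # 4. save to session
    def commit(self, query, outcome):
        self.traces.append((query, outcome))  # 5. learn for later

memory = StubMemory()
query = "What is quantum computing?"
past = memory.reminisce(query)                # empty on the first query
notes = memory.drift()
synthesis = f"Synthesis of: {query}"          # 3. the LLM call goes here
memory.focus(synthesis)
memory.commit(query, synthesis)
print(len(memory.session), len(memory.traces))  # 1 1
```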