# Smart Memory Guide
Smart Memory is Aegis’s intelligent extraction layer that automatically determines what’s worth remembering from conversations. Instead of storing everything (noise) or requiring manual decisions (burden), Smart Memory uses a two-stage process to extract and store only valuable information.

## Quick Start
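A minimal sketch of getting started. The import path and the `process` method are assumptions; `sensitivity` and `get_stats()` are referenced elsewhere in this guide.

```python
from aegis import SmartMemory  # hypothetical import path

# Create a smart memory; see Configuration below for sensitivity levels.
memory = SmartMemory(sensitivity="high")

# Feed conversation turns; the two-stage extractor decides what to keep.
# process() is an assumed method name for ingesting a message.
memory.process("I prefer dark mode, and my budget is $5000.")

# Inspect what the extractor has been doing.
print(memory.get_stats())
```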
## How It Works
Smart Memory uses a two-stage process to avoid expensive LLM calls while maintaining quality: a cheap first-stage filter screens each message for signs of memorable content, and only messages that pass are sent to the LLM for extraction. In practice only around 30% of messages ever reach the LLM (see the cost comparison below).
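Conceptually, the pipeline looks like the sketch below. The helper names and patterns are illustrative, not Aegis’s actual internals.

```python
import re

# Stage 1: cheap pattern-based filter (illustrative patterns only).
SIGNAL_PATTERNS = [
    r"\bI (prefer|like|hate|want|decided)\b",
    r"\bmy (name|budget|goal|stack) is\b",
]

def looks_memorable(message: str) -> bool:
    """Return True if the message matches any cheap signal pattern."""
    return any(re.search(p, message, re.IGNORECASE) for p in SIGNAL_PATTERNS)

def smart_extract(message, llm_extract):
    """Stage 2: call the expensive LLM extractor only when stage 1 passes."""
    if not looks_memorable(message):
        return None  # most messages stop here, costing nothing
    return llm_extract(message)  # only ~30% of messages reach the LLM
```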
### Cost Comparison

| Approach | LLM Calls | Cost | Quality |
|---|---|---|---|
| Store everything | 0% | Low | Poor (noisy) |
| LLM for everything | 100% | High | Good |
| Two-stage (Smart) | ~30% | Low | Good |
## Use Cases

- **Conversational**: chat and personal assistants
- **Coding**: development agents and coding tasks
- **Task**: focused task agents
- **Support**: customer support conversations
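These use cases presumably map to a constructor argument; a hypothetical sketch in which the `use_case` parameter name is an assumption:

```python
from aegis import SmartMemory  # hypothetical import path

# Pick the preset that matches your domain; don't use "conversational"
# for coding tasks (see Best Practices).
memory = SmartMemory(use_case="coding", sensitivity="low")
```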
## Configuration
### Sensitivity Levels
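Sensitivity controls how aggressively the first-stage filter forwards messages to the LLM: higher sensitivity means more LLM calls and more stored memories. A minimal sketch; the `"low"` and `"high"` values appear in Troubleshooting below, while any default level is an assumption.

```python
from aegis import SmartMemory  # hypothetical import path

# High sensitivity: personal assistants, where missing a detail is costly.
assistant_memory = SmartMemory(sensitivity="high")

# Low sensitivity: task agents, where stored noise is worse than a miss.
task_memory = SmartMemory(sensitivity="low")
```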
### LLM Providers
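The extraction stage needs an LLM. The model names below come from the Troubleshooting section; the `provider` and `model` parameter names are assumptions, so check the API reference for the actual spelling.

```python
from aegis import SmartMemory  # hypothetical import path

# Cheaper models keep extraction costs down (see Troubleshooting).
openai_memory = SmartMemory(provider="openai", model="gpt-4o-mini")
anthropic_memory = SmartMemory(provider="anthropic", model="claude-3-haiku")
```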
## SmartAgent (Full Auto)
For the simplest experience, use `SmartAgent`, which handles everything:
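A minimal sketch, assuming `SmartAgent` bundles the model and the smart memory together; the constructor arguments and the `chat` method are assumptions.

```python
from aegis import SmartAgent  # hypothetical import path

# Every turn is automatically run through the two-stage extractor.
agent = SmartAgent(model="gpt-4o-mini")

reply = agent.chat("I'm building a chatbot with a $5000 budget.")
# Behind the scenes, "goal" and "constraint" memories may now be stored.
```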
## What Gets Stored
### Categories
| Category | Description | Example |
|---|---|---|
| `preference` | Likes, dislikes, style | “User prefers dark mode” |
| `fact` | Personal information | “User is a developer in Chennai” |
| `decision` | Choices made | “User decided to use React” |
| `constraint` | Limits and requirements | “Budget is $5000” |
| `goal` | What the user wants | “User wants to build a chatbot” |
| `strategy` | What worked | “Using async improved performance” |
| `mistake` | What didn’t work | “Don’t use `range()` for large pagination” |
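Stored items carry their category, which is useful when querying; a hypothetical sketch in which the `search` method, `category` attribute, and `text` attribute are all assumptions:

```python
from aegis import SmartMemory  # hypothetical import path

memory = SmartMemory()

# Hypothetical retrieval API: fetch only the user's stated constraints.
for item in memory.search("project requirements"):
    if item.category == "constraint":
        print(item.text)  # e.g. "Budget is $5000"
```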
## Best Practices

1. **Choose the Right Use Case.** Match the use case to your domain; don’t use “conversational” for coding tasks.
2. **Use Appropriate Sensitivity.** Use high sensitivity for personal assistants and low sensitivity for task agents.
3. **Monitor Extraction Stats.** Check `memory.get_stats()` periodically to see how many messages are being filtered versus extracted (a sketch follows this list).
4. **Combine with Explicit Storage.** Use Smart Memory for conversations and explicit storage for known-important information.
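A short sketch of practice 3, assuming `get_stats()` returns a dict of counters (the key names here are assumptions):

```python
from aegis import SmartMemory  # hypothetical import path

memory = SmartMemory(sensitivity="high")
# ... after some conversation traffic ...

stats = memory.get_stats()  # get_stats() is referenced in Troubleshooting
# Assumed keys; check the actual return shape in the API reference.
print(f"filtered: {stats.get('filtered', 0)}, "
      f"extracted: {stats.get('extracted', 0)}")
```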
## Troubleshooting

### Nothing is being extracted

- Check sensitivity: `memory = SmartMemory(sensitivity="high", ...)`
- Use `force_extract=True` to bypass the filter
- Check stats: `print(memory.get_stats())`

### Too much noise being stored

- Lower sensitivity: `sensitivity="low"`
- Use a more specific use case
- Create custom filter patterns

### LLM costs too high

- Use cheaper models: `gpt-4o-mini` or `claude-3-haiku`
- Lower sensitivity to reduce LLM calls
- Use `auto_store=False` for custom storage logic
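For the last point, a sketch of custom storage logic, assuming `auto_store=False` makes extraction return candidate memories instead of writing them automatically; the `extract` method, the item shape, and the `store` method are all assumptions.

```python
from aegis import SmartMemory  # hypothetical import path

memory = SmartMemory(auto_store=False)

# Hypothetical: with auto_store off, extraction returns candidates
# and the caller decides what actually gets persisted.
candidates = memory.extract("I decided to use React; budget is $5000.")
for item in candidates or []:
    if item.category in ("decision", "constraint"):
        memory.store(item)  # store() is an assumed explicit-write method
```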