Memory That Thinks
A graph + vector hybrid memory system for AI agents. Memories decay like human recall, form semantic relationships, and get smarter over time, not just bigger.
AI Memory Is Broken
Current approaches force a choice between simplicity and intelligence. Vex Memory gives you both.
Flat Files Don't Scale
JSON dumps and markdown logs grow linearly. No relationships, no decay, no way to find what matters when you have 10,000+ memories.
Vector Search Alone Misses Context
Embedding similarity finds related text, but can't traverse relationships. "What caused this decision?" requires graph traversal, not cosine distance.
Everything Stays Forever
Human memory forgets unimportant things. Most AI systems don't. Without temporal decay, noise drowns out signal as context windows fill.
Built on Proven Infrastructure
One PostgreSQL instance. Three powerful extensions. No external dependencies.
Everything Your Agent Needs to Remember
Not just storage: a cognitive memory layer that models how humans actually recall information.
Graph Relationships
Apache AGE property graph connects memories with typed edges such as CAUSED_BY, SUPPORTS, and CONTRADICTS. Traverse 2+ hops to find context that vector search misses.
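To make the traversal concrete, here is a minimal sketch of a 2-hop "what caused this?" query issued through Apache AGE's cypher() function. The graph name (vex), node label (Memory), and connection settings are illustrative assumptions, not Vex internals.

```python
# Sketch: 2-hop causal traversal via Apache AGE's cypher() function.
# Graph name, node label, and connection details are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=vex user=vex host=localhost")
cur = conn.cursor()
cur.execute("LOAD 'age';")
cur.execute('SET search_path = ag_catalog, "$user", public;')

cur.execute("""
    SELECT * FROM cypher('vex', $$
        MATCH (m:Memory {id: 42})-[:CAUSED_BY*1..2]->(cause:Memory)
        RETURN cause.content
    $$) AS (content agtype);
""")
for (content,) in cur.fetchall():
    print(content)
```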
Semantic Vector Search
pgvector embeddings (384-dim) enable similarity search across your entire memory store. Find relevant context even when exact words don't match.
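A sketch of what the underlying lookup can look like with pgvector's cosine-distance operator (`<=>`), assuming a memories table with a 384-dim embedding column; the table and column names are assumptions.

```python
# Sketch: nearest-neighbor lookup over a pgvector column.
# Table/column names (memories, embedding) are assumptions.
import psycopg2

def nearest_memories(conn, query_vec, k=5):
    vec = "[" + ",".join(f"{x:.6f}" for x in query_vec) + "]"   # pgvector text format
    cur = conn.cursor()
    cur.execute(
        """
        SELECT id, content, 1 - (embedding <=> %s::vector) AS similarity
        FROM memories
        ORDER BY embedding <=> %s::vector
        LIMIT %s;
        """,
        (vec, vec, k),
    )
    return cur.fetchall()
```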
Temporal Memory Decay
Ebbinghaus-inspired forgetting curves with 30-day half-life. Frequently accessed memories resist decay. Importance adjusts automatically.
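The curve itself is simple. A sketch, assuming decay is computed at read time and that access count feeds a small logarithmic boost; the exact weighting Vex uses may differ.

```python
# Sketch: Ebbinghaus-style decay with a 30-day half-life.
# The access boost is an illustrative assumption, not Vex's exact formula.
import math
from datetime import datetime, timezone

HALF_LIFE_DAYS = 30.0

def decayed_importance(base_importance, last_accessed, access_count):
    age_days = (datetime.now(timezone.utc) - last_accessed).total_seconds() / 86400
    retention = 0.5 ** (age_days / HALF_LIFE_DAYS)      # forgetting curve: halves every 30 days
    boost = 1 + 0.1 * math.log1p(access_count)          # frequently accessed memories resist decay
    return min(1.0, base_importance * retention * boost)
```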
Auto Entity Extraction
NLP pipeline (spaCy NER + pattern matching) extracts decisions, events, facts, and learnings from raw text with no LLM call needed.
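A sketch of this kind of extraction with spaCy, pairing built-in NER with a rule-based Matcher for decision-like statements; the model name and patterns are illustrative assumptions.

```python
# Sketch: spaCy NER plus a rule-based matcher for decision-like sentences.
# Model name and patterns are assumptions; the real pipeline's rules may differ.
import spacy
from spacy.matcher import Matcher

nlp = spacy.load("en_core_web_sm")
matcher = Matcher(nlp.vocab)
matcher.add("DECISION", [[{"LEMMA": {"IN": ["decide", "choose", "switch"]}}]])

def extract(text):
    doc = nlp(text)
    entities = [(ent.text, ent.label_) for ent in doc.ents]            # people, orgs, dates, ...
    decisions = [doc[start:end].sent.text for _, start, end in matcher(doc)]
    return {"entities": entities, "decisions": decisions}
```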
Sleep-Cycle Consolidation
Clusters semantically similar memories, creates summaries, and lowers importance of originals. Like your brain during sleep.
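A sketch of the clustering half of that pass, assuming agglomerative clustering over embeddings with a cosine-distance cutoff; the threshold and method are assumptions, not Vex's actual consolidation logic.

```python
# Sketch: group near-duplicate memories by embedding distance so each cluster
# can be summarized and its originals down-weighted. Threshold is an assumption.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def consolidation_candidates(memories, embeddings, distance_threshold=0.3):
    labels = AgglomerativeClustering(
        n_clusters=None,
        metric="cosine",
        linkage="average",
        distance_threshold=distance_threshold,
    ).fit_predict(np.asarray(embeddings))
    clusters = {}
    for memory, label in zip(memories, labels):
        clusters.setdefault(label, []).append(memory)
    # Multi-member clusters become candidates for a summary memory;
    # their originals then get reduced importance.
    return [group for group in clusters.values() if len(group) > 1]
```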
REST API (FastAPI)
Full CRUD, semantic query, context endpoints for agent integration, graph traversal, timeline queries, and feedback loops, all over HTTP.
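A hedged example of driving the API from Python with requests. The endpoint paths and payload fields below are assumptions for illustration; the real schema lives at localhost:8000/docs.

```python
# Sketch: calling the HTTP API. Endpoint paths and fields are hypothetical.
import requests

BASE = "http://localhost:8000"

# Store a memory (hypothetical endpoint and payload)
requests.post(f"{BASE}/memories", json={
    "content": "Decided to move the backend to PostgreSQL for AGE + pgvector.",
    "tags": ["decision", "backend"],
}).raise_for_status()

# Semantic query (hypothetical endpoint)
hits = requests.get(f"{BASE}/memories/search",
                    params={"q": "why did we choose PostgreSQL?"}).json()
```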
Docker One-Command Deploy
git clone, docker compose up, done. PostgreSQL, AGE, pgvector, and the API all configured. Production-ready in 60 seconds.
Built-in Dashboard
Real-time web UI showing memory stats, types, emotions, entity graphs, and recent activity. No extra setup needed.
How It Works
Three steps from raw text to intelligent recall.
Store
Memories go in with context
Send raw text or structured data to the API. Vex automatically generates embeddings, extracts entities (people, decisions, events), tags emotions, and assigns importance scores.
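A sketch of the record that step produces, assuming a common 384-dim sentence-transformers encoder (all-MiniLM-L6-v2) and a toy importance heuristic; neither is confirmed to be what Vex ships.

```python
# Sketch: building one memory record on ingest.
# Encoder choice and importance heuristic are assumptions.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")    # a common 384-dim encoder

def ingest(text, entities):
    """Build the record that gets written to PostgreSQL for one memory."""
    embedding = encoder.encode(text).tolist()               # vector column for pgvector
    importance = min(1.0, 0.3 + 0.1 * len(entities))        # toy heuristic, not Vex's scorer
    return {"content": text, "embedding": embedding,
            "entities": entities, "importance": importance}
```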
Connect
Entities form a knowledge graph
Extracted entities are linked in Apache AGE. "PostgreSQL" connects to "backend" via PART_OF. Similar memories auto-link when cosine similarity exceeds 0.7. You can traverse relationships to find causal chains.
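The auto-link rule reduces to a cosine threshold check, sketched below as a plain similarity test; edge creation itself happens in the graph layer.

```python
# Sketch: decide whether two memories should auto-link (cosine > 0.7).
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_auto_link(emb_a, emb_b, threshold=0.7):
    return cosine(emb_a, emb_b) > threshold
```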
Recall
Intelligent retrieval when you need it
Query with natural language. Vex combines vector similarity, graph traversal, temporal relevance, and importance weighting to return exactly the context your agent needs, not just similar text.
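One way to blend those four signals into a single ranking score; the weights below are illustrative assumptions, not Vex's actual values.

```python
# Sketch: blended recall score from similarity, graph proximity,
# importance, and recency. Weights are assumptions.
def recall_score(similarity, graph_hops, importance, age_days,
                 half_life=30.0, weights=(0.5, 0.2, 0.2, 0.1)):
    w_sim, w_graph, w_imp, w_time = weights
    graph_score = 1.0 / (1 + graph_hops)        # fewer hops from the query's entities scores higher
    recency = 0.5 ** (age_days / half_life)     # same 30-day decay curve as storage
    return (w_sim * similarity + w_graph * graph_score
            + w_imp * importance + w_time * recency)
```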
Running in 60 Seconds
Three commands. That's it.
✓ FastAPI server .................. ready
✓ Dashboard ....................... ready
Dashboard available at localhost:8000/dashboard • API docs at localhost:8000/docs
Built for Performance
Real numbers from a real system. Currently running with 191 memories and 1,250+ indexed entities.
Open Source. MIT Licensed. Forever.
Vex Memory is free to use, modify, and distribute. Built in public, maintained by developers who believe AI memory should be a shared primitive, not a proprietary lock-in.