Introduction to Papr Memory
One of the biggest hurdles in building truly intelligent AI agents is memory loss. Most LLMs process information in a transient context window, losing all context as soon as the conversation ends. Papr Memory aims to solve this by providing a robust, open-source predictive memory layer designed specifically for AI agents.
Unlike simple vector stores that only retrieve similar chunks of text, Papr Memory employs a hybrid architecture. It combines MongoDB for document storage, Qdrant for vector embeddings, and Neo4j for graph relationships. This allows agents to not only remember facts but also understand the connections between them, enabling far more sophisticated retrieval and reasoning capabilities.
Key Features
Papr Memory stands out by offering a multi-layered approach to data persistence:
- Hybrid Storage Architecture: Seamless integration of document (MongoDB), vector (Qdrant), and graph (Neo4j) databases to capture the full nuance of information.
- Graph Relationships: Automatically discovers and tracks connections between different memories, allowing agents to traverse related concepts rather than just matching keywords.
- Multi-Modal Support: Capable of storing and retrieving text, documents, code snippets, and structured data.
- Fine-Grained Access Control: Built-in user management and Access Control Lists (ACLs) ensure that memory spaces remain private and secure.
- Semantic Search: Powered by modern embedding models (OpenAI, Groq, Deep Infra) to find relevant memories using natural language queries.
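To make the hybrid idea behind these features concrete, here is a minimal sketch of graph-aware retrieval. This is an illustrative example, not Papr Memory's actual implementation: the function names, the blending weight `alpha`, and the graph bonus are all invented for the sketch. The point is that a memory's final score can combine embedding similarity with its connections to other relevant memories.

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def hybrid_score(query_vec, memory, neighbor_ids, alpha=0.8):
    """Blend vector similarity with a graph bonus: memories linked to
    already-retrieved hits get boosted instead of relying on similarity alone."""
    sim = cosine_similarity(query_vec, memory["embedding"])
    graph_bonus = 1.0 if memory["id"] in neighbor_ids else 0.0
    return alpha * sim + (1 - alpha) * graph_bonus

# Toy memories with 2-d embeddings; m2 is graph-linked to a prior strong hit
memories = [
    {"id": "m1", "embedding": [1.0, 0.0]},
    {"id": "m2", "embedding": [0.0, 1.0]},
]
neighbors = {"m2"}
scores = {m["id"]: hybrid_score([1.0, 0.0], m, neighbors) for m in memories}
```

With these toy vectors, `m1` wins on pure similarity while `m2` still receives a nonzero score purely from its graph connection, which is the behavior that keyword or vector-only stores cannot produce.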
Installation Guide
The recommended way to deploy Papr Memory is via Docker, which orchestrates the various database services automatically.
Prerequisites
Ensure you have Docker and Docker Compose installed, along with API keys for your preferred LLM provider (e.g., OpenAI, Groq).
Docker Setup
Run the following commands to clone the repository and start the services:
```shell
# 1. Clone the repository
git clone https://github.com/Papr-ai/memory-opensource.git
cd memory-opensource

# 2. Configure environment
cp .env.example .env.opensource
# Open .env.opensource and add your OpenAI/Groq API keys

# 3. Start services
docker-compose up -d
```
Once running, the API will be accessible at http://localhost:5001.
How to Use Papr Memory
You interact with the memory layer primarily through its REST API. Below are examples of adding and retrieving memories.
Adding a Memory
To store information, send a POST request to the memory endpoint. The system will automatically generate embeddings and graph connections.
```shell
curl -X POST http://localhost:5001/v1/memory \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-generated-key" \
  -d '{
    "content": "The user prefers Python for backend development.",
    "type": "text",
    "metadata": {
      "category": "coding_preferences"
    }
  }'
```
Searching Memories
To retrieve context based on a natural language query:
```shell
curl -X POST http://localhost:5001/v1/memory/search \
  -H "Content-Type: application/json" \
  -H "X-API-Key: your-generated-key" \
  -d '{
    "query": "What programming languages does the user like?",
    "max_memories": 5
  }'
```
Contribution Guide
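The same two calls can be wrapped in a small Python helper using only the standard library. This is a sketch, not an official client: the helper names are invented here, and only the URLs, headers, and payload fields shown in the curl examples above are taken from the source.

```python
import json
import urllib.request

BASE_URL = "http://localhost:5001"  # default port from the Docker setup

def build_request(path, api_key, payload):
    """Build an urllib Request mirroring the curl examples above."""
    return urllib.request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )

def add_memory(api_key, content, metadata=None):
    payload = {"content": content, "type": "text", "metadata": metadata or {}}
    return build_request("/v1/memory", api_key, payload)

def search_memories(api_key, query, max_memories=5):
    payload = {"query": query, "max_memories": max_memories}
    return build_request("/v1/memory/search", api_key, payload)

# Requests are only built here; send one with urllib.request.urlopen(req)
# once the Docker services are running.
req = search_memories("your-generated-key", "What languages does the user like?")
```

Separating request construction from sending makes the helpers easy to unit-test without a live server, which is why the sketch stops short of calling `urlopen`.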
Papr Memory is an active open-source project licensed under AGPL-3.0. Contributions are welcome, particularly in the areas of additional vector store connectors and enhanced graph schema definitions.
How to Contribute
- Fork the Repo: Create a copy of the project on your GitHub account.
- Check Issues: Look for “good first issue” tags in the issue tracker.
- Submit PR: Push your changes and open a Pull Request for review.
Community & Support
Because Papr Memory is a developer tool, support is handled primarily through technical channels:
- GitHub Issues: Report bugs or request features directly on the repository.
- Documentation: The repo includes detailed markdown guides for configuration and architecture.
Conclusion
Papr Memory represents a significant step forward in AI architecture. By combining the strengths of graph databases with vector search, it offers a “predictive” memory layer that feels much more organic than standard retrieval methods. For developers building complex agents that need to remember users over weeks or months, this open-source solution provides the necessary infrastructure to make that a reality.
Useful Resources
- GitHub Repository: Source code and quickstart guide.
- Official Website: Product overview and cloud offerings.
- FastAPI Documentation: The framework powering the core API.
Frequently Asked Questions
What is the difference between the open-source and cloud versions?
The open-source version allows you to self-host the entire infrastructure, giving you full control over data and configuration. The cloud version, managed by Papr AI, offers additional features like managed infrastructure, automatic backups, enterprise SSO, SLA guarantees, and advanced analytics that are not included in the self-hosted repo.
Which databases does Papr Memory require?
Papr Memory uses a specific stack to function effectively: MongoDB for document storage, Neo4j for graph relationships, Qdrant for vector embeddings, and Redis for caching. All of these are orchestrated automatically if you use the provided Docker Compose setup.
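For orientation, the four-service stack might look roughly like the following Compose fragment. This is an illustrative sketch only: the repository ships its own docker-compose.yml, and the image tags and service names here are assumptions, not the project's actual file.

```yaml
# Hypothetical sketch - consult the repo's docker-compose.yml for the real file.
services:
  mongodb:
    image: mongo:7          # document storage
  qdrant:
    image: qdrant/qdrant    # vector embeddings
  neo4j:
    image: neo4j:5          # graph relationships
  redis:
    image: redis:7          # caching
  api:
    build: .
    ports:
      - "5001:5001"         # REST API port from the install guide
    env_file: .env.opensource
    depends_on: [mongodb, qdrant, neo4j, redis]
```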
Is it compatible with local LLMs?
Yes. While the default configuration often points to providers like OpenAI or Groq for ease of use, the architecture is designed to support various embedding and inference models. The maintainers have indicated that local support (e.g., Qwen on-device) is a priority for future updates.
What license does the project use?
The project is released under the AGPL-3.0 License. This is a strong copyleft license, meaning if you modify the code and run it as a service over a network, you must make your source code available to users of that service.
