When AI applications like ChatGPT first appeared, end users interacted with foundation models primarily through prompting. Over the past year, we’ve seen the emergence of AI agents: self-sufficient entities capable of planning, reasoning, and carrying out complex, multi-step tasks on users’ behalf. AI agents are increasingly serving as the primary interface for end users and are evolving into a fundamental framework for developers. The rise of AI agents not only accelerates the creation of new applications but also reimagines what is possible to build with AI.

Interest in AI agents is rising, as the charts below underscore. Meeting this future demand will require specialized agentic infrastructure.

AI memory is a crucial component of AI infrastructure for creating dependable and personalized AI agents. Conventional AI systems often lack strong episodic memory and continuity across separate interactions, resulting in a form of amnesia. This limited, short-term memory impedes complex sequential reasoning and knowledge sharing in multi-agent systems.

As we transition to multi-agent systems, it is vital to establish a robust memory management system across different agents that enforces access and privacy controls. Each agent's memories should be stored and retrievable both during and between sessions. More sophisticated memory mechanisms, such as sharing memories among agents, will eventually become necessary. This approach enhances an agent's decision-making capabilities by allowing one agent to learn from the experiences of others. 
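To make the idea concrete, here is a minimal sketch of a shared memory store with per-agent access control. This is plain illustrative Python, not Letta's actual API; every name below is an assumption for the example:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Toy multi-agent memory store: each memory belongs to an owner agent
    and can optionally be shared with specific other agents."""
    _memories: list = field(default_factory=list)

    def write(self, owner: str, text: str, shared_with=frozenset()):
        self._memories.append(
            {"owner": owner, "text": text, "shared_with": set(shared_with)}
        )

    def read(self, agent: str) -> list:
        # Access control: an agent sees only its own memories plus those
        # another agent explicitly shared with it.
        return [
            m["text"] for m in self._memories
            if m["owner"] == agent or agent in m["shared_with"]
        ]

store = MemoryStore()
store.write("planner", "User prefers morning meetings", shared_with={"scheduler"})
store.write("scheduler", "Calendar API rate limit is 100 req/min")
store.write("planner", "Draft plan v2 approved")
```

Here the scheduler agent benefits from the planner's experience ("User prefers morning meetings") without gaining access to the planner's other memories.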

Building AI memory systems is hard for two reasons: 1) memory representation needs to be model agnostic, and there are many types of memory beyond conversational memory (knowledge graphs, entities, multi-modal, etc.); and 2) we are still early in generalizing LLMs beyond chat completion.
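As a rough illustration of the first point, a model-agnostic representation might tag each memory with its type and serialize everything to plain text before it reaches any particular model. The type names and fields below are hypothetical, not drawn from any real framework:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    kind: str      # e.g. "conversational", "entity", "graph_edge"
    content: dict  # payload whose schema depends on the kind

def render_for_prompt(memories: list) -> str:
    """Serialize heterogeneous memory types into plain text so that any
    chat-completion model can consume them; the stored representation
    itself stays independent of the model."""
    lines = []
    for m in memories:
        if m.kind == "conversational":
            lines.append(f'{m.content["role"]}: {m.content["text"]}')
        elif m.kind == "entity":
            lines.append(f'entity {m.content["name"]}: {m.content["facts"]}')
        elif m.kind == "graph_edge":
            lines.append(f'{m.content["src"]} --{m.content["rel"]}--> {m.content["dst"]}')
    return "\n".join(lines)

mems = [
    Memory("conversational", {"role": "user", "text": "hi"}),
    Memory("entity", {"name": "Ada", "facts": "likes tea"}),
    Memory("graph_edge", {"src": "Ada", "rel": "works_at", "dst": "Acme"}),
]
prompt_text = render_for_prompt(mems)
```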

Enter Letta, which focuses on unlocking the next generation of AI through advanced memory systems and is led by the research team that created the popular MemGPT open source project. Before MemGPT, most AI agents were stateless: they could not retain their memory (state) across user sessions. The MemGPT research paper introduced the idea of self-editing memory for LLMs, enabling an LLM to update its own memory so it can learn, adapt, and become more personalized over time. Since its release, the MemGPT open source project has earned 13K+ GitHub stars. Download it here.
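The core idea of self-editing memory can be sketched in a few lines: the agent's prompt includes editable memory blocks, and the model can invoke a tool call to rewrite them. This is a simplified illustration, not the MemGPT implementation; the tool name `core_memory_replace` echoes MemGPT's, but everything else here is assumed for the example:

```python
class SelfEditingAgent:
    """Sketch of self-editing memory: the context window contains memory
    blocks that the LLM itself can rewrite via a tool call."""

    def __init__(self):
        # Editable memory blocks that are rendered into every prompt.
        self.core_memory = {"human": "Name: unknown", "persona": "Helpful assistant"}

    def core_memory_replace(self, block: str, old: str, new: str):
        # Tool the LLM can call to edit its own memory, so the change
        # persists into every future prompt.
        self.core_memory[block] = self.core_memory[block].replace(old, new)

    def build_prompt(self, user_message: str) -> str:
        blocks = "\n".join(f"<{k}>{v}</{k}>" for k, v in self.core_memory.items())
        return f"{blocks}\n\nUser: {user_message}"

agent = SelfEditingAgent()
# Suppose the model, upon reading "my name is Ada", emits a tool call like:
agent.core_memory_replace("human", "Name: unknown", "Name: Ada")
```

After the tool call, every subsequent prompt built by `build_prompt` carries the updated fact, which is how the agent "remembers" across turns.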

Letta co-founders Charles Packer and Sarah Wooders met during their PhD research at UC Berkeley’s Sky Lab, where they shared the same advisors, Joseph Gonzalez and Ion Stoica. The team plans to offer Letta Cloud, where developers can build and deploy agents with advanced memory systems. Letta Cloud includes a hosted agent service, which allows developers to deploy and run stateful agents in the cloud, accessible via REST APIs. Letta Cloud also provides an “Agent Development Environment” (or “ADE”) for agent builders to develop and debug agents by directly viewing and editing both the agent’s prompts and its memory. This is enabled by Letta’s “white-box memory” approach: unlike many existing agent frameworks, Letta makes the exact prompts and memories passed to the LLM at each reasoning step transparent to the developer.

Letta's Agent Development Environment

Today, we are honored to announce that Felicis led Letta’s $10M seed round. We are joined by Sunflower Capital, Essence VC, Jeff Dean (Chief Scientist at Google DeepMind), Clem Delangue (CEO of HuggingFace), Cristobal Valenzuela (CEO of Runway), Jordan Tigani (CEO of MotherDuck), Tristan Handy (CEO of dbt Labs), Robert Nishihara (co-founder of Anyscale), and Barry McCardel (CEO of Hex), among others. We are incredibly excited to have partnered with Charles and Sarah before they finished their PhDs, and to support the entire Letta team in providing AI agent infrastructure with memory.

Join the Letta community! You can follow Letta on X and join their Discord. They are actively hiring, so check out opportunities here!