I have some (potentially basic) questions about memory, but I’m having trouble finding clear answers in the documentation or threads.
I’m not sure if anyone has already looked into these topics.
What I understand so far:
Short-term memory & entity memory: Allow agents to maintain context, share their results, and collaborate within a single crew execution.
Long-term memory: Stores various results from the crew or agents over time and retrieves them for improvement purposes.
Maintaining context across multiple executions
It is mentioned that combining these different memory types enables the crew to maintain context within a single execution as well as across multiple executions (essentially within a conversation). However, when I queried the CrewAI codebase through Cursor's chatbot about memory, it said that the primary goal of these memory systems is not to maintain a conversational history or contextual memory between the user and the crew.
Given this, would it be advisable to complement CrewAI's memory system with a separate conversation-history mechanism (for example, passing the last 30 messages between the user and the assistant to the crew) in order to guarantee a consistent conversational context for each user, with a minimal context window?
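To make the "last 30 messages" idea concrete, here is a minimal sketch of a rolling conversation window maintained outside CrewAI. The class name and methods are illustrative, not part of any library; the flattened string would be injected into a task prompt yourself (e.g. as one of the `inputs` you pass to `kickoff`):

```python
from collections import deque

class ConversationWindow:
    """Rolling window of the last N user/assistant messages (illustrative)."""

    def __init__(self, max_messages=30):
        # deque with maxlen drops the oldest message automatically
        self.messages = deque(maxlen=max_messages)

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})

    def as_context(self):
        # Flatten the window into a string suitable for a task prompt
        return "\n".join(f"{m['role']}: {m['content']}" for m in self.messages)

window = ConversationWindow(max_messages=30)
window.add("user", "I am looking for a primary residence in France.")
window.add("assistant", "Noted. Which region do you prefer?")
print(window.as_context())
```

Because `deque(maxlen=30)` evicts old entries on its own, the context window stays bounded without any extra pruning logic.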
Memory isolation in multi-user environments
In a multi-user setup, are CrewAI’s memory systems isolated per user?
For instance, if:
User A states that they are looking for a primary residence in France, and
User B states that they are looking for a secondary residence in Spain,
and both requests happen at the same time or close to each other, could there be a risk of context mixing between A and B?
Would this lead to execution issues, where Crew A and Crew B could mix their context and task results, causing incorrect answers to be given to users?
Is it relevant to implement a filtering system based on user_id or session_id when handling CrewAI memory?
Alternatively, would it be best to run one execution per instance, ensuring that there is only one active crew per user at a time, and resetting short-term and entity memory after each execution? (For example, several Cloud Run instances, each handling one execution at a time.)
This way, STM and entity memory would only persist for a single execution per user, reducing the risk of cross-contamination.
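The reset-per-execution idea can be sketched like this. This is a hypothetical session-scoped store, not CrewAI's actual memory implementation: each crew run operates under its own session_id, and the session's entries are dropped when the run finishes, so nothing persists across users or executions:

```python
class SessionMemory:
    """Illustrative per-session short-term store, isolated by session_id."""

    def __init__(self):
        self._stores = {}  # session_id -> list of stored entries

    def save(self, session_id, entry):
        self._stores.setdefault(session_id, []).append(entry)

    def search(self, session_id, keyword):
        # Only this session's entries are visible -- no cross-user retrieval
        return [e for e in self._stores.get(session_id, []) if keyword in e]

    def reset(self, session_id):
        # Called after the crew finishes, mirroring a per-execution STM reset
        self._stores.pop(session_id, None)

mem = SessionMemory()
mem.save("user_a", "primary residence in France")
mem.save("user_b", "secondary residence in Spain")
assert mem.search("user_a", "France") == ["primary residence in France"]
assert mem.search("user_a", "Spain") == []  # User B's context is invisible to A
mem.reset("user_a")
assert mem.search("user_a", "France") == []  # nothing survives the execution
```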
The built-in memory system is not the best fit for conversational chat-type applications. You probably need two additional types of memory, like you mentioned:
Chat history - the conversation history between your user and the chatbot, saved to persistent storage/a database. Like you said, you would pass e.g. the last 30 messages.
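A minimal sketch of that persistent chat history using the standard-library sqlite3 module (table name and helper functions are illustrative, they are not part of CrewAI):

```python
import sqlite3

def init_db(conn):
    # One row per message, keyed by user_id for per-user isolation
    conn.execute(
        "CREATE TABLE IF NOT EXISTS chat_history ("
        "id INTEGER PRIMARY KEY AUTOINCREMENT, "
        "user_id TEXT, role TEXT, content TEXT)"
    )

def save_message(conn, user_id, role, content):
    conn.execute(
        "INSERT INTO chat_history (user_id, role, content) VALUES (?, ?, ?)",
        (user_id, role, content),
    )

def last_messages(conn, user_id, n=30):
    # Fetch the newest n rows for this user, then restore chronological order
    rows = conn.execute(
        "SELECT role, content FROM chat_history "
        "WHERE user_id = ? ORDER BY id DESC LIMIT ?",
        (user_id, n),
    ).fetchall()
    return list(reversed(rows))

conn = sqlite3.connect(":memory:")  # use a file path in production
init_db(conn)
save_message(conn, "user_a", "user", "Looking for a house in France")
save_message(conn, "user_a", "assistant", "Which region?")
history = last_messages(conn, "user_a", n=30)
```

Because every read is filtered by `user_id`, two users hitting the service concurrently can never see each other's history.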
Thanks for your answer and the blog posts. I had the opportunity to test mem0 thanks to this.
Okay, I understand that adding additional memory sources can be useful for providing more context to the crew about its conversation with a user.
Do you have any information on how CrewAI memories (short-term, long-term, entity) are isolated per user?
If multiple users interact with the crew at the same time, how can we ensure that there are no conflicts in retrieving information? (For example, the crew might use the result of a task performed for User 1 as context while processing a task for User 2).
Right now, it seems that there is no filtering by user or session when the crew retrieves memory elements. The inputs and results from all users appear to be mixed together.
Should we implement filtering by user_id or session_id to avoid conflicts between different users' contexts and information?
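The filtering idea can be sketched as attaching metadata to every stored entry and making retrieval always filter on user_id. This is a hypothetical wrapper, not CrewAI's API; a real implementation would combine this metadata filter with semantic search over embeddings:

```python
class FilteredMemory:
    """Illustrative store where retrieval always filters on user_id."""

    def __init__(self):
        self._entries = []

    def save(self, text, metadata):
        # Every entry carries metadata identifying which user it belongs to
        self._entries.append({"text": text, "metadata": metadata})

    def search(self, query, user_id):
        # The metadata filter alone demonstrates the isolation idea;
        # substring matching stands in for semantic similarity here
        return [
            e["text"]
            for e in self._entries
            if e["metadata"].get("user_id") == user_id and query in e["text"]
        ]

mem = FilteredMemory()
mem.save("wants primary residence in France", {"user_id": "A"})
mem.save("wants secondary residence in Spain", {"user_id": "B"})
assert mem.search("residence", user_id="A") == ["wants primary residence in France"]
```

Without the `user_id` condition in `search`, the query for User A would also return User B's entry, which is exactly the cross-contamination described above.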
Have you got any solution for user management with the short-term memory built into the CrewAI platform?
Or are you using a custom solution for the session-management logic, where on every crew call you retrieve each user's data from the database and pass it to the agent as an input?
I am working on this as well; if you can share your research or ideas, it would be helpful.