Hey all.
I have a chat app, and I’m using a crew to respond to user inquiries in the chat.
I would like the crew to provide answers that rely on context from previous questions.
My expectation is that the Memory feature can help with that.
I configured my crew in the following way:
import os

from crewai import Crew
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from crewai.memory.storage.rag_storage import RAGStorage

# Shared Gemini embedder config, reused for crew-level and RAG memory storage.
gemini_embedder = {
    "provider": "google",
    "config": {
        "api_key": os.getenv("GEMINI_API_KEY"),
        "model": "models/text-embedding-004",
    },
}

# Per-user memory location (user_id, crew_agents, and crew_tasks are defined elsewhere).
memory_path = "./memorydb/dynamic_crew_%s" % user_id

dynamic_crew = Crew(
    agents=crew_agents,
    tasks=crew_tasks,
    verbose=True,
    memory=True,
    embedder=gemini_embedder,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="%s/long_term_memory_storage.db" % memory_path
        )
    ),
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(
            type="short_term",
            embedder_config=gemini_embedder,
            path="%s/" % memory_path,
        )
    ),
    entity_memory=EntityMemory(
        storage=RAGStorage(
            type="entities",  # was "short_term" — entity memory gets its own collection
            embedder_config=gemini_embedder,
            path="%s/" % memory_path,
        )
    ),
)
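For what it’s worth, I also make sure the per-user directory exists before building the crew, so the SQLite and RAG storages have somewhere to write. A minimal sketch (the `"demo"` user ID is just a placeholder; in the app it comes from the chat session):

```python
import os

user_id = "demo"  # placeholder; the real value comes from the chat session
memory_dir = "./memorydb/dynamic_crew_%s" % user_id
os.makedirs(memory_dir, exist_ok=True)  # no-op if the folder already exists

db_path = os.path.join(memory_dir, "long_term_memory_storage.db")
```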
As you can see, each user’s memory is stored in a separate folder.
The files are created in that folder and memories are written, but I don’t see the context actually being used.
For example, I ask a simple question: “How old is the Nike brand?” and get a response.
The next question is “What about Columbia?”, and the crew’s response isn’t about the brand’s age at all.
Are my expectations wrong? I saw this article from @zinyando that makes sense for my use case, but I’m curious whether the built-in memory can be made to work here.
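For contrast, the approach I understand from that article is to inject recent chat history into the task description manually, so a follow-up like “What about Columbia?” carries the earlier exchange with it. A rough sketch of that idea — the `build_task_description` helper and the sample answer string are mine, not CrewAI API:

```python
def build_task_description(question, history, max_turns=3):
    """Prepend the last few Q/A pairs so follow-up questions keep their context.

    history is a list of (question, answer) tuples from earlier chat turns.
    """
    recent = history[-max_turns:]
    if not recent:
        return question
    context = "\n".join(f"User: {q}\nAssistant: {a}" for q, a in recent)
    return (
        "Previous conversation:\n"
        + context
        + "\n\nAnswer the user's new question, using the conversation above for context:\n"
        + question
    )

# Example: the follow-up question now carries the Nike exchange with it.
history = [("How old is the Nike brand?", "Nike dates back to 1964.")]
desc = build_task_description("What about Columbia?", history)
```

The resulting `desc` string would then be used as the task description for the next crew run, instead of the bare follow-up question.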