Mem0 + CrewAI User Personalization

First off, welcome to the community, Anya! I’ll take this opportunity to document a new use case involving Mem0 integration with CrewAI.

The Error:

The earlier memory implementation using Mem0 had a bug, which we discussed in this thread. Then, yesterday, version 0.114.0 was released, along with an example showcasing the new ExternalMemory usage. When I ran that example code, I got the following error:

ValidationError: 1 validation error for Crew
  Value error, Please provide an OpenAI API key.

Great, we went from a buggy implementation (which required tricking the system) to a weird one (that has hidden dependencies). That’s progress, right? :smirking_face:

Where the Error Comes From:

Digging into the crewai/crew.py file, particularly the create_crew_memory validator, I noticed that setting memory=True initializes several internal memory components (ShortTermMemory, LongTermMemory, and EntityMemory), even if you only want to use ExternalMemory.

These internal components rely on embeddings, so you're forced to provide an embedder configuration in your Crew setup to keep them happy. Disclaimer: I honestly haven't double-checked whether that embedder gets configured automatically when the OPENAI_API_KEY environment variable is set; since I ran into the error above, I'm documenting it here just in case. A minimal reproduction sketch follows below.
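
To see the hidden dependency in isolation, here's a stripped-down reproduction (my own sketch, not from the CrewAI docs). If my reading of the validator is right, then with OPENAI_API_KEY unset and no embedder passed, memory=True alone should trigger the same ValidationError at Crew construction time:

from crewai import Agent, Task, Crew, Process

# Throwaway agent/task; the agent's LLM never actually runs, because the
# error (if it fires) happens while the Crew object is being validated.
agent = Agent(
    role="Test Agent",
    goal="Answer briefly.",
    backstory="Only here to reproduce the memory error."
)

task = Task(
    description="Say hello.",
    expected_output="A short greeting.",
    agent=agent
)

# memory=True makes the validator build ShortTermMemory, LongTermMemory and
# EntityMemory; with no embedder configured (and no OPENAI_API_KEY), this is
# where "Please provide an OpenAI API key." should show up.
crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.sequential,
    memory=True
)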

Solution (Embedder Required for Other Memory Modules):

Below is a working example that uses Gemini (for both the LLM and the embedder) and remembers user information across kickoffs via Mem0.

from crewai import Agent, Task, Crew, LLM, Process
from crewai.memory.external.external_memory import ExternalMemory
import os

os.environ["MEM0_API_KEY"] = "YOUR-KEY"
os.environ["GEMINI_API_KEY"] = "YOUR-KEY"

gemini_llm = LLM(
    model='gemini/gemini-2.0-flash',
    temperature=0.5,
    max_tokens=1024
)

chatbot_agent = Agent(
    role="Friendly Chatbot",
    goal="Respond kindly to the user in a single paragraph.",
    backstory="A helpful AI assistant for brief interactions.",
    llm=gemini_llm,
    verbose=False,
    allow_delegation=False
)

chat_task = Task(
    description=(
        "Process the user query: '{user_question}' "
        "and provide a friendly response."
    ),
    expected_output="A single paragraph, friendly response.",
    agent=chatbot_agent
)

crew = Crew(
    agents=[chatbot_agent],
    tasks=[chat_task],
    verbose=False,
    process=Process.sequential,
    memory=True,
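    # Embedder for the internal memories that memory=True creates
    # (ShortTermMemory, LongTermMemory, EntityMemory). Without it, you
    # get the "Please provide an OpenAI API key" error from above.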
    embedder={
        "provider": "google",
        "config": {
            "model": "models/text-embedding-004",
            "api_key": os.environ["GEMINI_API_KEY"]
        }
    },
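    # External memory backed by Mem0; user_id is what Mem0 uses to scope
    # the facts it remembers about this user.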
    external_memory=ExternalMemory(
        embedder_config={
            "provider": "mem0",
            "config": {
                "user_id": "MadMax"
            }
        }
    )
)

user_inputs = [
    "Hi, my name is Max Moura!",
    "I'm a fisherman.",
    "I really like soccer.",
    "Hey, what do you know about me?",
]

for user_input in user_inputs:
    result = crew.kickoff(
        inputs={
            "user_question": user_input
        }
    )
    print(f'\n👽 User:  {user_input}')
    print(f'🤖 Chatbot:  {result.raw}')
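
If you want to double-check what actually landed in Mem0 after the chat above, you can query it directly with the Mem0 client. This is just a sketch assuming the mem0ai package's MemoryClient and its get_all helper; adjust it to whatever your client version exposes.

from mem0 import MemoryClient

# Reads MEM0_API_KEY from the environment (assumption: current mem0ai client).
client = MemoryClient()

# List whatever Mem0 stored under the same user_id used in ExternalMemory.
for memory in client.get_all(user_id="MadMax"):
    print(memory.get("memory"))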

Hope this example helps you get closer to a solution!