Memory issue when using the Gemini API

I’m having a problem using memory in my agent system. Although my agent returns the response correctly in its output, right after that I receive an error:

Failed to add to long term memory: Failed to convert text into a Pydantic model due to the following error: litellm.AuthenticationError: geminiException - {
  "error": {
    "code": 400,
    "message": "API key not valid. Please pass a valid API key.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.ErrorInfo",
        "reason": "API_KEY_INVALID",
        "domain": "googleapis.com",
        "metadata": {
          "service": "generativelanguage.googleapis.com"
        }
      },
      {
        "@type": "type.googleapis.com/google.rpc.LocalizedMessage",
        "locale": "en-US",
        "message": "API key not valid. Please pass a valid API key."
      }
    ]
  }
}

When I swap the Gemini key for the OpenAI one, this problem does not occur.

To use memory I had to configure an embedder model. I chose an Ollama model with 1536 embedding dimensions, because without it I would get a different error.

my crew setup:

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    memory=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "rjmalagon/gte-qwen2-1.5b-instruct-embed-f16:latest"
        }
    },
    knowledge_sources=[json_source],
)

I have the same thing with embedders and memory. The key is provided, but for some reason it is not passed. Have you figured it out, @Marcos_Maio?

# embedder
embedder_config = {
    "provider": "google",
    "config": {
        "api_key": os.getenv("GOOGLE_API_KEY"),
        "model": "models/text-embedding-004"
    }
}

And then using it in the crew:
dynamic_crew = Crew(
    agents=crew_agents,
    tasks=crew_tasks,
    verbose=True,
    memory=True,
    embedder=embedder_config,
    long_term_memory=LongTermMemory(
        storage=LTMSQLiteStorage(
            db_path="./memorydb/dynamic_crew_%s/long_term_memory_storage.db" % user_id
        )
    ),
    short_term_memory=ShortTermMemory(
        storage=RAGStorage(
            embedder_config=embedder_config,
            type="short_term",
            path="./memorydb/dynamic_crew_%s/" % user_id
        )
    ),
    entity_memory=EntityMemory(
        storage=RAGStorage(
            embedder_config=embedder_config,
            type="short_term",
            path="./memorydb/dynamic_crew_%s/" % user_id
        )
    ),
)

Same issue here:
Failed to add to long term memory: Failed to convert text into a Pydantic model due to error: litellm.AuthenticationError: geminiException - {
  "error": {
    "code": 400,
    "message": "API key not valid. Please pass a valid API key.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.ErrorInfo",
        "reason": "API_KEY_INVALID",
        "domain": "googleapis.com",
        "metadata": {
          "service": "generativelanguage.googleapis.com"
        }
      },
      {
        "@type": "type.googleapis.com/google.rpc.LocalizedMessage",
        "locale": "en-US",
        "message": "API key not valid. Please pass a valid API key."
      }
    ]
  }
}

Any ideas on how to solve it?

I’ve solved it. It is necessary to configure both:
os.environ["GOOGLE_API_KEY"] = GOOGLE_API_KEY
AND
os.environ["GEMINI_API_KEY"] = GOOGLE_API_KEY

A single key is not sufficient.
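
For reference, a minimal sketch of how that looks in practice, assuming the key is stored in a .env file under GOOGLE_API_KEY (the file and variable name are just examples):

import os
from dotenv import load_dotenv

load_dotenv()

# Assumption: the same Google AI Studio key is valid for both variables.
google_key = os.getenv("GOOGLE_API_KEY")

# The agent LLM calls look for GEMINI_API_KEY, while the Google embedder
# path expects GOOGLE_API_KEY, so set both before building the Crew.
os.environ["GOOGLE_API_KEY"] = google_key
os.environ["GEMINI_API_KEY"] = google_key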

This situation (and other related ones) happens because CrewAI:

  • Uses the LiteLLM library for agent communication with LLMs.
  • Uses the EmbedChain library for RAG tools.

And that’s where the conflicts arise. This becomes clear when you try to customize the RAG tools. So, according to the documentation of each library, we have to provide the key under the name each one expects: GEMINI_API_KEY for LiteLLM and GOOGLE_API_KEY for EmbedChain, as in the sketch below.
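
As a rough illustration (not the exact setup from this thread, and the model names are only examples), the agent side is addressed through LiteLLM’s gemini/ naming, while the embedder side uses the google provider from the config shown earlier:

import os
from crewai import LLM

# Both names point at the same Google AI Studio key (see the fix above).
os.environ["GEMINI_API_KEY"] = os.environ["GOOGLE_API_KEY"]

# Agent LLM: routed through LiteLLM, which reads GEMINI_API_KEY
# for models addressed with the "gemini/" prefix.
gemini_llm = LLM(model="gemini/gemini-1.5-flash")

# Embedder: the "google" provider reads GOOGLE_API_KEY, or takes the key
# explicitly via the config dict, as in embedder_config above.
embedder_config = {
    "provider": "google",
    "config": {
        "api_key": os.getenv("GOOGLE_API_KEY"),
        "model": "models/text-embedding-004",
    },
}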
