Error Connecting to Ollama for Embeddings in CrewAI Memory (Failed to connect to Ollama)

The original URL as specified:

```python
# Imports for the snippets below (paths per recent CrewAI versions).
from crewai import Crew, Process
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from crewai.memory.storage.rag_storage import RAGStorage

OLLAMA_BASE_URL = 'XYZ'  # redacted; points at a running Ollama server

LLM_MODEL = 'ollama/llama3.1:8b'  # this model exists on my server
EMBEDDING_MODEL = 'nomic-embed-text:latest'  # this model exists on my server
```

Embedder configuration for Ollama:

```python
OLLAMA_EMBEDDER_CONFIG = {
    "provider": "ollama",
    "config": {
        "model": EMBEDDING_MODEL,
        "ollama_base_url": OLLAMA_BASE_URL,
    }
}
```
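One thing worth checking: depending on the installed CrewAI version, this dict may be handed to chromadb's `OllamaEmbeddingFunction`, which reads a `url` key rather than `ollama_base_url`. That is an assumption about the version in use, but it would explain a connection attempt against a default localhost URL. A variant sketch to try:

```python
# Variant sketch: some CrewAI releases pass this config to chromadb's
# OllamaEmbeddingFunction, which reads "url" instead of "ollama_base_url"
# (assumption about the installed version; some chromadb versions also
# expect the full endpoint, e.g. f"{OLLAMA_BASE_URL}/api/embeddings").
OLLAMA_EMBEDDER_CONFIG_ALT = {
    "provider": "ollama",
    "config": {
        "model": EMBEDDING_MODEL,
        "url": OLLAMA_BASE_URL,
    },
}
```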

Initialize the memory components:

```python
ltm = LongTermMemory(storage=LTMSQLiteStorage(db_path=LTM_DB_PATH))
stm = ShortTermMemory(storage=RAGStorage(
    embedder_config=OLLAMA_EMBEDDER_CONFIG,
    path=STM_RAG_PATH,
    type="short_term"
))
em = EntityMemory(storage=RAGStorage(
    embedder_config=OLLAMA_EMBEDDER_CONFIG,
    path=EM_RAG_PATH,
    type="entities"  # the original post had "short_term" here; EntityMemory storage uses "entities"
))
```
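To isolate the failure from the Crew wiring, the storage can be probed on its own. A minimal sketch (it assumes `RAGStorage` exposes the `save`/`search` methods the memory classes call internally):

```python
# Sketch: exercise RAGStorage directly. If this raises the same
# "Failed to connect to Ollama" error, the problem is in the embedder
# config itself, not in how the Crew is assembled.
probe = RAGStorage(
    type="short_term",
    embedder_config=OLLAMA_EMBEDDER_CONFIG,
    path=STM_RAG_PATH,
)
probe.save("probe text", {"source": "debug"})
print(probe.search("probe", limit=1))
```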

Initialize the CrewAI Crew:

```python
chatbot_crew = Crew(
    agents=[assistant_agent],
    tasks=[assistant_task],
    process=Process.sequential,
    memory=True,
    long_term_memory=ltm,
    short_term_memory=stm,
    entity_memory=em,
    verbose=True,
    embedder=OLLAMA_EMBEDDER_CONFIG
)
```
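The crew is then run with `kickoff`; the short-term and entity memory searches where the error surfaces happen inside this call. A usage sketch (the input key is hypothetical):

```python
# Usage sketch: memory reads/writes run during kickoff, which is where
# the "Failed to connect to Ollama" error below is raised.
# "user_message" is a hypothetical input key for illustration.
result = chatbot_crew.kickoff(inputs={"user_message": "Hello!"})
print(result)
```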

ERROR:

```
2025-04-08 17:10:08 - Error during short_term search: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. Download Ollama on macOS in query.
2025-04-08 17:10:12 - Error during short_term search: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. Download Ollama on macOS in query.
```
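As a raw connectivity check from the machine that runs CrewAI, hitting the Ollama embeddings endpoint directly can rule out network or URL problems. A sketch using the standard Ollama API and the redacted `OLLAMA_BASE_URL` from above:

```python
import requests

# Sketch: call Ollama's /api/embeddings endpoint directly to confirm the
# server is reachable from this machine with this exact base URL.
resp = requests.post(
    f"{OLLAMA_BASE_URL}/api/embeddings",
    json={"model": "nomic-embed-text:latest", "prompt": "ping"},
    timeout=10,
)
resp.raise_for_status()
print("embedding length:", len(resp.json()["embedding"]))
```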

Note: all the models are running and work properly on the server.

What happens if you use the same config with the LLM in an agent? Does it work there?

Does the Ollama config fail only for embeddings?

Yes, with the same config the LLM works. The Ollama embeddings also didn't fail when I ran with the mem0 config, where I provided the same Ollama embedding model; that worked perfectly. But when I use short-term and entity memory (STM and EM), I get no output and it shows the error above.
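For completeness, a side-by-side smoke test can confirm that only the embeddings path fails. A sketch assuming CrewAI's `LLM` wrapper accepts `base_url`, as recent versions do:

```python
from crewai import LLM

# Sketch: the chat path, which reportedly works with the same server.
llm = LLM(model=LLM_MODEL, base_url=OLLAMA_BASE_URL)
print(llm.call("Reply with the single word: pong"))
```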

@zinyando, any inputs on how to fix this error?