# Imports used by this snippet (typical CrewAI module paths)
from crewai import Crew, Process
from crewai.memory import EntityMemory, LongTermMemory, ShortTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
from crewai.memory.storage.rag_storage import RAGStorage

# The original URL as specified
OLLAMA_BASE_URL = 'XYZ'
LLM_MODEL = 'ollama/llama3.1:8b'              # This model exists on my server
EMBEDDING_MODEL = 'nomic-embed-text:latest'   # This model exists on my server
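For context, a minimal sketch of how LLM_MODEL and OLLAMA_BASE_URL are typically wired into the agent's LLM in CrewAI (the agent and task definitions are omitted here; assistant_llm is just an illustrative name):

from crewai import LLM

# Sketch only: the agent/task that use assistant_llm are not shown in this snippet
assistant_llm = LLM(model=LLM_MODEL, base_url=OLLAMA_BASE_URL)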
# Embedder configuration for Ollama
OLLAMA_EMBEDDER_CONFIG = {
    "provider": "ollama",
    "config": {
        "model": EMBEDDING_MODEL,
        "ollama_base_url": OLLAMA_BASE_URL,
    }
}
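The memory stores below reference LTM_DB_PATH, STM_RAG_PATH and EM_RAG_PATH, which are defined elsewhere in my code; illustrative placeholder values (not the real ones) would look like:

# Illustrative placeholders only; the actual paths are defined elsewhere in my project
LTM_DB_PATH = "./memory/long_term.db"
STM_RAG_PATH = "./memory/short_term"
EM_RAG_PATH = "./memory/entities"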
# Initialize memory components
ltm = LongTermMemory(storage=LTMSQLiteStorage(db_path=LTM_DB_PATH))
stm = ShortTermMemory(storage=RAGStorage(
    embedder_config=OLLAMA_EMBEDDER_CONFIG,
    path=STM_RAG_PATH,
    type="short_term"
))
em = EntityMemory(storage=RAGStorage(
    embedder_config=OLLAMA_EMBEDDER_CONFIG,
    path=EM_RAG_PATH,
    type="entities"
))
# Initialize the CrewAI Crew
chatbot_crew = Crew(
    agents=[assistant_agent],
    tasks=[assistant_task],
    process=Process.sequential,
    memory=True,
    long_term_memory=ltm,
    short_term_memory=stm,
    entity_memory=em,
    verbose=True,
    embedder=OLLAMA_EMBEDDER_CONFIG
)
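The errors below show up once the crew is actually run; for completeness, a minimal sketch of the kickoff call (the inputs key is a stand-in, not necessarily the variable used in my real task description):

# Sketch only: "user_message" is a placeholder input key
result = chatbot_crew.kickoff(inputs={"user_message": "Hello"})
print(result)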
ERROR:
2025-04-08 17:10:08 - Error during short_term search: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. Download Ollama on macOS in query.
2025-04-08 17:10:12 - Error during short_term search: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. Download Ollama on macOS in query.
NOTE: All of the models are pulled and working properly, and the Ollama server is up and running.
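To rule out a plain networking problem between this process and the server, a quick reachability check against the Ollama HTTP API (assuming OLLAMA_BASE_URL looks like http://host:11434 with no trailing slash) can be run from the same environment:

import requests

# GET /api/tags lists the models available on the Ollama server; if this call
# fails from the machine running the crew, the embedder cannot connect either
resp = requests.get(f"{OLLAMA_BASE_URL}/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json().get("models", [])])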