Problem with Ollama and knowledge usage

CrewAI: 0.157.0

I am having problems trying to use Knowledge with Ollama.

First, I tested that Ollama can generate embeddings on its own:

import requests
import json

def get_embeddings(text, model="nomic-embed-text:latest"):
    """Generate embeddings for the given text using Ollama."""
    response = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": model, "prompt": text},
        timeout=60
    )
    
    if response.status_code == 200:
        return response.json()["embedding"]
    else:
        print(f"Error: {response.status_code}")
        print(response.text)
        return None

# Example usage
text = "This is a sample text for embedding generation."
embeddings = get_embeddings(text)

if embeddings:
    print(f"Generated embedding with {len(embeddings)} dimensions")
    # First few values of the embedding
    print(f"Sample values: {embeddings[:5]}")

This works fine, and I can see the embeddings being generated.

So, my CrewAI usage is really simple:

from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

# Create a knowledge source
content = "Users name is Martin. He is 52 years old and lives in Nottingham, UK."
string_source = StringKnowledgeSource(content=content)

# Create an LLM with a temperature of 0 to ensure deterministic outputs
llm = LLM(model="ollama/gemma3",
          base_url="http://localhost:11434",
          temperature=0)

# Create an agent with the knowledge store
agent = Agent(
    role="About User",
    goal="You know everything about the user.",
    backstory="You are a master at understanding people and their preferences.",
    # verbose=True,
    allow_delegation=False,
    llm=llm,
    knowledge_source=string_source
)

task = Task(
    description="Using the provided knowledge about the user, answer the following question: {question}",
    expected_output="An answer to the question based on the given information.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    memory=True,
    process=Process.sequential,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            "url": "http://localhost:11434/api/embeddings"
        }
    }
)

result = crew.kickoff(inputs={"question": "What city does Martin live in and how old is he?"})

print(result)

Not only does this fail to give the correct answer, it usually says something like: “I cannot determine Martin’s city or age based on the provided context. The provided information only contains details about John and does not include information about Martin.”

This confuses me greatly: the ‘John’ data is what was in the sample code from the documentation example, which I deliberately changed. There is no reference to John anywhere in my code, so where is it coming from?
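One thing I wondered: could stale embeddings cached from an earlier run (e.g. from when the sample content still said ‘John’) explain this? If CrewAI keeps a local knowledge store between runs, clearing it should force a re-embed of the current content. A rough sketch of what I mean — the storage path here is purely a guess on my part and will vary by platform and CrewAI version:

```python
import shutil
from pathlib import Path

# ASSUMPTION: CrewAI persists knowledge embeddings in a local ChromaDB store
# under the platform data directory -- on Linux often something like
# ~/.local/share/CrewAI/<project>/knowledge. Adjust BASE for your system;
# this path is a guess, not something guaranteed by the docs.
BASE = Path.home() / ".local" / "share" / "CrewAI"

def clear_knowledge_stores(base: Path) -> list[str]:
    """Delete every cached 'knowledge' directory under base so the next
    run re-embeds the current content instead of reusing stale vectors."""
    removed = []
    for store in base.glob("*/knowledge"):
        shutil.rmtree(store)
        removed.append(store.name)
    return removed

# Example (commented out so you can verify BASE points at the right place):
# print(clear_knowledge_stores(BASE))
```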

Any advice, please? How can I get Knowledge working with Ollama?

Thanks a lot

According to the official CrewAI documentation, your definition should be:

agent = Agent(
    # [...], 
    knowledge_sources=[string_source],
    embedder={
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
            "url": "http://localhost:11434/api/embeddings"
        }
    }
)

And, since you’re using Agent-Level Knowledge, there’s no need for an embedder in your Crew.
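Putting it together, your script would become something like the sketch below. Treat it as a configuration sketch rather than tested code — I haven’t run this exact version. I’ve also dropped memory=True to keep the example minimal, since Crew memory would need its own embedder configuration as well:

```python
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

content = "Users name is Martin. He is 52 years old and lives in Nottingham, UK."
string_source = StringKnowledgeSource(content=content)

llm = LLM(model="ollama/gemma3", base_url="http://localhost:11434", temperature=0)

embedder = {
    "provider": "ollama",
    "config": {
        "model": "nomic-embed-text:latest",
        "url": "http://localh434:11434/api/embeddings".replace("434:", ":"),  # i.e. http://localhost:11434/api/embeddings
    },
}

agent = Agent(
    role="About User",
    goal="You know everything about the user.",
    backstory="You are a master at understanding people and their preferences.",
    allow_delegation=False,
    llm=llm,
    knowledge_sources=[string_source],  # note: plural, and wrapped in a list
    embedder=embedder,                  # embedder goes on the Agent, not the Crew
)

task = Task(
    description="Using the provided knowledge about the user, answer the following question: {question}",
    expected_output="An answer to the question based on the given information.",
    agent=agent,
)

crew = Crew(agents=[agent], tasks=[task], verbose=True, process=Process.sequential)

result = crew.kickoff(inputs={"question": "What city does Martin live in and how old is he?"})
print(result)
```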

Thanks @maxmoura. Apologies I missed the obvious there.
