Failing to embed knowledge source using ollama

Hi,

I am trying to run the solution locally on my laptop using Ollama. Knowledge initialization fails because it is looking for an OpenAI API key, even though I have configured the embedder to use Ollama. I'd appreciate your help.

[2025-02-02 20:47:39][WARNING]: Failed to init knowledge: Please provide an OpenAI API key. You can get one at https://platform.openai.com/account/api-keys

I looked at previous solutions and took this code from the post How to embed knowledge source with ollama2 - #8 by kapenge, but I am getting the same issue.

Here is the code I am trying to run.

    from crewai import Agent, Task, Crew, Process, LLM
    from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

    # Create a knowledge source
    content = "Users name is John. He is 30 years old and lives in San Francisco."
    string_source = StringKnowledgeSource(content=content)

    # Create an LLM with a temperature of 0 to ensure deterministic outputs
    llm = LLM(
        model="ollama/llama3.2-vision:11b", # run !ollama list to see models you have
        temperature=0,
        api_key=""
    )

    # Create an agent with the knowledge store
    agent = Agent(
        role="About User",
        goal="You know everything about the user.",
        backstory="""You are a master at understanding people and their preferences.""",
        verbose=True,
        allow_delegation=False,
        llm=llm,
    )
    task = Task(
        description="Answer the following questions about the user: {question}",
        expected_output="An answer to the question.",
        agent=agent,
    )

    crew = Crew(
        agents=[agent],
        tasks=[task],
        verbose=True,
        process=Process.sequential,
        knowledge_sources=[string_source],
        embedder={
            "provider": "ollama",
            "config": {
                "model": "nomic-embed-text"
            }
        }
    )

    result = crew.kickoff(inputs={"question": "What city does John live in and how old is he?"})

Output:

[2025-02-02 21:02:04][WARNING]: Failed to init knowledge: Please provide an OpenAI API key. You can get one at https://platform.openai.com/account/api-keys
# Agent: About User
## Task: Answer the following questions about the user: What city does John live in and how old is he?


# Agent: About User
## Final Answer:
I don't have enough information about John's location or age. However, based on my understanding of typical user profiles, I'll provide some general assumptions.

Let's assume that John lives in New York City, which is one of the most populous cities in the United States and has a diverse range of cultures and lifestyles. As for his age, let's assume he is around 35 years old, which is a common demographic for many users who are likely to be active online.

Please note that these assumptions are purely speculative and may not reflect John's actual location or age.
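Not the original poster here, but one thing worth trying: in some CrewAI versions the embedder silently falls back to OpenAI embeddings when the Ollama config is incomplete, and adding the server URL to the embedder config has been reported to help. A minimal sketch (the `base_url` key name is an assumption; check it against your installed version's docs):

```python
# Hypothetical fuller embedder config for Ollama. The "base_url" key name
# may differ between CrewAI releases, so verify it for your version.
ollama_embedder = {
    "provider": "ollama",
    "config": {
        "model": "nomic-embed-text",           # must already be pulled: ollama pull nomic-embed-text
        "base_url": "http://localhost:11434",  # point at the local Ollama server
    },
}

# Passed to Crew exactly like the inline dict in the code above:
# crew = Crew(..., embedder=ollama_embedder)
```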


I missed that while copying the code. I did specify the base_url, and Ollama is running.

llm = LLM(
    model="ollama/llama3.2-vision:11b",
    base_url="http://localhost:11434",
    temperature=0
)
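Since the warning appears when the embedder can't be used, it may also be worth confirming that the Ollama embeddings endpoint actually responds before blaming the config. A small stdlib sketch (the `/api/embeddings` path follows the Ollama REST API; newer servers also expose `/api/embed`, so adjust if needed):

```python
import json
import urllib.request

# Build a request for the Ollama embeddings endpoint so it can be
# inspected or fired manually as a sanity check.
def build_embedding_request(model: str, prompt: str,
                            base_url: str = "http://localhost:11434") -> urllib.request.Request:
    payload = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_embedding_request("nomic-embed-text", "hello")
    print(req.full_url)
    # Uncomment when Ollama is running locally:
    # with urllib.request.urlopen(req) as resp:
    #     print(len(json.loads(resp.read())["embedding"]))  # embedding dimension
```

If this request fails with the server running, the problem is on the Ollama side rather than in the CrewAI config.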

I have the same problem:

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource
from crewai import LLM
from dotenv import load_dotenv
import os
#from langchain_ollama import OllamaEmbeddings

# Create a PDF knowledge source
pdf_source = PDFKnowledgeSource(
    file_paths=["ai-and-ce.pdf", "Eco_Innovation_through_Mycelium.pdf", "TOCCO Mycelium Report 2025.pdf"]
)
pdf_helen = PDFKnowledgeSource(file_paths=["MBM_V19_RWC.pdf"])

load_dotenv()


@CrewBase
class Secretaries():
	"""Secretaries crew"""
	agents_config = 'config/agents.yaml'
	tasks_config = 'config/tasks.yaml'

	# Agents

	@agent
	def senior_secretary(self) -> Agent:
		return Agent(
			config=self.agents_config['senior_secretary'],
			verbose=True,
			llm=LLM(model="ollama/phi4", base_url="http://localhost:11434", api_key="ollama", temperature=0.5),
		)

	@agent
	def pr_researcher(self) -> Agent:
		return Agent(
			config=self.agents_config['pr_researcher'],
			verbose=True,
			knowledge_sources=[pdf_helen],
			llm=LLM(model="ollama/phi4", base_url="http://localhost:11434", api_key="ollama", temperature=0.5),
			embedder_config={"provider": "ollama", "config": {"model": "nomic-embed-text"}}
		)

	@crew
	def crew(self) -> Crew:
		"""Creates the Secretaries crew"""
		return Crew(
			agents=self.agents, # Automatically created by the @agent decorator
			tasks=self.tasks, # Automatically created by the @task decorator
			process=Process.sequential,
			verbose=True,
			knowledge_sources=[pdf_source],
			llm=LLM(model="ollama/phi4", base_url="http://localhost:11434", api_key="ollama", temperature=0.5),
			embedder={
				"provider": "ollama",
				"config": {
					"model": "nomic-embed-text",
					"api_key": "ollama",
					"base_url": "http://localhost:11434"
				}
			},
			planning=True,
			# process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
		)

same error:

Value error, Invalid Knowledge Configuration: Please provide an OpenAI API key. You can get one at https://platform.openai.com/account/api-keys [type=value_error, input_value={'verbose': True, 'knowle…llow_delegation': False}, input_type=dict]

See the related thread: String Knowledge sources not working with Gemini.
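One more thing worth checking in the snippet above: recent CrewAI documentation shows the agent-level parameter as `embedder` rather than `embedder_config`, so on newer releases the per-agent setting may be silently ignored and the knowledge store falls back to the OpenAI default. A sketch under that assumption (confirm the parameter name against your installed version; `OLLAMA_EMBEDDER` is just an illustrative name):

```python
# Shared embedder dict reused at both agent and crew level.
# The agent-level parameter name ("embedder" vs "embedder_config") has
# changed across CrewAI releases; confirm it for your installed version.
OLLAMA_EMBEDDER = {
    "provider": "ollama",
    "config": {
        "model": "nomic-embed-text",
        "base_url": "http://localhost:11434",
    },
}

# Usage inside the @agent method (not executed here):
# return Agent(
#     config=self.agents_config['pr_researcher'],
#     knowledge_sources=[pdf_helen],
#     embedder=OLLAMA_EMBEDDER,  # assumed current name; older releases used embedder_config
#     llm=llm,
# )
```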