CrewAI is not allowing the use of a knowledge source when using an Ollama-based LLM. It defaults to requiring an OpenAI API key, even though Ollama is correctly configured.

Error message: Value error, Invalid Knowledge Configuration: Please provide an OpenAI API key. You can get one at https://platform.openai.com/account/api-keys

Source Code

I am running my Ollama model on Runpod.

from crewai import Agent, Crew, Process, Task, LLM
from crewai.project import CrewBase, agent, crew, task
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
from crewai.knowledge.source.crew_docling_source import CrewDoclingSource
from dotenv import load_dotenv
import os

text_source = CrewDoclingSource(
    file_paths=["boilerplate.md"]
)

# Load environment variables from .env file
load_dotenv()

# Access the variables
MODEL = os.getenv("MODEL")
API_BASE = os.getenv("API_BASE")
os.environ["MODEL"] = MODEL
os.environ["API_BASE"] = API_BASE

# Create a knowledge source
content = "User's name is Savi. He is 26 years old and lives in Ahmedabad."
string_source = StringKnowledgeSource(
    content=content,
)

@CrewBase
class CrewaiCodingAgents():
    """CrewaiCodingAgents crew"""
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def Senior_HTML_Developer(self) -> Agent:
        return Agent(
            config=self.agents_config['Senior_HTML_Developer'],
            verbose=True,
            knowledge_sources=[string_source, text_source],
            llm=LLM(
                model="ollama/deepseek-coder-v2",
                api_key="",
                base_url=os.getenv("API_BASE"),
                temperature=0.5,  # temperature belongs on the LLM, not the Agent
            ),
            memory=True,
        )

    @task
    def navbar_html(self) -> Task:
        return Task(
            config=self.tasks_config['navbar_html'],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the CrewaiCodingAgents crew"""
        return Crew(
            agents=self.agents, 
            tasks=self.tasks, 
            process=Process.sequential,
            verbose=True,
            memory=True,
            embedder={
                "provider": "ollama",
                "config": {
                    # Ollama embedder config takes the bare model name, no "ollama/" prefix
                    "model": "nomic-embed-text:latest",
                    "api_key": "",
                    "base_url": os.getenv("API_BASE")
                }
            }
        )

Same issue here when using an Ollama LLM with a knowledge source; I keep getting asked for an OpenAI API key. Anyone manage to get to the bottom of this?

I’ve been asking the same question for a week now.
Any help will be appreciated.


Glad to see I'm not the only one! I'd be happy to know the right approach, if this isn't a pure bug.

Another solution is mentioned here: Unable to Initiate Crew using Local Ollama

Please check it out.
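
If it's the usual local-Ollama workaround, the idea is to point the OpenAI-style environment variables at the Ollama server before CrewAI initialises anything. A rough sketch of that common trick (the exact values here are my assumptions, not a quote from the linked thread):

import os

# Dummy key: the validation only checks that some key is present
os.environ["OPENAI_API_KEY"] = "NA"
# Route OpenAI-style calls to Ollama's OpenAI-compatible endpoint
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
os.environ["OPENAI_MODEL_NAME"] = "ollama/deepseek-coder-v2"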


It's not a memory issue at all; it's an embedder issue that triggers OpenAI by default instead of using Ollama. See String Knowledge sources not working with Gemini
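
If that's the cause, one thing worth trying is giving the knowledge layer an explicit Ollama embedder instead of letting it fall back to OpenAI. A minimal sketch, assuming a CrewAI version whose Agent accepts an embedder parameter and the config keys shown below (both are assumptions, check your version's docs):

from crewai import Agent
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

string_source = StringKnowledgeSource(
    content="User's name is Savi. He is 26 years old and lives in Ahmedabad.",
)

# Hypothetical: agent-level embedder so knowledge embedding never touches OpenAI
coder = Agent(
    role="Senior HTML Developer",
    goal="Write clean, semantic HTML",  # illustrative text only
    backstory="An experienced front-end developer.",
    llm="ollama/deepseek-coder-v2",
    knowledge_sources=[string_source],
    embedder={
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
            "url": "http://localhost:11434/api/embeddings",
        },
    },
)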

Found a way to fix this. Use this approach: set a random/dummy value for the OpenAI API key environment variable, then use the ChatOpenAI class instead of the LLM class. It worked fine for me locally with an Ollama model.

import os

from langchain_openai import ChatOpenAI

# Dummy key satisfies the OpenAI key check; requests actually go to Ollama
os.environ["OPENAI_API_KEY"] = "testapikey"

llm_local = ChatOpenAI(
    model="ollama/deepseek-r1:latest",
    base_url="http://localhost:11434",
)
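
Then llm_local replaces the LLM(...) call in the agent definition from the original post, something like:

    @agent
    def Senior_HTML_Developer(self) -> Agent:
        return Agent(
            config=self.agents_config['Senior_HTML_Developer'],
            verbose=True,
            knowledge_sources=[string_source, text_source],
            llm=llm_local,  # ChatOpenAI instance instead of crewai.LLM
        )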