Hi everyone,
I’m trying to use the Knowledge feature in CrewAI with the Google Gemini free-tier model as the embedder. However, I’m encountering a couple of issues:
- Initially, when running my crew via `uv run --active run_crew`, I got the following error:
Failed to init knowledge: The Google Generative AI python package is not installed. Please install it with `pip install google-generativeai`
I resolved this by installing the package as instructed.
- After that, the crew runs, but it doesn't seem to actually use the knowledge base. The agent responds with:
“I am sorry, but I do not have access to personal information about individuals like John, including his city of residence and age. My knowledge is limited to the information that has been shared with me. Therefore, I cannot answer the question.”
It looks like the knowledge feature isn’t activating or loading correctly, even after installing the required dependencies.
What I’d Like Help With:
- Is it possible to use Gemini free-tier for embeddings in CrewAI’s knowledge feature?
- How can I ensure that my knowledge base is properly initialized and used during task execution?
- Are there any additional setup steps or limitations with using Gemini for this?
Thanks in advance for your help! 
Hi,
The knowledge feature uses a local /knowledge directory (see the Knowledge - CrewAI docs).
Can you please provide the code you are using to access knowledge? It is normally local.
```python
from crewai import Agent, Task, Crew, Process, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource
import os
from dotenv import load_dotenv

load_dotenv()
api_key = os.getenv("GEMINI_API_KEY")
if not api_key:
    raise ValueError("GEMINI_API_KEY is not set in your .env file")

# Assign to the correct env var name CrewAI expects
os.environ["CHROMA_GOOGLE_GENAI_API_KEY"] = api_key

# Optional: silence telemetry
os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"
os.environ["OTEL_SDK_DISABLED"] = "true"

content = "User's name is John. He is 30 years old and lives in San Francisco."
string_source = StringKnowledgeSource(content=content)

llm = LLM(
    model="gemini/gemini-2.0-flash-lite",
    temperature=0,
)

agent = Agent(
    role="About User",
    goal="You know everything about the user.",
    backstory="You are a master at understanding people and their preferences.",
    verbose=True,
    allow_delegation=False,
    llm=llm,
)

task = Task(
    description="Answer the following questions about the user: {question}",
    expected_output="A clear answer to the question.",
    agent=agent,
)

crew = Crew(
    agents=[agent],
    tasks=[task],
    verbose=True,
    process=Process.sequential,
    embedder={
        "provider": "google",
        "config": {"model": "models/embedding-001", "task_type": "retrieval_document"},
    },
    knowledge_sources=[string_source],
)

if __name__ == "__main__":
    result = crew.kickoff(
        inputs={"question": "What city does John live in and how old is he?"}
    )
    print("Result:", result)
```
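For comparison, CrewAI also accepts knowledge and an embedder directly on the agent (a reply below mentions trying this). A minimal sketch of that variant, under the same environment assumptions as the script above — note this is a configuration fragment for illustration, not verified to fix the zero-chunk issue:

```python
# Sketch: knowledge attached at the agent level instead of the crew level.
# Assumes GEMINI_API_KEY / CHROMA_GOOGLE_GENAI_API_KEY are set as in the script above.
from crewai import Agent, LLM
from crewai.knowledge.source.string_knowledge_source import StringKnowledgeSource

string_source = StringKnowledgeSource(
    content="User's name is John. He is 30 years old and lives in San Francisco."
)

agent = Agent(
    role="About User",
    goal="You know everything about the user.",
    backstory="You are a master at understanding people and their preferences.",
    llm=LLM(model="gemini/gemini-2.0-flash-lite", temperature=0),
    knowledge_sources=[string_source],  # knowledge scoped to this agent
    embedder={                          # agent-level embedder config
        "provider": "google",
        "config": {"model": "models/embedding-001"},
    },
)
```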
I have the same issue, even when I put the embedder and knowledge at the agent level. I can verify it's looking at the knowledge, but the chunk count is 0 — the agent isn't pulling anything from it. I tried with the Gemini embedder and with Ollama nomic-embed-text; same issue.
I have the same issue. The embedding model does not pull the info from the file I put in the knowledge folder.
In the end, my workaround is to use FileReadTool to read the file and pass the content into the prompt.
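The same workaround can be sketched in plain Python without the tool layer: read the knowledge file yourself and inline its content into the task description, bypassing the embedding pipeline entirely. The helper name and file path below are illustrative, not from CrewAI's API:

```python
from pathlib import Path

def build_task_description(question: str, knowledge_file: str) -> str:
    """Inline the knowledge file's content into the prompt text,
    so the LLM sees it directly instead of via retrieval."""
    context = Path(knowledge_file).read_text(encoding="utf-8")
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

The resulting string can then be passed as the `description` of a `Task`, with no `knowledge_sources` or `embedder` configured at all.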