marketer = Agent(
    role="Market Research Analyst",
    goal="Find out how big the demand is for my products and suggest how to reach the widest possible customer base",
    backstory="""You are an expert at understanding the market demand, target audience, and competition.
    This is crucial for validating whether an idea fulfills a market need and has the potential
    to attract a wide audience. You are good at coming up with ideas on how to appeal
    to the widest possible audience.
    """,
    verbose=True,            # enable more detailed or extensive output
    allow_delegation=True,   # enable collaboration between agents
    llm=llm                  # the Gemini LLM defined elsewhere
)
This works fine for me with OpenAI and LM Studio (OpenAI-compatible interface), but Gemini only works if you precede the model name with "gemini/", as in model="gemini/gemini whatever model".
However, I think there is something wrong with the prompt structure even then: I get massive numbers of slashes that I did not get before, and other junk. @matt, did this get tested with a Gemini model?
# standalone LiteLLM call to sanity-check Gemini access
from litellm import completion

response = completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}],
)
###############################################
# Creating a senior researcher agent with memory and verbose mode
news_researcher = Agent(
    role="Senior Researcher",
    goal="Uncover groundbreaking technologies in {topic}",
    verbose=True,
    memory=True,
    backstory=(
        "Driven by curiosity, you're at the forefront of "
        "innovation, eager to explore and share knowledge that could change "
        "the world."
    ),
)
#agents.py
from crewai import Agent, LLM
import os
from dotenv import load_dotenv
from tools import tool
load_dotenv()
os.environ['GEMINI_API_KEY'] = os.getenv("GEMINI_API_KEY")
llm = LLM(
    model=os.getenv("GEMINI_LLM_MODEL"),  # e.g. "gemini/gemini-1.5-flash" (note the "gemini/" prefix)
    verbose=True,
    google_api_key=os.getenv("GEMINI_API_KEY"),
)
news_researcher=Agent(
llm=llm,
role="Senior Researcher",
goal="Uncover ground-breaking new stories in {topic}",
backstory="""You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.""",
verbose=True,
memory=True,
tools=[tool],
allow_delegation=True,
)
Looks like you have to pass "gemini/" as a prefix before the name of the model, so for example "gemini-1.5-pro" ends up as "gemini/gemini-1.5-pro". This rule applies when using the LLM constructor too.
The only thing I just can't understand is how to make memory work with an Ollama model.
Here is my working code example:
from crewai import Agent, Crew, Process, Task, LLM
agent_companion = Agent(
role="Helpful companion",
goal="Provide helpful and informative responses",
backstory="""You're a friendly and helpful companion.
You're here to assist the user with any questions or concerns.
You're always ready to help and provide useful information.
""",
# Better control
llm=LLM(
model="gemini/gemini-1.5-flash",
temperature=0.5,
verbose=True,
),
# Works too
# llm="gemini/gemini-1.5-flash",
allow_delegation=False,
)
task_answer_question = Task(
name="Answer Question",
description="Answer this question: {question}",
agent=agent_companion,
expected_output="A helpful and informative response to the user's question",
)
crew = Crew(
agents=[agent_companion],
tasks=[task_answer_question],
process=Process.sequential,
verbose=True,
# memory=True,
# embedder=dict(
# provider="ollama",
# config=dict(
# model="nomic-embed-text",
# ),
# ),
)
crew_output = crew.kickoff(
inputs={
"question": "What is the meaning of life?"
}
)
print(f"Raw Output: {crew_output.raw}")
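On the memory-with-Ollama question: uncommenting the memory/embedder block above should be the way to wire in a local embedder. A minimal sketch, assuming Ollama is running locally and the nomic-embed-text model has been pulled (ollama pull nomic-embed-text):

crew = Crew(
    agents=[agent_companion],
    tasks=[task_answer_question],
    process=Process.sequential,
    verbose=True,
    memory=True,                  # enable crew memory
    embedder={
        "provider": "ollama",     # use a local Ollama server for embeddings
        "config": {
            "model": "nomic-embed-text",
        },
    },
)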