Gemini not working since update

I had the code below working before the update (using Gemini):

import os
from dotenv import load_dotenv
from crewai import Agent, Task, Process, Crew
from langchain_google_genai import ChatGoogleGenerativeAI

# load the .env file
load_dotenv(override=True)

To load GPT-4:

api = os.environ.get("OPENAI_API_KEY")

To load Gemini (this API key is free: https://makersuite.google.com/app/apikey):

api_gemini = os.environ.get("GEMINI-API-KEY")

llm = ChatGoogleGenerativeAI(
    model="gemini-pro", verbose=True, temperature=0.1, google_api_key=api_gemini
)

marketer = Agent(
    role="Market Research Analyst",
    goal="Find out how big the demand for my products is and suggest how to reach the widest possible customer base",
    backstory="""You are an expert at understanding the market demand, target audience, and competition.
    This is crucial for validating whether an idea fulfills a market need and has the potential
    to attract a wide audience. You are good at coming up with ideas on how to appeal
    to the widest possible audience.
    """,
    verbose=True,  # enable more detailed or extensive output
    allow_delegation=True,  # enable collaboration between agents
    llm=llm,  # to load Gemini
)

But now, it is not working anymore.

Help?


Try this instead (using the google.generativeai SDK, imported as genai):

import google.generativeai as genai

gemini_flash = genai.GenerativeModel(
    model_name="gemini/gemini-1.5-flash",
    generation_config=genai.GenerationConfig(max_output_tokens=16000, temperature=0.1),
)

Put your Gemini API key in this call, or set it as an environment variable.
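For example, one way to set the key as an environment variable from inside Python (the key value below is a placeholder, not a real key):

```python
import os

# Placeholder value -- replace with your real key,
# or export GEMINI_API_KEY in your shell instead
os.environ["GEMINI_API_KEY"] = "your-api-key-here"

print(os.environ["GEMINI_API_KEY"])
```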

Upgrade to the latest version, 0.61.3.

then you can do this

llm = LLM(
    model="gpt-4",
    temperature=0.8,
    max_tokens=150,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42,
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)

Obviously, change the base URL, API key, and model to your needs.

And remove whatever parameters you do not need.

This works fine for me with OpenAI and LM Studio (OpenAI interface), but Gemini only works if you precede the model name with gemini/ as in model="gemini/gemini-whatever-model".
However, I think there is something wrong with the prompt structure even then. I get massive numbers of slashes that I did not get before, and other junk. @matt did this get tested with a Gemini model?

Yes, you must precede the model name with the provider, as per LiteLLM. Meaning that for any model, you must specify it using LiteLLM's naming.

The example above is just an example, it’s not specific to Gemini as per my comment at the end.
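As a sketch of the convention: a LiteLLM model string is just "&lt;provider&gt;/&lt;model&gt;". The helper below is a hypothetical illustration of that rule, not part of either library:

```python
def litellm_model_name(provider: str, model: str) -> str:
    """Build a LiteLLM-style model string of the form '<provider>/<model>'."""
    return f"{provider}/{model}"

# Gemini models must carry the "gemini/" prefix when used through CrewAI/LiteLLM
print(litellm_model_name("gemini", "gemini-1.5-flash"))  # gemini/gemini-1.5-flash
```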

What do you mean by you get slashes? And what is the other junk?

As for testing models, I personally do not test Gemini, but we do test the most popular, which is OpenAI.

I am experiencing the same issue with Gemini. The problem does not seem to be specific to any one model.

Now I can use Gemini as the llm!

you will need:

from dotenv import load_dotenv
from litellm import completion
import os

os.environ["GEMINI_API_KEY"] = "YOUR_API_KEY"
response = completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}],
)
When you create the agent, you need to specify the llm.

agent = Agent(
    role="",
    goal="",
    backstory="",
    verbose=True,
    allow_delegation=False,
    memory=True,
    tools=[],
    llm="gemini/gemini-1.5-flash",
)


But then, how can the temperature, max token length, etc. be modified?
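A later reply in this thread shows that CrewAI's LLM constructor accepts these directly; as a sketch, the generation parameters can be collected in a dict and splatted into the call (values here are illustrative, and the actual call needs crewai installed plus a valid GEMINI_API_KEY):

```python
# Generation parameters for the model, to be passed as LLM(**params)
params = {
    "model": "gemini/gemini-1.5-flash",
    "temperature": 0.5,
    "max_tokens": 1024,
}

# from crewai import LLM
# llm = LLM(**params)  # requires crewai and a valid GEMINI_API_KEY

print(params["temperature"])
```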

Also, LangSmith requires:

llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash",
    temperature=0.5,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)

I'm unable to use LangSmith with CrewAI for Gemini models.

Did you manage to make it work? I am having the same issue.

Hi! It didn't work for me. Is it still working for you?

import os

from litellm import completion

os.environ["GEMINI_API_KEY"] = os.getenv("GOOGLE_API_KEY")

llm = completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}],
)
###############################################

# Creating a senior researcher agent with memory and verbose mode

news_researcher = Agent(
    role="Senior Researcher",
    goal="Uncover ground-breaking technologies in {topic}",
    verbose=True,
    memory=True,
    backstory=(
        "Driven by curiosity, you're at the forefront of "
        "innovation, eager to explore and share knowledge that could change "
        "the world."
    ),
    tools=[tool],
    llm=llm,
    allow_delegation=True,
)

This code works for me with Gemini. Try it:

#.env
GEMINI_LLM_MODEL=gemini/gemini-pro
GEMINI_API_KEY=YOUR KEY
#agents.py
from crewai import Agent, LLM
import os
from dotenv import load_dotenv
from tools import tool

load_dotenv()

os.environ['GEMINI_API_KEY'] = os.getenv("GEMINI_API_KEY")

llm = LLM(
    model=os.getenv("GEMINI_LLM_MODEL"),
    verbose=True,
    google_api_key=os.getenv("GEMINI_API_KEY"),
)

news_researcher=Agent(
    llm=llm,
    role="Senior Researcher",
    goal="Uncover ground-breaking new stories in {topic}",
    backstory="""You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.""",
    verbose=True,
    memory=True,
    tools=[tool],
    allow_delegation=True,
)

Looks like you have to pass "gemini/" as a prefix before the name of the model; for example, "gemini-1.5-pro" ends up like this: "gemini/gemini-1.5-pro". This rule applies when using the LLM constructor too.

The only thing I just can't understand is how to make memory work with an Ollama model.

Here is my working code example

from crewai import Agent, Crew, Process, Task, LLM


agent_companion = Agent(
    role="Helpful companion",
    goal="Provide helpful and informative responses",
    backstory="""You're a friendly and helpful companion.
    You're here to assist the user with any questions or concerns.
    You're always ready to help and provide useful information.
    """,
    # Better control
    llm=LLM(
        model="gemini/gemini-1.5-flash",
        temperature=0.5,
        verbose=True,
    ),
    # Works too
    # llm="gemini/gemini-1.5-flash",
    allow_delegation=False,
)


task_answer_question = Task(
    name="Answer Question",
    description="Answer this question: {question}",
    agent=agent_companion,
    expected_output="A helpful and informative response to the user's question",
)


crew = Crew(
    agents=[agent_companion],
    tasks=[task_answer_question],
    process=Process.sequential,
    verbose=True,
    # memory=True,
    # embedder=dict(
    #     provider="ollama",
    #     config=dict(
    #         model="nomic-embed-text",
    #     ),
    # ),
)


crew_output = crew.kickoff(
    inputs={
        "question": "What is the meaning of life?"
    }
)

print(f"Raw Output: {crew_output.raw}")

Now it is working according to your code. Thanks!

It worked for me as well, thanks for your attention.


This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.