Gemini not working since update

I had the code below working before the update (using Gemini):

import os
from dotenv import load_dotenv
from crewai import Agent, Task, Process, Crew
from langchain_google_genai import ChatGoogleGenerativeAI

# Load the .env file
load_dotenv(override=True)

# To load GPT-4
api = os.environ.get("OPENAI_API_KEY")

# To load Gemini (this API key is free: https://makersuite.google.com/app/apikey)
api_gemini = os.environ.get("GEMINI-API-KEY")

llm = ChatGoogleGenerativeAI(
    model="gemini-pro", verbose=True, temperature=0.1, google_api_key=api_gemini
)

marketer = Agent(
    role="Market Research Analyst",
    goal="Find out how big the demand for my products is and suggest how to reach the widest possible customer base",
    backstory="""You are an expert at understanding market demand, target audience, and competition.
    This is crucial for validating whether an idea fulfills a market need and has the potential
    to attract a wide audience. You are good at coming up with ideas on how to appeal
    to the widest possible audience.
    """,
    verbose=True,           # enable more detailed or extensive output
    allow_delegation=True,  # enable collaboration between agents
    llm=llm,                # to load Gemini
)

But now, it is not working anymore.

Help?


Try this instead:

gemini_flash = genai.GenerativeModel(
    model_name="gemini/gemini-1.5-flash",
    generation_config=genai.GenerationConfig(max_output_tokens=16000, temperature=0.1),
)

Put your Gemini API key in this call, or set it as an environment variable.

Upgrade to the latest version, 0.61.3.

Then you can do this:

llm = LLM(
    model="gpt-4",
    temperature=0.8,
    max_tokens=150,
    top_p=0.9,
    frequency_penalty=0.1,
    presence_penalty=0.1,
    stop=["END"],
    seed=42,
    base_url="https://api.openai.com/v1",
    api_key="your-api-key-here"
)
agent = Agent(llm=llm, ...)

Obviously, change the base URL, API key, and model to your needs, and remove whatever parameters you do not need.

This works fine for me with OpenAI and LM Studio (OpenAI interface), but Gemini only works if you precede the model name with gemini/, as in model="gemini/gemini-whatever-model".
However, I think there is something wrong with the prompt structure even then. I get massive numbers of slashes that I did not get before, and other junk. @matt did this get tested with a Gemini model?

Yes, you must precede the model name with the provider, as per LiteLLM's convention. This applies to any model you specify.

The example above is just an example; it is not specific to Gemini, as per my comment at the end.

What do you mean by "you get slashes"? And what is the other junk?

As for testing models, I personally do not test Gemini, but we do test the most popular provider, which is OpenAI.

I am experiencing the same issue with Gemini. The problem is not specific to any one model.

Now I can use Gemini as the LLM!

You will need:

from dotenv import load_dotenv
from litellm import completion
import os

os.environ["GEMINI_API_KEY"] = "YOUR_API_KEY"
response = completion(
    model="gemini/gemini-1.5-flash",
    messages=[{"role": "user", "content": "write code for saying hi from LiteLLM"}],
)

When you create the agent, you need to specify the llm:

agent = Agent(
    role="",
    goal="",
    backstory="",
    verbose=True,
    allow_delegation=False,
    memory=True,
    tools=[],
    llm="gemini/gemini-1.5-flash",
)

But then how can the temperature, max token length, etc. be modified?

Also, LangSmith requires:

llm = ChatGoogleGenerativeAI(
    model="gemini-1.5-flash",
    temperature=0.5,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)

I am unable to use LangSmith with CrewAI for Gemini models.