CrewAI/LiteLLM Support of Function and Tool Parameters in Gemini

Does CrewAI support the tool and function parameters of Gemini that LiteLLM does?


Doesn’t look like it, I’ll spin up a PR


Hi Matt, any updates on Gemini support?

I see Gemini in the docs here, under the Others tab, but I haven’t tried it yet.

I see that, but it’s erroring.

What error do you get?

Now mostly around embeddings

File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/embedchain/embedder/openai.py", line 18, in __init__
    api_key = self.config.api_key or os.environ["OPENAI_API_KEY"]
                                     ~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "<frozen os>", line 685, in __getitem__
KeyError: 'OPENAI_API_KEY'

ImportError: cannot import name 'rag_tool' from 'config'

Had to remove the RAG tool to bypass it…

Now running into this error:

An error occurred: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=model='models/gemini-pro' google_api_key=SecretStr('**********') client=<google.ai.generativelanguage_v1beta.services.generative_service.client.GenerativeServiceClient object at 0x155d3b9b0> default_metadata=()
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)

Updated the code to restructure the LiteLLM config, then got this error:

2024-12-10 13:59:54,540 - 8520267840 - llm.py-llm:170 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=model='models/gemini-pro' google_api_key=SecretStr('**********') client=<google.ai.generativelanguage_v1beta.services.generative_service.client.GenerativeServiceClient object at 0x15ee2c6e0> default_metadata=()
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: Providers | liteLLM
An error occurred: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=model='models/gemini-pro' google_api_key=SecretStr('**********') client=<google.ai.generativelanguage_v1beta.services.generative_service.client.GenerativeServiceClient object at 0x15ee2c6e0> default_metadata=()
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: Providers | liteLLM

Error #1

You got this error because you used a RAG tool, which needs an embedding model to work: RAG retrieves documents by embedding similarity. CrewAI uses the OpenAI embedding model by default, which is why the code looked for an OpenAI API key that couldn’t be found. However, you can customize the embedding model to your liking.

Create a configuration dictionary with an embedder key:

config = {
    'embedder': {
        'provider': 'your-provider',  # Providers like 'openai', 'ollama', etc.
        'config': {
            'model': 'your-provider/your-embedding-llm',
            'task_type': 'retrieval_document',
            # You can add other configuration options here
        }
    }
}

When initializing your tool, pass the configuration:

my_tool = YourTool(config=config)
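
For example, to switch to a Google embedding model, the config might look like this (a sketch based on the embedchain-style config CrewAI uses; the RagTool import, provider name, and model name are assumptions to verify against your installed version):

from crewai_tools import RagTool

# Assumes GOOGLE_API_KEY is set in the environment for the Google embedder.
config = {
    'embedder': {
        'provider': 'google',
        'config': {
            'model': 'models/embedding-001',
            'task_type': 'retrieval_document',
        }
    }
}

rag_tool = RagTool(config=config)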

Error #2

The CrewAI LLM class leverages LiteLLM in the background, which supports a wide range of LLM providers. The model name must be prefixed with the LLM provider. When using Google AI Studio, set it as follows:

import os

from crewai import Agent, LLM

my_llm = LLM(
    api_key=os.getenv("GEMINI_API_KEY"),
    model="gemini/gemini-pro",
)

my_agent = Agent(
    ...,
    llm=my_llm,
)
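
Under the hood, the gemini/ prefix is what tells LiteLLM which provider to route the call to. Roughly, CrewAI ends up making a LiteLLM call like this (a simplified sketch, not CrewAI’s exact internals):

import os
from litellm import completion

# The "gemini/" prefix selects the Google AI Studio provider in LiteLLM.
response = completion(
    model="gemini/gemini-pro",
    messages=[{"role": "user", "content": "Hello"}],
    api_key=os.getenv("GEMINI_API_KEY"),
)
print(response.choices[0].message.content)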

Thanks Rok!
I was also able to touch base with the GCP Gemini team, who suggested integrating through Gemini’s OpenAI API compatibility instead of the native Gemini SDKs, which resolved the errors!

FYI, two options:
a. Through Vertex AI: Call Vertex AI models using the OpenAI library | Generative AI on Vertex AI | Google Cloud
b. Through Google AI Studio: OpenAI compatibility | Gemini API | Google AI for Developers
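
For option (b), the call looks roughly like this (a sketch following Google’s OpenAI-compatibility docs; the model name is just an example):

import os
from openai import OpenAI

# Base URL from Google's OpenAI-compatibility docs for the Gemini API.
client = OpenAI(
    api_key=os.getenv("GEMINI_API_KEY"),
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = client.chat.completions.create(
    model="gemini-1.5-flash",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)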

BTW @rokbenko, the only Gemini model name that’s not erroring out is gemini-pro. I’ve tried most of the options and naming conventions; they all error out as above.

Here’s the snippet

import os
from openai import OpenAI

# Initialize Google API key
GOOGLE_API_KEY = os.getenv('GOOGLE_API_KEY')
if not GOOGLE_API_KEY:
    raise ValueError("GOOGLE_API_KEY not found in environment variables")

class CustomGeminiLLM:
    def __init__(self):
        # OpenAI client pointed at the Gemini endpoint
        self.client = OpenAI(
            api_key=os.getenv('GOOGLE_API_KEY'),
            base_url="https://generativelanguage.googleapis.com/v1/models"
        )
        self.temperature = 0.7
        self.max_tokens = 8100
        self.model_name = "gemini-pro"
        self.supports_stop_words = False
        self.supports_functions = False