How to use "function_calling_llm" for agent

Hi, I'm trying to set "function_calling_llm" to increase the hit rate of generating correct function arguments. I'm using a custom model as below, but it doesn't work.

from crewai import LLM

function_llm = LLM(
    model="openai/llama3",
    base_url="http://localhost:8000/v1",
    api_key="sk_1234",
    temperature=0.5,
    top_p=0.5,
    max_tokens=1024,
)
@agent
def my_agent(self) -> Agent:
    return Agent(
        config=self.agents_config['my_agent'],
        verbose=True,
        allow_code_execution=True,
        function_calling_llm=function_llm,
    )
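
For completeness, here is a minimal sketch of how I would also try setting it at the crew level, assuming Crew accepts a function_calling_llm parameter that is forwarded to its agents:

    @crew
    def crew(self) -> Crew:
        # Sketch only: apply the same function-calling LLM crew-wide, assuming
        # the Crew-level function_calling_llm parameter is passed to each agent.
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            function_calling_llm=function_llm,
        )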

Through debugging, I found that "function_calling_llm" never seems to take effect. Here are some possible issues I found.

  1. A custom LLM cannot work because api_key and the other parameters are not passed down to litellm for the request. In the to_pydantic function of the InternalInstructor class, only the model attribute is passed down, while the others such as api_key are discarded:
def to_pydantic(self):
    messages = [{"role": "user", "content": self.content}]
    if self.instructions:
        messages.append({"role": "system", "content": self.instructions})
    model = self._client.chat.completions.create(
        model=self.llm.model, response_model=self.model, messages=messages
    )
    return model
  2. Errors happen in the following part of the _function_calling function in the ToolUsage class:
    tool_object = converter.to_pydantic()
    calling = ToolCalling(
        tool_name=tool_object["tool_name"],
        arguments=tool_object["arguments"],
        log=tool_string,  # type: ignore
    )

converter.to_pydantic() returns a pydantic model, which is not subscriptable, so it is strange that the logic above accesses the attributes of tool_object as if it were a dictionary.
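
For reference, here is a minimal sketch of what I would expect to work instead, assuming InstructorToolCalling exposes tool_name and arguments as regular pydantic fields:

    tool_object = converter.to_pydantic()
    calling = ToolCalling(
        tool_name=tool_object.tool_name,    # attribute access instead of tool_object["tool_name"]
        arguments=tool_object.arguments,
        log=tool_string,
    )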

I'm on crewai 0.80.0, and I'm not sure whether "function_calling_llm" is fully supported in this version.

Am I using this correctly? Any suggestions?

Thanks for raising this. We are looking into it internally and will update this post once we have a solution for you.

Could you also please share any error messages you got?

Hi @matt, thanks for your reply. Here are the error messages I got.

  1. In the to_pydantic function of the InternalInstructor class:

Traceback (most recent call last):
  File "/Users/miqin/Library/Python/3.10/lib/python/site-packages/instructor/retry.py", line 142, in retry_sync
    response = func(*args, **kwargs)
  File "/Users/miqin/Library/Python/3.10/lib/python/site-packages/litellm/utils.py", line 960, in wrapper
    raise e
  File "/Users/miqin/Library/Python/3.10/lib/python/site-packages/litellm/utils.py", line 849, in wrapper
    result = original_function(*args, **kwargs)
  File "/Users/miqin/Library/Python/3.10/lib/python/site-packages/litellm/main.py", line 3059, in completion
    raise exception_type(
  File "/Users/miqin/Library/Python/3.10/lib/python/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2136, in exception_type
    raise e
  File "/Users/miqin/Library/Python/3.10/lib/python/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 310, in exception_type
    raise AuthenticationError(
litellm.exceptions.AuthenticationError: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

  2. I guess the above error is caused by the base_url and api_key of my custom LLM not being passed down. Therefore, I changed the source code to provide base_url and api_key like this:
    model = self._client.chat.completions.create(
        model=self.llm.model,
        api_key=self.llm.api_key,
        api_base=self.llm.base_url,
        response_model=self.model,
        messages=messages,
    )

Luckily, my custom LLM is now called successfully and the response is returned normally. Then another error happened in the _function_calling function of the ToolUsage class.

tool_object = converter.to_pydantic()
calling = ToolCalling(
    tool_name=tool_object["tool_name"],
    arguments=tool_object["arguments"],
    log=tool_string,  # type: ignore
)

Traceback (most recent call last):
  File "/Users/miqin/Library/Python/3.10/lib/python/site-packages/IPython/core/interactiveshell.py", line 3577, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "", line 2, in
    tool_name=tool_object["tool_name"],
TypeError: 'InstructorToolCalling' object is not subscriptable
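
For what it's worth, dumping the pydantic model to a plain dict before subscripting also avoids this TypeError; a rough sketch, assuming the returned model supports pydantic v2's model_dump():

    tool_dict = tool_object.model_dump()  # .dict() on pydantic v1
    calling = ToolCalling(
        tool_name=tool_dict["tool_name"],
        arguments=tool_dict["arguments"],
        log=tool_string,
    )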

Could you help check these issues? Much appreciated!