Hi, I'm trying to set "function_calling_llm" to increase the hit rate on generating correct function arguments. I'm using a custom model as below, but it doesn't work:
from crewai import LLM

function_llm = LLM(
    model="openai/llama3",
    base_url="http://localhost:8000/v1",
    api_key="sk_1234",
    temperature=0.5,
    top_p=0.5,
    max_tokens=1024,
)
@agent
def my_agent(self) -> Agent:
    return Agent(
        config=self.agents_config['my_agent'],
        verbose=True,
        allow_code_execution=True,
        function_calling_llm=function_llm,
    )
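To rule out the endpoint itself, this is the kind of direct litellm call I would use to verify the server and credentials (a minimal sketch; note that litellm names the base URL parameter api_base):

import litellm

# Call the local OpenAI-compatible server directly, passing the same
# parameters that crewai should be forwarding on my behalf.
response = litellm.completion(
    model="openai/llama3",
    api_base="http://localhost:8000/v1",
    api_key="sk_1234",
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)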
Through debugging, I found that this "function_calling_llm" setting never seems to take effect. Here are some possible issues I found.
- A custom LLM cannot work because the api_key and other parameters are not passed down to litellm for the request. According to the "to_pydantic" function in the "InternalInstructor" class, only the "model" attribute is passed down, while the others, such as "api_key", are discarded:
def to_pydantic(self):
    messages = [{"role": "user", "content": self.content}]
    if self.instructions:
        messages.append({"role": "system", "content": self.instructions})
    model = self._client.chat.completions.create(
        model=self.llm.model,
        response_model=self.model,
        messages=messages,
    )
    return model
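If litellm falls back to environment variables when api_key and base_url are not passed explicitly (which I believe it does for OpenAI-compatible providers, though I have not confirmed this for my setup), a possible workaround would be:

import os

# Workaround sketch: expose the credentials where litellm's OpenAI
# provider looks for them, since crewai does not forward them itself.
# This assumes the "openai/" model prefix routes to litellm's OpenAI
# path, which reads OPENAI_API_KEY and OPENAI_API_BASE.
os.environ["OPENAI_API_KEY"] = "sk_1234"
os.environ["OPENAI_API_BASE"] = "http://localhost:8000/v1"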
- Errors happen in the following "_function_calling" function in the "ToolUsage" class:
tool_object = converter.to_pydantic()
calling = ToolCalling(
    tool_name=tool_object["tool_name"],
    arguments=tool_object["arguments"],
    log=tool_string,  # type: ignore
)
The converter.to_pydantic() call returns a Pydantic model instance, which is not subscriptable, so it is very strange that the logic above reads the attributes of "tool_object" with dictionary-style indexing.
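A tiny standalone reproduction of what I mean (ToolCallingLike is a hypothetical stand-in for the model that converter.to_pydantic() returns):

from pydantic import BaseModel

# Hypothetical stand-in for the Pydantic model returned by to_pydantic().
class ToolCallingLike(BaseModel):
    tool_name: str
    arguments: dict

obj = ToolCallingLike(tool_name="search", arguments={"q": "test"})
print(obj.tool_name)  # attribute access works fine

try:
    obj["tool_name"]  # dictionary-style indexing does not
except TypeError as exc:
    print(exc)  # 'ToolCallingLike' object is not subscriptable

So I would expect the call site to use attribute access instead, something like tool_object.tool_name and tool_object.arguments, but maybe I'm missing something.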
The crewai version is 0.80.0, and I'm not sure whether "function_calling_llm" is ready in this version.
Am I using this correctly? Any suggestions?