LLMs not using function calling?

In trying to understand how my tools are being executed, I was using Langtrace to check the actual LLM calls.

It appears that CrewAI doesn’t actually use OpenAI function calling; instead, it asks for the tool to be called via the prompt and then parses the tool call out of the completion response.

Can anyone confirm that this is true? I checked the codebase and couldn’t find any evidence that the LLM calls use function calling either.
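For contrast, this is roughly the request shape native function calling would need: a `tools` array passed to the chat completions endpoint, after which the model replies with a structured `tool_calls` entry instead of plain text. This is an illustrative sketch, not CrewAI code; the tool name and parameter schema here are made up for the example.

```python
def build_tool_schema():
    """Build an OpenAI-style `tools` payload for a chat completion request.
    The tool name and parameters are hypothetical, chosen only to mirror
    the DuckDuckGo search tool discussed above."""
    return [{
        "type": "function",
        "function": {
            "name": "duckduckgo_search",
            "description": "Search the web with DuckDuckGo.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query"},
                },
                "required": ["query"],
            },
        },
    }]

# With native function calling, the tool call would arrive as structured
# JSON in message.tool_calls, e.g.:
#   tool_calls=[{"function": {"name": "duckduckgo_search",
#                             "arguments": '{"query": "..."}'}}]
# rather than as "Action: ..." text inside message.content.
```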

That seems to be true. Printing the raw response from the model in llm.py confirms your observation: the action for the DuckDuckGo tool is only present in the message content, while both function_call and tool_calls are None:

llm.py RAW RESPONSE:
ModelResponse(id='chatcmpl-3t29u1y6393tnfcgkx8bxc', created=1740607461, model='meta-llama-3.1-8b-instruct', object='chat.completion', system_fingerprint='meta-llama-3.1-8b-instruct', choices=[Choices(finish_reason='stop', index=0, 
message=Message(content='Action: DuckDuckGo Search\nAction Input: {"query": "History of LLMs from 2010 until 2025', role='assistant', 
tool_calls=None, 
function_call=None, provider_specific_fields={'refusal': None}, refusal=None))], usage=Usage(completion_tokens=28, prompt_tokens=479, total_tokens=507, completion_tokens_details=None, prompt_tokens_details=None), service_tier=None, stats={})
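So the tool invocation has to be recovered from the ReAct-style text in message.content. A minimal sketch of what such parsing could look like is below; CrewAI’s actual parser lives in its codebase and is certainly more involved, so treat this as an approximation, not its implementation.

```python
import re

def parse_tool_call(text: str):
    """Extract the tool name and raw input from a ReAct-style completion
    such as 'Action: <tool>\\nAction Input: <json>'.
    Rough approximation of prompt-based tool dispatch for illustration."""
    action = re.search(r"Action:\s*(.+)", text)
    # DOTALL so a multi-line JSON argument block is captured whole
    action_input = re.search(r"Action Input:\s*(.+)", text, re.DOTALL)
    if not action:
        return None  # no tool call present in the completion
    return {
        "tool": action.group(1).strip(),
        "input": action_input.group(1).strip() if action_input else "",
    }

raw = 'Action: DuckDuckGo Search\nAction Input: {"query": "History of LLMs"}'
call = parse_tool_call(raw)
```

Note this is exactly why a truncated completion (like the cut-off JSON in the log above) can break tool execution: the parser only has free-form text to work with, with no structural guarantee from the API.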

I also tried models specialized in function calling, e.g. “watt-tool-8b”, and even then the tool call arrived as plain text; the model just managed to generate the needed output so that CrewAI could trigger the DuckDuckGo Search.