In the CrewAI agentic framework, at what point in the code does Crew enable LLM tasks to interact with tools? Are there specific prompts that guide the LLM to use tools, and how does the agent determine when tool interaction is needed?
Which part of the code should I refer to for a deeper understanding of this logic?
The concept of tools, and especially how they are called at a technical level, is rarely described. I would also like to get a deeper understanding of how it is done in a CrewAI context.
I found the LangChain docs helpful for understanding the general concept of how tools are called.
I hope it gives you some further insights as well.
Look into the file "parser.py", def parse(). Based on the model response message (field message=), CrewAI decides whether an action needs to be taken (i.e., a tool should be used) or whether it is a final answer.
Here is a raw response that triggers the DuckDuckGo search tool:
llm.py RAW RESPONSE:
ModelResponse(id='chatcmpl-3t29u1y6393tnfcgkx8bxc', created=1740607461, model='meta-llama-3.1-8b-instruct', object='chat.completion', system_fingerprint='meta-llama-3.1-8b-instruct', choices=[Choices(finish_reason='stop', index=0,
message=Message(content='Action: DuckDuckGo Search\nAction Input: {"query": "History of LLMs from 2010 until 2025', role='assistant', tool_calls=None, function_call=None, provider_specific_fields={'refusal': None}, refusal=None))], usage=Usage(completion_tokens=28, prompt_tokens=479, total_tokens=507, completion_tokens_details=None, prompt_tokens_details=None), service_tier=None, stats={})
The content of the Message is then passed to the parser, which returns either an AgentAction (run the named tool with the given input) or an AgentFinish (return the answer to the user).
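To make the decision logic concrete, here is a minimal sketch of a ReAct-style parser in the spirit of what is described above. This is not CrewAI's actual code: the class names AgentAction and AgentFinish mirror the types mentioned in the thread, but their fields and the regex are assumptions for illustration.

```python
import re
from dataclasses import dataclass

# Hypothetical stand-ins for the AgentAction / AgentFinish types
# returned by CrewAI's parser (fields are illustrative).
@dataclass
class AgentAction:
    tool: str        # e.g. "DuckDuckGo Search"
    tool_input: str  # raw argument string for the tool

@dataclass
class AgentFinish:
    output: str      # the final answer text

def parse(text: str):
    """Decide, ReAct-style, whether the model wants a tool or is done."""
    # An "Action: ... / Action Input: ..." pair signals a tool call.
    action_match = re.search(
        r"Action:\s*(.*?)\s*Action Input:\s*(.*)", text, re.DOTALL
    )
    if action_match:
        return AgentAction(
            tool=action_match.group(1).strip(),
            tool_input=action_match.group(2).strip(),
        )
    # Otherwise, a "Final Answer:" marker signals completion.
    if "Final Answer:" in text:
        return AgentFinish(output=text.split("Final Answer:", 1)[1].strip())
    raise ValueError(f"Could not parse model output: {text!r}")
```

Applied to the message content in the raw response above ("Action: DuckDuckGo Search\nAction Input: ..."), a parser like this would return an AgentAction, and the agent loop would then invoke the matching tool and feed its result back to the LLM.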