Hi everyone,
The solution I'm going to present aims to clarify how to use the `tools` and `llm` attributes in the YAML configuration files for agents.
## Understanding the problem
When replicating the problem, you'll encounter a `KeyError` raised from a dictionary lookup in `crewai/project/crew_base.py`. Let's look at the first lines of the method that raises the exception:
```python
def map_agent_variables(
    self,
    agent_name: str,
    agent_info: Dict[str, Any],
    agents: Dict[str, Callable],
    llms: Dict[str, Callable],
    tool_functions: Dict[str, Callable],
    cache_handler_functions: Dict[str, Callable],
    callbacks: Dict[str, Callable],
) -> None:
    if llm := agent_info.get("llm"):
        try:
            # look up the YAML string in the llms dict and call the factory
            self.agents_config[agent_name]["llm"] = llms[llm]()
        except KeyError:
            # fall back to using the raw string as the model name
            self.agents_config[agent_name]["llm"] = llm
    if tools := agent_info.get("tools"):
        # no fallback here: a missing key raises KeyError
        self.agents_config[agent_name]["tools"] = [
            tool_functions[tool]() for tool in tools
        ]
```
OK, let's see what it does with the `llm` attribute from the YAML. It first looks up the string you set as `llm` as a key in the `llms` dict. If it finds that key, it runs the callable associated with it. If the string you provided is not a key (which happens often, since `llms = {}` by default), then it simply uses that string as the `model` parameter for a `crewai.LLM` object later. If you don't believe me, go read the code yourself.
And what about the `tools` attribute from the YAML, which is a list of strings? It looks up each string as a key in the `tool_functions` dict and runs the callable associated with that key. Note that, unlike `llm`, there is no fallback: if a tool name is not a key in the dict, you get the `KeyError` mentioned above. The sketch below illustrates both behaviors.
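To make the difference concrete, here is a minimal standalone sketch (my own illustration, not CrewAI code; the dicts and `agent_info` values are made up) of the two lookups described above:

```python
# Minimal standalone sketch of the lookup logic described above.
# The empty dicts mimic a project with no @llm/@tool-decorated methods.
tool_functions = {}
llms = {}

agent_info = {"llm": "max_llm", "tools": ["max_tool"]}

# llm: falls back to the raw string on a missing key
try:
    llm = llms[agent_info["llm"]]()
except KeyError:
    llm = agent_info["llm"]  # kept as a plain model string

# tools: no fallback, so the list comprehension blows up
try:
    tools = [tool_functions[name]() for name in agent_info["tools"]]
except KeyError as e:
    print(f"KeyError: {e}")  # -> KeyError: 'max_tool'
```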
## Finding the solution
The `llms` and `tool_functions` dicts are populated by a scan done in an earlier method named `map_all_agent_variables`, which I'll omit here. That scan looks for the decorators defined in `crewai/project/__init__.py`.
You're certainly already used to writing `from crewai.project import CrewBase, agent, task, crew`; the new thing is that you can (and should) also write `from crewai.project import tool, llm`. That's the key insight.
Once you do this, the callables decorated with `@tool` are automatically collected into the `tool_functions` dict, and likewise the callables decorated with `@llm` are automatically collected into the `llms` dict. This is elegant.
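For intuition, here is a rough sketch of how such a decorator-based scan can work. This is hypothetical code, not the actual CrewAI implementation (the real one lives in the method I omitted above); the marker attributes `is_llm` and `is_tool` are names I made up for illustration:

```python
from typing import Any, Callable, Dict, Tuple

def llm(func: Callable) -> Callable:
    func.is_llm = True  # mark the method so a later scan can find it
    return func

def tool(func: Callable) -> Callable:
    func.is_tool = True
    return func

def collect(instance: Any) -> Tuple[Dict[str, Callable], Dict[str, Callable]]:
    """Scan an instance for decorated methods, keyed by method name."""
    llms: Dict[str, Callable] = {}
    tool_functions: Dict[str, Callable] = {}
    for name in dir(instance):
        member = getattr(instance, name)
        if getattr(member, "is_llm", False):
            llms[name] = member
        elif getattr(member, "is_tool", False):
            tool_functions[name] = member
    return llms, tool_functions
```

The method name is the dict key, which is why the strings in the YAML must match the names of the decorated methods exactly.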
## Semi-functional example
File `max_agents.yaml`:

```yaml
max_agent:
  role: >
    Max Role
  goal: >
    Max Goal
  backstory: >
    Max Backstory
  verbose: true
  llm: max_llm
  tools:
    - max_tool
```
File `max_tasks.yaml`:

```yaml
max_task:
  description: >
    Max's question is: {user_question}
  expected_output: >
    Max answer
  agent: max_agent
```
File `crew.py`:

```python
import os

from crewai import Agent, Crew, Task, Process, LLM
from crewai.project import CrewBase, agent, task, crew, tool, llm
from crewai.tools import BaseTool
from crewai_tools import DirectoryReadTool

os.environ['LLM_API_KEY'] = 'YOUR_KEY_NOT_MINE'


@CrewBase
class MaxCrew:
    agents_config = 'max_agents.yaml'
    tasks_config = 'max_tasks.yaml'

    @agent
    def max_agent(self) -> Agent:
        return Agent(
            config=self.agents_config['max_agent'],
        )

    @task
    def max_task(self) -> Task:
        return Task(
            config=self.tasks_config['max_task'],
        )

    # The method name 'max_tool' is what `tools: [max_tool]` in the
    # YAML resolves to.
    @tool
    def max_tool(self) -> BaseTool:
        return DirectoryReadTool(directory='./')

    # The method name 'max_llm' is what `llm: max_llm` in the
    # YAML resolves to.
    @llm
    def max_llm(self) -> LLM:
        return LLM(
            model='max/maxgpt-999b-chat',
            temperature=0.7,
            timeout=90,
            max_tokens=512,
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
```
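For completeness, here is how you would run it; `kickoff` is the standard CrewAI entry point for a crew, and the `inputs` dict fills the `{user_question}` placeholder from the task YAML:

```python
# Hypothetical run script for the crew above.
if __name__ == '__main__':
    result = MaxCrew().crew().kickoff(
        inputs={'user_question': 'What files are in this directory?'}
    )
    print(result)
```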
## Suggestions
I suggest that the CrewAI team enrich the documentation on this topic and perhaps even make the `crewai create crew` command add this import to the generated `crew.py` file, as this would encourage the use of this approach.