Update - LLMs defined in a YAML file
Hi everyone, hope you’re all doing well.
I saw that the discussion continued, so I decided to draft a poor man’s solution that allows us to define our LLMs in a YAML file as well, making things more organized.
Before the code, some considerations:

- The `agents.yaml` file defines agents that can receive these attributes.
- The `tasks.yaml` file defines tasks that can receive these attributes.
- The `llms.yaml` file defines LLMs that can receive the attributes of a `crewai.LLM`, for example: `model`, `timeout`, `temperature`, `top_p`, `n`, `stop`, `max_completion_tokens`, `max_tokens`, `presence_penalty`, `frequency_penalty`, `logit_bias`, `seed`, `logprobs`, `top_logprobs`, `base_url`, `api_base`, `api_version`, `api_key`, `reasoning_effort`.
- The `llm` attribute of an agent defined in `agents.yaml` must have the same name as a method wrapped with the `@llm` decorator.
- The same applies to the `tools` attribute of the agents defined in `agents.yaml`, which must reference the names of methods wrapped with the `@tool` decorator.
- The proposed `llm_from_yaml` method returns a dict that must be unpacked with the double-asterisk operator (see the short sketch after this list).
- The code uses Gemini, so you should adapt it to your preferred LLM.
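To make that last unpacking point concrete, here is a minimal sketch. The config values are the ones from the `llms.yaml` shown below; nothing here is CrewAI-specific beyond the `LLM` constructor:

```python
from crewai import LLM

# The dict loaded from llms.yaml for a given LLM name...
config = {'model': 'gemini/gemini-2.0-flash', 'temperature': 1.2, 'max_tokens': 512}

# ...is unpacked into keyword arguments, so this call:
llm = LLM(**config)

# is equivalent to writing out:
llm = LLM(model='gemini/gemini-2.0-flash', temperature=1.2, max_tokens=512)
```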
Enough talk, let’s code. First, this is the directory structure:
```
crazy_crew
├── config
│   ├── agents.yaml
│   ├── llms.yaml
│   └── tasks.yaml
└── crew.py
```
The `agents.yaml` file:
```yaml
crazy_scientist:
  role: >
    Eccentric Experimental Physicist
  goal: >
    Unravel the mysteries of the universe, no matter how bizarre!
  backstory: >
    A hyper-enthusiastic, slightly unhinged physicist with a penchant for
    wild theories and dramatic pronouncements. Speaks rapidly, often
    trailing off into excited muttering. Believes in the most outlandish
    possibilities. Prone to sudden, loud exclamations!
  verbose: true
  llm: gemini_high_temp_llm
  tools:
    - my_tool_1
```
The `tasks.yaml` file:
```yaml
crazy_task:
  description: >
    Answer the following question with a single paragraph,
    and keep it under 400 characters: {user_question}
  expected_output: >
    A single, short, and eccentric paragraph.
  agent: crazy_scientist
```
The `llms.yaml` file:
```yaml
gemini_high_temp_llm:
  model: gemini/gemini-2.0-flash
  temperature: 1.2
  max_tokens: 512

gemini_low_temp_llm:
  model: gemini/gemini-2.0-flash
  temperature: 0.2
  max_tokens: 512
```
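If you want to sanity-check this file before wiring it into the crew, a quick hypothetical snippet (run from the `crazy_crew` directory) shows the structure that `llm_from_yaml` below relies on, namely a dict keyed by LLM name:

```python
import yaml

# Quick sanity check: llms.yaml should parse into a dict keyed by LLM name.
with open('config/llms.yaml', 'r', encoding='utf-8') as file:
    llms = yaml.safe_load(file)

print(list(llms))                    # ['gemini_high_temp_llm', 'gemini_low_temp_llm']
print(llms['gemini_high_temp_llm'])  # {'model': 'gemini/gemini-2.0-flash', 'temperature': 1.2, 'max_tokens': 512}
```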
Finally, the `crew.py` file putting it all together:
```python
from typing import Dict, Any
from pathlib import Path
import logging
import os

import yaml

from crewai import Agent, Crew, Task, Process, LLM
from crewai.project import CrewBase, agent, task, crew, tool, llm
from crewai.tools import BaseTool
from crewai_tools import DirectoryReadTool

os.environ['GEMINI_API_KEY'] = 'YOUR_KEY_NOT_MINE'


@CrewBase
class CrazyCrew:
    """
    CrazyCrew class for AI agent configuration and task execution.

    This class configures and manages the AI agents, tasks, and LLMs
    using YAML configurations to provide an organized and flexible framework.
    """

    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'
    llms_config = 'config/llms.yaml'

    @agent
    def crazy_scientist(self) -> Agent:
        return Agent(
            config=self.agents_config['crazy_scientist'],
        )

    @task
    def crazy_task(self) -> Task:
        return Task(
            config=self.tasks_config['crazy_task'],
        )

    @llm
    def gemini_high_temp_llm(self) -> LLM:
        return LLM(
            **self.llm_from_yaml('gemini_high_temp_llm')
        )

    @llm
    def gemini_low_temp_llm(self) -> LLM:
        return LLM(
            **self.llm_from_yaml('gemini_low_temp_llm')
        )

    @tool
    def my_tool_1(self) -> BaseTool:
        return DirectoryReadTool(directory='./')

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )

    def llm_from_yaml(self, llm_name: str) -> Dict[str, Any]:
        """
        Load an LLM configuration from the YAML file.

        This method reads the LLM configuration from a YAML file and
        returns the configuration for the specified LLM name.

        Args:
            llm_name: Name of the LLM configuration to load.

        Returns:
            Dictionary containing the LLM configuration parameters.

        Raises:
            FileNotFoundError: If the LLMs config file doesn't exist.
            KeyError: If the specified LLM name is not found in the config.
        """
        base_directory = Path(__file__).parent
        original_llms_config_path = getattr(
            self, 'llms_config', 'config/llms.yaml'
        )
        llms_config_path = base_directory / original_llms_config_path

        try:
            with open(llms_config_path, 'r', encoding='utf-8') as file:
                llms_config: Dict[str, Dict[str, Any]] = yaml.safe_load(file)
        except FileNotFoundError:
            logging.error(f"LLMs config file not found at {llms_config_path}.")
            raise

        try:
            llm_config: Dict[str, Any] = llms_config[llm_name]
        except KeyError:
            logging.error(
                f"LLM config for '{llm_name}' not found in {llms_config_path}."
            )
            raise

        return llm_config


def main() -> None:
    """
    Main function to initialize and run the CrazyCrew.

    This function creates an instance of CrazyCrew, assembles the crew,
    and kicks off the process with a specific input question.
    """
    crazy_crew = CrazyCrew().crew()
    crew_output = crazy_crew.kickoff(
        inputs={"user_question": "Can I run faster than light?"}
    )
    print(f'\n[🧠 ANSWER] {crew_output.raw}\n')


if __name__ == "__main__":
    main()
```
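As a quick, hypothetical smoke test of the plumbing (assuming you run it from inside `crazy_crew`, with the config files in place), you can call `llm_from_yaml` directly and inspect the kwargs before they ever reach `LLM(**...)`:

```python
from crew import CrazyCrew

# Inspect the raw kwargs that LLM(**...) will receive.
crew_base = CrazyCrew()
print(crew_base.llm_from_yaml('gemini_low_temp_llm'))
# Expected: {'model': 'gemini/gemini-2.0-flash', 'temperature': 0.2, 'max_tokens': 512}
```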
Hope this helps clarify the additional questions that came up, and contributes to even more organized work!