Which Task and Agent Attributes Can Be Put in the YAML File?

The docs do not include an @llm decorator, but it seems to work in this repo (GitHub - rthidden/game-builder-crew) and in the example @Max_Moura shared above.

Does that mean we can create our own decorators for any attribute we want to include in the YAML? For example, @function_calling_llm or @prompt_template?

Here is the agents.yaml file:

senior_engineer_agent:
  role: >
    Senior Software Engineer
  goal: >
    Create software as needed
  backstory: >
    You are a Senior Software Engineer at a leading tech think tank.
    Your expertise is in Python programming, and you do your best to produce perfect code.
  allow_delegation: False
  verbose: True
  tools:
    - serper_tool
  llm: mini_llm

qa_engineer_agent:
  role: >
    Software Quality Control Engineer
  goal: >
    Create perfect code by analyzing the given code for errors
  backstory: >
    You are a software engineer that specializes in checking code
    for errors. You have an eye for detail and a knack for finding
    hidden bugs.
    You check for missing imports, variable declarations, mismatched
    brackets and syntax errors.
    You also check for security vulnerabilities and logic errors.
  allow_delegation: False
  verbose: True
  tools:
    - serper_tool

chief_qa_engineer_agent:
  role: >
    Chief Software Quality Control Engineer
  goal: >
    Ensure that the code does the job that it is supposed to do
  backstory: >
    You feel that programmers always do only half the job, so you are
    super dedicated to making high-quality code.
  allow_delegation: True
  verbose: True
  tools:
    - serper_tool
"""This file contains the crew definition for the GameBuilder crew"""
from typing import List
from crewai import Agent, Crew, Process, Task, LLM
from crewai.project import CrewBase, agent, crew, task, tool, llm
from crewai_tools import SerperDevTool, BaseTool

@CrewBase
class GameBuilderCrew:
    """GameBuilder crew"""
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @agent
    def senior_engineer_agent(self) -> Agent:
        """Creates the Senior Engineer Agent"""
        return Agent(config=self.agents_config['senior_engineer_agent'])

    @agent
    def qa_engineer_agent(self) -> Agent:
        """Creates the QA Engineer Agent"""
        return Agent(config=self.agents_config['qa_engineer_agent'])

    @agent
    def chief_qa_engineer_agent(self) -> Agent:
        """Creates the Chief QA Engineer Agent"""
        return Agent(config=self.agents_config['chief_qa_engineer_agent'])

    @task
    def code_task(self) -> Task:
        """Creates the Code Task"""
        return Task(
            config=self.tasks_config['code_task'],
            agent=self.senior_engineer_agent()
        )   

    @task
    def review_task(self) -> Task:
        """Creates the Review Task"""
        return Task(
            config=self.tasks_config['review_task'],
            agent=self.qa_engineer_agent(),
            # output_json=ResearchRoleRequirements
        )

    @task
    def evaluate_task(self) -> Task:
        """Creates the Evaluate Task"""
        return Task(
            config=self.tasks_config['evaluate_task'],
            agent=self.chief_qa_engineer_agent()
        )

    @tool
    def serper_tool(self) -> BaseTool:
        return SerperDevTool()
    
    @llm
    def mini_llm(self) -> LLM:
        return LLM(
            model='openai/gpt-4o',
            temperature=0.7,
            timeout=90,
            max_tokens=8192,
        )

    @crew
    def crew(self) -> Crew:
        """Creates the GameBuilderCrew"""
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )
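
In case anyone wants to try it, kicking off this crew looks roughly like this (a minimal sketch; the "game" input key is only an illustration and must match whatever placeholders your tasks.yaml uses):

if __name__ == "__main__":
    # Build the crew from the decorated class and run it once.
    result = GameBuilderCrew().crew().kickoff(
        inputs={"game": "a simple Pac-Man clone in Python"}
    )
    print(result.raw)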

Thank you for sharing the repo. I think you probably can write your own decorators, but I personally wouldn’t go crazy with it. LoL

Regarding your @llm working, that’s great! :partying_face:

I can’t get mine to work, apparently due to my variation on things:

  • ./llms.py: (see below). This is where I prefer to define my various crewai.LLM() factory functions (configurations), simply to keep the crew.py file clutter-free.

  • ./crew.py: Then, I import whichever ones I need from there into crew.py.

It works perfectly, just not with the @llm decorators (notice that I had to comment them out), even though it should have worked. I suspect they need to be defined directly inside the MyCrew class in crew.py (like you and @Max_Moura did), rather than imported into it.
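
If that's the case, I imagine a thin wrapper method inside MyCrew that delegates to the imported factory might be enough to keep the definitions in llms.py (an untested guess on my part, not something I've verified):

  @llm
  def cloud_llm(self) -> LLM:
    # The decorated method lives on the crew class, but the actual
    # configuration still comes from the factory in llms.py.
    return get_cloud_llm()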

./llms.py:

import crewai
from typing import Optional, Dict, Any

def get_llm(config: Optional[Dict[str, Any]] = None) -> crewai.LLM:
    if config is None: config = {}
    return crewai.LLM(
        model=config.get('model', 'model-name-string'),
        callbacks=config.get('callbacks', []),
        api_key=config.get('api_key', 'service-api-key-string'),
        base_url=config.get('base_url', None),
        api_base=config.get('api_base', None),
        timeout=config.get('timeout', None),
        temperature=config.get('temperature', None),
        top_p=config.get('top_p', None),
        n=config.get('n', None),
        stop=config.get('stop', None),
        max_completion_tokens=config.get('max_completion_tokens', None),
        max_tokens=config.get('max_tokens', None),
        presence_penalty=config.get('presence_penalty', None),
        frequency_penalty=config.get('frequency_penalty', None),
        logit_bias=config.get('logit_bias', None),
        response_format=config.get('response_format', None), # {"type": "json"} | type[BaseModel]
        seed=config.get('seed', None),
        logprobs=config.get('logprobs', None),
        top_logprobs=config.get('top_logprobs', None),
        api_version=config.get('api_version', None),
        reasoning_effort=config.get('reasoning_effort', None))

#@llm
def get_ollama_llm(config: Optional[Dict[str, Any]] = None) -> crewai.LLM:
    if config is None: config = dict()
    config['model'] = config.get('model', 'ollama/phi4:latest')
    config['base_url'] = config.get('base_url', 'http://0.0.0.0:11434')
    config['api_key'] = config.get('api_key', 'ollama')
    config['temperature'] = config.get('temperature', 0.7)
    return get_llm(config)

#@llm
def get_cloud_llm(config: Optional[Dict[str, Any]] = None) -> crewai.LLM:
    if config is None: config = dict()
    config['model'] = config.get('model', 'claude-3-7-sonnet-latest')
    config['api_key'] = config.get('api_key', 'cloud-api-key-string')
    config['temperature'] = config.get('temperature', 0.7)
    return get_llm(config)

Usage looks like this in ./crew.py:

from crewai_mycrew.llms import get_ollama_llm, get_cloud_llm
from dotenv import load_dotenv
import os
load_dotenv(os.environ.get('API_KEYS_ENV_FILE'))

@CrewBase
class MyCrew():
  """My Crew"""

[ ... snip ... ]
  @agent
  def my_agent(self) -> Agent:
    cloud_llm_config = {"model": "claude-3-7-sonnet-latest",
                        "api_key": os.environ['ANTHROPIC_API_KEY']}
    agent_obj = Agent(config=self.agents_config['my_agent'], 
                      verbose=True,
                      tools=[duckduckgo_search_tool,],
                      llm=get_cloud_llm(config=cloud_llm_config),)
[ ... snip ... ]

This works perfectly as long as the @llm decorator is omitted. If it is included, then I must add it to agents.yaml (like you did), but for some reason it doesn’t like that I’m importing these functions (I get unhashable dict exceptions). I’ll get back to it soon. :blush: This works well for now and lets me factor some LLM boilerplate code out into its own module.


Thanks for sharing. I like what you are doing with llms.py; I want to do it in a YAML file. I think you have the right idea.

Oh, and adding tools to the YAML file works in conjunction with the @tool decorator and defining it in crew.py, as discussed above. Solving that problem was satisfying.

So there are at least two things we can move into the YAML to keep crew.py clean: tools and LLMs.


@rthidden
Yes! Ironically, the issue I describe above with the @llm decorator is not at all a problem with the @tool decorator. Meaning, I also have tools factored out into their own file - .../src/tools/custom_tools.py - which are imported into ./crew.py to be used in Agent(...) and simultaneously specified in ./agents.yaml.

from crewai_my_crew.tools.custom_tools import duckduckgo_search_tool

@CrewBase
class MyCrew():

[ ... snip ... ]

  agent_obj = Agent(config=self.agents_config['researcher'],
                    verbose=True,
                    tools=[duckduckgo_search_tool,],
                    llm=get_cloud_llm(config=cloud_llm_config),)

Not a peep of a problem doing the same with @tool and it’s not commented out. :man_shrugging:
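
For reference, a minimal version of the duckduckgo_search_tool itself could be as simple as the sketch below. This is a simplified illustration rather than my exact file, and it assumes the @tool function decorator from crewai.tools plus the DDGS client from the duckduckgo_search package:

from crewai.tools import tool
from duckduckgo_search import DDGS

@tool("DuckDuckGo Search")
def duckduckgo_search_tool(query: str) -> str:
  """Search DuckDuckGo and return the top results as plain text."""
  results = DDGS().text(query, max_results=5)
  return "\n".join(str(result) for result in results)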

I couldn’t make heads or tails of it (with @llm), so I let it be for now, since I have a lot of other reading and experimenting to do with CrewAI. Sigh!


Update - LLMs defined in a YAML file

Hi everyone, hope you’re all doing well.

I saw that the discussion continued, so I decided to draft a poor man’s solution that allows us to define our LLMs in a YAML file as well, making things more organized.

Before the code, some considerations:

  • The agents.yaml file defines agents that can receive the documented crewai.Agent attributes.
  • The tasks.yaml file defines tasks that can receive the documented crewai.Task attributes.
  • The llms.yaml file defines LLMs that can receive the attributes of a crewai.LLM, for example: model, timeout, temperature, top_p, n, stop, max_completion_tokens, max_tokens, presence_penalty, frequency_penalty, logit_bias, seed, logprobs, top_logprobs, base_url, api_base, api_version, api_key, reasoning_effort.
  • The llm attribute of the agents defined in agents.yaml must have the same name as a method wrapped with the @llm decorator.
  • The same applies to the tools attribute of the agents defined in agents.yaml, which must reference the name of a method wrapped with the @tool decorator.
  • The proposed llm_from_yaml method returns a dict that must be unpacked with the double-asterisk (**) operator (see the short example right after this list).
  • The code uses Gemini, so you should adapt it to your preferred LLM.
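
To make the double-asterisk point concrete: given the gemini_high_temp_llm entry from the llms.yaml below, unpacking the returned dict is equivalent to spelling the keyword arguments out by hand.

config = {'model': 'gemini/gemini-2.0-flash', 'temperature': 1.2, 'max_tokens': 512}
llm = LLM(**config)
# Same as: LLM(model='gemini/gemini-2.0-flash', temperature=1.2, max_tokens=512)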

Enough talk, let’s code. First, this is the directory structure:

crazy_crew
├── config
│   ├── agents.yaml
│   ├── llms.yaml
│   └── tasks.yaml
└── crew.py

agents.yaml file:

crazy_scientist:
  role: >
    Eccentric Experimental Physicist
  goal: >
    Unravel the mysteries of the universe, no matter how bizarre!
  backstory: >
    A hyper-enthusiastic, slightly unhinged physicist with a penchant for
    wild theories and dramatic pronouncements. Speaks rapidly, often
    trailing off into excited muttering. Believes in the most outlandish
    possibilities. Prone to sudden, loud exclamations!
  verbose: true
  llm: gemini_high_temp_llm
  tools:
    - my_tool_1

tasks.yaml file:

crazy_task:
  description: >
    Answer the following question with a single paragraph,
    and keep it under 400 characters: {user_question}
  expected_output: >
    A single, short, and eccentric paragraph.
  agent: crazy_scientist

llms.yaml file:

gemini_high_temp_llm:
  model: gemini/gemini-2.0-flash
  temperature: 1.2
  max_tokens: 512

gemini_low_temp_llm:
  model: gemini/gemini-2.0-flash
  temperature: 0.2
  max_tokens: 512

Finally, the crew.py file putting it all together:

from typing import Dict, Any
from pathlib import Path
import logging
import os
import yaml

from crewai import Agent, Crew, Task, Process, LLM
from crewai.project import CrewBase, agent, task, crew, tool, llm
from crewai.tools import BaseTool
from crewai_tools import DirectoryReadTool


os.environ['GEMINI_API_KEY'] = 'YOUR_KEY_NOT_MINE'


@CrewBase
class CrazyCrew:
    """
    CrazyCrew class for AI agent configuration and task execution.

    This class configures and manages the AI agents, tasks, and LLMs
    using YAML configurations to provide an organized and flexible framework.
    """

    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'
    llms_config = 'config/llms.yaml'

    @agent
    def crazy_scientist(self) -> Agent:
        return Agent(
            config=self.agents_config['crazy_scientist'],
        )

    @task
    def crazy_task(self) -> Task:
        return Task(
            config=self.tasks_config['crazy_task'],
        )

    @llm
    def gemini_high_temp_llm(self) -> LLM:
        return LLM(
            **self.llm_from_yaml('gemini_high_temp_llm')
        )
    
    @llm
    def gemini_low_temp_llm(self) -> LLM:
        return LLM(
            **self.llm_from_yaml('gemini_low_temp_llm')
        )

    @tool
    def my_tool_1(self) -> BaseTool:
        return DirectoryReadTool(directory='./')

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )

    def llm_from_yaml(self, llm_name: str) -> Dict[str, Any]:
        """
        Load LLM configuration from YAML file.

        This method reads the LLM configuration from a YAML file and returns
        the configuration for the specified LLM name.

        Args:
            llm_name: Name of the LLM configuration to load.

        Returns:
            Dictionary containing the LLM configuration parameters.

        Raises:
            FileNotFoundError: If the LLMs config file doesn't exist.
            KeyError: If the specified LLM name is not found in the config.
        """
        base_directory = Path(__file__).parent
        original_llms_config_path = getattr(
            self, 'llms_config', 'config/llms.yaml'
        )
        llms_config_path = base_directory / original_llms_config_path

        try:
            with open(llms_config_path, 'r', encoding='utf-8') as file:
                llms_config: Dict[str, Dict[str, Any]] = yaml.safe_load(file)
        except FileNotFoundError:
            logging.error(f"LLMs config file not found at {llms_config_path}.")
            raise

        try:
            llm_config: Dict[str, Any] = llms_config[llm_name]
        except KeyError:
            logging.error(
                f"LLM config for '{llm_name}' not found in {llms_config_path}."
            )
            raise

        return llm_config


def main() -> None:
    """
    Main function to initialize and run the CrazyCrew.

    This function creates an instance of CrazyCrew, assembles the crew,
    and kicks off the process with a specific input question.
    """
    crazy_crew = CrazyCrew().crew()
    crew_output = crazy_crew.kickoff(
        inputs={"user_question": "Can I run faster than light?"}
    )
    print(f'\n[🧠 ANSWER] {crew_output.raw}\n')


if __name__ == "__main__":
    main()

Hope this helps clarify the additional questions that came up, as well as contribute to even more organized work!


Wow! Thanks @Max_Moura! Do you know if we can put any of the other task or agent attributes in a yaml file?


Thank you! I’ve enjoyed your expansion and contributions to this conversation - your clarity, writing style, and attention to detail (like factoring out code, and even ending in-line comments with periods, a pet peeve of mine). :grin: It made for an engaging read, even on my small Android screen initially.

I look forward to adapting this pattern into my work and will share any observations I gather along the way.


These explanations merited saving: I bookmarked them as handy TinyURLs and keep them in my personal documentation. :grinning_face_with_smiling_eyes:
