Hello Friends,
I recently used VS Code Copilot (which has knowledge of CrewAI) to generate the sample ./agents.yaml file below. I wanted to see all possible attribute/value pairs I could specify in the YAML file instead of directly in CrewAI code.
I’ll stipulate that VS Code Copilot could be wrong, but let’s go with it for the purpose of the question.
My main question is how rich the values in the YAML file can be, for any attribute.
Let us use the ./agents.yaml :: llm attribute as an example. The Agent attributes section of the docs lists these possible types for the llm attribute:

`llm` (optional): `Union[str, LLM, Any]`
Those make sense when using the LLM() class in code. However, can the LLM type also be specified in the YAML file by referencing a Python identifier? For example, first define this in, say, a helper module:

```python
my_llm = LLM(...)
```

Then, in the YAML file:

```yaml
llm: my_llm
```
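As far as I can tell the YAML loader only hands back plain strings, so a sketch of what I mean would be resolving the identifier myself after loading the config. Everything here (the `LLM` stand-in class, `LLM_REGISTRY`, `my_llm`) is illustrative, not part of the CrewAI API:

```python
# Stand-in for crewai.LLM so this sketch runs on its own.
class LLM:
    def __init__(self, model, temperature=0.7):
        self.model = model
        self.temperature = temperature

my_llm = LLM(model="gpt-3.5-turbo")

# Hypothetical registry mapping the identifier used in YAML to the object.
LLM_REGISTRY = {"my_llm": my_llm}

# What the parsed YAML would give us: just the string "my_llm".
agent_config = {"llm": "my_llm"}

# Resolve the string to an object, falling back to the raw string
# (e.g. "ollama/phi4:latest") when it is not in the registry.
resolved = LLM_REGISTRY.get(agent_config["llm"], agent_config["llm"])
```

That is, the string-to-object mapping would have to happen in my own code; nothing in the docs suggests CrewAI does this lookup for you.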
The documentation doesn’t detail what is permitted for YAML file attributes and values - it only provides an example.
Anyway, the reason this question arose, apart from the documentation not offering detail, is that attempting the following raised an exception:
YAML snippet:

```yaml
# [ ... ] Again, this was suggested by VS Code Copilot and could be wrong.
llm:                        # This parses to a Python dict.
  model: "gpt-3.5-turbo"    # Default model
  temperature: 0.7          # Default temperature
# [ ... ]
```

Exception raised (referring to the llm dict from the YAML):

```
TypeError: unhashable type: 'dict'
```
The above outcome suggests that complex (mapping) values are not permitted for YAML attribute values; only scalars like this work:

```yaml
llm: ollama/phi4:latest  # This parses to a plain str.
```
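If a mapping in YAML really is unsupported, one workaround I can imagine (not documented CrewAI behavior, just a sketch with a stand-in `LLM` class) is to pop the dict out of the loaded config and unpack it into the constructor yourself:

```python
# Stand-in for crewai.LLM so this sketch runs on its own.
class LLM:
    def __init__(self, model, temperature=0.7):
        self.model = model
        self.temperature = temperature

# What yaml.safe_load would return for the failing snippet above.
raw = {
    "role": "Researcher",
    "llm": {"model": "gpt-3.5-turbo", "temperature": 0.7},
}

# Remove the mapping before the rest of `raw` is handed to the Agent,
# then build the LLM object in code via dict unpacking.
llm_cfg = raw.pop("llm")
llm = LLM(**llm_cfg)
```

This keeps the model settings in YAML while still giving CrewAI the simple values it seems to expect.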
Does anyone have detailed information on what is allowed in these YAML files? See llm:, tools: [...], etc. below.
Thank you.
```yaml
# Generated by VS Code Copilot:
researcher:
  role: {topic} Senior Data Researcher
  goal: Uncover cutting-edge developments in {topic}
  backstory: Some backstory.
  verbose: true
  max_iter: 5
  max_rpm: 10
  allow_delegation: false
  function_calling_llm: null
  knowledge: null
  knowledge_sources: []
  embedder: null
  step_callback: null
  tools: []  # Do I specify an identifier (variable) for a tool here?
  llm:
    model: "gpt-3.5-turbo"
    temperature: 0.7
```
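For completeness, the pattern I have seen in CrewAI examples is to keep simple values (role, goal, backstory) in YAML and pass Python objects such as the LLM and tool instances in code. A minimal sketch of that split, using stand-in `Agent`/`LLM` classes so it runs without crewai installed (swap in `crewai.Agent` and `crewai.LLM` for real use):

```python
# Stand-ins for crewai.Agent and crewai.LLM; illustrative only.
class LLM:
    def __init__(self, model, temperature=0.7):
        self.model = model
        self.temperature = temperature

class Agent:
    def __init__(self, config=None, llm=None, tools=None):
        self.role = (config or {}).get("role")
        self.llm = llm
        self.tools = tools or []

# What loading agents.yaml might return: only simple YAML values.
agents_config = {
    "researcher": {
        "role": "Senior Data Researcher",
        "goal": "Uncover cutting-edge developments",
        "backstory": "Some backstory.",
    }
}

# Objects are supplied in code, not in YAML.
researcher = Agent(
    config=agents_config["researcher"],
    llm=LLM(model="gpt-3.5-turbo", temperature=0.7),
    tools=[],  # tool instances would also go here
)
```

If that split is indeed the intended design, it would explain why only scalar values work in the YAML file, but I would appreciate confirmation from someone who knows the internals.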