Specifying LLM in Crew Does Not Work For Me

Edit: never mind, my error again, as llm= is not an attribute of Crew (I wish there were error messages of some sort for this). However, function_calling_llm= in Crew does not work either, and it is an attribute.
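The silent failure is consistent with a constructor that swallows unknown keyword arguments instead of rejecting them. A toy illustration in plain Python (not CrewAI's actual code) of why no error ever surfaces:

```python
# Toy illustration (not CrewAI's actual code): a constructor that
# silently swallows unknown keyword arguments, so passing llm=
# raises no error -- the value is simply dropped.
class Crewish:
    def __init__(self, agents=None, tasks=None, **extra):
        self.agents = agents or []
        self.tasks = tasks or []
        self.ignored = extra  # unknown kwargs like llm= end up here, unused

c = Crewish(agents=["researcher"], llm="my-local-model")
print(c.ignored)  # {'llm': 'my-local-model'}
```

A strict constructor would raise `TypeError` on the unexpected `llm` keyword; a permissive one like this just drops it, which matches the behavior described above.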

I don’t know why, but normally I specify an llm in the Agent. I was testing something and moved it to Crew. It is an open-source LLM served in LM Studio.
CrewAI ignores it and just uses the default gpt-4o-mini as the llm. If I take it out of Crew and put it in the Agent, that same open-source model works fine. Here is the code. This all started with the litellm changes and is making me doubt my sanity. Any help is much appreciated.

import os
import agentops
from crewai import Agent, Task, Crew, Process, LLM
from dotenv import load_dotenv
from tools.browserutil import SearchTools

load_dotenv()

AGENTOPS_API_KEY = os.getenv('AGENTOPS_API_KEY')
agentops.init(AGENTOPS_API_KEY)


main_query = "Academic research clinical experiment articles on psilocybin used in the treatment of trauma."

chat_llm = LLM(model="openai/chat", api_key="XXX", temperature=0.1, base_url="http://127.0.0.1:1234/v1")


researcher = Agent(
  role='researcher',
  goal="""Find internet references to academic articles about,  """+main_query+""" , using the "free_search" tool. Create a list of the URL's you found.
      Then use the "exa_scrape" tool to scrape the list of URL's you found.""",
  backstory="""You work at a leading drug research laboratory. Your expertise is in using website searching and scraping tools to find information about academic articles.""",
  verbose=True,
  max_iter=10,
  tools=[SearchTools.free_search, SearchTools.exa_scrape]
)

task = Task(
  description="""You search for academic research articles with the query, """+main_query+""" using the "free_search" tool. 
                 Then use "exa_scrape" tool to scrape the websites one at a time for further information.""",
  agent=researcher,
  verbose=True,
  output_file='report.md',
  expected_output="""A report of articles that you found, formatted according to the schema as follows,
        Title: str = Field(..., description="The title of the article.")
        URL: str = Field(..., description="The URL of the article.")
        Author: List[str] = Field(..., description="An author of the article.") 
        Published_Date: str = Field(..., format="MM/YYYY", description="The publication date of the article using the format MM/YYYY.")
        Methodology: str = Field(..., description="The methods or procedures used in the experiment in the article in great DETAIL.")
        Drug: str = Field(..., description="The specific drug tested in the article.")
        Dosage: str = Field(..., description="The dosage amount and frequency of the drug used in the article.")
        Results: str = Field(..., description="The results or outcome of the experiment in the article in great DETAIL.")
"""
)

crew = Crew(
  agents=[researcher],
  tasks=[task],
  verbose=True,
  process=Process.sequential,
  llm=chat_llm
)

result = crew.kickoff()
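Per the edit at the top, the version that actually works keeps everything else the same but attaches the model to the Agent rather than the Crew. A sketch of just the changed pieces (same `chat_llm` as above; goal and backstory elided):

```python
# Attach the LM Studio model to the Agent, not the Crew:
researcher = Agent(
  role='researcher',
  goal="...",        # same goal as above
  backstory="...",   # same backstory as above
  verbose=True,
  max_iter=10,
  tools=[SearchTools.free_search, SearchTools.exa_scrape],
  llm=chat_llm,      # the setting that Crew silently ignores works here
)

crew = Crew(
  agents=[researcher],
  tasks=[task],
  verbose=True,
  process=Process.sequential,
  # no llm= here; Crew has no such attribute
)
```

This is a configuration sketch, not a runnable script on its own; it assumes the surrounding setup (tools, task, LM Studio endpoint) from the listing above.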

Hey Moto,

I had a quick look into this, but I only know how to do it with the OpenAI models, using langchain_openai and then setting the model for each agent separately.

I’m sure at some point I am going to end up running into the same issue.

Let’s keep each other updated if we find a solution.
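The per-agent setup mentioned above looks roughly like this (a sketch; the model name, key, and base_url are placeholders for any OpenAI-compatible server such as LM Studio):

```python
from langchain_openai import ChatOpenAI
from crewai import Agent

# One client per agent; works against any OpenAI-compatible endpoint.
local_llm = ChatOpenAI(
    model="gpt-4o-mini",                  # or a local model name from LM Studio
    api_key="XXX",                        # local servers usually ignore the key
    base_url="http://127.0.0.1:1234/v1",  # point at the local server
    temperature=0.1,
)

agent = Agent(
    role="researcher",
    goal="...",
    backstory="...",
    llm=local_llm,  # set the model per agent
)
```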

I have the same issue: I can’t use my own LLM for the planner, only for the agents. I also tried not specifying an LLM for the planner, and the OpenAI default worked. I tested llama3.2 3B and qwen2.5:14b, and they only work for agents, not the planner.
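If by “planner” you mean CrewAI’s planning feature, the planning model is configured separately from the agents’ models. A sketch, assuming your crewai version exposes the planning_llm parameter on Crew (check your version’s docs):

```python
from crewai import Crew, Process

crew = Crew(
  agents=[researcher],
  tasks=[task],
  process=Process.sequential,
  planning=True,
  planning_llm=chat_llm,  # model for the planning step; agents keep their own llm=
)
```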