Loading Multiple LLMs for Multiple Agents

My code:

    from crewai import Agent, LLM
    from crewai.project import CrewBase, agent

    @CrewBase
    class CommentCrew():  # hypothetical class name; the snippet lives inside a @CrewBase class
        agents_config = "config/agents.yaml"
        tasks_config = "config/tasks.yaml"

        # Two different models, both served by LM Studio on the same port
        lmstudio_llama = LLM(model="openai/llama-3-8b-instruct-64k", base_url="http://localhost:1234/v1", api_key="sk-1234")
        lmstudio_r1 = LLM(model="openai/deepseek-r1-distill-llama-8b", base_url="http://localhost:1234/v1", api_key="sk-1234")

        @agent
        def commentaire_writer(self) -> Agent:
            return Agent(
                config=self.agents_config["commentaire_writer"],
                llm=self.lmstudio_r1
            )

        @agent
        def comment_extractor(self) -> Agent:
            return Agent(
                config=self.agents_config["comment_extractor"],
                llm=self.lmstudio_llama
            )

but R1 is the only model running for both agents, and if I swap the two models between the agents, Llama 64k takes over instead.
Any suggestions? Thanks.
PS: I'm using LM Studio.

Update: I tried loading one model from Ollama and one from LM Studio, but I'm still getting the same result.
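
For reference, this is roughly how I split the two providers (a sketch of my local setup; the Ollama model tag and the default ports are placeholders for whatever you have pulled):

    from crewai import LLM

    # One model served by LM Studio (OpenAI-compatible endpoint)...
    lmstudio_llama = LLM(
        model="openai/llama-3-8b-instruct-64k",
        base_url="http://localhost:1234/v1",
        api_key="sk-1234",
    )

    # ...and one served by Ollama, via LiteLLM's "ollama/" prefix.
    # "deepseek-r1:8b" is the tag I pulled locally; yours may differ.
    ollama_r1 = LLM(
        model="ollama/deepseek-r1:8b",
        base_url="http://localhost:11434",
    )

Even with the models on two different servers, the first model loaded still handles both agents.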

Instead of using:

llm = "openai/{model}

try using this instead:

llm = "lm_studio/{model}

Hello Tony, thank you for your reply!
Unfortunately, this is not working either.

Moreover, I'm getting a new error when using "lm_studio/*":

  • ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable

I’m using:
crewai==0.102.0
litellm==1.60.2

Hi Victor,

When using LM Studio, for now I would recommend using the “openai/{model}” base path.
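
If you still want to experiment with the “lm_studio/” prefix: as far as I can tell, LiteLLM's LM Studio provider resolves its endpoint from environment variables, so the NoneType error you saw might just mean the API base was never picked up. A sketch, under that assumption:

    import os

    # Assumption: LiteLLM's lm_studio provider reads the endpoint from
    # these environment variables rather than from base_url alone.
    os.environ["LM_STUDIO_API_BASE"] = "http://localhost:1234/v1"
    os.environ["LM_STUDIO_API_KEY"] = "sk-1234"  # any non-empty string

    from crewai import LLM

    llm = LLM(model="lm_studio/llama-3-8b-instruct-64k")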

I tested it the same way as you did, using two agents with one individual LLM each, and it worked as expected: Agent “researcher” uses llm_llama and Agent “reporting_analyst” uses llm_r1 (verified by monitoring the model processing in LM Studio):

    from crewai import Agent, LLM
    from crewai.project import CrewBase, agent

    @CrewBase
    class Crew1Test():

        agents_config = 'config/agents.yaml'
        tasks_config = 'config/tasks.yaml'

        try:
            llm_llama = LLM(model="openai/meta-llama-3-8b-instruct", base_url="http://localhost:1234/v1", api_key="fsdf")
            llm_r1 = LLM(model="openai/deepseek-r1-distill-qwen-7b-mlx", base_url="http://localhost:1234/v1", api_key="fsdf")
        except Exception as e:
            print(f"--> LLM init ERROR {e}")

        @agent
        def researcher(self) -> Agent:
            return Agent(
                config=self.agents_config['researcher'],
                verbose=True,
                tools=[myDuckDuckGoSearchTool()],  # custom search tool, defined elsewhere
                llm=self.llm_llama
            )

        @agent
        def reporting_analyst(self) -> Agent:
            return Agent(
                config=self.agents_config['reporting_analyst'],
                verbose=True,
                tools=[myDuckDuckGoSearchTool()],
                llm=self.llm_r1
            )

Attached you'll find my LM Studio settings:

[screenshot: LM Studio server settings]

LM Studio v0.3.9 - latest runtimes

Best
Igi

Hello Igi, thanks for your reply!
I think the issue comes from CrewAI or LiteLLM because, as I said, using LM Studio together with Ollama doesn't resolve the problem either.
Moreover, I tried your suggestion, even with your exact models, but nothing changed.
Can you share your CrewAI version, please?

Please share the full error message.

I tried with
crewai==0.100.1
crewai-tools==0.33.0
litellm==1.59.8

and also the main branch:
commit 1b488b6da77dc0dc1d96d45e9ef6213b3f8eceeb


Did you create your crew using the command

crewai create crew <project_name>

This automatically generates a virtual environment, which needs to be activated with

source .venv/bin/activate

and then you execute

crewai run

Correct?

Hint:
Any additional pip installs need to be performed while this virtual environment is activated. Otherwise, the packages cannot be located when you run “crewai run”, because that command switches into the virtual environment.

Let me know if anything worked or not.

I think I managed to reproduce your issue…
[edit]
On my machine, this issue seems to occur randomly: after shutting down the LM Studio server and restarting the whole LM Studio app, everything works fine again, and I can no longer reproduce the behavior.

My project setup is a little different because I'm using a flow (created with the create flow command), and I'm using conda, where I already had a venv from another CrewAI project.
I followed the instructions in the docs:
crewai install in my flow repo
Both uv run kickoff and crewai flow kickoff make the first LLM loaded run for both agents; the flow is roughly the shape sketched below.
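
A stripped-down sketch (MyCrew is a placeholder for my actual @CrewBase class with one LLM per agent, as above):

    from crewai.flow.flow import Flow, start

    class KickoffFlow(Flow):
        @start()
        def run_crew(self):
            # MyCrew is a placeholder for the @CrewBase crew class
            return MyCrew().crew().kickoff()

    KickoffFlow().kickoff()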

Tell me if I did it correctly; otherwise, I'll restart from the beginning.

Currently that's out of my scope, sorry. I'm new to CrewAI and haven't built up enough know-how yet.