Unable to connect to local Llama3.1 model

I am trying to use the ChatOpenAI wrapper from langchain_openai with a local Llama 3.1 model.

I have exported OPENAI_BASE_URL, OPENAI_API_KEY, and OPENAI_MODEL_NAME in the terminal, and I also have them set in the .env file.

I am trying to execute the script below just to test that it is working:

from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI
import os

# model name and base URL are the values set in the .env file
llm = ChatOpenAI(model='model_name set in .env file', base_url='url set in .env file')

general_agent = Agent(
    role="Test",
    goal="""Test""",
    backstory="""Test""",
    allow_delegation=False,
    verbose=True,
    llm=llm)

task = Task(description="""Test""", agent=general_agent)

crew = Crew(
    agents=[general_agent],
    tasks=[task],
    verbose=2)

result = crew.kickoff()

print(result)

However, I get the errors below:

Provider List: https://docs.litellm.ai/docs/providers
llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
  Field required [type=missing, input_value={'description': '…")}, input_type=dict]

When I run a simple LangChain prompt, I get a response from the local model:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model='model_name set in .env file', base_url='url set in .env file', api_key="nokey")

messages = [
    (
        "system",
        "You are a helpful assistant",
    ),
    ("human", "Hi"),
]

msg = llm.invoke(messages)
print(msg)

The LangChain snippet above works, but I cannot get this to work with crewAI. The model is on a Linux server, and I am on Python 3.12.5.

Any help?


You don't need the LangChain constructor. crewAI uses LiteLLM under the hood, so you do not need to do any of the below in crewAI:

messages = [
    (
        "system",
        "You are a helpful assistant",
    ),
    ("human", "Hi"),
]

Build your crew with the CLI tool:

crewai create crew <crew name>

https://models.litellm.ai
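
If your local model is behind an OpenAI-compatible endpoint (as in your LangChain snippet), a minimal sketch with crewAI's own LLM class could look like the following. The openai/ prefix, model name, URL, and key are placeholders you would adjust for your server:

from crewai import Agent, LLM

# Placeholder values for a generic OpenAI-compatible local server.
# The "openai/" prefix tells LiteLLM to speak the OpenAI API against base_url.
local_llm = LLM(
    model="openai/llama3.1",              # whatever name your server exposes
    base_url="http://localhost:8000/v1",  # your local endpoint
    api_key="nokey",                      # many local servers accept any value
)

agent = Agent(
    role="Test",
    goal="Test",
    backstory="Test",
    llm=local_llm,
    verbose=True,
)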


So we do not have to declare the LLMs anymore?

To use crewAI with a local LLM like Llama 3.2, use the following code.

# import LLM from crewai

from crewai import Agent, Crew, Process, Task, LLM

In the Agent code, use this:

return Agent(
    ...,
    llm=LLM(model="ollama/llama3.2", base_url="http://localhost:11434"),
)

LiteLLM will automatically connect to the local LLM.

Make sure your local LLM is running:

ollama run llama3.2
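
Putting this together with the script from the original post, a minimal end-to-end sketch might look like the one below. Note the expected_output field on Task, which is what the ValidationError above is complaining about; the model name and URL are just examples for a local Ollama:

from crewai import Agent, Crew, Task, LLM

# Example Ollama model and default local endpoint; adjust to your setup.
llm = LLM(model="ollama/llama3.2", base_url="http://localhost:11434")

general_agent = Agent(
    role="Test",
    goal="Test",
    backstory="Test",
    allow_delegation=False,
    verbose=True,
    llm=llm,
)

task = Task(
    description="Test",
    expected_output="A short test response",  # required by recent Task models
    agent=general_agent,
)

crew = Crew(agents=[general_agent], tasks=[task], verbose=True)

print(crew.kickoff())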

I agree, Joseph, I totally agree with you.
That was my modus operandi before, but after the update I started getting this error.
I am still struggling with the same error.

You mean you updated crewAI to the latest version?

Yes, that is indeed what happened.

Hi, I just want to ask something, as I am fairly new to CrewAI.

My base_url points to an EC2 instance where my Ollama is running. Where should I put this:

llm_ollama = LLM(
    model="llama3.1:8b-instruct-q8_0",
    base_url="http://ec2_ip:11434",
    temperature=0
)
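
From the earlier answer, I assume the LLM object just gets passed to the Agent, something like the sketch below (the ollama/ prefix is my guess based on LiteLLM's provider naming):

from crewai import Agent, LLM

# Guess: the model name may need the "ollama/" prefix so LiteLLM routes it correctly.
llm_ollama = LLM(
    model="ollama/llama3.1:8b-instruct-q8_0",
    base_url="http://ec2_ip:11434",  # EC2 instance running Ollama
    temperature=0,
)

general_agent = Agent(
    role="Test",
    goal="Test",
    backstory="Test",
    llm=llm_ollama,
)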

And what should I put in my .env file? Thank you.

Yes, there seems to be an issue when calling the Ollama models; I am not sure if other models have the same problem.
I really would like to find a solution, as I would like to keep working on my project.
@matt you received my file, right? Is there something I can do to help?

Many greetings,

@Ruben_Casillas What’s your CrewAI SDK version?

Hi @rokbenko, I started having the problem with version 0.65.
I was running fine before that update.

@matt @rokbenko
Version 0.70 no longer shows the same issue.
At least the basic example routine worked like a charm in the new version.

I want to thank all the people involved for the new improvements.

Thanks and have a great Friday.

@Ruben_Casillas Happy to hear that. That's exactly why I asked: I was able to use Llama 3.2 via Ollama with the newest SDK version the way others suggested above (i.e., using the LLM class, which uses LiteLLM in the background).