Unable to connect to local Llama3.1 model

I am trying to use the ChatOpenAI wrapper from langchain_openai on a local Llama 3.1 model.

I have exported OPENAI_BASE_URL, OPENAI_API_KEY, and OPENAI_MODEL_NAME in the terminal, and I also have them set in the .env file.

I am trying to execute the script below just to test that it is working:
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(model="model_name set in .env file", base_url="url set in .env file")

general_agent = Agent(
    role="Test",
    goal="""Test""",
    backstory="""Test""",
    allow_delegation=False,
    verbose=True,
    llm=llm,
)

task = Task(description="""Test""", agent=general_agent)

crew = Crew(
    agents=[general_agent],
    tasks=[task],
    verbose=2,
)

result = crew.kickoff()

print(result)

However, I get the errors below:
Provider List: https://docs.litellm.ai/docs/providers
llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
Field required [type=missing, input_value={'description': '…")}, input_type=dict]

When I do a simple LangChain prompt, I get a response from the local model:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="model_name set in .env file", base_url="url set in .env file", api_key="nokey")

messages = [
    (
        "system",
        "You are a helpful assistant",
    ),
    ("human", "Hi"),
]

msg = llm.invoke(messages)
print(msg)

The above LangChain code works, but I cannot get this to work for CrewAI. The model is on a Linux server. I have Python 3.12.5.

Any help?


You don't need to use the LangChain constructor, as we use LiteLLM under the hood, and you do not need to do any of the below in CrewAI:

messages = [
    (
        "system",
        "You are a helpful assistant",
    ),
    ("human", "Hi"),
]

Build your crew with the CLI tool:

crewai create crew <crew name>

https://models.litellm.ai
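
If you do want to point CrewAI at an OpenAI-compatible local endpoint (like the one exported via OPENAI_BASE_URL in the original post), it can also be declared with CrewAI's LLM class, which routes through LiteLLM. A minimal sketch; the model name, URL, and key below are placeholders, not values from this thread:

from crewai import LLM

# LiteLLM treats "openai/<name>" as an OpenAI-compatible endpoint, so a local
# Llama 3.1 server can be addressed directly (all values below are placeholders).
llm = LLM(
    model="openai/llama3.1",
    base_url="http://localhost:8000/v1",
    api_key="nokey",  # most local servers accept any non-empty key
)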


So we do not have to declare the LLMs anymore?

To use CrewAI with a local LLM like llama3.2, use the following code.

# import LLM from crewai
from crewai import Agent, Crew, Process, Task, LLM

In the Agent code, use this:

return Agent(
    ...,
    llm=LLM(model="ollama/llama3.2", base_url="http://localhost:11434"),
)

LiteLLM will automatically connect to the local LLM.

Make sure your local LLM is running:

ollama run llama3.2
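
Putting the pieces together, a minimal end-to-end sketch along the lines of the snippets above (the roles and task text are placeholders; note the expected_output field, which the ValidationError in the original post reports as required):

from crewai import Agent, Crew, Task, LLM

# LiteLLM talks to the local Ollama server on its default port.
llm = LLM(model="ollama/llama3.2", base_url="http://localhost:11434")

general_agent = Agent(
    role="Test",
    goal="Test",
    backstory="Test",
    allow_delegation=False,
    verbose=True,
    llm=llm,
)

task = Task(
    description="Say hello in one sentence.",
    expected_output="A single friendly sentence.",  # required by recent Task models
    agent=general_agent,
)

crew = Crew(agents=[general_agent], tasks=[task], verbose=True)  # verbose takes a bool in recent versions
print(crew.kickoff())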

I agree, Joseph, I totally agree with you.
That was my modus operandi before, but after the update I started getting this error.
I am still struggling with the same error.

You mean you updated CrewAI to the latest version?

Yes, that is indeed what happened.

Hi, I just want to ask, as I am fairly new to CrewAI.

My base_url points to an EC2 instance where my Ollama is running. Where should I put this:

llm_ollama = LLM(
    model="llama3.1:8b-instruct-q8_0",
    base_url="http://ec2_ip:11434",
    temperature=0,
)

And what should I put in my .env file? Thank you.

Yes, there seems to be an issue when calling the Ollama models; I am not sure if other models have the same problem.
I would really like to find a solution so I can keep working on my project.
@matt, you received my file, right? Is there something I can do to help?

Many greetings,

@Ruben_Casillas What’s your CrewAI SDK version?

Hi @rokbenko, I started having the problem with version 0.65.
Everything was running well before that update.

@matt @rokbenko
Version 0.70 no longer shows the same issue.
At least the basic example routine worked like a charm in the new version.

I want to thank all the people involved for the new improvements.

Thanks and have a great Friday.

@Ruben_Casillas Happy to hear that. That's exactly why I asked, because I was able to use Llama 3.2 via Ollama with the newest CrewAI SDK version the way others suggested above (i.e., using the LLM class, which uses LiteLLM in the background).

Hi @Ruben_Casillas, thanks for sharing. I've been getting the same issue in my project. Could you share your version of CrewAI, please?

Hi there @Fabricio_Silva

I was using version 0.70 and everything was fine, but the new version 0.74 has some issues and I cannot even create a crew.

I guess the problems will be addressed in the next version.


Hi, I get this output when I run this code without using completion from LiteLLM. Is there a specific way to parse the output? I am using the llama3.2:1b version. Everything else is the same as the example flow project that gets created.

@agent
def poem_writer(self) -> Agent:

    return Agent(
        llm='ollama/llama3.2:1b',
        base_url='http://localhost:11434',
        config=self.agents_config['poem_writer'],
    )

Generating sentence count
Generating poem
# Agent: CrewAI Poem Writer
## Task: Write a poem about how CrewAI is awesome. Ensure the poem is engaging and adheres to the specified sentence count of 3.

** Error parsing LLM output, agent will retry: I did it wrong. Invalid Format: I missed the 'Action:' after 'Thought:'. I will do right next, and don't use a tool I have already used.**

@Rineheaj The "Invalid Format: I missed the 'Action:' after 'Thought:'" error is caused by the LLM you're using.

Unfortunately, smaller LLMs (e.g., Llama 3.2 1B) sometimes struggle to work with CrewAI. Try switching the LLM to a more capable one. This was discussed on GitHub.
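
For example, mirroring the agent definition above but pointed at a larger variant (assuming the llama3.2:3b model has been pulled in Ollama):

@agent
def poem_writer(self) -> Agent:
    # The 3B variant follows the Thought/Action output format more reliably than 1B.
    return Agent(
        llm='ollama/llama3.2:3b',
        base_url='http://localhost:11434',
        config=self.agents_config['poem_writer'],
    )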


Thank you, it works great even though I just moved it up to the 3B model. Cheers!
