I am trying to use the ChatOpenAI wrapper from langchain_openai with a local Llama 3.1 model.
I have exported OPENAI_BASE_URL, OPENAI_API_KEY, and OPENAI_MODEL_NAME in the terminal, and I also have them set in the .env file.
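For reference, the .env file looks roughly like this (placeholder values here, not my real ones; the terminal exports use the same variable names):

OPENAI_BASE_URL=http://localhost:8000/v1
OPENAI_API_KEY=nokey
OPENAI_MODEL_NAME=llama3.1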
I am executing the script below just to test that everything works:
from crewai import Agent, Task, Crew
from langchain_openai import ChatOpenAI
import os

# placeholders: the actual model name and URL come from the .env file
llm = ChatOpenAI(model="model_name set in .env file", base_url="url set in .env file")

general_agent = Agent(
    role="Test",
    goal=""" Test """,
    backstory=""" Test """,
    allow_delegation=False,
    verbose=True,
    llm=llm,
)

task = Task(description=""" Test """, agent=general_agent)

crew = Crew(
    agents=[general_agent],
    tasks=[task],
    verbose=2,
)

result = crew.kickoff()
print(result)
However, I get the errors below:
Provider List: https://docs.litellm.ai/docs/providers
llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

pydantic_core._pydantic_core.ValidationError: 1 validation error for Task
expected_output
  Field required [type=missing, input_value={'description': '…")}, input_type=dict]
When I run a simple LangChain prompt directly, I do get a response from the local model:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="model_name set in .env file", base_url="url set in .env file", api_key="nokey")
messages = [
    ("system", "You are a helpful assistant"),
    ("human", "Hi"),
]
msg = llm.invoke(messages)
print(msg)
The LangChain snippet above works, but I cannot get the same setup to work with CrewAI. The model is hosted on a Linux server, and I am running Python 3.12.5.
Any help?