How do I connect CrewAI to Ollama running on an EC2 instance?

Hi!

I am new to CrewAI and a newbie to AI in general.

I want to know how to set up Ollama in the code and in the .env file.

I have an EC2 instance running Ollama, and I use its address as the base URL. I set up my LLM like this:

llm_ollama = LLM(
    model="llama3.1:8b-instruct-q8_0",
    base_url="http://xxxxxxxx:11434",
    temperature=0
)

I also tried to set it up like this:

@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        # tools=[MyCustomTool()],  # example of a custom tool, loaded at the top of the file
        verbose=True,
        llm=LLM(model="llama3.1:8b-instruct-q8_0", base_url="http://xxxxxxx:11434")
    )

I am trying it out for a project.

For more context, I am also getting an error saying that I did not specify a provider.

(Train) PS C:\Users\IA-User\Train\research> crewai run
Running the Crew

Provider List: https://docs.litellm.ai/docs/providers

2024-10-09 18:30:47,143 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

Provider List: https://docs.litellm.ai/docs/providers

2024-10-09 18:30:47,146 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

Provider List: https://docs.litellm.ai/docs/providers

2024-10-09 18:30:47,160 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

Provider List: https://docs.litellm.ai/docs/providers

2024-10-09 18:30:47,163 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

Provider List: https://docs.litellm.ai/docs/providers

2024-10-09 18:30:47,171 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

Provider List: https://docs.litellm.ai/docs/providers

2024-10-09 18:30:47,174 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

Provider List: https://docs.litellm.ai/docs/providers

2024-10-09 18:30:47,176 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

Agent: AI LLMs Senior Data Researcher

Task: Conduct a thorough research about AI LLMs Make sure you find any interesting and relevant information given the current year is 2024.

2024-10-09 18:30:47,191 - 13488 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

Provider List: https://docs.litellm.ai/docs/providers

2024-10-09 18:30:47,194 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

Agent: AI LLMs Senior Data Researcher

Task: Conduct a thorough research about AI LLMs Make sure you find any interesting and relevant information given the current year is 2024.

2024-10-09 18:30:47,204 - 13488 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

Provider List: https://docs.litellm.ai/docs/providers

2024-10-09 18:30:47,207 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable

Agent: AI LLMs Senior Data Researcher

Task: Conduct a thorough research about AI LLMs Make sure you find any interesting and relevant information given the current year is 2024.

2024-10-09 18:30:47,216 - 13488 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
Traceback (most recent call last):
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 227, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 92, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 173, in _invoke_loop
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 113, in _invoke_loop
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\llm.py", line 155, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 1071, in wrapper
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 959, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 2957, in completion
    raise exception_type(
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 852, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 520, in get_llm_provider
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 497, in get_llm_provider
    raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 227, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 92, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 173, in _invoke_loop
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 113, in _invoke_loop
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\llm.py", line 155, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 1071, in wrapper
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 959, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 2957, in completion
    raise exception_type(
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 852, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 520, in get_llm_provider
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 497, in get_llm_provider
    raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\IA-User\Train\research\src\research\main.py", line 27, in run
    ResearchCrew().crew().kickoff(inputs=inputs)
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\crew.py", line 490, in kickoff
    result = self._run_sequential_process()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\crew.py", line 594, in _run_sequential_process
    return self._execute_tasks(self.tasks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\crew.py", line 692, in _execute_tasks
    task_output = task.execute_sync(
                  ^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\task.py", line 191, in execute_sync
    return self._execute_core(agent, context, tools)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\task.py", line 247, in _execute_core
    result = agent.execute_task(
             ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 239, in execute_task
    result = self.execute_task(task, context, tools)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 239, in execute_task
    result = self.execute_task(task, context, tools)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 238, in execute_task
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 227, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 92, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 173, in _invoke_loop
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 113, in _invoke_loop
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\llm.py", line 155, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 1071, in wrapper
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 959, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 2957, in completion
    raise exception_type(
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 852, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 520, in get_llm_provider
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 497, in get_llm_provider
    raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers
An error occurred while running the crew: Command '['poetry', 'run', 'run_crew']' returned non-zero exit status 1.

Did you try putting ollama/ in front of the model name?
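
LiteLLM infers the provider from a prefix on the model string, so your setup would look something like this (a minimal sketch; the host is a placeholder, same as in your post):

from crewai import LLM

llm_ollama = LLM(
    model="ollama/llama3.1:8b-instruct-q8_0",  # the "ollama/" prefix tells LiteLLM which provider to route to
    base_url="http://xxxxxxxx:11434",  # placeholder for your EC2 host
    temperature=0
)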

Thank you. I have tried that, but the error still persists. It says the LiteLLM call failed.

Hi, I got it to work by adding the ollama/ prefix and by importing litellm itself.

It takes quite a while to process, though; is there any way I could speed it up?

For more context, I did not call litellm anywhere; I just imported it. Cheers!
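
Roughly, this is what the working setup looks like on my end (the host is a placeholder):

import litellm  # only imported, never called directly; this was part of what made it work for me

from crewai import LLM

llm_ollama = LLM(
    model="ollama/llama3.1:8b-instruct-q8_0",  # note the "ollama/" provider prefix
    base_url="http://xxxxxxxx:11434",
    temperature=0
)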

from crewai import Agent, Task, Crew, Process, LLM

should be all that is required; you should not need to deal with litellm directly.
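
For reference, here is a minimal self-contained sketch of the whole wiring without any litellm import. The role, goal, task text, and host below are illustrative placeholders, not your project's actual config:

from crewai import Agent, Crew, LLM, Process, Task

# Ollama endpoint on the EC2 instance; the host is a placeholder
ollama_llm = LLM(
    model="ollama/llama3.1:8b-instruct-q8_0",
    base_url="http://xxxxxxxx:11434",
    temperature=0,
)

researcher = Agent(
    role="AI LLMs Senior Data Researcher",
    goal="Find interesting and relevant information about AI LLMs",
    backstory="A seasoned researcher who digs up the latest developments.",
    llm=ollama_llm,  # CrewAI hands this to LiteLLM internally
    verbose=True,
)

research_task = Task(
    description="Conduct thorough research about AI LLMs.",
    expected_output="A bullet list of the most relevant findings.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[research_task], process=Process.sequential)
result = crew.kickoff()

As for the .env side of your original question: for CLI-generated projects, setting MODEL=ollama/llama3.1:8b-instruct-q8_0 and OLLAMA_API_BASE=http://xxxxxxxx:11434 in .env is commonly reported to work, but that depends on your CrewAI version, so treat it as a starting point rather than a guarantee.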


I did that, but I am still getting the error above.

When I imported litellm, it went through. Maybe I'm doing something incorrectly.