For more context, I am also getting this error saying that I did not provide an LLM provider:

```
(Train) PS C:\Users\IA-User\Train\research> crewai run
Running the Crew
Provider List: https://docs.litellm.ai/docs/providers
2024-10-09 18:30:47,143 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
2024-10-09 18:30:47,146 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
2024-10-09 18:30:47,160 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
2024-10-09 18:30:47,163 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
2024-10-09 18:30:47,171 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
2024-10-09 18:30:47,174 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Provider List: https://docs.litellm.ai/docs/providers
2024-10-09 18:30:47,176 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Agent: AI LLMs Senior Data Researcher
Task: Conduct a thorough research about AI LLMs Make sure you find any interesting and relevant information given the current year is 2024.
2024-10-09 18:30:47,191 - 13488 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
2024-10-09 18:30:47,194 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Agent: AI LLMs Senior Data Researcher
Task: Conduct a thorough research about AI LLMs Make sure you find any interesting and relevant information given the current year is 2024.
2024-10-09 18:30:47,204 - 13488 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
Provider List: https://docs.litellm.ai/docs/providers
2024-10-09 18:30:47,207 - 13488 - llm.py-llm:178 - ERROR: Failed to get supported params: argument of type 'NoneType' is not iterable
Agent: AI LLMs Senior Data Researcher
Task: Conduct a thorough research about AI LLMs Make sure you find any interesting and relevant information given the current year is 2024.
2024-10-09 18:30:47,216 - 13488 - llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
Traceback (most recent call last):
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 227, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 92, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 173, in _invoke_loop
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 113, in _invoke_loop
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\llm.py", line 155, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 1071, in wrapper
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 959, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 2957, in completion
    raise exception_type(
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 852, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 520, in get_llm_provider
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 497, in get_llm_provider
    raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 227, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 92, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 173, in _invoke_loop
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 113, in _invoke_loop
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\llm.py", line 155, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 1071, in wrapper
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 959, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 2957, in completion
    raise exception_type(
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 852, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 520, in get_llm_provider
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 497, in get_llm_provider
    raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\IA-User\Train\research\src\research\main.py", line 27, in run
    ResearchCrew().crew().kickoff(inputs=inputs)
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\crew.py", line 490, in kickoff
    result = self._run_sequential_process()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\crew.py", line 594, in _run_sequential_process
    return self._execute_tasks(self.tasks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\crew.py", line 692, in _execute_tasks
    task_output = task.execute_sync(
                  ^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\task.py", line 191, in execute_sync
    return self._execute_core(agent, context, tools)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\task.py", line 247, in _execute_core
    result = agent.execute_task(
             ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 239, in execute_task
    result = self.execute_task(task, context, tools)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 239, in execute_task
    result = self.execute_task(task, context, tools)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 238, in execute_task
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agent.py", line 227, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 92, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 173, in _invoke_loop
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\agents\crew_agent_executor.py", line 113, in _invoke_loop
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\crewai\llm.py", line 155, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 1071, in wrapper
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\utils.py", line 959, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 2957, in completion
    raise exception_type(
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\main.py", line 852, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 520, in get_llm_provider
    raise e
  File "C:\Users\IA-User\Train\Lib\site-packages\litellm\litellm_core_utils\get_llm_provider_logic.py", line 497, in get_llm_provider
    raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=llama3.1:8b-instruct-q8_0
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
An error occurred while running the crew: Command '['poetry', 'run', 'run_crew']' returned non-zero exit status 1.
```
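
From the traceback, LiteLLM apparently cannot infer a provider because the model name `llama3.1:8b-instruct-q8_0` carries no provider prefix. If I'm reading the LiteLLM providers page correctly, a local Ollama model should be passed as `ollama/llama3.1:8b-instruct-q8_0`. Below is a minimal sketch of how I believe the crew's LLM should be configured; the `base_url` is an assumption on my part (Ollama on its default local port), and the agent fields are placeholders standing in for my `agents.yaml` values:

```python
from crewai import Agent, LLM

# Sketch, not verified: add the provider prefix ("ollama/") so LiteLLM
# can route the call. base_url assumes Ollama's default local endpoint.
llm = LLM(
    model="ollama/llama3.1:8b-instruct-q8_0",
    base_url="http://localhost:11434",
)

# Pass the configured LLM to the agent explicitly (role/goal/backstory
# here are placeholders, not my real config).
researcher = Agent(
    role="AI LLMs Senior Data Researcher",
    goal="Research AI LLMs",
    backstory="Placeholder backstory",
    llm=llm,
)
```

To sanity-check the prefix outside CrewAI, I would expect a direct LiteLLM call like this to succeed (same assumption about the local Ollama endpoint):

```python
from litellm import completion

# Hypothetical smoke test: if this works, the missing provider prefix
# was the problem in the crew configuration.
response = completion(
    model="ollama/llama3.1:8b-instruct-q8_0",
    messages=[{"role": "user", "content": "Say hello"}],
    api_base="http://localhost:11434",
)
print(response.choices[0].message.content)
```

Is prefixing the model like this the intended fix, or is the provider supposed to be set somewhere else (an environment variable or the YAML config)?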