I posted earlier, but that topic is closed.
Tools are failing with "list index out of range" when using Ollama in CrewAI. Debugging shows the error happening inside litellm.
I posted a bug for it.
The same happens here with Ollama 0.6.6 and 0.6.7. I'm using crewai==0.118.0 and crewai-tools==0.43.0.
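For reference, here is a minimal sketch of the kind of setup that triggers it for me; the model name, the toy tool, and the role/goal strings are placeholders, not my actual project. As soon as the agent attempts a tool call, the log below appears.

from crewai import Agent, Crew, LLM, Task
from crewai.tools import tool

# Placeholder tool - any tool call seems to exercise the failing code path.
@tool("Adder")
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

# Local Ollama server; the model name is a placeholder.
llm = LLM(model="ollama/llama3.1", base_url="http://localhost:11434")

agent = Agent(
    role="Calculator",
    goal="Answer arithmetic questions using the Adder tool",
    backstory="A careful assistant that always uses tools for math.",
    tools=[add],
    llm=llm,
)

task = Task(
    description="What is 2 + 2? Use the Adder tool.",
    expected_output="A single number",
    agent=agent,
)

# The IndexError surfaces once the agent makes a tool call.
Crew(agents=[agent], tasks=[task]).kickoff()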
ERROR:root:LiteLLM call failed: litellm.APIConnectionError: list index out of range
Traceback (most recent call last):
  File "d:\git\AI_Agents.venv\Lib\site-packages\litellm\main.py", line 2870, in completion
    response = base_llm_http_handler.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\git\AI_Agents.venv\Lib\site-packages\litellm\llms\custom_httpx\llm_http_handler.py", line 269, in completion
    data = provider_config.transform_request(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\git\AI_Agents.venv\Lib\site-packages\litellm\llms\ollama\completion\transformation.py", line 322, in transform_request
    modified_prompt = ollama_pt(model=model, messages=messages)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\git\AI_Agents.venv\Lib\site-packages\litellm\litellm_core_utils\prompt_templates\factory.py", line 229, in ollama_pt
    tool_calls = messages[msg_i].get("tool_calls")
                 ~~~~~~~~^^^^^^^
IndexError: list index out of range
Error during LLM call: litellm.APIConnectionError: list index out of range
[same traceback as above]
An unknown error occurred. Please check the details below.
Error details: litellm.APIConnectionError: list index out of range
[same traceback as above]
Same here on macOS 15.4.1:
crewai==0.118.0
crewai-tools==0.42.2
ollama==0.4.8
I'm trying to use DOCXSearchTool.
Let me know if you found a fix.
Hey @sodoherty - here is what I found on this issue; I also found a temp solution... maybe...
I received quite a similar error, but with tools assigned at the task level, not the agent level. Removing the tools from the task definition let me finish the crew run, but then I can't use tools at all.
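To illustrate, the change on my side amounted to something like this (borrowing the DOCXSearchTool mentioned above as the example tool; agent, task, and document names are placeholders, and the LLM wiring is omitted):

from crewai import Agent, Task
from crewai_tools import DOCXSearchTool

researcher = Agent(
    role="Researcher",
    goal="Summarize documents",
    backstory="Placeholder agent for illustration.",
)

# Before: tool attached at the task level - the setup that failed for me.
# task = Task(
#     description="Summarize report.docx",
#     expected_output="A short summary",
#     agent=researcher,
#     tools=[DOCXSearchTool()],
# )

# After: no tools on the task. The crew finishes, but the agent can no
# longer call the tool at all.
task = Task(
    description="Summarize report.docx",
    expected_output="A short summary",
    agent=researcher,
)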
LiteLLM 1.67.1
CrewAI 0.118.0
Python 3.12
Windows
Ollama 0.6.8; gemma3:12b
Here is the snippet of the offending part in factory.py:
while msg_i < len(messages) and messages[msg_i]["role"] == "assistant":
    assistant_content_str += convert_content_list_to_str(messages[msg_i])
    msg_i += 1
    tool_calls = messages[msg_i].get("tool_calls")
My guess is that it's either a bug, or that when the second loop condition was checked there used to be another element in messages that is no longer there by the time tool_calls is read. I fixed it by adding a check and break right after msg_i is incremented:
while msg_i < len(messages) and messages[msg_i]["role"] == "assistant":
    assistant_content_str += convert_content_list_to_str(messages[msg_i])
    msg_i += 1
    if msg_i == len(messages):
        break
    tool_calls = messages[msg_i].get("tool_calls")
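If you apply the same hotfix, a quick sanity check without running a whole crew is to call ollama_pt directly on a conversation that ends with an assistant message, the case that used to run off the end of the list. The import path and call signature are taken from the traceback above:

from litellm.litellm_core_utils.prompt_templates.factory import ollama_pt

# A conversation ending on an assistant turn used to raise
# IndexError: list index out of range.
messages = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4"},
]
print(ollama_pt(model="llama3", messages=messages))  # should print a prompt, not raise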
I encountered a similar problem using Ollama 0.6.8, CrewAI 0.119.0, and the qwen3:8b model.
I checked the bugged part in factory.py and found that the "assistant" messages are handled differently from the "user" and "system" messages.
This is the “user” part:
while msg_i < len(messages) and messages[msg_i]["role"] in user_message_types:
    msg_content = messages[msg_i].get("content")
    if msg_content:
        if isinstance(msg_content, list):
            for m in msg_content:
                if m.get("type", "") == "image_url":
                    if isinstance(m["image_url"], str):
                        images.append(m["image_url"])
                    elif isinstance(m["image_url"], dict):
                        images.append(m["image_url"]["url"])
                elif m.get("type", "") == "text":
                    user_content_str += m["text"]
        else:
            # Tool message content will always be a string
            user_content_str += msg_content
    msg_i += 1
Notice where the msg_i += 1 is located; and this is the "assistant" part:
while msg_i < len(messages) and messages[msg_i]["role"] == "assistant":
    assistant_content_str += convert_content_list_to_str(messages[msg_i])
    msg_i += 1
    tool_calls = messages[msg_i].get("tool_calls")
    ollama_tool_calls = []
    if tool_calls:
        for call in tool_calls:
            call_id: str = call["id"]
            function_name: str = call["function"]["name"]
            arguments = json.loads(call["function"]["arguments"])
            ollama_tool_calls.append(
                {
                    "id": call_id,
                    "type": "function",
                    "function": {
                        "name": function_name,
                        "arguments": arguments,
                    },
                }
            )
    if ollama_tool_calls:
        assistant_content_str += (
            f"Tool Calls: {json.dumps(ollama_tool_calls, indent=2)}"
        )
    msg_i += 1
So I changed the “assistant” part so that it aligns with the “user” part:
while msg_i < len(messages) and messages[msg_i]["role"] == "assistant":
    assistant_content_str += convert_content_list_to_str(messages[msg_i])
    # msg_i += 1
    tool_calls = messages[msg_i].get("tool_calls")
    ollama_tool_calls = []
    ...
    if ollama_tool_calls:
        assistant_content_str += (
            f"Tool Calls: {json.dumps(ollama_tool_calls, indent=2)}"
        )
    msg_i += 1
And it worked well from then on. This looks like a bug within the factory.py file, i.e. a litellm bug that needs to be fixed upstream.
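To make the off-by-one concrete, here is a tiny standalone sketch of the buggy loop shape with a made-up two-message conversation; when the conversation ends on an assistant turn, the early increment pushes msg_i to len(messages) before the tool_calls lookup:

messages = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},  # conversation ends on an assistant turn
]

msg_i = 1  # index of the assistant message
while msg_i < len(messages) and messages[msg_i]["role"] == "assistant":
    msg_i += 1  # early increment: msg_i is now 2 == len(messages)
    tool_calls = messages[msg_i].get("tool_calls")  # IndexError: list index out of range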
A current workaround, apart from the fix above, is to use crewai version 0.118.0.
We've raised the issue with LiteLLM, and there seems to be a PR on their end to fix this as well, waiting to be merged.
Hey @tonykipkemboi,
I have crewai version 0.118.0, but the error is still occurring. Same goes for this setup:
crewai 0.119.0, litellm 1.68.0, openai 1.75.0
May 12, 2025: litellm was just updated to version 1.69.2.
Is this the update we have been waiting for?
I'm still getting the same error. Hot-fixing the factory file works until a proper fix lands.
Thank you! This worked for me too. It's an easy enough edit; I'll just keep reapplying it with each upgrade until they fix it on their end.
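In case it saves someone a search when reapplying the edit after an upgrade, this prints the path of the installed factory.py inside the active venv (module path taken from the tracebacks above):

import litellm.litellm_core_utils.prompt_templates.factory as factory

# The file to hot-patch until the upstream fix lands.
print(factory.__file__)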