Just thought I'd share with the community: Ollama just dropped Qwen3! - qwen3:8b
I'm running a flow with 3 crews powered by qwen3:8b and it's working great!
You’re right. You’d have to force the downgrade and ignore the dependencies, but that’s really just meant as a temporary fix for emergencies, like if you had something running in production, for example.
There’s actually a PR under review right now that should make the next CrewAI upgrade (probably version 0.117.2) handle the LiteLLM downgrade until they get this fixed.
So, as soon as that corrected CrewAI version drops, just update, and you should be all set.
When using CrewAI with an agent configured to use an Ollama model (specifically tested with qwen3) via litellm, an IndexError: list index out of range occurs within litellm’s Ollama prompt templating logic. This error specifically happens during the LLM call that follows a successful tool execution by the agent. If the agent does not have tools assigned, the error does not occur.
The error originates in litellm/litellm_core_utils/prompt_templates/factory.py when attempting to access messages[msg_i].get("tool_calls"), suggesting an incompatibility in how the message history (including the tool call and its result/observation) is structured or processed for Ollama after a tool run.
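To illustrate the failure pattern (this is a simplified sketch, not litellm's actual `ollama_pt` code): the templating loop merges consecutive same-role messages by advancing an index, then reads `messages[msg_i].get("tool_calls")` assuming an assistant message follows. When the history *ends* with a tool-result message, the index has already run past the end of the list.

```python
def ollama_prompt_sketch(messages):
    """Simplified sketch of Ollama prompt templating with a bounds guard.

    The unguarded variant of the marked access below reproduces the
    IndexError reported in this issue.
    """
    prompt = ""
    msg_i = 0
    while msg_i < len(messages):
        # Merge consecutive user/tool messages into one user turn.
        while msg_i < len(messages) and messages[msg_i]["role"] in ("user", "tool", "system"):
            prompt += f"### User:\n{messages[msg_i].get('content', '')}\n"
            msg_i += 1
        # The unguarded original effectively did:
        #   tool_calls = messages[msg_i].get("tool_calls")
        # which raises IndexError: list index out of range when the
        # history ends with a tool message (msg_i == len(messages)).
        if msg_i < len(messages):  # guarded access avoids the crash
            prompt += f"### Assistant:\n{messages[msg_i].get('content', '')}\n"
            msg_i += 1
    return prompt
```

With the guard in place, a history that ends on a tool observation templates cleanly instead of crashing.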
Steps to Reproduce:
Set up CrewAI to use an Ollama model (e.g., qwen3) as the LLM provider via litellm.
Define a CrewAI Agent and assign one or more tools (e.g., DuckDuckGoSearchTool) to it using the tools=[...] parameter.
Define a Task for this agent that requires it to use one of the assigned tools.
Execute the task using crew.kickoff() (or within a CrewAI Flow).
Observe the agent successfully executing the tool.
Observe the subsequent attempt by CrewAI/litellm to make the next LLM call to Ollama (to process the tool results).
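After step 5, the message history CrewAI hands to litellm ends with the tool's observation. A hypothetical example of its shape (field names follow the OpenAI-style schema litellm normalizes to; the tool name and contents here are illustrative):

```python
# Approximate message history after a successful tool run -- the
# trailing "tool" message is the case the Ollama templating indexes past.
messages = [
    {"role": "system", "content": "You are a research agent."},
    {"role": "user", "content": "Find recent news about qwen3."},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "duckduckgo_search",
                         "arguments": '{"query": "qwen3"}'},
        }],
    },
    # The history ENDS here, on the tool observation:
    {"role": "tool", "tool_call_id": "call_1", "content": "Search results ..."},
]
```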
Expected Behavior:
The agent should successfully process the tool’s output and continue its execution by making the next LLM call without errors.
Actual Behavior:
The script crashes during the LLM call after the tool execution. An IndexError: list index out of range occurs within litellm, wrapped in a litellm.exceptions.APIConnectionError. The Crew/Task fails.
Error Logs / Traceback:
Traceback (most recent call last):
File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\main.py", line 2870, in completion
response = base_llm_http_handler.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\llm_http_handler.py", line 269, in completion
data = provider_config.transform_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\ollama\completion\transformation.py", line 322, in transform_request
modified_prompt = ollama_pt(model=model, messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\litellm_core_utils\prompt_templates\factory.py", line 229, in ollama_pt
tool_calls = messages[msg_i].get("tool_calls")
~~~~~~~~^^^^^^^
IndexError: list index out of range
Environment:
Python Version: 3.12.9
crewai Version: 0.118.0
crewai-tools Version: 0.43.0
litellm Version: 1.67.1
Ollama Version: 6.4.0
LLM Model: qwen3:8b, qwen3:4b, qwen3:14b
Operating System: Windows 11 Version 24H2 (OS Build 26120.3941)
Workaround:
Commenting out or removing the tools=[...] list from the Agent’s definition prevents this specific IndexError.
The agent can then make LLM calls via Ollama/litellm without crashing, though of course it loses the ability to use tools.