🔥 New Ollama model qwen3:8b 🔥

Just thought I would share with the community: Ollama just dropped Qwen3! - qwen3:8b
I am running a flow with 3 crews powered by qwen3:8b and it's working great!

import os

from crewai import LLM

# Point CrewAI at a local Ollama server running qwen3:8b.
qwen3_8b = LLM(
    model="ollama/qwen3:8b",
    base_url=os.environ.get("OLLAMA_API_BASE"),  # e.g. "http://localhost:11434"
    api_key="NA",  # placeholder; Ollama doesn't need a real key
    temperature=0.2,
)
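
For context, that object then plugs straight into an agent via the llm parameter. A minimal sketch (the role/goal/backstory strings here are just illustrative, not from my actual flow):

from crewai import Agent

researcher = Agent(
    role="Researcher",                            # illustrative
    goal="Summarize new local model releases.",   # illustrative
    backstory="Keeps an eye on Ollama's library.",
    llm=qwen3_8b,  # the LLM configured above
)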

Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.

https://ollama.com/library/qwen3:14b

By the way, I just updated to the latest crewai-tools (0.42.2) and am getting errors.
Anyone else?

UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1980: character maps to <undefined>

The issue was identified in the LiteLLM library and has already been reported, as you can see here.

While they work on getting the problem fixed in LiteLLM, the recommendation for now is to downgrade to version litellm==1.67.1.

Thanks, already tried that. You see, if you downgrade
→ uv pip install --system litellm==1.67.1
(litellm==1.67.2 to litellm==1.67.1)

Then :backhand_index_pointing_down:t2:
crewai 0.117.1 has requirement litellm==1.67.2

And I don't like that.
Will have to wait for the fix! What time is it in Brazil? lol

Here is the temp fix for the error:

UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1980: character maps to <undefined>

uv pip install --system crewai==0.117.0

Hey Max,
here is the fix I found that works for me:
uv pip install --system crewai==0.117.0
Thank you for your help!

You’re right. You’d have to force the downgrade and ignore the dependencies, but that’s really just meant as a temporary fix for emergencies, like when you have something running in production.
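
For reference, forcing it would be something along these lines (assuming uv's --no-deps flag, which skips dependency resolution; double-check against uv's docs):

uv pip install --system --no-deps litellm==1.67.1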

There’s actually a PR under review right now that should make the next CrewAI upgrade (probably version 0.117.2) handle the LiteLLM downgrade until they get this fixed.

So, as soon as that corrected CrewAI version drops, just update, and you should be all set.

FYI, CrewAI community:

There is a :bug: bug with Qwen3.

Here is the information:

Bug Report: IndexError in litellm when CrewAI Agent with Tools uses Ollama/Qwen

Affected Libraries: crewai, litellm
LLM Provider: Ollama
Model: qwen3:4b (or the specific Qwen3 variant used)

Description:

When using CrewAI with an agent configured to use an Ollama model (specifically tested with qwen3) via litellm, an IndexError: list index out of range occurs within litellm’s Ollama prompt templating logic. This error specifically happens during the LLM call that follows a successful tool execution by the agent. If the agent does not have tools assigned, the error does not occur.

The error originates in litellm/litellm_core_utils/prompt_templates/factory.py when attempting to access messages[msg_i].get("tool_calls"), suggesting an incompatibility in how the message history (including the tool call and its result/observation) is structured or processed for Ollama after a tool run.

Steps to Reproduce:

  1. Set up CrewAI to use an Ollama model (e.g., qwen3) as the LLM provider via litellm.
  2. Define a CrewAI Agent and assign one or more tools (e.g., DuckDuckGoSearchTool) to it using the tools=[...] parameter.
  3. Define a Task for this agent that requires it to use one of the assigned tools.
  4. Execute the task using crew.kickoff() (or within a CrewAI Flow).
  5. Observe the agent successfully executing the tool.
  6. Observe the subsequent attempt by CrewAI/litellm to make the next LLM call to Ollama (to process the tool results).
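
For reference, a minimal script along the lines of those steps (a sketch only; the EchoTool stands in for a real search tool, and the agent/task wording is illustrative, not from the original report):

import os

from crewai import Agent, Crew, LLM, Task
from crewai.tools import BaseTool

class EchoTool(BaseTool):
    name: str = "echo"
    description: str = "Returns its input unchanged."

    def _run(self, text: str) -> str:
        return text

llm = LLM(
    model="ollama/qwen3:8b",
    base_url=os.environ.get("OLLAMA_API_BASE"),
    api_key="NA",
)

agent = Agent(
    role="Researcher",
    goal="Answer the question using the echo tool.",
    backstory="Minimal agent used to trigger the bug.",
    tools=[EchoTool()],  # removing this list is the workaround described below
    llm=llm,
)

task = Task(
    description="Use the echo tool on the word 'hello', then summarize the result.",
    expected_output="A one-line summary.",
    agent=agent,
)

Crew(agents=[agent], tasks=[task]).kickoff()  # crashes on the LLM call after the tool runs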

Expected Behavior:

The agent should successfully process the tool’s output and continue its execution by making the next LLM call without errors.

Actual Behavior:

The script crashes during the LLM call after the tool execution. An IndexError: list index out of range occurs within litellm, wrapped in a litellm.exceptions.APIConnectionError. The Crew/Task fails.

Error Logs / Traceback:

Traceback (most recent call last):
  File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\main.py", line 2870, in completion
    response = base_llm_http_handler.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\llm_http_handler.py", line 269, in completion
    data = provider_config.transform_request(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\ollama\completion\transformation.py", line 322, in transform_request
    modified_prompt = ollama_pt(model=model, messages=messages)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\litellm_core_utils\prompt_templates\factory.py", line 229, in ollama_pt
    tool_calls = messages[msg_i].get("tool_calls")
                 ~~~~~~~~^^^^^^^
IndexError: list index out of range

Environment:

  • Python Version: [3.12.9]
  • crewai Version: [0.118.0]
  • crewai-tools Version: [0.43.0]
  • litellm Version: [1.67.1]
  • Ollama Version: [0.6.4]
  • LLM Model: [qwen3:8b, qwen3:4b, qwen3:14b]
  • Operating System: [Windows 11 Version 24H2 (OS Build 26120.3941)]

Workaround:

Commenting out or removing the tools=[...] list from the Agent’s definition prevents this specific IndexError.
The agent can then make LLM calls via Ollama/litellm as expected.

Possible Solution: Eric Hartford created a Modelfile that disables thinking (have not tested). https://x.com/cognitivecompai/status/1917112517496897574

GitHub Link: Modelfile.qwen3-no-thinking · GitHub

It appears this is an error with litellm:
https://github.com/BerriAI/litellm/issues/10499

I found a workaround last night: in crew.py I added a Pydantic model

from pydantic import BaseModel

class Research(BaseModel):
    researchArea: str

and then added `output_pydantic=Research` to every task.
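
In case it helps, a task wired up that way looks roughly like this (a sketch; the description, expected_output, and researcher agent are illustrative assumptions):

from crewai import Task

research_task = Task(
    description="Pick one promising research area.",  # illustrative
    expected_output="A single research area.",        # illustrative
    output_pydantic=Research,  # structured output via the model above
    agent=researcher,          # assumes an Agent defined elsewhere
)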


I guess it cleans up the output from qwen3 and therefore doesn't interfere with Ollama's prompt templating logic? :thinking:

*By the way, full disclosure: total py noob here!

Just tested this, didn’t seem to help. My error is the index out of range… can’t use any tools, CrewAI or custom, without hitting this error.

I seem to have found the core of the problem and a solution.
My post

Did you update CrewAI to 0.119.0?
The error is now gone for me.
I am using
qwen3:1.7b
qwen3:14b
qwen3:8b