LLM Response Error: ValueError: Invalid response from LLM call - None or empty

Hey CrewAI team,

Can someone please prioritize this error? I’m noticing that multiple CrewAI users are running into the same issue when using open-source LLMs.

Here’s the problem:
✅ Everything works perfectly end-to-end when using OpenAI’s GPT models.
❌ But the moment I switch to models like Qwen, LLaMA 3.1, DeepSeek, or Mistral, I hit an LLM response error.

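For context, the only thing that changes between the working and failing runs is the model string passed to the LLM object. A minimal sketch of what I mean (the model names below are just examples, not my exact setup):

from crewai import LLM

# Works end-to-end:
llm_openai = LLM(model="openai/gpt-4o-mini")

# Fails with the empty-response error (example open-source models via LiteLLM):
llm_groq = LLM(model="groq/llama-3.1-8b-instant")
llm_hf = LLM(model="huggingface/Qwen/Qwen2.5-7B-Instruct")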
Error I am getting:
Received None or empty response from LLM call.
An unknown error occurred. Please check the details below.
Error details: Invalid response from LLM call - None or empty.
2025-03-06 11:15:07,059 - ERROR - Error in rag_fucntion: Invalid response from LLM call - None or empty.
Traceback (most recent call last):
  File "/home/sranjan710/miniconda3/envs/py310prod/lib/python3.10/site-packages/crewai/agent.py", line 248, in execute_task
    result = self.agent_executor.invoke(
  File "/home/sranjan710/miniconda3/envs/py310prod/lib/python3.10/site-packages/crewai/agents/crew_agent_executor.py", line 115, in invoke
    raise e
  File "/home/sranjan710/miniconda3/envs/py310prod/lib/python3.10/site-packages/crewai/agents/crew_agent_executor.py", line 102, in invoke
    formatted_answer = self._invoke_loop()
  File "/home/sranjan710/miniconda3/envs/py310prod/lib/python3.10/site-packages/crewai/agents/crew_agent_executor.py", line 166, in _invoke_loop
    raise e
  File "/home/sranjan710/miniconda3/envs/py310prod/lib/python3.10/site-packages/crewai/agents/crew_agent_executor.py", line 140, in _invoke_loop
    answer = self._get_llm_response()
  File "/home/sranjan710/miniconda3/envs/py310prod/lib/python3.10/site-packages/crewai/agents/crew_agent_executor.py", line 217, in _get_llm_response
    raise ValueError("Invalid response from LLM call - None or empty.")
ValueError: Invalid response from LLM call - None or empty.
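
For anyone trying to reproduce this: since CrewAI routes calls through LiteLLM, a quick way to check whether the model itself is returning an empty completion is to bypass the agent loop and call LiteLLM directly. A minimal sketch (the model string is a placeholder for whichever model fails for you):

import litellm

# Same model string CrewAI would use, called outside the agent loop.
# If the content comes back None or empty here too, the problem is the
# provider/model response rather than CrewAI's executor.
response = litellm.completion(
    model="groq/llama-3.1-8b-instant",  # placeholder: any failing model
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)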

Please look into this error as a priority. Right now, it feels pointless to support so many LLM providers if 95% of their models don’t work with CrewAI agents.

Thanks,
Sumit

Hey CrewAI team,

I am facing the same error as @sranjan719 while using the Qwen/Qwen2.5-VL-3B-Instruct model. I am hosting the model on a server with vLLM and setting api_base to the API endpoint, but it’s not working. I have tried multiple solutions, but nothing has worked so far.

llm = LLM(
    model="huggingface/Qwen/Qwen2-VL-2B-Instruct",
    api_base="http://0.0.0.0:8000/v1/chat/completions",
)
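
As a sanity check, the server itself can be probed directly with the OpenAI chat schema (a sketch; the URL and model name are taken from the config above, and the server is assumed to be vLLM’s OpenAI-compatible server):

import requests

# vLLM's OpenAI-compatible endpoint requires top-level "model" and
# "messages" fields, per the OpenAI chat completions schema.
resp = requests.post(
    "http://0.0.0.0:8000/v1/chat/completions",
    json={
        "model": "Qwen/Qwen2-VL-2B-Instruct",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)
print(resp.status_code, resp.json())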

Error:
Error during LLM call: litellm.BadRequestError: HuggingfaceException - {"object":"error","message":"[{'type': 'missing', 'loc': ('body', 'messages'), 'msg': 'Field required', 'input': {'inputs': '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nYou are Image Analyst. Expert in visual analysis with deep knowledge of design, composition, objects, patterns, and features. Can accurately describe and interpret images across various contexts.\nYour personal goal is: Analyze the given image and provide detailed insights based on the provided question\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\n\nCurrent Task: Analyze the image located at https://as1.ftcdn.net/v2/jpg/10/64/66/34/1000_F_1064663493_PG2uY9VYvZPVxXvmFwIOvGDSNlfMDeIL.jpg and describe the word written on the image.\n\nThis is the expect criteria for your final answer: A detailed analysis and response based on the given image and question.\nyou MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:<|im_end|>\n<|im_start|>assistant\n', 'parameters': {'stream': False, 'stop': ['', '\nObservation:'], 'details': True, 'return_full_text': False}, 'stream': False}}, {'type': 'missing', 'loc': ('body', 'model'), 'msg': 'Field required', 'input': {'inputs': '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nYou are Image Analyst. Expert in visual analysis with deep knowledge of design, composition, objects, patterns, and features. Can accurately describe and interpret images across various contexts.\nYour personal goal is: Analyze the given image and provide detailed insights based on the provided question\nTo give my best complete final answer to the task respond using the exact following format:\n\nThought: I now can give a great answer\nFinal Answer: Your final answer must be the great and the most complete as possible, it must be outcome described.\n\nI MUST use these formats, my job depends on it!\n\nCurrent Task: Analyze the image located at https://as1.ftcdn.net/v2/jpg/10/64/66/34/1000_F_1064663493_PG2uY9VYvZPVxXvmFwIOvGDSNlfMDeIL.jpg and describe the word written on the image.\n\nThis is the expect criteria for your final answer: A detailed analysis and response based on the given image and question.\nyou MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:<|im_end|>\n<|im_start|>assistant\n', 'parameters': {'stream': False, 'stop': ['', '\nObservation:'], 'details': True, 'return_full_text': False}, 'stream': False}}]","type":"BadRequestError","param":null,"code":400}

Does Qwen support CrewAI/LiteLLM? From the error body, it looks like LiteLLM is sending a Hugging Face-style payload ('inputs' plus 'parameters'), while the vLLM server expects the OpenAI-style 'model' and 'messages' fields.

Hello All,
I’m seeing the same error. Lowering the max_iter parameter when constructing the agent somehow fixes it for me, and the error also seems model-specific: I was getting it with "groq/llama-3.1-8b-instant", but the same code worked fine after switching to the larger "groq/llama-3.3-70b-specdec". It’s intermittent; lowering max_iter or switching models has worked for me so far. A sketch of the agent change is below.
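
For reference, this is roughly the shape of the change (a sketch; the role/goal/backstory strings are placeholders, not my actual agent):

from crewai import Agent, LLM

llm = LLM(model="groq/llama-3.3-70b-specdec")

agent = Agent(
    role="Researcher",                    # placeholder
    goal="Answer the user's question",    # placeholder
    backstory="An experienced analyst.",  # placeholder
    llm=llm,
    max_iter=3,  # lowering this from the default seemed to avoid the error
)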

Are you sure it’s not a naming issue, guys?

As you know, CrewAI uses the LiteLLM library to connect to LLMs, right? And the LiteLLM documentation for vLLM says the model parameter should be model="hosted_vllm/<your-vllm-model-name>". So, in CrewAI, it would be something like this:

from crewai import LLM

llm = LLM(
    model='hosted_vllm/<your-vllm-model-name>',
    api_base='your-hosted-vllm-server',
)
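
Applied to the Qwen model from the earlier post, it would look something like this (a sketch; note that with the hosted_vllm/ prefix, LiteLLM typically wants the server root, for vLLM usually ending in /v1, rather than the full /chat/completions path):

from crewai import LLM

llm = LLM(
    model='hosted_vllm/Qwen/Qwen2.5-VL-3B-Instruct',
    # Server root, not the full /chat/completions route;
    # LiteLLM appends the route itself.
    api_base='http://0.0.0.0:8000/v1',
)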

I already tried this, @Max_Moura, but it’s still not working. I think it’s a template-related issue. I tried it with the Qwen2.5-VL-3B-Instruct model.