litellm.InternalServerError: watsonxException

I am creating a CrewAI agent using watsonx.ai together with tools like SerperDevTool and ScrapeWebsiteTool, and I am facing an issue:

:cross_mark: LLM Call Failed, Error: litellm.InternalServerError: watsonxException - {"errors":[{"code":"downstream_request_failed","message":"Downstream vllm request failed: Internal Server Error","more_info":"https://cloud.ibm.com/apidocs/watsonx-ai"}]}

Abbreviated stack trace (file and line details omitted):

result = self._run_sequential_process()
return self._execute_tasks(self.tasks)
task_output = task.execute_sync(
return self._execute_core(agent, context, tools)
raise e
result = agent.execute_task(
raise e
result = self.agent_executor.invoke(
raise e
formatted_answer = self._invoke_loop()
raise e
answer = self._get_llm_response()
raise e
answer = self.llm.call(
return self._handle_non_streaming_response(
response = litellm.completion(**params)
raise e
result = original_function(*args, **kwargs)
raise exception_type(
raise litellm.InternalServerError()

What does your LLM config look like?

Your errors are probably on the watsonx.ai side; have you tried again to see whether the issue persists?
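Since `downstream_request_failed` is a transient 5xx from the model-serving layer, it can be worth retrying with backoff before concluding anything is broken. Here is a minimal, generic sketch of such a wrapper (this is a hypothetical helper, not a CrewAI or LiteLLM API; LiteLLM also has its own `num_retries` option):

```python
import time

def call_with_retries(fn, max_attempts=3, base_delay=1.0):
    """Call fn(); on exception, retry with exponential backoff.

    Re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            # Wait 1s, 2s, 4s, ... (with base_delay=1.0) before retrying
            time.sleep(base_delay * 2 ** (attempt - 1))
```

You would then wrap the failing call, e.g. `call_with_retries(lambda: llm.call(prompt))`, to distinguish a flaky endpoint from a persistent failure.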

import os

from crewai import LLM

# Load credentials from the environment
os.environ["SERPER_API_KEY"] = os.getenv("SERPER_API_KEY")

WATSONX_URL = os.getenv("WATSONX_URL")
WATSONX_APIKEY = os.getenv("WATSONX_APIKEY")
WATSONX_PROJECT_ID = os.getenv("WATSONX_PROJECT_ID")
WATSONX_MODEL_ID = "watsonx/mistralai/mistral-large"

os.environ["WATSONX_URL"] = WATSONX_URL
os.environ["WATSONX_APIKEY"] = WATSONX_APIKEY
os.environ["WATSONX_PROJECT_ID"] = WATSONX_PROJECT_ID

llm = LLM(
	model=WATSONX_MODEL_ID,
	base_url=WATSONX_URL,
	project_id=WATSONX_PROJECT_ID,
	# max_tokens=500,
	temperature=0.7,
	# top_p=0.7,
	# frequency_penalty=1,
	api_key=WATSONX_APIKEY,
)
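One thing worth checking before blaming the endpoint: if any of those environment variables comes back empty, the request can fail downstream with a confusing error instead of a clear one. A small fail-fast check (a hypothetical helper, not part of CrewAI) could look like this:

```python
import os

# Variables the watsonx.ai config above depends on
REQUIRED_VARS = ["WATSONX_URL", "WATSONX_APIKEY", "WATSONX_PROJECT_ID"]

def check_watsonx_env(env=os.environ):
    """Return the names of required variables that are missing or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Calling `check_watsonx_env()` right before constructing the `LLM` and raising on a non-empty result turns a vague `watsonxException` into an immediate, readable configuration error.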

Yeah, I tried many times and thought it might be on the watsonx.ai side, so I contacted support, but they found no matching error on their end either.