LiteLLM call failed: litellm.InternalServerError: VertexAIException InternalServerError

Has anyone seen this? To debug it, do I set `litellm.set_verbose = True` on the Crew, the Agent, or the Task?

LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

2024-12-12 20:59:32,740 - 13539291136 - llm.py-llm:170 - ERROR: LiteLLM call failed: litellm.InternalServerError: VertexAIException InternalServerError - {
  "error": {
    "code": 500,
    "message": "Internal error encountered.",
    "status": "INTERNAL"
  }
}

@achris7 Error 500 means something went wrong on the server side, not on yours. Vertex AI is probably having issues on their end. Try running the identical code a bit later and let me know if the issue disappears.

If the issue still persists, please provide your full code.
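
As for where `litellm.set_verbose=True` goes: it's a module-level LiteLLM flag, not a setting on Crew, Agent, or Task. A minimal sketch, assuming a plain Python entry point (set it before any LLM call is made):

    import litellm

    # Module-level flag: enables LiteLLM's debug output for every call,
    # regardless of which Crew/Agent/Task triggers it. Set before kickoff.
    litellm.set_verbose = True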


I'm getting a LiteLLM issue as well, and I'm also wondering: where exactly do I set `litellm.set_verbose=True`?

I get this error after adding the following to my Crew, and I wonder how the memory is initialized in the first place:

    # Try to add "memory"
    memory=True,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
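
For context, those kwargs go into the Crew constructor; with memory=True, CrewAI initializes its built-in memory stores and uses the embedder you pass for the RAG-backed ones. A minimal sketch of the full setup (the agent and task here are placeholders I've made up, not from the original post, and a default LLM is assumed to be configured via environment):

    from crewai import Agent, Crew, Task

    # Placeholder agent and task; the memory/embedder kwargs are the part
    # under discussion.
    researcher = Agent(
        role="Researcher",
        goal="Answer simple questions",
        backstory="A minimal test agent.",
    )
    hello_task = Task(
        description="Say hello.",
        expected_output="A short greeting.",
        agent=researcher,
    )

    crew = Crew(
        agents=[researcher],
        tasks=[hello_task],
        memory=True,  # turns on CrewAI's built-in memory stores
        embedder={
            "provider": "ollama",
            "config": {"model": "mxbai-embed-large"},
        },
    )

    result = crew.kickoff()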

When I tried to reset the memory, it required OpenAI. I then solved the issue by running an Ollama pull of mxbai-embed-large. I still get the LiteLLM verbose suggestion, but now it appears to be working!