Error while using the gemini model

I am trying to use the Gemini model, and the crew runs for a while; I can see all the output by setting `verbose=True`. Just before completion I get this error:

```
2025-01-22 15:04:00,336 - 134479128977536 - llm.py-llm:277 - ERROR: LiteLLM call failed: litellm.APIConnectionError: Your default credentials were not found. To set up Application Default Credentials, see Set up Application Default Credentials | Authentication | Google Cloud for more information.
Traceback (most recent call last):
  File "/home/ritesh/multi_agent_tutorials/.venv/lib/python3.12/site-packages/litellm/main.py", line 2278, in completion
    model_response = vertex_chat_completion.completion( # type: ignore
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ritesh/multi_agent_tutorials/.venv/lib/python3.12/site-packages/litellm/llms/vertex_ai/gemini/vertex_and_google_ai_studio_gemini.py", line 1204, in completion
    _auth_header, vertex_project = self._ensure_access_token(
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/ritesh/multi_agent_tutorials/.venv/lib/python3.12/site-packages/litellm/llms/vertex_ai/vertex_llm_base.py", line 130, in _ensure_access_token
    self._credentials, cred_project_id = self.load_auth(
                                         ^^^^^^^^^^^^^^^
  File "/home/ritesh/multi_agent_tutorials/.venv/lib/python3.12/site-packages/litellm/llms/vertex_ai/vertex_llm_base.py", line 84, in load_auth
    creds, creds_project_id = google_auth.default(
                              ^^^^^^^^^^^^^^^^^^^^
  File "/home/ritesh/multi_agent_tutorials/.venv/lib/python3.12/site-packages/google/auth/_default.py", line 697, in default
    raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)
google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see Set up Application Default Credentials | Authentication | Google Cloud for more information.
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.
```

My LLM setup is as follows:

```python
llm = LLM(
    model=f"gemini/{model_name}",
    api_key=get_api_key(model_name),
)
```

The model I am using is "gemini-1.5-flash".

I have set `max_rpm = 10`, which is below the rate limit allowed by Gemini.
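One thing worth checking: the traceback shows LiteLLM going down the Vertex AI path and looking for Application Default Credentials, which is what happens when no usable API key reaches it. With the `gemini/` prefix, LiteLLM can also pick the key up from the `GEMINI_API_KEY` environment variable. Below is a minimal sketch of a fail-fast guard; `ensure_gemini_key` is a hypothetical helper of mine, not part of CrewAI or LiteLLM, and it assumes your `get_api_key(model_name)` may return `None` on a bad lookup:

```python
import os


def ensure_gemini_key(key=None):
    """Return a usable Gemini API key or fail fast with a clear message,
    instead of letting LiteLLM silently fall back to Vertex AI
    Application Default Credentials (the DefaultCredentialsError above)."""
    key = key or os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "No Gemini API key found: pass api_key explicitly or set "
            "the GEMINI_API_KEY environment variable."
        )
    # Export it so LiteLLM can also find the key via the environment.
    os.environ["GEMINI_API_KEY"] = key
    return key
```

Calling something like `ensure_gemini_key(get_api_key(model_name))` before constructing the `LLM` makes a missing or empty key surface immediately, rather than as a late Vertex ADC error at the end of the crew run.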

Please help me resolve this error.

Just an update: the error starts with the metadata server timing out:

```
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True`.

2025-01-22 15:03:39,289 - 134479128977536 - _metadata.py-_metadata:142 - WARNING: Compute Engine Metadata server unavailable on attempt 1 of 3. Reason: timed out
2025-01-22 15:03:43,321 - 134479128977536 - _metadata.py-_metadata:142 - WARNING: Compute Engine Metadata server unavailable on attempt 2 of 3. Reason: timed out
2025-01-22 15:03:48,193 - 134479128977536 - _metadata.py-_metadata:142 - WARNING: Compute Engine Metadata server unavailable on attempt 3 of 3. Reason: timed out
```

This is resolved. The error was caused by a bug in my code.
