# CrewAI Support Request: OPENAI_API_KEY Required Error When Using Vertex AI
**Date**: 2025-11-07
## Issue Summary
We are experiencing an `OPENAI_API_KEY is required` error when creating CrewAI agents, even though we are using Google Vertex AI (not OpenAI) and have explicitly passed a custom LLM to all agents. The error occurs during agent initialization, before the custom Vertex AI LLM is used.
## Environment Details
- **CrewAI Version**: 1.4.0
- **Python Version**: 3.13.9
- **Related Packages**:
- `langchain-core`: 1.0.3
- `langchain-google-vertexai`: 3.0.2
- `langchain-openai`: 1.0.2
- `litellm`: 1.79.1
- `openai`: 2.7.1
- **Operating System**: macOS (darwin 25.0.0)
- **LLM Provider**: Google Vertex AI (Gemini 2.5 Flash)
- **Authentication Method**: Service Account (JSON key file)
## Problem Description
When we try to create CrewAI agents with a custom Vertex AI LLM, CrewAI throws an error requiring `OPENAI_API_KEY` during agent initialization. This happens even though:
1. We are not using OpenAI - we’re using Google Vertex AI
2. We explicitly pass a custom `LLM` instance to each `Agent` via the `llm` parameter
3. We pass the same `LLM` instance to the `Crew` via the `llm` parameter
4. We have `OPENAI_API_KEY` set in the environment (both from `.env` file and as a global environment variable)
## Code Configuration
### LLM Configuration
```python
from crewai import LLM, Agent, Task, Crew

# Configure Vertex AI LLM
llm = LLM(
    model="vertex_ai/gemini-2.5-flash",
    project="sinergia-energia-477317",
    location="us-central1",
    temperature=0.7,
    max_tokens=8192
)
```
### Agent Creation
```python
agent = Agent(
    role="Leitor de Clientes Supabase",
    goal="Buscar clientes qualificados na base de dados Supabase",
    backstory="...",
    verbose=True,
    allow_delegation=False,
    tools=[fetch_clients_tool],
    llm=llm  # Explicitly passing Vertex AI LLM
)
```
### Crew Creation
```python
crew = Crew(
    agents=[agent1, agent2, agent3, agent4, agent5],
    tasks=[task1, task2, task3, task4, task5],
    process=Process.sequential,
    verbose=True,
    llm=llm,  # Explicitly passing Vertex AI LLM
    memory=False  # Disabled to avoid OpenAI dependency
)
```
## Error Details
### Error Message
```
ImportError: Error importing native provider: OPENAI_API_KEY is required
```
### Full Stack Trace
```
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/crewai/llm.py", line 335, in __new__
    model_string = model.partition("/")[2] if "/" in model else model
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/crewai/llms/providers/openai/completion.py", line 90, in __init__
    timeout=timeout,
    ^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/crewai/llms/providers/openai/completion.py", line 114, in _get_client_params
    self.reasoning_effort = reasoning_effort
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: OPENAI_API_KEY is required

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "crew_analysis.py", line 1042, in create_credit_analysis_crew
    client_reader = Agent(...)
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pydantic/main.py", line 250, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/crewai/agent/internal/meta.py", line 58, in post_init_setup_with_extensions
    result = original_func(self)
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/crewai/agent/core.py", line 213, in post_init_setup
    description="A2A (Agent-to-Agent) configuration for delegating tasks to remote agents. Can be a single A2AConfig or a dict mapping agent IDs to configs.",
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/crewai/utilities/llm_utils.py", line 66, in create_llm
    raise e
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/crewai/utilities/llm_utils.py", line 53, in create_llm
    return LLM(
        model=model,
        ...<6 lines>...
        api_base=api_base,
    )
  File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/crewai/llm.py", line 338, in __new__
    )
ImportError: Error importing native provider: OPENAI_API_KEY is required
```
## When Does This Occur?
The error occurs:
- **When**: Creating the first `Agent` instance
- **Condition**: Only when there are qualified clients in the database (analysis proceeds to create the crew)
- **Context**: When running via FastAPI in a separate thread
- **Not when**: No clients are found (analysis completes successfully without creating agents)
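Because the failure only appears when agents are created from the FastAPI worker thread, we first ruled out thread-local environment state. The sketch below (standard library only; the dummy key value is ours) confirms that `os.environ` is process-global in CPython, so a worker thread sees the same variables as the main thread:

```python
import os
import threading

# os.environ is process-global in CPython: a value set in the main
# thread is visible from worker threads, so the FastAPI worker should
# see the same OPENAI_API_KEY as the main process.
os.environ["OPENAI_API_KEY"] = "sk-dummy-key-for-validation"

seen = {}

def worker():
    # Read the variable from a separate thread, mirroring our FastAPI setup.
    seen["key"] = os.environ.get("OPENAI_API_KEY")

t = threading.Thread(target=worker)
t.start()
t.join()

assert seen["key"] == "sk-dummy-key-for-validation"
```

This check passes in our service, which is why we believe the threading context is not the cause.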
## What We’ve Tried
1. **Set OPENAI_API_KEY in .env file** - Key is loaded correctly when tested in isolation
2. **Set OPENAI_API_KEY as global environment variable** - Exported before starting the service
3. **Load .env before any CrewAI imports** - Using `load_dotenv()` at the very beginning
4. **Verify key availability before creating agents** - Added checks and logs
5. **Pass LLM explicitly to all agents and crew** - Using the `llm=llm` parameter
6. **Disable memory** - Set `memory=False` to avoid the OpenAI dependency
7. **Use a dummy key** - Even with a dummy key (for validation only), the error persists
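Steps 3 and 4 above can be condensed into the fail-fast check we run before any CrewAI import. `assert_key_loaded` is our own helper, not a CrewAI API, and the dummy key is only a placeholder:

```python
import os

def assert_key_loaded() -> str:
    """Fail fast with a clear message if OPENAI_API_KEY is absent.

    We call this before `from crewai import Agent, Crew` so any failure
    points at our environment setup rather than at CrewAI's native
    provider import.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY missing from process environment")
    return key

# In the real service, load_dotenv() runs here first; the fallback dummy
# value mirrors step 7 above.
os.environ["OPENAI_API_KEY"] = (
    os.environ.get("OPENAI_API_KEY") or "sk-dummy-key-for-validation"
)
assert_key_loaded()  # passes in our service, yet the error still occurs
```

The check passes every time, which is why we are confident the key is visible to the process when `Agent(...)` raises.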
## Expected Behavior
When using a custom LLM (Vertex AI) and passing it explicitly to agents and crew:
- CrewAI should use the provided LLM instance
- CrewAI should not require `OPENAI_API_KEY` validation
- Agents should be created successfully with the custom LLM
## Actual Behavior
- CrewAI attempts to create a default LLM internally before using the custom LLM
- This default LLM creation requires `OPENAI_API_KEY` validation
- Error occurs even though we’re not using OpenAI and have passed a custom LLM
## Questions
1. **Is this expected behavior?** Should CrewAI require `OPENAI_API_KEY` even when using a custom Vertex AI LLM?
2. **Is there a configuration option** to disable the default LLM creation or skip `OPENAI_API_KEY` validation when using custom LLMs?
3. **Is there a workaround** for using Vertex AI without requiring `OPENAI_API_KEY`?
4. **Could this be a bug** in the agent initialization process that tries to create a default LLM before using the custom one?
5. **Are there any known issues** with using Vertex AI and CrewAI together?
## Additional Information
- **Service Account**: Using Google Cloud Service Account JSON file
- **Environment Variable**: `GOOGLE_APPLICATION_CREDENTIALS` points to the service account file
- **Vertex AI API**: Enabled and working (we can create LLM instances successfully)
- **Test Results**: When we test the LLM creation in isolation, it works perfectly
## Reproduction Steps
1. Configure Vertex AI LLM with service account
2. Create an Agent with `llm=llm` parameter (Vertex AI LLM)
3. Create a Crew with `llm=llm` parameter (Vertex AI LLM)
4. Try to execute the crew
5. Error occurs during agent initialization
## Minimal Reproduction Code
```python
import os
from dotenv import load_dotenv
from crewai import LLM, Agent, Crew, Process

# Load environment
load_dotenv()

# Ensure OPENAI_API_KEY is set (even a dummy value)
os.environ["OPENAI_API_KEY"] = "sk-dummy-key-for-validation"

# Configure Vertex AI LLM
llm = LLM(
    model="vertex_ai/gemini-2.5-flash",
    project="your-project-id",
    location="us-central1",
    temperature=0.7
)

# Create agent with custom LLM; the Agent(...) call below raises:
# ImportError: Error importing native provider: OPENAI_API_KEY is required
agent = Agent(
    role="Test Agent",
    goal="Test goal",
    backstory="Test backstory",
    llm=llm  # Explicitly passing Vertex AI LLM
)
```
## Additional Context
- The error occurs specifically when creating agents in a FastAPI application running in a separate thread
- When no clients are found (crew is not created), the analysis completes successfully
- The error only occurs when the crew needs to be created with agents
- We’ve verified that `OPENAI_API_KEY` is available in the environment before agent creation
- The error happens during `Agent.__init__()` when CrewAI internally tries to create a default LLM
- **Memory is disabled**: We have `memory=False` in the Crew configuration to avoid OpenAI dependency
- **Note**: According to the CrewAI documentation, even when custom LLMs are used, the memory system defaults to OpenAI embeddings. We have memory disabled, so this should not be the issue, but it seems worth noting.
## Potential Root Cause (Based on Documentation)
According to the CrewAI Memory documentation, CrewAI uses OpenAI embeddings by default for memory operations, even when using other LLM providers. However, we have `memory=False` set.
**The issue might be**: Even with `memory=False`, CrewAI may still be trying to initialize or validate embedding providers during agent creation, which requires `OPENAI_API_KEY` validation.
**Question for Support**: Does CrewAI validate or initialize embedding providers even when `memory=False`? If so, is there a way to disable this validation when using custom LLMs?
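To help your team diagnose this, we now log a small environment snapshot immediately before the first `Agent(...)` call. `preflight_llm_env` is our own diagnostic helper (hypothetical name, standard library only), not a CrewAI API; it records which provider-related variables are visible at the exact moment CrewAI raises:

```python
import os

def preflight_llm_env() -> dict[str, bool]:
    """Snapshot of provider-related environment variables.

    Logged immediately before the first Agent(...) call so support can
    compare what our process sees with what CrewAI's native-provider
    import expects. Helper name is ours, not part of CrewAI.
    """
    keys = ("OPENAI_API_KEY", "GOOGLE_APPLICATION_CREDENTIALS")
    return {k: bool(os.environ.get(k)) for k in keys}

# In our service both variables are set, yet the ValueError is still
# raised inside crewai/llms/providers/openai/completion.py.
os.environ["OPENAI_API_KEY"] = "sk-dummy-key-for-validation"
snapshot = preflight_llm_env()
print(snapshot)
```

In our logs this snapshot shows both variables present, which is what makes the `OPENAI_API_KEY is required` error so puzzling.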
## Contact Information
- **Project**: Sinergia Energia - Credit Analysis Service
- **Use Case**: Automated credit analysis using CrewAI agents with Vertex AI
## Request
We would appreciate guidance on:
1. Whether this is expected behavior or a bug
2. How to properly configure CrewAI with Vertex AI without requiring OpenAI API key
3. Any workarounds or solutions available
Thank you for your assistance!