Use of Azure OpenAI token provider

Hi,

I’m trying to create a crew but I’m facing lots of issues because of the way I need to connect to Azure OpenAI. I don’t have any OPENAI_API_KEY; instead I’m connecting with a token provider, something like:

import os
import msal
from langchain_openai.chat_models.azure import AzureChatOpenAI

azure_app = msal.ConfidentialClientApplication(
    client_id=os.environ.get("AZURE_OPENAI_CLIENT_ID"),
    authority=os.environ.get("AZURE_OPENAI_AUTHORITY"),
    client_credential=os.environ.get("AZURE_OPENAI_CLIENT_SECRET"),
    token_cache=msal.TokenCache(),
)

llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    azure_ad_token_provider=lambda: azure_app.acquire_token_for_client(scopes=[os.environ.get("AZURE_OPENAI_SCOPE")])["access_token"],
    api_version=os.environ.get("AZURE_OPENAI_API_VERSION"),
    azure_deployment="gpt4o",
    model_name="azure/gpt4o",
    temperature=0,
)

This works well when I invoke the llm directly, for example

llm.invoke("Tell me a joke").content

However, I get the following error when using the llm in an agent:

ERROR:root:LiteLLM call failed: litellm.APIError: AzureException APIError - Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
Error during LLM call: litellm.APIError: AzureException APIError - Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.
An unknown error occurred. Please check the details below.
Error details: litellm.APIError: AzureException APIError - Missing credentials. Please pass one of `api_key`, `azure_ad_token`, `azure_ad_token_provider`, or the `AZURE_OPENAI_API_KEY` or `AZURE_OPENAI_AD_TOKEN` environment variables.

The toy example I’m trying to run is as simple as

writer = Agent(
    role='A writer of a popular AI newsletter',
    goal='Generate a detailed AI newsletter',
    backstory='You are a Top AI writer known for writing detailed and engaging newsletters',
    verbose=True,
    allow_delegation=False,
    llm=llm,
)

task = Task(
    description='Write a detailed newsletter about AI new trends',
    agent=writer,
    expected_output='A refined finalized version of report in text format',
)

crew = Crew(
    agents=[writer],
    tasks=[task],
    verbose=True,
)

result = crew.kickoff()
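From the error, CrewAI seems to route the agent's completions through LiteLLM rather than through the LangChain client, so the `azure_ad_token_provider` configured on `AzureChatOpenAI` is never consulted. LiteLLM reads its Azure credentials from environment variables such as `AZURE_AD_TOKEN`, `AZURE_API_BASE`, and `AZURE_API_VERSION`. A minimal sketch of bridging the two, with a hypothetical helper (`export_azure_ad_token` is my name, not an official API) that copies a freshly acquired MSAL token into those variables before `kickoff()`:

```python
import os


def export_azure_ad_token(acquire_token, env=os.environ):
    """Copy a freshly acquired Azure AD token into the environment
    variables LiteLLM reads for Azure auth (sketch, not official API).

    `acquire_token` is any callable returning an MSAL-style dict with
    an 'access_token' key.
    """
    result = acquire_token()
    token = result["access_token"]
    env["AZURE_AD_TOKEN"] = token
    # Mirror the endpoint/version into the names LiteLLM expects.
    env["AZURE_API_BASE"] = env.get("AZURE_OPENAI_ENDPOINT", "")
    env["AZURE_API_VERSION"] = env.get("AZURE_OPENAI_API_VERSION", "")
    return token


# Usage before crew.kickoff():
# export_azure_ad_token(lambda: azure_app.acquire_token_for_client(
#     scopes=[os.environ["AZURE_OPENAI_SCOPE"]]))
```

Because the token expires, this would need to be called again before each kickoff; it is a workaround sketch, not a fix for the provider callback being ignored.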

[EDIT]
Using an AD token provider does not seem to work. A workaround is to use a crewai.LLM object instead (so LiteLLM underneath), as shown here, although the token has to be refreshed once in a while, so it’s not very useful:

from crewai import LLM

token = azure_app.acquire_token_for_client(scopes=[os.environ.get("AZURE_OPENAI_SCOPE")])["access_token"]

llm = LLM(
    model=model,
    temperature=temperature,
    azure_ad_token=token
)

This llm object can then be passed to agents as usual.
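The manual refresh could be made less painful with a small cache that re-acquires the token shortly before expiry. A sketch, assuming the MSAL result carries the usual `expires_in` seconds field (the `CachedTokenProvider` class is my own illustration, not part of crewai or msal):

```python
import time


class CachedTokenProvider:
    """Return a cached Azure AD token, re-acquiring it shortly before
    expiry. `acquire` is any callable returning an MSAL-style dict
    with 'access_token' and 'expires_in' keys (sketch)."""

    def __init__(self, acquire, skew_seconds=300, clock=time.monotonic):
        self._acquire = acquire
        self._skew = skew_seconds       # refresh this long before expiry
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def __call__(self):
        now = self._clock()
        if self._token is None or now >= self._expires_at - self._skew:
            result = self._acquire()
            self._token = result["access_token"]
            self._expires_at = now + result.get("expires_in", 3600)
        return self._token


# provider = CachedTokenProvider(lambda: azure_app.acquire_token_for_client(
#     scopes=[os.environ["AZURE_OPENAI_SCOPE"]]))
# llm = LLM(model=model, azure_ad_token=provider())  # call provider() again
#                                                    # before each kickoff
```

You would still need to rebuild or update the LLM object when the token rotates, since crewai.LLM takes a static `azure_ad_token` string, not a callable.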

I am trying it out with Entra ID auth for an Azure subscription, and I am still getting an error with the following code:

class SchedulingAgent:
    SUPPORTED_CONTENT_TYPES = ["text/plain"]

    def __init__(self):
        self.llm = self.get_azure_llm()
        self.scheduling_assistant = Agent(
            role="Personal Scheduling Assistant",
            goal="Check calendar and answer questions about availability.",
            backstory="You only manage calendar and use the Calendar Availability Checker tool.",
            verbose=True,
            allow_delegation=False,
            tools=[AvailabilityTool()],
            llm=self.llm,
        )

    def get_azure_llm(self) -> LLM:
        print("Creating Azure LLM with Azure AD auth...")

        endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
        deployment = os.environ["AZURE_OPENAI_DEPLOYMENT"]
        api_version = os.environ["AZURE_OPENAI_API_VERSION"]

        # Azure AD credentials (CLI, VSCode, Managed Identity)
        credential = DefaultAzureCredential()
        token_provider = get_bearer_token_provider(
            credential, "https://cognitiveservices.azure.com/.default"
        )

        # Initialize CrewAI LLM
        # llm = LLM(
        #     model=deployment,          # string model name
        #     azure_endpoint=endpoint,
        #     api_version=api_version,
        #     azure_ad_token_provider=token_provider,
        #     auth_method="azure_ad",
        #     azure_ad_token=token_provider()
        # )
        llm = LLM(
            model=deployment,
            temperature=0.0,
            azure_ad_token=token_provider(),
        )
        print("Azure LLM created successfully!")
        return llm

litellm.exceptions.AuthenticationError: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

I do not have ID auth or security

I also tried with

    # Azure AD credentials (CLI, VSCode, Managed Identity)
    credential = DefaultAzureCredential()
    token_provider = get_bearer_token_provider(
        credential, "https://cognitiveservices.azure.com/.default"
    )
    token = token_provider()
    print(f"Obtained Azure AD token, first 20 chars: {token[:20]}")

    llm = LLM(
        api_key="dummy",
        model=deployment,
        azure_endpoint=endpoint,
        api_version=api_version,
        azure_ad_token=token,
    )

Still same error
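The `OpenAIException ... OPENAI_API_KEY` wording in your traceback suggests LiteLLM is routing the call to its plain OpenAI provider, not Azure: LiteLLM picks the provider from the model-string prefix, and `azure_ad_token` only applies on the `azure/` path. Since `model=deployment` passes the bare deployment name, the prefix is missing. A minimal sketch of normalizing the model string (the helper name is mine):

```python
def as_azure_model(deployment: str) -> str:
    """Ensure the model string carries the azure/ prefix LiteLLM uses
    to select its Azure provider (sketch)."""
    if deployment.startswith("azure/"):
        return deployment
    return f"azure/{deployment}"


# llm = LLM(model=as_azure_model(deployment),
#           azure_ad_token=token, ...)
```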

Based on the documentation, try something like:

from crewai import LLM
import os

os.environ["AZURE_API_KEY"] = ""       # "my-azure-api-key"
os.environ["AZURE_API_BASE"] = ""      # "https://example-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = ""   # "2023-05-15"
os.environ["AZURE_AD_TOKEN"] = ""      # If any

azure_llm = LLM(
    model="azure/<your_deployment_name>",
    temperature=0.7
)

print(
    azure_llm.call("Hi, how are you?")
)

default_params = {
    "model": f"azure/{os.getenv('AZURE_OPENAI_DEPLOYMENT_NAME')}",
    "api_version": os.getenv("AZURE_OPENAI_API_VERSION"),
    "api_key": os.getenv("AZURE_OPENAI_API_KEY"),
    "base_url": os.getenv("AZURE_OPENAI_ENDPOINT"),
}

params = {**default_params, **kwargs}

llm = LLM(**params)

Try this, it should work.
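The merge above relies on dict-unpacking order: later keys win, so anything in `kwargs` overrides the env-derived defaults. A minimal sketch wrapping that pattern in a helper (the `build_llm_params` name is my own, for illustration):

```python
import os


def build_llm_params(**kwargs):
    """Build LLM constructor params from env vars, letting caller
    kwargs override any default (later dict-unpack entries win)."""
    default_params = {
        "model": f"azure/{os.getenv('AZURE_OPENAI_DEPLOYMENT_NAME', '')}",
        "api_version": os.getenv("AZURE_OPENAI_API_VERSION"),
        "api_key": os.getenv("AZURE_OPENAI_API_KEY"),
        "base_url": os.getenv("AZURE_OPENAI_ENDPOINT"),
    }
    return {**default_params, **kwargs}


# llm = LLM(**build_llm_params(temperature=0.2))
```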