Can't connect to Azure OpenAI with CrewAI

Issue Connecting to Azure OpenAI with CrewAI and LangChain

Hello all,

I don't know why the documentation says Azure OpenAI is supported but doesn't explain how to connect to it. I'd like to know whether the Azure connection they mention actually works, or not.

I’ve been trying to connect to Azure OpenAI using the following code, but I keep running into errors. No matter what I try, I can’t seem to get it to work. Can anyone guide me on what I’m doing wrong?

Code Snippet

from crewai import Agent, Task, Crew, Process
from crewai_tools import YoutubeChannelSearchTool
from langchain_openai import AzureChatOpenAI
from dotenv import load_dotenv
import os 

load_dotenv()

# Setting up Azure OpenAI
# (Note: these values are defined here but never passed to AzureChatOpenAI below;
# the constructor only reads the environment variables.)
model = "gpt-4o"
api_key = "YOUR_AZURE_OPENAI_API_KEY"
api_version = "2024-05-01-preview"
base_url = "https://your-openai-instance.openai.azure.com/"

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_KEY"),
    api_version=os.environ.get("AZURE_OPENAI_VERSION"),
)

# Initializing the YouTube Channel Search Tool
yt_tool = YoutubeChannelSearchTool(youtube_channel_handle="@zackdfilms")

# Agents
blog_researcher = Agent(
    role="Blog Creator from YouTube Videos",
    goal="Get the relevant video content for the topic {topic} from YouTube channel",
    verbose=True,
    memory=True,
    backstory=(
        "Expert in understanding videos in crime, fun, and entertainment and providing suggestions"
    ),
    tools=[yt_tool],
    allow_delegation=True,
    llm=azure_llm
)

Error Message

api_key = self.config.api_key or os.environ["OPENAI_API_KEY"]
                                     ~~~~~~~~~~^^^^^^^^^^^^^^^^^^
  File "<frozen os>", line 714, in __getitem__
KeyError: 'OPENAI_API_KEY'

What I’ve Tried

  1. Setting environment variables correctly in the .env file:
    AZURE_OPENAI_ENDPOINT=https://your-openai-instance.openai.azure.com/
    AZURE_OPENAI_KEY=your-api-key
    AZURE_OPENAI_VERSION=2024-05-01-preview
    
  2. Verifying that the dotenv package is loading environment variables correctly.
  3. Explicitly passing api_key="your-api-key" instead of relying on os.environ.get().
  4. Checking if the AzureChatOpenAI class expects OPENAI_API_KEY instead of AZURE_OPENAI_KEY.

Would appreciate any guidance! Thanks in advance.

I think that you should use AZURE_OPENAI_API_KEY and not AZURE_OPENAI_KEY
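
As a rough sketch (assuming langchain's AzureChatOpenAI reads AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT from the environment; the azure_deployment value is whatever you named your deployment in Azure):

# Sketch, not verified against your setup: with AZURE_OPENAI_API_KEY and
# AZURE_OPENAI_ENDPOINT set in the environment, only the deployment name and
# API version need to be passed explicitly.
from langchain_openai import AzureChatOpenAI

azure_llm = AzureChatOpenAI(
    azure_deployment="gpt-4o",         # your Azure deployment name
    api_version="2024-05-01-preview",
)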

I tried what you suggested, Kenpachi from Bleach:

azure_llm = AzureChatOpenAI(
    azure_endpoint=os.environ.get("AZURE_OPENAI_API_ENDPOINT"),
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    api_version=os.environ.get("AZURE_OPENAI_API_VERSION"),
    model="gpt-4o"

)

but the same error still persists.

Have you tried using LiteLLM as outlined in the docs? LLMs - CrewAI
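
For example, a minimal sketch of the LiteLLM-style setup (the AZURE_API_KEY / AZURE_API_BASE / AZURE_API_VERSION names and the azure/ model prefix follow LiteLLM's convention; substitute your own values):

import os
from crewai import LLM

# Sketch: CrewAI's LLM class routes "azure/<deployment>" through LiteLLM,
# which reads these environment variables.
os.environ["AZURE_API_KEY"] = "<your-azure-api-key>"
os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2024-05-01-preview"

azure_llm = LLM(model="azure/gpt-4o")  # "gpt-4o" here is the deployment name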

Hi ziny,

Yeah, I tried it. It worked for a simple research agent, but with the code below the same error persists:

from crewai import Agent, Task, Crew, Process, LLM
from crewai_tools import YoutubeChannelSearchTool
from dotenv import load_dotenv

load_dotenv()

azure_llm = LLM(
    model="azure/gpt-4o",
    api_version="2024-05-01-preview",
)

## Initializing the Youtube Channel Search Tool
yt_tool = YoutubeChannelSearchTool(youtube_channel_handle="@zackdfilms")


## Agents
## Create a senior blog content researcher

blog_researcher = Agent(
    role="Blog Creator from Youtube Videos",
    goal="Get the relevant video content for the topic {topic} from youtube channel",
    verbose=True,
    memory=True,
    backstory=(
        "Expert in understanding videos in crime, fun and entertainment and providing suggestion"
    ),
    tools=[yt_tool],
    allow_delegation=True,
    llm=azure_llm
)

## Creating a senior blog writer agent with Youtube tool

blog_writer = Agent(
    role="Blog Writer from YouTube Videos",
    goal="Narrate compelling blog content from the video content for the topic {topic}",
    verbose=True,
    memory=True,
    backstory=(
        "With a flair for simplifying complex topics, you craft "
        "engaging narratives that captivate and educate, bringing new "
        "discoveries to light in an accessible manner."
    ),
    tools=[yt_tool],
    allow_delegation=False,
    llm=azure_llm
)

## Research Task
research_task = Task(
    description=(
        "Identify the video {topic}. "
        "Get detailed information about the video from the channel"
    ),
    expected_output="A comprehensive 3 paragraphs long report based on {topic} of video content",
    tools=[yt_tool],
    agent=blog_researcher
)

## Writing Task
writing_task = Task(
    description=(
        "Get the info from the YouTube channel on topic {topic}"
    ),
    expected_output="Summarize the info from the YouTube channel video on the topic {topic} and create the content for the blog",
    tools=[yt_tool],
    agent=blog_writer,
    async_execution=False,
    output_file="blog_new_post.md"
)

## Forming the Crew with the agents and tasks
blog_crew = Crew(
    agents=[blog_researcher,blog_writer],
    tasks=[research_task,writing_task],
    process=Process.sequential,
    memory=True,
    cache=True,
    max_rpm=100,
    share_crew=True
)

## Start task execution process with enhanced feedback
result=blog_crew.kickoff(
    inputs={"topic":"Hero war dog"},
)

print(result)

I suggest you double-check that you’ve declared all the environment variables requested in the LiteLLM documentation.

Hello max,

Based on the documentation, I already have these keys in my .env file, as shown below. The same error still persists:

AZURE_API_KEY="<api_value>"
AZURE_API_BASE="https://<azure_domain>.openai.azure.com"
AZURE_OPENAI_DEPLOYMENT="gpt-4o"
AZURE_API_VERSION="2024-05-01-preview"

It’s really strange that the error keeps occurring. Unfortunately, I’m not an Azure user myself. To make it easier for someone with more knowledge to help and to isolate the connection issue, I suggest you run the code below and reply with the full exception traceback:

from crewai import LLM
import os
import litellm
litellm._turn_on_debug()

os.environ["AZURE_API_KEY"] = "" # "your-azure-api-key"
os.environ["AZURE_API_BASE"] = "" # "https://your-endpoint.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "" # "2023-05-15"

azure_llm = LLM(
    model="azure/gpt-4o",
)

azure_response = azure_llm.call(
    "Hey, who are you?"
)

print(f'\nAzure Response:\n\n{azure_response}\n')

Hello max,
I ran your code and it gives me a successful response, but the problem is with CrewAI: when I run the full script I posted above (agents, tasks, and the YouTube tool), the same error persists. If you have Azure credentials, it would be really helpful if you could try that code on your end and let me know.


Okay, now that we’ve isolated the LLM configuration issue, here are some more recommendations:

  1. Remember to define the same environment variables from our previous discussion in your .env file.
  2. You’re setting the tools parameter in both your agents and your tasks. It’s not forbidden, just unnecessary, and goes against the KISS principle. If you had, for example, the same agent executing 3 different tasks and needed to use 1 tool for each task, then it would make sense to declare all the tools for the agent (the agent’s tools parameter) and then declare the specific tool for each task (the task’s tools parameter). It would be like saying, “Hey, out of all your tools, just use this one to complete this task.” That’s not the case here, so I suggest you only assign tools to your agents, as that’s sufficient for your use case. I also suggest you review the documentation for agents and tasks, especially the explanation of each parameter.
  3. You’re setting the memory parameter for your crew. Without going into too much detail here, this internally activates tools that use embeddings. Those embeddings need a model, which in turn needs an API key. In the absence of an explicit configuration, the library tries to configure one from OpenAI, hence the earlier error. For your current example, I suggest you remove the memory parameter from your crew if it’s not needed (see the sketch after this list). Review the documentation for the LLMs and the memory systems in CrewAI.
  4. Finally, the tool you’re using, YoutubeChannelSearchTool, is called “YouTube Channel RAG Search” in the documentation. The “RAG” part has the same effect as what I mentioned above: you have to pass a complete configuration for a custom model and embeddings, as suggested in the tool’s documentation. If you don’t provide your own configuration, the library tries to fall back to OpenAI, which also explains the error you encountered earlier. I suggest you read the tool’s documentation, just as you read the memory-systems documentation in the previous item.
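
To make items 3 and 4 concrete, here's a rough, untested sketch. The provider name "azure_openai" and the config keys (model, api_key, deployment_name) follow the Embedchain-style configuration the tool's documentation describes, but treat them as assumptions and double-check them against the docs and your own deployment names:

import os
from crewai_tools import YoutubeChannelSearchTool

# Give the RAG tool its own LLM and embedder so it stops looking for OPENAI_API_KEY.
# Provider name and config keys are illustrative (Embedchain-style); verify them
# against the YouTube Channel RAG Search documentation.
yt_tool = YoutubeChannelSearchTool(
    youtube_channel_handle="@zackdfilms",
    config={
        "llm": {
            "provider": "azure_openai",
            "config": {
                "model": "gpt-4o",
                "deployment_name": os.getenv("AZURE_OPENAI_DEPLOYMENT"),
                "api_key": os.getenv("AZURE_API_KEY"),
            },
        },
        "embedder": {
            "provider": "azure_openai",
            "config": {
                "model": "text-embedding-ada-002",
                "deployment_name": "<your-embedding-deployment>",  # hypothetical name
                "api_key": os.getenv("AZURE_API_KEY"),
            },
        },
    },
)

# ...and drop memory=True (plus cache/share_crew if you don't need them) from your
# Crew(...) call, so the crew doesn't try to build an OpenAI-backed memory store.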

I hope your example is up and running soon!

Thanks for your suggestions, max.

I read the documentation and modified the code like this:


from crewai import Agent, Task, Crew, Process, LLM
from crewai_tools import YoutubeChannelSearchTool
from dotenv import load_dotenv
import os

load_dotenv()

llm = LLM(
    model="azure/gpt-4o",
    api_version="2024-05-01-preview",
    base_url=os.getenv("AZURE_API_BASE"),
    api_base=os.getenv("AZURE_API_BASE"),
    api_key=os.getenv("AZURE_API_KEY"),
)

config = dict(
    llm=dict(
        provider="azure_openai",
        config=dict(
            model="gpt-4o",
            api_key=os.getenv("AZURE_OPENAI_API_KEY"),
            deployment_name=os.getenv("AZURE_OPENAI_DEPLOYMENT"),
        ),
    ),
    embedder=dict(
        provider="azure_openai",
        config=dict(
            model="text-embedding-ada-002",
           api_key=os.getenv("AZURE_OPENAI_API_KEY"),
            deployment_name=os.getenv("AZURE_OPENAI_DEPLOYMENT"),
        ),
    )
)

## Initializing the Youtube Channel Search Tool
yt_tool = YoutubeChannelSearchTool(youtube_channel_handle="@zackdfilms", config=config)


## Agents
## Create a senior blog content researcher

blog_researcher = Agent(
    role="Blog Creator from Youtube Videos",
    goal="Get the relevant video content for the topic {topic} from youtube channel",
    verbose=True,
    backstory=(
        "Expert in understanding videos in crime, fun and entertainment and providing suggestion"
    ),
    tools=[yt_tool],
    allow_delegation=True,
    llm=llm
)

## Creating a senior blog writer agent with Youtube tool

blog_writer = Agent(
    role="Blog Writer from YouTube Videos",
    goal="Narrate compelling blog content from the video content for the topic {topic}",
    verbose=True,
    backstory=(
        "With a flair for simplifying complex topics, you craft "
        "engaging narratives that captivate and educate, bringing new "
        "discoveries to light in an accessible manner."
    ),
    tools=[yt_tool],
    allow_delegation=False,
    llm=llm
)

## Research Task
research_task = Task(
    description=(
        "Identify the video {topic}. "
        "Get detailed information about the video from the channel"
    ),
    expected_output="A comprehensive 3 paragraphs long report based on {topic} of video content",
    tools=[],
    agent=blog_researcher,
    llm=llm,
)

## Writing Task
writing_task = Task(
    description=(
        "Get the info from the YouTube channel on topic {topic}"
    ),
    expected_output="Summarize the info from the YouTube channel video on the topic {topic} and create the content for the blog",
    tools=[],
    agent=blog_writer,
    async_execution=False,
    llm=llm,
    output_file="blog_new_post.md"
)

## Forming the Crew with the agents and tasks
blog_crew = Crew(
    agents=[blog_researcher,blog_writer],
    tasks=[research_task,writing_task],
    process=Process.sequential,
)

## Start task execution process with enhanced feedback
result=blog_crew.kickoff(
    inputs={"topic":"Hero war dog"},
)

print(result)

Now I'm getting the error below. Do you have any idea what's causing it?

I encountered an error while trying to use the tool. This was the error: APIStatusError.__init__() missing 2 required keyword-only arguments: 'response' and 'body'.
 Tool Search a Youtube Channels content accepts these inputs: Tool Name: Search a Youtube Channels content
Tool Arguments: {'search_query': {'description': 'Mandatory search query you want to use to search the Youtube Channels content', 'type': 'str'}}
Tool Description: A tool that can be used to semantic search a query the @zackdfilms Youtube Channels content.

Hey @AIEngineer, I’ve made the necessary adjustments to get your code up and running.

In my version, I’m using Qwen (through the OpenRouter provider) as the main LLM (the brains of your agents), I’m using Gemini (directly from Google) as the internal LLM for the RAG tool, and finally using one of Google’s models for the tool’s embeddings. I used this mix-and-match approach to demonstrate that, with the right configurations, CrewAI can perfectly adapt to a wide range of needs.

One important thing I should emphasize: the LLM settings (especially the environment variables) in CrewAI follow the pattern of the LiteLLM library, which you can check out in the documentation. As for the settings of the tools that use RAG (including the environment variables), they follow the pattern of the Embedchain library, which you can also review in the documentation. Unfortunately the configurations of these two libraries have not yet been unified, so you should pay attention to the requirements of each one, ok?

Here’s the fully functional code. Check out which parameters I used for each component, adapt it to your infrastructure, and consult the appropriate documentation. Good luck!

import os
from crewai import Agent, Crew, LLM, Process, Task
from crewai_tools import YoutubeChannelSearchTool

os.environ["OPENROUTER_API_KEY"] = ""
os.environ["GEMINI_API_KEY"] = ""
os.environ["GOOGLE_API_KEY"] = os.environ["GEMINI_API_KEY"]

main_llm = LLM(
    model="openrouter/qwen/qwq-32b",
    temperature=0.7
)

rag_llm = {
    "provider": "google",
    "config": {
        "model": "gemini-2.0-flash",
        "max_tokens": 1024,
        "temperature": 0.1
    }
}

rag_embedder = {
    "provider": "google",
    "config": {
        "model": "models/text-embedding-004",
        "task_type": "retrieval_document"
    }
}

youtube_search_tool = YoutubeChannelSearchTool(
    youtube_channel_handle="@zackdfilms",
    config={
        "llm": rag_llm,
        "embedder": rag_embedder
    }
)

blog_researcher_agent = Agent(
    role="Blog Creator from Youtube Videos",
    goal=(
        "Get the relevant video content for the topic "
        "{topic} from youtube channel"
    ),
    backstory=(
        "Expert in understanding videos in crime, fun and entertainment "
        "and providing suggestion"
    ),
    tools=[youtube_search_tool],
    allow_delegation=False,
    verbose=True,
    llm=main_llm
)

blog_writer_agent = Agent(
    role="Blog Writer from Youtube Videos",
    goal=(
        "Narrate compelling blog content from the video content for "
        "the topic {topic}"
    ),
    backstory=(
        "With a flair for simplifying complex topics, you craft engaging "
        "narratives that captivate and educate, bringing new discoveries "
        "to light in an accessible manner."
    ),
    tools=[youtube_search_tool],
    allow_delegation=False,
    verbose=True,
    llm=main_llm
)

research_task = Task(
    description=(
        "Identify the video {topic}. "
        "Get detailed information about the video from the channel"
    ),
    expected_output=(
        "A comprehensive 3 paragraphs long report based on "
        "{topic} of video content"
    ),
    agent=blog_researcher_agent
)

writing_task = Task(
    description=(
        "Get the info from the youtube channel on topic {topic}."
    ),
    expected_output=(
        "Summarize the info from youtube channel video on the topic "
        "{topic} and create the content for the blog"
    ),
    agent=blog_writer_agent
)

blog_crew = Crew(
    agents=[blog_researcher_agent, blog_writer_agent],
    tasks=[research_task, writing_task],
    process=Process.sequential
)

result = blog_crew.kickoff(
    inputs={"topic": "Hero war dog"}
)

print(f"\n🤖 Final Report:\n\n{result.raw}")

Thanks for the suggestion, max.

Did you also try this with an Azure LLM? Currently I only have organization access to Azure, and I want this to work fully on Azure.