litellm.APIError: AzureException APIError - argument of type 'NoneType' is not iterable Using raw output instead

I am getting this error message on Azure when combining the LLM with the LangChain AzureChatOpenAI class:

Failed to convert text into a pydantic model due to the following error: litellm.APIError: AzureException APIError - argument of type 'NoneType' is not iterable. Using raw output instead.

The issue was also reported by someone else here:

I also had the same problem and still couldn't find a fix.

I fixed it by doing this:

import os

os.environ["OPENAI_API_KEY"] = os.getenv("AZURE_OPENAI_KEY_openai_4o")
os.environ["AZURE_API_KEY"] = os.getenv("AZURE_OPENAI_KEY_openai_4o")
os.environ["AZURE_API_BASE"] = os.getenv("AZURE_OPENAI_ENDPOINT_openai_4o")
os.environ["AZURE_API_VERSION"] = "2024-02-01"

llm = LLM(
    model="azure/gpt-4o",
)

I do not understand why I cannot pass these as arguments directly to the LLM() instance. It is what it is… this is on crewai==0.79.4.
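For what it's worth, this error usually means one of those variables resolved to None (for example, a typo in the env variable name passed to os.getenv), and litellm then tries to iterate over it. A minimal sanity check, assuming the variable names from the snippet above, could look like this (check_azure_env is a hypothetical helper, not part of CrewAI):

```python
import os

def check_azure_env(required=("AZURE_API_KEY", "AZURE_API_BASE", "AZURE_API_VERSION")):
    """Return the names of required Azure variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

missing = check_azure_env()
if missing:
    # Fail fast with a readable message instead of the cryptic
    # "argument of type 'NoneType' is not iterable" later on.
    print(f"Missing Azure env variables: {missing}")
```

Running this right before constructing the LLM makes the misconfigured variable obvious.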

You can also instantiate the model via the agents.yaml file and save the environment variables in a .env file in the project's root folder.

Example .env file:

AZURE_API_KEY=your-api-key-here  # Replace with KEY1 or KEY2
AZURE_API_BASE=https://example.openai.azure.com/  # Replace with your endpoint
AZURE_API_VERSION=2024-08-01-preview # API version 
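Projects scaffolded with CrewAI normally load this file for you (via python-dotenv). If yours doesn't, a minimal loader is easy to sketch; load_env_file below is a hypothetical helper, not part of CrewAI, and it only handles simple KEY=VALUE lines with optional # comments like the example above:

```python
import os

def load_env_file(path: str) -> None:
    """Minimal .env loader: one KEY=VALUE per line, '#' starts a comment."""
    with open(path) as fh:
        for raw in fh:
            line = raw.split("#", 1)[0].strip()
            if not line or "=" not in line:
                continue  # skip blank lines, comments, and malformed lines
            key, _, value = line.partition("=")
            # setdefault: variables already set in the real environment win
            os.environ.setdefault(key.strip(), value.strip())

if os.path.exists(".env"):
    load_env_file(".env")  # call once at startup, before constructing the LLM
```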

Then modify the agents.yaml to include the llm like so:

researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.
  llm: azure/gpt-4o-mini

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
  llm: azure/gpt-4o-mini # replace with your deployed model from Azure
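As a side note, the {topic} placeholders above are filled in from the inputs dict you pass to crew.kickoff(); conceptually it is plain Python string formatting, as this hypothetical illustration shows:

```python
# Illustration only: CrewAI performs this interpolation for you
# from the inputs passed to crew.kickoff().
role_template = "{topic} Senior Data Researcher"
inputs = {"topic": "AI Agents"}

print(role_template.format(**inputs))  # -> AI Agents Senior Data Researcher
```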

What if I don't use a YAML file?
Not everyone is using that latest configuration…
You mentioned this needs to be set in Azure OpenAI, but I took it "out of the box" and didn't change anything there.
They are indicating different endpoints, which is also confusing.
Bottom line: I can't connect to Azure OpenAI when using CrewAI.

Here’s an example without the YAML config.

1. Set environment variables

export AZURE_API_KEY="your-api-key"
export AZURE_API_BASE="https://your-endpoint.openai.azure.com/"
export AZURE_API_VERSION="2024-08-01-preview"

2. Code

from crewai import Agent, Task, Crew, Process, LLM
from crewai_tools import SerperDevTool
import os

# Configure the LLM to use Azure OpenAI
azure_llm = LLM(
    model="azure/gpt-4o-mini",
    api_key=os.environ.get("AZURE_API_KEY"), # Replace with KEY1 or KEY2
    base_url=os.environ.get("AZURE_API_BASE"), # example: https://example.openai.azure.com/
    api_version=os.environ.get("AZURE_API_VERSION"), # example: 2024-08-01-preview
)

# Agent definition
researcher = Agent(
    role='{topic} Senior Researcher',
    goal='Uncover groundbreaking technologies in {topic} for year 2024',
    backstory='Driven by curiosity, you explore and share the latest innovations.',
    tools=[SerperDevTool()],  # requires SERPER_API_KEY to be set in the environment
    llm=azure_llm
)

# Define a research task for the Senior Researcher agent
research_task = Task(
    description='Identify the next big trend in {topic} with pros and cons.',
    expected_output='A 3-paragraph report on emerging {topic} technologies.',
    agent=researcher,
)

def main():
    # Forming the crew and kicking off the process
    crew = Crew(
        agents=[researcher],
        tasks=[research_task],
        process=Process.sequential,
        verbose=True
    )
    result = crew.kickoff(inputs={'topic': 'AI Agents'})
    print(result)

if __name__ == "__main__":
    main()

This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.