Use local enterprise LLMs

Does CrewAI allow using local or enterprise LLMs (e.g., Azure OpenAI or GGUF files downloaded locally)?

Hi and welcome to the community

This might help your search:

CrewAI supports Ollama; there are some examples here: LLMs - CrewAI

from crewai import LLM

llm = LLM(
    model="ollama/llama3:70b",
    base_url="http://localhost:11434"
)
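
For a fuller picture, here is a rough sketch of passing that LLM object to an agent (the role, goal, and backstory values are just placeholders, not from the docs):

from crewai import Agent, LLM

llm = LLM(
    model="ollama/llama3:70b",
    base_url="http://localhost:11434"
)

# Placeholder role/goal/backstory - swap in your own values.
researcher = Agent(
    role="Researcher",
    goal="Answer questions using the locally hosted Llama 3 model",
    backstory="A research assistant that only talks to the local Ollama server.",
    llm=llm,
    verbose=True
)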

Thanks for the response. Where, or in which files, does this need to be configured?

You can set it in .env, for example:

MODEL=ollama/llama3.1
API_BASE=http://localhost:11434
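
If you go the .env route, here is a minimal sketch of reading those values back into an LLM object (assuming python-dotenv is installed; the variable names match the example above):

import os
from dotenv import load_dotenv
from crewai import LLM

load_dotenv()  # loads MODEL and API_BASE from the .env file

llm = LLM(
    model=os.environ["MODEL"],            # e.g. ollama/llama3.1
    base_url=os.environ.get("API_BASE"),  # e.g. http://localhost:11434
)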

or by importing it in crew.py as below:

from crewai import LLM
llm = LLM(
    model="ollama/llama3.1",
    base_url="http://localhost:11434"
)

You can also set the LLM as an attribute within the crew class:

self.llm = LLM(
    model="ollama/llama3.1",
    base_url="http://localhost:11434"
)

If you choose the code-based approach, don't forget to pass the llm to either the agent or the task, like this:

    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            llm=self.llm,  # <<-- pass the local LLM here
            verbose=True
        )
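
Putting the pieces together, here is a rough sketch of how that could look in crew.py, assuming the standard CrewBase project scaffolding and an agents.yaml that defines a researcher entry (the class name is just an example):

from crewai import Agent, LLM
from crewai.project import CrewBase, agent

@CrewBase
class ResearchCrew():
    def __init__(self):
        # the local Ollama model used by the agents below
        self.llm = LLM(
            model="ollama/llama3.1",
            base_url="http://localhost:11434"
        )

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            llm=self.llm,
            verbose=True
        )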

Please refer to the LiteLLM docs for this; vLLM and other providers are supported there as well.
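
Since the original question mentioned Azure OpenAI: under LiteLLM's naming conventions that would look roughly like the sketch below (the deployment name, endpoint, and API version are placeholders to replace with your own):

import os
from crewai import LLM

# Placeholders - fill in your own Azure OpenAI deployment details.
os.environ["AZURE_API_KEY"] = "<your-azure-api-key>"
os.environ["AZURE_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2024-02-15-preview"

llm = LLM(model="azure/<your-deployment-name>")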
