Hello everyone,
I’m new to Python and CrewAI, and I’ve successfully set up and run a sample CrewAI application. However, I’m seeking guidance on integrating a custom LLM with CrewAI.
My organization has implemented an internal GenAI gateway through which all LLM requests must be routed. This gateway requires a specific request body and header format, and authentication is handled via OAuth tokens. Importantly, we do not receive direct access to LLM API keys. Instead, the gateway dynamically routes requests to the appropriate model (e.g., OpenAI, Gemini) based on the request type.
I attempted to implement a custom LLM following the documentation (Custom LLM Implementation - CrewAI), but I ran into errors about a missing OpenAI API key. Since we don't hold direct provider keys, I'm unsure how to configure CrewAI so that all requests go through our internal gateway instead.
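For context, here is a simplified sketch of the kind of wrapper I'm trying to build. The gateway URL, header names, and payload shape below are placeholders (our real gateway defines its own internal schema), and I've kept it dependency-free rather than subclassing CrewAI's base LLM class, since that's the part I'm unsure about:

```python
import json
import urllib.request


class GatewayLLM:
    """Hypothetical custom LLM that routes every call through an
    internal GenAI gateway instead of calling a provider directly.
    In CrewAI this would presumably subclass its custom-LLM base class
    and override the call method; shown standalone here for clarity."""

    def __init__(self, gateway_url: str, oauth_token: str, model: str = "gpt-4o"):
        self.gateway_url = gateway_url  # placeholder internal endpoint
        self.oauth_token = oauth_token  # obtained from our OAuth flow
        self.model = model

    def build_request(self, messages: list[dict]) -> tuple[dict, dict]:
        # Placeholder header/body format; the real gateway requires
        # its own specific request schema.
        headers = {
            "Authorization": f"Bearer {self.oauth_token}",
            "Content-Type": "application/json",
        }
        body = {"model": self.model, "messages": messages}
        return headers, body

    def call(self, messages: list[dict]) -> str:
        headers, body = self.build_request(messages)
        req = urllib.request.Request(
            self.gateway_url,
            data=json.dumps(body).encode("utf-8"),
            headers=headers,
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            payload = json.loads(resp.read())
        # Assumes an OpenAI-style response shape; would need adjusting
        # to whatever our gateway actually returns.
        return payload["choices"][0]["message"]["content"]
```

My hope is to pass something like this to my agents so that no OpenAI API key is ever needed, but it's unclear to me how CrewAI expects this to be wired in.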
Any insights, examples, or guidance on implementing a custom LLM in this context would be greatly appreciated.