Access OpenAI LLM through SAP Proxy Client

Hello everyone,

I am new to CrewAI and trying to use an OpenAI model through the SAP proxy client. My code looks like this:

from gen_ai_hub.proxy.langchain.openai import ChatOpenAI
from gen_ai_hub.proxy.core.proxy_clients import get_proxy_client

def __init__(self):
    proxy_client = get_proxy_client('gen-ai-hub')
    self.llm_model = ChatOpenAI(proxy_model_name='gpt-4o', proxy_client=proxy_client, temperature=0)

@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        verbose=True,
        llm=self.llm_model
    )

To get this working, I have set the following environment variables:
AICORE_CLIENT_ID
AICORE_CLIENT_SECRET
AICORE_BASE_URL
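For reference, this is roughly how I set them (placeholder values; the real ones come from the SAP AI Core service key). Depending on your gen_ai_hub version, AICORE_AUTH_URL may also be required, and AICORE_RESOURCE_GROUP defaults to "default":

```shell
# Values below are placeholders taken from an SAP AI Core service key.
export AICORE_CLIENT_ID="<client-id>"
export AICORE_CLIENT_SECRET="<client-secret>"
export AICORE_BASE_URL="<ai-api-url>/v2"
# Possibly also needed, depending on SDK version:
export AICORE_AUTH_URL="<auth-url>"
export AICORE_RESOURCE_GROUP="default"
```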

However, when I run the code, I get an error stating that the OpenAI API key is missing. I do not have any API key since I do not have access to the OpenAI API directly.

Error: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

Is there a way to avoid this error while using the proxy client, or am I possibly missing a configuration step? Any guidance would be greatly appreciated!

Thanks in advance!


Hi @gava1012 ,
I have the same question… Let me know if you were able to find a workaround for accessing the LLM through the SAP proxy client.

Thanks
Rakesh

@Rakesh_Teki @gava1012 was the issue resolved? Are you able to connect to the SAP proxy without using an OpenAI API key?

Hi,
I ran into the same issue. While I wasn't able to fully resolve the root cause (namely, that CrewAI expects an OpenAI API key even when you want to use a custom LLM), I did manage to find a workaround.

CrewAI defines an abstract class called BaseLLM, which outlines the methods required for any (custom) LLM integration. To use your own LLM, you only need to implement the call method, ensuring it accepts the proper arguments and returns the model's response. Here's a minimal example:

from crewai import BaseLLM
from gen_ai_hub.proxy.native.openai import chat

class MyLLM(BaseLLM):
    def call(self, messages, **kwargs):
        # Route the request through the SAP gen_ai_hub proxy
        # instead of the OpenAI API directly.
        response = chat.completions.create(model_name=self.model, messages=messages)
        return response.choices[0].message.content

llm = MyLLM("gpt-4o-mini")

Next, pass your custom LLM instance to the agent:

agent = Agent(
    llm=llm,
    ...
)

Hope this helps!
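In case it helps anyone test the pattern without SAP AI Core credentials, here is a self-contained sketch of the same call() contract. The BaseLLM and FakeChat classes below are simplified stand-ins I wrote for illustration, not the real crewai or gen_ai_hub classes; in a real project you would import crewai.BaseLLM and gen_ai_hub.proxy.native.openai.chat instead:

```python
class BaseLLM:
    """Simplified stand-in for CrewAI's abstract LLM base class."""
    def __init__(self, model):
        self.model = model

    def call(self, messages, **kwargs):
        raise NotImplementedError


class FakeChoice:
    """Mimics one entry of the `choices` list in a chat completion response."""
    def __init__(self, content):
        self.message = type("Msg", (), {"content": content})()


class FakeChat:
    """Stub that mimics chat.completions.create()'s response shape."""
    class completions:
        @staticmethod
        def create(model_name, messages):
            # Pretend the model echoes the last user message in uppercase.
            text = messages[-1]["content"].upper()
            return type("Resp", (), {"choices": [FakeChoice(text)]})()


class MyLLM(BaseLLM):
    def call(self, messages, **kwargs):
        # Same extraction logic as the real workaround above.
        response = FakeChat.completions.create(
            model_name=self.model, messages=messages
        )
        return response.choices[0].message.content


llm = MyLLM("gpt-4o-mini")
print(llm.call([{"role": "user", "content": "hello"}]))  # -> HELLO
```

The point is just that CrewAI only needs an object with a working call(messages) method; no OPENAI_API_KEY check is triggered on this path.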


This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.