I am having difficulty defining custom models for tools.
example:
@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        verbose=True,
        tools=[GithubSearchTool(
            github_repo='https://github.com/myrepodir/reponame',
            gh_token='github_pat_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
            content_types=['code', 'issue', 'repo', 'pr'],
            config=dict(
                llm=dict(
                    provider="ollama",
                    config=dict(
                        model="codeqwen",
                        # temperature=0.5,
                        # top_p=1,
                        # stream=true,
                    ),
                ),
                embedder=dict(
                    provider="ollama",
                    config=dict(
                        model="nomic-embed-text",
                    ),
                ),
            ),
        )],
        llm=LLM(model="gpt-4o", api_version="2025-01-01-preview"),
    )
If I do not have an OpenAI API key, it will not run at all. If I do have an OpenAI API key, it runs, but it does not appear to use the custom embedding model. Additionally, I cannot figure out how to configure any of the Azure OpenAI embedding models here. An example of how to use an Azure OpenAI embedding model would be very helpful.
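For reference, this is the shape of embedder configuration I would expect to work for Azure OpenAI, mirroring the provider/config pattern I used for ollama above. Note that the provider string "azure_openai" and the `deployment_name` / `api_base` / `api_version` keys are my guesses based on how other providers are configured, not something I have confirmed in the docs, and the model and deployment names are placeholders:

```python
# Hypothetical Azure OpenAI embedder config, following the same
# provider/config pattern as the ollama embedder above.
# The provider string and config keys here are assumptions on my part.
azure_embedder = dict(
    provider="azure_openai",
    config=dict(
        model="text-embedding-3-large",            # placeholder model name
        deployment_name="my-embedding-deployment",  # placeholder Azure deployment
        api_base="https://my-resource.openai.azure.com/",
        api_version="2024-02-01",
    ),
)
```

Is something like this supported, and if so, what are the correct provider name and config keys?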
Error message if I do not have OPENAI_API_KEY in my environment variables:
ERROR:root:LiteLLM call failed: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
Error during LLM call: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
An unknown error occurred. Please check the details below.
Error details: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable