I still think it's nice to have a list of all your LLM models available after 'self.'
A single data class that all of your Agents can access, a single point of maintenance, etc. (see the sketch below).
My old habits
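For illustration, a minimal sketch of that registry pattern. The class and attribute names here are hypothetical, and it assumes the pre-0.60.0 style where langchain's ChatOpenAI instances were handed straight to agents:

from dataclasses import dataclass, field
from langchain_openai import ChatOpenAI

@dataclass
class LLMRegistry:
    # One place to declare every model; agents reference self.llms.<name>.
    OpenAIGPT4oMini: ChatOpenAI = field(
        default_factory=lambda: ChatOpenAI(model_name="gpt-4o-mini", temperature=0.8)
    )
    OpenAIGPT4o: ChatOpenAI = field(
        default_factory=lambda: ChatOpenAI(model_name="gpt-4o", temperature=0.2)
    )

# e.g. in a crew class: self.llms = LLMRegistry(), then an agent uses self.llms.OpenAIGPT4oMini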
You are correct that the simplest way is how you describe it.
Can I ask how you assign the temperature for an LLM like GPT-4o-mini on v ≥ 0.60.0?
And on v ≥ 0.60.0, can we still use an LLM like GPT-4o-mini with a configuration such as self.OpenAIGPT4oMini = ChatOpenAI(model_name="gpt-4o-mini", temperature=0.8)?
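If it helps, a minimal sketch of setting the temperature on v ≥ 0.60.0, assuming the crewai.LLM wrapper that shipped alongside the LiteLLM migration (rather than ChatOpenAI):

from crewai import LLM

# Same model and temperature as the old ChatOpenAI config above;
# pass it to an agent via Agent(llm=gpt4o_mini, ...)
gpt4o_mini = LLM(model="gpt-4o-mini", temperature=0.8)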
but those need to be set per model, since we have several of them going at the same time. How can we do that? Also, for LM Studio models that use the OpenAI-compatible API:
import litellm

response = litellm.completion(
    model="openai/mistral",  # "openai/" prefix tells LiteLLM to route to an OpenAI-compatible endpoint
    api_key="lm-studio",  # key for your OpenAI-compatible endpoint; LM Studio accepts any non-empty placeholder
    api_base="http://127.0.0.1:1234/v1",  # base URL of your custom OpenAI-compatible endpoint (LM Studio default)
    messages=[{"role": "user", "content": "Hello!"}],  # placeholder prompt
)
Both. I use different models for general vs. function calling vs. planning. I may use the same model, like gpt-4o-mini, but with different temperature parameters for different purposes. For example, for a planning LLM I always increase the temperature compared to using the same model for an agent that is just doing web search duty.
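For illustration, that per-purpose split might look like the following (variable names are hypothetical; again assuming the crewai.LLM wrapper):

from crewai import LLM

# Same base model, tuned per duty: hotter for planning, cooler for rote work.
planning_llm = LLM(model="gpt-4o-mini", temperature=0.9)
web_search_llm = LLM(model="gpt-4o-mini", temperature=0.2)
function_calling_llm = LLM(model="gpt-4o-mini", temperature=0.0)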
Wow - yeah, my code totally broke too. I'm down for the LiteLLM approach (it's great and just works), but like others have mentioned, I need to be able to pass parameters to each model.
A more traditional litellm completion might look like:
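import litellm

# Plain hosted-model call: no api_base override needed, and per-call
# parameters like temperature travel with each request.
response = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],  # placeholder prompt
    temperature=0.8,
)
print(response.choices[0].message.content)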