moto
September 22, 2024, 5:45am
1
Where is the config.yaml that litellm uses in crewai?
matt
September 22, 2024, 7:31am
2
There is no YAML file; CrewAI drives LiteLLM programmatically in code:
```python
from typing import Any, Dict, List

from litellm import completion
import litellm


class LLM:
    def __init__(self, model: str, stop: List[str] = [], callbacks: List[Any] = []):
        self.stop = stop
        self.model = model
        self.callbacks = callbacks  # stored so _call_callbacks can iterate over them
        litellm.callbacks = callbacks

    def call(self, messages: List[Dict[str, str]]) -> str:
        # Delegate to litellm.completion, retrying transient failures up to 5 times
        response = completion(
            stop=self.stop, model=self.model, messages=messages, num_retries=5
        )
        return response["choices"][0]["message"]["content"]

    def _call_callbacks(self, formatted_answer):
        for callback in self.callbacks:
            callback(formatted_answer)
```
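For example, here is a minimal sketch of how that wrapper might be used; the model name, stop sequence, and prompt are purely illustrative:

```python
# Hypothetical usage of the LLM wrapper above; values are examples only.
llm = LLM(model="gpt-4o-mini", stop=["\nObservation:"])
answer = llm.call([{"role": "user", "content": "Say hello in one word."}])
print(answer)
```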
moto
September 22, 2024, 4:39pm
3
@matt so there is no way to set parameters for an LLM, as there is in the config.yaml file for the LiteLLM proxy? I guess it's back to the same old question: how do you set up an LLM with parameters in CrewAI now? I know you said you were looking into it.
matt
September 23, 2024, 7:32am
4
moto
September 23, 2024, 5:53pm
5
@matt how will we initialize a model with parameters after this change?
Never mind, found it in the new docs after upgrading to 0.63.1.
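For anyone who lands here later, a rough sketch of what the new-style initialization looks like, assuming the crewai.LLM class added around 0.63; the parameter names and values below are illustrative, so check the current docs:

```python
from crewai import Agent, LLM

# Sketch based on the 0.63.x docs: LLM takes the model name plus optional
# sampling parameters. The values here are examples, not recommendations.
llm = LLM(
    model="gpt-4o",
    temperature=0.2,
    max_tokens=1024,
)

# Pass the configured LLM to an agent instead of a bare model string.
agent = Agent(
    role="Researcher",
    goal="Answer questions concisely",
    backstory="An assistant used to demonstrate LLM configuration.",
    llm=llm,
)
```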