Error using DeepSeek

Hi, I'm trying to run DeepSeek as the LLM for my agent, but I always get the same error.

"ERROR:root:Failed to get supported params: argument of type ‘NoneType’ is not iterable
WARNING:opentelemetry.trace:Overriding of current TracerProvider is not allowed
ERROR:root:Failed to get supported params: argument of type ‘NoneType’ is not iterable
ERROR:root:Failed to get supported params: argument of type ‘NoneType’ is not iterable
ERROR:root:Failed to get supported params: argument of type ‘NoneType’ is not iterable
ERROR:root:LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=<main.DeepseekLLM object at 0x7fd894c10cd0>
Pass model as E.g. For ‘Huggingface’ inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: Providers | liteLLM

Provider List: Providers | liteLLM


Agent: PDF document analysis expert

Task: Answer based on the PDF: What is the main topic of the document?

Error during LLM call: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=<main.DeepseekLLM object at 0x7fd894c10cd0>
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: Providers | liteLLM

BadRequestError Traceback (most recent call last)
in <cell line: 0>()
25
26 # 7. Run the process
---> 27 result = crew.kickoff()
28 print("Resultado:", result)

19 frames
/usr/local/lib/python3.11/dist-packages/litellm/litellm_core_utils/get_llm_provider_logic.py in get_llm_provider(model, custom_llm_provider, api_base, api_key, litellm_params)
331 error_str = f"LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model={model}\n Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: Providers | liteLLM"
332 # maps to openai.NotFoundError, this is raised when openai does not recognize the llm
--> 333 raise litellm.exceptions.BadRequestError( # type: ignore
334 message=error_str,
335 model=model,

BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=<main.DeepseekLLM object at 0x7fd894c10cd0>
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: Providers | liteLLM"
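
If I understand the error message, LiteLLM expects model to be a provider-prefixed string (like "huggingface/starcoder"), not a Python object. For reference, a minimal standalone call in that format would look something like this (just a sketch; the "deepseek/" prefix is my assumption from the LiteLLM provider list, and it needs DEEPSEEK_API_KEY set in the environment):

import os
from litellm import completion

# Standalone LiteLLM call with a provider-prefixed model string
# ("deepseek/deepseek-chat" is my assumption from the LiteLLM provider list).
response = completion(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "Hello"}],
    api_key=os.getenv("DEEPSEEK_API_KEY"),
)
print(response.choices[0].message.content)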

This is my code:

import os
from litellm import completion

# 2. Create a custom wrapper for LiteLLM
class DeepseekLLM:
    def __init__(self, temperature: float = 0.3, max_tokens: int = 2000):
        self.temperature = temperature
        self.max_tokens = max_tokens

    def generate(self, prompt: str) -> str:
        try:
            response = completion(
                model="deepseek-chat",
                messages=[{"role": "user", "content": prompt}],
                api_key=os.getenv("DEEPSEEK_API_KEY"),
                base_url="https://api.deepseek.com/v1",
                temperature=self.temperature,
                max_tokens=self.max_tokens
            )
            return response.choices[0].message.content
        except Exception as e:
            return f"Error: {str(e)}"


# 4. Instantiate the custom LLM
deepseek_llm = DeepseekLLM(temperature=0.3, max_tokens=2000)
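
For reference, the wrapper can be called directly like this (a minimal sanity check, assuming DEEPSEEK_API_KEY is set in the environment; the prompt is just an example):

# Direct call to the wrapper, outside CrewAI, to confirm the DeepSeek call itself works.
print(deepseek_llm.generate("Say hello in one sentence."))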