Enabling WatsonxLLM as the provider in CrewAI project setup

Hi Team,
Today, when I created a CrewAI project with 'crewai create crew', it asked me to choose a provider. I want to use watsonx as the LLM provider, but the watsonx option is not available. How should I proceed?

Where? What are you trying to do?

(.backlogs_SLC) MacBook-Pro 10 % crewai create crew bklog10
Creating folder bklog10…
Cache expired or not found. Fetching provider data from the web…
Downloading [####################################] 240693/12180
Select a provider to set up:

  1. openai
  2. anthropic
  3. gemini
  4. groq
  5. ollama
  6. other
    q. Quit
    Enter the number of your choice or 'q' to quit: 6
    Select a provider from the full list:
  7. ai21
  8. aleph_alpha
  9. anthropic
  10. anyscale
  11. azure
  12. azure_ai
  13. bedrock
  14. cerebras
  15. cloudflare
  16. codestral
  17. cohere
  18. cohere_chat
  19. databricks
  20. deepinfra
  21. deepseek
  22. fireworks_ai
  23. fireworks_ai-embedding-models
  24. friendliai
  25. gemini
  26. groq
  27. mistral
  28. nlp_cloud
  29. ollama
  30. openai
  31. openrouter
  32. palm
  33. perplexity
  34. replicate
  35. sagemaker
  36. text-completion-codestral
  37. text-completion-openai
  38. together_ai
  39. vertex_ai-ai21_models
  40. vertex_ai-anthropic_models
  41. vertex_ai-chat-models
  42. vertex_ai-code-chat-models
  43. vertex_ai-code-text-models
  44. vertex_ai-embedding-models
  45. vertex_ai-image-models
  46. vertex_ai-language-models
  47. vertex_ai-llama_models
  48. vertex_ai-mistral_models
  49. vertex_ai-text-models
  50. vertex_ai-vision-models
  51. voyage
    q. Quit
    Enter the number of your choice or 'q' to quit: q
    Exiting…

As shown above, the watsonx option is not available.

I am trying to use an IBM watsonx model.

BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=IBMWatsonLLM
Params: {}
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: Providers | liteLLM
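If I read that message correctly, LiteLLM wants the provider name as a prefix on the model string; the Hugging Face example it cites would look roughly like this (just an illustration of the convention, not the code I am running):

from litellm import completion

# The "huggingface/" prefix tells LiteLLM which provider backend to route the call to.
response = completion(
    model="huggingface/starcoder",
    messages=[{"role": "user", "content": "Hello"}],
)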

Hi @team,
This is high priority.

I am now encountering this error:

raise exception_type(
  File "/Users/paarttipaa/ProjectTask/GithubProj/BCK_Log_SLC_Code_Explanation_Project/.agenticApproach/lib/python3.12/site-packages/litellm/main.py", line 905, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "/Users/paarttipaa/ProjectTask/GithubProj/BCK_Log_SLC_Code_Explanation_Project/.agenticApproach/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 313, in get_llm_provider
    raise e
  File "/Users/paarttipaa/ProjectTask/GithubProj/BCK_Log_SLC_Code_Explanation_Project/.agenticApproach/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 290, in get_llm_provider
    raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=WatsonxLLM
Params: {'model_id': 'mistralai/mistral-large', 'deployment_id': None, 'params': {'decoding_method': 'sample', 'max_new_tokens': 1000, 'temperature': 0.7, 'top_k': 50, 'top_p': 1, 'repetition_penalty': 1}, 'project_id': 'f7312b11-b2dc-4581-b321-11515293a1f1', 'space_id': None}
Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: Providers | liteLLM
(.agenticApproach) paarttipaa@Paarttipaabhalajis-MacBook-Pro BCK_Log_SLC_Code_Explanation_Project %

watsonx.py

from langchain_ibm import WatsonxLLM
import os
from dotenv import load_dotenv


class WatsonLLLM:
    def __init__(self):
        # Load environment variables
        load_dotenv()
        self.llm_mixtral_l = None  # Placeholder for llm_mixtral_l
        self.modelConfig()  # Call the config method to initialize the LLMs

    def modelConfig(self):
        model_id_mistral_l = "mistralai/mistral-large"
        # model_id_llama_3 = "meta-llama/llama-3-405b-instruct"

        wml_url = os.getenv("WATSONX_URL")
        wml_api = os.getenv("WATSONX_APIKEY")
        wml_project_id = os.getenv("PROJECT_ID")

        # Parameters for Mistral
        parameters_mistral_l = {
            "decoding_method": "sample",
            "max_new_tokens": 1000,
            "temperature": 0.7,
            "top_k": 50,
            "top_p": 1,
            "repetition_penalty": 1
        }

        # Create manager LLM (Mistral)
        self.llm_mixtral_l = WatsonxLLM(
            model_id=model_id_mistral_l,
            url=wml_url,
            apikey=wml_api,
            params=parameters_mistral_l,
            project_id=wml_project_id,
        )

Agent.py

from textwrap import dedent
from crewai import Agent
import watsonx


class backlog10Agents():
    def __init__(self):
        # Instantiate the watsonx LLM wrapper class
        self.watson_llm = watsonx.WatsonLLLM()
        # Access the Mistral Large LLM (stored as llm_mixtral_l)
        self.mixtral_llm = self.watson_llm.llm_mixtral_l

    def fileRetriverAgent(self) -> Agent:
        # Creating the File Retriever Agent
        return Agent(
            role="File data Retriever Agent",
            goal="Retrieve the list of Java files in a given directory, read their contents one by one, and provide them as context to the other agents",
            backstory=(
                "You specialize in scanning directories to find files, retrieve their data, and return them."
            ),
            llm=self.mixtral_llm,
            allow_delegation=True,
            verbose=True,
            memory=False,
        )

    def Conditional_Matrix_generator(self) -> Agent:
        return Agent(
            role='Java Matrix Analyst',
            goal='Develop detailed conditional matrices for each Java method, aiding in the analysis of method flow and potential paths based on different conditions.',
            backstory=dedent(
                """You are a highly skilled software analyst with deep knowledge of Java programming.
                Your expertise lies in deconstructing complex Java methods and generating
                conditional matrices to represent logical flows. You have worked on numerous
                projects involving code analysis, helping developers optimize their
                conditional logic and understand the relationships between various code paths."""),
            verbose=True,
            llm=self.mixtral_llm
        )

Note: IBM has a collaboration with CrewAI, so I don't understand why it's throwing this error.

Request: Please help me resolve this issue.

@Paarttipaabhalaji If you encounter this error, follow these two steps:

  1. Use the CrewAI LLM class, which leverages LiteLLM in the background.
  2. Make sure to specify the LLM provider when configuring the LLM. For watsonx.ai, use watsonx/<LLM provider>/<LLM name> as the model string. If you're unsure how to do this for a specific LLM provider, refer to the LiteLLM Providers page for guidance. For example:
import os

from crewai import Agent, LLM

my_llm = LLM(
    api_key=os.getenv("WATSONX_API_KEY"),
    model="watsonx/meta-llama/llama-3-8b-instruct",
)

my_agent = Agent(
    ...,
    llm=my_llm,
)
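Applied to your watsonx.py, it could look roughly like the sketch below. It keeps the mistralai/mistral-large model id you're already using and assumes WATSONX_URL, WATSONX_APIKEY, and WATSONX_PROJECT_ID are set in your .env (as far as I can tell from the LiteLLM docs, the watsonx integration reads the project id from WATSONX_PROJECT_ID, so you'd rename your PROJECT_ID variable accordingly):

import os
from dotenv import load_dotenv
from crewai import Agent, LLM

load_dotenv()  # expects WATSONX_URL, WATSONX_APIKEY, WATSONX_PROJECT_ID

# Provider prefix "watsonx/" plus the model id previously passed to WatsonxLLM
mistral_large_llm = LLM(
    model="watsonx/mistralai/mistral-large",
    api_key=os.getenv("WATSONX_APIKEY"),
    base_url=os.getenv("WATSONX_URL"),
    temperature=0.7,
    max_tokens=1000,
)

file_retriever_agent = Agent(
    role="File data Retriever Agent",
    goal="Retrieve the list of Java files in a given directory and pass their contents to the other agents",
    backstory="You specialize in scanning directories to find, read, and return files.",
    llm=mistral_large_llm,
    allow_delegation=True,
    verbose=True,
    memory=False,
)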

Thank you for your guidance @rokbenko 🙂

@Paarttipaabhalaji You're welcome! 🙂