I am able to use the langchain-ibm package to run a watsonx model hosted on OpenShift, but there seems to be an issue with how the model integrates with LiteLLM in crewai.
I set the following environment variables:
WATSONX_URL
WATSONX_VERSION
WATSONX_USERNAME
WATSONX_PROJECT_ID
WATSONX_DEPLOYMENT_SPACE_ID
SSL_CERT_FILE
WATSONX_APIKEY
WATSONX_TOKEN
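For reference, they are set roughly like this before the crew starts (all values below are placeholders, not my real configuration):

import os

# Placeholder values only; the real host, IDs, and credentials are redacted.
os.environ["WATSONX_URL"] = "https://<watsonx-host>"
os.environ["WATSONX_VERSION"] = "2024-03-13"
os.environ["WATSONX_USERNAME"] = "<username>"
os.environ["WATSONX_PROJECT_ID"] = "<project-id>"
os.environ["WATSONX_DEPLOYMENT_SPACE_ID"] = "<space-id>"
os.environ["SSL_CERT_FILE"] = "/path/to/ca-bundle.pem"
os.environ["WATSONX_APIKEY"] = "<api-key>"
os.environ["WATSONX_TOKEN"] = "<token>"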
In the code, I initialize the LLM as shown below.
from crewai import Agent, LLM

llama2_70b = LLM(
    model="watsonx/meta-llama/llama-2-70b-chat"
)
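The crew I run is roughly the following minimal setup (the agent role, goal, and task wording here are placeholders, not my exact code):

from crewai import Agent, Task, Crew

# Minimal test agent wired to the watsonx LLM defined above.
researcher = Agent(
    role="Researcher",
    goal="Answer a simple question",
    backstory="A minimal agent used to test the watsonx connection.",
    llm=llama2_70b,
)

# One trivial task so the crew makes a single LLM call.
task = Task(
    description="Say hello.",
    expected_output="A short greeting.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()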
When I run this crew, I get the following error:
llm.py-llm:161 - ERROR: LiteLLM call failed: litellm.APIConnectionError: WatsonxException - HTTPSConnectionPool(host='xxx', port=443): Max retries exceeded with url: /ml/v1/text/generation?version=2024-03-13 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')))
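One way I can check whether the CA bundle in SSL_CERT_FILE actually validates the endpoint, independently of crewai and LiteLLM, is a direct request like this sketch (the URL comes from the same environment variable; nothing here is specific to my deployment):

import os
import requests

# Hypothetical check: verify the TLS chain against the same CA bundle
# that SSL_CERT_FILE points at, bypassing crewai/LiteLLM entirely.
url = os.environ["WATSONX_URL"]
resp = requests.get(url, verify=os.environ["SSL_CERT_FILE"])
print(resp.status_code)

If a direct request like this validates fine, my guess is that the HTTP client LiteLLM uses under the hood does not pick up SSL_CERT_FILE the same way requests does, but I am not sure whether that is the actual cause here.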