I’ve been following the Quickstart guide and added my keys to the `.env` file:

```
MODEL=bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION_NAME=us-west-2
SERPER_API_KEY=...
```
But when I run `crewai run`, I get the following error, repeated endlessly until I interrupt it from the keyboard:
```
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
ERROR:root:LiteLLM call failed: litellm.APIConnectionError: Missing boto3 to call bedrock. Run 'pip install boto3'.
Traceback (most recent call last):
  File "C:\Users\USER\Documents\test\multiagent\exemple_crewai\.venv\Lib\site-packages\litellm\llms\bedrock\chat\converse_handler.py", line 206, in completion
    from botocore.auth import SigV4Auth
ModuleNotFoundError: No module named 'botocore'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\USER\Documents\test\multiagent\exemple_crewai\.venv\Lib\site-packages\litellm\main.py", line 2512, in completion
    response = bedrock_converse_chat_completion.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\USER\Documents\test\multiagent\exemple_crewai\.venv\Lib\site-packages\litellm\llms\bedrock\chat\converse_handler.py", line 210, in completion
    raise ImportError("Missing boto3 to call bedrock. Run 'pip install boto3'.")
ImportError: Missing boto3 to call bedrock. Run 'pip install boto3'.
```
Now, `boto3` *is* installed in the virtual environment that `crewai install` created (the same one I use to launch the command), so I don’t understand this error.
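To double-check, here's the kind of sanity check I can run with the `.venv` interpreter itself (a sketch; it just reports which Python is running and whether `boto3`/`botocore` resolve from it):

```python
# Sanity check: run this with the project's .venv interpreter to confirm
# which Python is executing and whether boto3/botocore are importable from it.
import importlib.util
import sys

print(sys.executable)  # the interpreter actually running this script

for mod in ("boto3", "botocore"):
    spec = importlib.util.find_spec(mod)
    print(mod, "->", spec.origin if spec else "NOT FOUND")
```

If this prints `NOT FOUND` for either module, the package is in a different environment than the one `crewai run` uses.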
Since the error told me to use `litellm.set_verbose=True`, I did, and now I get the following, also repeated endlessly:
```
Request to litellm:
litellm.completion(model='bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0', messages=[{'role': 'system', 'content': 'You are AI LLMs Senior Data Researcher\n. You\'re a seasoned researcher with a knack for uncovering the latest developments in AI LLMs. Known for your ability to find the most relevant information and present it in a clear and concise manner.\n\nYour personal goal is: Uncover cutting-edge developments in AI LLMs\n\nYou ONLY have access to the following tools, and should NEVER make up tools that are not listed here:\n\nTool Name: Search the internet with Serper\nTool Arguments: {\'search_query\': {\'description\': \'Mandatory search query you want to use to search the internet\', \'type\': \'str\'}}\nTool Description: A tool that can be used to search the internet with a search_query. Supports different search types: \'search\' (default), \'news\'\n\nIMPORTANT: Use the following format in your response:\n\n```\nThought: you should always think about what to do\nAction: the action to take, only one name of [Search the internet with Serper], just the name, exactly as it\'s written.\nAction Input: the input to the action, just a simple JSON object, enclosed in curly braces, using " to wrap keys and values.\nObservation: the result of the action\n```\n\nOnce all necessary information is gathered, return the following format:\n\n```\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n```'}, {'role': 'user', 'content': '\nCurrent Task: Conduct a thorough research about AI LLMs Make sure you find any interesting and relevant information given the current year is 2024.\n\n\nThis is the expect criteria for your final answer: A list with 10 bullet points of the most relevant information about AI LLMs\n\nyou MUST return the actual complete content as the final answer, not a summary.\n\nBegin! This is VERY important to you, use the tools available and give your best Final Answer, your job depends on it!\n\nThought:'}], stop=['\nObservation:'], stream=False)

09:03:36 - LiteLLM:WARNING: utils.py:317 - `litellm.set_verbose` is deprecated. Please set `os.environ['LITELLM_LOG'] = 'DEBUG'` for debug logs.
WARNING:LiteLLM:`litellm.set_verbose` is deprecated. Please set `os.environ['LITELLM_LOG'] = 'DEBUG'` for debug logs.
Initialized litellm callbacks, Async Success Callbacks: [<crewai.utilities.token_counter_callback.TokenCalcHandler object at 0x000001C6AC62B650>]
SYNC kwargs[caching]: False; litellm.cache: None; kwargs.get('cache')['no-cache']: False
Final returned optional params: {'stream': False, 'stopSequences': ['\nObservation:']}

LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
ERROR:root:LiteLLM call failed: litellm.APIConnectionError: Missing boto3 to call bedrock. Run 'pip install boto3'.
Traceback (most recent call last):
  File "C:\Users\USER\Documents\test\multiagent\exemple_crewai\.venv\Lib\site-packages\litellm\llms\bedrock\chat\converse_handler.py", line 206, in completion
    from botocore.auth import SigV4Auth
ModuleNotFoundError: No module named 'botocore'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\USER\Documents\test\multiagent\exemple_crewai\.venv\Lib\site-packages\litellm\main.py", line 2512, in completion
    response = bedrock_converse_chat_completion.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\USER\Documents\test\multiagent\exemple_crewai\.venv\Lib\site-packages\litellm\llms\bedrock\chat\converse_handler.py", line 210, in completion
    raise ImportError("Missing boto3 to call bedrock. Run 'pip install boto3'.")
ImportError: Missing boto3 to call bedrock. Run 'pip install boto3'.
```
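Incidentally, the warning in the middle of that output says `litellm.set_verbose` is deprecated, so I assume the environment-variable route is what's intended now (my reading of the warning, not verified beyond that):

```python
# Per LiteLLM's deprecation warning, debug logging is enabled via an
# environment variable instead of litellm.set_verbose.
import os

os.environ["LITELLM_LOG"] = "DEBUG"  # set this before litellm is imported/called
```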
It seems to me this YAML approach makes things more complicated than they need to be. How can I plug an AWS Bedrock model into this?
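For reference, this is roughly what my agent configuration looks like (paraphrased from the generated project; the `llm:` key is my guess at how to point one agent at Bedrock with a LiteLLM-style model id, so treat it as unconfirmed):

```yaml
# config/agents.yaml -- current attempt; the llm key is a guess
researcher:
  role: AI LLMs Senior Data Researcher
  goal: Uncover cutting-edge developments in AI LLMs
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in AI LLMs.
  llm: bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0
```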