Issues with Bedrock LLM connectivity using an inference profile ARN

Hey all, I’m using CrewAI with Bedrock and I’m running into problems with some models. Everything works fine if I set MODEL = bedrock/anthropic.claude-3-sonnet-20240229-v1:0, but when I try to use MODEL = bedrock/anthropic.claude-3-7-sonnet-20250219-v1:0 I get an error. The error indicates that on-demand throughput isn’t supported for this model, and the AWS docs say that in this case you need to create an inference profile and supply the ARN of the inference profile in place of the model ID. I’ve done that, but now I’m getting the following error (traceback truncated):

raise exception_type(
...
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                        ^^^^^^^^^^^^^^^^^
...
raise e

.BadRequestError:
An error occurred while running the crew: Command '['uv', 'run', 'run_crew']' returned non-zero exit status 1.
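For context, this is roughly what the relevant part of my .env looks like; the account ID, region, and credentials are placeholders, and the ARN is the inference profile ARN copied from the console:

```
# .env (illustrative -- account ID, region, and keys are placeholders)
MODEL=bedrock/arn:aws:bedrock:us-east-1:123456789012:inference-profile/us.anthropic.claude-3-7-sonnet-20250219-v1:0
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION_NAME=us-east-1
```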

Looking through the CrewAI docs specific to Bedrock, I also see that these models aren’t listed. Does CrewAI simply not support model types that require an inference profile ARN, or am I screwing something up? Thanks in advance!

Cheers!

Disregard, I figured this out myself. It turns out you can use the inference profile ID listed in the console instead of the ARN, and it works just fine.

If you don’t mind, could you share template code for the correct connection in case someone else runs into a similar issue?

The inference profile screen in the AWS Bedrock console has two columns, one called “Inference profile ID” and another called “Inference profile ARN”. When you find the model you want to use, simply set the value of MODEL to the inference profile ID listed in the console. See attached image.
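Roughly what that ends up looking like in a CrewAI project’s .env is sketched below; the ID shown is the US cross-region inference profile ID for Claude 3.7 Sonnet, so double-check the exact value in your own console:

```
# .env -- minimal sketch; paste the "Inference profile ID" value from the console
# in place of the plain model ID, keeping the "bedrock/" prefix
MODEL=bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION_NAME=us-east-1
```

The same string should also work if you configure the model in code with CrewAI’s LLM class, e.g. LLM(model="bedrock/us.anthropic.claude-3-7-sonnet-20250219-v1:0").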

