How to fix the import error of base_tool using CrewAI?

I’m trying to run my application through a gateway in an AWS Lambda. However, whenever I call the route, I get the following error:

```text
[ERROR] Runtime.ImportModuleError: Unable to import module 'app': No module named 'crewai.tools.base_tool'
Traceback (most recent call last):
```

I’m not even using BaseTool in my code, I’m only using WebsiteSearchTool and TXTSearchTool:

```python
from crewai_tools import WebsiteSearchTool, TXTSearchTool

class Tools:
    @staticmethod
    def searchTermsOfUse(termsOfUseUrl):
        return WebsiteSearchTool(website=termsOfUseUrl)

    @staticmethod
    def searchFaq(faqPath):
        return TXTSearchTool(txt=faqPath)
```

Still, I keep receiving this error.

I’m using the following versions:

```text
crewai==0.98.0
crewai-tools==0.32.1
```

How can I solve this?

I expect to be able to run the route that uses the tools I declared without any errors.

Has anyone been able to solve this? Both crewai and crewai-tools are updated to the latest release.

I have the same issue. It seems to be broken in CrewAI itself (a dependency cascade), and no solution has been offered so far: Tools - CrewAI. Hoping for a reliable and quick answer from the CrewAI team.

I believe this error occurs because AWS Lambda doesn’t include your dependencies by default. The crewai-tools package and its dependencies (including the internal crewai.tools.base_tool module) aren’t available in the Lambda runtime environment.
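To confirm this from inside the Lambda environment, you can log which modules actually resolve at runtime before any import fails. A minimal diagnostic sketch (the module names checked are just the ones from the error message):

```python
import importlib.util

def module_available(name: str) -> bool:
    """Return True if `name` (dotted paths included) resolves on sys.path."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the dotted path is itself missing
        return False

# Log this early in your handler module to see what the runtime can import:
for mod in ("crewai", "crewai_tools", "crewai.tools.base_tool"):
    print(f"{mod}: {'found' if module_available(mod) else 'MISSING'}")
```

If `crewai` itself shows as MISSING, the dependencies simply aren't in the deployment package; if `crewai` is found but `crewai.tools.base_tool` is not, the installed crewai version doesn't match what crewai-tools expects.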

Solution: Package Dependencies with Your Lambda Function

You need to include all dependencies in your Lambda deployment package. Here are the recommended approaches:

Option 1: Deployment Package

Include dependencies directly in your deployment package:

```bash
# Create a deployment directory
mkdir lambda-package
cd lambda-package

# Copy your application code
cp /path/to/your/app.py .

# Install dependencies into the same directory
pip install crewai==0.98.0 crewai-tools==0.32.1 -t .

# Create the deployment package
zip -r lambda-deployment.zip .
```
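One caveat with this approach: if you run the `pip install` on macOS or Windows, pip pulls wheels for your local platform, and native dependencies in the CrewAI stack won't load on Lambda's Linux runtime. You can force Linux-compatible wheels instead (a sketch using standard pip flags; the versions are the ones from the question):

```shell
# Force wheels built for Lambda's Linux x86_64 runtime, regardless of the
# OS you are packaging on. --only-binary is required when --platform is set;
# this works only if prebuilt wheels exist for every dependency.
pip install \
  --platform manylinux2014_x86_64 \
  --only-binary=:all: \
  --python-version 3.11 \
  -t . \
  crewai==0.98.0 crewai-tools==0.32.1
```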

Option 2: Use Docker Container Image

For large dependencies like CrewAI, a container image often works better:

Dockerfile:

```dockerfile
FROM public.ecr.aws/lambda/python:3.11

# Copy requirements
COPY requirements.txt ${LAMBDA_TASK_ROOT}

# Install dependencies
RUN pip install -r requirements.txt

# Copy application code
COPY app.py ${LAMBDA_TASK_ROOT}

# Set the CMD to your handler
CMD ["app.lambda_handler"]
```

requirements.txt:

```text
crewai==0.98.0
crewai-tools==0.32.1
```

Build and push to ECR, then configure Lambda to use the container image.
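The build-and-push step looks roughly like this (account ID `123456789012`, region `us-east-1`, and repository name `crewai-app` are placeholders for your own values):

```shell
# Create the ECR repository (one-time)
aws ecr create-repository --repository-name crewai-app

# Authenticate Docker against your ECR registry
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push the image
docker build -t crewai-app .
docker tag crewai-app:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/crewai-app:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/crewai-app:latest
```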

Option 3: AWS SAM or Serverless Framework

Use infrastructure-as-code tools that handle dependency packaging automatically.

SAM template.yaml:

```yaml
Transform: AWS::Serverless-2016-10-31

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: python3.11
      Handler: app.lambda_handler
      CodeUri: .
      Timeout: 300
      MemorySize: 512
```

Then run `sam build && sam deploy`. SAM reads `requirements.txt` next to `CodeUri` and packages the dependencies for you.
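For reference, a skeleton `app.py` matching the `Handler: app.lambda_handler` setting above. This is illustrative only: it assumes an API Gateway proxy payload, and the echo logic stands in for the actual crew/tools wiring from the question:

```python
import json

def lambda_handler(event, context):
    # API Gateway proxy integration puts the request body in event["body"]
    body = json.loads(event.get("body") or "{}")
    question = body.get("question", "")
    # ... instantiate your Tools / run your crew here to build a real answer ...
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": question}),
    }
```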

Which option should you choose?

  • Small project → Option 1 (Deployment Package)
  • Large dependencies → Option 2 (Docker Container)
  • Complex infrastructure → Option 3 (SAM/Serverless)

The Docker container approach is most reliable for CrewAI due to its size and dependencies.