Hi @MCN, your local environment tolerates the mixed installation, but the containerized Cloud Run environment enforces stricter import resolution.
Standardizing on PyPI packages solves this consistently.
1. Recommended approach for Cloud Run deployments?
Use PyPI packages only: avoid mixing local source installations (`pip install -e .`) with PyPI packages. `pip install 'crewai[tools]'` is the official recommendation for production deployments.
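One way to double-check this in your build is to look for PEP 610 `direct_url.json` metadata, which pip writes for local and editable installs but not for packages pulled from the PyPI index. A minimal sketch, standard library only:

```python
import json
from importlib.metadata import distribution

# PEP 610: pip records direct_url.json for local/VCS/editable installs,
# but not for packages installed from the PyPI index.
for pkg in ("crewai", "crewai-tools"):
    raw = distribution(pkg).read_text("direct_url.json")
    info = json.loads(raw) if raw else {}
    editable = info.get("dir_info", {}).get("editable", False)
    print(f"{pkg}: {'local/editable' if raw else 'PyPI'} install (editable={editable})")
```

Any package that reports a local or editable install is a candidate for the import resolution conflicts described below.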
2. Should crewai-tools be compatible with local source installations?
No, this is not officially supported. The crewai-tools package expects `BaseTool` to be available from the installed crewai package's public API. Local editable installs can cause import resolution conflicts in containerized environments.
3. Are there plans to align versions?
Based on the release notes (v0.175.0), the team is aware of the import issues and has been fixing them. The version numbering mismatch (crewai 0.175.0 vs crewai-tools 0.76.0) simply reflects separate release cycles rather than an incompatibility; installing via `pip install 'crewai[tools]'` lets pip resolve a compatible pair.
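To see which pair of versions pip actually resolved inside your image, a quick standard-library check is:

```python
from importlib.metadata import version

# Print the exact versions pip installed; record the pair once a build works.
print("crewai:", version("crewai"))
print("crewai-tools:", version("crewai-tools"))
```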
4. Workaround for BaseTool export discrepancy?
The fix: ensure `BaseTool` is imported from `crewai.tools`:

```python
from crewai.tools import BaseTool
```
If `crewai_tools` itself is failing to import it, make sure both packages were installed together from PyPI at compatible versions.
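For reference, a custom tool written against the public import path looks roughly like this; the class, name, and description are illustrative, not taken from the crewai docs:

```python
from crewai.tools import BaseTool


class EchoTool(BaseTool):
    """Illustrative tool that only exists to prove the import path resolves."""

    name: str = "echo"
    description: str = "Returns its input unchanged."

    def _run(self, text: str) -> str:
        return text
```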
Also, make sure you test the Docker image locally before deploying to Cloud Run:

```bash
# Build the image
docker build -t crewai-test .

# Run it
docker run --rm crewai-test

# Open a shell inside the container if you need to inspect it
docker run -it --rm crewai-test /bin/bash
```
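A tiny import smoke test run inside the container makes the Cloud Run failure reproducible locally; the file name and invocation below are assumptions, not part of any official template:

```python
# smoke_test.py -- run inside the built image, e.g.:
#   docker run --rm crewai-test python smoke_test.py
from crewai.tools import BaseTool
import crewai_tools

print("BaseTool resolved from:", BaseTool.__module__)
print("crewai_tools loaded from:", crewai_tools.__file__)
```

If this import fails in the container but works on your machine, the image is still picking up a local checkout instead of the PyPI packages.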