Installing collected packages: langchain-core
Attempting uninstall: langchain-core
Found existing installation: langchain-core 0.3.0
Uninstalling langchain-core-0.3.0:
Successfully uninstalled langchain-core-0.3.0
Successfully installed langchain-core-0.2.40
Note: you may need to restart the kernel to use updated packages.
ERROR: pip’s dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
langchain-google-genai 2.0.0 requires langchain-core<0.4,>=0.3.0, but you have langchain-core 0.2.40 which is incompatible.
langchain-google-vertexai 2.0.0 requires langchain-core<0.4,>=0.3.0, but you have langchain-core 0.2.40 which is incompatible.
How do I fix this issue? It occurs when I run pip install crewai after pip install langchain-google-genai. I want to use CrewAI with langchain-google-genai so I can use Gemini as the LLM. Is there another way to use Gemini as an LLM besides this method?
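As a side note, you can see exactly which installed packages pin langchain-core (and would therefore break on a downgrade) by reading their declared requirements with the standard library. This is a stdlib-only sketch; the helper name is mine, not part of any package:

```python
from importlib import metadata

def requires_of(pkg: str) -> list:
    """Return the declared dependency strings of an installed package,
    or an empty list if it is not installed."""
    try:
        return metadata.requires(pkg) or []
    except metadata.PackageNotFoundError:
        return []

# For example, requires_of("langchain-google-genai") should include an
# entry like "langchain-core<0.4,>=0.3.0", matching the error above.
```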
I had the same problem, since I use Gemini 1.5 Flash as my LLM manager. You don't need langchain anymore, and I stopped using langchain-google-genai as well because it no longer worked for me. I uninstalled poetry, cleared the cache, reinstalled poetry, and reinitialized the project with poetry. I don't know whether every one of those steps was necessary, but it eventually resolved the same issue you're having. I also updated my pyproject.toml to include the following before running poetry install again:
-------------------pyproject.toml---------------------
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = "^0.60.00" # Added this line
filelock = "*" # Added this line
Here's my current set-up for the crewai.py file, which works in 0.60.00 with Gemini:
import os
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from langchain_community.llms import OpenAI, Ollama
from dotenv import load_dotenv
import litellm # Added for using google API for Gemini
from crewai_tools import SerperDevTool
search_tool = SerperDevTool()
load_dotenv()
# Set the Google API key for LiteLLM to use Gemini LLM models
litellm.api_key = os.getenv('GOOGLE_API_KEY')
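For reference, CrewAI 0.60+ routes model calls through LiteLLM, which selects Gemini via a provider-prefixed model string. Here is a minimal sketch of that naming convention; the "gemini/" prefix is LiteLLM's, but the helper function is hypothetical:

```python
def litellm_model_id(provider: str, model: str) -> str:
    """Build a LiteLLM model identifier such as 'gemini/gemini-1.5-flash'."""
    return f"{provider}/{model}"

# A string like this can then be passed as the llm when constructing an Agent.
GEMINI_FLASH = litellm_model_id("gemini", "gemini-1.5-flash")
```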
I set up a basic crew that uses local models, with the Gemini LLM manager available if needed. It also accepts the TOPIC as user input instead of hard-coding it, which makes it easier to use. Here's the repo.
Here is the code I used:
import os
from dotenv import load_dotenv
load_dotenv()
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.getenv("GOOGLE_APPLICATION_CREDENTIALS")
os.environ["temperature"] = "0.0"
os.environ["top_p"] = "0.1"
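One way to actually consume those environment settings is to collect them into a kwargs dict before calling the model. This is a stdlib-only sketch that assumes the same env variable names and defaults as the snippet above; the helper itself is hypothetical:

```python
import os

def sampling_params() -> dict:
    """Read temperature/top_p from the environment (as set above),
    falling back to the same defaults if they are missing."""
    return {
        "temperature": float(os.environ.get("temperature", "0.0")),
        "top_p": float(os.environ.get("top_p", "0.1")),
    }
```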