Pip's dependency resolver does not currently take into account all the packages that are installed

Installing collected packages: langchain-core
Attempting uninstall: langchain-core
Found existing installation: langchain-core 0.3.0
Uninstalling langchain-core-0.3.0:
Successfully uninstalled langchain-core-0.3.0
Successfully installed langchain-core-0.2.40
Note: you may need to restart the kernel to use updated packages.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
langchain-google-genai 2.0.0 requires langchain-core<0.4,>=0.3.0, but you have langchain-core 0.2.40 which is incompatible.
langchain-google-vertexai 2.0.0 requires langchain-core<0.4,>=0.3.0, but you have langchain-core 0.2.40 which is incompatible.
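To see why pip complains: the newly installed langchain-core 0.2.40 falls outside the `<0.4,>=0.3.0` range that both Google packages require. A minimal stdlib sketch of that range check (my own illustration of the constraint, not pip's actual resolver):

```python
def parse(version: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def satisfies(installed: str, lower: str, upper: str) -> bool:
    """True if lower <= installed < upper (the '<0.4,>=0.3.0' pattern)."""
    return parse(lower) <= parse(installed) < parse(upper)

# langchain-google-genai 2.0.0 requires langchain-core<0.4,>=0.3.0
print(satisfies("0.2.40", "0.3.0", "0.4"))  # 0.2.40 is too old -> False
print(satisfies("0.3.0", "0.3.0", "0.4"))   # the version pip just removed -> True
```

Installing crewai downgraded langchain-core below that lower bound, which is exactly the conflict pip reports.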

How do I fix this issue? (It occurs when I run pip install crewai after pip install langchain-google-genai.) I want to use crewai with langchain-google-genai to work with Gemini as the LLM. Is there another way to use Gemini as an LLM besides this method?

Thanks.

Hey @theg8

I had the same problem, since I use Gemini 1.5 Flash as my LLM manager. You don't need to use LangChain anymore; I also stopped using langchain-google-genai, as it doesn't work either. I had to uninstall Poetry, clear the cache, reinstall Poetry, and reinitialize the project with Poetry. I don't know if all of that was needed, but it eventually resolved the same issue you're having. I also updated my pyproject.toml to include the following before running poetry install again:
-------------------pyproject.toml---------------------
[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = "^0.60.00" # Added this line
filelock = "*" # Added this line

Here’s my current set-up for the crewai.py file that works in 0.60.00 (with Gemini)

import os
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from langchain_community.llms import OpenAI, Ollama  # unused below; leftover from the earlier LangChain setup
from dotenv import load_dotenv
import litellm # Added for using google API for Gemini
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

load_dotenv()

# Set the Google API key for LiteLLM to use Gemini LLM models

litellm.api_key = os.getenv('GOOGLE_API_KEY')
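One small defensive addition I'd suggest (not part of the original crewai.py): fail fast with a clear message if the key is missing, since a bad or absent key otherwise surfaces later as a less obvious API error. `require_env` is a hypothetical helper of my own:

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or raise a clear error.

    Hypothetical guard, not part of the original snippet.
    """
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"Environment variable {name} is not set; add it to your .env file."
        )
    return value

# Usage in the snippet above would be:
# litellm.api_key = require_env('GOOGLE_API_KEY')
```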

I set up a basic crew that uses local models and a Gemini LLM manager if needed. It also accepts a TOPIC user input instead of hard-coding it, which makes it easier to use. Here's the repo.

Hope that helps!

I would reinstall your crew at the latest version in a fresh env, as crewai no longer has langchain as a dependency.


Thank you so much! I will try that and update you. :smile:

Oh, I see. Thank you so much! Can I ask how I can use Gemini as the LLM in the CrewAI framework?

If it's not too much trouble, I'd like to ask a bit more, such as how to set the model name and temperature for Gemini 1.5 Flash.

Thank you so much. I’m new to this. :sob:

See here for all models: https://models.litellm.ai

For Google Flash, are you going through Vertex or Google Dev?

Assign it to your agent. For Vertex AI Flash:

llm='gemini-1.5-flash'

OR for Google Dev:

llm='gemini/gemini-1.5-flash'
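The difference between the two strings is the provider prefix: a `gemini/` prefix selects the Google Dev (AI Studio) route, while the bare name goes through the default mapping (Vertex AI here). A stdlib sketch of that prefix convention (my own illustration of the naming scheme, not LiteLLM's actual routing code):

```python
def split_model_string(model: str) -> tuple:
    """Split a LiteLLM-style model string into (provider, model_name).

    A bare name has no explicit provider prefix, so provider is None.
    """
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return None, model

print(split_model_string("gemini/gemini-1.5-flash"))  # ('gemini', 'gemini-1.5-flash')
print(split_model_string("gemini-1.5-flash"))         # (None, 'gemini-1.5-flash')
```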

Thank you so much. :grin:

I got this error. How do I fix it?

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use litellm.set_verbose=True.

Here is the code I used:
import os
from dotenv import load_dotenv
load_dotenv()
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = os.getenv("GOOGLE_APPLICATION_CREDENTIALS")
os.environ["temperature"] = "0.0"
os.environ["top_p"] = "0.1"

llm='gemini-1.5-flash' at the agent
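One thing worth noting about the snippet above: environment variables are always strings, and setting `temperature` or `top_p` in the environment does nothing unless your own code reads them back and passes them into the LLM call as numeric parameters. A sketch of that conversion step, under the assumption that you keep staging the values in the environment:

```python
import os

# Environment variables are strings; convert before passing to an LLM call.
os.environ["temperature"] = "0.0"
os.environ["top_p"] = "0.1"

temperature = float(os.environ.get("temperature", "0.7"))
top_p = float(os.environ.get("top_p", "1.0"))
print(temperature, top_p)  # 0.0 0.1
```

These floats would then be passed as keyword arguments to whatever makes the completion call, rather than left in the environment.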

What issue? That's just a link to the LiteLLM issues page.

Sorry, my mistake. That message always appears, and after it the run fails with an error like this:

BadRequestError: litellm.BadRequestError: VertexAIException BadRequestError - { "error": { "code": 400, "message": "Request contains an invalid argument.", "status": "INVALID_ARGUMENT" } }