First off, welcome to the community! And yes, your complaint is totally valid. There’s a bug in the implementation of SeleniumScrapingTool, even in the latest stable release (crewai-tools==0.40.0).
How to reproduce the problem?
Reproducing your error is a breeze since these tools can be run standalone—which is actually the best way to test your own tools:
from crewai_tools import SeleniumScrapingTool
url = "YOUR-URL"
SeleniumScrapingTool(website_url=url).run()
Generated output:
TypeError: 'WebDriver' object is not callable
Where’s the bug?
In the file crewai_tools/tools/selenium_scraping_tool/selenium_scraping_tool.py, within the __init__ method:
from selenium import webdriver
# ... other imports
self.driver = webdriver.Chrome() # <-- ROOT OF THE PROBLEM!!!
self._options = Options()
self._by = By
# ... rest of __init__
Here, self.driver is assigned the result of calling webdriver.Chrome(). So, self.driver holds an actual Chrome WebDriver instance (not the class itself), created with the default options.
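You can confirm this with a quick standalone check (a minimal sketch; note that instantiating the tool will open a Chrome window, because the buggy __init__ eagerly launches a driver):

from crewai_tools import SeleniumScrapingTool

tool = SeleniumScrapingTool()
print(type(tool.driver))      # a WebDriver *instance*, e.g. selenium.webdriver.chrome.webdriver.WebDriver
print(callable(tool.driver))  # False, WebDriver instances are not callable
tool.driver.quit()            # close the window that the buggy __init__ opened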
Then, inside _create_driver:
def _create_driver(self, url, cookie, wait_time):
    # ... validation logic ...
    options = self._options
    options.add_argument("--headless")
    driver = self.driver(options=options)  # <-- THIS IS WHERE THE ERROR HAPPENS!!!
    # ... rest of _create_driver ...
    return driver
At this point, the code tries to call self.driver as if it were a class or a constructor. But remember, in __init__, self.driver is already an instance of webdriver.Chrome, so it’s not callable, hence the TypeError.
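If you want to see the failure mechanism in isolation, here is a minimal sketch using Selenium directly (it assumes Chrome and a matching chromedriver are installed, and it will briefly open a browser window):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

driver_instance = webdriver.Chrome()  # an *instance*, mirroring what the buggy __init__ stores in self.driver
options = Options()
options.add_argument("--headless")
try:
    driver_instance(options=options)  # same call shape as in _create_driver
except TypeError as exc:
    print(exc)                        # 'WebDriver' object is not callable
finally:
    driver_instance.quit()            # clean up the window that webdriver.Chrome() opened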
How to fix it?
Open the __init__ method inside crewai_tools/tools/selenium_scraping_tool/selenium_scraping_tool.py and replace:
self.driver = webdriver.Chrome()
with:
self.driver = webdriver.Chrome
This way, self.driver now holds the Chrome WebDriver class, and calling self.driver(options=options) inside _create_driver will correctly instantiate a new driver with the specified options.
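After editing the file, you can re-run the same standalone call from the reproduction step to confirm the TypeError is gone:

from crewai_tools import SeleniumScrapingTool

url = "YOUR-URL"
print(SeleniumScrapingTool(website_url=url).run())  # should now return the scraped page text

If you would rather not edit the installed package while waiting for a patched release, one workaround (an untested sketch, assuming the driver attribute stays assignable on the instance, as it is inside __init__) is to overwrite it right after constructing the tool:

from selenium import webdriver
from crewai_tools import SeleniumScrapingTool

tool = SeleniumScrapingTool()
tool.driver.quit()              # close the driver that the buggy __init__ already opened
tool.driver = webdriver.Chrome  # store the class, exactly as the fixed __init__ would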
Testing after the fix
import os

from crewai import Agent, Task, Crew, LLM, Process
from crewai_tools import SeleniumScrapingTool

os.environ["GEMINI_API_KEY"] = "YOUR-KEY"

scrape_tool = SeleniumScrapingTool()

llm = LLM(
    model="gemini/gemini-2.0-flash",
    temperature=0.5,
)

website_summarizer_agent = Agent(
    role="Website Content Summarizer",
    goal=(
        "Scrape the content of the given website using tools. "
        "Then, create a short, concise summary (one paragraph) "
        "of its main purpose or content based *only* on the "
        "scraped text."
    ),
    backstory=(
        "You are an AI assistant specialized in visiting websites "
        "using provided tools. You extract the text content and are "
        "excellent at summarizing the key information found directly "
        "on the page into a single, easy-to-understand paragraph."
    ),
    llm=llm,
    tools=[scrape_tool],
    verbose=True,
    allow_delegation=False,
)

summarize_website_task = Task(
    description=(
        "Scrape the content of the given website using available tools, then "
        "write a single paragraph summarizing the main topic, based STRICTLY "
        "on the text you found.\n"
        "Website: {target_website}"
    ),
    expected_output=(
        "A single paragraph of 90 to 100 words containing a concise summary."
    ),
    agent=website_summarizer_agent,
)

website_summary_crew = Crew(
    agents=[website_summarizer_agent],
    tasks=[summarize_website_task],
    process=Process.sequential,
    verbose=True,
)

crew_result = website_summary_crew.kickoff(
    inputs={
        "target_website": "https://www.gov.br/mast/pt-br/assuntos/noticias/2024/julho/santos-dumont-o-brasileiro-que-abriu-as-asas-perto-do-sol"
    }
)

print(f"\n🤖 Final Summary:\n\n{crew_result.raw}\n")