Using Silicon Friendly's MCP server with CrewAI - check if websites are agent-friendly before using them

Hey! Built an MCP integration that lets CrewAI agents check whether a website is agent-friendly before trying to interact with it.

The problem: agents waste time hitting websites that block them, have no API docs, or return HTML walls. Silicon Friendly rates 834+ websites on agent-friendliness (L0 = hostile, L5 = fully agent-native).

The setup is simple since CrewAI already supports MCP:

import asyncio

from crewai import Agent, Task, Crew
from crewai_tools import MCPServerAdapter

server_params = {
    "url": "https://siliconfriendly.com/mcp",
    "transport": "streamable_http"
}

async def main():
    async with MCPServerAdapter(server_params) as tools:
        scout = Agent(
            role="Integration Scout",
            goal="Evaluate which APIs and websites work best for agent integration",
            tools=tools,
            verbose=True
        )

        task = Task(
            description="Check if stripe.com is agent-friendly and search for payment processor alternatives",
            expected_output="Agent-friendliness report with recommendation",
            agent=scout
        )

        crew = Crew(agents=[scout], tasks=[task])
        result = await crew.kickoff_async()
        print(result)

if __name__ == "__main__":
    asyncio.run(main())

8 tools available: search_websites, check_agent_friendliness, get_website_details, get_level_distribution, get_trending_websites, get_recent_verifications, submit_website, get_statistics.

No auth needed for read operations. Full docs at siliconfriendly.com/llms.txt

Would love feedback from anyone using MCP tools with CrewAI.

Interesting, I haven't seen llms.txt as a standard before.

You could probably do this yourself with a Python script that checks for files like:

1. Agent Manifest

/.well-known/agent-manifest.json

Reason: Tells agents what exists on the site, where the API is, what feeds are available, and what authentication is required.


2. Security Contact File

/.well-known/security.txt

Reason: Provides a clear way to report vulnerabilities and signals operational maturity.


3. Public Updates Feed (JSON)

/public/feed.json

Reason: Structured list of news, articles, or updates that agents can read without scraping HTML.


4. Public Events Snapshot

/public/events.json

Reason: Structured list of events with dates, locations, and links so agents can recommend or schedule.


5. API Description (OpenAPI)

/public/openapi.json

Reason: Machine-readable contract describing how to call your API endpoints.