Passing context between agents

Hi crew gurus, I would really appreciate some advice / help.

I am really struggling to get information passed between agents and tasks. My current setup is as follows:

  • I have set up a crew with a few agents and tasks.
  • I have a scanning agent which looks for signals and then passes these signals on to another agent, which should identify specific market forces from the signals.
  • Each of these agents has its own task (scan for signals, identify forces).
  • I am passing info between the tasks with pydantic models.
  • The scanning agent builds a list of URLs and saves these in JSON format based on a predefined pydantic model.
  • The market forces agent should then take this list and work through each URL one by one to identify market forces (again based on another predefined pydantic model).
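For reference, the handoff model for task 1 looks roughly like this (a simplified sketch matching the JSON shown further down; the real `ResearchOutput` and `RawMarketForce` models for task 2 are omitted for brevity):

```python
# Simplified sketch of the task 1 handoff model: a named list of URL objects.
from pydantic import BaseModel


class SourceURL(BaseModel):
    url: str


class SourceIdentificationResultsURLonly(BaseModel):
    name: str
    sources: list[SourceURL]
```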

Problem: my second agent (and its task), the market forces agent, does not work through the list of URLs, despite the pydantic models and JSON being defined as the output of agent 1 and the input to agent 2 (via the task goal). The output of the first agent is correctly generated as JSON, but it does not seem to be passed on as context to the second agent.

I have been very specific in the task definition for agent 2 that it must use the context and JSON output from task 1. I have tried many variations of the task definition, and I have tried both GPT-4o and GPT-4o-mini.

I have also considered creating a separate crew for signal identification and another for market force scanning, but either way I need to be sure that I can get quality context and outputs generated and passed between tasks within a given crew. See the attached diagram:

Thanks for any support or advice you can provide.

Here is some of the code. I have included:

  1. the task config from the crew file,
  2. the JSON output generated by the first agent and task, and
  3. the task description for the second agent, which should process each of the URLs from the first task's JSON list.
<task extract>

@task
def futurist_source_identification(self) -> Task:
	return Task(
		config=self.tasks_config['futurist_source_identification'],
		output_file=f'outputs/futurist_source_identification_{datetime.datetime.now().strftime("%Y%m%d_%H%M%S")}.json',
		output_pydantic=SourceIdentificationResultsURLonly
	)

@task
def futurist_market_force_extraction(self) -> Task:
	return Task(
		config=self.tasks_config['futurist_market_force_extraction'],
		output_file=f'outputs/futurist_market_force_extraction_{datetime.datetime.now().strftime("%Y%m%d_%H%M%S")}.json',
		output_pydantic=ResearchOutput
	)

</task extract>

(Output from Agent 1 & Task 1)


{
    "name": "SourceIdentificationResultsURLonly",
    "sources": [
        {
            "url": "https://futuristspeaker.com/artificial-intelligence/eight-trends-in-the-evolving-universe-of-generative-ai-in-2024/"
        },
        {
            "url": "https://www.linkedin.com/posts/finextra_the-future-of-ai-in-financial-services-2025-activity-7284871941855821824-ZjrL"
        },
        {
            "url": "https://www.fmi.org/financial-executive-and-internal-audit-conference/fmi-blog/2025/03/31/is-ai-a-game-changer-for-food-industry-finance"
        },
        {
            "url": "https://futuristspeaker.com/category/artificial-intelligence/page/2/"
        },
        {
            "url": "https://www.linkedin.com/posts/thomas-frey-csp-a872781b5_by-the-end-of-2025-artificial-intelligence-activity-7280963165364842497-EeJt"
        },
        {
            "url": "https://futuristgerd.com/tag/thomas-frey/"
        },
        {
            "url": "https://aiforgood.itu.int/summit24/"
        },
        {
            "url": "https://futuristgerd.com/2015/04/%E2%96%B6-three-laws-of-exponential-capabilities-video-by-fellow-futurist-thomas-frey-youtube/"
        },
        {
            "url": "https://www.danielburrus.com/blog/2024-predictions-for-ai-in-financial-services/"
        },
        {
            "url": "https://www.peterdiamandis.com/blog/the-future-of-ai-in-financial-services"
        },
        {
            "url": "https://www.ianpearson.com/articles/2024-generative-ai-in-financial-services"
        },
        {
            "url": "https://www.matthewgriffin.com/insights/generative-ai-and-the-future-of-financial-services/"
        },
        {
            "url": "https://www.raykurzweil.com/articles/ai-in-financial-services-2024/"
        },
        {
            "url": "https://www.richardvanhooijdonk.com/blog/generative-ai-in-financial-services-2024/"
        },
        {
            "url": "https://www.amywebb.com/research/generative-ai-in-financial-services-2024/"
        },
        {
            "url": "https://www.forbes.com/sites/bernardmarr/2024/01/15/how-generative-ai-is-transforming-the-financial-services-industry/"
        },
        {
            "url": "https://hbr.org/2024/02/how-generative-ai-is-changing-financial-services"
        },
        {
            "url": "https://www.mckinsey.com/industries/financial-services/our-insights/the-potential-of-generative-ai-in-financial-services"
        },
        {
            "url": "https://www.bcg.com/publications/2024/how-generative-ai-is-revolutionizing-financial-services"
        },
        {
            "url": "https://www.accenture.com/us-en/insights/financial-services/generative-ai-financial-services"
        },
        {
            "url": "https://www2.deloitte.com/us/en/insights/industry/financial-services/generative-ai-in-financial-services.html"
        },
        {
            "url": "https://www.pwc.com/gx/en/services/consulting/generative-ai-in-financial-services.html"
        },
        {
            "url": "https://www.ey.com/en_gl/financial-services/how-generative-ai-is-transforming-financial-services"
        },
        {
            "url": "https://www.kpmg.com/xx/en/home/insights/2024/generative-ai-in-financial-services.html"
        },
        {
            "url": "https://www.bain.com/insights/generative-ai-in-financial-services-2024/"
        },
        {
            "url": "https://www.oxfordeconomics.com/research/generative-ai-in-financial-services-2024/"
        },
        {
            "url": "https://www.gartner.com/en/newsroom/press-releases/2024-04-01-gartner-says-generative-ai-will-transform-financial-services"
        }
    ]
}

<task.yaml> Task definition for the 2nd agent:

futurist_market_force_extraction:
  description: >
    Use only the URLs from the "Source results URL list" identified in the previous task
    (the SourceIdentificationResultsURLonly schema). Your MANDATORY objective is to process
    EACH AND EVERY URL provided in the "Source results URL list" from the
    'futurist_source_identification' task to extract market forces related to {topic}.
    FAILURE TO PROCESS ALL URLS WILL RESULT IN AN INCOMPLETE TASK.

    IMPORTANT INSTRUCTIONS FOR ACCESSING URLs:
    1. Access the "Source results URL list" JSON output provided in the context from the previous task.
    2. Locate the 'sources' field, which is a list of URL source objects.
    3. **ITERATE THROUGH THE ENTIRE 'sources' LIST, one by one. For EACH source object in the list:**
       a. Extract the value of the 'url' field.
       b. **Use the scrape tool to retrieve the full content from this specific URL.** If scraping fails for a URL after 3 attempts, log the URL as failed and CONTINUE to the next URL in the list. DO NOT STOP the entire process for one failed URL.
       c. Analyze the retrieved content to identify ALL mentioned market forces relevant to {topic}.
       d. For each market force found in this specific source, capture the relevant details according to the RawMarketForce format, ensuring correct attribution to *this* source URL and name.
    4. Do not follow links or scrape URLs referred to within the content of the provided pages; only scrape the URLs given in the 'sources' list.
    5. **DO NOT STOP until you have attempted to process every single URL** in the 'sources' list from the input context.
    6. Compile ALL extracted market forces from ALL processed sources into the final output.
    7. Before finishing, explicitly confirm: "Have I attempted to scrape and analyze every URL provided in the 'sources' list?"

    Expected research extraction scope - minimum requirements:
    - Extract content from ALL source URLs identified in the "Source results URL list" from the previous task's context.
    - Identify as many distinct market forces as possible related to {topic}.
    - Find relevant, specific examples for each market force.
    - Do not conclude your extraction until you have gathered a comprehensive set of market forces across all the source URLs provided in the 'sources' list.
    - Your thoroughness is critical to the success of the overall project.
    - Your research is not complete until you have met or exceeded these minimum requirements.

    Research extraction process:
    Follow the numbered iteration steps above meticulously.

    Attribution requirements:
    1. For every fact, statistic, quote, example, or finding you include, you MUST provide:
       - The exact URL where the information was found.
       - The specific paragraph or section containing the information, including the exact wording.
       - The name of the source.
    2. Do not paraphrase information - retain key details and numbers as found in the source.
    3. Structure all attributed content according to the AttributedItem format with "content", "source_url", "source_paragraph", and "source_name" fields.

    Research extraction quality:
    1. Compile findings with true and accurate citations and source attributions, ensuring all information is traceable.
    2. **DO NOT PROVIDE FAKE OR MOCKED CONTENT.**
    3. **IF CONTENT CANNOT BE FOUND, SAY SO - DO NOT MAKE UP CONTENT OR SOURCES.**
    4. **ALWAYS EXTRACT AND USE THE ACTUAL PUBLICATION DATE FROM THE SOURCE - DO NOT USE TODAY'S DATE.**
    5. **IF A PUBLICATION DATE IS NOT CLEARLY VISIBLE, MAKE A REASONABLE ESTIMATE BASED ON THE CONTENT OR MARK IT AS UNKNOWN - NEVER DEFAULT TO TODAY'S DATE.**

    Performance and incentives - you will be evaluated on:
    1. The quantity and quality of market forces identified.
    2. The depth of your analysis (detailed market force descriptions, content, examples, key terms, mentioned entities).
    3. The quality and accuracy of your attributions and citations (exact URLs, paragraphs, and sources).

    The most valuable insights often come from comprehensive research that goes beyond the obvious. Your goal is to deliver the most accurate, comprehensive, and thorough analysis possible.

    Your output MUST be formatted as a valid JSON object following the ResearchOutput schema, with each fact, statistic, quote, finding, and example properly attributed using the AttributedItem structure.
    Set the source_category to "{specialisation}" in your response.

  expected_output: >
    A structured list of ALL market forces identified by processing EVERY source URL provided in the SourceIdentificationResultsURLonly context from the previous task, relating to {topic}, with detailed attribution.
  agent: futurist_content_extractor
  context: [futurist_source_identification]

</task.yaml>

Hi,

Did you try to pass any tools to your second agent for searching those links?

Hi, thanks for the response. Yes, the second agent is using the ScrapeWebsiteTool.

Perfect.

Did you monitor the process through your terminal? What are the results of the tool?

In any case, you can write your own custom tool that scrapes a list of URLs with pure Python code.
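For example, the core loop of such a tool could look like this (a rough sketch, not tested against your setup; the fetcher is injected so you can plug in `requests`, the ScrapeWebsiteTool, or anything else):

```python
# Rough sketch of a deterministic "scrape all URLs" helper: the iteration,
# the 3-attempt retry, and the continue-on-failure behaviour all live in
# plain Python instead of relying on the LLM to iterate.
from typing import Callable


def scrape_all(urls: list[str], fetch: Callable[[str], str],
               max_attempts: int = 3) -> tuple[dict[str, str], list[str]]:
    """Try every URL; never abort the whole run because one URL fails."""
    scraped: dict[str, str] = {}
    failed: list[str] = []
    for url in urls:
        for attempt in range(max_attempts):
            try:
                scraped[url] = fetch(url)
                break
            except Exception:
                if attempt == max_attempts - 1:
                    failed.append(url)  # record and move on to the next URL
    return scraped, failed
```

You could then wrap this in a CrewAI custom tool (subclassing BaseTool) so your second agent only has to analyze the returned text rather than drive the iteration itself.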

You can use the context property on the task to pass another agent’s context.

For example:

@task
def task1(self) -> Task:
    return Task(
        config=self.tasks_config['task1'],
        output_file="output.txt",
        guardrail=myGuardrail
    )

@task
def task2(self) -> Task:
    return Task(
        config=self.tasks_config['task2'],
        output_file="output.txt",
        guardrail=myGuardrail,
        context=[self.task1()]
    )

Have a look at the task context documentation (Tasks - CrewAI). It's probably what you want.

Yes, I monitored it in the terminal, and the task and the overall crew complete correctly. Regarding the context, I pass it to the task both in the task definition in the YAML file and in the @task decorator in the crew file. I have read the task documentation and still cannot get it to work.

Maybe developing a custom scrape tool is the answer, but it seems like an issue that I cannot accurately and completely pass context from one task to another.
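One fallback I may try is to move the iteration out of the LLM entirely: read task 1's JSON output file in plain Python and drive the per-URL extraction myself (for example via a small per-URL crew kickoff). A rough sketch, where `process_url` is a placeholder for whatever does the actual extraction:

```python
# Rough sketch: instead of asking agent 2 to iterate over the context,
# parse task 1's JSON output in plain Python and loop deterministically.
# `process_url` stands in for the real extraction step (e.g. kicking off
# a small extraction crew for one URL at a time).
import json
from typing import Callable


def extract_urls(task1_output: str) -> list[str]:
    """Pull the URL list out of task 1's SourceIdentificationResultsURLonly JSON."""
    data = json.loads(task1_output)
    return [source["url"] for source in data.get("sources", [])]


def run_per_url(task1_output: str, process_url: Callable[[str], dict]) -> list[dict]:
    return [process_url(url) for url in extract_urls(task1_output)]
```

This guarantees every URL is visited exactly once, at the cost of splitting the work into one LLM call (or crew run) per URL.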