Can we have a feed from GitHub to Announcements?

It would be really useful to see when there are releases and updates to releases.
Could these be handled by a crew that takes from GitHub (Releases · crewAIInc/crewAI · GitHub) and posts to Announcements - CrewAI, along with some commentary?

3 Likes

Totally agree with you. :clap::clap:

Communication is definitely always worth it to keep the community more engaged.

2 Likes

This would be very useful; it’s hard to know when a new version is released. If I weren’t active on GitHub I would miss most of them.

1 Like

Thinking about it… it would be great to have a crew that looks at all the bugs and issues and gives commentary. Maybe even suggests which bugs might be affecting the open-source and crewai.com developers. Kind of eating our own dog food. :slight_smile:

2 Likes

@tonykipkemboi grant me write access to the announcements category and I will post about the latest releases as they are tagged on Github.

1 Like

In fact… wouldn’t it be fun if CrewAI ran a crew to do this and hosted it on CrewAI Enterprise?
Let the community refine the agent / task to get the right info. We could have a community “view” on the change and an official one. I am up for contributing to the model, and it would be great to share and learn how others avoid hallucinations etc.

1 Like

Sounds like something the community can work on by itself. It would be great if CrewAI the company can donate an enterprise account to host the project like you said :sweat_smile:

Count me in if you decide to start this.

2 Likes

Let’s use this thread to post our outputs, and then we can share the code. Since it will be shared, let’s keep it simple as an example.

@agent / @task / agents.yaml / tasks.yaml
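To make that concrete, here’s a minimal sketch of what those config files could look like. It follows the standard CrewAI YAML layout, but the agent/task names and field contents are just my assumptions for illustration:

```yaml
# agents.yaml -- hypothetical minimal agent definition
release_analyst:
  role: Release Analyst
  goal: Summarize the latest CrewAI release for the community
  backstory: >
    You turn raw release notes and diffs into short,
    readable summaries for forum posts.

# tasks.yaml -- hypothetical minimal task definition
summarize_release:
  description: >
    Review the latest release notes and produce a short
    TL;DR suitable for the Announcements category.
  expected_output: A Markdown post with a TL;DR and the key changes.
  agent: release_analyst
```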

Then maybe CrewAI will let us maintain it…

Step one… let’s discuss “what does good look like”.

I’ll start…

Run daily.
Get latest issues
Get latest releases
Review latest releases for context
Review latest issues for context and community usefulness.
Create a single simple report for viewing, with a TL;DR
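For the “get latest issues / releases” steps above, a minimal sketch using the public GitHub REST API. The endpoints are the standard GitHub API v3 ones; the one-line report format is just an illustration, not a proposal for the crew’s actual output:

```python
# Sketch: fetch the latest release and recent open issues for crewAI
# from the public GitHub REST API, then render one-line TL;DR entries.
import json
import urllib.request

API_BASE = "https://api.github.com/repos/crewAIInc/crewAI"

def fetch_json(url: str):
    """Fetch and decode a JSON payload from the GitHub API."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def tldr_release(release: dict) -> str:
    """One-line summary of a release payload: tag plus title."""
    return f"{release['tag_name']}: {release.get('name') or 'untitled release'}"

def tldr_issue(issue: dict) -> str:
    """One-line summary of an issue payload: number plus title."""
    return f"#{issue['number']}: {issue['title']}"

# Usage (needs network access):
#   latest = fetch_json(f"{API_BASE}/releases/latest")
#   print(tldr_release(latest))
#   for issue in fetch_json(f"{API_BASE}/issues?state=open&per_page=5"):
#       print(tldr_issue(issue))
```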

For me the questions will be…
What makes a good, useful issue?
What is a good output for the community to consume?

Thoughts?

1 Like

We have a change log now! Changelog - CrewAI

2 Likes

Hey everyone,

I thought your initiative here was really cool. It shows our community is looking to help each other out, which is great.

To chip in, I’m sharing a simple draft of one way to tackle this need. It’s all pretty basic, still in the early stages, but maybe it can help you guys develop a more robust and professional solution.

I proposed an automated workflow that basically:

  • Monitors: Checks if a new version of the crewai library has been released.
  • Analyzes: If there’s a new version, it figures out which important code files changed between the new version and the previous one (e.g., by comparing git tags).
  • Summarizes: For each important changed file, it generates a concise, easy-to-understand summary of the modifications using an LLM.
  • Augments: It grabs the official release notes (which might be brief) and beefs them up with the generated file summaries.
  • Reports: It creates a final report in Markdown format, perfect for posting in a forum, detailing the changes in the new release comprehensively.
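The “Analyzes” step could be sketched roughly like this, assuming a local checkout of the repo. The rules for which files count as “important” (code and docs, skipping tests and CI config) are my own guess at a sensible filter:

```python
# Sketch: list files changed between two release tags and keep only
# the ones worth summarizing (source and docs; skip tests and CI).
import subprocess

def changed_files(repo_dir: str, old_tag: str, new_tag: str) -> list:
    """Paths changed between two tags, via `git diff --name-only`."""
    result = subprocess.run(
        ["git", "diff", "--name-only", f"{old_tag}..{new_tag}"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

def important_files(paths: list) -> list:
    """Keep code and docs; drop tests and workflow config."""
    keep_suffixes = (".py", ".md", ".mdx")
    skip_prefixes = ("tests/", ".github/")
    return [
        p for p in paths
        if p.endswith(keep_suffixes) and not p.startswith(skip_prefixes)
    ]

# Usage (needs the repo cloned locally):
#   paths = changed_files("crewAI", "0.2.0", "0.3.0")
#   for path in important_files(paths):
#       ...  # feed the diff of `path` to an LLM for summarizing
```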

I didn’t go down the route of handling issues or ongoing PRs because a lot of those might just get discarded anyway. That could quickly take us from “we have too little information” to “we have way too much information.” :grin:

The basic structure uses Flows, with some cooler features like Conditional Logic and state persistence between runs.

Hope the draft and the code comments are useful for your initiative, even if it’s just to serve as an example of how not to do things! :sweat_smile:

Directory Structure:

crewai_release_watch_flow/
├── config/
│   ├── agents.yaml
│   ├── llms.yaml
│   └── tasks.yaml
├── models/
│   ├── __init__.py
│   └── datamodels.py
├── tools/
│   ├── __init__.py
│   └── releasewatchtools.py
├── main.py
└── utils.py

main.py file:

import logging
from typing import List, Dict, Any

from packaging.version import parse as parse_version

from crewai.flow.flow import Flow, listen, start, router, or_
from crewai.flow.persistence import persist
from pydantic import BaseModel, Field

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
)
logger = logging.getLogger(__name__)

# Define a consistent ID for state persistence
# This ID ensures the flow retrieves the correct state across runs.
FLOW_STATE_ID = "crewai-release-monitor-state"

# --- Data Models (models/datamodels.py) ---

class ChangeSummary(BaseModel):
    """
    Represents a summary of changes for a specific file.
    """

    file_changed: str = Field(..., description="Path of the file that changed.")
    summary: str = Field(..., description="Generated summary of the changes.")

class FlowState(BaseModel):
    """
    Holds the state of the CrewAI release monitoring flow.

    This state is persisted between runs using the specified 'id'.
    Versions are stored as strings and compared using packaging.version.
    """

    id: str = Field(
        default=FLOW_STATE_ID,
        description="Unique identifier for persisting flow state.",
    )
    new_release_found: bool = Field(
        default=False,
        description="Flag indicating if a new release was detected.",
    )
    new_release: str = Field(
        default="0.2.0",
        description="Version string of the most recent release checked.",
    )
    previous_release: str = Field(
        default="0.1.0",
        description="Version string of the release before the new_release.",
    )
    official_release_note: str = Field(
        default="",
        description="Official release notes fetched from the source.",
    )
    change_summaries_list: List[Dict[str, Any]] = Field(
        default_factory=list,
        description="List of dictionaries, each summarizing changes in a file.",
    )
    output_report_generated: bool = Field(
        default=False,
        description="Flag indicating if the final report has been generated.",
    )

# --- CrewAI Flow Definition ---

@persist(verbose=True)
class CrewAIReleaseWatchFlow(Flow[FlowState]):
    """
    A CrewAI Flow to monitor CrewAI releases, summarize changes,
    and generate an augmented release report.
    """

    @start()
    def check_for_new_version(self):
        """
        Starting node: Checks if a new version of CrewAI has been released
        by comparing the latest fetched version with the 'new_release'
        stored in the state, using packaging.version.parse.

        If a new version is found, resets the state variables related to
        change summaries and reports for the new analysis cycle.
        """
        logger.info("Checking for new CrewAI releases...")

        # --- Placeholder Logic ---
        # TODO: Implement actual version checking logic here.
        # 1. Fetch the latest version string from GitHub Releases.
        # 2. Parse it using packaging.version.parse.
        # 3. Compare with self.state.new_release:
        #    new_release_found = latest_release > current_known_release
        # -------------------------

        # Simulate a new release
        simulated_new_release: str = "0.3.0"
        simulated_previous_release: str = "0.2.0"

        # --- State Update ---

        logger.info(
            f"Comparing latest fetched release "
            f"('{simulated_new_release}') with known latest "
            f"release ('{self.state.new_release}')."
        )

        try:
            latest_release = parse_version(simulated_new_release)
            current_known_release = parse_version(self.state.new_release)
            new_release_found = latest_release > current_known_release
        except Exception as e:
            logger.error(f"Failed to parse releases for comparison: {e}")
            new_release_found = False  # Treat parse error as no new release

        if new_release_found:
            logger.info(
                f"New version found: {simulated_new_release} "
                f"(Previous was: {self.state.new_release})"
            )

            # --- Placeholder Logic ---
            # TODO: Fetch actual official release notes for the new version
            # official_notes = fetch_release_notes(new_release)
            # -------------------------
            official_notes = f"What's New in {simulated_new_release}..."

            # --- State Update ---
            self.state.new_release_found = True
            self.state.new_release = simulated_new_release
            self.state.previous_release = simulated_previous_release
            self.state.official_release_note = official_notes
            self.state.change_summaries_list = []
            self.state.output_report_generated = False
            logger.info(
                f"State updated: previous_release='{simulated_previous_release}', "
                f"new_release='{simulated_new_release}'."
            )
        else:
            logger.info(
                f"No new version found (latest is still '{self.state.new_release}')."
            )
            self.state.new_release_found = False

    @router(check_for_new_version)
    def route_after_check(self):
        """
        Routes the flow based on whether a new version was found.
        """
        if self.state.new_release_found:
            logger.info("Routing to: summarize_changed_files")
            return "new_version_found"
        else:
            logger.info("Routing to: end_flow (no new version)")
            return "no_new_version"

    @listen("new_version_found")
    def summarize_changed_files(self):
        """
        Analyzes changes between the 'previous_release' and 'new_release'
        versions stored in the state and generates summaries.
        """
        logger.info("Starting analysis of changed files...")
        # Retrieve version strings directly from state
        version_new = self.state.new_release
        version_old = self.state.previous_release
        logger.info(f"Analyzing changes between {version_old} and {version_new}")

        # --- Placeholder Logic ---
        # TODO: Implement logic to get changed files between releases
        #       using version_old and version_new (e.g., git tags).
        # TODO: Filter files (analyze code and docs only).
        # TODO: Use LLM to summarize diffs.
        # -------------------------

        # Simulate a summary
        file_changed = "src/crewai/crew.py"
        summary_text = (
            f"Introduced new collaboration modes for crews "
            f"between versions {version_old} and {version_new}."
        )

        change_summary = ChangeSummary(file_changed=file_changed, summary=summary_text)

        self.state.change_summaries_list.append(change_summary.model_dump())
        logger.info(f"Generated summary for: {file_changed}")

        logger.info("File change analysis complete. Routing to compile report.")

    @listen(summarize_changed_files)
    def compile_final_report(self):
        """
        Generates the final Markdown report using the 'new_release' version,
        official notes, and generated summaries from the state.
        """
        logger.info("Compiling the final release report...")

        official_notes = self.state.official_release_note
        summaries = self.state.change_summaries_list
        version = self.state.new_release

        # --- Placeholder Logic ---
        # TODO: Implement the actual Markdown generation logic.
        # -------------------------

        final_report_md = f"# CrewAI v{version} Report\n\n"
        final_report_md += "## Official Notes\n\n"
        final_report_md += f"{official_notes}\n\n"
        final_report_md += "## File Summaries\n\n"
        if summaries:
            for summary_dict in summaries:
                final_report_md += f"- `{summary_dict['file_changed']}`: "
                final_report_md += f"{summary_dict['summary']}\n"
        else:
            final_report_md += "- N/A\n"

        # --- Placeholder Logic ---
        # TODO: Implement saving or posting the report.
        # -------------------------

        self.state.output_report_generated = True
        logger.info("Report generation complete. Routing to end_flow.")

    @listen(or_("no_new_version", compile_final_report))
    def end_flow(self):
        """
        Final node: Logs the completion of the flow run.
        """
        logger.info("-- CrewAI Release Watch Flow finished --")


# --- Flow Execution ---
if __name__ == "__main__":
    logger.info("Initializing CrewAIReleaseWatchFlow...")
    flow = CrewAIReleaseWatchFlow()

    logger.info(f"Kicking off flow with state ID: {FLOW_STATE_ID}")
    flow.kickoff(
        inputs={
            "id": FLOW_STATE_ID,
        }
    )

    logger.info("Final flow state after run:")
    logger.info(flow.state.model_dump_json(indent=2))
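Since the flow persists its state between runs, it lends itself to the “run daily” requirement discussed above. One way to do that is a GitHub Actions cron job; a hedged sketch, where the workflow file name, Python version, and entry point are all assumptions:

```yaml
# .github/workflows/release-watch.yml -- hypothetical daily trigger
name: CrewAI Release Watch
on:
  schedule:
    - cron: "0 8 * * *"   # every day at 08:00 UTC
  workflow_dispatch: {}    # allow manual runs too
jobs:
  run-flow:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install crewai packaging
      - run: python main.py
```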

1 Like

I love this… you can tell that you are a great coder… Very cool. I love how we all have different styles. Let me have a crack at the weekend and then we can compare and refine together. This is fun

1 Like

Hey everyone! I love this idea brewing here.

Here’s how we can help for now: I can create a repo and add you all as contributors, and that way I can deploy using my CrewAI account to avoid the extra approvals for execution limits, since we’ll have it run daily.

If this sounds good:

  1. let me know
  2. give a suggested name for the repo
3 Likes

This sounds great. Please let us know. I have a working idea to share and discuss with the community and happy to add to the repo so we can discuss and refine together.

I’ll upload the code to the repo when ready, but will upload the TL;DR report as a draft.
I have made a lot of assumptions and know that there will be some great ideas and other refinements… hoping this is useful.

CrewAI Release & Community Insights – TL;DR

Changelog Timeline

  • 0.118.0 (2025-04-30): No-code Guardrail Builder launched. Fixes to templates, LLMGuardrail rename, litellm compatibility expanded, internal documentation cleaned up. Major onboarding and enterprise doc improvements, YouTube embed step added, CrewStructuredTool removed from public docs.

  • 0.117.1 (2025-04-28): crewai-tools and liteLLM upgraded. Open-source software (OSS) fixes. Explainer for new tools and CLI guides in documentation.

  • 0.117.0 (2025-04-28): Added result_as_answer to @tool. Support for new LLMs (large language models): GPT-4.1, Gemini-2.5 Pro, Huggingface CLI. Knowledge management improved, supports Python 3.10+, enhanced enterprise docs and automated output folders.

  • 0.114.0 (2025-04-10): Agents act as atomic units, support for custom LLM, external memory and observability, multimodal validation, secure fingerprints, YAML extraction. NVIDIA NIM, flow and singular agent guides, and local deployment tutorials updated.

  • 0.108.0 (2025-03-17): Tabs-to-spaces alignment in crew.py, better streaming/event and fingerprint handling, new visualisations for events. Documentation for new tools, updated install guides.

  • 0.105.0 (2025-03-09): Flow state export, Crew embedder, advanced event emitter and LLM tracking, improved asynchronous support, CLI/tool integration extended. Docs refreshed for Qdrant/Bedrock, hierarchy layouts, and other prompt corrections.

Highlights & Changes

  • Rapid feature delivery: Major updates have enabled no-code guardrail design, atomic agent capability, persistent knowledge options, and broad LLM (large language model) support—including Gemini and GPT-4.1. CrewAI now supports new agent orchestration and memory, benefitting a wider range of projects.

  • Frequent breaking changes: Rapid development has introduced tool API changes and schema shifts, with breaking changes in hierarchical process configs and StructuredTool signatures. Some provider/model fields and underlying toolkits have changed without perfect doc coverage.

  • Community engagement up: The community drives validation (CrewAI Lint extension), shares knowledge (forum threads and guides), and responds to breaking changes quickly. Community-developed workarounds fill critical gaps and keep projects moving forward.

  • Persistent blockers: Hierarchical orchestration, tool API/config mismatches, and migration challenges remain top support issues. Documentation cannot always keep pace, leaving knowledge and migration best practices lagging—especially for advanced usage.

Patterns & Relationships

  • Hierarchical process and delegation remain the top frustration after rapid updates; schema and config details trip up new and experienced users alike.

  • Model/provider expansion outpaces documentation, often leading to misconfiguration and wasted cycles troubleshooting.

  • Community tooling (like the VS Code YAML linter) grows in response to migration and config issues, highlighting unmet product needs.

  • Documentation progress is continuous—but gaps persist, especially for advanced orchestration and migration.

  • Migration pain: Users report confusion over deprecations and process changes; migration advice is often buried in forums or not yet written.

Actionable Recommendations

  • Before upgrading: Read the latest changelog and test your hierarchical and tool configurations. Validate your setup before deploying to production.

  • Test new features: Isolate and experiment with new guardrail/agent/LLM support on a non-critical project. Reference community-shared minimal configs for a head start.

  • Report issues: Found a bug, doc gap, or migration pain? Log a clear GitHub issue or forum thread—the team and peers respond actively.

  • Tap into CopilotKit: For deeper integrations, see CopilotKit Blog for recipes and how-tos.

  • Share solutions: Give back by posting your validated configs, memory handlers, or migration steps in the community forum; these build knowledge for all.

  • Check the FAQ before posting: Review common blockers and updated migration tips in online documentation and FAQs for fast answers.

:heart: Thank you for shaping CrewAI. For support or to contribute a solution, join the community and keep building!

Awesome! What’s your GitHub handles so I can add the three of you as contributors to the repo?

1 Like

Thank you for the repo. I added my suggestion for the TL;DR code in tldr-option1. I used a simple, clean base setup, as I hope this will help others get into using CrewAI.

I know @Max_Moura has some really cool ideas from looking at his code.

Personal note: @Max_Moura, I know you don’t like hierarchical crews… so forgive me… they are my base for some crews. I am happy to change :grinning_face:

1 Like

Looking forward to seeing your ideas and comments @zinyando . Super excited to see how we refine this.
also hoping you like Hierarchical crews :wink:

1 Like

Fixed a few issues and tested with the latest version.

CrewAI Q2 2025: Cross-Analysis of Releases, Issues & Community Feedback

Prepared for the CrewAI Community. This report surfaces actionable patterns, direct linkages, and opportunities for improving both product and community experience, based on comprehensive release notes, GitHub issue tracking, the community forum, documentation, and blog highlights from March 9, 2025, to May 9, 2025.


Changelog Timeline

Full changelog: CrewAI Release Notes


Highlights & Changes

  • Rapid feature delivery: Significant enhancements in agent collaboration and memory management More Info

  • Frequent breaking changes: Recurring memory management issues impacting user experience More Info

  • Community engagement up: Increased positive feedback regarding flexible tool integrations More Info

  • Persistent blockers: Ongoing concerns with no-code implementation complexities More Info


Patterns & Relationships

  • A distinct pattern shows an increase in community-driven feature requests, suggesting a collaborative approach to development.

  • Many users express the need for enhanced documentation to complement the newly launched features.

  • Migration pain: Users are facing challenges as they adapt to new no-code methods of operation.


Actionable Recommendations

  • Before upgrading: Review the full changelog for specific breaking changes.

  • Test new features: Engage with the new features in a development sandbox environment to understand applicability.

  • Report issues: Actively submit detailed issues on GitHub to assist in faster resolution.

  • Tap into CopilotKit: Leverage the resources offered via CopilotKit for enhanced project capabilities.

  • Share solutions: Participate in discussions to share your solutions and insights related to the tools.

  • Check the FAQ before posting: Utilize the FAQ section to clarify common queries before seeking further assistance.


:heart: Thank you for shaping CrewAI. For support or to contribute a solution, join the community and keep building!

1 Like

Also, the full report is here: crewai-changelog-tldr/tldr-option1/insights.md at main · tonykipkemboi/crewai-changelog-tldr · GitHub

and I’m working on a format that copies the awesome format by @zinyando

1 Like

This looks amazing @Tony_Wood, I’ll take a proper look at the code later but I like this first step.

Do you see any areas for improvement?