Introducing memweave — persistent agent memory as plain Markdown files | Feedback + CrewAI integration discussion

Hi CrewAI community,

I recently released memweave, an open-source Python library for AI agent memory, and wanted to share it here and discuss how it could integrate with CrewAI agents.

What it does:

  • Stores agent memories as plain Markdown files — readable, editable, git diff-able

  • Hybrid search: BM25 keyword + vector (sqlite-vec) combined

  • Zero servers, zero setup — single SQLite file on disk

  • Zero LLM calls on write/search operations

  • Works fully offline — falls back to keyword search if embedding API is down

  • Pluggable search strategies and post-processors
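For anyone curious what "BM25 + vector combined" can look like in practice, here is a minimal, library-independent sketch of hybrid score fusion. The min-max normalization and the `alpha` weighting are illustrative assumptions on my part, not memweave's actual implementation:

```python
# Illustrative sketch of hybrid score fusion (NOT memweave's actual code):
# normalize each ranking's raw scores to [0, 1], then take a weighted sum.

def hybrid_scores(bm25, vector, alpha=0.5):
    """bm25 / vector: dicts mapping doc_id -> raw score; alpha weights the vector side."""
    def normalize(scores):
        if not scores:
            return {}
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid divide-by-zero when all scores are equal
        return {doc: (s - lo) / span for doc, s in scores.items()}

    nb, nv = normalize(bm25), normalize(vector)
    docs = set(nb) | set(nv)
    # a document missing from one ranking contributes 0 from that side
    return {d: (1 - alpha) * nb.get(d, 0.0) + alpha * nv.get(d, 0.0) for d in docs}

ranked = sorted(
    hybrid_scores({"a": 2.0, "b": 1.0, "c": 0.5}, {"b": 0.9, "c": 0.4}).items(),
    key=lambda kv: kv[1],
    reverse=True,
)
```

With `alpha=1.0` this degrades to pure vector ranking and with `alpha=0.0` to pure BM25, which is one simple way an offline keyword-only fallback could slot in.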

Quick example:

from memweave import MemWeave, MemoryConfig  # assuming top-level exports

async with MemWeave(MemoryConfig(workspace_dir=".")) as mem:
    await mem.index()  # build/refresh the index from the workspace's Markdown files
    results = await mem.search("user preferences", min_score=0.0)

Why it’s relevant for CrewAI:
CrewAI agents could use memweave as a persistent memory backend: each agent writes to its own namespace (e.g. memory/agents/researcher/) while sharing a common workspace, and a coordinator can search across all agents' memories.
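A rough, stdlib-only sketch of that layout (the memory/agents/&lt;name&gt;/ convention is my illustrative assumption, and the keyword scan is just a stand-in for a real mem.search() across the shared workspace):

```python
# Sketch of per-agent memory namespaces. The directory layout is an
# illustrative assumption, not a documented memweave convention.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# Each agent writes plain Markdown under its own namespace.
notes = {
    "researcher": "# Findings\n\nUser prefers concise weekly summaries.\n",
    "writer": "# Draft notes\n\nTone: friendly, avoid jargon.\n",
}
for agent, text in notes.items():
    path = root / "memory" / "agents" / agent / "notes.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(text)

def coordinator_search(workspace: Path, term: str):
    """Stand-in for mem.search(): scan every agent's Markdown files."""
    hits = []
    for md in workspace.glob("memory/agents/*/*.md"):
        if term.lower() in md.read_text().lower():
            hits.append(md.parts[-2])  # the agent namespace that matched
    return sorted(hits)

matches = coordinator_search(root, "summaries")
```

Because the memories are plain files in one workspace, the coordinator gets cross-agent search "for free" by indexing the workspace root, while each agent's writes stay isolated in its own directory.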

Looking for:

  1. Feedback from the community on the approach

  2. Thoughts from the CrewAI team on potential integration patterns

  3. Anyone already building multi-agent memory systems who wants to collaborate

:package: pip install memweave
:link: GitHub: https://github.com/sachinsharma9780/memweave (zero-infrastructure, async-first Python library that gives AI agents persistent, searchable memory, stored as plain Markdown files)

Happy to discuss and answer any questions!