Distinct RAG tools need different embedding models (Hugging Face ST)

I'm building a sequential crew with distinct tasks/tools: [DIR and FILE search on topic 1, then PDF search on topic 2].
I'm using sentence-transformers models for the embeddings, for example:
emb_conf = dict(
    embedder=dict(
        provider="huggingface",
        config=dict(model=EMBEDDING_MODEL_NAME),
    )
)
and GPT-4o as the LLM.
I noticed issues when the same embedding model is used for the PDF tool and for the DIR/FILE search tools: a task does not complete, or the RAG step mixes up content from the PDF with content from the folder, etc.
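
For context, here is roughly how I wire the config into both tool sets. This is just a sketch: I'm assuming DirectorySearchTool and PDFSearchTool from crewai_tools, and the paths are placeholders.

from crewai_tools import DirectorySearchTool, PDFSearchTool

# Same embedder config reused by both RAG tools (the setup that misbehaves for me)
dir_tool = DirectorySearchTool(directory="data/topic1", config=emb_conf)
pdf_tool = PDFSearchTool(pdf="data/topic2.pdf", config=emb_conf)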

The issues go away when I use distinct embedding models for the two tool sets.
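Concretely, what works for me is giving each tool set its own embedder config pointing to a different sentence-transformers model, along these lines (the model names below are just examples, not a recommendation):

# One embedder config per tool set, each with a different ST model
emb_conf_dir = dict(
    embedder=dict(
        provider="huggingface",
        config=dict(model="sentence-transformers/all-MiniLM-L6-v2"),
    )
)
emb_conf_pdf = dict(
    embedder=dict(
        provider="huggingface",
        config=dict(model="sentence-transformers/all-mpnet-base-v2"),
    )
)

dir_tool = DirectorySearchTool(directory="data/topic1", config=emb_conf_dir)
pdf_tool = PDFSearchTool(pdf="data/topic2.pdf", config=emb_conf_pdf)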

Has anyone run into a similar issue? Is it specific to sentence-transformers models? Did I miss something?
Thanks in advance for your response.