Context window limited to 8192 tokens while using the OpenAI embedding model

When using gpt-4o-mini I got an error saying:

File "C:\Users\crewai\legacy_code\ideal_prompt\src.venv\Lib\site-packages\openai\_base_client.py", line 1041, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 10491 tokens (10491 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}
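To confirm which payload is blowing past the limit, a quick token count helps (a sketch, assuming tiktoken is installed; cl100k_base is the encoding the text-embedding-3-* models use):

```python
# Quick check: count tokens the way the embedding endpoint does.
# Assumes tiktoken is installed; cl100k_base is the encoding used by the
# text-embedding-3-* models, whose input limit is what the 400 above reports.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "..."  # whatever text is being embedded
print(len(enc.encode(text)))  # anything over the ~8k limit triggers the 400
```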

However, when I take the same prompt and use it in a non-CrewAI OpenAI program, I have no issues getting a response. That run used 33,261 total tokens, of which 938 were completion tokens.
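For reference, the direct call looked roughly like this (a minimal sketch; long_prompt stands in for the actual prompt text):

```python
# Minimal sketch of the non-CrewAI call that succeeded. gpt-4o-mini has a
# 128k-token context window, so a ~32k-token prompt is well within bounds.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
long_prompt = "..."  # the same prompt that fails inside CrewAI

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": long_prompt}],
)
print(resp.usage.total_tokens, resp.usage.completion_tokens)
```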

LLM_MODEL is an environment variable set to gpt-4o-mini.
Agent code:

llm_model = OpenAI(model=os.environ.get("LLM_MODEL", "no_model_defined"))
return Agent(config=self.agents_config["Doc_agent"], verbose=True, llm=llm_model)
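For context, a self-contained version of that factory might look like the following. The ChatOpenAI import is an assumption (the snippet above doesn't show where OpenAI comes from), and the inline agents_config dict stands in for the YAML-loaded config:

```python
# Self-contained sketch of the agent factory. The ChatOpenAI import is an
# assumption, and the inline dict replaces the real agents_config.
import os
from crewai import Agent
from langchain_openai import ChatOpenAI

agents_config = {
    "Doc_agent": {
        "role": "Documentation agent",  # placeholder values
        "goal": "...",
        "backstory": "...",
    }
}

llm_model = ChatOpenAI(model=os.environ.get("LLM_MODEL", "no_model_defined"))
agent = Agent(config=agents_config["Doc_agent"], verbose=True, llm=llm_model)
# Note: llm= only sets the chat model; memory embeddings go through a
# separate embedding model with its own, much smaller context limit.
```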

The agents and crew use memory.
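With memory enabled, CrewAI embeds stored interaction chunks using the configured (or default) embedding model, and that is where an 8192-token limit can bite regardless of the chat model's window. A sketch of the crew-level embedder config, following the shape shown in the crewai docs (agent and task assumed defined elsewhere):

```python
# Sketch: crew-level memory + embedder config. With memory=True, CrewAI
# embeds stored chunks, and each chunk must fit the embedding model's
# context limit, independent of the chat model's 128k window.
from crewai import Crew

crew = Crew(
    agents=[agent],  # assumed defined elsewhere
    tasks=[task],    # assumed defined elsewhere
    memory=True,
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
)
```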

Using Python 3.12.2, crewai 0.60.0, and crewai-tools 0.12.1.

Update: the model causing the issue is the embedding model, text-embedding-3, which is limited to 8191 input tokens.
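If text handed to the embedder can exceed that limit, one workaround is to clip or split it by token count before embedding (a sketch, assuming tiktoken; this runs outside CrewAI's own memory pipeline):

```python
# Sketch: keep embedding inputs under the text-embedding-3 input cap.
# cl100k_base is the tokenizer these models use; 8191 is their token limit.
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")
EMBED_TOKEN_LIMIT = 8191

def clip_for_embedding(text: str, limit: int = EMBED_TOKEN_LIMIT) -> str:
    """Truncate text to at most `limit` tokens."""
    tokens = ENC.encode(text)
    return text if len(tokens) <= limit else ENC.decode(tokens[:limit])

def chunk_for_embedding(text: str, limit: int = EMBED_TOKEN_LIMIT) -> list[str]:
    """Split text into consecutive pieces of at most `limit` tokens each."""
    tokens = ENC.encode(text)
    return [ENC.decode(tokens[i:i + limit]) for i in range(0, len(tokens), limit)]
```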