output_pydantic LLM configuration

I have set up an Agent with `function_calling_llm` and `llm`, and passed the exact same llm objects to the crew. I found that there is a class `InternalInstructor` with a `to_pydantic` method that uses the same model name as the agent's `llm` or `function_calling_llm`, but never picks up their API keys, resulting in an error when trying to get a Pydantic response from a task:
crewai.utilities.converter.ConverterError: Failed to convert text into a Pydantic model due to error: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

Can someone help me figure out how to pass an llm instance to be used here, or, if that is not possible, how to at least reuse the api_key of the agent's llms?

Reading the key from the environment is not an option, since I basically want to change keys on each request.

A solution might be to change the `to_pydantic` method of the `InternalInstructor` class from this

    def to_pydantic(self):
        messages = [{"role": "user", "content": self.content}]
        model = self._client.chat.completions.create(
            model=self.llm.model, response_model=self.model, messages=messages
        )
        return model

to this

    def to_pydantic(self):
        messages = [{"role": "user", "content": self.content}]
        model = self._client.chat.completions.create(
            model=self.llm.model,
            response_model=self.model,
            messages=messages,
            # credentials must come from self.llm — self.model is the target
            # Pydantic class and has no api_key/base_url attributes
            api_key=self.llm.api_key,
            base_url=self.llm.base_url,
        )
        return model