The problem can usually be resolved by supplying a prompt format and `model_kwargs` appropriate to the particular LLM, since different models expect different prompt templates and accept different generation parameters.
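As a minimal sketch of this idea, a per-model lookup table can pair each LLM with its own prompt template and `model_kwargs`. The model names, template strings, and parameter values below are illustrative assumptions, not taken from any specific library:

```python
# Hypothetical per-model configuration: each entry pairs a prompt
# template with the model_kwargs that model accepts. All names and
# values here are assumptions for illustration.
MODEL_CONFIGS = {
    "gpt-3.5-turbo": {
        "prompt": "Answer concisely:\n{question}",
        "model_kwargs": {"temperature": 0.2, "max_tokens": 256},
    },
    "llama-2-7b": {
        "prompt": "[INST] {question} [/INST]",
        "model_kwargs": {"temperature": 0.7, "max_new_tokens": 256},
    },
}

def build_request(model_name: str, question: str) -> dict:
    """Return the formatted prompt and keyword arguments for a model."""
    try:
        cfg = MODEL_CONFIGS[model_name]
    except KeyError:
        raise ValueError(f"No configuration for model {model_name!r}")
    return {
        "prompt": cfg["prompt"].format(question=question),
        "model_kwargs": cfg["model_kwargs"],
    }

req = build_request("llama-2-7b", "What is model_kwargs?")
```

The point is that the selection happens in one place, so swapping in a new LLM only requires adding a config entry rather than changing call sites.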