User-defined exception-based input context validation (token limit)

Hi Team,

I need some tips on one CrewAI use case.

I have 40k lines of code in one method.

When this method enters the crew's tasks, it should throw a user-defined exception if the input exceeds the token limit.

Please help me with this logic.

I want a user-defined exception based on the token limit: I need to validate the task's input context length and add the user-defined message to the output report.

Example: the task input is 40k tokens and max_token=32k, so it normally throws an "exceeding max_token limit" error, right? At that point I need to add the user-defined error message instead.
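Roughly, the pre-check I have in mind looks like this (just a sketch; the 4-characters-per-token ratio is a crude estimate, and MAX_TOKENS / MyTokenLimitError are placeholder names):

MAX_TOKENS = 32_768  # the model's context limit

class MyTokenLimitError(Exception):
    """Placeholder for the user-defined exception I want to raise."""

def check_input(task_input: str) -> None:
    # Crude chars-per-token estimate; a real tokenizer would be more accurate.
    estimated_tokens = len(task_input) // 4
    if estimated_tokens > MAX_TOKENS:
        raise MyTokenLimitError(
            f"Input is ~{estimated_tokens} tokens, over the {MAX_TOKENS} limit."
        )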

Thank you.

File "/Users/paarttipaa/ProjectTask/GithubProj/slc_code_explanation_project/SLC_Step02_Crewai/work/crewai/javadesigndocgen/.venv/lib/python3.12/site-packages/litellm/llms/watsonx/completion/handler.py", line 420, in handle_text_request with self.request_manager.request( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__ return next(self.gen) ^^^^^^^^^^^^^^ File "/Users/paarttipaa/ProjectTask/GithubProj/slc_code_explanation_project/SLC_Step02_Crewai/work/crewai/javadesigndocgen/.venv/lib/python3.12/site-packages/litellm/llms/watsonx/completion/handler.py", line 698, in request raise WatsonXAIError(status_code=500, message=str(e)) litellm.llms.watsonx.common_utils.WatsonXAIError: Error 400 (Bad Request): {"errors":[{"code":"invalid_input_argument","message":"Invalid input argument for Model 'mistralai/mistral-large': the number of input tokens 39055 cannot exceed the total tokens limit 32768 for this model","more_info":"https://cloud.ibm.com/apidocs/watsonx-ai"}],"trace":"f7766797cddbd9e1e32c3595ede7e127","status_code":400}

This error arises because of the input token limitation.

I am trying to handle this error with a user-defined exception message and continue the remaining process during kickoff_for_each() execution. Help me with this…

----Reply:

Can you clarify: are you trying to catch the token-limit exception and continue automatically despite it?

CrewAI already has some token limit handling functionality here → crewAI/src/crewai/utilities/exceptions/context_window_exceeding_exception.py at main · crewAIInc/crewAI · GitHub

You can try extending it for your use case. Here's a starter custom exception for token-limit validation:

from typing import Optional

from crewai.utilities.exceptions.context_window_exceeding_exception import (
    LLMContextLengthExceededException,
)


class TaskTokenLimitExceededException(LLMContextLengthExceededException):
    """Raised when a task's input context exceeds the model's token limit."""

    def __init__(self, input_tokens: int, max_tokens: int, message: Optional[str] = None):
        self.input_tokens = input_tokens
        self.max_tokens = max_tokens
        self.custom_message = message
        # Skip the parent __init__ (it builds its own generic message) and
        # initialize Exception directly with our formatted message.
        super(LLMContextLengthExceededException, self).__init__(self._get_error_message(""))

    def _get_error_message(self, _: str) -> str:
        base_message = (
            f"Task input exceeds token limit. Input tokens: {self.input_tokens}, "
            f"Max allowed: {self.max_tokens}"
        )
        if self.custom_message:
            return f"{base_message}\n{self.custom_message}"
        return base_message
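A quick usage sketch (assuming litellm's token_counter as an approximate counter and the 32,768 limit from your error; validate_task_input is just an illustrative helper name):

import litellm

def validate_task_input(task_input: str, custom_message: str, max_tokens: int = 32_768) -> None:
    # token_counter falls back to a generic tokenizer when it doesn't know
    # the model, so treat the count as an approximation.
    input_tokens = litellm.token_counter(model="mistralai/mistral-large", text=task_input)
    if input_tokens > max_tokens:
        raise TaskTokenLimitExceededException(
            input_tokens=input_tokens,
            max_tokens=max_tokens,
            message=custom_message,
        )

Call it on the task's input before the crew runs, and the exception message will carry both the token counts and your custom text.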

----Reply:
Yes, the process needs to continue automatically despite the exception.

@tonykipkemboi I'm facing difficulties implementing the above exception class.

In kickoff_for_each(), if the exception occurs at the nth iteration, it should print the exception message and continue with the remaining iterations. Please help with this.
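Something like this loop is the flow I'm after (a sketch; inputs_list, crew, and the "code" input key stand in for my actual setup, and validate_task_input is the helper sketched above):

results = []
for i, inputs in enumerate(inputs_list):
    try:
        # Pre-validate this iteration's input before running the crew.
        # "code" is a hypothetical input key for the method's source text.
        validate_task_input(inputs["code"], "Input too large; skipping this chunk.")
        results.append(crew.kickoff(inputs=inputs))
    except TaskTokenLimitExceededException as e:
        # Print the user-defined message and keep going with the next input,
        # recording the message so it ends up in the output report.
        print(f"Iteration {i}: {e}")
        results.append(str(e))

Looping over crew.kickoff() manually instead of calling kickoff_for_each() makes it straightforward to wrap each iteration in try/except.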