process_funcs_cls.py

Source: sunholo/genai/process_funcs_cls.py

Classes

GenAIFunctionProcessor

A generic class for processing function calls from google.generativeai function calling models.

This class provides a framework for handling multiple function calls in responses from generative AI systems. Users of this class should subclass it and provide their own implementation of the construct_tools method, which returns a dictionary of function names mapped to their implementations.

Attributes:
    config (ConfigManager): Configuration manager instance. Reach values via self.config within your own construct_tools() method.
    funcs (dict): A dictionary of function names mapped to their implementations.

Example usage:

    class AlloyDBFunctionProcessor(GenAIFunctionProcessor):
        def construct_tools(self) -> dict:
            pass

    config = ConfigManager()
    alloydb_processor = AlloyDBFunctionProcessor(config)

    results = alloydb_processor.process_funcs(full_response)

    alloydb_model = alloydb_processor.get_model(
        model_name="gemini-1.5-pro",
        system_instruction="You are a helpful AlloyDB agent that helps users search and extract documents from the database."
    )
  • __init__(self, config: sunholo.utils.config_class.ConfigManager)
    • Initializes the GenAIFunctionProcessor with the given configuration.

Args:
    config (ConfigManager): The configuration manager instance.

  • _validate_functions(self)
    • Validates that all functions in the funcs dictionary have docstrings.

This method checks each function in the funcs dictionary to ensure it has a docstring. If a function is missing a docstring, an error is logged, and a ValueError is raised.

Raises: ValueError: If any function is missing a docstring.
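
For illustration, a hedged sketch of a tool function that would fail this validation (the function name is hypothetical, not part of the library):

    # Hypothetical tool returned from construct_tools(); it has no docstring,
    # so _validate_functions() would log an error and raise ValueError.
    def lookup_document(doc_id: str):
        return {"doc_id": doc_id}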

  • check_function_result(self, function_name, target_value, api_requests_and_responses=[])
    • Checks if a specific function result in the api_requests_and_responses contains a certain value.

Args:
    function_name (str): The name of the function to check.
    target_value: The value to look for in the function result.
    api_requests_and_responses (list, optional): List of function call results to check. If not provided, the method will use self.last_api_requests_and_responses.

Returns: bool: True if the target_value is found in the specified function's result, otherwise False.
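
Example usage (a sketch; the function name and target value checked are illustrative assumptions, and it assumes process_funcs() has already run so that self.last_api_requests_and_responses is populated):

    # Look for a particular value in the last recorded result of a named tool.
    stop_requested = alloydb_processor.check_function_result(
        "decide_to_go_on",   # function whose recorded result to inspect
        False                # value to look for in that function's result
    )
    if stop_requested:
        print("Model signalled that the loop should end.")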

  • construct_tools(self) -> dict
    • Constructs a dictionary of tools (functions) specific to the application.

This method should be overridden in subclasses to provide the specific function implementations required for the application.

Note: All functions need arguments to avoid errors.

Returns: dict: A dictionary where keys are function names and values are function objects

Raises: NotImplementedError: If the method is not overridden in a subclass.
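
A minimal sketch of an override, assuming two illustrative AlloyDB tools (the function names and bodies are placeholders, not part of the library); note each tool has a docstring and at least one argument, as required above:

    class AlloyDBFunctionProcessor(GenAIFunctionProcessor):
        def construct_tools(self) -> dict:
            # self.config is available here to read configuration values if needed
            def search_documents(query: str):
                """Searches the AlloyDB database for documents matching the query."""
                return {"query": query, "results": []}  # placeholder implementation

            def fetch_document(doc_id: str):
                """Fetches a single document from AlloyDB by its id."""
                return {"doc_id": doc_id, "content": "..."}  # placeholder implementation

            return {
                "search_documents": search_documents,
                "fetch_document": fetch_document,
            }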

  • decide_to_go_on(go_on: bool, chat_summary: str) -> dict
    • Examine the chat history. If the answer to the user's question has been answered, then go_on=False. If the chat history indicates the answer is still being looked for, then go_on=True. If there is no chat history, then go_on=True. If there is an error that can't be corrected or solved by you, then go_on=False. If there is an error but you think you can solve it by correcting your function arguments (such as an incorrect source), then go_on=True. If you want to ask the user a question or for some more feedback, then go_on=False.
      Avoid asking the user if you suspect you can solve it yourself with the functions at your disposal - you get top marks if you solve it yourself without help. When calling, please also add a chat summary of why you think the function should be called to end.

Args:
    go_on (bool): Whether to continue searching for an answer.
    chat_summary (str): A brief explanation of why go_on is TRUE or FALSE.

Returns: boolean: True to carry on, False to stop.
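
This function is written as a tool the model itself can call, so the docstring above doubles as its model-facing instructions. A hedged sketch of calling it directly (whether it can be called on an instance, and the shape of the returned dict, are assumptions):

    # Illustrative call; in practice the model invokes this as a function call
    # and the agent loop inspects the recorded result, e.g. via
    # check_function_result("decide_to_go_on", False), to decide when to stop.
    result = alloydb_processor.decide_to_go_on(
        go_on=False,
        chat_summary="The user's question about the invoices table has been answered."
    )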

  • get_model(self, system_instruction: str, generation_config=None, model_name: str = None, tool_config: str = 'auto')
    • Constructs and returns the generative AI model configured with the tools.

This method creates a generative AI model using the tools defined in the funcs dictionary and the provided configuration options.

Args:
    model_name (str): The name of the model to use.
    system_instruction (str): Instructions for the AI system.
    generation_config (dict, optional): Configuration for generation, such as temperature.
    tool_config (str, optional): Configuration for tool behaviour: 'auto' lets the model decide, 'none' disables tools, 'any' always uses tools.

Returns: GenerativeModel: An instance of the GenerativeModel configured with the provided tools.

Example usage:

    alloydb_model = alloydb_processor.get_model(
        model_name="gemini-1.5-pro",
        system_instruction="You are a helpful AlloyDB agent that helps users search and extract documents from the database."
    )
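
A variant sketch that also passes generation options and forces tool use (the generation_config keys follow google.generativeai's GenerationConfig dict format and the values here are illustrative):

    alloydb_model = alloydb_processor.get_model(
        model_name="gemini-1.5-pro",
        system_instruction="You are a helpful AlloyDB agent.",
        generation_config={"temperature": 0.2},  # passed through to the model
        tool_config="any"                        # always call one of the tools
    )
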
  • parse_as_parts(self, api_requests_and_responses=[])

    • No docstring available.
  • parse_as_string(self, api_requests_and_responses=[])

    • No docstring available.
  • process_funcs(self, full_response, output_parts=True) -> Union[list['Part'], str]

    • Processes the functions based on the full_response from the generative model.

This method iterates through each part of the response, extracts function calls and their parameters, and executes the corresponding functions defined in the funcs dictionary.

Args:
    full_response: The response object containing function calls.
    output_parts (bool): Indicates whether to return structured parts or plain strings.

Returns: list[Part] | str: A list of Part objects or a formatted string with the results.

Example usage:

    results = alloydb_processor.process_funcs(full_response)
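
A variant sketch returning a formatted string instead of Part objects (handy for logging); it assumes full_response came from a model built with get_model() above:

    # Execute any function calls found in the response and return the results
    # as a single formatted string rather than a list of Part objects.
    results_text = alloydb_processor.process_funcs(full_response, output_parts=False)
    print(results_text)
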
  • remove_invisible_characters(self, string)

    • No docstring available.
  • run_agent_loop(self, chat, content, callback, guardrail_max=10, loop_return=3)

    • Runs the agent loop, sending messages to the orchestrator, processing responses, and executing functions.

Args:
    chat: The chat object for interaction with the orchestrator.
    content: The initial content to send to the agent.
    callback: The callback object for handling intermediate responses.
    guardrail_max (int): The maximum number of iterations for the loop.
    loop_return (int): The number of last loop iterations to return. Default 3 returns the last 3 iterations. If loop_return > guardrail_max, all iterations are returned.

Returns: tuple: (big_text, usage_metadata) from the loop execution.
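
Example usage is not included in the docstring; the sketch below assumes the chat session comes from the model's start_chat() method and that my_callback is a hypothetical handler for intermediate responses:

    # Build a tool-aware model, open a chat session, then drive the agent loop.
    alloydb_model = alloydb_processor.get_model(
        model_name="gemini-1.5-pro",
        system_instruction="You are a helpful AlloyDB agent."
    )
    chat = alloydb_model.start_chat()

    big_text, usage_metadata = alloydb_processor.run_agent_loop(
        chat=chat,
        content="Find the latest invoices table and summarise it",
        callback=my_callback,   # hypothetical callback object (assumption)
        guardrail_max=10,       # hard cap on loop iterations
        loop_return=3           # return output from the last 3 iterations
    )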

  • tool_config_setting(self, mode: str)
    • No docstring available.