
genaiv2.py

Source: sunholo/genai/genaiv2.py

Classes

GoogleAI

A wrapper class for Google's v2 Generative AI APIs. See https://ai.google.dev/gemini-api/docs/models/gemini-v2

  • __init__(self, config: sunholo.genai.genaiv2.GoogleAIConfig)
    • Initialize the GoogleAI client.

Args: config (GoogleAIConfig): Configuration for client initialization
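Example (a minimal sketch; GoogleAIConfig's field names are not documented on this page, so the api_key field below is an assumption):

```python
from sunholo.genai.genaiv2 import GoogleAI, GoogleAIConfig

# `api_key` is an assumed field name - check GoogleAIConfig's actual fields.
config = GoogleAIConfig(api_key="YOUR_API_KEY")
client = GoogleAI(config)
```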

  • _process_responses(self, session) -> List[str]

    • Internal method to process session responses.
  • _record_audio(self, duration: float = 5.0, sample_rate: int = 16000) -> bytes

    • Internal method to record audio.
  • _record_video(self, duration: float = 5.0) -> List[bytes]

    • Internal method to record video frames.
  • count_tokens(self, text: str, model: Optional[str] = None) -> int

    • Count the number of tokens in the text.

Args:
  • text (str): Input text
  • model (Optional[str]): Model to use for tokenization

Returns: int: Number of tokens
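Example, reusing the `client` from the construction sketch above (the model name is illustrative):

```python
# Count tokens with the default model, then with an explicit one.
n_default = client.count_tokens("How many tokens is this sentence?")
n_flash = client.count_tokens("How many tokens is this sentence?", model="gemini-2.0-flash")
print(n_default, n_flash)
```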

  • generate_text(self, prompt: str, model: Optional[str] = None, temperature: float = 0.7, max_output_tokens: int = 1024, top_p: float = 0.95, top_k: int = 20, stop_sequences: Optional[List[str]] = None, system_prompt: Optional[str] = None, tools: Optional[List[ForwardRef('types.Tool')]] = None) -> str
    • Generate text using the specified model.

Args:
  • prompt (str): The input prompt
  • model (Optional[str]): Model name to use
  • temperature (float): Controls randomness (0.0-1.0)
  • max_output_tokens (int): Maximum number of tokens to generate
  • top_p (float): Nucleus sampling parameter
  • top_k (int): Top-k sampling parameter
  • stop_sequences (Optional[List[str]]): Sequences that stop generation
  • system_prompt (Optional[str]): System-level instruction
  • tools: List of Python functions or Tool objects

Returns: str: Generated text response
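Example, continuing with the `client` from above (the model name is illustrative):

```python
reply = client.generate_text(
    "Summarise nucleus sampling in one paragraph.",
    model="gemini-2.0-flash",
    temperature=0.2,
    max_output_tokens=256,
    system_prompt="You are a concise technical writer.",
    stop_sequences=["\n\n"],
)
print(reply)
```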

  • generate_text_async(self, prompt: str, model: Optional[str] = None, **kwargs) -> str

    • Async version of generate_text.
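A sketch of the async variant, assuming it is awaited like a normal coroutine and that **kwargs are forwarded to generate_text:

```python
import asyncio

async def main():
    reply = await client.generate_text_async(
        "Explain top-k sampling in two sentences.",
        temperature=0.3,
    )
    print(reply)

asyncio.run(main())
```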
  • get_embedding(self, text: Union[str, List[str]], model: str = 'text-embedding-004', output_dim: Optional[int] = None) -> Union[List[float], List[List[float]]]

    • Get text embeddings.

Args:
  • text (Union[str, List[str]]): Text to embed
  • model (str): Embedding model to use
  • output_dim (Optional[int]): Desired embedding dimension

Returns: Union[List[float], List[List[float]]]: Embeddings
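Example, continuing with the same `client`:

```python
single = client.get_embedding("hello world")               # List[float]
batch = client.get_embedding(["first doc", "second doc"])  # List[List[float]]
short = client.get_embedding("hello world", output_dim=256)  # request a 256-dimensional embedding
print(len(single), len(batch), len(short))
```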

  • google_search_tool(self) -> 'types.Tool'

    • No docstring available.
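No docstring is available; based on the types.Tool return annotation and generate_text's tools parameter, a hedged usage sketch:

```python
# Assumes the returned Tool can be passed straight into generate_text(tools=...).
search_tool = client.google_search_tool()
answer = client.generate_text(
    "What models are listed in the Gemini API docs?",
    tools=[search_tool],
)
print(answer)
```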
  • gs_uri(self, uri, mime_type=None)

    • No docstring available.
  • live_async(self, prompt: Union[str, List[Union[str, bytes]], NoneType] = None, input_type: str = 'text', duration: Optional[float] = None, model: Optional[str] = None, **kwargs) -> str

    • Live Multimodal API with support for text, audio, and video inputs.

Args:
  • input_type: Type of input ("text", "audio", or "video")
  • prompt: Text prompt or list of text/binary chunks
  • duration: Recording duration for audio/video in seconds
  • model: Optional model name
  • **kwargs: Additional configuration parameters

Returns: str: Generated response text
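A sketch of a text-only call, assuming live_async is awaited as a coroutine (the audio/video input types appear to record locally for `duration` seconds instead of taking a text prompt):

```python
import asyncio

async def main():
    # input_type defaults to "text"; "audio"/"video" additionally use `duration`.
    reply = await client.live_async("Summarise this session so far.", input_type="text")
    print(reply)

asyncio.run(main())
```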

  • local_file(self, filename, mime_type=None)

    • No docstring available.
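Neither gs_uri nor local_file has a docstring; based only on their signatures, a hedged sketch (the file names and bucket path are placeholders, and how the returned objects are consumed is an assumption):

```python
# Presumably both return content parts suitable for multimodal prompts.
pdf_part = client.local_file("report.pdf", mime_type="application/pdf")
img_part = client.gs_uri("gs://my-bucket/photo.png", mime_type="image/png")
```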
  • stream_text(self, prompt: str, model: Optional[str] = None, **kwargs) -> 'Generator[str, None, None]'

    • Stream text generation responses.

Args:
  • prompt (str): The input prompt
  • model (Optional[str]): Model name to use
  • **kwargs: Additional configuration parameters

Yields: str: Chunks of generated text
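Example, continuing with the same `client`:

```python
# Print chunks as they arrive instead of waiting for the full response.
for chunk in client.stream_text("Write a haiku about embeddings."):
    print(chunk, end="", flush=True)
print()
```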

  • structured_output(self, prompt: str, schema: Union[pydantic.main.BaseModel, Dict, type, TypedDict], model: Optional[str] = None, is_list: bool = False) -> Dict
    • Generate structured output according to a schema.

Args:
  • prompt (str): Input prompt
  • schema (Union[BaseModel, Dict, type]): Schema definition (Pydantic model, TypedDict, or raw schema)
  • model (Optional[str]): Model to use
  • is_list (bool): Whether to wrap the schema in a list

Returns: Dict: Structured response matching schema
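Example with a Pydantic schema, continuing with the same `client` (the extracted values depend on the model's output):

```python
from pydantic import BaseModel

class Person(BaseModel):
    name: str
    age: int

result = client.structured_output(
    "Extract the person mentioned: 'Ada Lovelace was 36.'",
    schema=Person,
)
# result is a dict matching the schema, e.g. {"name": "...", "age": ...}

people = client.structured_output(
    "List everyone mentioned: 'Ada Lovelace was 36, Alan Turing was 41.'",
    schema=Person,
    is_list=True,
)
```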

GoogleAIConfig

Configuration class for GoogleAI client initialization. See https://ai.google.dev/gemini-api/docs/models/gemini-v2

  • __copy__(self) -> 'Self'

    • Returns a shallow copy of the model.
  • __deepcopy__(self, memo: 'dict[int, Any] | None' = None) -> 'Self'

    • Returns a deep copy of the model.
  • __delattr__(self, item: 'str') -> 'Any'

    • Implement delattr(self, name).
  • __eq__(self, other: 'Any') -> 'bool'

    • Return self==value.
  • __getattr__(self, item: 'str') -> 'Any'

    • No docstring available.
  • __getstate__(self) -> 'dict[Any, Any]'

    • Helper for pickle.
  • __init__(self, /, **data: 'Any') -> 'None'

    • Create a new model by parsing and validating input data from keyword arguments.

Raises ValidationError (pydantic_core.ValidationError) if the input data cannot be validated to form a valid model.

self is explicitly positional-only to allow self as a field name.

  • __iter__(self) -> 'TupleGenerator'

    • So dict(model) works.
  • __pretty__(self, fmt: 'typing.Callable[[Any], Any]', **kwargs: 'Any') -> 'typing.Generator[Any, None, None]'

  • __replace__(self, **changes: 'Any') -> 'Self'

    • No docstring available.
  • __repr__(self) -> 'str'

    • Return repr(self).
  • __repr_args__(self) -> '_repr.ReprArgs'

    • No docstring available.
  • __repr_name__(self) -> 'str'

    • Name of the instance's class, used in repr.
  • __repr_recursion__(self, object: 'Any') -> 'str'

    • Returns the string representation of a recursive object.
  • __repr_str__(self, join_str: 'str') -> 'str'

    • No docstring available.
  • __rich_repr__(self) -> 'RichReprResult'

  • __setattr__(self, name: 'str', value: 'Any') -> 'None'

    • Implement setattr(self, name, value).
  • __setstate__(self, state: 'dict[Any, Any]') -> 'None'

    • No docstring available.
  • __str__(self) -> 'str'

    • Return str(self).
  • _calculate_keys(self, *args: 'Any', **kwargs: 'Any') -> 'Any'

    • No docstring available.
  • _check_frozen(self, name: 'str', value: 'Any') -> 'None'

    • No docstring available.
  • _copy_and_set_values(self, *args: 'Any', **kwargs: 'Any') -> 'Any'

    • No docstring available.
  • _iter(self, *args: 'Any', **kwargs: 'Any') -> 'Any'

    • No docstring available.
  • copy(self, *, include: 'AbstractSetIntStr | MappingIntStrAny | None' = None, exclude: 'AbstractSetIntStr | MappingIntStrAny | None' = None, update: 'Dict[str, Any] | None' = None, deep: 'bool' = False) -> 'Self'

    • Returns a copy of the model.

Deprecated: this method is now deprecated; use model_copy instead.

If you need include or exclude, use:

```python
data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)
```

Args:
  • include: Optional set or mapping specifying which fields to include in the copied model.
  • exclude: Optional set or mapping specifying which fields to exclude in the copied model.
  • update: Optional dictionary of field-value pairs to override field values in the copied model.
  • deep: If True, the values of fields that are Pydantic models will be deep-copied.

Returns: A copy of the model with included, excluded and updated fields as specified.

  • dict(self, *, include: 'IncEx | None' = None, exclude: 'IncEx | None' = None, by_alias: 'bool' = False, exclude_unset: 'bool' = False, exclude_defaults: 'bool' = False, exclude_none: 'bool' = False) -> 'Dict[str, Any]'

    • No docstring available.
  • json(self, *, include: 'IncEx | None' = None, exclude: 'IncEx | None' = None, by_alias: 'bool' = False, exclude_unset: 'bool' = False, exclude_defaults: 'bool' = False, exclude_none: 'bool' = False, encoder: 'Callable[[Any], Any] | None' = PydanticUndefined, models_as_dict: 'bool' = PydanticUndefined, **dumps_kwargs: 'Any') -> 'str'

    • No docstring available.
  • model_copy(self, *, update: 'Mapping[str, Any] | None' = None, deep: 'bool' = False) -> 'Self'

Returns a copy of the model.

Args:
  • update: Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.
  • deep: Set to True to make a deep copy of the model.

Returns: New model instance.

  • model_dump(self, *, mode: "Literal['json', 'python'] | str" = 'python', include: 'IncEx | None' = None, exclude: 'IncEx | None' = None, context: 'Any | None' = None, by_alias: 'bool' = False, exclude_unset: 'bool' = False, exclude_defaults: 'bool' = False, exclude_none: 'bool' = False, round_trip: 'bool' = False, warnings: "bool | Literal['none', 'warn', 'error']" = True, serialize_as_any: 'bool' = False) -> 'dict[str, Any]'

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Args:
  • mode: The mode in which to_python should run. If mode is 'json', the output will only contain JSON serializable types. If mode is 'python', the output may contain non-JSON-serializable Python objects.
  • include: A set of fields to include in the output.
  • exclude: A set of fields to exclude from the output.
  • context: Additional context to pass to the serializer.
  • by_alias: Whether to use the field's alias in the dictionary key if defined.
  • exclude_unset: Whether to exclude fields that have not been explicitly set.
  • exclude_defaults: Whether to exclude fields that are set to their default value.
  • exclude_none: Whether to exclude fields that have a value of None.
  • round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
  • warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError (pydantic_core.PydanticSerializationError).
  • serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.

Returns: A dictionary representation of the model.

  • model_dump_json(self, *, indent: 'int | None' = None, include: 'IncEx | None' = None, exclude: 'IncEx | None' = None, context: 'Any | None' = None, by_alias: 'bool' = False, exclude_unset: 'bool' = False, exclude_defaults: 'bool' = False, exclude_none: 'bool' = False, round_trip: 'bool' = False, warnings: "bool | Literal['none', 'warn', 'error']" = True, serialize_as_any: 'bool' = False) -> 'str'

Generates a JSON representation of the model using Pydantic's to_json method.

Args:
  • indent: Indentation to use in the JSON output. If None is passed, the output will be compact.
  • include: Field(s) to include in the JSON output.
  • exclude: Field(s) to exclude from the JSON output.
  • context: Additional context to pass to the serializer.
  • by_alias: Whether to serialize using field aliases.
  • exclude_unset: Whether to exclude fields that have not been explicitly set.
  • exclude_defaults: Whether to exclude fields that are set to their default value.
  • exclude_none: Whether to exclude fields that have a value of None.
  • round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
  • warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors, "error" raises a PydanticSerializationError (pydantic_core.PydanticSerializationError).
  • serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.

Returns: A JSON string representation of the model.
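Since GoogleAIConfig is a standard Pydantic model, the usual dump methods apply; a sketch using the `config` from the construction example above (its field names are assumptions):

```python
config_dict = config.model_dump(exclude_none=True)  # dict[str, Any]
config_json = config.model_dump_json(indent=2)      # JSON string
print(config_json)
```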

  • model_post_init(self, _BaseModel__context: 'Any') -> 'None'
    • Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.