I am building applications on top of the Gemini API and need help understanding the `FunctionDeclaration` object. Specifically, I am curious about the design choice to accept only a single `Schema` object for the `parameters` attribute instead of a `list[Schema]`. This seems limiting and counterintuitive, given that many real-world functions take multiple parameters.

Below is the definition of the `FunctionDeclaration` object from the API:
```python
class FunctionDeclaration(proto.Message):
    r"""Structured representation of a function declaration as defined by
    the `OpenAPI 3.03 specification <https://spec.openapis.org/oas/v3.0.3>`__.
    This representation includes the function's name and parameters and serves
    as a ``Tool`` that the model can utilize and the client can execute.

    Attributes:
        name (str): The function's name, constrained to alphanumeric characters,
            underscores, and dashes, with a maximum length of 63.
        description (str): A brief description of the function.
        parameters (google.ai.generativelanguage_v1beta.types.Schema):
            Optional. Defines the parameters for this function as an OpenAPI
            3.03 Parameter Object. This field is case-sensitive and reflects
            a single Schema object defining the parameter's type.
    """

    name: str = proto.Field(proto.STRING, number=1)
    description: str = proto.Field(proto.STRING, number=2)
    parameters: "Schema" = proto.Field(
        proto.MESSAGE, number=3, optional=True, message="Schema"
    )
```
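From what I can tell, the single `Schema` is meant to be an `OBJECT`-typed schema whose `properties` map holds one sub-schema per parameter, mirroring how OpenAPI describes a JSON object. Here is a minimal sketch of what I believe a multi-parameter declaration looks like (the function name and fields are made up for illustration):

```python
from google.ai.generativelanguage_v1beta.types.content import (
    FunctionDeclaration,
    Schema,
    Type,
)

# Hypothetical two-parameter function, expressed as one OBJECT-typed Schema
# whose `properties` map carries one sub-Schema per parameter.
get_weather = FunctionDeclaration(
    name="get_weather",
    description="Look up the current weather for a city.",
    parameters=Schema(
        type=Type.OBJECT,
        properties={
            "city": Schema(type=Type.STRING, description="City name."),
            "unit": Schema(type=Type.STRING, description="Temperature unit."),
        },
        required=["city"],
    ),
)
```

If that reading is correct, the single-`Schema` design is less limiting than it first appears, but I would appreciate confirmation.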
I am currently writing an adapter method to map my custom `Function` object to Google's `FunctionDeclaration`. My custom `Function` object supports multiple parameters as a `list[Parameter]`. The relevant imports are shown below:

```python
from google.ai.generativelanguage_v1beta.types.content import (
    FunctionDeclaration as GenAIFunctionDeclaration,
)
from google.ai.generativelanguage_v1beta.types.content import Schema, Type
```
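For context, here is a simplified version of the adapter I am writing. `Parameter` and `Function` are reduced to minimal stand-ins here; my real classes carry more metadata:

```python
from dataclasses import dataclass, field

from google.ai.generativelanguage_v1beta.types.content import (
    FunctionDeclaration as GenAIFunctionDeclaration,
)
from google.ai.generativelanguage_v1beta.types.content import Schema, Type


@dataclass
class Parameter:
    """Simplified stand-in for my custom parameter class."""
    name: str
    type: Type
    description: str = ""
    required: bool = True


@dataclass
class Function:
    """Simplified stand-in for my custom function class."""
    name: str
    description: str
    parameters: list[Parameter] = field(default_factory=list)


def to_genai_declaration(fn: Function) -> GenAIFunctionDeclaration:
    """Fold a list[Parameter] into the single OBJECT Schema the API expects."""
    return GenAIFunctionDeclaration(
        name=fn.name,
        description=fn.description,
        parameters=Schema(
            type=Type.OBJECT,
            properties={
                p.name: Schema(type=p.type, description=p.description)
                for p in fn.parameters
            },
            required=[p.name for p in fn.parameters if p.required],
        ),
    )
```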
Additionally, I am curious about Google's decision to define near-identical objects (e.g., `FunctionDeclaration`, `Part`, `Content`) across several modules:

- `vertexai.generative_models`
- `google.ai`
- `google.generativeai`

These objects appear to serve the same purpose but are not interchangeable between modules. This fragmentation causes significant overhead, because I must rewrite portions of my code for different API calls depending on whether I am working with Vertex AI or AI Studio.
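To make the duplication concrete: as of the SDK versions I am using, both imports below provide a class named `FunctionDeclaration`, yet instances of one cannot be passed where the other is expected.

```python
# Same concept, two incompatible types depending on the target platform.
from vertexai.generative_models import (
    FunctionDeclaration as VertexFunctionDeclaration,  # Vertex AI SDK
)
from google.ai.generativelanguage_v1beta.types.content import (
    FunctionDeclaration as GenAIFunctionDeclaration,  # AI Studio / generativelanguage
)

# My current workaround is a thin adapter layer: my own Function class is the
# source of truth, converted to whichever platform type a call site needs.
```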
Could you provide insights into these design decisions, or any guidance on better handling these inconsistencies when developing across multiple Google AI APIs?