When will response schemas be supported by ChatGoogleGenerativeAI in LangGraph?

I feel that LangGraph should support a response_schema argument for responses when using the invoke method on the ChatGoogleGenerativeAI class.

What I mean is, every time I use ChatGoogleGenerativeAI:
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", google_api_key=api_key)

it doesn’t give me the flexibility to pass Pydantic schema classes. For example, I need the LLM’s response to a questionnaire-creation prompt to follow this schema:

from typing import Dict, List, Tuple

from pydantic import BaseModel


class MultiChoiceQuestions(BaseModel):
    """Structured format for multiple-choice questions.

    questions: a dict with each key as the question text,
    and the value as a pair: (list_of_options, correct_option)

    Example:
    {
        "What is 2+2?": [["2", "3", "4", "5"], "4"]
    }
    """

    subject_title: str
    description: str
    questions: Dict[str, Tuple[List[str], str]]

I’d like to be able to just write

llm.invoke("hi, generate 50 random multiple-choice questions", response_schema=MultiChoiceQuestions)

but this isn’t supported, and it’s really annoying.
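For what it’s worth, a possible workaround today is the with_structured_output method that LangChain chat models (including ChatGoogleGenerativeAI from langchain-google-genai) expose, which binds a Pydantic schema to the model so invoke returns a validated instance. A minimal sketch, with the actual LLM call left commented out since it needs an API key:

```python
from typing import Dict, List, Tuple

from pydantic import BaseModel


class MultiChoiceQuestions(BaseModel):
    """Questionnaire schema: question text -> (options, correct_option)."""

    subject_title: str
    description: str
    questions: Dict[str, Tuple[List[str], str]]


# With a valid API key and langchain-google-genai installed, the model
# can be bound to the schema so that .invoke() returns a parsed
# MultiChoiceQuestions instance instead of a raw message:
#
#   from langchain_google_genai import ChatGoogleGenerativeAI
#   llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", google_api_key=api_key)
#   structured_llm = llm.with_structured_output(MultiChoiceQuestions)
#   result = structured_llm.invoke("Generate 5 random multiple-choice questions")

# Offline check that the schema itself validates the intended shape
# (Pydantic coerces the inner list into the declared tuple):
sample = MultiChoiceQuestions(
    subject_title="Arithmetic",
    description="Basic math quiz",
    questions={"What is 2+2?": [["2", "3", "4", "5"], "4"]},
)
print(sample.questions["What is 2+2?"][1])
```

This sidesteps passing response_schema to invoke directly, at the cost of one extra binding step.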

Hi @Henry_Bassey, welcome to the forum.

Could you please raise this as an issue in the LangGraph GitHub repository?

Thanks