ChatGoogleGenerativeAI(
    model=<model>,
    project=<gcp-project>,
    location=<gcp-location>,
    temperature=0.0,
    max_retries=2,
    include_thoughts=False,
    thinking_budget=0,
    vertexai=True,
)
hi @gs-awesome
If what you actually want is “force the model to not produce any tool call on this invocation”, use tool_choice="none". That maps to Gemini’s FunctionCallingConfigMode.NONE.
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    project="my-gcp-project",
    location="us-central1",
    vertexai=True,
    temperature=0.0,
)
# Bind the tools so the schema is there if you want it back later,
# but force this call to ignore them:
llm_no_tools = llm.bind_tools([my_tool_1, my_tool_2], tool_choice="none")
response = llm_no_tools.invoke("Summarize this document ...")
# response.tool_calls == []
Or:
llm_with_tools = llm.bind_tools([my_tool_1, my_tool_2])
response = llm_with_tools.invoke(
    "Summarize this document ...",
    tool_choice="none",
)
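For intuition, `tool_choice` is translated into Gemini's function-calling config mode. A rough sketch of that mapping (the helper name is hypothetical, not an actual function from the library — the real translation lives inside `langchain_google_genai`):

```python
# Hypothetical sketch: how a LangChain-style tool_choice value could map to
# Gemini's FunctionCallingConfig mode strings ("NONE", "ANY", "AUTO").
def tool_choice_to_mode(tool_choice):
    if tool_choice in ("none", False):
        return "NONE"   # the model must not emit any function call
    if tool_choice in ("any", "required", True):
        return "ANY"    # the model must emit a function call
    return "AUTO"       # the model decides (default behavior)

print(tool_choice_to_mode("none"))  # -> NONE
```

So binding with `tool_choice="none"` keeps the tool schemas attached but pins the mode to `NONE` for that call.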
Well, the google-genai SDK has an option for disabling automatic function calling. This is the feature I was looking for.
If you use LangChain, it’s safer to do that in the LangChain abstraction layer.
@pawel-twardziak did you mean like this?
from google.genai.types import AutomaticFunctionCallingConfig
from langchain_google_genai import ChatGoogleGenerativeAI

class _NoAFCModel(ChatGoogleGenerativeAI):
    def _prepare_request(self, *args, **kwargs):
        request = super()._prepare_request(*args, **kwargs)
        request["config"].automatic_function_calling = AutomaticFunctionCallingConfig(disable=True)
        return request
At least this seems to work. After some investigation, I realized that langchain_google_genai doesn’t expose an option to disable automatic function calling (at least not directly).
try this @gs-awesome
from google.genai.types import AutomaticFunctionCallingConfig
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    project="my-gcp-project",
    location="us-central1",
    vertexai=True,
)

llm_no_afc = llm.bind(
    automatic_function_calling=AutomaticFunctionCallingConfig(disable=True)
)

# Or per-invocation (same net effect):
llm.invoke(
    "...",
    automatic_function_calling=AutomaticFunctionCallingConfig(disable=True),
)
@pawel-twardziak thanks a lot.
And how do I do this for structured output?
from pydantic import BaseModel

class OutputSchema(BaseModel):
    person: str

structured_llm = llm.with_structured_output(OutputSchema, method="json_schema", include_raw=True)
I think with method="json_schema" there are no config.tools at all.
So for method="json_schema", disabling AFC has no behavioral effect, IMHO.
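For reference, with include_raw=True the structured-output runnable returns a dict rather than the parsed object directly, following LangChain's with_structured_output convention. A small sketch of consuming that shape (simulated here with a plain dict instead of a real model response):

```python
# With include_raw=True, with_structured_output returns a dict of the form
# {"raw": <AIMessage>, "parsed": <OutputSchema or None>, "parsing_error": <Exception or None>}.
def unwrap(result):
    # Surface parsing failures instead of silently returning None.
    if result["parsing_error"] is not None:
        raise result["parsing_error"]
    return result["parsed"]

result = {"raw": "<AIMessage>", "parsed": {"person": "Alice"}, "parsing_error": None}
print(unwrap(result))  # -> {'person': 'Alice'}
```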
Or:
from google.genai.types import AutomaticFunctionCallingConfig
from langchain_google_genai import ChatGoogleGenerativeAI
from pydantic import BaseModel

class OutputSchema(BaseModel):
    person: str

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-flash",
    project="my-gcp-project",
    location="us-central1",
    vertexai=True,
)

llm_json = llm.bind(
    response_mime_type="application/json",
    response_json_schema=OutputSchema.model_json_schema(),
    automatic_function_calling=AutomaticFunctionCallingConfig(disable=True),
)

raw = llm_json.invoke("Tell me about Alice.")
parsed = OutputSchema.model_validate_json(raw.content)
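One caveat with the manual route: `raw.content` is usually a JSON string, but LangChain message content can also be a list of content parts. A small defensive helper (hypothetical, not part of either library) might look like:

```python
import json

def extract_text(content):
    # AIMessage.content may be a plain string or a list of content parts;
    # join any text parts into a single string before JSON-parsing.
    if isinstance(content, str):
        return content
    return "".join(
        part.get("text", "") if isinstance(part, dict) else str(part)
        for part in content
    )

data = json.loads(extract_text('{"person": "Alice"}'))
print(data["person"])  # -> Alice
```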
Either way, your workaround does the job (it’s valid per the source code).
@pawel-twardziak your solution works too, and method="json_schema" does exist. I have mostly been using with_structured_output because it is much more reliable for getting structured data from the model, specifically in multi-turn conversations.
Alright @gs-awesome
Have you managed to sort out your problem? Anything else I can help with?