Are there any best practices for using Kimi K2 Thinking as the reasoning agent within a deep agent?
I’ve created a deep agent with create_deep_agent(), using Kimi K2 Thinking hosted on AWS Bedrock (via a ChatBedrockConverse model) as the primary model. While working through its todo list, the agent will more often than not exit the ReAct loop prematurely: the response comes back with stopReason = end_turn even though it contains a tool call.
My guess is that something specific about the Kimi K2 Thinking output format (e.g. how its reasoning content or tool calls are emitted) is triggering the premature end_turn, but I wanted to check whether there's a known fix before digging deeper.