FileManagementToolkit's file write doesn't work with Claude Sonnet 3.7 & 4 for moderately large files

Hi,
I am trying to use Claude Sonnet 4 and am unable to get it to write a file that is roughly 4 KB. The model repeatedly invokes the file_write function call without the text parameter, and eventually the Python script crashes with a graph recursion limit error. The issue seems to be that if the file content exceeds a certain number of tokens, the model starts making incorrect tool calls (it appears to pass no text parameter at all).

Is there a known solution to this problem apart from increasing the max_tokens setting?

This happens because Claude models will sometimes drop or truncate parameters in a tool call if the payload exceeds their effective output length. When the text parameter is missing, the agent re-invokes file_write, and the repeated calls continue until the process hits its recursion or call limit.

To fix this, you have a few options:

  1. Chunk the content: Pre-split your 4 KB+ text into smaller segments and call file_write multiple times, appending to the file each time. This avoids hitting the model’s output length limit.

  2. Explicit prompt instructions: Tell Claude: “If file content is large, write it in smaller chunks across multiple calls.”

  3. Increase max_tokens: This gives the model more space to include long parameters, but it’s less reliable than chunking.

  4. Custom file-write tool: Wrap your file write logic in a tool that accepts a list of strings or streams data, so the model only passes smaller payloads at a time.

Chunking is generally the safest approach, because it works regardless of model output size and avoids recursion crashes entirely.
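To make the chunking idea concrete, here is a minimal Python sketch. The helper names (`chunk_text`, `write_in_chunks`) and the chunk size are hypothetical, not part of the toolkit; the point is only that each write carries a small payload, with the first chunk creating the file and the rest appending:

```python
def chunk_text(text: str, chunk_size: int = 1000) -> list[str]:
    """Split text into fixed-size segments."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def write_in_chunks(path: str, content: str, chunk_size: int = 1000) -> None:
    """Write content to path in several small appends instead of one big write."""
    for i, chunk in enumerate(chunk_text(content, chunk_size)):
        # First chunk creates/truncates the file; subsequent chunks append.
        mode = "w" if i == 0 else "a"
        with open(path, mode, encoding="utf-8") as f:
            f.write(chunk)
```

In an agent setting, the equivalent is having the model emit several small file_write calls with an append flag rather than one oversized call.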

Thanks for your reply. Below are some of my observations with respect to your suggestions:

  1. Chunk the content: This is the LLM generating code for a file that ends up being bigger than its max tokens, so the chunking is under its control, not mine.
  2. Explicit prompt instructions: Thanks for this suggestion! It does seem to take effect, in the sense that the LLM writes the file in chunks, but it then complains about formatting issues with the chunked content and eventually reverts all the writes, causing a failure. I will try some other prompts to see if I can make this work, but I don’t know how robust this is. As instructions grow, I wonder if the model will just ignore this one.
  3. Increase max_tokens: This is the only way it works predictably. But my worry is that I will eventually hit some limit beyond which I can’t increase max_tokens further. Hence it would be really good if I could somehow make chunking work.
  4. Custom file-write tool: I did implement a custom tool that supports chunked writes, but the LLM still insists on writing the entire file as one chunk and runs into the same issue. The explicit prompt seems to yield much better results than exposing a custom file-write tool.

I agree that chunking seems the most obvious way to handle this, but I am unable to get it working predictably.

Thanks