Traces getting dropped

We are using LangSmith for AI observability in production, and we have been seeing a large number of traces dropped with the error “fetch failed”. Are there any recommended configurations we should try to make trace submission more robust?
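For context, this is roughly how we wire up tracing (a simplified sketch for this post, not our exact production code; the option names and environment variables reflect our understanding of the SDK and may differ slightly from what we actually have deployed):

```ts
// Simplified sketch of our tracing setup -- illustrative only, option names
// may not match our production code exactly.
import { Client } from "langsmith";
import { traceable } from "langsmith/traceable";

// Tracing is driven by environment variables, e.g.:
//   LANGSMITH_TRACING=true
//   LANGSMITH_API_KEY=<our key>
//   LANGSMITH_ENDPOINT=https://api.smith.langchain.com

// We construct an explicit client so we can pass credentials ourselves.
const client = new Client({
  apiKey: process.env.LANGSMITH_API_KEY,
  apiUrl: process.env.LANGSMITH_ENDPOINT,
});

// A traced handler; when the background upload fails we see "fetch failed"
// in our logs and the corresponding traces never appear in LangSmith.
const handleRequest = traceable(
  async (userInput: string): Promise<string> => {
    // ... call our model / chain here ...
    return `echo: ${userInput}`;
  },
  { name: "handleRequest", client }
);

async function main() {
  await handleRequest("hello");
}

void main();
```

If there are client options (timeouts, retries, batching) or environment settings we should be tuning here, pointers would be much appreciated.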

We are on a paid plan, and this has become a critical issue for us: without reliable traces we cannot calculate usage costs or track user errors.

Unfortunately, LangSmith doesn’t seem to offer an easy way to reach out for support queries like this.