A couple of weeks ago, a tester of the mobile app I'm building sent us a bug report, and the bug it described was awful: data loss caused by a broken synchronization process. Synchronization is always a tricky thing to do, but I was sure everything worked fine, because all the code, especially the synchronization code, is heavily covered by tests (unit and e2e). And all of them were "green".
I tried to reproduce it as described in the report, and couldn't. A bug you can't reproduce is even worse than one you can.
But here is what was more interesting: my teammates in the USA were able to reproduce it with a 100% success rate. We really did demonstrate the app's behavior to each other: they could reproduce it, and I saw it with my own eyes, while I couldn't reproduce it at all, even in the cases that are hardest for the synchronization algorithm.
Judging by the symptoms, the server had been sending identical responses to separate POST requests, even though it should generate a unique identifier for each one. We checked the API code: all fine. And for me the API never sent duplicate responses, so... Yes, the responses (or requests) were being cached! And cached not by the server (nobody would configure a server to cache POST requests, I suppose), but somewhere inside the US network between the users and the server. That's why it wasn't reproducible on my side: I was testing from Russia (where I live).
The same "unique" id for different records is a very dangerous thing and leads to very bad bugs and data loss, so we decided to add a random hash to every request, and that solved the issue.
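The fix boils down to making every request's signature unique, so no two requests can ever "match" in an intermediary cache. A minimal Python sketch of the idea (the `_nonce` parameter name and the `api.example.com` URL are made up for illustration; our actual app is not written in Python):

```python
import uuid
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

def with_cache_buster(url: str) -> str:
    """Append a random, single-use query parameter so that no two
    requests share the same URL and a misbehaving intermediary cache
    can never serve a stored response for a "matching" request."""
    parts = urlparse(url)
    query = parse_qsl(parts.query)
    query.append(("_nonce", uuid.uuid4().hex))  # hypothetical parameter name
    return urlunparse(parts._replace(query=urlencode(query)))

# Every call produces a distinct URL, even for the "same" logical request:
a = with_cache_buster("https://api.example.com/records?user=1")
b = with_cache_buster("https://api.example.com/records?user=1")
assert a != b
```

The server simply ignores the extra parameter; its only job is to defeat signature-based caching along the way.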
It wasn't difficult to fix, but it was very difficult to find, because I couldn't imagine that a POST request could be cached: the responses were being sent back without ever touching the server, just replays of responses to previous requests with the same signature.
What should we make of this? Should we (all programmers) add cache-preventing garbage to every POST/DELETE request? Of course, network providers shouldn't cache POST requests, but we can't force them, and we can't wait until they fix it.
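A complementary, purely declarative measure (not something our fix relied on, just a sketch): have the API explicitly mark its responses as uncacheable, so that any spec-compliant cache must not store them. Whether a misbehaving middlebox honors these headers is exactly the open question, but they cost nothing to send:

```python
def anti_cache_headers() -> dict:
    """Response headers that forbid any compliant cache (browser,
    proxy, or ISP middlebox) from storing the response. A broken
    cache may still ignore them, which is why the client-side
    random parameter remains the real safety net."""
    return {
        "Cache-Control": "no-store, no-cache, must-revalidate",
        "Pragma": "no-cache",  # for legacy HTTP/1.0 caches
        "Expires": "0",        # belt and braces for old intermediaries
    }

headers = anti_cache_headers()
assert "no-store" in headers["Cache-Control"]
```

Sending these on every API response at least puts the misbehaving cache clearly in the wrong.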
I'm still not sure, so please share your opinions.