I have a number of servers running once-a-day hits against AfterShip. Unfortunately, my system is deployed such that, by default, the jobs all run at the same time. In the short term I'm working to stagger the requests, but that's a manual step in a validated deploy process, so it incurs significant overhead. Having the system retry failed requests is also an option we're looking into, but that would require a new release cycle AND would probably generate a bunch of extra AfterShip traffic that you don't want.
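For context, the stagger we're adding manually amounts to per-server jitter before the daily sweep. A minimal sketch of the idea (the `max_jitter_s` window and `job` hook are placeholders, not our actual deploy config):

```python
import random
import time

def jitter_delay(max_jitter_s=3600):
    """Pick a random offset so identically-scheduled servers spread
    their daily sweeps across the window instead of all firing at
    the same instant."""
    return random.uniform(0, max_jitter_s)

def run_daily_sweep(job, max_jitter_s=3600):
    """Sleep a random amount, then run the sweep."""
    time.sleep(jitter_delay(max_jitter_s))
    job()
```

The catch is that baking this in still means a new validated release, which is the overhead we're trying to avoid.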
It would be great if there were an alternate window for measuring traffic when limiting requests: at 600/minute rather than 10/second, the average rate is the same, but the longer window would absorb our burst, and we'd be set for quite a while. I know there are performance and scalability concerns on the hosting side, but we're happy to schedule our default run for low-traffic times.
Alternatively, if we could submit requests as low-priority, we'd be happy to let them wait until traffic died down. We're doing periodic sweeps rather than user-driven checks, so nothing is time-sensitive on the order of seconds (or even minutes).