bisconti.cloud
contact: g.dev/julien
What if you just want to block the outliers?
(99.9% of users are well behaved)
Reduce the bandwidth overhead of rate limiting.
Data example
{
  "counters": [
    { "header": { "user": "b64encodedsecret", "service": "activity" }, "count": 3 },
    { "header": { "user": "anotheruser", "service": "activity" }, "count": 8 },
    { "header": { "user": "scraper" }, "count": 42 }
  ],
  "window_start": "2009-11-10T23:00:00Z",
  "window_end": "2009-11-10T23:01:00Z"
}
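On the aggregator side, counters arriving from several proxies with the same header get summed. A minimal sketch of that merge step, assuming the payload shape above (the function name `merge_counters` is my own, not from the talk):

```python
def merge_counters(batches):
    """Aggregate per-(user, service) counts from several proxy batches.

    Each batch follows the payload shape above: a dict with a "counters"
    list of {"header": {...}, "count": N} entries.
    """
    totals = {}
    for batch in batches:
        for counter in batch["counters"]:
            # counters with identical headers (across proxies) are summed
            key = (counter["header"].get("user"),
                   counter["header"].get("service"))
            totals[key] = totals.get(key, 0) + counter["count"]
    return totals
```

With two batches both reporting `("scraper", None)`, the totals dict holds the sum of both counts for that key.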
# 8-second sliding window
4 2 6 8 3 0 0 9 1 1 0 2 0 4 5 0  # requests per second
|-------------|                    = 32 (t0)
  |-------------|                  = 29 (t1)
    |-------------|                = 28 (t2)
Hashing function to determine the aggregator
# { "user": "b64encodedsecret", "service": "activity" },
# hash = "isHfNLCKDW4832bMJkosRA=="
# agg = maglev(hash) # consistent hashing
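Maglev builds a fixed-size lookup table where each aggregator fills slots following its own permutation, so every aggregator gets a near-equal share and most keys keep their slot when the set changes. A compact sketch of the table construction and lookup, assuming a small prime table size (`maglev_table` and `pick_aggregator` are illustrative names; production tables are much larger, e.g. 65537 slots):

```python
import hashlib

def _h(value, salt):
    """Stable 64-bit hash of value, salted so we get two independent hashes."""
    digest = hashlib.md5(f"{salt}:{value}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def maglev_table(aggregators, m=13):
    """Build a Maglev lookup table of size m (m prime, m >> len(aggregators))."""
    offsets = {a: _h(a, "offset") % m for a in aggregators}
    skips = {a: _h(a, "skip") % (m - 1) + 1 for a in aggregators}
    table = [None] * m
    next_j = {a: 0 for a in aggregators}   # position in each permutation
    filled = 0
    while filled < m:
        for a in aggregators:
            # advance a's permutation until it hits an empty slot
            while True:
                slot = (offsets[a] + next_j[a] * skips[a]) % m
                next_j[a] += 1
                if table[slot] is None:
                    table[slot] = a
                    filled += 1
                    break
            if filled == m:
                break
    return table

def pick_aggregator(table, key):
    """Route a counter header (e.g. its hash string) to an aggregator."""
    return table[_h(key, "key") % len(table)]
```

The round-robin fill is what gives the even split: each aggregator claims one slot per round, so no aggregator can grab more than one slot more than any other.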
client -> proxy -> backend service
            |
            ├──> aggregator01
            ├──> aggregator02
            ├──> ...
            └──> aggregatorN
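Tying the pieces together: the proxy counts requests locally and periodically flushes, sending each counter to the aggregator its header hashes to, so every aggregator sees the full count for the keys it owns. A sketch under assumptions of mine (class and method names are illustrative; simple modulo hashing stands in for maglev; `send` is an injected transport callback):

```python
import hashlib

class ProxyCounter:
    """Per-proxy local counters, flushed in batches to hash-picked aggregators."""

    def __init__(self, aggregators, send):
        self.aggregators = aggregators
        self.send = send          # send(aggregator, payload): transport, assumed
        self.counts = {}

    def record(self, user, service):
        """Count one request locally; no network traffic on the hot path."""
        key = (user, service)
        self.counts[key] = self.counts.get(key, 0) + 1

    def flush(self):
        """Group counters by target aggregator and send one payload to each."""
        by_agg = {}
        for (user, service), n in self.counts.items():
            # modulo hashing as a stand-in for the maglev lookup above
            h = int(hashlib.md5(f"{user}:{service}".encode()).hexdigest(), 16)
            agg = self.aggregators[h % len(self.aggregators)]
            by_agg.setdefault(agg, []).append(
                {"header": {"user": user, "service": service}, "count": n})
        for agg, counters in by_agg.items():
            self.send(agg, {"counters": counters})
        self.counts.clear()
```

Batching on flush is where the bandwidth saving comes from: one payload per aggregator per window instead of one message per request.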
when things go wrong
Adventures in Rate Limiting: Spotify’s Journey Writing a Scalable Envoy Rate Limiter
Oliver Soell & Peter Marsh
and I'm sorry 🙏
If you had to maintain my code,
I hope you learned more by maintaining it
than I did by writing it
Slides made with Reveal.js and hugo-reveal
Software Engineer / SRE
slides: bisconti.cloud
contact: g.dev/julien