Fediverse traffic is pretty bursty, and sometimes a remote server will have a large backlog of Activities to deliver to yours, each of which arrives as an HTTP POST. This can hammer your instance and overwhelm the backend’s ability to keep up. Nginx has a rate-limiting module which can accept those POSTs at full speed and trickle them through to your backend at whatever rate you specify.
For example, PieFed has a backend which listens on port 5000. Nginx listens on port 443 for POSTs from outside and sends them through to port 5000:
upstream app_server {
    server 127.0.0.1:5000 fail_timeout=0;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name piefed.social www.piefed.social;
    root /var/www/whatever;

    location / {
        # Proxy all requests to Gunicorn
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass http://app_server;
        ssi off;
    }
}

To this basic config we need to add rate limiting, using the ‘limit_req_zone’ directive. Google that for further details.
limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;

This reserves up to 100 MB of shared memory for tracking per-client state and limits POSTs to 10 per second, per IP address. Adjust as needed. If the sender is using multiple IP addresses the rate limit will not be as effective. Put this directive outside your server {} block.
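In practice, “outside your server {} block” means at the http {} level. Here is a minimal sketch of where the directive lands, assuming a Debian-style layout where /etc/nginx/nginx.conf holds the http {} block (your paths and file layout may differ):

# /etc/nginx/nginx.conf (or a file included from the http {} block)
http {
    # "one" is an arbitrary zone name; limit_req refers to it below.
    # $binary_remote_addr keys the limit on the client IP, and the
    # 100 MB shared-memory zone holds the per-IP counters.
    limit_req_zone $binary_remote_addr zone=one:100m rate=10r/s;

    # ... your existing http-level config, including the server {}
    # block shown above ...
}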
Then after our first location / {} block, add a second one that is a copy of the first except with one additional line (and change it to apply to location /inbox or whatever the inbox URL is for your instance):
location /inbox {
    limit_req zone=one burst=300;
    # limit_req_dry_run on;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://app_server;
    ssi off;
}

300 is the maximum number of POSTs nginx will hold in the queue; anything beyond that is rejected. You can use limit_req_dry_run to test the rate limiting without actually doing any limiting – watch the nginx error log for messages while the dry run is on.
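One optional refinement, which wasn’t part of my original setup but may be worth knowing about: when the queue overflows, nginx rejects the request with a 503 by default. The limit_req_status directive can return 429 (“Too Many Requests”) instead, which is the more specific status for rate limiting, and limit_req_log_level controls how noisily rejections are logged. A sketch:

location /inbox {
    limit_req zone=one burst=300;
    limit_req_status 429;        # reject with 429 instead of the default 503
    limit_req_log_level warn;    # log rejections at "warn" rather than "error"
    # ... same proxy_set_header / proxy_pass lines as in the block above ...
}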
It’s been a while since I set this up, so please let me know if I missed anything crucial or said something misleading.