Your API is deployed, and you are starting to make it available to consumers. That is when the risk is greatest: if you become a victim of your own success, for example if your API receives too many calls, response times may start to degrade, and users may not stick around long enough for you to fix things.
The risk is much lower, of course, if your API is fully hosted in APISpark (data-driven API), because we have the ability to dynamically provision the bandwidth and resources that are needed. But if you manage an API hosted somewhere else, or if your API has a dependency on an external data source (a SQL database, a Google Spreadsheet…), then too much load can mean errors, too.
To give you control over how often each user type can call your API, APISpark includes a rate limitation feature. Its usage is fairly simple: for each user type, you define how many calls per time unit each user is allowed to make on the API.
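To make the "calls per time unit, per user" idea concrete, here is a minimal sketch of a fixed-window rate limiter. This is a generic illustration only, not APISpark's internal implementation; the class and method names are invented:

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `max_calls` per `window_seconds` for each user."""

    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window_seconds = window_seconds
        # user -> (calls made in current window, window start time)
        self.counters = defaultdict(lambda: (0, 0.0))

    def allow(self, user, now=None):
        now = time.monotonic() if now is None else now
        count, window_start = self.counters[user]
        if now - window_start >= self.window_seconds:
            # The window has elapsed: start a fresh one.
            count, window_start = 0, now
        if count >= self.max_calls:
            return False  # Over quota: the API would answer with HTTP 429.
        self.counters[user] = (count + 1, window_start)
        return True
```

With a limit of 3 calls per 60 seconds, the first three calls from a given user are allowed, the fourth is rejected, and the counter resets once the window rolls over.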
Head over to Settings > Rate limitation, as shown in the following screenshot, click on the “Add a rate limit” link, give your rate limit a name, choose which group of users is affected, and define the number of calls allowed per time unit. Then, if you want, you can add other rate limits for other groups.
Why would you want to set up different limits for different groups of users?
There are as many answers as there are use cases or business models, but typical reasons include:
- You have premium customers, who pay more than basic / free users, and you want their calls to take precedence.
- You want to serve business partners who have a legitimate reason to make lots of calls, whereas individual users don’t.
What happens when a user reaches the limit?
Calls from that user will receive responses with an HTTP status code of 429 and the message “The server is refusing to service the request because the user has sent too many requests in a given amount of time”, indicating that the user has reached their allowed quota of calls and should retry later. As long as the developers calling your API implement retries in their code, there is nothing to worry about: the next attempt will likely succeed, without putting undue strain on your infrastructure.
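On the client side, handling a 429 response boils down to waiting and retrying. Here is a minimal, transport-agnostic sketch of such a retry loop; the function name and the `(status, retry_after, body)` tuple shape are assumptions for illustration, and with a real HTTP library you would read the status code and the `Retry-After` header from the response object instead:

```python
import time

def call_with_retries(do_request, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call `do_request()` and retry on HTTP 429 with exponential backoff.

    `do_request` is assumed to return a (status_code, retry_after, body)
    tuple, where `retry_after` is the server's Retry-After value in
    seconds, or None if the header was absent.
    """
    for attempt in range(max_attempts):
        status, retry_after, body = do_request()
        if status != 429:
            return status, body
        # Honor the server's Retry-After hint when present,
        # otherwise back off exponentially (1s, 2s, 4s, ...).
        delay = retry_after if retry_after is not None else base_delay * (2 ** attempt)
        sleep(delay)
    return status, body
```

A client using this pattern rides out a temporary quota rejection transparently: the first successful response after the backoff is returned to the caller.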