Rate Limiting

Working with API limits

SmartAPIs employ rate limits to keep our features and services stable, reliable, and predictable for all users by safeguarding our APIs from bursts of high-volume incoming traffic. API users who send a high volume of requests in quick succession will receive a 429 error code, indicating that the API limit has been reached.

SmartRecruiters has several limiters, including a rate limiter and a concurrency limiter.

Limiters

The rate limiter limits the number of requests to an endpoint within a given second. For most endpoints, SmartAPIs allow up to 10 requests per second.

For the following endpoints, the rate limit is 2 requests per second:

POST /jobs/{id}/publication
DELETE /jobs/{id}/publication
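One way to stay under a per-second limit is to throttle calls on the client side before they leave your application. The sketch below is illustrative (the `RateLimiter` class and its names are not part of SmartAPIs) and simply enforces a minimum interval between calls:

```python
import time

class RateLimiter:
    """Client-side throttle: allows at most `rate` calls per second
    by enforcing a minimum interval between consecutive calls."""

    def __init__(self, rate):
        self.min_interval = 1.0 / rate  # seconds between calls
        self.last_call = 0.0

    def acquire(self):
        """Block until the next call is allowed, then record it."""
        now = time.monotonic()
        wait = self.min_interval - (now - self.last_call)
        if wait > 0:
            time.sleep(wait)
        self.last_call = time.monotonic()

# The publication endpoints above allow only 2 requests per second.
publication_limiter = RateLimiter(rate=2)
```

Calling `publication_limiter.acquire()` before each publication request spaces your calls at least 0.5 s apart, keeping you under the documented limit.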

The concurrency limiter limits the number of requests that are active at any given time. For most endpoints, SmartAPIs allow up to 8 concurrent requests.

For the following endpoint, the concurrency limit is 1 concurrent request:

GET /candidates
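In a multi-threaded client, a semaphore is one straightforward way to cap how many requests are in flight at once. This is a sketch, not an official SDK pattern; the names and the `do_request` callable are placeholders:

```python
import threading

# GET /candidates allows only 1 request in flight; most endpoints allow 8.
candidates_slots = threading.BoundedSemaphore(1)
default_slots = threading.BoundedSemaphore(8)

def fetch(semaphore, do_request):
    """Run `do_request` while holding a concurrency slot, so no more
    than the semaphore's capacity of requests are active at once."""
    with semaphore:
        return do_request()
```

Worker threads that share `candidates_slots` will serialize their calls to GET /candidates, while calls through `default_slots` can overlap up to 8 at a time.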

📘

Limiters and API users

An API user is defined by the credential it uses. As a result, each distinct API credential is considered a separate API user and must follow the rate limiting policy when making requests to the APIs.

Active Mitigations

SmartAPIs always return rate limiting information in the response headers of every request, regardless of whether a rate limiting policy was violated.

X-RateLimit-Limit: the request limit for the time period
X-RateLimit-Remaining: the number of requests remaining in the current time period
X-RateLimit-Concurrent-Limit: the concurrent request limit
X-RateLimit-Concurrent-Remaining: the number of additional concurrent requests currently possible

Your application or service should have a built-in retry mechanism. By checking or monitoring these headers, your retry mechanism can follow an exponential backoff schedule to reduce the volume of requests when necessary.
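A retry loop with exponential backoff can be sketched as follows. The helper below is an assumption, not an official client: `do_request` stands for any callable returning a response object with `.status_code` and `.headers` (for example, a `requests.Response`):

```python
import random
import time

def call_with_backoff(do_request, max_attempts=5):
    """Call `do_request` and retry on HTTP 429, doubling the wait
    (plus jitter) between attempts (exponential backoff)."""
    delay = 1.0
    for _ in range(max_attempts):
        response = do_request()
        if response.status_code != 429:
            # Proactive slow-down: the request succeeded, but the
            # window's quota is exhausted, so pause before the next call.
            if response.headers.get("X-RateLimit-Remaining") == "0":
                time.sleep(delay)
            return response
        # 429: wait, with jitter so parallel clients do not retry in sync.
        time.sleep(delay + random.uniform(0, 0.25))
        delay *= 2  # exponential backoff
    raise RuntimeError("Rate limit retries exhausted")
```

The jitter term is a common refinement that keeps many clients from retrying at exactly the same moments after a shared throttle event.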

Recommended Practices

You have significant influence on how your service behaves in this shared environment. If you follow the tips below, you will get responses from the system faster:

  1. Program your software so that it does not make all its calls at one specific point in time (e.g., 8 am, 9 am). Instead, build in some randomness to distribute the calls more evenly over time.

  2. Ensure that the timeout of your requests is set to at least 128 s. You will receive a response from our API servers within this time (either a valid response or an error code). Of course, we will do our best to answer your request as quickly as possible.

  3. Use our Reporting API to retrieve data for analysis. The Reporting API is designed to serve large volumes of data as quickly and efficiently as possible. It uses streaming to return high-volume data in a single call, reducing the need to make multiple calls before iterating through the whole set. The throttling strategy on the Reporting API ensures that you get the full data set at once, so you do not have to wait for the next chunk(s). The only limit is on when you can make the next call to the same endpoint.
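Tip 1 (spreading scheduled calls with randomness) can be sketched as follows. The `run_spread` helper is illustrative, not part of any SmartRecruiters SDK; it runs a batch of jobs at random offsets within a window instead of firing them all at the top of the hour:

```python
import random
import time

def run_spread(jobs, window_seconds):
    """Run each callable in `jobs` at a random offset within
    `window_seconds`, so scheduled batches do not all fire at once."""
    offsets = sorted(random.uniform(0, window_seconds) for _ in jobs)
    start = time.monotonic()
    results = []
    for offset, job in zip(offsets, jobs):
        wait = offset - (time.monotonic() - start)
        if wait > 0:
            time.sleep(wait)
        results.append(job())
    return results
```

With a window of, say, an hour, a nightly sync of many records stops contributing to the 8 am traffic spike while still finishing within its scheduled window.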
