Rate limits control how frequently requests can be sent to the Davix H2I API.
These controls help keep the platform stable, responsive, and fair across customers. Because requests can trigger rendering, image processing, PDF processing, and analysis through the H2I engine (PixLab), the API applies request-throttling controls to protect processing capacity.
## Why Rate Limits Exist
Davix H2I performs backend processing operations that can be computationally expensive, including:
- HTML rendering
- image processing
- PDF processing
- analysis operations
Rate limits help the platform:
- distribute resources fairly
- prevent excessive request bursts
- maintain service stability
- support more predictable performance during load
These controls are part of the request lifecycle for the public API.
## How Rate Limits Work
Rate limits restrict how quickly requests can be sent over a defined time window.
For customer-facing usage, these limits may depend on your account or plan. When a request exceeds the allowed request rate for the current window, the API rejects it instead of processing it. The exact numeric values for customer-facing rate limits and related plan limits are documented in the dedicated Errors and Limits section.
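The window-based throttling described above can be illustrated with a toy fixed-window counter. This is a sketch only; the platform's actual algorithm, window size, and limit values are not documented here, and the numbers below are illustrative:

```python
import time

class FixedWindowLimiter:
    """Toy fixed-window limiter: allow at most `limit` requests per `window` seconds."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        # Start a fresh window once the current one has elapsed.
        if now - self.window_start >= self.window:
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False

# Allow at most 3 requests per second; 5 rapid calls exhaust the window.
limiter = FixedWindowLimiter(limit=3, window=1.0)
results = [limiter.allow() for _ in range(5)]
# → [True, True, True, False, False]
```

Requests rejected this way correspond to the 429 responses described below: the server refuses the request rather than queuing it.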
## Rate Limit Errors
When a request exceeds the allowed request rate, the API returns:
- HTTP status: 429
- error code: rate_limit_exceeded
This indicates that the request was not accepted for processing because the current request rate exceeded the allowed limit.
## Example Error Response
```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded."
  },
  "request_id": "<REQUEST_ID>"
}
```
The response format follows the platform’s standard error model, which includes a request_id for troubleshooting and support workflows.
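A client can detect this error shape as sketched below. The request id value in the example is a made-up placeholder, and the helper name is an assumption, not part of any official SDK:

```python
import json
from typing import Optional, Tuple

def is_rate_limited(status_code: int, body: str) -> Tuple[bool, Optional[str]]:
    """Return (rate_limited, request_id) based on the standard error model.

    Checks both the HTTP status and the error code, and surfaces the
    request_id so it can be logged for troubleshooting and support.
    """
    if status_code != 429:
        return False, None
    payload = json.loads(body)
    code = payload.get("error", {}).get("code")
    request_id = payload.get("request_id")
    return code == "rate_limit_exceeded", request_id

# "req_123" is a hypothetical request id for illustration.
body = (
    '{"error": {"code": "rate_limit_exceeded", "message": "Rate limit exceeded."},'
    ' "request_id": "req_123"}'
)
limited, req_id = is_rate_limited(429, body)
# → limited is True, req_id is "req_123"
```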
## Handling Rate Limits in Applications
Applications should handle rate limits as a normal part of API integration.
Recommended practices:
- retry after a delay
- implement exponential backoff
- queue retryable work
- reduce bursty traffic patterns
- avoid sending the same request repeatedly in a short time window
These practices help prevent repeated 429 responses and make integrations more reliable under load.
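The delay-and-retry practices above can be sketched as an exponential backoff wrapper. The `send` callable and its `status_code` attribute are stand-ins for whatever HTTP client you use; the delay values are illustrative:

```python
import random
import time

def send_with_backoff(send, max_retries: int = 5, base_delay: float = 0.5):
    """Retry `send()` on HTTP 429 with exponential backoff and jitter.

    `send` is any zero-argument callable returning an object with a
    `status_code` attribute (adapt this to your HTTP client).
    """
    for attempt in range(max_retries + 1):
        response = send()
        if response.status_code != 429:
            return response
        if attempt == max_retries:
            break
        # Exponential backoff: base_delay, 2x, 4x, ... plus random jitter
        # so that many clients do not retry in lockstep.
        delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
    raise RuntimeError("rate limit retries exhausted")
```

The jitter term matters when several workers share the same API access: without it, clients that were throttled together retry together and hit the limit again.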
## Avoiding Rate Limits
To reduce the chance of hitting a rate limit:
- distribute requests more evenly over time
- avoid unnecessary duplicate calls
- cache reusable outputs where appropriate
- control client-side concurrency
- design automation flows to avoid sudden spikes
These integration patterns are especially important when multiple systems or workflows share the same API access.
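One way to distribute requests evenly when multiple workers share the same access is a small thread-safe pacer that enforces a minimum gap between dispatches. This is a sketch; the interval value is illustrative, not a platform number:

```python
import threading
import time

class RequestPacer:
    """Spread requests evenly: at most one dispatch per `interval` seconds.

    Thread-safe sketch for clients where several workers share API access.
    """

    def __init__(self, interval: float):
        self.interval = interval
        self._lock = threading.Lock()
        self._next_allowed = time.monotonic()

    def wait_turn(self) -> None:
        # Reserve the next slot under the lock, then sleep outside it
        # so waiting callers do not block each other's bookkeeping.
        with self._lock:
            now = time.monotonic()
            wait = self._next_allowed - now
            self._next_allowed = max(now, self._next_allowed) + self.interval
        if wait > 0:
            time.sleep(wait)

# Three dispatches with a 50 ms gap: the pacer enforces the spacing.
pacer = RequestPacer(interval=0.05)
start = time.monotonic()
for _ in range(3):
    pacer.wait_turn()
elapsed = time.monotonic() - start
```

Pacing at the client keeps traffic smooth instead of bursty, which reduces the chance of any single window tripping the limit.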
## Rate Limits and Request Processing
Rate limits are enforced as part of the public API request flow.
This means a request can be rejected before any heavy processing begins. Rejecting early protects the H2I engine (PixLab) from unnecessary load and helps preserve service quality for valid traffic.
## Rate Limits and Other Limits
Rate limits are only one part of the platform’s request controls.
Depending on the endpoint and plan, requests may also be affected by other limits such as:
- monthly usage quotas
- file count limits
- upload size limits
- dimension limits
- page limits
- endpoint availability by plan
To keep all customer-facing values in one place, these limits are documented in the dedicated Errors and Limits section rather than repeated throughout the documentation.
## Summary
Rate limits control how frequently requests can be sent to the Davix H2I API.
If a request exceeds the allowed request rate, the API returns rate_limit_exceeded with HTTP 429. Applications should handle these responses by delaying and retrying appropriately. Customer-facing numeric values for rate limits and related plan limits are maintained only in the Errors and Limits section so they can be updated in one place.
