
- March 30, 2015

With the APISpark API platform, you can easily expose external data sources via a Web API: for example a Google Spreadsheet, a SQL database, an S3 or GitHub file store, or a Parse or Firebase backend. You can find more information on how to do that in our tutorials.

But such backends sometimes have response times or throughput limitations that can vary greatly, making applications that consume these APIs quite slow.

You may also be managing an existing API with APISpark whose response times are not as good as you would like.

That’s why the APISpark team recently introduced a new feature to help improve the response times of your APIs: server-side API response caching. The platform caches responses for you, so that frequently accessed resources are served from the cache rather than by calling the underlying slow API, which might perform heavy computation to deliver its results.
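To make the idea concrete, here is a minimal sketch of what server-side response caching with a time to live looks like conceptually. This is not APISpark's actual implementation; the `fetch_from_backend` function and the 300-second TTL are illustrative assumptions.

```python
import time

# Illustrative only: APISpark performs this caching for you on the server side.
CACHE_TTL_SECONDS = 300  # assumed "time to live" of 5 minutes
_cache = {}  # resource path -> (timestamp, cached response body)

def fetch_from_backend(path):
    """Hypothetical slow call to the underlying API or data source."""
    time.sleep(2)  # simulate a slow backend
    return f"response for {path}"

def get_response(path):
    """Serve from the cache when a fresh entry exists, otherwise call the backend."""
    now = time.time()
    entry = _cache.get(path)
    if entry is not None and now - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]  # cache hit: no backend call
    body = fetch_from_backend(path)  # cache miss or expired entry
    _cache[path] = (now, body)
    return body

if __name__ == "__main__":
    get_response("/contacts")         # slow: goes to the backend
    print(get_response("/contacts"))  # fast: served from the cache
```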

To enable response caching, go to the Settings tab of your API. In the General settings, you’ll find a panel that lets you check or uncheck this feature and define the caching duration in different units (seconds, minutes, hours, days, and even months). The screenshot below shows this settings panel:

[Screenshot: the API response caching settings panel (apispark-caching-2)]

With response caching in APISpark, you get more predictable response times for your API and save resources when the underlying API does some heavy lifting. You can easily define the “time to live” (TTL), i.e. the interval of time before the cached value needs to be refreshed. That way, you always keep full control over how fresh the data needs to be.
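One simple way to observe the effect is to time two identical requests: while the TTL has not expired, the second request should come back noticeably faster because it is served from the cache. The endpoint URL below is a hypothetical placeholder, not a real APISpark API.

```python
import time
import urllib.request

# Hypothetical endpoint of an APISpark-hosted API; replace with your own.
URL = "https://example.apispark.net/v1/contacts/"

for attempt in ("first", "second"):
    start = time.time()
    with urllib.request.urlopen(URL) as response:
        response.read()
    print(f"{attempt} request took {time.time() - start:.3f} s")
```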
