Today we are rolling out the third release of the APISpark beta. The previous releases are described in blog posts #1 and #2. If you want access to the beta, please share your interest in APISpark on Twitter (@apispark).
Also, you might want to read our latest “How much REST should your web API get?” blog post and the discussion that followed. Now, let’s continue with an overview of the changes released in production today!
Complex entities & representations
First, we added support for one of the most requested features: the ability to define and persist relationships between entities, including association, aggregation and composition.
We also added support for abstract entities and ensured that when automatically creating an API from an entity store via the import wizard, we expose complex entities via matching complex representations.
API templates and contracts
APISpark can not only create a web API from scratch or from an imported entity store, it can also let you create a web API based on an API template. You can think of API templates as blog themes on WordPress.com that let you create your own custom blog in just seconds.
New API templates can be created from scratch or extracted from a complete API that has already been deployed and tested. See the “Extract template” action in the drop-down menu below.
Those templates then appear in your dashboard with an icon surrounded by a thick orange border, as illustrated below, and can be reused to create new APIs.
Like complete web APIs, public templates can be promoted to the APISpark Catalog. Instances can then be created in seconds, each with its own runtime, including dedicated HTTP endpoints, members and analytics data.
The API contract and implementation are not cloned and remain controlled by the origin template. A similar feature is also provided for API contracts alone, covering only the visible aspects of a web API: the list of resources, methods, representations and client SDKs.
API contracts have several benefits for API users, such as easily switching from one API provider to another, or calling multiple APIs that support the same contract but expose different data.
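To make the provider-switching idea concrete, here is a minimal sketch in Python. The client class, resource path and provider URLs below are hypothetical illustrations, not part of APISpark: the point is that a client coded against a shared contract only needs a different base URL to target a different provider.

```python
# Hypothetical sketch: a client written against a shared API contract.
# Only the base URL changes when switching providers; the resource paths,
# methods and representations defined by the contract stay the same.

class ContactsClient:
    """Client for a hypothetical 'contacts' API contract."""

    def __init__(self, base_url: str):
        self.base_url = base_url.rstrip("/")

    def contact_url(self, contact_id: str) -> str:
        # Same resource path regardless of which provider hosts the API.
        return f"{self.base_url}/contacts/{contact_id}"

# Two providers implementing the same contract but exposing different data.
provider_a = ContactsClient("https://api.provider-a.example/v1")
provider_b = ContactsClient("https://api.provider-b.example/v1")

print(provider_a.contact_url("42"))  # https://api.provider-a.example/v1/contacts/42
print(provider_b.contact_url("42"))  # https://api.provider-b.example/v1/contacts/42
```

Client SDKs generated from the same contract work the same way: the contract fixes the interface, and the deployment supplies the endpoint.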
Enhanced API implementation
When you deploy an API with APISpark, we internally generate source code that has the Restlet Framework as its main dependency. This source code is then compiled and packaged before being hosted by APISpark.
We try to generate this code exactly as a Restlet Framework developer would have written it. This means that web APIs created by APISpark can be as performant in production as those written by hand: APISpark doesn’t interpret web APIs at runtime, but executes them at full bytecode speed.
In this release, we have enhanced the implementation layer of APIs with:
support for multiple actions
action types (call imported API / call imported store / return response)
streamlined generated mapping code for faster execution
the ability to import and invoke other APIs (composite APIs)
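To picture how these action types can compose within a single resource method, here is a rough Python sketch. All function names and data below are hypothetical stand-ins (the real implementation is generated Restlet Framework Java code); the sketch only illustrates chaining a call to an imported API, a call to an imported store, and a returned response.

```python
# Hypothetical sketch of the three action types chained for one request:
# call an imported API, call an imported entity store, return a response.

def call_imported_api(city: str) -> dict:
    # Stand-in for invoking another, imported web API (a composite API).
    return {"city": city, "temperature_c": 21}

def call_imported_store(city: str) -> dict:
    # Stand-in for querying a local entity store.
    return {"city": city, "population": 870000}

def handle_get_city(city: str) -> dict:
    # Multiple actions executed in sequence for a single request...
    weather = call_imported_api(city)
    stats = call_imported_store(city)
    # ...whose results are mapped into the response representation.
    return {
        "city": city,
        "temperature_c": weather["temperature_c"],
        "population": stats["population"],
    }

print(handle_get_city("San Francisco"))
```

The streamlined mapping code mentioned above plays the role of the final dictionary construction here: wiring fields from each action’s result into the response representation.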
Upgraded to Cassandra 1.2
APISpark enables you to store structured data in local entity stores and later expose it through one or multiple web APIs.
In order to support efficient multi-tenant hosting, we rely on Apache Cassandra. Even though this technology choice is not directly visible to APISpark developers, it is key for us to provide:
potentially unlimited entity store sizes, spreading across machine clusters
highly available entity stores thanks to continuous replication across multiple machines
live entity store schema modifications without having to restart the database
flexible replication schemes including multi-region deployments
In this new release of APISpark, we have refactored our persistence layer, upgrading to version 1.2 of Cassandra which brings new features:
better multi-tenant performance for concurrent schema changes on entity stores
better scalability and fail-over behavior thanks to virtual nodes
speedier access thanks to the new native driver
support for repeating properties thanks to the new collections feature in CQL3
Even though you don’t have direct access to the low-level Cassandra database powering APISpark entity stores, we want you to know that we use best-of-breed cloud database technology to power web APIs hosted on APISpark.
We are also working on entity store wrappers for data living outside the APISpark platform, to let you re-expose it through proper web APIs. Stay tuned for the next release of APISpark for more information about this feature.
In addition, we have started to roll out intra-region fail-over support to ensure that your APIs hosted by APISpark remain highly available in case of hardware failure. Currently, APISpark runs from the Northern California data center of Amazon Web Services.
The next step is to roll out multi-region deployment with a second data center in Virginia, so that if a complete AWS region goes down (which is very rare but has already occurred), your web API can still run from the other region in a way that is almost transparent to your API users, aside from the increased network latency.
Finally, we fixed several issues:
URI path variables are now declared when adding resources from imported entity stores
multiple sources can now be selected in the Mapping panel
path variables and query parameters can now be mapped when calling an API
API contract filtering on the dashboard now works correctly
If you want to see what creating, hosting and using web APIs looks like with APISpark, we recommend our first tutorial.