
- December 28, 2015

The APISpark platform lets you craft your Web API very easily and then host it, but that is not its only capability. The platform supports three different components, as shown in the following figure:

apispark-use-cases-overview

In addition to Web APIs, Descriptors allow you to design a Web API by creating its structure from scratch or importing it from an API definition format. The goal of Descriptors is to make that structure available in formats like Swagger or RAML.

Connectors will be our focus today. They allow you to proxy existing Web APIs hosted outside the APISpark platform so that you can easily apply the platform's management services to them.

apispark-use-cases-proxy

Some of the key features supported by APISpark’s API management capabilities are:

  • Monitoring of API calls. It provides a view on the calls handled by your service; information from both requests and responses is accessible.
  • Analytics. They provide a consolidated view and statistics about the usage of your service.
  • Server cache. To improve the performance of Web APIs, caching can be transparently added to your service, offering more consistent response times.
  • Rate limitation. When providing a service, you often need to apply call limits according to the plan the user subscribed to.

For more information, have a look at some interesting blog posts published this year:

Let’s now focus on the Connector feature of the APISpark platform.

Connectors

A Connector in APISpark allows you to create and configure a Web API proxy.

The Restlet website provides two great in-depth tutorials about creating, configuring and using Connectors:

Creating and configuring Connectors

The first step consists of creating the Connector itself. At this level, you can choose the mode: hosted by APISpark or standalone. We will focus on the second mode, where everything is executed on your own infrastructure.

creator-connector

Then you can import the Web API definition leveraging definition formats like Swagger and RAML.

Leveraging Swagger definitions

Connectors, like any cell within the APISpark platform, are defined by a contract that specifies the available resources, their methods and the messages they can exchange. One of the strengths of the APISpark platform is its support of API definition languages like Swagger. You can:

  • access and translate the contract in different formats
  • configure the connector contract based on a provided definition

To summarize, configuring a connector mainly consists of importing the definition of the target Web API in a specific format. We will use Swagger in the rest of the blog post.
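As a purely hypothetical sketch, here is what a minimal Swagger 2.0 definition for a target API exposing a single /contacts resource could look like; the title, host and path below are examples only, not taken from an actual APISpark cell:

# Hypothetical minimal Swagger 2.0 definition (title, host and path are examples only)
swagger: "2.0"
info:
  title: Contacts API
  version: "1.0.0"
host: 192.168.10.130:8080
basePath: /
paths:
  /contacts:
    get:
      summary: List the contacts
      responses:
        "200":
          description: The list of contacts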

definition-import

For any Web API that can be described with a definition format, you can import that definition into a cell.

swagger-import-definition

APISpark allows you to either upload a Swagger file or to use a public URL to get the definition content:

swagger-import

Calling the Connector

Now that your Connector is created and configured and the Web API definition is imported, you are ready to deploy it and make it available for calls. Simply click on the Deploy button. This last step is required even for standalone Connectors.

There are two kinds of Connectors, depending on where the corresponding agent is executed:

  • Hosted agents. The agent is hosted within the APISpark platform itself. You call the proxy deployed on the platform and the agent then calls the external target service. This approach has the advantage of being easy to set up, since you don’t have to run the agent yourself on your infrastructure.
  • Standalone agents. With this approach, everything is executed within your own infrastructure. APISpark lets you download the agent itself (a Java application) and execute it. The only interaction with APISpark is for the management services (analytics, rate limitation, and so on). If you want to host everything yourself and have the agent run as close as possible to the proxied API, choose this approach.

For the standalone mode, you download the Java agent and execute it within your own infrastructure.

connector-download-agent

Values for the agent configuration file properties can be found on the previous screen.

# Agent credentials (provided in the APISpark console)
agent.login=67b60a5d-c2b8-43a3-97b8-084d6ce60e10
agent.password=5dcf7dc2-a2c4-44f5-b753-c26abea2c8b5
# Identifier and version of the Connector cell
agent.cellId=11939
agent.cellVersion=1
# Reverse proxy mode: the agent forwards calls to the target Web API
reverseProxy.enabled=true
reverseProxy.targetUrl=https://192.168.10.130:8080

Then launch the following command to start the agent on port 3000 in front of your target API application:

$ java -jar -DapiSparkServiceConfig=/path/to/agent.properties apispark-agent.jar -p 3000

2015-12-02 15:22:14.370:INFO::main: Logging initialized @186ms
Starting a Jetty HTTP/HTTPS client
Starting a Jetty HTTP/HTTPS client
Starting the Jetty [HTTP/1.1] server on port 3000
2015-12-02 21:22:14.630:INFO:oejs.Server:main: jetty-9.1.z-SNAPSHOT
2015-12-02 21:22:14.648:INFO:oejs.ServerConnector:main: Started ServerConnector@248ecefb{HTTP/1.1}{0.0.0.0:3000}
2015-12-02 21:22:14.649:INFO:oejs.Server:main: Started @468ms
Starting org.restlet.apispark.agent.AgentApplication application
Starting a Jetty HTTP/HTTPS client
Stopping a Jetty HTTP/HTTPS client
Starting a Jetty HTTP/HTTPS client

Add analytics module
Add firewall module
Add redirection module
Setting agent refresh timer every 15 minutes
Agent started

The traces show which services, enabled in the APISpark console, will be applied by the agent.

You can now reach your service on port 3000 instead of the target application's port, with the same contract: the paths remain the same.
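As a purely hypothetical illustration, assuming the target application exposes a GET /contacts resource on https://192.168.10.130:8080 and you call the agent from the machine it runs on, the same resource is now reachable through the agent:

$ curl http://localhost:3000/contacts   # /contacts is a hypothetical resource path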

As you can see, the Connector feature of APISpark relies on Swagger for the Web API definition and on REST for the calls. Nothing here is specific to a particular technology: any Web API can be used, provided it follows REST principles. In this series of blog posts, we will focus on how to configure your Web API with the APISpark platform, whatever the language or technology you use to implement it.

In the next part of this post, we will see how to leverage APISpark Connectors for applications using the Restlet Framework.