Realizing multi-tiered API architecture in Azure
The practice of splitting your APIs into three distinct layers has gained immense popularity in the last few years. Here is my take on how you can realize this division of responsibility on Azure and Microsoft-based technology.
What is the multi-tiered API architecture?
You have many names for the things you love; or, to put it another way, a lot of people have been developing this idea over time and trying to make it their own. Other names for this architecture are "API-led connectivity" and "three-layer API architecture".
The basic idea can be traced back to the n-tier architecture of late-1990s programming, where you have at least one code layer for handling the interface, one for handling the so-called business logic, and one for calling and storing data. I see similar advantages in the multi-tiered API architecture.
First, here is a helpful illustration of the architecture:
Like any model of an architecture or of reality, this is an idealized image. With that in mind, you can clearly see three different layers with three different responsibilities.
Presentation API layer
This layer usually abstracts away underlying complexity and makes the API easy to use. Ease of use should always be a primary design goal for your APIs.
This layer should also handle security, so that the underlying services can trust incoming calls.
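To make the trust relationship concrete, here is a minimal sketch of one common pattern: the gateway layer signs each forwarded request with a shared secret, and the underlying service only accepts calls that carry a valid signature. The header name, secret, and HMAC scheme here are illustrative assumptions, not part of the source; in practice Azure API Management policies (e.g. JWT validation or client certificates) would do this work for you.

```python
import hashlib
import hmac

# Hypothetical shared secret, provisioned to both the gateway and the backend.
SHARED_SECRET = b"replace-with-a-real-secret"

def sign_request(body: bytes) -> str:
    """Signature the gateway layer would attach, e.g. in an X-Gateway-Signature header."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def is_trusted(body: bytes, signature: str) -> bool:
    """Backend-side check: accept the call only if the signature matches."""
    expected = sign_request(body)
    return hmac.compare_digest(expected, signature)
```

The point is simply that the check lives once at the boundary, so the system APIs behind it do not each need their own authentication scheme.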
Process API layer
This is where the business logic and processes are handled. An external API call might actually trigger a whole process of calls and calculations in order to fulfill the request, such as "register new customer" or "place order".
There is no reason why these different processes might not call each other, as long as you keep track of dependencies and think about words like idempotency when designing your solution.
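The idempotency point above can be sketched in a few lines: a process API that accepts an idempotency key can be retried safely, because a repeated call returns the original result instead of, say, creating a duplicate order. The function name, the key scheme, and the in-memory store are illustrative assumptions; a real process layer would persist the keys in a database or cache shared across instances.

```python
import uuid

# Hypothetical in-memory store of already-processed request keys;
# a real process API would use shared, durable storage instead.
_processed: dict[str, dict] = {}

def place_order(idempotency_key: str, order: dict) -> dict:
    """Process-layer call that is safe to retry with the same key."""
    if idempotency_key in _processed:
        # A retry of a call we already handled: return the original
        # result instead of creating a duplicate order.
        return _processed[idempotency_key]
    result = {"order_id": str(uuid.uuid4()), "status": "created", **order}
    _processed[idempotency_key] = result
    return result
```

This matters precisely when process APIs call each other: if a downstream call times out and is retried, the retry must not run the side effect twice.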
System API layer
These APIs could be already-existing endpoints into systems, such as stored procedures or SOAP-based web services running on IIS or elsewhere. They might not be built yet, but be feasible to build. The main point is that system APIs can be system-specific and need not support a high level of interoperability.
Division of labor
Another upside to this way of slicing the API world is the division of labor: different layers can be developed and maintained by different groups within an enterprise-level organization.
The presentation layer is handled by application developers; the process layer usually by traditional line-of-business IT or by integrators; and the system APIs are usually maintained by system owners or vendors.
Updating underlying APIs
Last but not least: one of the most important upsides to this layering of APIs is that a change in an underlying system API should not affect the presentation API.
If the end system updates its passwords, or perhaps moves to another location, the presentation API can still be called. It is up to the process API to handle the change.
How to realize this architecture using Microsoft based technology
If we change the illustration above and put in some logos, the image becomes much more Microsoft-centric.
Presentation API layer
There is only one option here: API Management. It fulfills the needs of API publishers and developers with a configurable developer portal that includes documentation, sign-up, and testing of the APIs. Yes, you can test the APIs straight from the portal.
There are also other useful features, such as message transformation and versioning.
Process API layer
At this level there are a lot of options, and they all boil down to the same thing: it depends on the integration. However, looking at the APIs from a process perspective, you cannot go wrong with Logic Apps; the out-of-the-box logging and the easily configured basic process tasks are just a couple of the reasons.
But Azure Functions, queues, and web apps are useful as well, if not as processing engines then at least as code executors or endpoints for the Logic Apps to call.
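As a sketch of what such an endpoint looks like, here is a small, framework-free handler of the kind you might deploy behind an Azure Functions HTTP trigger or a web app and call from a Logic App's HTTP action. The request shape (a JSON body with a "customer" field) is an illustrative assumption, not a fixed contract from the article.

```python
import json

def handle_request(body: str) -> tuple[int, str]:
    """A tiny endpoint body that a Logic App's HTTP action could call.

    Takes the raw JSON request body and returns (status_code, response_body).
    The 'customer' field is a hypothetical example of a payload contract.
    """
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return 400, json.dumps({"error": "invalid JSON"})
    if "customer" not in payload:
        return 400, json.dumps({"error": "missing 'customer'"})
    # Do the actual work here: call a system API, run a calculation, etc.
    return 200, json.dumps({"registered": payload["customer"]})
```

Keeping the handler a pure function like this makes it trivial to unit test, and the Logic App takes care of the surrounding process: retries, logging, and calling the next step.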
Note that you can also bypass this layer altogether by simply having API Management call the system APIs directly. If the process layer does not add anything, do not use it.
System API layer
If you do not have the ability to connect directly to your systems in Azure or in your datacenter, you can use the on-premises data gateway, a free service that acts as a secure endpoint for your Azure services. Connecting a Logic App to an on-premises SQL Server is very easy.
If you are able to connect Azure to your datacenter using network-level connectivity, you can use Azure Functions in an App Service Environment, or perhaps the upcoming Logic Apps Integration Service Environment (ISE).
The multi-tiered architecture is very useful in your API platform if implemented correctly, and realizing it with components from Azure will give you a very cost-efficient, to some extent free, platform with a three-nines (99.9%) SLA.
Let Enfo show you how to implement this architecture. Download our Guide to API Discovery Workshop and find out how to get started!
Mikael Sand, Integration Consultant at Enfo