Sometimes the hardest thing in software development is naming things appropriately. The world of software development is constantly evolving. New terms get coined frequently. We forget the old terms, or their meaning evolves. Because of this, we end up using inconsistent terminology, and things get mixed up often. One of the most common mix-ups is the difference between APIs and microservices, and also between microservices and containers. In this post, we will be demystifying APIs, microservices and containers.

What is an API?

Let’s start with the easiest. The proverbial low-hanging fruit.

APIs are pretty simple to define. Even if we just look at the full form of API, i.e. Application Programming Interface, it's pretty clear that an API is an interface.

But what kind of an interface are we talking about?

It could be an interface belonging to a software framework or a library. It could also be a remote interface exposed over the HTTP protocol. Such APIs are also known as Web APIs. However, that's not all. These Web APIs come in several different flavors, such as REST-based web services, SOAP-based services, GraphQL interfaces and so on.

Then, there is another classification from an API management point of view, based on who consumes the API: public, partner and private/internal APIs.

APIs have also been around for a long time. Many programming languages are built around the concept of APIs. For example, in the context of Java, APIs are the collection of pre-written packages, classes and interfaces.
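To make that concrete, java.util.List is one such pre-written interface: a program consumes it without knowing anything about the underlying implementation. Here is a minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

public class JavaApiExample {
    public static void main(String[] args) {
        // java.util.List is part of the Java API: we program against the interface,
        // not against the details of how ArrayList implements it.
        List<String> names = new ArrayList<>();
        names.add("Alice");
        names.add("Bob");
        System.out.println(names); // prints [Alice, Bob]
    }
}
```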

However, there has been a sudden explosion of interest in APIs across the industry. Much of this hype is created by organizational management riding the trend. Some of it is driven by the actual benefits that APIs provide in terms of communication between remote systems over HTTP.

At the end of the day, API is a pretty generic term that can be used across broad areas. But at its core, it is still mainly an interface.

APIs – a tale of two points of view

By their intrinsic nature, APIs require two parties. A consumer. And a provider. An API without a consumer is useless. The inherent purpose of building an API is to exchange information between a consumer and a provider.

However, the consumer's and the provider's views of an API can be vastly different.

Consumer View

For an API consumer, an API is nothing more than an interface definition and a URL.

These URLs are the building blocks of the entire web. They allow a client to access information remotely without worrying about the physical location of that information. A URL may point to a super-powered mainframe system or a tiny IoT device. For the client, it does not matter, as long as the URL supplies the information described in its documentation.

Perhaps this is the single biggest feature that has led to the incredible popularity of APIs. Developers of any technology stack can tap into their power.
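As a rough sketch of this consumer view, here is what calling such an API might look like in Java, using the standard java.net.http.HttpClient and a purely hypothetical endpoint URL:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiConsumer {
    public static void main(String[] args) throws Exception {
        // A hypothetical endpoint. The consumer only needs the URL and the
        // documented response format, not any knowledge of the servers behind it.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/v1/orders/42"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```

Whether that endpoint is backed by a mainframe, a serverless function or a tiny IoT device is invisible to this code.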

Provider View

The perspective, however, completely changes for API Providers. API Providers have to design, build and document APIs. They also need to be acutely aware of the infrastructure behind the API. An API serving millions of consumers has drastically different infrastructure requirements compared to a single-consumer API. Often, the success or failure of API adoption depends on how well the API performs in a real production environment. No consumer wants to wait too long for a response.

With the dawn of cloud computing, API Providers have many more choices when it comes to selecting infrastructure. One can now get infrastructure and platform services at the click of a button. One can spin up a bunch of virtual machines on demand, or go completely serverless while deploying APIs. But at the end of the day, a great deal of thought is required on the API Provider's part when deploying APIs.

In other words, from an API Provider's perspective, an API needs to live somewhere. And most of the time, the place where these APIs live is a microservice.

What are Microservices?

Microservices have many definitions and a lot of associated confusion in the industry. Some say it's just a buzzword, while others claim they have been building microservices for decades.

Whatever the case, one of the common ways to describe microservices is as an architectural approach in which an application is built as a group of small, independent services.

Whether it's just a buzzword or not is debatable; we have already discussed that at length in this post.

There are many defining features of microservices. But one that has captured the popular imagination is that each microservice should be responsible for its own data. Other microservices should not be able to access that data directly. In other words, each and every microservice should be isolated and loosely coupled.

A microservice can scale independently of other microservices. In fact, each microservice in an application can be implemented using a completely different programming language or framework.

And that’s where the next important character in this discussion comes in. This character is a Container.

Containers to the rescue

It is not hard to imagine why containers have become so useful for microservices. Many of the demands of microservices architecture can be easily handled by containers.

Containers are a means to create isolated contexts within an operating system. Each container has its own file system, in which the application code and its dependencies are bundled together. One container cannot affect another container's processes or file system. Containers can also be used to scale microservices up or down.

Without containers, servers would have to run multiple microservices side by side, which would be tricky to handle. Using a separate virtual machine for each microservice, on the other hand, would be an unnecessary waste of resources.

Even though Linux has supported containers for ages, they became popular only in the last decade with the rise of Docker. In essence, the rise of Docker led to the rise of microservices. It could be the other way round as well. Basically, there has been a mutually reinforcing relationship between the growth of microservices and Docker, each fueling the other's growth.

At this point, it is again important to establish that microservices and containers are two different terms. But they go together well. Just like APIs and microservices.

How do microservices communicate?

We already established that microservices should not directly access each other’s data. But then, how do they communicate with each other?

Surely, any respectable IT system will have functionality complex enough to warrant communication between its different services.

Well, that’s where APIs come in. Again.

While there can be several data management patterns for microservices, let’s assume the simplest one where each microservice is responsible for only its data.

In the most basic approach, each microservice will expose an API or a set of APIs. Other microservices can make requests to these APIs and do something with the data. Data is the key here.

Typically, many of the APIs exposed by microservices are private APIs. Other microservices within the application consume these APIs with minimum fuss. Of course, most of the rules of good API design apply to private APIs as well. But these APIs largely act as building blocks for creating more complex business functionality.
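As a minimal sketch, here is what a hypothetical "inventory" microservice exposing such a private API might look like in plain Java, using the JDK's built-in HttpServer. The service name, port and payload are illustrative assumptions, not taken from any particular framework:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class InventoryService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // This endpoint is the only way other microservices can read this
        // service's data; they never touch its data store directly.
        server.createContext("/inventory/42", exchange -> {
            byte[] body = "{\"itemId\": 42, \"stock\": 17}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("Inventory service listening on port 8080");
    }
}
```

Another microservice would then call this endpoint with an ordinary HTTP client, much like the consumer example shown earlier, typically addressing it by a service name that the container platform resolves internally.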

One can imagine a system built using APIs, microservices and containers to look like this.

[Figure: APIs, microservices and containers working together in a system]

Conclusion

APIs and microservices are related. But they are not the same. The same is true of microservices and containers. They all work together to build applications that realize a distributed systems architecture. But they are still largely independent concepts that can exist without each other.

It is important for development teams to understand the differences between these terms. With that shared understanding, they can design systems and communicate more effectively. After all, less confusion leads to fewer conflicts and better outcomes.


Saurabh Dashora

Saurabh is a Software Architect with over 12 years of experience. He has worked on large-scale distributed systems across various domains and organizations. He is also a passionate Technical Writer and loves sharing knowledge in the community.
