- Created on 30 June 2016
Microservices originally emerged to relieve developers' frustrations with large applications whose release cycles are tied together. In a monolithic service-oriented architecture, even a minor change requires rebuilding and redeploying the entire monolith, so full rebuilds happen often. A microservice architecture, by contrast, runs each service as a separate process, usually with its own database. This gives development teams a more decentralized approach to building software and allows each service to be managed independently.
As microservices are a fairly broad and complex subject, we have written this article to explain some general concepts and tips about them for our readers.
Microservices are more about a concept than about the particular technologies used. The concept is a software design recommendation: keep service architectures loosely coupled, easy to change, and scalable.
Following these recommendations can help you overcome many disadvantages of a tightly integrated, monolithic service, such as:
- A steep learning curve for newcomers
- Having to work with the whole environment and software stack at once
- Resolving complex merge conflicts in a very large codebase
- Errors propagating through multiple modules of a huge application
- Refactoring that is extremely hard to carry out
By splitting your software into components that are as small as possible and assigning two to three people to each component, you can achieve:
- Code that is easier to understand, as the codebase is much smaller
- Far fewer merge conflicts, as fewer people work on the same code
- Freedom to use the most suitable technologies for each component's tasks
- Language-independent interfaces between components, by following industry-standard REST API recommendations
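The last point can be made concrete with a minimal sketch, using only the Python standard library: one component exposes a JSON-over-HTTP endpoint, which a client written in any language could consume. The service name ("inventory") and endpoint path (`/status`) are illustrative assumptions, not from the article.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a small JSON document over plain HTTP; nothing about the
        # response reveals or requires the implementation language.
        body = json.dumps({"service": "inventory", "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Here the "other component" is just an HTTP client in the same script.
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/status") as resp:
    data = json.loads(resp.read())

server.shutdown()
print(data)  # {'service': 'inventory', 'status': 'ok'}
```

Because the interface is just HTTP and JSON, the component behind it could be rewritten in another language without the client noticing.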
The figure below shows the microservices approach versus the traditional server application approach.
Microservices approach compared to traditional server application approach. Digital image. Microsoft. December 2015. Web. 24 June 2016. <msdn.microsoft.com>
Running distributed software requires a distributed environment, which traditionally involves buying lots of expensive hardware, assembling it, finding physical space to store the devices, ensuring sufficient power and network capacity, air conditioning, backup and recovery plans, as well as the manpower to handle all of this.
If you want to use your hardware resources optimally, you will eventually start considering virtualization, and sooner or later you will need a system to manage your infrastructure, which frequently means more IT operations staff, not less.
One way to abstract away most of these IT operations difficulties is to move your infrastructure to a cloud operated by a third-party provider. As a result, you no longer have to worry about backups, outages, or a lack of hardware resources. Everything is there, only a few clicks away!
Clustering is the bread and butter of microservices: an architectural concept for building fault-tolerant services capable of serving a high load with high availability.
This is achieved by running multiple servers together as a cluster. A well-designed cluster provides:
- Fault tolerance – if a server dies, the whole service does not stop
- Load balancing – distributes requests so that no single server is overloaded
- Health checks – periodically invoked checks that determine which servers and services are healthy, enabling the cluster to heal itself when a node dies
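The three functions above can be sketched in a few lines of Python. This is a toy model, not a real cluster manager: the node names, request names, and the health flags are illustrative assumptions, and the health check simply reads recorded flags instead of probing live servers.

```python
import itertools

class Cluster:
    def __init__(self, nodes):
        self.nodes = dict(nodes)  # node name -> healthy flag

    def health_check(self):
        # In a real cluster this is invoked periodically against each
        # node; here we simply read the recorded health flags.
        return [name for name, healthy in self.nodes.items() if healthy]

    def balance(self, requests):
        # Round-robin across the currently healthy nodes, so a dead node
        # never receives traffic (fault tolerance + load balancing).
        healthy = itertools.cycle(self.health_check())
        return {request: next(healthy) for request in requests}

cluster = Cluster({"node-a": True, "node-b": False, "node-c": True})
assignments = cluster.balance(["req-1", "req-2", "req-3"])
print(assignments)  # {'req-1': 'node-a', 'req-2': 'node-c', 'req-3': 'node-a'}
```

Note how the dead node (`node-b`) never appears in the assignments: the balancer only ever sees the list the health check returns, which is the essence of self-healing routing.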
If you want to ship something overseas, your items will end up in a container at some point. Transportation companies do this because containers are standardized, making them easy to handle. Every cargo ship is built to carry containers, and every dock has large cranes built to move containers.
This concept can be applied to software as well by using Docker. Docker makes creating and starting isolated software environments less resource-hungry and more efficient by putting them in containers. These containers – just like the ones on cargo ships – look the same from the outside and thus can be handled the same way, without knowing what is inside. You can easily move these containers to different servers and run them anywhere, without knowing the dependencies of the software inside the container.
Using Docker containers makes clustering services much easier, as you don't have to install every required software component on each node of your cluster. The only requirement is Docker itself, so scaling your cluster is very simple. If the service inside a container dies, you can simply dispose of the container and start a new one, which makes self-healing a very easy task.
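To make the container idea concrete, here is a hypothetical Dockerfile for a single microservice. The base image, file name, and port are illustrative assumptions; the point is that the resulting image carries the service's runtime and dependencies, so a cluster node only needs Docker installed to run it.

```dockerfile
# A hypothetical Dockerfile for a single Python microservice.
# Base image, file names, and port are illustrative assumptions.
FROM python:3-slim
# Copy the (assumed) service code into the image.
COPY service.py /app/service.py
# Document the port the service listens on.
EXPOSE 8080
# The command the container runs at startup.
CMD ["python", "/app/service.py"]
```

On any Docker-equipped node, `docker build -t my-service .` followed by `docker run my-service` would start the service without installing anything else on the host.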
We mention these technologies together because they are all interrelated:
- Cloud providers allow you to use your hardware resources optimally
- Clustering allows you to build fault-tolerant services capable of serving a high load with high availability
- Docker containers eliminate the need to install every required software component on each node of your cluster, making clustering services much simpler
- Microservices provide a decentralized approach to building software and allow each service to be managed independently
Once you combine all the technologies mentioned above, the result will be a fault-tolerant, scalable, and easy-to-manage enterprise service!
The following chart displays how cloud providers, clustering, and Docker containers relate to each other.