Software systems have changed dramatically over the years. Initially, large mainframes processed batches of jobs overnight to produce the results that banks and big companies needed. Eventually, large systems took on online workloads, requiring big data centers with cold, secure rooms to house their servers. Now we have the ever-elusive cloud, a shared space where we can rent processing time and run our software according to our needs.
Naturally, the same software cannot run unchanged across such different environments, which is why software architecture has evolved along with the software itself. This evolution is a direct result of continuous change in the world: as businesses and markets have changed, the loads that software systems must handle have grown steadily, and architectures have had to keep pace.
In this context, Microservices Architecture is a way of organizing components of a software system that allows one to avoid many of the common problems a monolithic enterprise application can present today.
Why Does an Application Become a Monolith?
Monoliths are single-process, server-side applications that contain all of the business logic. In the best case, this logic may be split into large services, each one entangled with many responsibilities. There are several reasons why an application becomes a monolith, and it's important to understand them in order to grasp why applications tend to remain monolithic.
Lack of Separation
It may seem obvious, but the first reason why applications become monoliths is due to the lack of separation of concerns in the design of the application.
In most applications, the business logic of different parts of the application complement one another in some way, whether they share the same database or are developed and maintained by the same team. Therefore, it is natural that they end up as an entangled mass of code that is difficult to separate.
It's no easy feat to implement the separation of concerns. On a basic level, it requires a lot of work to design the software components independently of one another. Principles like GRASP and SOLID can help, but they have to be put into practice by the development team and then continuously enforced by an architect or tech lead.
On a higher level, the architecture of the whole system must be carefully thought out in order to allow the different software components to be separately developed, deployed, and maintained. If this effort is not prioritized, the separation may flounder due to lack of funding.
If the components of the software are physically separated and independently deployed, they will have to exchange data with one another. This is usually slower than sharing data in the memory of a single process, but with today's faster computers, faster networks, and the cloud, this concern is not as relevant as it once was. Historically, however, it has led, and still leads, many developers to avoid splitting software into separate processes.
Code Duplication Concerns
Another common concern is code duplication. If the code is split among different programs, part of it will end up duplicated, something that should be avoided whenever possible: the same code must then be modified in several places whenever the logic changes. Although the shared code can be moved into a separate library, that library can easily become another maintenance burden on the developers' shoulders.
Data Inconsistency Concerns
Finally, separating software components without separating their databases won't be enough to avoid creating a monolith. The main point of entanglement in a software system is typically the database, and splitting the database usually implies data duplication. This goes against many of the principles we learned in our database classes: what about normalization and transaction consistency?
Why Do We Need to Modernize Monolithic Applications?
With all of the above obstacles and concerns in mind, why is it necessary to modernize monolithic applications? The main reason for modernizing monolithic applications is that software systems need to be able to respond to change. They need to adapt to and evolve with the business, because the more time we spend changing a system to conform to a new business requirement, the more revenue is potentially lost.
So, what should we do if we can’t respond to change in a monolithic application? Should we throw it out and rebuild from scratch?
The big problem with monolithic applications is that a change in one place usually requires changes in other places as well, and these ripple effects often reach areas completely unrelated to the one that was modified. After each change, the application has to be tested again to ensure that everything still functions correctly, and the whole system often has to be re-deployed. Two excellent resources that detail what a well-designed distributed enterprise application consists of are Heroku's Twelve-Factor App methodology and The Reactive Manifesto.
At the same time, we can’t just throw our legacy system away. If we have a functioning monolith, as bad as it might be, we probably can’t afford to rebuild something that most likely took years to become as valuable as it is now.
The solution I propose is to rebuild parts of the monolith as Microservices applications, allowing them to co-exist and integrate with the remainder of the legacy monolith. Here are some of the advantages we can obtain with this approach:
Heterogeneity & Choice
In a monolithic application, we are usually tied to the technologies used during the beginning of the application's development. Evolution is possible up to a point, but even then we are still tied to runtimes, frameworks, and languages. Consequently, we are often equally dependent on teams who are familiar with those technologies.
Modularity & Scalability
With new and refactored functionality hosted in separate services, we have a greater guarantee that if something is implemented incorrectly in a certain service, we only need to fix that specific service. A faulty service is also easier to rewrite from scratch than the equivalent code buried inside a monolith. This modularity also allows us to deploy each service independently of the rest of the system. We can host it in an environment like Kubernetes, for example, and have it autoscale.
Resilience & Robustness
Software systems eventually fail, an unfortunate fact that cannot be ignored. If we have a huge monolith that is responsible for everything, a small failure can bring the entire system down. Even with a fallback plan, the error can propagate into the fallback itself, or at the very least cause some downtime.
But by utilizing independent services, we can count on the robustness of our system in case of failure. We are able to structure our system so that only the service that broke will need to be re-started. Requests to that service can be redirected to other instances, or queued up for later processing.
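The failover-and-queue idea above can be sketched in a few lines. This is a toy illustration, not production code: the "instances" are plain callables standing in for real service endpoints, and all names are hypothetical.

```python
from collections import deque

# Requests that could not be served by any instance wait here for later processing.
retry_queue = deque()

def call_with_failover(instances, payload):
    """Try each healthy instance in turn; queue the request if all of them fail."""
    for instance in instances:
        try:
            return instance(payload)
        except ConnectionError:
            continue  # this instance is down; redirect to the next one
    retry_queue.append(payload)  # no instance available: queue for later
    return None

# Stand-ins for a broken and a healthy instance of the same service.
def broken_instance(_payload):
    raise ConnectionError("instance down")

def healthy_instance(payload):
    return f"processed {payload}"
```

The key property is that a failing instance never takes the caller down with it: the request is either redirected to another instance or parked in a queue, which is exactly the resilience the paragraph above describes.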
What Is a Microservice?
A Microservice is a component of a software system that can be developed and maintained independently. A Microservice can accomplish a single job successfully, but the integration of multiple Microservices is usually necessary in order to reach the end goal.
The term Microservices was introduced to highlight the fact that each service is focused on a single responsibility. The main difference between a Microservice and a traditional Web Service is that a Microservice is small with regard to the functionality it provides. While a Web Service can expose several endpoints performing different kinds of computations, a Microservice will usually expose only a few endpoints for a single kind of operation.
For example, we can have an Identity and Access Web Service responsible for all identity and access operations, from authenticating a user to maintaining a user profile. With Microservices, by contrast, we can have one independent Microservice solely for authentication and authorization, while another manages user profiles and registration. It might seem pointless to separate these two services, but consider that we might receive many authentication and authorization requests throughout the day yet only a few new user registrations; the independent scalability of Microservices then becomes a significant benefit.
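The split described above can be sketched as two tiny, independent modules with narrow APIs. Everything here is hypothetical (the credential store, the function names, the data shapes); the point is only that each service owns one responsibility and can be scaled on its own.

```python
# auth_service.py (hypothetical) -- handles the frequent requests,
# so this is the service we would scale aggressively.
_credentials = {("alice", "s3cret")}  # stand-in for a real credential store

def authenticate(username: str, password: str) -> bool:
    """Single responsibility: verify credentials, nothing else."""
    return (username, password) in _credentials

# profile_service.py (hypothetical) -- handles the rare registrations,
# so a single small instance would likely suffice.
_profiles = {}

def register(username: str, email: str) -> dict:
    """Single responsibility: create and store a user profile."""
    _profiles[username] = {"username": username, "email": email}
    return _profiles[username]
```

Because the two modules share no state, the authentication side can run on many instances while the profile side runs on one, which is the independent scalability the example is meant to show.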
Using Microservices as a Refactoring Tool
Using Microservices to refactor large-scale enterprise applications is not simple, but it is a practical alternative to working with a brittle monolith that cannot quickly accommodate even simple changes.
In order to refactor a monolith into a set of Microservices, we have to find seams in the code where we can move functionality into a separate module. Usually, this is done by replacing a local function call with a remote call, such as a REST operation. Whenever the system invokes this functionality, it will call the separate Microservice instead of the old legacy code. Once the system no longer depends on that legacy code, it can be removed, resulting in a slightly thinner monolith.
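A minimal sketch of such a seam might look like the following. The service URL, the shipping calculation, and the feature flag are all hypothetical; the point is that callers inside the monolith keep using the same function while the implementation behind it moves out.

```python
import json
from urllib import request

USE_MICROSERVICE = False  # feature flag: flip once the new service is live
SHIPPING_SERVICE_URL = "http://shipping-service/quote"  # hypothetical endpoint

def _legacy_shipping(weight_kg: float) -> float:
    # The old logic, still living inside the monolith.
    return 5.0 + 1.2 * weight_kg

def _remote_shipping(weight_kg: float) -> float:
    # The same operation, now a REST call to the extracted Microservice.
    payload = json.dumps({"weight_kg": weight_kg}).encode()
    req = request.Request(SHIPPING_SERVICE_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["price"]

def calculate_shipping(weight_kg: float) -> float:
    # The seam: every caller in the monolith goes through here,
    # so switching implementations touches exactly one place.
    if USE_MICROSERVICE:
        return _remote_shipping(weight_kg)
    return _legacy_shipping(weight_kg)
```

Once traffic runs through the remote path and behaves correctly, `_legacy_shipping` can be deleted, leaving the monolith slightly thinner.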
New features are also added to the system from time to time, which creates a good opportunity to delegate new functionalities to new Microservices, as opposed to inflating the monolith with more code.
Some years ago, I had the opportunity to work on the process of migrating a huge monolith to the cloud. The monolith itself remained relatively the same, but was moved from a local datacenter to a cloud environment. Most of its new functionalities are now provided by separate services running in this same environment. However, they now utilize technologies that were not available decades ago when the first version of this software went into production.
Here are some important aspects that we need to take into consideration when using Microservices to refactor a monolith:
The first thing to consider is that the refactoring must be incremental. We should not refactor the whole system and promote it to production when it is done. Instead, we should use the opportunities we have to refactor small pieces of the system, since this approach allows us to limit risk. Ideally, we should start with a new functionality, then build it as a separate Microservice so we can put it into production and observe.
The integration of Microservices with the rest of the system is also important. The preferred method would be to use REST calls from inside the monolith, but the less preferable method of database integration could also be a valid start if the monolith proves to be difficult to change.
Another crucial aspect is data management. Most of the time, splitting a monolith into Microservices means splitting the database that the monolith consumes. The ideal is to avoid sharing the same database among different applications, which usually implies some data duplication. We then have to embrace eventual consistency: multiple copies of the same data may sometimes be inconsistent, but will eventually converge again.
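Eventual consistency can be illustrated with a toy event-driven sync between two copies of the same record. The dictionaries stand in for two real databases, and draining the event log by hand stands in for an asynchronous consumer; all names are made up for the sketch.

```python
# Two services each keep their own copy of the customer record.
monolith_db = {"cust-1": {"email": "old@example.com"}}
service_db = {"cust-1": {"email": "old@example.com"}}
event_log = []  # stand-in for a message queue between the two

def update_email(cust_id: str, new_email: str) -> None:
    """Write to the monolith's copy and publish an event for the other service."""
    monolith_db[cust_id]["email"] = new_email
    event_log.append({"type": "email_changed", "id": cust_id, "email": new_email})

def sync_events() -> None:
    """Apply pending events; in a real system a consumer would run this asynchronously."""
    while event_log:
        event = event_log.pop(0)
        service_db[event["id"]]["email"] = event["email"]
```

Between `update_email` and `sync_events` the two copies disagree; that window of inconsistency is the price of not sharing a database, and it closes as soon as the events are applied.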
At first glance, one of the reasons a monolith becomes a monolith is that it's harder to maintain multiple separate services than a single one. It's definitely a challenge to build and keep multiple Microservices healthy, and architectural governance is essential to do so. It's not possible to simply build and deploy without considering all the nuances of the Microservices themselves. Microservices afford great adaptability, along with the ability for software to evolve along with business and technology, but it's not an easy process.
The final aspect I'd like to point out is the cost/benefit analysis. Before choosing a functionality of the monolith to rebuild as a Microservice, we need to carefully evaluate whether we really need to do so, or whether it's better to leave it untouched within the legacy system and look for other things to refactor. Building a new Microservice also means incurring all the risks of building software, without comparably visible benefits, so it must be undertaken judiciously.
By using a Microservices Architecture, we can build large scale, modular software systems with each component having a single responsibility. These systems can be developed by small, independent teams, which allows us to use the best technology for the job. Furthermore, each component can be independently deployed and scaled so that if a certain module needs to handle a heavier load, the system can count on multiple components to work in parallel. It's also easier to improve or even rewrite a single component without affecting the other components of the system, and to adopt new technologies as they become available.
Traditional monolithic applications are brittle and difficult to change, but we can combine them with Microservices that replace individual functionalities of the monolith one at a time. This allows us to gradually rebuild the system on a modern and sustainable architecture, one that can evolve alongside the business and technology.
This strategy allows the system to evolve while minimizing costs and risks. It also gives us the chance to try out new technologies, new approaches to old problems, and new data management options.
Want a deeper look at how Microservices Architecture can refactor your monolith? I explore the subject in greater detail in my whitepaper - give it a read and let me know what you think!
Rafael Romão is a .NET Engineer at Avenue Code. He has studied software development since 1998 and has worked as a software developer since 2002. His main areas of interest are software architecture and design, as well as the development of highly productive teams.