What is it?
Microservices are small modules, each responsible for a single business task or a single class of tasks. The main purpose of this division is the ability to change a particular microservice without affecting any of the components connected to it. The application's business logic is split into separate parts, each of which is a small application with a single responsibility. There is no limit on the number of such applications, and they communicate with each other through an API based, for example, on HTTP.
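As a minimal sketch of the idea, here is a toy "pricing" service with a single responsibility, exposed over plain HTTP, and a client calling it. The service name, the price data, and the endpoint shape are all hypothetical; only Python's standard library is used.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical "pricing" microservice with one responsibility:
# it only answers price lookups, nothing else.
class PricingHandler(BaseHTTPRequestHandler):
    PRICES = {"book": 12.5, "pen": 1.2}

    def do_GET(self):
        item = self.path.lstrip("/")
        body = json.dumps({"item": item, "price": self.PRICES.get(item)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PricingHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Any other service, written in any language, talks to it over HTTP.
with urlopen(f"http://127.0.0.1:{server.server_port}/book") as resp:
    data = json.loads(resp.read())

print(data["price"])  # 12.5
server.shutdown()
```

The caller knows nothing about how the pricing service is implemented; only the HTTP contract matters.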
Splitting the app into microservices gets rid of high code coupling.
If one of the microservices fails, it is quite likely that most of the application will still work, and this partial failure won't have serious consequences for users. Restoring one microservice takes far less time than locating and fixing errors in a monolithic application.
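This kind of graceful degradation can be sketched in a few lines. The recommendation service below is an assumption, and `fetch_recommendations` stands in for a real HTTP call to it:

```python
# Hypothetical sketch: degrade gracefully when one microservice is down
# instead of failing the whole page.
def fetch_recommendations(user_id):
    # Stand-in for a remote call; here it always fails.
    raise ConnectionError("recommendation service is down")

def render_page(user_id):
    try:
        recs = fetch_recommendations(user_id)
    except ConnectionError:
        recs = []  # the page still renders, just without this block
    return {"user": user_id, "recommendations": recs}

page = render_page(42)
print(page)  # {'user': 42, 'recommendations': []}
```

The failing service costs the page one feature, not the whole response.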
Scaling can be done by placing services on different servers. If an application spans several microservices with moderate computational requirements, they can share one host. For more demanding microservices, a more powerful host is a better choice. Microservices that perform non-blocking tasks can be parallelized.
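To illustrate the last point, here is a hypothetical sketch where three independent microservice calls are issued in parallel instead of sequentially; `time.sleep` stands in for network latency:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a remote microservice call with ~0.1 s latency.
def call_service(name):
    time.sleep(0.1)
    return f"{name}: ok"

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # Fan out the three independent calls at once.
    results = list(pool.map(call_service, ["users", "orders", "stock"]))
elapsed = time.perf_counter() - start

print(results)
print(f"{elapsed:.2f}s")  # roughly one call's latency, not three calls' worth
```

Done sequentially, the three calls would take about 0.3 s; in parallel they take about as long as the slowest single call.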
Different services can be developed by separate development teams.
This is especially useful for distributed development, when various microservices are built by different teams.
Each microservice can use any technology which is applicable within the scope of the given task.
For example, in web applications developers can use Ruby (Sinatra) for some microservices and node.js for others, because they are isolated from each other and the technology used to implement one won't affect the behavior of the application as a whole.
First, the negatives. When you create microservices, you're adding inherent complexity to your code. You're adding overhead. You're making it harder to replicate the environment (e.g. for developers). You're making debugging intermittent problems harder.
Let me illustrate a real downside. Consider hypothetically the case where you have 100 microservices called while generating a page, each of which does the right thing 99.9% of the time. But 0.05% of the time they produce wrong results. And 0.05% of the time there is a slow connection request where, say, a TCP/IP timeout is needed to connect and that takes 5 seconds. About 90.5% of the time your request works perfectly. But around 5% of the time you have wrong results and about 5% of time your page is slow. And every non-reproducible failure has a different cause.
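These back-of-the-envelope numbers can be checked directly. With 100 calls, each correct 99.9% of the time, wrong 0.05% of the time, and slow 0.05% of the time:

```python
n = 100
p_wrong = 0.0005
p_slow = 0.0005
p_ok = 1 - p_wrong - p_slow          # 0.999 per call

all_perfect = p_ok ** n              # every one of the 100 calls is fine
some_wrong = 1 - (1 - p_wrong) ** n  # at least one wrong result
some_slow = 1 - (1 - p_slow) ** n    # at least one slow call

print(f"{all_perfect:.1%} perfect, {some_wrong:.1%} wrong, {some_slow:.1%} slow")
# 90.5% perfect, 4.9% wrong, 4.9% slow
```

So even with each service at "three nines plus" per call, only about nine requests in ten go perfectly.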
Unless you put a lot of thought around tooling for monitoring, reproducing, and so on, this is going to turn into a mess. Particularly when one microservice calls another that calls another a few layers deep. And once you have problems, it will only get worse over time.
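One standard piece of that tooling is a correlation ID carried through every hop of a call chain, so logs from different services can be tied back to one request. A hypothetical in-process sketch (real systems pass the ID in an HTTP header):

```python
import uuid

# Every service logs with the same request ID, so one failing request
# can be traced across all the services it touched.
def log(request_id, service, message):
    print(f"[{request_id}] {service}: {message}")

def service_c(request_id):
    log(request_id, "C", "handling")
    return "data"

def service_b(request_id):
    log(request_id, "B", "calling C")
    return service_c(request_id)

def service_a(request_id=None):
    # Generate an ID at the edge if the caller didn't supply one.
    request_id = request_id or uuid.uuid4().hex[:8]
    log(request_id, "A", "calling B")
    return service_b(request_id)

service_a("req-1234")
```

Grepping the logs for `req-1234` then reconstructs the whole A → B → C chain for that single request.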
OK, this sounds like a nightmare (and more than one company has created huge problems for themselves by going down this path). Success is only possible if you are clearly aware of the potential downsides and consistently work to address them.
So what about that monolithic approach?
It turns out that a monolithic application can be modularized just as well as microservices. And a function call is both cheaper and more reliable in practice than an RPC. So you can build the same thing, except that it is more reliable, runs faster, and involves less code.
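The same boundaries can be kept inside one process. In this illustrative sketch, each "service" is a class with a narrow interface and an explicit dependency, but the calls between them are plain function calls rather than RPCs (all names are hypothetical):

```python
# Module with a single responsibility: price lookups.
class PricingModule:
    PRICES = {"book": 12.5}

    def price(self, item):
        return self.PRICES.get(item)

# Module that depends on pricing through an explicit, injected boundary.
class CheckoutModule:
    def __init__(self, pricing):
        self.pricing = pricing

    def total(self, items):
        # A plain in-process call: no network, no serialization,
        # no partial failure to handle.
        return sum(self.pricing.price(i) for i in items)

checkout = CheckoutModule(PricingModule())
print(checkout.total(["book", "book"]))  # 25.0
```

The separation of concerns is the same as with microservices; only the transport between modules differs.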
OK, then why do companies go to the microservices approach?
The answer is that as you scale, there is a limit to what you can do with a monolithic application. After enough users and requests, you reach a point where databases don't scale, web servers can't keep your code in memory, and so on. Furthermore, the microservice approach allows independent and incremental upgrades of your application. A microservice architecture is therefore a solution to scaling your application.