Microservices
The traditional monolithic application architecture is easy to develop, test, and deploy. Unfortunately, these applications tend to grow large and complex, which makes it hard for a team to work on them together. A big codebase intimidates new developers and increases the coordination overhead within the team, and because every change ships as one big release, deploying a monolith carries a higher risk of failure.
With the microservice architecture, the big monolithic application is divided into smaller "micro" services. A new team member can understand a single service far more quickly than the whole application at once, and one small service is also quicker to build and faster to deploy. New technologies can be adopted more easily as well, because the services don't all have to be written in the same language. So suppose you want to adopt microservices: how do you actually deploy them in your organization?
1. Containerize
Virtual machines are slow to start, big to transfer, and far less efficient than containers. If you are going to run a lot of small services, it is much easier and more efficient to package the software in containers than to spin up a virtual machine for every instance. The code in Git can be packaged into a container image and run unchanged on the developer's laptop, on an on-premises server, or in the cloud. This parity between environments is important to prevent things breaking as code moves from one environment to the next; think of the classic "it works on my machine" exchange between developers and operations. The most popular container software is Docker.
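As a minimal sketch of what that packaging looks like, here is a hypothetical Dockerfile for a small Python web service. The file names, base image, and port are assumptions for illustration, not part of the original article:

```dockerfile
# Hypothetical example: package a small web service into a container image.
# Assumes the service lives in app.py and lists its dependencies in requirements.txt.
FROM python:3-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image.
COPY . .

# The port the service listens on (an assumption for this sketch).
EXPOSE 5000

CMD ["python", "app.py"]
```

Building the image with `docker build -t example/web-service .` and running it with `docker run -p 5000:5000 example/web-service` gives you the same container on a laptop, an on-premises server, or a cloud VM.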
2. Distributed Applications
The future of applications is distributed. These fault-tolerant, easily scalable applications are highly available and can scale out to handle high concurrency. That increases customer satisfaction: outages should be rare, and thanks to auto-scaling the application should stay responsive even under heavy load.
To run distributed applications in containers, it's best to follow a methodology like the twelve-factor app. When services are stateless, more containers can simply be started when load is high and removed again when load drops. Upgrades also become seamless: because multiple containers run in parallel, customers can still be served while the software is being upgraded. This is called a zero-downtime deployment.
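As a rough illustration (not from the original article), scaling a stateless service becomes a one-line operation once it runs in containers. This assumes a stateless service named `web` defined in a Docker Compose file, like the sketch in the orchestration section below:

```sh
# Hypothetical example: scale a stateless "web" service up and down.
# Older Compose versions:
docker-compose scale web=4
# Newer Compose versions:
docker-compose up -d --scale web=4
# Scale back down when load drops:
docker-compose up -d --scale web=1
```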
3. Orchestration
Now that we want to run many different containers, we need to think about how to make them work nicely together. This is where orchestration comes in. The containers running our application need to be monitored, jobs need to be scheduled, and failover needs to be handled. We can write some basic orchestration ourselves using the tools that ship with Docker, or we can opt for an orchestration package; a small Compose sketch follows the list below. Here are some options to check out:
- Kubernetes
- Mesosphere
- Docker Machine, Swarm, and Compose
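To give an idea of the Compose route, a single file can describe how a small set of containers fits together. The service names, images, and ports here are assumptions for illustration, not from the original article:

```yaml
# docker-compose.yml -- hypothetical two-service setup
version: "2"
services:
  web:
    image: example/web-service:latest  # the stateless web service from the Dockerfile sketch above
    ports:
      - "5000"                         # publish container port 5000 on a random host port,
                                       # so several copies can run side by side
    environment:
      - REDIS_HOST=redis               # twelve-factor style: config comes from the environment
    depends_on:
      - redis
  redis:
    image: redis:alpine                # backing service used by the web containers
```

Running `docker-compose up -d` starts both services on a single machine; tools like Swarm or Kubernetes add scheduling and failover across a whole cluster of machines.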
4. On-Premises PaaS
The orchestration tools above can be used to deploy services as well as full applications. If you are mainly interested in quickly deploying your apps, take a look at Dokku (single machine) or Deis (multi-machine). These tools let you quickly deploy PHP, Ruby, Python, or Java applications on your own machines, in orchestrated containers. They handle the placement of containers across nodes, health checks, failover, and all the other things you don't want to deal with as a developer. Tools like Dokku and Deis work on-premises as well as in the cloud.
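To give an idea of the workflow, deploying to Dokku is essentially a git push. This is a sketch with an assumed host name and app name, not taken from the original article:

```sh
# On the Dokku host: create an application (the name "myapp" is an assumption).
dokku apps:create myapp

# On the developer's laptop: add the Dokku host as a git remote and push.
git remote add dokku dokku@dokku.example.com:myapp
git push dokku master
# Dokku detects the language (PHP, Ruby, Python, Java, ...), builds a container,
# and starts it, much like a hosted PaaS would.
```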