Kubernetes and Docker: 9 Reasons Why DevOps Is Better with Docker & Kubernetes
Kubernetes and Docker can help companies overcome one of their main challenges: a long time to market, which usually results from a slow development process.
When deploying applications, most teams face friction between Dev and Ops, because the two departments work on the same application in completely different ways.
Wouldn’t it be nice if they worked together without misunderstandings and shortened time to market?
I’ve assembled this list of advantages that DevOps with Docker & Kubernetes can provide compared to the traditional DevOps approach.
The traditional approach to DevOps
- In the traditional DevOps approach, developers write code and commit it to a Git repository.
- Then they check how it works locally and in a development environment.
- They build the code with a CI tool such as Jenkins, which also runs functional tests during the build. If the tests pass, the changes are merged into a release branch.
- Tests are run in a staging environment, and at the end of the sprint a release is issued. System administrators prepare deployment scripts for production using Ansible, Puppet, or Chef.
- Finally, system administrators roll out the changes to production, i.e. they update the application version.
Problems of the traditional approach
- The first problem is that system administrators and developers use different tools. For example, most developers don’t know how to work with Ansible, Puppet, or Chef. A common outcome is that the task of preparing a release falls on the shoulders of system administrators. But system administrators often do not understand how the application should work, because developers are the ones with expertise in that area.
- The second problem is that development environments are usually updated manually, without automation. As a result, they are unstable and break down constantly: changes made by one developer break changes made by another, and tracking down problems takes a long time. The end result is a slow time to market.
- The third problem is that development environments can differ significantly from staging and production; staging, in turn, may not resemble production at all. This leads to many difficulties. For example, a release prepared by developers may not work correctly in the staging environment, and even if tests pass in staging, issues may appear unexpectedly in production. Meanwhile, rolling back a broken version in production is far from trivial, even with Ansible, Puppet, or Chef.
- The fourth problem is that writing Ansible playbooks is time-consuming and difficult. It is easy to lose track of the changes made to them as an application moves from version to version, which leads to a high number of mistakes.
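As a hypothetical illustration (the application name, URL, and paths are invented), a deployment playbook tends to accumulate version-specific details that have to be hand-edited on every release:

```yaml
# Hypothetical Ansible tasks: every release requires hand-editing
# version numbers and paths scattered across the playbook.
- name: Download application release
  get_url:
    url: "https://artifacts.example.com/myapp-1.4.2.tar.gz"  # bumped manually each release
    dest: /opt/myapp/releases/myapp-1.4.2.tar.gz

- name: Unpack release
  unarchive:
    src: /opt/myapp/releases/myapp-1.4.2.tar.gz
    dest: /opt/myapp/current
    remote_src: yes

- name: Restart the service
  service:
    name: myapp
    state: restarted
```

Multiply this by every environment and every release, and it becomes clear how version drift and mistakes creep in.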
Improvement of DevOps approach with Docker
- The main advantage of this approach is that both developers and system administrators use the same tool – Docker.
- Developers build Docker images from Dockerfiles during development. They build them on their local machines and run them in a development environment.
- System administrators use the same Docker images to update the staging and production environments with Docker. Importantly, Docker containers are not patched when updating to a new software version: a new version of your software means a new Docker image and a fresh container, not a patch applied to the old container.
- As a result, you get immutable dev, staging, and production environments. This approach has several benefits. First, you gain a high level of control over all changes, because every change is an immutable Docker image and container, and you can roll back to the previous version at any moment. Development, staging, and production environments become far more similar to each other than when using Ansible. With Docker, you can be confident that a feature that works in the development environment will also work in staging and production.
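As a minimal sketch (assuming a hypothetical Node.js service; the file names and port are illustrative), a single Dockerfile produces the identical image that every environment runs:

```dockerfile
# Hypothetical Dockerfile for a small Node.js service.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

A developer builds it locally with `docker build -t myapp:1.0.1 .`; operations deploy the very same `myapp:1.0.1` image to staging and production, and rolling back means simply running the previous image tag.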
How to get DevOps superpowers with Kubernetes and Docker
- Creating an application topology with multiple interconnected components becomes much easier and more understandable than with Docker alone.
- Configuring load balancing is greatly simplified by the built-in Service and Ingress concepts.
- Thanks to built-in Kubernetes features such as Deployments, StatefulSets, and ReplicaSets, rolling updates and blue/green deployments become very easy.
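As a hedged sketch (names and host are illustrative), a Service load-balances traffic across an application's pods, and an Ingress routes external HTTP traffic to that Service:

```yaml
# Hypothetical manifests: the Service balances traffic across pods
# labelled app=myapp; the Ingress exposes it to outside traffic.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```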
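For example, a Deployment with a rolling-update strategy (the names and image tag below are hypothetical) replaces pods gradually, keeping the application available throughout the update:

```yaml
# Hypothetical Deployment: pods are replaced one at a time,
# so the service stays available during the update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0.1
          ports:
            - containerPort: 3000
```

Updating is then a matter of `kubectl set image deployment/myapp myapp=myapp:1.0.2`, and `kubectl rollout undo deployment/myapp` reverts a bad release.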
- You can build CI/CD around Helm, which is more convenient than working with plain Docker containers, for these reasons:
- Helm charts are more production-ready and stable than individual Docker images. You have probably run into the situation where you tried to wire different Docker containers into a joint topology and failed because the images were not designed for that kind of interconnection.
- Helm provides a high-level template language and a concept of application releases that can be rolled back if needed.
- Moreover, you can use existing Helm charts as dependencies for your own charts, which lets you build complex topologies from third-party building blocks.
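As a hedged example (the chart name and versions are illustrative; the PostgreSQL chart and repository shown are Bitnami's publicly available ones), a `Chart.yaml` can pull in an existing chart as a dependency instead of reinventing it:

```yaml
# Hypothetical Chart.yaml: the application chart reuses an existing
# PostgreSQL chart as a third-party building block.
apiVersion: v2
name: myapp
version: 0.1.0
appVersion: "1.0.1"
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: https://charts.bitnami.com/bitnami
```

`helm install myapp ./myapp` then deploys the whole topology as one release, and `helm rollback myapp 1` reverts it to revision 1 if something goes wrong.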
- Kubernetes supports multi-cloud deployment scenarios (AWS, Google, Hidora, or another hosting provider) out of the box through Federation or service mesh tools.
Have you simplified your DevOps processes using Docker and Kubernetes? Please share your experience!