
10 DevOps Tools to build complete DevOps processes

Updated: Apr 13


This is the written version of my YouTube video ✍️ 🙂


Introduction


This article is aimed at giving you a short but comprehensive overview of the core DevOps tools you need to build DevOps processes. So let's get to it right away! 👏



1 - CI/CD Platform

Tools: Jenkins, GitLab CI, GitHub Actions, Azure DevOps


At the very core of DevOps we have a release pipeline, commonly known as a CI/CD pipeline. So a CI/CD tool is the most essential part of a DevOps engineer's toolkit.


The most popular and still most widely used one is Jenkins. There are alternatives like GitLab CI, which is becoming really good, as well as GitHub Actions, CircleCI and many more.


So these tools are about creating automated release pipelines, which run tests, build the application, perform different types of application scanning and deploy to the end environment:

Example CI/CD pipeline

And that involves integrations with:

  • Git repositories on GitLab, GitHub etc.

  • Docker registry

  • Cloud platforms


It also involves writing the pipeline as code, with a Jenkinsfile for example.
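
To give a feel for it, here is a minimal sketch of what a declarative Jenkinsfile could look like. The stage contents are placeholders: it assumes a project tested with Gradle, packaged as a Docker image and deployed to Kubernetes.

```groovy
// Minimal declarative Jenkinsfile sketch; commands and names are placeholders
pipeline {
    agent any

    stages {
        stage('Test') {
            steps {
                // run the project's test suite (assumes a Gradle build)
                sh './gradlew test'
            }
        }
        stage('Build & Push Image') {
            steps {
                // package the app as a Docker image and push it to a registry
                sh "docker build -t my-registry/my-app:${BUILD_NUMBER} ."
                sh "docker push my-registry/my-app:${BUILD_NUMBER}"
            }
        }
        stage('Deploy') {
            steps {
                // roll out to the end environment, e.g. a Kubernetes cluster
                sh 'kubectl apply -f k8s/deployment.yaml'
            }
        }
    }
}
```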



2 - Cloud Platform

Tools: AWS, Azure, Google Cloud


Okay, we're testing, releasing and deploying the application, but where are we deploying it to?

Deploy application to end environment

We need a deployment environment and that's where cloud platforms, like AWS, come in.


So you need to know the AWS services: the virtual instances, security groups around servers, access to the application running on the server, configuring the server and so on.
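
As a rough illustration, here is what creating a security group and launching a virtual instance could look like with the AWS CLI. All IDs and names below are placeholders.

```bash
# Hypothetical AWS CLI sketch: create a security group around the server
aws ec2 create-security-group \
    --group-name my-app-sg \
    --description "Allow HTTP access to my app"

# open port 80 for incoming traffic to the application
aws ec2 authorize-security-group-ingress \
    --group-name my-app-sg \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# launch a virtual instance (EC2) with that security group
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.micro \
    --security-groups my-app-sg
```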



3 - Docker


Okay cool, we are releasing and deploying applications to AWS virtual machines, let's say, but what are we releasing exactly? 🤔 And in which form? You need to understand how the application is packaged and how it runs on the end environment.

Container that has everything the software needs packaged inside

The new standard way of packaging and running applications is Docker. Docker packages software into standardized units called "containers" that have everything the software needs to run including libraries, system tools, code and runtime.


And this improves the development and deployment process. You can quickly deploy and scale applications into any environment and know your code will run. 😎


Again there are similar tools, but Docker wins here as well. 🐳


So we would create Docker images in the CI/CD pipeline and run the application as a Docker container on an AWS server, for example:

Docker in CI/CD pipeline
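
To give a concrete idea, here is a minimal Dockerfile sketch, assuming a Node.js application. The base image, port and file names are illustrative.

```dockerfile
# Minimal Dockerfile sketch for a Node.js app; names and port are placeholders
FROM node:18-alpine

WORKDIR /app

# install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

# copy the application code and define how the container starts
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

The pipeline would then build the image with "docker build -t my-app:1.0 ." and the target server would run it with "docker run -d -p 3000:3000 my-app:1.0".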


4 - Kubernetes

Engineers going wild with Docker containers

Now Docker made it easy to create and run applications, so engineers went wild and scaled applications up, because it is so easy to do with Docker. 🔥


But that made the lives of the application operations team harder again. With DevOps we are saying there are no separate Dev and Ops teams, we want to unify them. So how do we make running dockerized microservices applications easier?


Docker is lightweight and cool, but ephemeral and stateless.


Challenges:

  • How do we restart applications when they fail?

  • How do we scale and replicate applications or microservices when they get a lot of requests?

  • How do we run distributed applications like database clusters?

  • How do we make sure the application is always available, even if some parts of it fail?

  • And how do we manage the network of hundreds of containers running on multiple servers?

So Kubernetes, which is a container orchestration platform, comes to the rescue with solutions to all of these.

Kubernetes features

Kubernetes has an auto-healing feature and a network layer that makes thousands of containers seem like they're part of one server.


It has auto-scheduling and much more.


Scaling applications up and down as needed is super easy: you just specify replica counts in Kubernetes Deployments. And you can also scale the cluster itself by easily adding additional worker nodes or control plane nodes. 🚀
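
As a small sketch, this is what that replica count looks like in a Kubernetes Deployment. The names and image below are placeholders.

```yaml
# Minimal Kubernetes Deployment sketch; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3              # scale up or down by changing this number
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0
          ports:
            - containerPort: 3000
```

You can also scale on the fly with "kubectl scale deployment my-app --replicas=5".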



5 - Monitoring and Alerting

Tools: Prometheus, Grafana


Cool, we have thousands or even tens of thousands of containers, which is great, and Kubernetes manages a lot of the operations automatically. ✅


And that's great, but what if things go wrong in the cluster? 😱

Let's say we have applications equipped with great logging and we have all the information, but we can't possibly look into the logs and metrics of thousands of applications manually to see what's going on. Maybe someone is trying to hack into our application and our application is logging and screaming about it, but we don't know.

proper monitoring is needed

What about third-party applications? Maybe a database is under heavy load or the servers are under attack.


Somebody's trying to SSH into them or doing a port scan to see which ports are open, and so on.


With so much workload, we need automatic monitoring and alerting in place that uses the data we have in the logs and alerts us if something deviates from normal behavior. Again, that could be a security attack, or maybe a harmless misconfiguration in a Kubernetes manifest file that has created a mess in the cluster. 🤷🏻‍♂️


So monitoring and alerting is essential on all levels: infrastructure, runtime and the application itself. For Kubernetes specifically, a popular monitoring tool is Prometheus, which comes with a whole stack for monitoring, alerting and visualizing metrics data:

Prometheus monitoring stack
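
To make that concrete, here is an illustrative Prometheus alerting rule. The metric name and threshold are assumptions and depend on what your applications actually expose.

```yaml
# Illustrative Prometheus alerting rule; metric and threshold are placeholders
groups:
  - name: example-alerts
    rules:
      - alert: HighErrorRate
        # fire if more than 5% of requests return a 5xx status over 5 minutes
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High 5xx error rate detected"
```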


6 - Infrastructure as Code with Terraform


Talking about issues in the cluster: they may make the cluster crash and get into a state that we can't recover from. 😣

How can we recover the state

Imagine we have configured the cluster on AWS: thousands of servers with tens of thousands of containers running on them, plus monitoring and 100 other services in the cluster. And now it's all gone, because of misconfiguration issues, hacking attacks or whatever.


How can we possibly recover all that? How can we recreate this state? That's where "Infrastructure as Code" helps, because doing it manually is really difficult, sometimes impossible, or would take weeks or months. ⏰


So with Infrastructure as Code we actually script this entire setup: spinning up AWS resources, creating the Kubernetes cluster, installing all the services. And if something happens, we just run the script again and it recreates everything:

Using IaC to automate infrastructure provisioning

Terraform is the most popular tool for infrastructure as code.
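
For a taste of what that looks like, here is a minimal Terraform sketch that provisions a single virtual server on AWS. The region, AMI ID and names are placeholders.

```hcl
# Minimal Terraform sketch; region, AMI and names are placeholders
provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  tags = {
    Name = "my-app-server"
  }
}
```

Running "terraform apply" creates the resources, and running it again after a disaster recreates the same state from the code.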



7 - Configuration Management

Tools: Ansible, Chef, Puppet


Now sometimes we're working directly on the operating system: installing packages, doing security patches etc., for example on Kubernetes worker nodes. That's where configuration management tools like Ansible may be helpful.

Manually configuring servers

Again, at the scale of Kubernetes we may have hundreds or thousands of worker nodes, and if you need to apply a security patch to those or upgrade to the latest container runtime, you don't want to be logging into each server manually and executing scripts. 🙇🏼‍♀️



With Ansible, you just write a script once, provide it with a list of servers as targets, and it will automatically execute it on those targets and give you a nice summary of the state 💪:

Using Ansible to automate configuration
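
For example, a playbook for the security-patch scenario above could look roughly like this. The "worker_nodes" host group is a placeholder from your inventory, and the task assumes Debian/Ubuntu servers.

```yaml
# Illustrative Ansible playbook; host group and assumptions are placeholders
- name: Apply security patches on all worker nodes
  hosts: worker_nodes
  become: true
  tasks:
    - name: Upgrade all packages to the latest version (Debian/Ubuntu)
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist
```

You would run it with "ansible-playbook -i inventory.ini patch-workers.yaml" against your list of worker nodes.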

8 - Code Editor

Tool: Visual Studio Code

All DevOps tools are configurations written as code

Now "Infrastructure as Code" is code, "Configuration as Code" is also code.


Again, if you're writing a Jenkinsfile, that's also code.


Or a Dockerfile, or Kubernetes manifest files.



So we need to write all of these in a code editor such as Visual Studio Code (VS Code). You can install a bunch of plugins for specific languages or tools that actually help you write those scripts, with integrated auto-completion, error checking and so on:

Writing configuration files in a code editor

And it's a simple tool, but it is definitely a needed one in DevOps.
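
As an example, a team can even check recommended extensions into the project so everyone gets the same tooling hints. The extension IDs below are the commonly used Docker, Kubernetes, Terraform and YAML plugins, though IDs may change over time.

```json
// .vscode/extensions.json: VS Code reads this file (comments are allowed here)
{
  "recommendations": [
    "ms-azuretools.vscode-docker",
    "ms-kubernetes-tools.vscode-kubernetes-tools",
    "hashicorp.terraform",
    "redhat.vscode-yaml"
  ]
}
```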



9 - Version Control

Tool: Git


Now obviously you aren't working alone. Well, hopefully not! 👀 But rather in a team with other engineers. 🫂


So as a DevOps engineer you aren't coding the application features themselves, but you are writing pipeline code, Dockerfiles, Helm charts etc. So basically code that is part of the application, or you are writing infrastructure as code scripts, which live in a separate project:

Collaborating on code

Well, you need to make that code available and transparent for other engineers on the team, ideally with a history of changes and ideally with its own release pipeline, so infrastructure changes are applied the same way as application changes:

Treat IaC same way as application code

IaC and configuration as code in Git repos

Well, that's where you need knowledge of Git to do all that with your infrastructure code, as well as to collaborate with other engineers on code changes.


Git is a version control system, which enables:

✅ Tracking infrastructure changes

✅ Comparing versions and reverting changes easily

✅ Code reviews, approval workflows etc.

✅ Triggering pipelines to automate testing and deployment of infrastructure configurations
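
As a quick illustration, a typical workflow for an infrastructure change could look like this. The branch and file names are made up.

```bash
# Typical Git workflow for an infrastructure change; names are illustrative
git checkout -b update-worker-node-size   # work on a feature branch
git add terraform/main.tf                 # stage the infrastructure change
git commit -m "Increase worker node instance type"
git push origin update-worker-node-size   # then open a merge/pull request for review
```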



10 - Linux Operating System and Command Line


Now this is an obvious one: you can't do much if you don't know Linux and the Linux command line. 😌


➡️ A Docker container is like a lightweight virtual computer, mostly based on Linux

➡️ Worker nodes in Kubernetes are servers, mostly running a Linux operating system

➡️ Physical or cloud servers mostly run a Linux OS

Learn Linux

So even with Infrastructure as Code and all the automation, you will still be working a lot with Linux and the command line interface.
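
To give an idea, these are the kinds of everyday commands you'll constantly reach for. Host and service names here are illustrative.

```bash
# Everyday commands on a Linux server; host and service names are illustrative
ssh admin@10.0.0.5                          # connect to a remote server
systemctl status docker                     # check whether a service is running
journalctl -u docker --since "1 hour ago"   # read a service's logs
df -h                                       # check disk usage
top                                         # watch CPU and memory usage live
```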


So that's kind of a must here.



Combine those tools 🚀


Now as you can see, when building DevOps processes, these tools need to be combined and used together. So even if you know them individually, you need to learn how to integrate them.


Combining DevOps tools is important

Like deploying from Jenkins to a Kubernetes environment, which is running on AWS and has AWS service integrations, with all of that provisioned with Terraform.


And again, for that Terraform code that lives in a Git repository, you may build a CI/CD pipeline. And all of this is containerized; even Jenkins instances may be running as containers.


And learning these tools in isolation is already challenging, but learning to combine them in a secure, properly configured way with industry best practices is way more challenging. And that's exactly why we created the DevOps Bootcamp and are now working on a DevSecOps course to teach exactly that: building complete DevOps and DevSecOps processes with all these tools and even more. 💪

DevOps and DevSecOps processes

And more importantly, we teach the underlying concepts for each step so that you can easily swap out the tools 🛠 when you need to. Because when you understand what you are doing and why on a conceptual level, tools just become a means to an end and are easily replaceable. 💡😀


And for us that was an extremely important part of creating those courses. If you want to learn all that or get more details, you can check out more information here:


Now I hope I was able to give you some valuable, quick information in this blog post. 😊


Feel free to share it with others who want to get a short overview of DevOps tools, and also let us know in the comments which interesting and exciting DevOps tools you work with besides the ones I mentioned here. 💬


 

Like, share and follow 😍 for more content:
