By unveiling “Project Pacific”, VMware acknowledged that Kubernetes will be a foundation of vSphere, its virtualization offering.
This event marks the beginning of a new era: Kubernetes is now a de facto standard for running container workloads in production, and containers are now treated as peers of virtual machines.
Why Should I Care (As an I.T. Engineer, Project Manager, C.T.O. or Sales)?
Containers, Containers, Containers…
Let’s take a look back at 2013: Docker was presented at PyCon by dotCloud, a PaaS provider.
Six years later, this tool has been widely adopted by developers, disrupting our industry by changing the way developers package, share and deliver applications.
The core idea of Docker is to run applications with lightweight isolation, packaged with their dependencies so they are self-contained. No more conflicts between the PHP or JDK versions of legacy and brand-new apps: each “container” ships its own dependencies and does not step on the others’ toes.
So much hype for a developer’s tool? The IT industry has seen these kinds of technological waves before: what is the point?
Docker is a tool which automates answers to recurring concerns: which port does the application listen on? Which directories are required? What is the underlying security model? These elements could be documented by hand, but then you risk a drift between expectation and reality.
By making this metadata (almost) mandatory, it has never been easier to share concerns across teams, and to allow opportunistic learning “only when you need it”: you can start with a minimal viable image, then iterate on directories, next on networking, or whatever is required.
This behavior is perfectly aligned with the Agile principles: focusing on the value that matters. Docker delivered (or tried to deliver) the premise of the “DevOps” culture, which applies the Agile principles from development through to operations.
… but what about Ops/Sysadmin?
The world of system administrators (and of any role related to operational IT: SREs, platform ops, etc.) felt that its concerns had been left by the side of the road.
Iterating on tiny pieces of software is not often doable when your daily concern is stability and security: Agile principles might be understood, but are harder to apply in this area.
By shifting left tasks which were usually handled by the “ops” world, Docker raises many questions about the challenges of a well-running production: what about security, deployments, resource usage, etc.? At first, these challenges were only partially answered, or not at all.
Kubernetes comes from this world: it is an operational model based on what we learnt from a decade of configuration management, and it aims to make containers (Docker, but not only) fit to run in production.
Fostering Cross-Silos Awareness
Kubernetes (with Docker) is a way to share awareness of each team’s concerns; in other words: “translate the definitions of local optima so teams can collaborate on a global optimum”.
If you want to align all the teams on your organization’s business, defining OKRs should be a priority. What if the IT department had a tool to turn OKRs into an automated technical asset? Stop searching for this unicorn: Kubernetes could be a way to do it!
Open source communities often refer to Docker as a tool for “empathy as code”: this is exactly the point!
Technical Tools for Business (and not the opposite)
Technical tooling is fun for software engineers (dev & ops), but it should serve a purpose for the organization, or it’s only a hobby.
As a business person, what do Kubernetes (and containers) put on the table?
Let’s consider the adage “from idea to production”. This sentence illustrates the value stream of our businesses: a feature provides value only once it is used in production.
The time between stating a business hypothesis and validating it slows down the creation of value. But don’t rush for that reason alone: features need to run correctly in production, which requires stability! This lead time is a business metric worth measuring, so you can track any variation (increase or decrease) and make decisions.
If you add Kubernetes to the equation, you’ll benefit from a system where metrics are first-class citizens. As in any state-of-the-art engineering tool, metrics are key to every decision: how do you use computing resources efficiently? How do you scale and load-balance applications? How do you ensure security and audit every automated action? You need metrics!
Kubernetes also enables deployment patterns which were costly to implement before. For instance, consider “progressive deployments”, where the idea is to deliver changes progressively to subsets of users, or to ship them disabled and enable them later.
With Kubernetes, you can enable “canary routing” backed by the centralized metrics system: you can deploy a feature to only 2% of users, and let the system roll back automatically if more than 10% of those users’ requests end in errors.
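The rollback rule described above boils down to a threshold check on observed metrics. Here is a minimal sketch in Python, assuming the metrics system reports request and error counts for the canary cohort (the function and parameter names are illustrative, not a real Kubernetes API):

```python
def should_rollback(canary_requests: int, canary_errors: int,
                    max_error_rate: float = 0.10) -> bool:
    """Decide whether to roll back the canary: True when the observed
    error rate among canary users exceeds the threshold (10% here)."""
    if canary_requests == 0:
        return False  # no canary traffic yet: nothing to judge
    return canary_errors / canary_requests > max_error_rate

# 2% of users are routed to the canary; 15 of their 100 requests failed.
print(should_rollback(100, 15))  # True: 15% > 10%, roll back
print(should_rollback(100, 5))   # False: 5% is under the threshold
```

In a real cluster this decision is taken by a progressive-delivery controller evaluating metrics queries, but the principle is the same: the metric drives the deployment decision.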
Let that sink in: validating business assumptions without endangering production, that alone would be worth the effort.
And it’s only one of the numerous benefits…
What is Kubernetes?
Kubernetes is the Greek word for “helmsman” or “governor”. The project is an open-source orchestrator for container workloads.
Based on the principles of Borg, Google’s internal container scheduler, it was unveiled in 2014 by three Google engineers.
Since its 1.0 release in 2015, the project has been hosted by the Cloud Native Computing Foundation (CNCF), itself part of the Linux Foundation.
Written in Go, Kubernetes provides a set of loosely coupled primitives for managing container workloads. You can see it as a distributed cluster of basic building blocks.
To put it simply: it is a cluster of machines for running containers in a distributed fashion.
Built around a centralized API, Kubernetes implements an operational model based on the concept of a state machine: you describe the state in which you want the system, and Kubernetes is responsible for converging from the current state to that desired state, iteration after iteration.
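This convergence loop can be sketched with a toy reconciler that only tracks a replica count. The real controllers handle far more (scheduling, health, rollouts), but the principle is the same; all names here are illustrative:

```python
def reconcile(current: int, desired: int) -> int:
    """One iteration: move the observed replica count one step
    toward the declared (desired) count."""
    if current < desired:
        return current + 1  # start one more replica
    if current > desired:
        return current - 1  # stop one replica
    return current  # already converged

def converge(current: int, desired: int) -> list[int]:
    """Iterate until the observed state matches the declared state,
    recording each intermediate state."""
    states = [current]
    while states[-1] != desired:
        states.append(reconcile(states[-1], desired))
    return states

# Declared state: 2 replicas; observed state: 0 replicas.
print(converge(0, 2))  # [0, 1, 2]
```

Note that the loop never receives imperative commands (“start two replicas”): it only knows the desired state and keeps nudging reality toward it, which is what makes the model robust to failures.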
The difference with Docker? Kubernetes adds a layer of orchestration made of loosely coupled elements. For instance, deploying an application requires defining different objects for different purposes:
- Defining a “Deployment” to describe the state of your application instances.
- Example: “I want my web application to always have 2 healthy replicas running to handle production’s workload.”
- Defining a “Service” to expose the application and provide load-balancing:
- Example: “I want this application to always be reachable through this highly available private IP, on port 80. It load-balances across all the replicas.”
- Defining an “Ingress” to publish the application externally:
- Example: “I want every incoming HTTP request with the hostname set to ‘www.company.org’ to be forwarded to the service on port 80.”
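In practice these three objects are declared as YAML manifests sent to the Kubernetes API. To make the structure concrete, here is a minimal sketch expressed as Python dictionaries mirroring the manifest fields (names such as “web” and the image tag are illustrative):

```python
# A Deployment: "always 2 healthy replicas of my web application".
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [
                {"name": "web", "image": "example/web:1.0",
                 "ports": [{"containerPort": 8080}]},
            ]},
        },
    },
}

# A Service: a stable address on port 80, load-balancing across replicas.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "selector": {"app": "web"},  # targets the Deployment's pods by label
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

# An Ingress: route HTTP traffic for "www.company.org" to the Service.
ingress = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "Ingress",
    "metadata": {"name": "web"},
    "spec": {"rules": [{
        "host": "www.company.org",
        "http": {"paths": [{
            "path": "/", "pathType": "Prefix",
            "backend": {"service": {"name": "web", "port": {"number": 80}}},
        }]},
    }]},
}

# The loose coupling is made explicit by labels: the Service finds the
# Deployment's replicas only through matching labels, not direct references.
assert service["spec"]["selector"] == deployment["spec"]["selector"]["matchLabels"]
```

Notice that no object points directly at another: the Service selects pods by label, and the Ingress references the Service by name. This indirection is what lets each team own and evolve its object independently.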
Kubernetes makes this object model extensible, so you can define your own loosely coupled objects for your software: for instance, Jenkins X provides “Builds” and “Jobs”.
More objects mean more power, but also a steeper learning curve: take the time to understand the meaning of each object before going full Kubernetes!
How to go Kubernetes?
OK then, you’re convinced of the value. But how do you get there from your current bare-metal or container infrastructure?
Start by selecting a candidate project for which you have sponsors in each team: at least one developer and one operations person, but also one manager, as you need organization-wide acceptance to enable collaborative work.
Put everyone in the same room (or virtual chat room if you practice distributed working) and let them work on running the project as a Kubernetes Proof of Concept. Give them the means to achieve this task: machines, and time to learn and practice.
As soon as a minimum viable prototype emerges, start the feedback loop and iterate on another piece, until you have:
- A full deployment pipeline (“from idea to production on Kubernetes”)
- Observability of the system (metrics and log collection at least, possibly tracing)
- A crash-and-restore test (“how much time do you need to restore the cluster if it crashes?”)
You’re now ready to evaluate the value and start adding more projects, with the initial team acting as sponsors to empower the other teams and get them started.
Be careful not to use this initial Proof of Concept as a time estimate for the next projects: the initial cost of learning and discovering will decrease with each new project added, as good practices emerge.
Keep in mind that this investment is not lost in a black hole if you choose NOT to use Kubernetes: your teams will still have learned, and your current applications will have improved, because deploying an application on Kubernetes requires some (and maybe all) of the 12 factors to be fulfilled. Becoming aware of the weakest points of an application and of its delivery process is always a valuable lesson.
By fully embracing Kubernetes and its model, your organization will benefit from a powerful and extensible way of operating containerized applications in production.
The investment in the learning curve always pays off: at the very least, your applications will gain operational maturity. And if you take it to production, your business will benefit from new patterns, from additional business metrics to new ways of delivering features to your end users.
One last thing: do not forget to contribute back what your organization learns. Your teams will have gained a lot of knowledge along the road, and giving it back to the community is also an improvement step for your employees: think about the positive reputation and the leadership position!
By Damien DUPORTAL, a freelancer who recently ran a Kubernetes workshop in Arexo’s Liege offices.
- ✉️ firstname.lastname@example.org
- 🐙 https://github.com/dduportal
- 🐦 https://twitter.com/DamienDuportal
Human stack focused. Rock climber. Passionate software engineer. Talk to me!