History of containers
Back in 2005, web applications were hosted on virtual machines for the first time. Amazon Web Services launched in 2006 and the cloud revolution began. But setting up, managing, and maintaining servers remained complex. Then one company decided to do things differently. They had thousands of servers and could not spend two hours per server on maintenance every month. Which company are we talking about? Google. They needed to treat their servers as cattle instead of pets; otherwise maintaining them would have been impossible. Their applications needed to be easy to manage, with auto-recovery and auto-scaling built in. And so they developed Kubernetes.
Kubernetes enables developers to set up and manage applications more easily and quickly, eases the burden on the IT department, and handles resource consumption more efficiently. “Kubernetes is the future of hosting,” says Lex van Sonderen, General Manager of Proteon. “Until now, hosting was system-centric, but Kubernetes makes hosting application-centric. Application owners no longer have to worry about technology, uptime, and deployments. Instead, they can focus on resource consumption, cost and, more importantly, innovation.”
Kubernetes as a Service
Kubernetes sounds wonderful, and trust me: it is. But when we started working with it four years ago (when it was first open-sourced by Google), we soon discovered that Kubernetes was very hard to set up. You need to build everything yourself, which is time-consuming and therefore expensive. Since there is no off-the-shelf, reproducible setup, building your own infrastructure can take months (or even years). Developers also need a lot of knowledge to orchestrate a cluster and run day-to-day operations: repairing issues within Kubernetes requires years of experience with DevOps practices, with Kubernetes itself, and with Docker. And even if you survive the setup and run stages, a consistent operations and developer experience is still far off.
The major cloud providers noticed the difficulties of setting up and using Kubernetes as well and started offering Kubernetes as a Service (KaaS). The common perception is that with a KaaS solution you can start right away and don’t have to worry about a thing. The first part is true: it does accelerate the adoption of Kubernetes. But you still have many operational challenges to worry about.
Plain Kubernetes lacks security hardening, deployment pipelines, user management, monitoring, and more. Another big disadvantage of Kubernetes as a Service is vendor lock-in: many providers only offer Kubernetes tied to their own cloud and application services.
So, what is the solution to all your problems?
Red Hat’s OpenShift is, at least for your developer challenges. OpenShift is a container platform with a built-in Docker registry, Source-to-Image builds, build and deployment configurations, image streams, routes, a software-defined network, auto-scaling, auto-recovery, and more. It takes away the complexity of container operations and gives developers time back to build applications that actually help the business. As the leading enterprise distribution of Kubernetes, OpenShift is optimized for continuous application development and multi-tenant deployment. Add a set of developer and operations tools on top, and you can suddenly lean back while OpenShift takes care of your infrastructure, application management, and maintenance.
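To give a feel for what “routes” mean in practice: exposing an application to the outside world on OpenShift typically takes a single Route object, which the platform’s built-in router picks up automatically. A minimal sketch (the application name and hostname below are hypothetical):

```yaml
# Minimal OpenShift Route for a hypothetical app "myapp".
# The built-in router exposes the "myapp" Service at the given
# hostname, terminating TLS at the edge.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myapp
  tls:
    termination: edge
```

On plain Kubernetes there is no Route resource; you would have to choose, install, and configure an ingress controller or load balancer yourself.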
If you summarize the differences between OpenShift and KaaS, it looks something like this:
| OpenShift | Kubernetes as a Service |
| --- | --- |
| Deploys on any VM, cloud, or even on-premises | Tied to the cloud environment of the vendor |
| Work with the tools you want and get a consistent developer and operations experience | Consume and manage Kubernetes as dictated by the vendor’s underlying platform |
| Red Hat Enterprise Linux is included and supported as the operating system | You need to buy and manage your operating system separately |
| Routing capabilities are provided by OpenShift and can be managed by users depending on their roles | As a KaaS customer, you need to handle your own routing |
| Logging is installed and configured automatically when installing OpenShift | Plain Kubernetes lets users decide how they want to handle logging |
| Integrated tool stack for developer self-service, such as a Jenkins pipeline | Developers who want Jenkins on Kubernetes need to set it up, install it, and configure it themselves |
| All features of OpenShift come with role-based access control; the customer controls access inside the OpenShift platform | Plain Kubernetes leaves role definition and access management for you to configure yourself |
| Red Hat’s security tools, such as SELinux and OpenSCAP, are integrated into OpenShift to manage and secure your applications | At a KaaS provider, you have to choose, set up, and manage your own security |
| OpenShift offers support for every element of the platform, from the operating system to the services layer | Public cloud providers only support the underlying Kubernetes infrastructure they provision |
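To make the access-control row concrete: on OpenShift, granting a user edit rights in a project is a one-liner (`oc policy add-role-to-user edit alice -n myproject`), whereas on plain Kubernetes you would typically write the RBAC objects yourself. A minimal sketch of what that looks like (the user and project names are hypothetical):

```yaml
# Kubernetes RoleBinding granting the built-in "edit" role to a
# hypothetical user "alice" in a hypothetical namespace "myproject".
# This is the kind of object you manage by hand on plain Kubernetes.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-edit
  namespace: myproject
subjects:
- kind: User
  name: alice
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```

OpenShift generates and manages bindings like this for you, and wires them into its web console and project model.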
The final solution: OpenShift as a Service
As you know by now, OpenShift is the most advanced container solution available at the moment. It takes you closer to the end game, and the container victory is within reach. There is just one challenge left to overcome: how do you get the fastest time to market? How can you adopt OpenShift within your organization as soon as possible? The answer brings us to the final solution: OpenShift as a Service. With OpenShift as a Service, the entire container platform is delivered in one go, from the operating system to the hosting. You can start immediately, without first acquiring all the knowledge needed to set up, run, and maintain the OpenShift platform yourself. This way you get all the benefits along with the fastest time to market. And since there is no vendor lock-in, most OpenShift as a Service providers can even run the platform on-premises, in the public cloud, or on their own servers.