Trends in the market related to Kubernetes adoption

Kubernetes is adopted to increase agility, accelerate software delivery, and support digital transformation, and it has become the orchestration platform of choice for containers, simplifying the work of both developers and operators^1.

Container Tools Used

A survey from Flexera shows that 65 percent of organizations use Docker, and 14 percent plan to use it. Fifty-eight percent are using Kubernetes, a container orchestration tool that leverages Docker, and another 22 percent plan to use it. Many organizations are also choosing container-as-a-service offerings from public cloud providers. The AWS container services (ECS/EKS) experienced substantial adoption, with 54 percent using them and another 24 percent planning to use them. Azure Container Service adoption reached 46 percent, and Google Kubernetes Engine (GKE) reached 24 percent^2.

As Kubernetes is young, enterprise adoption is in its early stages: 95% of respondents use containers in proof-of-concept (PoC), test, and development environments^3. It is no surprise that most Kubernetes deployments are done on-premises^4, probably to optimize the existing infrastructure available to the development team.

Trends in the market

The figure above shows the growth of Kubernetes from 2016 to 2020 across different deployment stages^5. This could mean that more and more organizations are becoming comfortable testing new use cases or moving more workloads onto containers.

Nonetheless, the potential is there: a full 95% of respondents report clear benefits from adopting Kubernetes, with 56% citing resource utilization as a top Kubernetes benefit and 53% citing shortened software development cycles^6.

Challenges in the market related to Kubernetes adoption

Interest in and adoption of Kubernetes have risen in recent years, and organizations expect production projects using Kubernetes to rise 61% in the next two years^7. Nonetheless, a report from D2IQ states that the majority of organizations running Kubernetes rely on outside resources, such as a Kubernetes management platform (64%), a cloud SaaS service (64%), and/or public cloud (55%)^8. This points to a pressing issue: a lack of capable technical resources for deploying Kubernetes is evident in many organizations.

Collected from several sources, here are the key reasons why Kubernetes adoption is still a challenge^9:

  • Choosing the right technology provider
  • Guaranteeing cluster security
  • Updating infrastructure to support Kubernetes
  • Innovation vs Budget Allocation
  • Lack of capable IT Resources
  • Change Management

Another critical piece of information regarding the factors that hinder Kubernetes adoption is that culture change and security are the most frequently mentioned issues. Shown here is an infographic chart from The New Stack that illustrates this^11.

Challenges in the market

The figure above shows concerns about Kubernetes adoption, based on surveys from 2017 to 2019. The data shows that although storage and training are becoming less of an issue, change management in the development team and security remain major concerns for companies. Companies that move from a monolithic approach to containers and Kubernetes for software development and infrastructure are likely trying to deploy faster and more frequently than ever^12. That being said, an experienced team offering guided Kubernetes implementation would be in a prime position to help organizations implement Kubernetes properly and adjust to the new development culture that comes with it.

Compare Kubernetes service offerings by various vendors (managed Kubernetes)

This section describes the top managed Kubernetes service providers^13.

Google Cloud GKE (Google Kubernetes Engine)

Kubernetes was originally developed at Google and released as open source in 2014. Google Kubernetes Engine (GKE) is a secure, managed Kubernetes service from GCP with four-way autoscaling and multi-cluster support. The Kubernetes engine schedules your containers, manages your applications, and is equipped with logging and container health checking to make application management easier^14. Some of the key features are^15:

  • Off-the-shelf Applications: Prebuilt deployment templates, featuring portability, simplified licensing, and consolidated billing.
  • Adaptive Autoscaling: Automatically scales node pools and clusters in response to CPU utilization, custom metrics, or changing workload requirements.
  • Enterprise-grade Security: Provides support for Kubernetes Network Policy as the second layer of defense between containerized workloads on GKE for enhanced workload security.
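The adaptive autoscaling described above is, at the pod level, driven by the standard Kubernetes HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named `web` (the names and thresholds here are illustrative, not GKE defaults):

```yaml
# Hypothetical HorizontalPodAutoscaler: keeps the "web" Deployment between
# 2 and 10 replicas, targeting ~70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

GKE's "four-way" autoscaling extends this beyond horizontal pod scaling to vertical pod scaling and node-pool scaling, which are configured separately.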

An overview of the Google Cloud GKE components is as follows^17

Overview of Google Cloud GKE component

A typical GKE deployment in Google Cloud is shown here^18

Typical GKE deployment in Google Cloud

Key takeaways: Google is the birthplace of Kubernetes, which is reassuring, especially since GKE supports hybrid deployment (Google Cloud + on-premises vSphere), enabling companies to have consistent, unified, and secure infrastructure, cluster, and container management.

Microsoft Azure

Azure Kubernetes Service (AKS) is a managed Kubernetes service from Microsoft that allows companies to quickly deploy and manage clusters, for example a cluster that includes a web front end and a Redis cache database^19. Key features are as follows^20:

  • Efficient resource utilization: Easy deployment and management of containerized applications with efficient resource utilization.
  • Faster application development: Handles patching, auto-upgrades, and self-healing to simplify container orchestration.
  • Security and compliance: AKS integrates with Azure Active Directory (AD) and offers on-demand access to users to greatly reduce threats and risks.
  • Quicker development and integration: Supports auto-upgrades, monitoring, and scaling in seconds to help minimize infrastructure maintenance.

An overview of the Azure Kubernetes Service components is as follows

Overview of Azure Kubernetes Service components

A sample of how a development team creates an application with AKS as the orchestrator is shown here

Application with AKS

Key takeaways: Azure Kubernetes Service offers a managed service specifically for Azure/Microsoft users. It is most suitable for companies that are strategically aligned with Microsoft technologies and are looking to run containerized applications in the cloud.

Red Hat Openshift Dedicated

Red Hat OpenShift Dedicated is a professionally managed enterprise Kubernetes platform from Red Hat, hosted on-premises or as a hybrid cloud solution (AWS or Google Cloud). It includes a Linux OS, container runtime, networking, monitoring, container registry, authentication, and authorization solutions^22. Key features are as follows^23:

Custom OS: OpenShift Dedicated uses Red Hat Enterprise Linux CoreOS (RHCOS), specifically designed for running containerized applications and works with new tools to provide fast installation, Operator-based management, and simplified upgrades.

Simplified installation and updates: OpenShift Dedicated completely controls the systems and services that run on each machine, including the operating system itself, from a central control plane, so upgrades are designed to become automatic events.

Efficient Scaling and Enterprise Support: OpenShift Dedicated allows you to scale only the required services instead of the entire application, which can help you meet application demands while using minimal resources. The enterprise support service includes building, installing, upgrading, managing, and maintaining every cluster.

An overview of the Red Hat OpenShift Dedicated architecture is as follows

Overview of Red Hat OpenShift Dedicated architecture

A sample of how a development team creates and deploys an application with Red Hat OpenShift Dedicated as the orchestrator is shown here

Create/deploy application with Red Hat OpenShift

Key takeaways: Red Hat OpenShift Dedicated offers a proven and experienced enterprise-grade operations team that manages infrastructure configuration, maintenance, support, and security 24x7.

Challenges around managing Kubernetes at scale

For all its potential benefits, Kubernetes is still a new technology that many developers are not familiar with. In addition, best practices for settings and configuration that fit a company's business needs are not readily available. Hence, plenty of simulation and trials are needed to find the most suitable configuration for a Kubernetes deployment.

Configuring a Load Balancer^25

Customers often must configure the load balancer on their own (for on-premises deployments), risking port conflicts and problems scaling clusters.
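To illustrate the gap: on a managed cloud, a Service of type `LoadBalancer` provisions an external load balancer automatically, whereas on-premises the same manifest stays pending until the customer installs and configures a load-balancer implementation (MetalLB is one common choice). A minimal sketch with illustrative names:

```yaml
# Hypothetical Service exposing the "web" pods. On GKE/AKS/EKS the cloud
# provisions an external load balancer for it; on bare metal it remains
# in "pending" state unless something like MetalLB supplies the address.
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80        # externally exposed port
      targetPort: 8080  # container port the traffic is forwarded to
```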

Managing Resource Constraints^26

Best practices for configuring the resources (memory or processing power) that Kubernetes requests for each pod are specific to each application. Developers therefore need to simulate this internally, since there is no general application computing requirement that can satisfy every need.
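For reference, resource requests and limits are declared per container in the pod spec; the numbers below are placeholders that each team must tune through its own load testing:

```yaml
# Hypothetical pod with explicit resource requests (used for scheduling)
# and limits (hard caps enforced at runtime). Values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: example.com/app:1.0  # placeholder image
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"   # a quarter of one CPU core
        limits:
          memory: "512Mi"
          cpu: "500m"
```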

Logging and Monitoring^27

In Kubernetes, a centralized logging and monitoring system is critical. However, with many services in play, users need external tools to handle log data from the different services. And since Kubernetes recovers well from crashes, monitoring such issues is ideal in order to prevent them in the future; again, users will need external tools to handle this problem.

Information about multi-cluster (multi-region and/or multi-cloud) Kubernetes implementation challenges in the market

As companies look to strengthen their Kubernetes capabilities, multi-cluster Kubernetes deployments are selected for tenancy and reliability reasons^28, especially when the objective is to divide Kubernetes clusters among different functional teams or to run Kubernetes on multiple clouds. However, this introduces a problem: each Kubernetes cluster is a silo with its own API, and because Kubernetes deals with containers, each cluster has its own security profile that must be managed individually^29. Shown here is an illustration of the issue of multi-cluster Kubernetes

Managing Islands of Multiple Kubernetes Cluster

Key challenges for multi-cluster (multi-region or multi-cloud) Kubernetes are^30 ^32:

  • Separate security domain for each cluster
  • Each cluster has its own Role-Based Access Control (RBAC) rules, making it difficult to monitor and maintain access privileges
  • Each infrastructure has its own IAM
  • Cluster maintenance (upgrades, backup, monitoring)
  • Individual policy management per cluster
  • Manual configuration of cross-cluster traffic
  • Cost management (different cloud, and region price structure)
  • Technical resource availability and capability
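The RBAC point is concrete: because each cluster has its own API server, access-control objects like the sketch below must be created, and kept in sync, in every cluster separately (the names here are illustrative):

```yaml
# Hypothetical Role granting read-only access to pods in "default",
# plus a RoleBinding attaching it to a user. In a multi-cluster setup,
# both objects must be applied to each cluster individually.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: dev-team-member   # placeholder identity
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```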

Basically, companies need a team with a certain level of agility to connect, provision, and access Kubernetes clusters everywhere, as well as the ability to control cloud budgets across cloud providers and regions. This is a tough task for companies with limited technical resources. Nonetheless, a service mesh like Istio can help, and when used properly it can make multi-cluster communication painless, which is exactly why a managed Kubernetes service offering is one of the best options for implementing a multi-cluster Kubernetes application.

Information about multi-region or multi-cloud Kubernetes implementation trends in the market

While Kubernetes is associated with cloud native operations, a VMware report states that the majority of enterprise deployments today are not in the public cloud. A large majority (64%) of respondents have deployed Kubernetes on-premises, while an encouraging 30% use multi-cloud for their Kubernetes deployments^33. This indicates that most companies are getting started with Kubernetes by optimizing their existing on-premises infrastructure.

Kubernetes implementation trends
