Last week I attended Edge Field Day 1, a Tech Field Day event focused on edge computing solutions. Some of the sessions really got me thinking.
Edge infrastructures are very different from anything in the data center or in the cloud: the farther from the center you go, the smaller the devices. Less CPU power, less memory and storage, and less network bandwidth and connectivity all pose serious challenges. And that's before considering the physical and logical security requirements, which matter less in the data center or in the cloud, where the perimeter is well protected.
In addition, many of these devices remain in the field for many years, posing environmental and life cycle challenges. To complicate matters further, edge computing resources often run mission-critical applications that must be built for efficiency and resiliency. Containers and Kubernetes (K8s) may be a good option here, but do these applications really need the complexity of Kubernetes?
Assessing the value of Kubernetes on the Edge
To be fair, Edge Kubernetes has been around for a long time. Several vendors now deliver Kubernetes distributions optimized for edge use cases, including management platforms designed to handle large fleets of small clusters. The ecosystem is growing and more users are adopting these solutions in the field.
But does Edge Kubernetes make sense? Or more precisely, how far from the cloud-based core can you deploy Kubernetes, before it becomes more trouble than it’s worth? Kubernetes adds a layer of complexity that must be deployed and managed. And there are additional things to remember:
- Even if an application is built with microservices in mind (i.e., as a set of small containers), it's not always so big and complex that it needs a full orchestration layer.
- K8s often requires additional components to ensure data redundancy and continuity. In a resource-constrained scenario where only a few containers are deployed, the Kubernetes orchestration layer can consume more resources than the application itself!
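To make the resource argument concrete, here is a back-of-the-envelope sketch. The figures are illustrative assumptions for a small single-node edge deployment, not measurements of any particular distribution:

```python
# Back-of-the-envelope estimate of orchestration overhead on a small edge node.
# All figures below are illustrative assumptions, not vendor measurements.

NODE_RAM_MB = 1024        # total memory on a small edge node
K8S_OVERHEAD_MB = 500     # assumed footprint of a lightweight K8s distribution
APP_CONTAINER_MB = 200    # assumed footprint of the application container

def overhead_ratio(orchestrator_mb: float, app_mb: float) -> float:
    """Memory spent on orchestration per MB of application workload."""
    return orchestrator_mb / app_mb

ratio = overhead_ratio(K8S_OVERHEAD_MB, APP_CONTAINER_MB)
free_mb = NODE_RAM_MB - K8S_OVERHEAD_MB - APP_CONTAINER_MB

print(f"Orchestrator uses {ratio:.1f}x the memory of the app itself")
print(f"Memory left for everything else: {free_mb} MB")
```

Under these assumptions the orchestrator costs 2.5 times the application it manages; on a three-node cluster in the data center the same overhead would be negligible, which is exactly the point.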
In GigaOm’s report covering this space, we found most vendors working on how to deliver K8s management at scale. The approaches differ, but they all include some form of automation and, more recently, GitOps. This solves infrastructure management, but it does not address resource consumption, nor does it cover data and application management, which remain open concerns.
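The GitOps idea mentioned above boils down to a pull-based reconciliation loop: an agent reads the desired state from a git repository, compares it with the actual state of the cluster, and converges the two. A minimal sketch of that loop, with purely illustrative application names (real agents such as Flux or Argo CD read both states from APIs, not literals):

```python
# Minimal sketch of GitOps-style reconciliation: compare desired state
# (as declared in a git repository) with actual state (as reported by the
# cluster) and compute the actions needed to converge.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions that would bring `actual` in line with `desired`."""
    actions = []
    for name, version in desired.items():
        if name not in actual:
            actions.append(f"deploy {name}:{version}")
        elif actual[name] != version:
            actions.append(f"upgrade {name} to {version}")
    for name in actual:
        if name not in desired:
            actions.append(f"remove {name}")
    return actions

# Hypothetical states for illustration only.
desired = {"sensor-gw": "1.4", "mqtt-broker": "2.0"}
actual = {"sensor-gw": "1.3", "legacy-agent": "0.9"}

for action in reconcile(desired, actual):
    print(action)
```

The appeal at the edge is that the agent pulls: thousands of small sites can stay in sync without the central site pushing to each one.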
While application management can be solved with additional tools, the same ones you use for the rest of your K8s applications, resource consumption has no real solution as long as you keep using Kubernetes. This is especially true when, instead of three nodes, you have two or just one, and that one may be too small.
Alternatives to Kubernetes at the Edge
Back at Tech Field Day, an approach I found fascinating was demonstrated by Avassa. They have an end-to-end container management platform that does not require Kubernetes to operate. It does everything you’d expect for a small edge container orchestrator, while removing complexity and unnecessary components.
As a result, the entire platform has a smaller footprint compared to even container-optimized Kubernetes distributions. Additionally, it implements management and monitoring capabilities that provide visibility into key application aspects, including deployment and ongoing management. That said, Avassa is not the only option; there are other ways to remove K8s from the (edge) picture, not least WebAssembly.
Key Takeaways
To summarize, many organizations are evaluating solutions in this space, and edge applications are often written against precise requirements. Containers are the best way to deploy them, but the same is not necessarily true of Kubernetes.
Before installing Kubernetes at the edge, it’s important to check whether it’s worth doing. If you’ve already deployed it, you’ve probably seen that its value increases with application size. However, that value decreases with distance from the data center and with the shrinking size and number of edge computing nodes.
It may therefore be prudent to explore alternatives that simplify the stack and thereby improve the TCO of the entire infrastructure. This is even more true if the IT team managing the edge infrastructure is small and needs to interact daily with the development team. The lack of expertise across the industry, especially around Kubernetes, makes it mandatory to consider the options.
I’m not saying Kubernetes is a no-go for edge applications. However, it is important to weigh the pros and cons, and establish the best course of action, before embarking on what can be a challenging journey.