Containers and the Cloud: An Easier Way to Deploy Workloads

Cloud Containers Overview:

  • Containers are abstract units of software that package everything needed to run a workload or process.

  • Container orchestration is the ability to deploy and manage multiple containers across private and public cloud infrastructure.

  • Intel has contributed open source feature discovery and telemetry tools and offers performant hardware that enables organizations to get the most out of their containers.

What Are Containers?

A container is a stand-alone, executable unit of software that packages everything needed to run an application: code, runtime, system tools, and system libraries. Containers have defined parameters and can run a program, a workload, or a specific task.
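
To make this concrete, below is a minimal sketch that launches a throwaway container using the Docker SDK for Python (the "docker" package). The "python:3.12-slim" image and the one-line task are illustrative stand-ins, not part of any specific product.

    # Minimal sketch: run a self-contained task in a container using the
    # Docker SDK for Python. The image and command are illustrative only.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Everything the task needs (code, runtime, libraries) ships inside
    # the image; the host only provides the kernel.
    output = client.containers.run(
        "python:3.12-slim",
        command=["python", "-c", "print('hello from a container')"],
        remove=True,  # delete the container once the task finishes
    )
    print(output.decode())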

A simple analogy to help understand containers is to think of shipping containers. You can pack a lot of cargo into a single container, and you can pack a lot of shipping containers into a single vessel or split them across multiple vessels. You can also use specialized containers for specific workloads, in the same way you might use a refrigerated shipping container to transport a specific type of cargo.

The main restriction with containers is that they rely on the kernel of their host system: a Linux container can run only on a Linux host, a Windows container only on a Windows host, and so on for other operating systems (OSs).

The Benefits of Containers

Containers allow system managers to achieve more density with their architecture. You can define and run multiple containers, each tuned to a specific workload for greater efficiency. Containers have only what you need, so they’re not bloated with superfluous software, and they won’t waste compute resources on background processes.

Businesses are finding significant value in containers because they are portable, consistent, and user friendly. IT departments can enable continuous integration/continuous delivery (CI/CD) with the agility and automation that containers provide. Containers also help isolate workloads, contributing to robust data security policies.

Virtual Machines vs. Containers

Like containers, virtual machines (VMs) are stand-alone computing environments that are abstracted from hardware. Unlike containers, VMs require a full replica of an OS to function. VMs offer some advantages, as you can use a VM to simulate a different OS from the host system. For example, if your host machine runs Windows, you can run a Linux OS in a VM, and vice versa. VMs also allow for stronger isolation and data security, since each VM is a more fully self-contained computing environment.

However, because VMs are essentially self-contained systems with their own OS, they take much longer to boot than containers, and they run less efficiently. Containers are also more portable, as a complex workload can be split across numerous containers, which can be deployed anywhere across multiple systems or cloud infrastructures. For example, you can deploy workloads across multiple containers to your on-premises hardware and your public cloud service and manage everything through a single orchestration dashboard. Because of this portability, containers scale more effectively than VMs.

What Is Container Orchestration?

Orchestration is a methodology that provides a top-down view of your containers, giving you visibility and control over where containers are deployed and how workloads are allocated across multiple containers. Orchestration is essential to deploying multiple containers. Without orchestration, you must manage each container manually. Orchestration also allows IT managers to apply policies, such as fault tolerance, selectively or holistically to a collection of containers.

One of the enhanced capabilities afforded by container orchestration is the ability to automatically manage workloads across multiple compute nodes. (A node is any system connected to a network.) For example, if you have five servers but one server initiates a maintenance cycle, the orchestrator can automatically divert the workload to the four remaining servers and balance it based on what those nodes can handle. The orchestrator performs this task without human assistance.
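
As a rough illustration of that idea, the sketch below asks an orchestrator for five replicas of a workload using the official Kubernetes Python client (the "kubernetes" package). The deployment name "myapp" and the image tag are hypothetical placeholders; the point is that the scheduler, not the operator, decides which nodes run the replicas and reschedules them if a node drops out.

    # Sketch: declare five replicas and let the Kubernetes scheduler place
    # them across available nodes. Names and image tags are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # read cluster credentials from ~/.kube/config
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="myapp"),
        spec=client.V1DeploymentSpec(
            replicas=5,  # the orchestrator keeps five copies running
            selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(name="myapp", image="myapp:1.0")
                    ]
                ),
            ),
        ),
    )
    # If a node enters maintenance, its replicas are rescheduled onto the
    # remaining nodes automatically; no manual intervention is needed.
    apps.create_namespaced_deployment(namespace="default", body=deployment)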

Kubernetes and Docker

Kubernetes is an open source container orchestration platform, originally designed by Google, and is the de facto standard solution in the market today. Docker is open source software used to build and deploy individual containers, and it has become the de facto standard for that purpose.

Kubernetes works on top of solutions like Docker to deploy and manage multiple containers. Both solutions are ubiquitous in the market, and although both are open source, proprietary offerings that extend each framework with additional capabilities and tools are also available. If you work with containers, both Kubernetes and Docker will become everyday terms.
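
The division of labor looks roughly like this: Docker builds and runs individual container images, while Kubernetes deploys and manages many containers created from those images. Below is a small sketch of the Docker half using the Docker SDK for Python; the path and tag are hypothetical placeholders.

    # Sketch of the Docker side of the workflow: build an image, then run
    # one container from it. Path and tag are hypothetical placeholders.
    import docker

    client = docker.from_env()

    # Build an image from a Dockerfile in the current directory.
    image, build_logs = client.images.build(path=".", tag="myapp:1.0")

    # Docker can run a single container from that image directly...
    client.containers.run("myapp:1.0", detach=True)
    # ...while Kubernetes references the same "myapp:1.0" tag in a
    # Deployment (see the earlier sketch) to manage many copies at once.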

Container Use Cases

As previously stated, you can use containers to run a specific task, program, or workload. We can expand our understanding of how containers function by taking a closer look at three key use cases.

  • Microservices: A microservice is a specific function in a larger service or application. For example, you can use a container to run a search or lookup function on a data set, rather than loading an entire database application. Because this operation runs within a container, it runs faster than it would in a noncontainer environment, whether VM or bare metal, where a full OS and background processes consume extra compute resources. Containers make it simpler and faster to deploy and use microservices.
  • Hybrid cloud and multicloud: Within a hybrid cloud environment, the container becomes your basic unit of compute, abstracted from the underlying hardware. You don’t need to worry about where the container is running, since you can run it anywhere. Containers therefore make it easier to deploy workloads across a hybrid cloud environment. This is generally handled through the orchestration platform, so administrators have visibility into where containers are being deployed and which capabilities each node offers, across on-premises and public cloud infrastructures. With regard to cloud security practices in the hybrid cloud model, businesses should still pay attention to concerns such as authentication to ensure that only authorized personnel can access workloads and data within each container. However, authentication is typically simpler to manage in a container environment.
  • Machine learning: Machine learning and deep learning workloads are challenging: they are highly complex, involve many moving parts, and leave little room for human operators to intervene once deployed in a containerized environment. For these workloads, algorithm training is the type most commonly deployed via containers. Data scientists and researchers often rely on workload tagging, the process of identifying and matching workloads to nodes with specific capabilities. They also use containers for parallel processing, a method of breaking large data sets into chunks and running algorithms on each chunk simultaneously to generate results faster (a minimal sketch of this chunking pattern follows this list).
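
As promised above, here is a minimal sketch of the parallel-processing pattern in plain Python. In a real containerized deployment each chunk would typically be handled by a separate container under an orchestrator; multiprocessing stands in for that here, and the summing "algorithm" is a trivial placeholder.

    # Sketch: split a large data set into chunks, process the chunks in
    # parallel, then combine the partial results. The workload is a toy.
    from multiprocessing import Pool

    def process_chunk(chunk):
        # Placeholder for the real algorithm run against one data slice.
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunk_size = 100_000
        chunks = [data[i:i + chunk_size]
                  for i in range(0, len(data), chunk_size)]

        with Pool() as pool:  # one worker process per CPU core by default
            partial_results = pool.map(process_chunk, chunks)

        print(sum(partial_results))  # combine the partial results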

What Intel Offers

If you run multiple containers on a single node, you can achieve good scalability as long as the node has the compute resources available. This is why hardware matters. More compute allows for higher density with more containers. Intel’s contribution to this space includes a full offering of server architecture and components, including Intel® Xeon® Scalable processors, the Intel® SSD data center family, and Intel® Ethernet products. These technologies allow for fast, robust, efficient, and dense containerization.

In terms of software solutions, Node Feature Discovery (NFD) is a key contribution. NFD was developed by Intel and recently added to the main open source release of Kubernetes. This feature allows an orchestrator to identify key technologies and capabilities—such as Intel® AVX-512—within each available node. If a system administrator has a workload that needs Intel® AVX-512, Kubernetes can use NFD to tell the administrator which nodes offer this capability, and the administrator can deploy containers to those nodes specifically.
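
As a rough sketch of what that looks like in practice, the snippet below uses the Kubernetes Python client to list nodes that NFD has labeled with AVX-512 support. NFD publishes CPU features as node labels; "feature.node.kubernetes.io/cpu-cpuid.AVX512F" is one such label, though the exact label set depends on the NFD version deployed.

    # Sketch: find nodes whose NFD labels advertise AVX-512 support.
    # The label key reflects NFD's convention; verify it for your version.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    AVX512_LABEL = "feature.node.kubernetes.io/cpu-cpuid.AVX512F"

    for node in v1.list_node().items:
        labels = node.metadata.labels or {}
        if labels.get(AVX512_LABEL) == "true":
            print(f"{node.metadata.name} supports AVX-512")

    # A pod spec can then use a nodeSelector on the same label so the
    # scheduler only places the workload where the capability exists.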

Lastly, Intel enables telemetry for visibility into container-level performance on each active node. Specifically, Intel has contributed performance counters to Google’s open source telemetry tool cAdvisor. This allows businesses to measure and establish granular control over container performance, which in turn allows for greater optimization, better workload matching, and higher density by deploying more containers to each node.
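
As a hedged example of consuming that telemetry, the sketch below reads container stats from a local cAdvisor instance over its REST API. The port, API version, and response fields shown assume cAdvisor's defaults and its ContainerInfo format; check your deployment before relying on them.

    # Sketch: pull the latest container-level stats from a local cAdvisor
    # instance. Port, API version, and field names assume cAdvisor defaults.
    import requests

    resp = requests.get("http://localhost:8080/api/v1.3/containers/")
    resp.raise_for_status()
    info = resp.json()

    latest = info["stats"][-1]  # most recent sample for the root container
    print("cpu usage (ns):", latest["cpu"]["usage"]["total"])
    print("memory usage (bytes):", latest["memory"]["usage"])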

A Promise of High Value

It’s important to understand that containers aren’t just a trend. They offer scalability, portability, and security benefits, all of which make containers an essential methodology for workload deployment both now and in the future. If you haven’t thought about containers yet, the best time to get started is now. If you’ve already been working with containers, the next step is to consider how to make them more efficient, more performant, and denser by pairing the right architecture with telemetry and feature discovery tools.