From Sandboxed Containers to Confidential Containers — Part-1

Pradipta Banerjee
5 min read · Feb 16, 2023

In this blog, I will take you through the journey from sandboxed to confidential containers. The details should give you an understanding of the current options for improving the security of container workloads, the underlying technology stack, and the trade-offs involved.

First, let's revisit some definitions to have a shared understanding.

Sandboxed containers — Also known as kernel-isolated containers or virtualised containers. Sandboxed containers provide additional isolation and security by leveraging a second, lightweight guest kernel, which runs alongside the host kernel. The increased isolation comes from the use of lightweight virtualisation technology. Kata Containers is an example of sandboxed containers.
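
In Kubernetes, a sandboxed runtime such as Kata is typically wired in through a RuntimeClass object. A minimal sketch is shown below; the handler name is an assumption and must match whatever runtime handler your cluster's containerd or CRI-O is actually configured with:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
# "handler" must match the runtime handler name configured in
# containerd/CRI-O on the nodes (often "kata" by convention).
handler: kata
```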

Confidential containers — Containers using confidential computing technologies to protect the application and its data from other software running on the same system, including the host operating system.

Confidential computing — The technology that can help protect your workload from unauthorised entities — the host or hypervisor, system administrators, service providers, other VMs, and processes on the host. At the heart of confidential computing technology is a Trusted Execution Environment (TEE). TEEs are secure and isolated environments provided by confidential computing-enabled hardware that prevents unauthorised access or modification of applications and data while in use.

In one of my earlier blogs, you can read more about the confidential computing solution and how it can help your business.

With the definitions in place, let’s delve into the details.

From Sandboxed containers to Confidential Containers

Sandboxed containers are widely used to provide lightweight and isolated runtime environments for applications. However, as threats to security and privacy continue to evolve, new approaches are needed to protect sensitive data and code in containerised applications.

Confidential containers are an evolution of sandboxed containers that provide additional security and privacy features to protect data and code within a container. These containers run in secure enclaves provided by hardware-based Trusted Execution Environments (TEEs).

In this blog, I’ll take you through an opinionated architecture of sandboxed containers implemented via the Kata container runtime. I’ll then describe how the sandboxed containers architecture evolved into a confidential containers (CoCo) architecture.

The following diagram shows the typical architecture of a sandboxed containers solution, as implemented in Kata Containers.

Sandboxed Containers

The Kubernetes Node (host) is a bare metal server supporting virtualisation. The Kubernetes Pod runs inside a lightweight virtual machine and provides an additional isolation layer. This additional layer of isolation, coupled with no changes to the Kubernetes end-user experience when working with sandboxed containers, makes it a potent option in the arsenal of companies looking to implement the following capabilities in the cluster:

  1. Run privileged workloads safely, with guardrails. For example, workloads that need to run as the root user or with admin capabilities.
  2. Ability to use different kernel settings for specific workloads. For example, different application core dump settings, networking settings, “sysctl” tunables etc.
  3. Ability to try out custom kernel functionality or modules for bleeding edge development. For example, using custom devices for specific workloads.
  4. Running legacy code or untrusted 3rd party code. For example, an application using legacy libraries that are prohibitively costly to replace or re-architect.
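
As an illustration of the unchanged end-user experience, selecting the sandboxed runtime for a single workload is just one field in the Pod spec. This sketch assumes a RuntimeClass named `kata` exists in the cluster; the image name is hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  # Run this Pod inside a lightweight VM via the Kata runtime
  runtimeClassName: kata
  containers:
  - name: app
    image: registry.example.com/legacy-app:1.0  # hypothetical image
    securityContext:
      # Privileged here is contained by the VM boundary,
      # not only by namespaces and cgroups on the host.
      privileged: true
```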

If you are looking for an enterprise-ready sandboxed containers solution, then take a look at OpenShift sandboxed containers — https://www.redhat.com/en/blog/learn-openshift-sandboxed-containers

As you can infer, sandboxed containers provide the following protections:

  1. Protects a workload from another workload.
  2. Protects the Kubernetes node (host) from the workload.

Can you sense any missing protections?

What protects the workload from the host? Enter confidential computing and confidential containers.

Confidential computing and confidential containers protect your workload from unauthorised entities — the host or hypervisor, system administrators, service providers, other VMs, and processes on the host.

This protection of workload from unauthorised entities gives the confidence to run your sensitive workloads in the public cloud and reap the benefits of the public cloud.

Protections enabled by confidential computing

The following diagram shows the typical architecture of the confidential containers (CoCo) solution, as implemented in the CNCF confidential containers project:

Confidential Containers

The Kubernetes Node (host) is a bare metal server supporting virtualisation and confidential computing technology (e.g. AMD SEV-SNP, Intel TDX, or IBM Z Secure Execution (SE)). The Kubernetes Pod runs inside a lightweight virtual machine (like sandboxed containers). However, this virtual machine (VM) is not a regular one. The VM is a Trusted Execution Environment (TEE).

The kata-agent and related components are measured, meaning a trusted cryptographic algorithm is used to authenticate their contents (unlike in sandboxed containers). Further, the container images are kept inside the TEE and may be signed or encrypted. The Kata Containers runtime is extended to support these capabilities required for confidential containers.

The attestation agent is responsible for initiating attestation and fetching the secrets from the key management service.

A supporting component of the solution is the relying party, which combines the attestation service and the key management service.

The attestation service is responsible for checking the measurement of the software stack running inside the VM against a list of approved workloads and authorising or denying the delivery of secrets. Please refer to my previous blog for more details on the attestation process.

The key management service is responsible for storing secrets that the workload needs to run, such as disk decryption keys, and delivering these secrets to the attestation agent inside the TEE.
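
To make the flow concrete, here is a minimal sketch in Python of the relying party’s decision logic: compare the reported measurement against an allowlist and release the secret only on a match. All names here are illustrative, and real attestation services verify hardware-signed evidence (such as an SEV-SNP or TDX quote), not a bare hash:

```python
import hashlib

# Hypothetical allowlist of approved software-stack measurements.
# In practice these are reference values for the guest firmware,
# kernel, and kata-agent, produced by a trusted build pipeline.
APPROVED_MEASUREMENTS = {
    hashlib.sha256(b"guest-image-v1").hexdigest(),
}

# Hypothetical key store: secret name -> secret value.
KEY_STORE = {"disk-decryption-key": b"\x00" * 32}

def release_secret(reported_measurement: str, secret_name: str) -> bytes:
    """Release a secret only if the reported measurement is approved."""
    if reported_measurement not in APPROVED_MEASUREMENTS:
        raise PermissionError("measurement not in allowlist; attestation failed")
    return KEY_STORE[secret_name]

# A TEE whose measurement matches the allowlist receives the key...
good = hashlib.sha256(b"guest-image-v1").hexdigest()
assert release_secret(good, "disk-decryption-key") == b"\x00" * 32

# ...while a tampered software stack is denied.
bad = hashlib.sha256(b"tampered-image").hexdigest()
try:
    release_secret(bad, "disk-decryption-key")
except PermissionError:
    print("denied")
```

The essential property is that the secret never leaves the relying party unless the measured software stack matches an approved reference value.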

As you can see, with new components and Kata container extensions to support confidential computing, we get a confidential container solution. Consequently, from a user point of view, you can start with sandboxed containers and gradually move to confidential containers based on your business requirements.

However, as with everything in technology, there is a trade-off. Due to the virtualisation technology and the differences in container image handling, confidential containers are more resource-heavy than sandboxed containers, which in turn are more resource-heavy than regular containers. Further, both require bare metal servers in your cluster, which can be costly. Whether to use sandboxed or confidential containers, and whether to do so for all workloads, some workloads, or a specific set of users in your cluster, is something you need to decide.

In the next part of this series, I’ll touch upon how we enable sandboxed and confidential containers on any footprint without needing bare metal servers.
