What You Define is What You Deploy

Trusting Kubernetes Cluster in the Cloud

Pradipta Banerjee
3 min read · Jan 27, 2020

Co-written by Harshal

Currently, we have two common Kubernetes cluster deployment scenarios in the cloud:

  1. Customer Managed — Kubernetes control plane and nodes are owned by the customer
  2. Provider Managed — Kubernetes control plane is owned by provider and nodes are owned by the customer

So, as a user, how can I be sure of the following?

  1. The application runs exactly as the specification (e.g. Pod or Deployment YAML) describes: image, start program, arguments, input data, output, etc.
  2. There are no unauthorised modifications to the application specification (Pod or Deployment YAML).
  3. Kubernetes secrets are not read by unauthorised entities in the case of a provider-managed deployment.
  4. Security issues resulting from a compromised control plane, such as inadvertent access to secrets or mutation of the application specification, are mitigated.
  5. Data-in-use is protected even on a compromised node.

We have a couple of interesting technologies available today that can help us give these assurances to our users.

Trusted Execution Environments (TEEs) protect data-in-use. TEEs can be categorised into two types: process-based (e.g. Intel SGX) and VM-based (e.g. AMD SEV, IBM PEF, Intel TME/MKTME). The second technology is virtualisation for increased container isolation (e.g. Kata Containers, Amazon Firecracker).

Tying these two technologies together, i.e. a VM-based TEE with a VM-based container runtime, gives us the best of both worlds:

  1. Data in use protection
  2. Increased isolation
  3. No change to application code, unlike with a process-based TEE

Introducing virtualisation by means of a highly optimised VM container runtime is bound to add a slight overhead. This is a trade-off: a small performance penalty in exchange for a completely isolated and protected workload execution. So it's really up to the user to gauge the security sensitivity of their workload and choose the right execution environment.

Raksh (protect)

This project aims to help users with ultra-security-sensitive workloads use a VM-based TEE with Kata Containers (a VM-based runtime), along with these additional capabilities:

  1. Keeping the Kubernetes application deployment workflow intact, with no change required from the user
  2. Securing the application specification and preventing unauthorised modification
  3. Securing application secrets from the control plane

Securing application specification

Let's look at how the application specification is secured by leveraging a VM-based TEE and the Kata Containers runtime, taking the following Kubernetes Pod specification as an example:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: nginx
  namespace: default
  name: nginx
spec:
  containers:
  - image: nginx:latest
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      protocol: TCP

The container spec is the spec: section above. This is converted into a Kubernetes ConfigMap, which looks like this:

apiVersion: v1
data:
  nginx: |
    spec:
      containers:
      - image: nginx:latest
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
kind: ConfigMap
metadata:
  name: configmap-nginx

The ConfigMap is then encrypted. The encrypted ConfigMap looks like this:

apiVersion: v1
data:
  nginx: 6qvygg8md7bXfyX3Y9cpZxUp4eZA0kKmWBirrpJv/WEGkrdLYrdtqxdqm4cGLG4++06d2iGTaB+5SDjjDwf05T+9a2iUAdHmRngHcQNAzkKK2RCnR4Zkt0cXDaEP+w5mbugH0xdqGm8SoX4IgvWGi2toq1CUcc8OmgTX42g0NruTZbrNv5NccyS7+kR7Iib6vaMI24E=
kind: ConfigMap
metadata:
  name: secure-configmap-nginx
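The encryption step can be sketched as follows. This is an illustrative stand-in, not Raksh's actual implementation: the keystream, encrypt_spec, and decrypt_spec helper names are hypothetical, and the HMAC-SHA256 counter keystream is a toy substitute for the authenticated cipher a real deployment would use. It only shows the workflow of encrypting the container spec and embedding the base64 ciphertext in a ConfigMap.

```python
import base64
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudo-random pad with HMAC-SHA256 in counter mode (toy cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_spec(spec_yaml: str, key: bytes) -> str:
    # Encrypt the plaintext container spec; prepend the nonce and base64-encode.
    nonce = os.urandom(16)
    data = spec_yaml.encode()
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))
    return base64.b64encode(nonce + ct).decode()

def decrypt_spec(blob: str, key: bytes) -> str:
    # Reverse of encrypt_spec: split off the nonce, regenerate the pad, XOR back.
    raw = base64.b64decode(blob)
    nonce, ct = raw[:16], raw[16:]
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct)))).decode()

container_spec = "spec:\n  containers:\n  - image: nginx:latest\n    name: nginx\n"
key = os.urandom(32)

# The encrypted spec becomes the ConfigMap's data value, as in the YAML above.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "secure-configmap-nginx"},
    "data": {"nginx": encrypt_spec(container_spec, key)},
}
```

Only a party holding the key (in Raksh's design, the agent inside the TEE-protected VM) can recover the original spec from the ConfigMap.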

After that, a new spec is created to use the encrypted ConfigMap. The final Pod YAML looks like this:

apiVersion: securecontainers.k8s.io/v1alpha1
kind: SecureContainer
metadata:
  name: secure-nginx
object:
  apiVersion: v1
  kind: Pod
  metadata:
    labels:
      app: nginx
    name: nginx
  spec:
    containers:
    - image: sc-scratch:latest
      imagePullPolicy: IfNotPresent
      name: nginx
      ports:
      - containerPort: 80
        protocol: TCP
      resources: {}
      volumeMounts:
      - mountPath: /etc/raksh
        name: secure-volume-nginx
        readOnly: true
    volumes:
    - configMap:
        items:
        - key: nginx
          path: raksh.properties
        name: secure-configmap-nginx
      name: secure-volume-nginx
spec:
  SecureContainerImageRef:
    name: nginx-securecontainerimage

Deploying the above spec does the following:

  1. The scratch image gets deployed using the Kata Containers runtime
  2. Inside the Kata VM, the Kata agent decrypts the ConfigMap (mounted as a volume at /etc/raksh)
  3. The Kata agent provisions the actual container as per the decrypted ConfigMap
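The agent-side flow above can be sketched as follows. The helper names are hypothetical, not the Kata agent's real API (the actual agent is not written in Python), and base64 stands in for real decryption; the point is the ordering: read the mounted ciphertext, decrypt it inside the VM, then launch the container from the plaintext spec.

```python
import base64
import tempfile
from pathlib import Path

def provision_from_mount(mount_dir: str, key, decrypt, launch) -> None:
    # 1. Read the encrypted spec that Kubernetes mounted into the VM.
    blob = Path(mount_dir, "raksh.properties").read_text()
    # 2. Decrypt inside the VM: the plaintext spec only ever exists in guest memory.
    spec = decrypt(blob, key)
    # 3. Provision the actual container from the decrypted spec.
    launch(spec)

# Demo with stand-in decrypt/launch callbacks (base64 in place of a real cipher):
launched = []
with tempfile.TemporaryDirectory() as mount:
    Path(mount, "raksh.properties").write_text(
        base64.b64encode(b"spec:\n  containers:\n  - image: nginx:latest\n").decode()
    )
    provision_from_mount(
        mount,
        None,
        lambda blob, _key: base64.b64decode(blob).decode(),
        launched.append,
    )
```

Because the host only ever sees the scratch image and the encrypted ConfigMap, nothing outside the VM learns what workload is actually running.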

As you can see, the actual container spec gets decrypted only inside the VM, and all container operations happen inside the VM as well. On systems with VM-based TEEs (AMD SEV or IBM PEF), this ensures that all secrets, data, and code in use in the VM memory cannot be accessed from the host.

Trying out Raksh

Assuming you have a Kubernetes cluster with the Kata Containers runtime set up, head over to the Raksh project page for usage instructions.

In a subsequent article, we'll take a look at how application secrets are secured from the control plane.
