How about container runtimes as a potential security guardrail for your CI/CD pipelines?

Pradipta Banerjee
Jun 16, 2022

When we discuss security guardrails, it’s primarily about automated policies and monitoring, among other things.

But have you ever thought about container runtimes as a potential security guardrail?

Every container runtime has its pros and cons. When choosing container runtimes, we typically consider the use cases. However, we shouldn’t rule out certain indirect benefits that can improve developer productivity while ensuring proper checks and balances.

Let’s take the example of CI/CD pipelines and see how the container runtime can help keep the pipelines considerably safer without slowing things down.

Safer Cloud Native CI/CD pipelines

Specifically, consider the open-source Tekton project, which provides a Kubernetes-native framework to design and run your CI/CD pipelines.

An essential aspect of Tekton Pipelines is that each CI/CD pipeline step runs in its own container, allowing each step to be customised as required.

A typical CI/CD pipeline is a set of tasks, and each task is a set of steps. A task runs as a Kubernetes pod, and each step of the task is a separate container in that pod.

The relationship between pipeline, task, step, and pod is easiest to see with a concrete example.
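
The snippet below is a minimal, purely illustrative sketch (the names sample-pipeline and build-and-test are made up); it embeds a single task with two steps directly in a pipeline:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: sample-pipeline
spec:
  tasks:
    - name: build-and-test        # one task -> one Kubernetes pod at run time
      taskSpec:
        steps:                    # each step -> one container inside that pod
          - name: build
            image: quay.io/fedora/fedora:35
            script: echo "build step runs in its own container"
          - name: test
            image: quay.io/fedora/fedora:35
            script: echo "test step runs in another container of the same pod"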

Because a task runs as a Kubernetes pod, each step can be customised with respect to container resource requirements, runtime configuration, security policies, attached sidecars, etc.

Now, let’s say you are looking for a safe way to handle one of these requirements:

  1. Run a task requiring additional privileges
  2. Run a step in the task with unsafe sysctl settings (e.g. kernel.msgmax) and capture some test data

There are two ways to handle these requirements:

  1. Do nothing: disallow such requirements, or discourage them by creating a heavyweight process framework.
  2. Become a change agent: Look at newer technologies, such as alternative container runtimes, which can help to handle such requirements safely.

As a decision-maker, you need to make a hard choice here. If you believe in embracing such challenges, with the eventual goal of balancing company needs against developers’ flexibility, then you have help.

You can start by exploring alternate container runtimes and evaluating their applicability to these requirements.
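
For Kata Containers specifically, the pod-level switch shown next depends on a RuntimeClass object being present in the cluster. A minimal sketch, assuming Kata is already installed on the worker nodes and registered with the container engine under the handler name kata (the actual handler name depends on your installation):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata

Once such a RuntimeClass exists, any pod that requests runtimeClassName: kata runs inside a lightweight virtual machine instead of a regular namespaced container.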

For example, with a single-line change to the pod template used to run the tasks, you can provide an additional isolation layer by leveraging the Kata Containers runtime.

The one-line change is adding the following statement to the pod template spec:

runtimeClassName: kata

Sample pod template spec:

spec:
  pipelineRef:
    name: mypipeline
  podTemplate:
    runtimeClassName: kata

The following section provides a complete Tekton task example for folks who want to try it.

Create a Tekton task to set and display kernel sysctl settings

Creating a new Tekton task from scratch is super simple. The following YAML creates a sample Tekton task with two steps:

Step-1. Set sysctl value

Step-2. Display new value

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: setsysctl
spec:
  params:
    - name: namespace
      default: default
    - name: key
      description: Specify the sysctl setting
      type: string
    - name: value
      description: Specify the sysctl value
      type: string
  steps:
    - name: init-sysctl
      securityContext:
        privileged: true
      image: quay.io/fedora/fedora:35
      script: |
        #!/usr/bin/env bash
        set -xe
        echo "Current value of $(params.key)"
        cat $(params.key)
        echo $(params.value) > $(params.key)
    - name: printsysctl
      image: quay.io/fedora/fedora:35
      script: |
        #!/usr/bin/env bash
        set -xe
        echo "Updated value of $(params.key)"
        cat $(params.key)
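
Assuming the Task YAML above is saved as task.yaml (the filename is only for illustration), register it in the cluster before creating a TaskRun:

$ kubectl apply -f task.yaml -n default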

Run the Tekton Task

To run a Tekton task, you need to create a TaskRun object. The following example YAML runs the setsysctl task described earlier.

apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: sysctltaskrun
  namespace: default
spec:
  params:
    - name: key
      value: '/proc/sys/kernel/msgmax'
    - name: value
      value: '65536'
  taskRef:
    name: setsysctl
  podTemplate:
    runtimeClassName: kata

Sample output of the execution.

$ kubectl create -f taskrun.yaml
taskrun.tekton.dev/sysctltaskrun created

$ kubectl get pods
NAME                      READY   STATUS     RESTARTS   AGE
sysctltaskrun-pod-t4p4v   0/2     Init:0/2   0          5s

$ kubectl get pods sysctltaskrun-pod-t4p4v -o yaml | grep kata
  runtimeClassName: kata

$ kubectl get pods
NAME                      READY   STATUS      RESTARTS   AGE
sysctltaskrun-pod-t4p4v   0/2     Completed   0          18s

$ kubectl logs sysctltaskrun-pod-t4p4v -c step-init-sysctl
Current value of /proc/sys/kernel/msgmax
+ echo 'Current value of /proc/sys/kernel/msgmax'
+ cat /proc/sys/kernel/msgmax
8192
+ echo 65536

$ kubectl logs sysctltaskrun-pod-t4p4v -c step-printsysctl
Updated value of /proc/sys/kernel/msgmax
+ echo 'Updated value of /proc/sys/kernel/msgmax'
+ cat /proc/sys/kernel/msgmax
65536
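
Because the TaskRun used the kata runtime class, the privileged step changed kernel.msgmax inside the guest kernel of the lightweight VM, not on the worker node. To convince yourself, read the same sysctl directly on the node that ran the pod (how you access the node depends on your cluster); it should still show the unchanged default:

$ cat /proc/sys/kernel/msgmax     # run on the worker node, not in the pod
8192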

I hope this article gives you a different perspective on container runtimes.

This article was first published here.
