Abstract
Kata Containers is an open source project and community working to build a standard implementation of lightweight Virtual Machines (VMs) that feel and perform like containers, but provide the workload isolation and security advantages of VMs. In this lab, students will use Vagrant to automatically deploy an Oracle Container Services for use with Kubernetes cluster on Oracle Linux 7 virtual machines using VirtualBox. Once the cluster is deployed, attendees will learn how to deploy containers secured by Kata Containers.
Lab Objective
This hands-on lab takes you through the planning and deployment of Oracle Container Services for use with Kubernetes, initially with Docker as the container runtime and then leveraging Kata Containers.
During this lab, we will create an Oracle Linux KVM demo environment on a single laptop, using KVM with nested virtualization; nested virtualization is required by Kata Containers.
Minimal Configuration to run this lab
This document can be used to run the lab at home or at your office on your own laptop/desktop/server machine.
In this lab, we use Oracle Linux KVM on the host to create three Oracle Linux virtual machines that will serve as the Kubernetes Master node (1) and Worker nodes (2), so that all software components can be installed on a single physical machine.
Since the hypervisor used on the host is KVM, the native operating system on the laptop/desktop/server machine has to be Linux based.
The minimal configuration needed for your laptop/desktop/server is:
- 16 GB of RAM
- Modern Intel/AMD x86 CPU
- ~ 80GB of disk space
The KVM Virtual Machine requirements are:
- Virtual Machine dedicated to KVM Compute node
  - 2 vCPUs
  - 4GB RAM
  - 50GB of disk space
- Virtual Machine dedicated to Oracle Linux Virtualization Manager node
  - 2 vCPUs
  - 6GB RAM
  - 30GB of disk space
For the installation of Oracle Linux Container Services for use with Kubernetes you can follow the documentation available at https://docs.oracle.com/cd/E52668_01/E88884/html/kubernetes_install_upgrade.html or leverage the Oracle Linux Vagrant Boxes, publicly available on GitHub at https://github.com/oracle/vagrant-boxes
Global Architecture Picture

Important Notes
Acronyms
In the present document, we will use the following acronyms:
- OL for "Oracle Linux"
- OL KVM for "Oracle Linux Kernel-based Virtual Machine"
- VM for "Virtual Machine"
- K8s for "Kubernetes"
- Kata for "Kata Containers"
Nested Virtualization
In this lab, two virtualization layers are used to emulate a compute/management architecture: the KVM host runs three Oracle Linux VMs (see the nested-virtualization check below), which host:
- Oracle Linux with Container Services for use with Kubernetes Master node (1)
- Oracle Linux with Container Services for use with Kubernetes Worker nodes (1 & 2)
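Before starting the lab VMs, you may want to confirm that nested virtualization is enabled on the physical host; a quick check, assuming an Intel CPU (use "kvm_amd" instead of "kvm_intel" on AMD):
# cat /sys/module/kvm_intel/parameters/nested
# egrep -c '(vmx|svm)' /proc/cpuinfo
The first command should print "Y" (or "1") and the second a non-zero count of CPUs exposing virtualization extensions.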
Lab Execution
The first step is to start the Virtual Machines required to run the lab.
Those Virtual Machines have been created on KVM and can be managed by the local GUI utility called "virt-manager"; to proceed, open a terminal as the "lab" user and execute:
# virt-manager
The "Virtual Machine Manager" window will open:

As you can see, more Virtual Machines may be listed; the three Virtual Machines required for HOL-5303 are:
- k8s-master (Oracle Linux with Oracle Container Services for use with Kubernetes - master-node)
- k8s-worker1 (Oracle Linux with Oracle Container Services for use with Kubernetes - worker-node 1 of 2)
- k8s-worker2 (Oracle Linux with Oracle Container Services for use with Kubernetes - worker-node 2 of 2)
Select each of those Virtual Machines and click the "Play" button to start them.
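If you prefer the command line over the GUI, the same VMs can also be started with "virsh", assuming the domain names match the ones listed above:
# virsh start k8s-master
# virsh start k8s-worker1
# virsh start k8s-worker2
# virsh list --all
The last command lists all defined domains and should show the three lab VMs in the "running" state.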
Lab Checks
Before starting to work on the K8s environment, proceed to check that everything is correctly configured.
Master Node
- Open a terminal, connect via "ssh" to the "Master Node", and verify that all the K8s pods are up and running
# ssh root@192.168.99.100
# kubectl get pods --namespace=kube-system

- In the same terminal, verify that the "Worker nodes" are also in the "Ready" status
# kubectl get nodes

- Open a terminal, connect via "ssh" to the "Worker Nodes" (1 and 2), and verify that all the K8s containers are up and running
# ssh root@192.168.99.101
# docker ps

# ssh root@192.168.99.102
# docker ps

Known issue
None
Lab Steps
During this lab, the following operations will be executed:
Generate Service Account Token to get access to the K8s dashboard
In this section we'll generate a service account with the "cluster-admin" role, which has access to all K8s resources.
- Connect via "ssh" to the K8s "Master Node" (192.168.99.100) and execute the following commands
# kubectl create serviceaccount cluster-admin-dashboard-sa
# kubectl create clusterrolebinding cluster-admin-dashboard-sa --clusterrole=cluster-admin --serviceaccount=default:cluster-admin-dashboard-sa
- Copy the token from the generated secret into a dedicated file
# kubectl get secret | grep cluster-admin-dashboard-sa
# kubectl describe secret cluster-admin-dashboard-sa-token-xgdjk (use the secret name obtained in the previous step; the "-xgdjk" suffix is only an example)

- Execute the following command to save the token in a file
# kubectl describe secret cluster-admin-dashboard-sa-token-xgdjk |grep "token:" |cut -d ":" -f2 |sed -e 's/ //g' > /home/vagrant/token

Connect to the K8s dashboard from the host laptop browser
- To access the K8s dashboard from the host, we need to create an ssh port forward for port 8001 (from the VM to the host)
# ssh -L 8001:localhost:8001 root@192.168.99.100

- Once this is in place, we can open a browser (Firefox) on the host and connect to the K8s Dashboard
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
- Log in to the K8s Dashboard using the token generated above


Start Kubernetes pods on Worker nodes (leveraging Docker container)
In this section we run an application by creating a Kubernetes Deployment object; using the K8s Dashboard, we upload a YAML file containing the Deployment description.
This YAML file describes a Deployment that runs the "nginx:1.7.9" Docker image.
Here is the content of the YAML file we use:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
To start these two pods on the K8s workers, proceed with the following steps.
- On the K8s Dashboard, click the "+ CREATE" icon in the upper-right corner

- In the "Create from text input" window, paste the YAML content above and click the "Upload" button
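Alternatively, the same Deployment can be created from the CLI on the master node; a quick sketch, assuming the YAML above has been saved to a file named "nginx-deployment.yaml":
# kubectl apply -f nginx-deployment.yaml
# kubectl get deployment nginx-deployment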

- Once deployed, the two pods can be viewed both in the "K8s dashboard" and from the "CLI"

# kubectl get pods

The deployed pods expose a web service on port 80; the following steps show how to access this web service and verify that it is working correctly.
- Get the IP address where the "nginx" service is exposed: click "Workloads => Pods" and select one of the two "nginx-deployment-*" pods running on "worker1" by clicking on its name

- On the details page we can see the "Docker container" IP address

[OPTIONAL] From a terminal opened on one of the three Virtual Machines (master, worker1 or worker2), you can execute the following command to test the web server:
# wget http://[IP-ADDRESS-NGINX]
These pods are based on Docker containers running on the two configured K8s worker nodes; to verify those "Docker containers" we can execute the following commands.
- From the "K8s master" node we can verify the kernel running inside the container (the same as the worker node's kernel)
# kubectl get pods
# kubectl exec -it <nginx-pod-name> -- uname -a

The kernel running on the K8s workers is "4.14.35-1902.4.8.el7uek.x86_64" and, as expected, the Docker containers managed by K8s on top of them run the same kernel.
Scale existing Application and update nginx software release at the same time
Via the K8s Dashboard it is also possible to scale the service (number of pods) and update the running software release (nginx) at the same time.
To complete this section, proceed with the following steps:
- In the K8s Dashboard, open "Workloads => Deployments" and click "nginx-deployment"

- On the "Deployment Details" page, click the "Edit" button (upper-right corner)

- In the "Edit a Deployment" window, update the following parameters
- "replicas": 8 (two entries require update)
- "image": "nginx:1.8"
- "updatedReplicas": 8
- "readyReplicas": 8
- "availableReplicas": 8
and then click the "Update" button.
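The same change can also be applied from the CLI on the master node, if you prefer; a sketch using standard kubectl commands against the Deployment and container defined above:
# kubectl scale deployment nginx-deployment --replicas=8
# kubectl set image deployment/nginx-deployment nginx=nginx:1.8
# kubectl rollout status deployment/nginx-deployment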
The new deployment will appear with pods as below:

Reset Kubernetes Installation to prepare for the new environment
Reset Kubernetes Installation on master and worker nodes
- Connect to the master and both worker nodes via "ssh" and execute the following command (confirm with "y")
# kubeadm reset



Enable new Yum channels, update Kubernetes and install Kata Container
In this section new Yum channels will be enabled:
- [ol7_developer_olcne] - Developer Preview for Oracle Linux Cloud Native Environment
- [ol7_kvm_utils] - Oracle Linux KVM Utilities
Once enabled, the following steps will be executed:
- Kubernetes upgrade from 1.12 to 1.14 Developer release
- Kata Container 1.5.x installation
The following steps have to be executed on all the KVM Virtual Machines (master, worker1 and worker2).
- Enable the "ol7_developer_olcne" Yum channel (required to upgrade Kubernetes and install Kata Containers)
# yum install oracle-olcne-release-el7.x86_64 -y

- Enable the "ol7_kvm_utils" Yum channel (containing Kata Containers dependencies)
# yum-config-manager --enable ol7_kvm_utils

- Update the system
# yum update -y

- Install the Kata Containers runtime
# yum install kata-runtime -y
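Optionally, you can verify that the (nested) KVM guest is able to run Kata Containers; the kata-runtime CLI includes a self-check for this:
# kata-runtime --version
# kata-runtime kata-check
The check should report that the system can run Kata Containers; if it complains about missing virtualization extensions, nested virtualization is not enabled on the KVM host.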

Configure new Kubernetes cluster with CRI (containerd plugin) and Kata Container
In order to allow kubelet to use containerd (through the CRI interface), configure the service to point to the containerd socket; to proceed, execute the following steps as the "root" user:
K8s Master Node
- Connect to the Master node by "ssh"
# ssh root@192.168.99.100
- Configure Kubernetes to use "containerd" (instead of Docker) as Container Runtime
# mkdir -p /etc/systemd/system/kubelet.service.d/
# cat << EOF |tee /etc/systemd/system/kubelet.service.d/0-containerd.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF
- Inform systemd about the new configuration for the "kubelet" service
# systemctl daemon-reload

- Restart the "containerd" service and verify it is up and running
# systemctl restart containerd
# systemctl status containerd

- Install the "crictl" utility and verify it is available in the PATH; it provides a CLI for CRI-compatible container runtimes. This utility, currently in beta and still iterating quickly, is hosted in the cri-tools repository.
# VERSION="v1.15.0"
# wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
# tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
# rm -f crictl-$VERSION-linux-amd64.tar.gz
# which crictl
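Optionally, you can point "crictl" at the containerd socket and verify that it can talk to the runtime; a minimal sketch, writing the endpoint into /etc/crictl.yaml (the utility's default configuration file) so the --runtime-endpoint option does not have to be repeated:
# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
# crictl info
At this stage (before "kubeadm init") no pods are running yet, but "crictl info" should report that the containerd runtime is ready.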

- Start the K8s cluster using "kubeadm"; pay attention to the output (see picture below) and save the highlighted content to a text file.
# kubeadm init --cri-socket /run/containerd/containerd.sock --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=swap

- To start using the K8s cluster, we have to remove the old config file and run the first set of commands highlighted in the picture above as the "root" user
# rm -f $HOME/.kube/config
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config
- Verify node and pod status on the new K8s master node
# kubectl get nodes
# kubectl get pods --namespace=kube-system

- Deploy a pod network add-on, required for pods to communicate with each other; "flannel" will be used (as it was on the previous K8s cluster)
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

- Check the pod status and verify that both "flannel" and "coredns" are running

K8s Worker 1 Node - Join cluster as Docker Container Runtime node
- Install all required packages and update the system
# yum install oracle-olcne-release-el7.x86_64 -y
# yum-config-manager --enable ol7_kvm_utils
# yum update -y
# yum install kata-runtime -y
# yum install iproute-tc
- Set up the updated Docker configuration (to be used with K8s)
# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
- Restart the "Docker" system service
# systemctl daemon-reload
# systemctl restart docker
- Join the "Worker1" node to the K8s cluster; you need to use the command saved from the master step above, with the proper token and hash (the command below is only an example)
# kubeadm join 10.0.2.210:6443 --token yd6erz.akin2eqjyscughar --discovery-token-ca-cert-hash sha256:c032e1625181dcf87647fd361bfa2a7288d1e1b8e2f78973c8972ae6f84867de --ignore-preflight-errors=Swap
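If the join command from the "kubeadm init" output was not saved, a new one can be generated on the master node at any time:
# kubeadm token create --print-join-command
Run the printed command on the worker, appending the "--ignore-preflight-errors=Swap" option as in the example above.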

- From the "master" node, check the K8s nodes now configured in your cluster ("master" and "worker1" should be there once completed); the "wide" output option will also show the "container-runtime" used
# kubectl get nodes -o wide

- To list containers on the "worker1" node (Docker), use the docker command:
# docker ps

K8s Worker 2 Node - Join cluster with Containerd Container Runtime node
- Install all required packages and update the system
# yum install oracle-olcne-release-el7.x86_64 -y
# yum-config-manager --enable ol7_kvm_utils
# yum update -y
# yum install kata-runtime -y
# yum install iproute-tc
- Set up "containerd" for integration with "kubelet"
# modprobe overlay
# modprobe br_netfilter
# cat > /etc/sysctl.d/99-kubernetes-cri.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system
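Before continuing, you can quickly confirm that the kernel modules are loaded and the sysctl settings were applied:
# lsmod | grep -E 'overlay|br_netfilter'
# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables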
- Configure "kubelet" to start with "containerd" as Container Runtime
# cat << EOF | sudo tee /etc/systemd/system/kubelet.service.d/0-containerd.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --fail-swap-on=false"
EOF
# echo "KUBELET_EXTRA_ARGS= --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --fail-swap-on=false" > /etc/sysconfig/kubelet
# echo "KUBELET_EXTRA_ARGS= --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock --fail-swap-on=false" > /etc/default/kubelet
- Make systemd aware of the service changes and restart the "containerd" and "kubelet" services
# systemctl daemon-reload
# systemctl restart containerd
# systemctl restart kubelet
- Set up the updated Docker configuration (to be used with K8s)
# cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# systemctl daemon-reload
# systemctl restart docker
- Join the "Worker2" node to the K8s cluster; you need to use the command saved from the master step above, with the proper token and hash (the command below is only an example)
# kubeadm join 10.0.2.210:6443 --token yd6erz.akin2eqjyscughar --discovery-token-ca-cert-hash sha256:c032e1625181dcf87647fd361bfa2a7288d1e1b8e2f78973c8972ae6f84867de --ignore-preflight-errors=Swap

- From the "master" node, check the K8s nodes now configured in your cluster ("master", "worker1" and "worker2" should be there once completed); the "wide" output option will also show the "container-runtime" used
# kubectl get nodes -o wide

- To list containers on the "worker2" node (containerd), use the ctr command:
# ctr --namespace k8s.io containers ls
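Note that "ctr containers ls" lists container metadata only; to see which containers actually have a running task, you can also list the tasks in the same namespace:
# ctr --namespace k8s.io tasks ls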

Deploy Kata Container support into the K8s Cluster
Now that "kubelet" and "containerd" are correctly configured on the "worker2" node, we can move on and install Kata Containers support in our cluster.
- Run the next commands on the "master" node to deploy Kata Containers support into the Kubernetes cluster
# kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-rbac.yaml
# kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/kata-deploy.yaml
# kubectl apply -f https://raw.githubusercontent.com/kata-containers/packaging/master/kata-deploy/k8s-1.14/kata-qemu-runtimeClass.yaml
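You can also confirm that the "kata-qemu" RuntimeClass has been registered in the cluster (RuntimeClass is available as a beta resource in Kubernetes 1.14):
# kubectl get runtimeclass
The output should list "kata-qemu" together with its runtime handler.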

- Run the following command to check that Kata Containers support has been deployed on the K8s cluster
# kubectl get pods --namespace=kube-system
Deploy K8s Dashboard and get cluster-admin token to get access to the GUI
- Deploying the dashboard is straightforward; execute the following command on the "master" node
# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml

- Run "kubectl proxy" in the background to expose Dashboard access on localhost:8001
# kubectl proxy &

- On the K8s "Master Node" (192.168.99.100), execute the following commands
# kubectl create serviceaccount cluster-admin-dashboard-sa
# kubectl create clusterrolebinding cluster-admin-dashboard-sa --clusterrole=cluster-admin --serviceaccount=default:cluster-admin-dashboard-sa
- Copy the token from the generated secret into a dedicated file
# kubectl get secret | grep cluster-admin-dashboard-sa
# kubectl describe secret cluster-admin-dashboard-sa-token-xgdjk (use the secret name obtained in the previous step; the "-xgdjk" suffix is only an example)

- Execute the following command to save the token in a file
# kubectl describe secret cluster-admin-dashboard-sa-token-xgdjk |grep "token:" |cut -d ":" -f2 |sed -e 's/ //g' > /home/vagrant/token
- To access the K8s dashboard from the host, we need to create an ssh port forward for port 8001 (from the VM to the host); this port redirection may already be active
# ssh -L 8001:localhost:8001 root@192.168.99.100
- From the host, open a browser and connect to the following URL, logging in with the token generated above:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

- In the K8s Dashboard we can also see the difference, in terms of supported runtimes, between the "worker1" and "worker2" nodes: "worker2" also supports "Kata Containers".
- Click "Cluster => Nodes" and then expand the "Labels" for each worker node; you'll see the Kata Containers support label associated with "worker2"
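The same label can also be checked from the CLI on the master node; the "-L" option of "kubectl get nodes" adds a column for the given label key (the key below is the one shown in the Dashboard):
# kubectl get nodes -L katacontainers.io/kata-runtime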

Start Kubernetes pods on Worker nodes, leveraging Docker container as well as Containerd with Kata
Deploy pods by leveraging Docker and Containerd Container Runtimes
In this section we run an application by creating a Kubernetes Deployment object; using the K8s Dashboard, we upload a YAML file containing the Deployment description.
This YAML file describes a Deployment that runs the "nginx:1.7.9" image.
Here is the content of the YAML file we use:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
To start these two pods on the K8s workers, proceed with the following steps.
- On the K8s Dashboard, click the "+" icon in the upper-right corner
- In the "Create from text input" window, paste the YAML content above and click the "Upload" button

- Once deployed, the two pods can be viewed both in the "K8s dashboard" and from the "CLI"

# kubectl get pods

The big difference between the two pods is that:
- the nginx-deployment pod running on "worker1" uses "Docker" as its Container Runtime
- the nginx-deployment pod running on "worker2" uses "containerd" as its Container Runtime
Deploy pods by leveraging Kata Container Runtime
This YAML file describes a Deployment that runs a "php/apache" image.
Here is the content of the YAML file we use:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: php-apache-kata-qemu
  name: php-apache-kata-qemu
spec:
  replicas: 1
  selector:
    matchLabels:
      run: php-apache-kata-qemu
  template:
    metadata:
      labels:
        run: php-apache-kata-qemu
    spec:
      runtimeClassName: kata-qemu
      containers:
      - image: k8s.gcr.io/hpa-example
        imagePullPolicy: Always
        name: php-apache
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          requests:
            cpu: 200m
      restartPolicy: Always
      nodeSelector:
        katacontainers.io/kata-runtime: "true"
        kubernetes.io/hostname: worker2.vagrant.vm
---
apiVersion: v1
kind: Service
metadata:
  name: php-apache-kata-qemu
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: php-apache-kata-qemu
  sessionAffinity: None
  type: ClusterIP
To start this single Kata container on K8s worker2, proceed with the following steps.
- On the K8s Dashboard, click the "+" icon in the upper-right corner
- In the "Create from text input" window, paste the YAML content above and click the "Upload" button
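Alternatively, the same Deployment and Service can be created from the CLI on the master node; a sketch, assuming the YAML above has been saved to a file named "php-apache-kata-qemu.yaml":
# kubectl apply -f php-apache-kata-qemu.yaml
# kubectl get pods -l run=php-apache-kata-qemu -o wide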
- From the "worker2" node we can verify that a new "qemu" process has started (the Kata Containers VM process)
# ps -edaf |grep qemu

- From the "K8s master" node we can verify the kernel running inside each container, which will be:
- the same as the worker node (host) kernel when running with Docker or containerd
- a dedicated, different kernel when running with Kata Containers
# kubectl get pods
# kubectl exec -it <pod-name> -- uname -a (run this for one of the "nginx" pods and for the "php-apache-kata-qemu" pod)

The kernel running on the K8s workers is "4.14.35-1902.4.8.el7uek.x86_64" and, as expected, the same kernel is seen from the Docker and containerd containers; the Kata Container, however, shows a different kernel ("4.19.52") because the pod boots as a lightweight Virtual Machine with its own guest kernel.
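As a convenience, the kernel used by every pod in the default namespace can be compared in one pass from the master node; a small sketch built on the commands above:
# for p in $(kubectl get pods -o name); do echo -n "${p#pod/}: "; kubectl exec "${p#pod/}" -- uname -r; done
The nginx pods should report the worker's UEK kernel, while the "php-apache-kata-qemu" pod reports the dedicated Kata guest kernel.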