
How-to: Deploying a Multi-Primary Kubernetes Cluster in the Oracle Linux Cloud Native Environment

LinuxVirtTrainTeam-Oracle, Feb 12 2020 (edited Mar 25 2021)

Before You Begin

This tutorial shows you how to install and set up an Oracle Linux Cloud Native Environment with a multi-primary Kubernetes cluster. When deploying a multi-primary Kubernetes cluster, you need to set up a load balancer to enable high availability of the cluster. You can use your own load balancer implementation or you can use the built-in load balancer. This tutorial includes the steps to set up the built-in load balancer.

In this tutorial, you also configure X.509 Private CA Certificates used to manage the communication between the nodes. There are other methods to manage and deploy the certificates, such as by using HashiCorp Vault secrets manager, or by using your own certificates, signed by a trusted Certificate Authority (CA). These other methods are not included in this tutorial.

Background

Oracle Linux Cloud Native Environment is a fully integrated suite for the development and management of cloud-native applications. The Kubernetes module for the Oracle Linux Cloud Native Environment (the Kubernetes module) is the core module. It is used to deploy and manage containers and also automatically installs and configures CRI-O, runC and Kata Containers. CRI-O manages the container runtime for a Kubernetes cluster. The runtime may be either runC or Kata Containers. The Kubernetes module also includes Flannel, the default overlay network for a Kubernetes cluster and CoreDNS, the DNS server for a Kubernetes cluster.

The architecture consists of:

  • Oracle Linux Cloud Native Environment Platform API Server (the Platform API Server). The Platform API Server is responsible for managing all entities, from hosts to microservices, and is also responsible for managing the state of the environment, including the deployment and configuration of modules to one or more nodes in a cluster.
  • Oracle Linux Cloud Native Environment Platform Agent (the Platform Agent). The Platform Agent runs on each host to proxy requests from the Platform API Server to small worker applications.
  • Oracle Linux Cloud Native Environment Platform Command-Line Interface (the Platform CLI). The Platform CLI is used to communicate with the Platform API Server. The Platform CLI is a simple application (the olcnectl command) that converts the input to Platform API Server calls. The required software for modules is configured by the Platform CLI, such as CRI-O, runC, Kata Containers, CoreDNS and Flannel.

What Do You Need?

  • 7 Oracle Linux systems: 1 operator node, 3 Kubernetes primary nodes, and 3 Kubernetes worker nodes
  • Systems have a minimum of Oracle Linux 7 Update 5 (x86_64) installed and running the Unbreakable Enterprise Kernel Release 5 (UEK R5)
  • Systems have access to the following yum repositories: ol7_olcne, ol7_kvm_utils, ol7_addons, ol7_latest, and ol7_UEKR5, or access to related ULN channels (refer to “Enabling Access to the Oracle Linux Cloud Native Environment Packages”)
  • Systems have the oraclelinux-release-el7 RPM installed and the oracle-olcne-release-el7 RPM installed
  • Network Time Protocol (NTP) service is running on the Kubernetes primary and worker nodes (refer to “Setting up a Network Time Service”)
  • Swap is disabled on the Kubernetes primary and worker nodes (refer to “Disabling Swap”)
  • SELinux is disabled or in permissive mode on the Kubernetes primary and worker nodes (refer to “Setting SELinux to Permissive”)
  • Systems are configured with necessary firewall rules (refer to “Setting up the Firewall Rules”)
  • Systems have the br_netfilter kernel module loaded on the Kubernetes primary and worker nodes (refer to “br_netfilter Module”)
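The host-preparation procedures referenced above are documented separately. As a convenience, the following is a minimal sketch, assuming default Oracle Linux 7 systems, of what they typically involve on the Kubernetes primary and worker nodes; the firewall rules are role-specific, so take those from "Setting up the Firewall Rules".

# Enable the required yum repositories (yum-config-manager is provided by yum-utils).
$ sudo yum install -y oracle-olcne-release-el7
$ sudo yum-config-manager --enable ol7_olcne ol7_kvm_utils ol7_addons ol7_latest ol7_UEKR5

# Disable swap now and comment out swap entries so it stays off after a reboot.
$ sudo swapoff -a
$ sudo sed -i '/ swap /s/^/#/' /etc/fstab

# Put SELinux into permissive mode now and on reboot.
$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Load the br_netfilter module now and on reboot.
$ sudo modprobe br_netfilter
$ echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf

# Make sure a time service is running (chronyd is the Oracle Linux 7 default).
$ sudo systemctl enable --now chronyd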

Steps

1. Set up the Operator Node

The operator node performs and manages the deployment of environments, including deploying the Kubernetes cluster. An operator node may be a node in the Kubernetes cluster, or a separate host. In this tutorial, the operator node is a separate host. On the operator node, install the Platform CLI, Platform API Server, and utilities. Enable the olcne-api-server service, but do not start it.

$ sudo yum install olcnectl olcne-api-server olcne-utils

$ sudo systemctl enable olcne-api-server.service

2. Set up the Kubernetes Nodes

Perform these steps on all Kubernetes primary and worker nodes. Install the Platform Agent package and utilities. Enable the olcne-agent service, but do not start it.

$ sudo yum install olcne-agent olcne-utils

$ sudo systemctl enable olcne-agent.service

If you use a proxy server, configure CRI-O to use it. On each Kubernetes node, create a CRI-O systemd configuration directory, then create a file named proxy.conf in that directory and add the proxy server information. The proxy server and NO_PROXY values shown here are examples; substitute the values appropriate for your environment.

$ sudo mkdir /etc/systemd/system/crio.service.d

$ sudo vi /etc/systemd/system/crio.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=proxy.example.com:80"
Environment="HTTPS_PROXY=proxy.example.com:80"
Environment="NO_PROXY=.example.com,192.0.2.*"

If the docker or containerd services are running, stop and disable them.

$ sudo systemctl disable --now docker.service

$ sudo systemctl disable --now containerd.service

3. Set up a Load Balancer

Perform these steps on each Kubernetes primary node. Open port 6444 and enable the Virtual Router Redundancy Protocol (VRRP).

$ sudo firewall-cmd --add-port=6444/tcp

$ sudo firewall-cmd --add-port=6444/tcp --permanent

$ sudo firewall-cmd --add-protocol=vrrp

$ sudo firewall-cmd --add-protocol=vrrp --permanent
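Optionally, confirm that the rules are in place, for example:

$ sudo firewall-cmd --list-ports
$ sudo firewall-cmd --list-protocols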

If you use a proxy server, configure NGINX to use it. On each Kubernetes primary node, create an NGINX systemd configuration directory, then create a file named proxy.conf in that directory and add the proxy server information. The proxy server and NO_PROXY values shown here are examples; substitute the values appropriate for your environment.

$ sudo mkdir /etc/systemd/system/olcne-nginx.service.d

$ sudo vi /etc/systemd/system/olcne-nginx.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=proxy.example.com:80"
Environment="HTTPS_PROXY=proxy.example.com:80"
Environment="NO_PROXY=.example.com,192.0.2.*"

4. Set up X.509 Private CA Certificates

Use the /etc/olcne/gen-certs-helper.sh script to generate a private CA and certificates for the nodes. Run the script from the /etc/olcne directory on the operator node; it saves the certificate files in the current directory. Use the --nodes option followed by the nodes for which you want to create certificates. Create a certificate for each node that runs the Platform API Server or Platform Agent, that is, the operator node and each Kubernetes node. Provide the private CA information using the --cert-request* options; some of these options are shown in the example. You can list all command options with gen-certs-helper.sh --help.

For the --cert-request-common-name option, provide the appropriate DNS Domain Name for your environment. For the --nodes option value, provide the fully qualified domain name (FQDN) of your operator, primary, and worker nodes.

$ cd /etc/olcne

$ sudo ./gen-certs-helper.sh \
  --cert-request-organization-unit "My Company Unit" \
  --cert-request-organization "My Company" \
  --cert-request-locality "My Town" \
  --cert-request-state "My State" \
  --cert-request-country US \
  --cert-request-common-name example.com \
  --nodes operator.example.com,master1.example.com,master2.example.com,master3.example.com,worker1.example.com,worker2.example.com,worker3.example.com
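Optionally, inspect the generated files before transferring them. The exact directory layout depends on the script version; in this tutorial the certificates end up under /etc/olcne/configs/certificates, for example:

$ ls -R /etc/olcne/configs/certificates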

5. Transfer Certificates

The /etc/olcne/gen-certs-helper.sh script used to generate the private CA and certificates was run on the operator node. Make sure the operator node has passwordless ssh access to the Kubernetes primary and worker nodes; one way to set this up is sketched below. Then run the transfer script on the operator node to copy the certificates from the operator node to the Kubernetes nodes.
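A minimal sketch of one way to set up that passwordless access, assuming the same user account exists on every node and substituting your own node names:

$ ssh-keygen -t rsa -b 4096
$ for node in master1 master2 master3 worker1 worker2 worker3; do ssh-copy-id ${node}.example.com; done

With key-based access in place, run the transfer script on the operator node: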

$ bash -ex /etc/olcne/configs/certificates/olcne-tranfer-certs.sh

6. Configure the Platform API Server to Use the Certificates

On the operator node, run the /etc/olcne/bootstrap-olcne.sh script as shown to configure the Platform API Server to use the certificates. Alternatively, you can use certificates managed by HashiCorp Vault. This method is not included in this tutorial.

$ sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type file \
  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
  --olcne-component api-server
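After the script completes, you can optionally confirm that the Platform API Server service is running:

$ sudo systemctl status olcne-api-server.service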

7. Configure the Platform Agent to Use the Certificates

On each Kubernetes node, run the /etc/olcne/bootstrap-olcne.sh script as shown to configure the Platform Agent to use the certificates. Alternatively, you can use certificates managed by HashiCorp Vault. This method is not included in this tutorial.

$ sudo /etc/olcne/bootstrap-olcne.sh \
  --secret-manager-type file \
  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key \
  --olcne-component agent

Repeat step 7 as needed to ensure this script is run on each Kubernetes node.
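Similarly, you can optionally confirm that the Platform Agent service is running on each node:

$ sudo systemctl status olcne-agent.service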

8. Create the Environment

On the operator node, create the environment using the olcnectl environment create command as shown. Alternatively, you can use certificates managed by HashiCorp Vault; that method is not included in this tutorial.

$ olcnectl --api-server 127.0.0.1:8091 environment create \
  --environment-name myenvironment \
  --update-config \
  --secret-manager-type file \
  --olcne-node-cert-path /etc/olcne/configs/certificates/production/node.cert \
  --olcne-ca-path /etc/olcne/configs/certificates/production/ca.cert \
  --olcne-node-key-path /etc/olcne/configs/certificates/production/node.key

Environment myenvironment created.

9. Add the Kubernetes Module to the Environment

The next command is optional; it lists the modules available for an environment. Run it on the operator node.

$ olcnectl --api-server 127.0.0.1:8091 module list \
  --environment-name myenvironment

Available Modules:
  node
  kubernetes
On the operator node, use the following command to create a multi-primary deployment using the built-in load balancer. Use the --virtual-ip option to set the virtual IP address that the load balancer uses in front of the primary nodes, for example, --virtual-ip 192.0.2.137.

Provide the fully qualified domain names (FQDNs) of your primary and worker nodes, separating the node names with commas. The primary and worker nodes in this example are placeholders; substitute the nodes in your environment.

$ olcnectl --api-server 127.0.0.1:8091 module create \
  --environment-name myenvironment \
  --module kubernetes --name mycluster \
  --container-registry container-registry.oracle.com/olcne \
  --virtual-ip 192.0.2.137 \
  --master-nodes master1.example.com:8090,master2.example.com:8090,master3.example.com:8090 \
  --worker-nodes worker1.example.com:8090,worker2.example.com:8090,worker3.example.com:8090

Modules created successfully.

10. Validate the Kubernetes Module

On the operator node, use the following command to validate that the nodes are configured correctly to deploy the Kubernetes module. In this example, there are no validation errors. If there are any errors, the commands required to fix the nodes are provided in the output of this command.

$ olcnectl --api-server 127.0.0.1:8091 module validate \
  --environment-name myenvironment \
  --name mycluster

Validation of module mycluster succeeded.

11. Deploy the Kubernetes Module

On the operator node, use the following command to deploy the Kubernetes module to the environment.

$ olcnectl --api-server 127.0.0.1:8091 module install \
  --environment-name myenvironment \
  --name mycluster

Modules installed successfully.
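At this point the cluster should be up. As an optional check (assuming the kubeadm-style admin kubeconfig that the Kubernetes module places at /etc/kubernetes/admin.conf on the primary nodes), you can verify the cluster from one of the primary nodes:

$ mkdir -p $HOME/.kube
$ sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes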
