OpenShift

Welcome to my new collection of know-how for OpenShift 4.5. This series aims to remove the ambiguity often found in the official documentation and demonstrate, with clear examples, how to deploy an OpenShift cluster and complete the most common "Day Two Operations" tasks.

The objectives detailed in this series are intended for learning and for building personal lab environments, not for production use. That said, the process and know-how remain the same.

OpenShift is a rapidly moving target, with minor releases often arriving weekly. At the time of writing, 4.6.1 has been released for general availability (GA). I'll cover both 4.5 and 4.6 in this series.

A challenge for the would-be OpenShift administrator is access to the technology. The minimum requirements are enormous, and remember that OpenShift (based on Kubernetes) is intended to be deployed on cloud infrastructure, or better put, OpenShift is a cloud-native platform first. This was evident when OpenShift 4.1 was first released: it supported only Amazon Web Services (AWS) using Installer Provisioned Infrastructure (IPI).

Today, OpenShift supports AWS, Azure, GCP, IBM, OpenStack, RHV, vSphere and bare metal. All of these have their nuances, and for a home lab most are too costly to experiment with.

Bare metal allows us to provision our own infrastructure, and the User Provisioned Infrastructure (UPI) installation method enables customisation. A UPI bare-metal installation is far more involved than, say, an AWS IPI deployment. However, the knowledge gained is invaluable, and the result is a local cluster, albeit a single-node "cluster".

Obviously, you need three master nodes to achieve quorum and high availability (with three etcd members, the cluster tolerates the loss of one node while the remaining two still form a majority), and you can't really class a single node as a cluster. But for home labs, it's all you need!

Overview

There are a few feasible approaches. The first is to use the libvirt KVM/QEMU hypervisor driver on a single Linux host. You'll need plenty of cores and memory, but I have proven a three-master cluster plus a bastion host, all running as VMs on an 8-core, 32 GB Linux host.
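To make the single-host libvirt approach more concrete, below is a minimal sketch that uses the libvirt Python bindings to define and start one master VM. The VM name, memory and CPU sizing, disk path and bridge name (br0) are illustrative assumptions rather than values prescribed by this series; you could achieve the same thing interactively with virt-install or virt-manager.

#!/usr/bin/env python3
"""Sketch: define and start a single OpenShift master VM with libvirt.

Assumes the libvirt Python bindings (python3-libvirt) are installed,
a qcow2 disk image already exists, and a Linux bridge named br0 is
configured on the host. Names and sizes are illustrative only.
"""
import libvirt

# Illustrative values -- adjust to fit your own lab host.
VM_NAME = "ocp-master-0"                               # hypothetical VM name
DISK = "/var/lib/libvirt/images/ocp-master-0.qcow2"    # pre-created qcow2 disk
MEMORY_GIB = 10                                        # per-VM memory; tune to fit a 32 GB host
VCPUS = 4
BRIDGE = "br0"                                         # bridged network so the VM is reachable on the LAN

DOMAIN_XML = f"""
<domain type='kvm'>
  <name>{VM_NAME}</name>
  <memory unit='GiB'>{MEMORY_GIB}</memory>
  <vcpu>{VCPUS}</vcpu>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='{DISK}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='{BRIDGE}'/>
      <model type='virtio'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
"""

def main() -> None:
    # Connect to the local system hypervisor.
    conn = libvirt.open("qemu:///system")
    try:
        # Define the domain persistently, then start it.
        dom = conn.defineXML(DOMAIN_XML)
        dom.create()
        print(f"Started {dom.name()} (id {dom.ID()})")
    finally:
        conn.close()

if __name__ == "__main__":
    main()

Repeating the same pattern with different names and sizes covers the bastion host and, later, any throwaway worker VMs.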

After deploying this using various configurations, I've settled on documenting the following: three physical masters, bootstrapped using a bridged VM, with the option of adding and removing VM-based worker nodes later on.
