Correct invalid day.md navigation
parent 748e09750c
commit 50a8950f1a
@@ -12,7 +12,6 @@ I decided to cheat a bit for this opening, and consult ChatGPT, the AI Service o

> Why choose an enterprise Kubernetes platform such as Red Hat OpenShift?
>
> 1. Automation and Enterprise-grade Security: Red Hat OpenShift provides an automated platform to help you deploy, manage, and scale your applications quickly and easily while ensuring that the underlying infrastructure is secure.
>
> 2. Open Source: Red Hat OpenShift is built on top of open source technologies such as Kubernetes, Docker, and Red Hat Enterprise Linux. This ensures that your applications are always up-to-date with the latest technologies.
@@ -23,7 +22,6 @@ I decided to cheat a bit for this opening, and consult ChatGPT, the AI Service o

> 5. Cost Savings: Red Hat OpenShift provides a cost-effective solution for running your applications in the cloud. You can save money on infrastructure and operations costs by leveraging OpenShift's automated platform.

# What does Red Hat OpenShift bring to the party?

Red Hat has developed OpenShift based on an Open Source platform (Kubernetes) and even distributes the OpenShift platform enhancements as Open Source as well, under the guise of [OpenShift Kubernetes Distribution (OKD)](https://www.okd.io).
@@ -92,20 +90,21 @@ Red Hat OpenShift Dedicated is a service hosted and fully-managed by Red Hat tha

## Getting access to a trial

Getting started with OpenShift is simple. They give you the ability to trial three options:

- Developer Sandbox - A hosted instance of OpenShift for you to consume straight away, for 30 days
- Managed Service - A fully managed Red Hat OpenShift Dedicated instance for you to consume; you will need to provide the AWS or GCP cloud account to deploy this into. 60-day trial.
- Self-Managed - Deploy OpenShift yourself to any of the platforms named above. 60-day trial.

You'll need to sign up for a Red Hat account to access the trial and get the software details to deploy.

- [Try Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift/try-it)
# Next Steps - Understanding the OpenShift Architecture + Spinning up an instance!

In [day 57](day57.md) we will dive into the Architecture and components of OpenShift, moving on to spinning up our own OpenShift Environment in [day 58](day58.md).

# Resources

- [OKD](https://www.okd.io/)
- [Official Red Hat OpenShift product page](https://www.redhat.com/en/technologies/cloud-computing/openshift)
- [Red Hat Hybrid Cloud Learning Hub](https://cloud.redhat.com/learn)
@@ -48,7 +48,7 @@ Red Hat believes that although Kubernetes is a great platform for managing your

- Monitoring
- Routing

And finally, to round off, you can interact with a Red Hat OpenShift Cluster either via a "Comprehensive" web console or the custom [OpenShift CLI tool `oc`](https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/getting-started-cli.html), which is a mix of `kubectl`, `kubeadm` and some CLI commands specific to Red Hat OpenShift.

The below image nicely finishes off this section, covering the product and its components and why you would potentially choose Red Hat OpenShift over a vanilla Kubernetes platform.
@@ -81,13 +81,13 @@ And with these options, there are two types of installation methods/deployment m

There is a third method, which is Agent-based, providing the flexibility of UPI, driven by the Assisted Installer (AI) tool.

Either method, IPI or UPI, is driven from the `openshift-install` installation program, which is a CLI tool provided for Linux and Mac operating systems only.

The installation program will generate the necessary components to build a cluster, such as the Ignition files for the bootstrap, master and worker machines. It will further monitor the installation for known targets that an installation must achieve for a successful deployment of a cluster, and provide error handling in the event of a failed cluster deployment by collecting the necessary troubleshooting logs.

To visualise bringing all these moving parts together, I have provided the below image from the Red Hat OpenShift documentation.

A cluster definition is created in a special file called `install-config.yaml`; this file contains the following information:

- Cluster name
- Base domain (FQDN for the network where the cluster will run)
@@ -96,9 +96,9 @@ A cluster definition is created in a special file called `install-config.yaml`

- Specific Infrastructure platform details (Login details, which networks and storage to use, for example)
- Workload Customizations, such as what instance types to use for your Control Plane (Master) and Compute Plane (Worker) nodes.

There are also additional files which may be stored alongside the root of the `install-config.yaml`, in a folder called `manifests`. These are additional files which can be configured to assist the bootstrapping of a cluster to integrate with your infrastructure, such as your Networking platform.

Once you have all of these files, running the `openshift-install` CLI tool will create the Ignition files for your bootstrap, control plane, and compute plane nodes. Returning to the earlier descriptions of RHCOS, these files contain the first-boot information to configure the Operating System and start the process of building a consistent Kubernetes cluster with minimal to no interaction.

![openshift-install - simplified workflow](https://raw.githubusercontent.com/MichaelCade/90DaysOfDevOps/main/2023/images/day57-1.jpg)
@@ -106,13 +106,12 @@ Once you have all of these files, by running the `openshift-install` CLI too

This is the default installation method, and preferred by Red Hat for their customers to initiate a cluster installation, as it provides a reference architectural deployment out of the box.

The `openshift-install` CLI tool can act as its own installation wizard, presenting you with a number of queries for the values it needs to deploy to your chosen platform. You can also customize the installation process to support more advanced scenarios, such as the number of machines deployed, instance type/size, or the CIDR range for the Kubernetes service network.

The main point here is that the installation software provisions the underlying infrastructure for the cluster.

By using an IPI installation method, the provisioned cluster then has the further ability to continue to manage all aspects of the cluster and provisioned infrastructure going forward from a lifecycle management point of view. For example, if you scale the number of compute plane (worker) nodes in your cluster, the OpenShift Container Platform can interact with the underlying platform (for example, AWS, VMware vSphere) to create the new virtual machines and bootstrap them to the cluster.

## User provisioned infrastructure (UPI)

With a UPI method, the OpenShift Container Platform will be installed to infrastructure that you have provided. The installation software will still be used to generate the assets needed to provision the cluster; however, you will manually build the nodes and provide the necessary ignition to bring the nodes online. You must also manage the infrastructure supporting cluster resources such as:
@@ -139,6 +138,7 @@ A temporary bootstrap machine is provisioned using IPI or UPI, which contains th

Once the control plane is initialised, the bootstrap machine is destroyed. If you are manually provisioning the platform (UPI), then you complete a number of the provisioning steps manually.

> Bootstrapping a cluster involves the following steps:
>
> 1. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. (Requires manual intervention if you provision the infrastructure)
> 2. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane.
> 3. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. (Requires manual intervention if you provision the infrastructure)
@@ -156,7 +156,7 @@ Once the control plane is initialised, the bootstrap machine is destroyed. If yo

We have covered the components that make up a Red Hat OpenShift Container Platform environment, why they are important to the environment, and what enterprise features they bring over a vanilla Kubernetes environment. We then dived into the methods available to deploy an OpenShift Cluster and the process that a cluster build undertakes.

In [Day 58](day58.md) we will cover the steps to install Red Hat OpenShift to a VMware vSphere environment.

# Resources
2023/day58.md
@@ -7,6 +7,7 @@ The platform for this example will be [VMware vSphere](https://www.vmware.com/uk

## Pre-requisites

We will need the following:

- Jump host to run the installation software from
- Access to the DNS server which supports the infrastructure platform you are deploying to
- A pull secret file/key from the Red Hat Cloud Console website
@@ -15,7 +16,7 @@ We will need the following:

### Configuring the Jump host Machine

For this example, I've used an Ubuntu Server Virtual Machine; you can use another distribution of Linux or Mac OS X for these steps. (Note: the `openshift-install` CLI tool only supports Linux and Mac OS X)

Download the OpenShift-Install tool and OpenShift-Client (OC) command line tool. (I’ve used version 4.12.6 in my install)

![Red Hat Cloud Console - Download OpenShift Install and Tools](https://raw.githubusercontent.com/MichaelCade/90DaysOfDevOps/main/2023/images/day58-1.jpg)

Extract the files and copy them to your /usr/local/bin directory:

```
tar -zxvf openshift-client-linux.tar.gz
tar -zxvf openshift-install-linux.tar.gz

sudo cp openshift-install /usr/local/bin/openshift-install
sudo cp oc /usr/local/bin/oc
sudo cp kubectl /usr/local/bin/kubectl
```

Have an available SSH key from your jump box, so that you can connect to your CoreOS VMs once they are deployed, for troubleshooting purposes. Generate one using `ssh-keygen` if needed.
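If you don't already have a key pair on the jump host, a minimal sketch of generating one looks like this (the file name `ocp_ed25519` and the comment are illustrative, not from the original walkthrough):

```shell
# Create ~/.ssh if it doesn't exist yet, then generate a passphrase-less
# ed25519 key pair. The file name "ocp_ed25519" is an arbitrary example.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
ssh-keygen -t ed25519 -N '' -f "$HOME/.ssh/ocp_ed25519" -C "openshift@jumphost"

# The public half is what you paste into the installer when it asks for an SSH key.
cat "$HOME/.ssh/ocp_ed25519.pub"
```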
Next, we need to download the VMware vCenter trusted root certificates and import them to your Jump Host.

```
curl -O https://{vCenter_FQDN}/certs/download.zip
```

Now unzip the file (you may need to install a software package for this: `sudo apt install unzip`), and import the certificates to the trusted store (Ubuntu uses the .crt files, hence importing the win folder).

```
unzip download.zip
sudo cp certs/win/* /usr/local/share/ca-certificates
sudo update-ca-certificates
```

You will need a user account to connect to vCenter with the correct permissions for the OpenShift-Install tool to deploy the cluster. If you do not want to use an existing account and permissions, you can use this [PowerCLI script](https://github.com/saintdle/PowerCLI/blob/master/Create_vCenter_OpenShift_Install_Role.ps1) to create the roles with the correct privileges, based on the Red Hat documentation.
### Configuring DNS Records

A mandatory pre-req is DNS records. You will need the two following records to be available on your OpenShift cluster network, in the same IP address space that your nodes will be deployed to. These records follow the format:

```
{clusterID}.{domain_name}
example: ocp412.veducate.local

*.apps.{clusterID}.{domain_name}
example: *.apps.ocp412.veducate.local
```

If your DNS is a Windows server, you can use this [script here](https://github.com/saintdle/OCP-4.3-vSphere-Static-IP/tree/master/DNS). I've included a quick screenshot of my DNS Server settings below for both records.

![](images/day58-2.png)
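For reference, on a BIND-style DNS server the equivalent zone-file entries would look roughly like the following. The host names and the 192.168.200.x addresses are illustrative, taken from the examples later in this walkthrough; substitute your own:

```
; Illustrative zone-file entries only - adjust names and addresses to your environment
api.ocp412.veducate.local.     IN  A  192.168.200.192
*.apps.ocp412.veducate.local.  IN  A  192.168.200.193
```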
@@ -64,69 +71,77 @@ If your DNS is a Windows server, you can use this [script here](https://github.c

### Minimum Resources to deploy a cluster

You need to be aware of the [minimum deployment options](https://docs.openshift.com/container-platform/4.12/installing/installing_vsphere/installing-vsphere.html#installation-minimum-resource-requirements_installing-vsphere) to successfully bring up a cluster.

```
1 Bootstrap
  This machine is created automatically and deleted after the cluster build.
3 Control Plane
2 Compute Plane
```

![](images/day58-3.png)
## Using the OpenShift-Install tool

Now that we have our pre-reqs in place, we can start to deploy our cluster. When using the `openshift-install` tool, you have three main command line options when creating a cluster:

> `openshift-install create cluster`
>
> - This will run you through a wizard to create the install-config.yaml file and then create the cluster automatically using Terraform, which is packaged as part of the installer software (meaning you don't need Terraform on your system as a pre-req).
> - If you run the below two commands listed, you can then still run this command to provision your cluster.
>
> `openshift-install create install-config`
>
> - This will run you through a wizard to create the install-config.yaml file, and leave it in the root directory, or the directory you specify with the --dir= argument.
> - It is supported for you to modify the install-config.yaml file before running the above `create cluster` command.
>
> `openshift-install create manifests`
>
> - This will create the manifests folder which controls the provisioning of the cluster. Most of the time this command is only used with UPI installations. However, some platform integrations support IPI installation, such as VMware's NSX Advanced Load Balancer, but they require you to create the manifests folder and add YAML files to it, which helps OpenShift integrate with the Load Balancer upon deployment.

There are other commands, such as `create ignition`, which would be used when you are performing the UPI installation method.

![](images/day58-4.png)
Now let's jump into creating our cluster in the easiest possible way, with the `openshift-install create cluster` command. Press enter, and this will take you into the wizard format. Below I've selected the SSH key I want to use and the Platform as vSphere.

![](images/day58-5.png)

Next, I enter the vCenter FQDN, the username, and password. The tool then connects to the vCenter and pulls the necessary datastores and networks I can deploy to. If you have missed the certificate step above, it will error out here.

After selecting the datastore and the network, I now need to input the addresses for:

> api.{cluster_name}.{base_domain}
> \*.apps.{cluster_name}.{base_domain}
However, I hit a bug ([GitHub PR](https://github.com/openshift/installer/pull/6783), [Red Hat Article](https://access.redhat.com/solutions/6994972)) in the installer, whereby the installer software is hardcoded to only accept addresses in the 10.0.0.0/16 range.

![](images/day58-6.png)

The current workaround for this is to run `openshift-install create install-config`, provide IP addresses in the 10.0.0.0/16 range, and then alter the `install-config.yaml` file manually before running `openshift-install create cluster`, which will read the available `install-config.yaml` file and create the cluster (rather than presenting you another wizard).

So, let's backtrack a bit, and do that. Running the `create install-config` argument provides the same wizard run-through as before.

In the wizard, I've provided IPs in the range from above, and set my base domain and cluster name as well. The final piece is to paste in my Pull Secret from the Red Hat Cloud console.

![](images/day58-7.png)
Now, if I run `ls` on my current directory, I'll see the `install-config.yaml` file. It is recommended to save this file now, before you run the `create cluster` command, as the file will be removed afterwards, since it contains plain-text passwords.

I've highlighted in the below image the lines we need to alter.

![](images/day58-8.png)
For the section:

```
machineNetwork:
- cidr: 10.0.0.0/16
```

This needs to be changed to the network subnet the nodes will run on. And for the platform section, you need to map the right IP addresses from your DNS records.
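For example, if your nodes will sit on a hypothetical 192.168.200.0/24 subnet (the range used for the VIPs in this walkthrough), the edited section would read:

```
machineNetwork:
- cidr: 192.168.200.0/24
```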
```
platform:
  vsphere:
    apiVIP: 192.168.200.192 <<<<<<< This is your api.{cluster_name}.{base_domain} DNS record
@@ -135,13 +150,13 @@ platform:
    datacenter: vEducate-DC
    defaultDatastore: Datastore01
    ingressVIP: 192.168.200.193 <<<<<<< This is your *.apps.{cluster_name}.{base_domain} DNS record
```
I've also included a further example of an `install-config.yaml` file. I want to highlight the "compute" and "controlPlane" sections, where I've specified resource configuration settings for my virtual machines. You cannot change these below the minimum specified in the documentation, otherwise your cluster will not build successfully.

You can read about further [supported customizations here](https://github.com/openshift/installer/blob/master/docs/user/customization.md).

```
apiVersion: v1
baseDomain: veducate.local
compute:
@@ -194,12 +209,13 @@ publish: External
pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"bxxxxxx==","email":"openshift@veducate.co.uk"},"registry.redhat.io":{"auth":"Nxxx=","email":"openshift@veducate.co.uk"}}}'
sshKey: |
  ssh-rsa AAAABxxxxxx openshift@veducate
```
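The example above is cut short by the diff, so as a hedged sketch only: resource settings under `compute` and `controlPlane` for the vSphere platform look roughly like the following, per the installer's customization docs. The values are illustrative and must not drop below the documented minimums:

```
compute:
- name: worker
  replicas: 2
  platform:
    vsphere:
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
controlPlane:
  name: master
  replicas: 3
  platform:
    vsphere:
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
```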
Now that we have our correctly configured `install-config.yaml` file, we can proceed with the installation of the cluster, which, after running the `openshift-install create cluster` command, is hands-off from this point forward. The system will output logging to the console for you, which you can modify using the `--log-level=` argument at the end of the command.

Below is the normal output without any modifiers. We now have a working Red Hat OpenShift Cluster, and can use the export command provided to access the cluster via the `oc` CLI tool, or you can use `kubectl`.

```
dean@dean [ ~/90days-ocp412 ] # ./openshift-install create cluster
INFO Consuming Install Config from target directory
INFO Creating infrastructure resources...
@@ -215,21 +231,22 @@ INFO To access the cluster as the system:admin user when using 'oc', run 'export
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.90days-ocp.simon.local
INFO Login to the console with user: "kubeadmin", and password: "ur6xT-gxmVW-WVUuD-Sd44J"
INFO Time elapsed: 35m16s
```

|
||||
|
||||
### Viewing the installation logs
|
||||
|
||||
If we now look within our directory where we ran the ```openshift-install``` installation from, you can see a number of new folders and files are created:
|
||||
If we now look within our directory where we ran the `openshift-install` installation from, you can see a number of new folders and files are created:
|
||||
|
||||
- auth Folder
|
||||
- Within this folder is your kubeconfig file, as mentioned in the above console output
|
||||
- tls Folder
|
||||
- this contains the certificates of the journal-gateway service on the nodes to collect logs and debug
|
||||
- Terraform files
|
||||
- There are various ```.tfvars``` and ```.tfstate``` files used by the terraform component which is part of ```openshift-install``` software, and well as the output Terraform state file.
|
||||
- There are various `.tfvars` and `.tfstate` files used by the terraform component which is part of `openshift-install` software, and well as the output Terraform state file.
|
||||
- Log Files
|
||||
- Finally the verbose output is located in the hidden file ```.openshift_install.log```, this contains all the details about your installation and the running of Terraform to create the various resources.
|
||||
- Finally the verbose output is located in the hidden file `.openshift_install.log`, this contains all the details about your installation and the running of Terraform to create the various resources.
|
||||
|
||||
Below is a screenshot showing the directory, folders and example of my logging output.
|
||||
|
||||
@@ -241,13 +258,13 @@ To communicate with your cluster, like a vanilla Kubernetes environment, you can

### Using the OpenShift Client (oc) and Kubectl

As you will have seen from the final output of the installation, you are provided a kubeconfig file in the `auth` folder, and the output gives you the necessary command to start consuming it straight away, as per the below example.

> INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/dean/90days-ocp412/auth/kubeconfig'
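Concretely, that looks like the following sketch. The directory name comes from this walkthrough's install; adjust the path to wherever you ran the installer from:

```shell
# Point oc/kubectl at the kubeconfig generated by the installer.
# The directory name "90days-ocp412" is from this walkthrough - use your own.
export KUBECONFIG="$HOME/90days-ocp412/auth/kubeconfig"

# From here on, both clients read that file, e.g.:
#   oc whoami
#   oc get nodes
```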
Once set as your environment variable, you can now interact with the cluster the same way you would with a vanilla Kubernetes cluster. When using the OpenShift Client (oc) tool, you'll find that all of your favourite `kubectl` commands still work; you just replace the first part of the command with `oc`. Below I've detailed a few examples:
```
kubectl get ns
oc get ns

@@ -257,34 +274,37 @@ oc get pods -A

kubectl get pods -n openshift-apiserver
oc get pods -n openshift-apiserver
```

![](images/day58-10.png)
![](images/day58-11.png)

I've created an image of the output from `oc -help` and `kubectl -help` and mapped the two commands together; you will see that the `oc` tool is far richer in terms of functionality.

![](images/day58-12.png)
You can also log in to the OpenShift cluster via the `oc login` command.

```
# Log in to the given server with the given credentials (will not prompt interactively)
oc login localhost:8443 --username=myuser --password=mypass
```
|
||||
A final footnote here, if you are not planning on using an image registry in your environment, it recommended to run this command to use the inbuilt registry as ephemeral whilst you do your testing:
|
||||
|
||||
````
|
||||
```
|
||||
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'
|
||||
````
|
||||
```
|
||||
|
||||
### Using the OpenShift Console UI
|
||||
|
||||
The final access point into the cluster, is via the UI, again the output of the installation software, gives you the full FQDN to access this console, if you look closely you'll see it uses the ingress record under *.apps.{cluster_name}.{base_domain}.
|
||||
The final access point into the cluster, is via the UI, again the output of the installation software, gives you the full FQDN to access this console, if you look closely you'll see it uses the ingress record under \*.apps.{cluster_name}.{base_domain}.
|
||||
|
||||
````
|
||||
```
|
||||
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.90days-ocp.simon.local
|
||||
INFO Login to the console with user: "kubeadmin", and password: "ur6xT-gxmVW-WVUuD-Sd44J"
|
||||
````
|
||||
```
|
||||
|
||||

|
||||
|
||||
Once logged in, you'll view the persona that you have access to (1). In my example, I'm using the kubeadmin account, so I see the administrative view first, and I can change this to the Developer view as well (see second screenshot).
|
||||
@ -311,9 +331,9 @@ If I take a step back I also can look at my project as a whole, see resource uti
# Summary

I think I'll stop here and wrap up. As you now know from [day 57](day57.md), there are a few deployment methods and numerous platforms to deploy to. This walkthrough covered the simplest deployment to one of the most popular platforms, VMware vSphere. For those of you who want to try out OpenShift, you have a few options: you can deploy a [single node OpenShift environment](https://cloud.redhat.com/blog/visual-guide-to-single-node-openshift-deploy) running on your local machine, you can run the [OpenShift sandbox](https://developers.redhat.com/developer-sandbox) via their website, or you can run [OKD](https://www.okd.io/), the open-source version, in your home lab. Or stick with a trial of the enterprise software like I have.

In [day 59](day59.md), we will cover application deployment in a little more detail and start to look at Security Context Constraints (SCC), the out-of-the-box security feature of OpenShift that builds on and goes further than the older PodSecurityPolicies from Kubernetes. SCC can take a little getting used to, and is a source of frustration for many when getting started with OpenShift.

# Resources
# Deploying a Sample Application on Red Hat OpenShift: Handling Security Context Constraints (SCC)

On [Day 58](day58.md) we finished looking around the developer and administrator interfaces of a newly deployed cluster.
In this submission (Day 59), we will walk through the process of deploying a sample MongoDB application to a newly deployed Red Hat OpenShift cluster. However, this deployment will fail due to the default security context constraints (SCC) in OpenShift. We will explain why the deployment fails, how to resolve the issue, and provide a brief overview of SCC in OpenShift with examples.

## Understanding Security Context Constraints (SCC)

Security context constraints in OpenShift are a security feature that allows administrators to control various aspects of the container runtime, such as user and group IDs, SELinux context, and the use of host resources. In short, SCCs determine which security settings are allowed or disallowed for containerized applications. By default, OpenShift comes with several predefined SCCs, such as `restricted`, `anyuid`, and `hostaccess`. These SCCs serve as templates for creating custom SCCs to meet specific security requirements.

> Warning: Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed.

For example, the restricted SCC (default for most deployments, or restricted-v2 for new installs of OCP 4.11 and later) does not allow containers to run as root or with privileged access, while the `anyuid` SCC permits containers to run with any user ID, including root. By creating custom SCCs and granting them to service accounts or users, administrators can ensure that applications adhere to the desired security policies without compromising functionality.

Security context constraints allow an administrator to control settings such as whether a pod can run privileged containers, the Linux capabilities a container can request, and the use of host directories, namespaces and ports.

SCCs consist of settings and strategies that control the security features a container can request. These fields fall into three categories:
- Controlled by a boolean
  - Fields of this type default to the most restrictive value. For example, `AllowPrivilegedContainer` is always set to `false` if unspecified.
- Controlled by an allowable set
  - Fields of this type are checked against the set to ensure their value is allowed.
- Controlled by a strategy
  - Fields of this type provide a mechanism to generate a value and a mechanism to ensure that a specified value falls into the set of allowable values.
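To make these three categories concrete, here is a minimal sketch of an SCC manifest (the field names are real SCC fields, but `example-scc` and this particular combination of values are hypothetical, for illustration only):

```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: example-scc              # hypothetical name for illustration
allowPrivilegedContainer: false  # boolean: defaults to the most restrictive value (false)
requiredDropCapabilities:        # allowable set: requested values are checked against this list
  - KILL
  - MKNOD
runAsUser:
  type: MustRunAsRange           # strategy: generates and validates the allowable user IDs
```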
CRI-O has the following [default list of capabilities](https://github.com/cri-o/cri-o/blob/main/docs/crio.conf.5.md#crioruntime-table) that are allowed for each container of a pod:

```
default_capabilities = [
    "CHOWN",
    "DAC_OVERRIDE",
    "FSETID",
    "FOWNER",
    "SETGID",
    "SETUID",
    "SETPCAP",
    "NET_BIND_SERVICE",
    "KILL",
]
```
You can learn more about Linux capabilities [here](https://linuxera.org/container-security-capabilities-seccomp/) and [here](https://man7.org/linux/man-pages/man7/capabilities.7.html). The containers use the capabilities from this default list, but pod manifest authors (the person writing the application YAML for Kubernetes) can alter the list by requesting additional capabilities or removing some of the default behaviors. To control the capabilities allowed or denied for pods running in the cluster, use the `allowedCapabilities`, `defaultAddCapabilities`, and `requiredDropCapabilities` parameters in your SCC.
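As a sketch of what that looks like on the SCC side (a fragment rather than a complete SCC; the specific capability choices are illustrative):

```yaml
# Fragment of a hypothetical SCC controlling capability requests from pods
allowedCapabilities:
  - NET_BIND_SERVICE      # pods may additionally request this capability
defaultAddCapabilities: null  # no extra capabilities are added to every pod
requiredDropCapabilities:
  - KILL                  # every pod admitted by this SCC must drop this capability
```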
#### Quick Snippet: configuring a pod with capabilities

You can [specify additional capabilities for your pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container) as per the below example.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-4
spec:
  containers:
  - name: sec-ctx-4
    image: gcr.io/google-samples/hello-app:2.0
    securityContext:
      capabilities:
        add: ["NET_ADMIN", "SYS_TIME"]
```
Let's look at some of the default contexts in further detail.

1. Restricted SCC:

The restricted-v2 SCC:

- Ensures that no child process of a container can gain more privileges than its parent (AllowPrivilegeEscalation=False)

You can get this SCC configuration by running `oc get scc restricted-v2 -o yaml`:

```yaml
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
# ... (output truncated)
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    kubernetes.io/description: restricted denies access to all host features and requires
      pods to be run with a UID, and SELinux context that are allocated to the namespace.
    release.openshift.io/create-only: "true"
  creationTimestamp: "2023-03-16T09:34:36Z"
# ... (output truncated)
volumes:
- persistentVolumeClaim
- projected
- secret
```
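Incidentally, once a pod has been admitted, OpenShift records which SCC validated it in a pod annotation, so you can confirm the result of SCC admission by running `oc get pod <pod-name> -o yaml` and looking for something like the fragment below:

```yaml
metadata:
  annotations:
    openshift.io/scc: restricted-v2   # the SCC that admitted this pod
```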
2. Privileged SCC:

The privileged SCC allows:

- Pods to request any capabilities

You can get this SCC configuration by running `oc get scc privileged -o yaml`:

```yaml
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- "*"
allowedUnsafeSysctls:
- "*"
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: RunAsAny
# ... (output truncated)
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    kubernetes.io/description: 'privileged allows access to all privileged and host
      features and the ability to run as any user, any group, any fsGroup, and with
      any SELinux context. WARNING: this is the most relaxed SCC and should be used
      only for cluster administration. Grant with caution.'
    release.openshift.io/create-only: "true"
  creationTimestamp: "2023-03-16T09:34:35Z"
  generation: 1
# ... (output truncated)
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
seccompProfiles:
- "*"
supplementalGroups:
  type: RunAsAny
users:
- system:admin
- system:serviceaccount:openshift-infra:build-controller
volumes:
- "*"
```
Now let's look at some specific items from the above YAML:

- **allowedCapabilities:** - A list of capabilities that a pod can request. An empty list means that none of the capabilities can be requested, while the special symbol \* allows any capabilities.
- **defaultAddCapabilities: []** - A list of additional capabilities that are added to any pod.
- **fsGroup:** - The FSGroup strategy, which dictates the allowable values for the security context.
- **groups:** - The groups that can access this SCC.
- **seLinuxContext:** - The seLinuxContext strategy type, which dictates the allowable values for the security context.
- **supplementalGroups:** - The supplementalGroups strategy, which dictates the allowable supplemental groups for the security context.
- **users:** - The users who can access this SCC.
- **volumes:** - The allowable volume types for the security context. In the example, \* allows the use of all volume types.

The users and groups fields on the SCC control which users can access the SCC. By default, cluster administrators, nodes, and the build controller are granted access to the privileged SCC. All authenticated users are granted access to the restricted-v2 SCC.
I'm going to deploy some of the basic components of my [trusty Pac-Man application for Kubernetes](https://github.com/saintdle/pacman-tanzu): the MongoDB deployment, PVC and Secret.

First, I need to create the namespace to place the components in: `oc create ns pacman`.

Now I apply the below YAML file with `oc apply -f mongo-test.yaml`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
# ... (the Deployment spec, PVC and the start of the Secret are truncated here)
data:
  database-name: cGFjbWFu
  database-password: cGlua3k=
  database-user: Ymxpbmt5
```
Once applied, I see the following output:

> Warning: would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (containers "volume-permissions", "mongo" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "volume-permissions", "mongo" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "volume-permissions", "mongo" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "volume-permissions" must not set runAsUser=0), seccompProfile (pod or containers "volume-permissions", "mongo" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
>
> secret/mongodb-users-secret created

If I now inspect the deployment and replicaset in the `pacman` namespace, we'll see that it's stuck; I have no pods running.
```
# oc get all -n pacman

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mongo   0/1     0            0           3m9s

NAME                              DESIRED   CURRENT   READY   AGE
replicaset.apps/mongo-56cc764fb   1         0         0       3m9s
```
## Why the Deployment Fails

The provided Kubernetes application includes an initContainer with the following security context:

```yaml
securityContext:
  runAsUser: 0
```
This configuration means that the initContainer will attempt to run as the root user (UID 0). However, OpenShift's default SCCs restrict the use of the root user for security reasons. As a result, the deployment fails because it violates the default security context constraints. The same is true of the other configuration settings mentioned in the above output. Remember, in OCP 4.11 and later (new installs), the default SCC is the restricted-v2 policy.
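For comparison, a pod spec that would satisfy restricted-v2 sets exactly the fields called out in the warning message above. A minimal sketch (the container name and image are illustrative, not the actual Pac-Man manifest):

```yaml
spec:
  securityContext:
    runAsNonRoot: true              # pod must not run as root
    seccompProfile:
      type: RuntimeDefault          # required seccomp profile
  containers:
    - name: mongo                   # illustrative container
      image: mongo:6.0              # illustrative image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]             # drop all capabilities by default
```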
To resolve this issue, we need to modify the deployment configuration to comply with the default SCCs, or grant the workload an SCC that permits its current configuration. Here we take the second route:

1. Create a new custom SCC, and save the below YAML in a file called mongo-custom-scc.yaml:

```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: mongo-custom-scc
# ... (remaining SCC settings truncated)
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
```
2. Apply the custom SCC to your OpenShift cluster:

```sh
oc apply -f mongo-custom-scc.yaml
```
3. Grant the mongo-custom-scc SCC to the service account that the MongoDB deployment is using:

```sh
oc adm policy add-scc-to-user mongo-custom-scc system:serviceaccount:<namespace>:default

# In my environment, I run:
oc adm policy add-scc-to-user mongo-custom-scc system:serviceaccount:pacman:default
```

Replace `<namespace>` with the namespace where your MongoDB deployment is located.
4. Redeploy the MongoDB application.

```sh
# oc scale deploy mongo -n pacman --replicas=0

deployment.apps/mongo scaled

# oc scale deploy mongo -n pacman --replicas=1

deployment.apps/mongo scaled
```
In the real world, the first port of call should always be to ensure your containers and applications run with the least privileges necessary, and therefore don't need to run as root.

If they do need some sort of privilege, then having tight RBAC and SCC controls in place is key.

## Summary

In this post, we discussed how the default security context constraints in OpenShift can prevent deployments from running as expected. We provided a solution to the specific issue of running an initContainer as root for a MongoDB application. Understanding and managing SCCs in OpenShift is essential for maintaining secure and compliant applications within your cluster.
On [Day 60](day60.md), we will look at OpenShift project creation, configuration and governance, for example consuming SCC at the project level, and other features of Red Hat OpenShift.

## Resources