Correct invalid day.md navigation

hnnynh 2024-03-17 21:13:18 +09:00
parent 748e09750c
commit 50a8950f1a
4 changed files with 276 additions and 253 deletions


Of course for this section, we are dedicating ourselves to Red Hat OpenShift.
I decided to cheat a bit for this opening, and consult ChatGPT, the AI Service of the moment, to provide an opinion on why Red Hat OpenShift.
> Why choose an enterprise Kubernetes platform such as Red Hat OpenShift?
>
> 1. Automation and Enterprise-grade Security: Red Hat OpenShift provides an automated platform to help you deploy, manage, and scale your applications quickly and easily while ensuring that the underlying infrastructure is secure.
>
> 2. Open Source: Red Hat OpenShift is built on top of open source technologies such as Kubernetes, Docker, and Red Hat Enterprise Linux. This ensures that your applications are always up-to-date with the latest technologies.
>
> 3. Continuous Integration and Delivery: Red Hat OpenShift provides an integrated set of tools for continuous integration and delivery that make it easy to deploy and manage your applications in a secure and reliable manner.
>
> 4. Scalability and High Availability: Red Hat OpenShift provides a highly available and scalable platform that can handle large amounts of traffic and data without any downtime.
>
> 5. Cost Savings: Red Hat OpenShift provides a cost-effective solution for running your applications in the cloud. You can save money on infrastructure and operations costs by leveraging OpenShift's automated platform.
# What does Red Hat OpenShift bring to the party?
## Getting access to a trial
Getting started with OpenShift is simple. They give you the ability to trial three options:

- Developer Sandbox - A hosted instance of OpenShift for you to consume straight away for 30 days
- Managed Service - A fully managed Red Hat OpenShift Dedicated instance for you to consume; you will need to provide the AWS or GCP cloud account to deploy this into. 60-day trial.
- Self-Managed - Deploy OpenShift yourself to any of the platforms named above. 60-day trial.

You'll need to sign up for a Red Hat account to access the trial and get the software details to deploy.
- [Try Red Hat OpenShift](https://www.redhat.com/en/technologies/cloud-computing/openshift/try-it)
# Next Steps - Understanding the OpenShift Architecture + Spinning up an instance!
In [day 57](day57.md) we will dive into the Architecture and components of OpenShift, moving on to spinning up our own OpenShift Environment in [day 58](day58.md).
# Resources
- [OKD](https://www.okd.io/)
- [Official Red Hat OpenShift product page](https://www.redhat.com/en/technologies/cloud-computing/openshift)
- [Red Hat Hybrid Cloud Learning Hub](https://cloud.redhat.com/learn)


The most important note about RHCOS is that it is the only supported operating system for the Red Hat OpenShift Control Plane (a.k.a Master) nodes. For Compute Plane (a.k.a Worker) nodes, you have the choice to deploy either RHCOS or [Red Hat Enterprise Linux (RHEL)](https://www.redhat.com/en/technologies/linux-platforms/enterprise-linux) as the operating system. You can read about the RHCOS [key features here](https://docs.openshift.com/container-platform/4.12/architecture/architecture-rhcos.html#rhcos-key-features_architecture-rhcos).
RHCOS is designed as a minimal user-interaction platform from a configuration standpoint. It is not [encouraged to directly configure](https://docs.openshift.com/container-platform/4.12/architecture/architecture-rhcos.html#rhcos-configured_architecture-rhcos) a RHCOS instance, as its management comes from the Red Hat OpenShift platform itself, meaning any configurations would be controlled via Kubernetes Objects.
Bringing a RHCOS machine online is enabled through [Ignition](https://docs.openshift.com/container-platform/4.12/architecture/architecture-rhcos.html#rhcos-about-ignition_architecture-rhcos), a utility designed to manipulate the disks during initial configuration of the machine. This runs at first boot, and typically will be used to configure disk partitions for the machine, and provide the necessary details for the machine to connect to the bootstrapping machine in the environment, from which it will receive its configuration to become part of the Red Hat OpenShift Cluster.
At a basic level, Red Hat OpenShift is built on top of the open-source platform Kubernetes, meaning that all components you have learned about from this base platform are present and available in the Red Hat OpenShift platform.

If you haven't visited the [#90DaysOfDevOps - Kubernetes section](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/2022.md#kubernetes), then I urge you to do so before continuing with this section on Red Hat OpenShift.
![Red Hat OpenShift - Product Architect](images/Day57%20-%20Red%20Hat%20OpenShift%20Architecture/Red%20Hat%20OpenShift%20-%20Product%20Architecture.png)
On top of the Kubernetes platform, Red Hat then delivers its enterprise sauce:
- Integrating Red Hat Technologies from Red Hat Enterprise Linux
- Open Source development mode. This means source code is available in public software repositories.
Red Hat believes that although Kubernetes is a great platform for managing your applications, it doesn't do a great job of platform-level requirements management (think supporting services to make your apps work) or deployment process handling. Therefore they layer additional components on top to give you a fully enterprise-ready Kubernetes platform.

- Custom operating system based on Red Hat Enterprise Linux (RHCOS) (see above).
- Simplified installation and lifecycle management at a cluster platform level (see below).
- Monitoring
- Routing
And finally to round off, you can interact with a Red Hat OpenShift Cluster, either via a "Comprehensive" web console, or the custom [OpenShift CLI tool `oc`](https://docs.openshift.com/container-platform/4.12/cli_reference/openshift_cli/getting-started-cli.html), which is a mix of `kubectl`, `kubeadm` and some specific CLI for Red Hat OpenShift.
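As a quick illustration, here is a hypothetical `oc` session (the server URL, username and project name are made-up examples) showing how familiar `kubectl` verbs carry straight over, plus a couple of OpenShift-specific additions:

```
# Log in to the cluster (OpenShift-specific; no direct kubectl equivalent)
oc login https://api.cluster.example.com:6443 --username=developer

# Familiar kubectl verbs work unchanged
oc get nodes
oc get pods -A

# OpenShift addition: projects are namespaces with extra metadata
oc new-project my-app
```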
The below image nicely finishes off this section covering the product and its components and why you would potentially choose Red Hat OpenShift over a vanilla Kubernetes platform.
And with these options, there are two types of installation methods/deployment models:
- Installer provisioned infrastructure (IPI)
- User provisioned infrastructure (UPI)
There is a third method, which is Agent-based, providing the flexibility of UPI, driven by the Assisted Installer (AI) tool.
Either method, IPI or UPI, is driven from the `openshift-install` installation program, which is a CLI tool provided for Linux and macOS operating systems only.
The installation program will generate the necessary components to build a cluster, such as the Ignition files for the bootstrap, master and worker machines. It will further monitor the installation for known targets that an installation must achieve for a successful deployment of a cluster, and provide error handling in the event of a failed cluster deployment by collecting the necessary troubleshooting logs.
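For instance, if a build fails, the installer can collect those troubleshooting logs for you; a hedged sketch, assuming an asset directory named `ocp-install` (the directory name is my own example):

```
# Gather bootstrap-era logs from a failed installation
./openshift-install gather bootstrap --dir=ocp-install

# Raise the verbosity of any command when you need more detail
./openshift-install create cluster --dir=ocp-install --log-level=debug
```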
To visualise bringing all these moving parts together, I have provided the below image from the Red Hat OpenShift documentation.
A cluster definition is created in a special file called `install-config.yaml`; this file contains the following information:
- Cluster name
- Base domain (FQDN for the network where the cluster will run)
- Specific Infrastructure platform details (login details, which networks and storage to use, for example)
- Workload customizations, such as what instance types to use for your Control Plane (Master) and Compute Plane (Worker) nodes.
There are also additional files which may be stored alongside the root of the `install-config.yaml` in a folder called `manifests`; these are additional files which can be configured to assist the bootstrapping of a cluster to integrate with your infrastructure, such as your networking platform.
Once you have all of these files, running the `openshift-install` CLI tool will create the Ignition files for your bootstrap, control plane, and compute plane nodes. Returning to the earlier descriptions of RHCOS, these files contain the first-boot information to configure the operating system and start the process of building a consistent Kubernetes cluster with minimal to no interaction.
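A minimal sketch of that flow, assuming an example asset directory called `ocp-install`:

```
# Generate install-config.yaml via the interactive wizard
./openshift-install create install-config --dir=ocp-install

# Turn that configuration into per-role Ignition files (UPI flow)
./openshift-install create ignition-configs --dir=ocp-install

# The resulting first-boot files for each machine role
ls ocp-install/*.ign
# bootstrap.ign  master.ign  worker.ign
```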
![OpenShift Container Platform installation targets and dependencies](images/Day57%20-%20Red%20Hat%20OpenShift%20Architecture/OpenShift%20Container%20Platform%20installation%20targets%20and%20dependencies.png)
## Installer provisioned infrastructure (IPI)
This is the default installation method, and preferred by Red Hat for their customers to initiate a cluster installation, as it provides a reference architectural deployment out of the box.
The `openshift-install` CLI tool can act as its own installation wizard, presenting you with a number of queries for the values it needs to deploy to your chosen platform. You can also customize the installation process to support more advanced scenarios, such as the number of machines deployed, instance type/size, and the CIDR range for the Kubernetes service network.
The main point here is that the installation software provisions the underlying infrastructure for the cluster.
By using an IPI installation method, the provisioned cluster then has the further ability to continue to manage all aspects of the cluster and provisioned infrastructure going forward from a lifecycle management point of view. For example, if you scale the number of compute plane (worker) nodes in your cluster, the OpenShift Container Platform can interact with the underlying platform (for example, AWS, VMware vSphere) to create the new virtual machines and bootstrap them to the cluster.
## User provisioned infrastructure (UPI)
With a UPI method, the OpenShift Container Platform will be installed to infrastructure that you have provided. The installation software will still be used to generate the assets needed to provision the cluster; however, you will manually build the nodes and provide the necessary Ignition to bring the nodes online. You must also manage the infrastructure supporting cluster resources such as:
## Assisted Installer
As mentioned earlier, the Assisted Installer is a kind of hybrid of the UPI method, but offers hosting of the installation artifacts and removes the need for a bootstrap machine; essentially you provision/install your nodes from a live boot CD, which has the necessary configuration to bring up your node and pull down the rest of the hosted files from a known location.
You can find out more from the Red Hat blog post "How to use the OpenShift Assisted Installer", or the [official documentation](https://docs.openshift.com/container-platform/4.12/installing/installing_on_prem_assisted/installing-on-prem-assisted.html).
A temporary bootstrap machine is provisioned using IPI or UPI, which hosts the resources required to boot the control plane machines.
Once the control plane is initialised, the bootstrap machine is destroyed. If you are manually provisioning the platform (UPI), then you complete a number of the provisioning steps manually.
> Bootstrapping a cluster involves the following steps:
>
> 1. The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. (Requires manual intervention if you provision the infrastructure)
> 2. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane.
> 3. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. (Requires manual intervention if you provision the infrastructure)
> 4. The temporary control plane schedules the production control plane to the production control plane machines.
> 5. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes.
> 6. The temporary control plane shuts down and passes control to the production control plane.
> 7. The bootstrap machine injects OpenShift Container Platform components into the production control plane.
> 8. The installation program shuts down the bootstrap machine. (Requires manual intervention if you provision the infrastructure)
> 9. The control plane sets up the compute nodes.
> 10. The control plane installs additional services in the form of a set of Operators.
>
> The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operation, including the creation of compute machines in supported environments.
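If you are provisioning the infrastructure yourself, the installer can watch these stages for you rather than you eyeballing node consoles; a sketch, again assuming an example asset directory:

```
# Blocks until steps 1-8 above have completed
./openshift-install wait-for bootstrap-complete --dir=ocp-install

# Once the bootstrap machine is removed, wait for the full cluster
./openshift-install wait-for install-complete --dir=ocp-install
```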
# Summary
We have covered the components that make up a Red Hat OpenShift Container Platform environment, why they are important to the environment, and what enterprise features they bring over a vanilla Kubernetes environment. We then dived into the methods available to deploy an OpenShift Cluster and the process that a cluster build undertakes.
In [Day 58](day58.md) we will cover the steps to install Red Hat OpenShift to a VMware vSphere environment.
# Resources


The platform for this example will be VMware vSphere.
## Pre-requisites
We will need the following:

- Jump host to run the installation software from
- Access to the DNS server which supports the infrastructure platform you are deploying to
- A pull secret file/key from the Red Hat Cloud Console website
  - You can get one of these by just signing up for an account; any cluster created using this key will get a 60-day trial activation
- An SSH key used for access to the deployed nodes
### Configuring the Jump host Machine
For this example, I've used an Ubuntu Server virtual machine; you can use another distribution of Linux or macOS for these steps. (Note: the `openshift-install` CLI tool only supports Linux and macOS.)
Download the OpenShift-Install tool and OpenShift-Client (oc) command line tool. (I've used version 4.12.6 in my install.)
![OpenShift Clients Download](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift%20Clients%20Download.jpg)
Extract the files and copy them to your /usr/local/bin directory:

```
tar -zxvf openshift-client-linux.tar.gz
tar -zxvf openshift-install-linux.tar.gz

sudo cp openshift-install /usr/local/bin/openshift-install
sudo cp oc /usr/local/bin/oc
sudo cp kubectl /usr/local/bin/kubectl
```

Have an available SSH key from your jump box, so that you can connect to your CoreOS VMs once they are deployed for troubleshooting purposes. Generate one using `ssh-keygen` if needed.
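A minimal sketch of generating a dedicated key pair (the file path is just an example); RHCOS nodes expect you to connect as the `core` user with the matching private key:

```
# Create a key pair with no passphrase for lab use
ssh-keygen -t ed25519 -N '' -f ~/.ssh/ocp412

# The public half is what you paste into the installer later
cat ~/.ssh/ocp412.pub

# After deployment, reach a node for troubleshooting
ssh -i ~/.ssh/ocp412 core@<node-ip>
```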
Next, we need to download the VMware vCenter trusted root certificates and import them to your Jump Host.

```
curl -O https://{vCenter_FQDN}/certs/download.zip
```
Now unzip the file (you may need to install a software package for this: `sudo apt install unzip`), and import the certificates to the trusted store (Ubuntu uses the .crt files, hence importing the win folder).

```
unzip download.zip
cp certs/win/* /usr/local/share/ca-certificates
update-ca-certificates
```
You will need a user account to connect to vCenter with the correct permissions for the OpenShift-Install to deploy the cluster. If you do not want to use an existing account and permissions, you can use this [PowerCLI script](https://github.com/saintdle/PowerCLI/blob/master/Create_vCenter_OpenShift_Install_Role.ps1) to create the roles with the correct privileges based on the Red Hat documentation.
### Configuring DNS Records
A mandatory pre-req is DNS records. You will need the following two records to be available on your OpenShift cluster network, in the same IP address space that your nodes will be deployed to. These records will follow the format:
```
{clusterID}.{domain_name}
example: ocp412.veducate.local

*.apps.{clusterID}.{domain_name}
example: *.apps.ocp412.veducate.local
```
If your DNS is a Windows server, you can use this [script here](https://github.com/saintdle/OCP-4.3-vSphere-Static-IP/tree/master/DNS). I've included a quick screenshot of my DNS Server settings below for both records.
![OpenShift - Example DNS Records](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift%20-%20Example%20DNS%20records.jpg)
### Minimum Resources to deploy a cluster
You need to be aware of the [minimum deployment options](https://docs.openshift.com/container-platform/4.12/installing/installing_vsphere/installing-vsphere.html#installation-minimum-resource-requirements_installing-vsphere) to successfully bring up a cluster.
```
1 Bootstrap
This machine is created automatically and deleted after the cluster build.
3 Control Plane
2 Compute Plane
```
![OpenShift - Minimum resource requirements](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift%20-%20Minimum%20resource%20requirements.jpg)
## Using the OpenShift-Install tool
Now that we have our pre-reqs in place, we can start to deploy our cluster. When using the `openshift-install` tool, you have three main command line options when creating a cluster:
> `openshift-install create cluster`
>
> - This will run you through a wizard to create the install-config.yaml file and then create the cluster automatically using terraform, which is packaged as part of the installer software (meaning you don't need terraform on your system as a pre-req).
> - If you run the below two commands listed, you can then still run this command to provision your cluster.
>
> `openshift-install create install-config`
>
> - This will run you through a wizard to create the install-config.yaml file, and leave it in the root directory, or the directory you specify with the --dir= argument.
> - It is supported for you to modify the install-config.yaml file before running the above `create cluster` command.
>
> `openshift-install create manifests`
>
> - This will create the manifests folder which controls the provisioning of the cluster. Most of the time this command is only used with UPI installations. However, some platform integrations support IPI installation, such as VMware's NSX Advanced Load Balancer, but they require you to create the manifests folder and add YAML files to it, which helps OpenShift integrate with the Load Balancer upon deployment.
There are other commands, such as `create ignition-configs`, which would be used when you are performing the UPI installation method.
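For completeness, a hedged sketch of that UPI path (the `ocp412` directory name is illustrative); it consumes an existing `install-config.yaml` in the directory:

```
./openshift-install create ignition-configs --dir=ocp412
ls ocp412/
# auth/  bootstrap.ign  master.ign  metadata.json  worker.ign
```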
![OpenShift-install create help](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-Install%20create%20help.jpg)
Now let's jump into creating our cluster in the easiest possible way, with the `openshift-install create cluster` command. Press enter, and this will take you into the wizard format. Below I've selected the SSH key I want to use and the platform as vSphere.
![OpenShift-Install Create cluster](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-Install%20create%20cluster.jpg)
Next I enter the vCenter FQDN, the username, and password. The tool then connects to the vCenter and pulls the necessary datastores and networks I can deploy to. If you have missed the certificate step above, it will error out here.
After selecting the datastore and the network, I now need to input the addresses for:
> api.{cluster_name}.{base_domain}
> \*.apps.{cluster_name}.{base_domain}
However, I hit a bug ([GitHub PR](https://github.com/openshift/installer/pull/6783), [Red Hat Article](https://access.redhat.com/solutions/6994972)) in the installer, whereby the software installer is hardcoded to only accept addresses in the 10.0.0.0/16 range.
![OpenShift-Install create cluster - Sorry, your reply was invalid: IP expected to be in one of the machine networks: 10.0.0.0/16](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-Install%20create%20cluster%20-%20Sorry%2C%20your%20reply%20was%20invalid-%20IP%20expected%20to%20be%20in%20one%20of%20the%20machine%20networks-%2010.0.0.0-16.jpg)
The current workaround for this is to run `openshift-install create install-config`, providing IP addresses in the 10.0.0.0/16 range, and then alter the `install-config.yaml` file manually before running `openshift-install create cluster`, which will read the available `install-config.yaml` file and create the cluster (rather than presenting you another wizard).
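End to end, the workaround looks something like this sketch:

```
# 1. Answer the wizard using placeholder IPs in 10.0.0.0/16
./openshift-install create install-config

# 2. Edit machineNetwork and the two VIPs to your real addressing
vi install-config.yaml

# 3. Re-run create cluster; it consumes the existing file instead of prompting
./openshift-install create cluster
```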
So, let's backtrack a bit and do that. Running the `create install-config` argument provides the same wizard run-through as before.
In the wizard, I've provided IPs in the range from above, and set my base domain and cluster name as well. The final piece is to paste in my Pull Secret from the Red Hat Cloud console.
![OpenShift-install create install-config](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-install%20create%20install-config.jpg)
Now if I run `ls` on my current directory I'll see the `install-config.yaml` file. It is recommended to save this file now, before you run the `create cluster` command, as the file will be removed after that, since it contains plain-text passwords.
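A simple way to do that (the backup path is just an example):

```
# The installer consumes and deletes install-config.yaml, so stash a copy first
cp install-config.yaml ~/backups/install-config.yaml.bak
```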
I've highlighted in the below image the lines we need to alter.
![OpenShift-install install-config.yaml file](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-install%20-%20install-config.yaml%20file.jpg)
For the section:

```
machineNetwork:
- cidr: 10.0.0.0/16
```

This needs to be changed to the network subnet the nodes will run on. And for the platform section, you need to map the right IP addresses from your DNS records.

```
platform:
  vsphere:
    apiVIP: 192.168.200.192 <<<<<<< This is your api.{cluster_name}.{base_domain} DNS record
    datacenter: vEducate-DC
    defaultDatastore: Datastore01
    ingressVIP: 192.168.200.193 <<<<<<< This is your *.apps.{cluster_name}.{base_domain} DNS record
```
I've also included a further example of an `install-config.yaml` file. I want to highlight the "compute" and "controlPlane" sections, where I've specified resource configuration settings for my virtual machines. You cannot set these below the minimum specified in the documentation, otherwise your cluster will not build successfully.
You can read about further [supported customizations here](https://github.com/openshift/installer/blob/master/docs/user/customization.md).
```
apiVersion: v1
baseDomain: veducate.local
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 1
  platform:
    vsphere:
      cpus: 8
      coresPerSocket: 4
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
  platform:
    vsphere:
      cpus: 8
      coresPerSocket: 4
      memoryMB: 16384
publish: External
pullSecret: '{"auths":{"cloud.openshift.com":{"auth":"bxxxxxx==","email":"openshift@veducate.co.uk"},"registry.redhat.io":{"auth":"Nxxx=","email":"openshift@veducate.co.uk"}}}'
sshKey: |
  ssh-rsa AAAABxxxxxx openshift@veducate
```
Now that we have our correctly configured `install-config.yaml` file, we can proceed with the installation of the cluster, which, after running the `openshift-install create cluster` command, is hands-off from this point forward. The system will output logging to the console for you, which you can modify using the `--log-level=` argument at the end of the command.

Below is the normal output without any modifiers. We now have a working Red Hat OpenShift Cluster, and can use the export command provided to access the cluster via the `oc` CLI tool, or you can use `kubectl`.

```
dean@dean [ ~/90days-ocp412 ] # ./openshift-install create cluster
INFO Consuming Install Config from target directory
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s (until 9:52AM) for the Kubernetes API at https://api.90days-ocp.simon.local:6443...
INFO API v1.25.4+18eadca up
INFO Waiting up to 30m0s (until 10:04AM) for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 40m0s (until 10:30AM) for the cluster at https://api.90days-ocp.simon.local:6443 to initialize...
INFO Checking to see if there is a route at openshift-console/console...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/dean/90days-ocp412/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.90days-ocp.simon.local
INFO Login to the console with user: "kubeadmin", and password: "ur6xT-gxmVW-WVUuD-Sd44J"
INFO Time elapsed: 35m16s
```
![OpenShift-Install Create Cluster - output](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-Install%20create%20cluster%20-%20output.jpg)
### Viewing the installation logs
If we now look within our directory where we ran the `openshift-install` installation from, you can see a number of new folders and files are created:
- auth Folder
  - Within this folder is your kubeconfig file, as mentioned in the above console output
- tls Folder
  - This contains the certificates of the journal-gateway service on the nodes, used to collect logs and debug
- Terraform files
  - There are various `.tfvars` and `.tfstate` files used by the terraform component which is part of the `openshift-install` software, as well as the output Terraform state file.
- Log Files
  - Finally, the verbose output is located in the hidden file `.openshift_install.log`; this contains all the details about your installation and the running of Terraform to create the various resources.
Below is a screenshot showing the directory, folders and example of my logging output.
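If you want to follow an install as it happens rather than after the fact, tailing that hidden log from the asset directory works well; a small sketch:

```
# Stream the verbose installer log
tail -f .openshift_install.log

# Hunt for problems after a failed run
grep -i error .openshift_install.log
```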
## Connecting to your cluster
To communicate with your cluster, like a vanilla Kubernetes environment, you can interact via the CLI tooling or directly with the API. However, with Red Hat OpenShift you also get a web console out of the box; this web console is designed for both personas: the platform administrator, and the developer who is deploying their applications. There is actually a drop-down to change between these persona views as well (if you have the appropriate permissions to see both interfaces).
### Using the OpenShift Client (oc) and kubectl
As you will have seen from the final output of the installation, you will be provided a kubeconfig file in the `auth` folder, and the output provides you the necessary command to start consuming that straight away, as per the below example.
> INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/dean/90days-ocp412/auth/kubeconfig'
Once set as your environment variable, you can now interact with the cluster the same way you would with a vanilla Kubernetes cluster. When using the OpenShift Client (oc) tool, you'll find that all of your favourite `kubectl` commands still work; you just replace the first part of the command with `oc`. Below I've detailed a few examples:
```
kubectl get ns
oc get ns

kubectl get pods -A
oc get pods -A

kubectl get pods -n openshift-apiserver
oc get pods -n openshift-apiserver
```
![kubectl get pods -n openshift-apiserver - oc get pods -n openshift-apiserver ](images/Day58%20-%20OpenShift%20Cluster%20Install/kubectl%20get%20pods%20-n%20openshift-apiserver%20-%20oc%20get%20pods%20-n%20openshift-apiserver%20.jpg)
![kubectl get pods -A - oc get pods -A](images/Day58%20-%20OpenShift%20Cluster%20Install/kubectl%20get%20pods%20-A%20-%20oc%20get%20pods%20-A.jpg)
![kubectl get ns - oc get ns](images/Day58%20-%20OpenShift%20Cluster%20Install/kubectl%20get%20ns%20-%20oc%20get%20ns.jpg)
I've created an image of the output from `oc -help` and `kubectl -help` and mapped the two commands together; you will see that the `oc` tool is far richer in terms of functionality.
![oc -help compared to kubectl -help](images/Day58%20-%20OpenShift%20Cluster%20Install/oc%20-help%20compared%20to%20kubectl%20-help.jpg)
You can also log in to the OpenShift cluster via the `oc login` command.
```
# Log in to the given server with the given credentials (will not prompt interactively)
oc login localhost:8443 --username=myuser --password=mypass
```
A final footnote here: if you are not planning on using an image registry in your environment, it is recommended to run this command to use the inbuilt registry as ephemeral storage whilst you do your testing:
```
oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'
```
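To confirm the patch took effect, you can check the registry operator and its pods (these namespace and operator names are the stock OpenShift ones):

```
# The image-registry cluster operator should report Available
oc get clusteroperator image-registry

# The registry pod should roll out in its namespace
oc get pods -n openshift-image-registry
```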
### Using the OpenShift Console UI
The final access point into the cluster is via the UI; again, the output of the installation software gives you the full FQDN to access this console. If you look closely, you'll see it uses the ingress record under \*.apps.{cluster_name}.{base_domain}.
```
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.90days-ocp.simon.local
INFO Login to the console with user: "kubeadmin", and password: "ur6xT-gxmVW-WVUuD-Sd44J"
```
![Red Hat OpenShift Web Console](images/Day58%20-%20OpenShift%20Cluster%20Install/Red%20Hat%20OpenShift%20Web%20Console.jpg)
Once logged in, you'll view the persona that you have access to (1). In my example, I'm using the kubeadmin account, so I see the administrative view first, and I can change this to the Developer view as well (see second screenshot).
![Red Hat OpenShift - Web Console - Developer Homepage](images/Day58%20-%20OpenShift%20Cluster%20Install/Red%20Hat%20OpenShift%20-%20Web%20Console%20-%20Developer%20Homepage.jpg)
For this example today, I deployed my trusty [pacman app](https://github.com/saintdle/pacman-tanzu) from the [Helm Chart hosted on GitHub](https://github.com/saintdle/helm-charts). Unfortunately I've not configured images for the app, but you can see a topology is built, and I can click into each component and see information about it.
![Red Hat OpenShift - Web Console - Developer Topology](images/Day58%20-%20OpenShift%20Cluster%20Install/Red%20Hat%20OpenShift%20-%20Web%20Console%20-%20Developer%20Topology.jpg)
If I take a step back, I can also look at my project as a whole and see resource utilisation.
# Summary
I think I'll stop here and wrap up. As you now know from [day 57](day57.md), there are a few deployment methods and numerous platforms to deploy to. This walkthrough covered the simplest deployment to one of the most popular platforms, VMware vSphere. For those of you who want to try out OpenShift, you have a few options: you can deploy a [single node OpenShift environment](https://cloud.redhat.com/blog/visual-guide-to-single-node-openshift-deploy) running on your local machine, you can run the [OpenShift sandbox](https://developers.redhat.com/developer-sandbox) via their website, or you can run [OKD](https://www.okd.io/), the open-source version, in your home lab. Or stick with a trial of the enterprise software like I have.
In [day 59](day59.md), we will cover application deployment in a little more detail and start to look at Security Context Constraints (SCC), the out-of-the-box security features of OpenShift which build upon the older PodSecurityPolicies from Kubernetes. SCC is sometimes a little hard to get used to, and can be a source of frustration for many when getting started with OpenShift.
# Resources
- vEducate.co.uk
  - [OpenShift on VMware Integrating with vSphere Storage, Networking and Monitoring](https://veducate.co.uk/openshift-on-vmware/)
  - [How to specify your vSphere virtual machine resources when deploying Red Hat OpenShift](https://veducate.co.uk/deploy-vsphere-openshift-machine-resources/)
  - [Red Hat OpenShift on VMware vSphere How to Scale and Edit your cluster deployments](https://veducate.co.uk/openshift-vsphere-scale-clusters/)
- Red Hat OpenShift Commons - Community sessions helping steer the future of OpenShift as an open-source developed project - [Link](https://cloud.redhat.com/blog/tag/openshift-commons)
- Red Hat Sysadmin Blog - [Deploy and run OpenShift on AWS: 4 options](https://www.redhat.com/sysadmin/run-openshift-aws)
- Red Hat OpenShift Documentation - [Installing a cluster quickly on Azure](https://docs.openshift.com/container-platform/4.12/installing/installing_azure/installing-azure-default.html)
- YouTube - [TAM Lab 069 - Deploying OpenShift 4.3 to VMware vSphere (UPI install)](https://www.youtube.com/watch?v=xZpoZZ2EfYc)
- [Red Hat OpenShift Container Platform 4.10 on VMware Cloud Foundation 4.5](https://core.vmware.com/resource/red-hat-openshift-container-platform-410-vmware-cloud-foundation-45)


# Deploying a Sample Application on Red Hat OpenShift: Handling Security Context Constraints (SCC)
On [Day 58](day58.md) we finished looking around the developer and administrator interfaces of a newly deployed cluster.
In this submission (Day 59), we will walk through the process of deploying a sample MongoDB application to a newly deployed Red Hat OpenShift cluster. However, this deployment will fail due to the default security context constraints (SCC) in OpenShift. We will explain why the deployment fails, how to resolve the issue, and provide a brief overview of SCC in OpenShift with examples.
## Understanding Security Context Constraints (SCC)
Security context constraints in OpenShift are a security feature that allows administrators to control various aspects of the container runtime, such as user and group IDs, SELinux context, and the use of host resources. In short, SCCs determine which security settings are allowed or disallowed for containerized applications. By default, OpenShift comes with several predefined SCCs, such as `restricted`, `anyuid`, and `hostaccess`. These SCCs serve as templates for creating custom SCCs to meet specific security requirements.
> Warning: Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed.
For example, the restricted SCC (default for most deployments, or restricted-v2 for new installs of OCP 4.11 and later) does not allow containers to run as root or with privileged access, while the `anyuid` SCC permits containers to run with any user ID, including root. By creating custom SCCs and granting them to service accounts or users, administrators can ensure that applications adhere to the desired security policies without compromising functionality.
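For example, rather than loosening a default SCC, the usual pattern is a dedicated service account granted the SCC it needs; a sketch with hypothetical names (`mongodb-sa` service account, `demo` project):

```
# Create a service account for the workload
oc create sa mongodb-sa -n demo

# Grant it the anyuid SCC instead of editing restricted-v2
oc adm policy add-scc-to-user anyuid -z mongodb-sa -n demo

# Verify which SCC a running pod was admitted under
oc get pod <pod-name> -n demo -o jsonpath='{.metadata.annotations.openshift\.io/scc}'
```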
Security context constraints allow an administrator to control:
SCCs consist of settings and strategies that control the security features a pod has access to. These settings fall into three categories:
- Controlled by a boolean
  - Fields of this type default to the most restrictive value. For example, `AllowPrivilegedContainer` is always set to `false` if unspecified.
- Controlled by an allowable set
  - Fields of this type are checked against the set to ensure their value is allowed.
- Controlled by a strategy
@ -58,7 +58,7 @@ SCCs consist of settings and strategies that control the security features that
CRI-O has the following [default list of capabilities](https://github.com/cri-o/cri-o/blob/main/docs/crio.conf.5.md#crioruntime-table) that are allowed for each container of a pod:
```
default_capabilities = [
"CHOWN",
"DAC_OVERRIDE",
"FSETID",
"FOWNER",
"SETGID",
"SETUID",
"SETPCAP",
"NET_BIND_SERVICE",
"KILL",
]
```
You can learn more about Linux capabilities [here](https://linuxera.org/container-security-capabilities-seccomp/) and [here](https://man7.org/linux/man-pages/man7/capabilities.7.html). The containers use the capabilities from this default list, but pod manifest authors (the person writing the application YAML for Kubernetes) can alter the list by requesting additional capabilities or removing some of the default behaviors. To control the capabilities allowed or denied for Pods running in the cluster, use the `allowedCapabilities`, `defaultAddCapabilities`, and `requiredDropCapabilities` parameters in your SCC to control such requests from the pods.
#### Quick Snippet: configuring a pod with capabilities
You can [specify additional capabilities for your pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container) as per the below example.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: security-context-demo-4
spec:
containers:
- name: sec-ctx-4
image: gcr.io/google-samples/node-hello:1.0
securityContext:
capabilities:
add: ["NET_ADMIN", "SYS_TIME"]
```
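Assuming the pod is admitted by an SCC that allows those capabilities, you can sanity-check the effective capability sets from inside the container:

```sh
# CapPrm/CapEff in the process status are bitmasks of granted capabilities
oc exec security-context-demo-4 -- grep Cap /proc/1/status
# Decode a bitmask into capability names on any Linux host with the libcap
# utilities installed (the value here is just an example)
capsh --decode=00000000aa0435fb
```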
Let's look at some of the default contexts in further detail.
1. Restricted-v2 SCC:

The restricted-v2 SCC, among other things:

- Ensures that no child process of a container can gain more privileges than its parent (`allowPrivilegeEscalation=false`)
You can get this SCC configuration by running `oc get scc restricted-v2 -o yaml`
```yaml
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
allowPrivilegeEscalation: false
allowPrivilegedContainer: false
allowedCapabilities:
  - NET_BIND_SERVICE
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
  type: MustRunAs
groups:
  - system:authenticated
kind: SecurityContextConstraints
metadata:
  annotations:
include.release.openshift.io/ibm-cloud-managed: "true"
include.release.openshift.io/self-managed-high-availability: "true"
include.release.openshift.io/single-node-developer: "true"
kubernetes.io/description:
restricted denies access to all host features and requires
pods to be run with a UID, and SELinux context that are allocated to the namespace.
release.openshift.io/create-only: "true"
creationTimestamp: "2023-03-16T09:34:36Z"
  name: restricted-v2
  # ...
priority: null
readOnlyRootFilesystem: false
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
seccompProfiles:
  - runtime/default
supplementalGroups:
type: RunAsAny
users: []
volumes:
- configMap
- downwardAPI
- emptyDir
- ephemeral
- persistentVolumeClaim
- projected
- secret
```
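Note the `MustRunAsRange` strategy under `runAsUser` above: OpenShift allocates a UID range per namespace and records it in annotations, which is where the "UID allocated to the namespace" in the description comes from. You can inspect it like so (`<namespace>` is a placeholder):

```sh
# Per-namespace UID range, SELinux labels, and supplemental groups
# that restricted-v2 draws from
oc get namespace <namespace> -o yaml | grep 'sa.scc'
```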
2. Privileged SCC:
The privileged SCC, among other things, allows:

- Pods to request any capabilities
You can get this SCC configuration by running `oc get scc privileged -o yaml`
```yaml
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- "*"
allowedUnsafeSysctls:
- "*"
apiVersion: security.openshift.io/v1
defaultAddCapabilities: null
fsGroup:
type: RunAsAny
groups:
- system:cluster-admins
- system:nodes
- system:masters
kind: SecurityContextConstraints
metadata:
annotations:
include.release.openshift.io/ibm-cloud-managed: "true"
include.release.openshift.io/self-managed-high-availability: "true"
include.release.openshift.io/single-node-developer: "true"
kubernetes.io/description:
"privileged allows access to all privileged and host
features and the ability to run as any user, any group, any fsGroup, and with
any SELinux context. WARNING: this is the most relaxed SCC and should be used
only for cluster administration. Grant with caution."
release.openshift.io/create-only: "true"
creationTimestamp: "2023-03-16T09:34:35Z"
generation: 1
  name: privileged
  # ...
runAsUser:
  type: RunAsAny
seLinuxContext:
type: RunAsAny
seccompProfiles:
- "*"
supplementalGroups:
type: RunAsAny
users:
- system:admin
- system:serviceaccount:openshift-infra:build-controller
volumes:
- "*"
```
Now let's look at some specific items from the above YAML:
- **allowedCapabilities:** - A list of capabilities that a pod can request. An empty list means that none of the capabilities can be requested while the special symbol \* allows any capabilities.
- **defaultAddCapabilities: []** - A list of additional capabilities that are added to any pod.
- **fsGroup:** - The FSGroup strategy, dictates the allowable values for the security context.
- **groups** - The groups that can access this SCC.
- **seLinuxContext:** - The seLinuxContext strategy type, dictates the allowable values for the security context.
- **supplementalGroups** - The supplementalGroups strategy, dictates the allowable supplemental groups for the security context.
- **users:** - The users who can access this SCC.
- **volumes:** - The allowable volume types for the security context. In the example, \* allows the use of all volume types.
The users and groups fields on the SCC control which users can access the SCC. By default, cluster administrators, nodes, and the build controller are granted access to the privileged SCC. All authenticated users are granted access to the restricted-v2 SCC.
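If you want to check this on your own cluster, `oc adm policy who-can` should list who is able to use a given SCC (output varies by cluster):

```sh
# Which users, groups, and service accounts can 'use' the restricted-v2 SCC
oc adm policy who-can use scc restricted-v2
```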
I'm going to deploy some of the basic components of my [trusty Pac-Man application for Kubernetes](https://github.com/saintdle/pacman-tanzu): the MongoDB Deployment, PVC, and Secret.
First, I need to create the namespace to place the components in: `oc create ns pacman`.
Now I apply the below YAML file with `oc apply -f mongo-test.yaml`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  namespace: pacman
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mongo
  template:
    metadata:
      labels:
name: mongo
spec:
initContainers:
- args:
- |
mkdir -p /bitnami/mongodb
chown -R "1001:1001" "/bitnami/mongodb"
command:
- /bin/bash
- -ec
image: docker.io/bitnami/bitnami-shell:10-debian-10-r158
imagePullPolicy: Always
name: volume-permissions
resources: {}
securityContext:
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /bitnami/mongodb
name: mongo-db
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
  # ...
serviceAccountName: default
terminationGracePeriodSeconds: 30
volumes:
- name: mongo-db
persistentVolumeClaim:
claimName: mongo-storage
containers:
- image: bitnami/mongodb:4.4.8
name: mongo
env:
- name: MONGODB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
key: database-admin-password
name: mongodb-users-secret
- name: MONGODB_DATABASE
valueFrom:
secretKeyRef:
key: database-name
name: mongodb-users-secret
- name: MONGODB_PASSWORD
valueFrom:
secretKeyRef:
key: database-password
name: mongodb-users-secret
- name: MONGODB_USERNAME
valueFrom:
secretKeyRef:
key: database-user
name: mongodb-users-secret
ports:
- name: mongo
containerPort: 27017
volumeMounts:
- name: mongo-db
mountPath: /bitnami/mongodb/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mongo-storage
  namespace: pacman
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Secret
metadata:
name: mongodb-users-secret
namespace: pacman
type: Opaque
data:
database-admin-name: Y2x5ZGU=
database-admin-password: Y2x5ZGU=
database-name: cGFjbWFu
database-password: cGlua3k=
database-user: Ymxpbmt5
```
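A quick aside on the Secret above: the values are only base64-encoded, not encrypted, so they are trivial to read back:

```sh
# Decode one of the Secret values above; prints "pacman"
echo 'cGFjbWFu' | base64 -d
```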
Once applied, I see the following output:
> Warning: would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (containers "volume-permissions", "mongo" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "volume-permissions", "mongo" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "volume-permissions", "mongo" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "volume-permissions" must not set runAsUser=0), seccompProfile (pod or containers "volume-permissions", "mongo" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
>
> deployment.apps/mongo created
>
> secret/mongodb-users-secret created
If I now inspect the deployment and replicaset in the `pacman` namespace, we'll see that it's stuck: no pods are running.
```
# oc get all -n pacman
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mongo   0/1     0            0           3m9s

NAME DESIRED CURRENT READY AGE
replicaset.apps/mongo-56cc764fb 1 0 0 3m9s
```
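The reason isn't shown by `oc get all`; it's recorded as events on the ReplicaSet (names will differ in your cluster), so that's the place to look whenever a Deployment sits at 0/1 with no pods:

```sh
# SCC admission failures surface as FailedCreate events on the ReplicaSet
oc describe replicaset mongo-56cc764fb -n pacman
# Or scan recent events across the namespace
oc get events -n pacman --sort-by=.lastTimestamp
```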
## Why the Deployment Fails
The provided Kubernetes application includes an initContainer with the following security context:
```yaml
securityContext:
runAsUser: 0
```
This configuration means that the initContainer will attempt to run as the root user (UID 0). However, OpenShift's default SCCs restrict the use of the root user for security reasons, so the deployment fails because it violates the default security context constraints. The same is true of the other settings flagged in the output above. Remember that in OCP 4.11 and later (new installs), the default SCC is the restricted-v2 policy.
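For reference, the warning effectively spells out what a restricted-v2-compliant container securityContext looks like. Something along these lines would satisfy the default SCC, although it won't help this particular workload, because the initContainer genuinely needs root to `chown` the volume:

```yaml
securityContext:
  runAsNonRoot: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
```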
To resolve this issue, we need to modify the deployment configuration to comply with the default SCC, or grant the workload access to an SCC that allows what it needs. Here, we'll take the custom SCC route:
1. Create a new custom SCC, and save the below YAML in a file called mongo-custom-scc.yaml:
```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: mongo-custom-scc
allowPrivilegeEscalation: true
allowPrivilegedContainer: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
type: RunAsAny
supplementalGroups:
type: RunAsAny
```
2. Apply the custom SCC to your OpenShift cluster:
```sh
oc apply -f mongo-custom-scc.yaml
```
3. Grant the mongo-custom-scc SCC to the service account that the MongoDB deployment is using:
```sh
oc adm policy add-scc-to-user mongo-custom-scc system:serviceaccount:<namespace>:default
# In my environment, I run:
oc adm policy add-scc-to-user mongo-custom-scc system:serviceaccount:pacman:default
```
Replace `<namespace>` with the namespace where your MongoDB deployment is located.
4. Redeploy the MongoDB application.
```
# oc scale deploy mongo -n pacman --replicas=0
deployment.apps/mongo scaled
# oc scale deploy mongo -n pacman --replicas=1
deployment.apps/mongo scaled
```
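At this point you can confirm which SCC admitted the pod; OpenShift stamps it on the pod as an annotation:

```sh
# A mongo pod should now be running
oc get pods -n pacman
# The openshift.io/scc annotation names the SCC that admitted each pod
oc get pods -n pacman -o yaml | grep 'openshift.io/scc'
```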
In the real world, your first port of call should always be to ensure your containers and applications run with the least privileges necessary, so they don't need to run as root.

If they do need some sort of privilege, then putting tight RBAC and SCC controls in place is key.
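On that note, one way to keep SCC access tightly scoped (covered in the resources below) is to grant the `use` verb through RBAC instead of editing SCC user lists; a minimal sketch, assuming the `mongo-custom-scc` from earlier:

```yaml
# ClusterRole that permits 'use' of the custom SCC; bind it with a
# RoleBinding in a single namespace to limit the SCC to that namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-mongo-custom-scc
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["mongo-custom-scc"]
    verbs: ["use"]
```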
In this post, we discussed how the default security context constraints in OpenShift can prevent deployments from running as expected. We provided a solution to the specific issue of running an initContainer as root for a MongoDB application. Understanding and managing SCCs in OpenShift is essential for maintaining secure and compliant applications within your cluster.
On [Day 60](day60.md), we will look at OpenShift Projects Creation, Configuration and Governance, for example consuming SCC via the project level, and other features of Red Hat OpenShift.
## Resources
- [Using the legacy restricted SCC in OCP 4.11+](https://access.redhat.com/articles/6973044)
- [Role-based access to security context constraints](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html#role-based-access-to-ssc_configuring-internal-oauth)
- You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster.
- Kubernetes.io - [Configure a Security Context for a Pod or Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)