Merge pull request #389 from saintdle/main

This commit is contained in:
Michael Cade 2023-03-29 21:57:20 +01:00 committed by GitHub
commit e29e54489d
GPG Key ID: 4AEE18F83AFDEB23
33 changed files with 310 additions and 34 deletions


@ -119,8 +119,8 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
- [✔️] ⛑️ 56 > [What does Red Hat OpenShift bring to the party? An Overview](2023/day56.md)
- [✔️] ⛑️ 57 > [Understanding the OpenShift Architecture, Installation Methods and Process](2023/day57.md)
- [✔️] ⛑️ 58 > [Deploying Red Hat OpenShift on VMware vSphere](2023/day58.md)
- [] ⛑️ 59 > [Deploying applications and getting a handle on Security Constraints Context (SCC)](2023/day59.md)
- [] ⛑️ 60 > [](2023/day60.md)
- [✔️] ⛑️ 59 > [Deploying applications and getting a handle on Security Constraints Context (SCC)](2023/day59.md)
- [✔️] ⛑️ 60 > [Looking at OpenShift Projects - Creation, Configuration and Governance](2023/day60.md)
- [] ⛑️ 61 > [](2023/day61.md)
- [] ⛑️ 62 > [](2023/day62.md)



@ -47,7 +47,7 @@ Again, this is not an exhaustive list:
You can read more in-depth coverage of the benefits and features of Red Hat OpenShift in [this datasheet](https://www.redhat.com/en/resources/openshift-container-platform-datasheet), or a full breakdown on the [Red Hat Developers page](https://developers.redhat.com/products/openshift/overview)
![OpenShift Overview](images/cl-OpenShift-container-platform-datasheet-f31593_image1.png)
![OpenShift Overview](images/Day56-OpenShift-container-platform-datasheet.png)
## Where can I deploy OpenShift?


@ -27,7 +27,7 @@ At a basic level, Red Hat OpenShift is built on top of the open-source platform,
If you haven't visited the [#90DaysOfDevOps - Kubernetes section](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/2022.md#kubernetes), then I urge you to do so, before continuing with this section on Red Hat OpenShift.
![Red Hat OpenShift - Product Architect](images/Red%20Hat%20OpenShift%20-%20Product%20Architecture.png)
![Red Hat OpenShift - Product Architect](images/Day57%20-%20Red%20Hat%20OpenShift%20Architecture/Red%20Hat%20OpenShift%20-%20Product%20Architecture.png)
On top of the Kubernetes platform, Red Hat then delivers its enterprise sauce, sprinkled around to help make your cloud-native strategy a success:
@ -55,7 +55,7 @@ The below image nicely finishes off this section covering the product and it's c
- Simplification of creating and managing a cluster
- Built-in tooling for the application developer to create and deploy their applications, with workload lifecycle management included, such as the ability to monitor and scale those applications.
![Red Hat OpenShift Container Platform Lifecycle](images/OpenShift%20Container%20Platform%20lifecycle.png)
![Red Hat OpenShift Container Platform Lifecycle](images/Day57%20-%20Red%20Hat%20OpenShift%20Architecture/OpenShift%20Container%20Platform%20lifecycle.png)
For a further deep dive into the control plane architecture, you can read the [official documentation here](https://docs.openshift.com/container-platform/4.12/architecture/control-plane.html).
@ -100,7 +100,7 @@ There is also additional files which may be stored along side the root of the ``
Once you have all of these files, running the ```openshift-install``` CLI tool will create the ignition files for your bootstrap, control plane, and compute plane nodes. Returning to the earlier descriptions of RHCOS, these files contain the first-boot information to configure the operating system and start the process of building a consistent Kubernetes cluster with minimal to no interaction.
![OpenShift Container Platform installation targets and dependencies](images/OpenShift%20Container%20Platform%20installation%20targets%20and%20dependencies.png)
![OpenShift Container Platform installation targets and dependencies](images/Day57%20-%20Red%20Hat%20OpenShift%20Architecture/OpenShift%20Container%20Platform%20installation%20targets%20and%20dependencies.png)
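For illustration, the first-boot files mentioned above are standard Ignition JSON. A minimal, hypothetical sketch is below; the SSH key is a placeholder, and the real files generated by ```openshift-install``` are far larger, embedding certificates and the full node configuration:

````json
{
  "ignition": {
    "version": "3.2.0"
  },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder-key"]
      }
    ]
  }
}
````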
## Installer provisioned infrastructure (IPI)
@ -134,7 +134,7 @@ You can find out more from this [Red Hat blog post](How to use the OpenShift Ass
A temporary bootstrap machine is provisioned using IPI or UPI, which contains the necessary information to build the OpenShift cluster itself (which becomes the permanent control plane nodes). Once the control plane is online, the control plane will initiate the creation of the compute plane (worker) nodes.
![Creating the bootstrap, control plane, and compute machines](images/Creating%20the%20bootstrap%20control%20plane%20and%20compute%20machines.png)
![Creating the bootstrap, control plane, and compute machines](images/Day57%20-%20Red%20Hat%20OpenShift%20Architecture/Creating%20the%20bootstrap%20control%20plane%20and%20compute%20machines.png)
Once the control plane is initialised, the bootstrap machine is destroyed. If you are provisioning the platform yourself (UPI), then you complete a number of the provisioning steps manually.


@ -103,7 +103,7 @@ After selecting datastore and the network, I need to now input the address for:
However, I hit a bug ([GitHub PR](https://github.com/openshift/installer/pull/6783),[Red Hat Article](https://access.redhat.com/solutions/6994972)) in the installer, whereby it is hardcoded to only accept addresses in the 10.0.0.0/16 range.
![OpenShift-Install create cluster - Sorry, your reply was invalid: IP expected to be in one of the machine networks: 10.0.0.0/16](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-Install%20create%20cluster%20-%20Sorry%2C%20your%20reply%20was%20invalid-%20IP%20expected%20to%20be%20in%20one%20of%20the%20machine%20networks-%2010.0.0.0%3A16.jpg)
![OpenShift-Install create cluster - Sorry, your reply was invalid: IP expected to be in one of the machine networks: 10.0.0.0/16](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-Install%20create%20cluster%20-%20Sorry%2C%20your%20reply%20was%20invalid-%20IP%20expected%20to%20be%20in%20one%20of%20the%20machine%20networks-%2010.0.0.0-16.jpg)
The current workaround for this is to run ````openshift-install create install-config````, provide IP addresses in the 10.0.0.0/16 range, and then alter the ```install-config.yaml``` file manually before running ````openshift-install create cluster````, which will read the available ```install-config.yaml``` file and create the cluster (rather than presenting you with another wizard).
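As a sketch of that manual edit, the relevant ```networking``` block of ```install-config.yaml``` looks something like the below; the 192.168.200.0/24 value is an example for a lab network, and the other values are the typical defaults:

````yaml
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 192.168.200.0/24 # change this from the wizard-enforced 10.0.0.0/16 to your real subnet
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
````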
@ -233,7 +233,7 @@ If we now look within our directory where we ran the ```openshift-install``` ins
Below is a screenshot showing the directory, folders and example of my logging output.
![OpenShift-Install - .openshift_install.log output](/images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-Install%20create%20cluster%20-%20.openshift_install.log%20output.jpg)
![OpenShift-Install - .openshift_install.log output](images/Day58%20-%20OpenShift%20Cluster%20Install/OpenShift-Install%20create%20cluster%20-%20.openshift_install.log%20output.jpg)
## Connecting to your cluster


@ -1,16 +1,16 @@
# Deploying a Sample Application on Red Hat OpenShift: Handling Security Context Constraints (SCC)
In [Day 58](/day58.md) we finished up looking around the developer and administator interfaces of a newly deployed cluster.
On [Day 58](/day58.md) we finished looking around the developer and administrator interfaces of a newly deployed cluster.
In this submission (Day 59), we will walk through the process of deploying a sample MongoDB application to a newly deployed Red Hat OpenShift cluster. However, this deployment will fail due to the default security context constraints (SCC) in OpenShift. We will explain why the deployment fails, how to resolve the issue, and provide a brief overview of SCC in OpenShift with examples.
## Understanding Security Context Constraints (SCC)
Security context constraints in OpenShift are a security feature that allows administrators to control various aspects of container runtime, such as user and group IDs, SELinux context, and the use of host resources. In Short, SCCs determine which security settings are allowed or disallowed for containerized applications. By default, OpenShift comes with several predefined SCCs, such as restricted, anyuid, and hostaccess. These SCCs serve as templates for creating custom SCCs to meet specific security requirements.
Security context constraints in OpenShift are a security feature that allows administrators to control various aspects of the container runtime, such as user and group IDs, SELinux context, and the use of host resources. In short, SCCs determine which security settings are allowed or disallowed for containerized applications. By default, OpenShift comes with several predefined SCCs, such as ```restricted```, ```anyuid```, and ```hostaccess```. These SCCs serve as templates for creating custom SCCs to meet specific security requirements.
> Warning: Do not modify the default SCCs. Customizing the default SCCs can lead to issues when some of the platform pods deploy or OpenShift Container Platform is upgraded. Additionally, the default SCC values are reset to the defaults during some cluster upgrades, which discards all customizations to those SCCs. Instead of modifying the default SCCs, create and modify your own SCCs as needed.
For example, the restricted SCC (default for most deployments, or restricted-v2 for new installs of OCP 4.11 and later) does not allow containers to run as root or with privileged access, while the anyuid SCC permits containers to run with any user ID, including root. By creating custom SCCs and granting them to service accounts or users, administrators can ensure that applications adhere to the desired security policies without compromising functionality.
For example, the restricted SCC (default for most deployments, or restricted-v2 for new installs of OCP 4.11 and later) does not allow containers to run as root or with privileged access, while the ```anyuid``` SCC permits containers to run with any user ID, including root. By creating custom SCCs and granting them to service accounts or users, administrators can ensure that applications adhere to the desired security policies without compromising functionality.
Security context constraints allow an administrator to control:
@ -32,19 +32,19 @@ Security context constraints allow an administrator to control:
- The configuration of allowable supplemental groups
- Whether a container requires write access to its root file system
- Whether a container requires write access to its root file system
- The usage of volume types
- The configuration of allowable seccomp profiles
To learn more details about what each of the out-of-the-box default security context constraint does, see [this official documentation page](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html#default-sccs_configuring-internal-oauth).
To learn more details about what each of the out-of-the-box default security context constraints does, see [this official documentation page](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html#default-sccs_configuring-internal-oauth).
![Red Hat OpenShift - oc get scc](/images/Day58%20-%20OpenShift%20Cluster%20Install/Red%20Hat%20OpenShift%20-%20oc%20get%20scc.jpg)
![Red Hat OpenShift - oc get scc](/2023/images/day59-Red%20Hat%20OpenShift%20-%20oc%20get%20scc.jpg)
### Anatomy of a Security Context Constraint configuration
SCCs consist of settings and strategies that control the security features a pod has access to. These settings fall into three categories:
SCCs consist of settings and strategies that control the security features that a pod has access to. These settings fall into three categories:
- Controlled by a boolean
- Fields of this type default to the most restrictive value. For example:
@ -78,7 +78,7 @@ You can learn more about Linux capabilities [here](https://linuxera.org/containe
You can [specify additional capabilities for your pod](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container) as per the below example.
````
````yaml
apiVersion: v1
kind: Pod
metadata:
@ -92,7 +92,7 @@ spec:
add: ["NET_ADMIN", "SYS_TIME"]
````
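Conversely, hardened profiles expect you to drop capabilities rather than add them. Below is a hedged sketch of a container that drops everything and adds back only a single capability; the pod name and image are illustrative choices, not from the original example:

````yaml
apiVersion: v1
kind: Pod
metadata:
  name: capability-drop-demo
spec:
  containers:
  - name: app
    image: nginx:1.14.2
    securityContext:
      capabilities:
        drop: ["ALL"]             # start from zero capabilities
        add: ["NET_BIND_SERVICE"] # add back only what the app needs
````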
Let's look at some of the default contexts in futher detail.
Let's look at some of the default contexts in further detail.
### Example SCC Configurations
@ -118,7 +118,7 @@ The restricted-v2 SCC:
You can get this SCC configuration by running ```oc get scc restricted-v2 -o yaml```
````
````yaml
allowHostDirVolumePlugin: false
allowHostIPC: false
allowHostNetwork: false
@ -198,7 +198,7 @@ The privileged SCC allows:
You can get this SCC configuration by running ```oc get scc privileged -o yaml```
````
````yaml
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
@ -253,14 +253,14 @@ volumes:
````
Now let's look at some specific items from the above YAML:
- **allowedCapabilities:** - A list of capabilities that a pod can request. An empty list means that none of capabilities can be requested while the special symbol * allows any capabilities.
- **allowedCapabilities:** - A list of capabilities that a pod can request. An empty list means that none of the capabilities can be requested while the special symbol * allows any capabilities.
- **defaultAddCapabilities: []** - A list of additional capabilities that are added to any pod.
- **fsGroup:** - The FSGroup strategy, which dictates the allowable values for the security context.
- **fsGroup:** - The FSGroup strategy, which dictates the allowable values for the security context.
- **groups** - The groups that can access this SCC.
- **requiredDropCapabilities** A list of capabilities to drop from a pod. Or, specify ALL to drop all capabilities.
- **runAsUser:** - The runAsUser strategy type, which dictates the allowable values for the security context.
- **seLinuxContext:** - The seLinuxContext strategy type, which dictates the allowable values for the security context.
- **supplementalGroups** - The supplementalGroups strategy, which dictates the allowable supplemental groups for the security context.
- **seLinuxContext:** - The seLinuxContext strategy type, which dictates the allowable values for the security context.
- **supplementalGroups** - The supplementalGroups strategy, which dictates the allowable supplemental groups for the security context.
- **users:** - The users who can access this SCC.
- **volumes:** - The allowable volume types for the security context. In the example, * allows the use of all volume types.
@ -274,7 +274,7 @@ First, I need to create the namespace to place the components in, ```oc create n
Now I apply the below YAML file ```oc apply -f mongo-test.yaml```
````
````yaml
apiVersion: apps/v1
kind: Deployment
metadata:
@ -404,7 +404,7 @@ replicaset.apps/mongo-56cc764fb 1 0 0 3m9s
The provided Kubernetes application includes an initContainer with the following security context:
````
````yaml
securityContext:
runAsUser: 0
````
@ -415,9 +415,9 @@ This configuration means that the initContainer will attempt to run as the root
To resolve this issue, we need to modify the deployment configuration to comply with the SCC policies in OpenShift. There are several ways to achieve this, but in this example, we will create a custom SCC that allows the initContainer to run as root. Follow these steps:
1. Create a new custom SCC, save the below YAML in a file called mongo-custom-scc.yaml:
1. Create a new custom SCC, and save the below YAML in a file called ```mongo-custom-scc.yaml```:
````
````yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
@ -439,13 +439,13 @@ supplementalGroups:
2. Apply the custom SCC to your OpenShift cluster:
````
````sh
oc apply -f mongo-custom-scc.yaml
````
3. Grant the mongo-custom-scc SCC to the service account that the MongoDB deployment is using:
````
````sh
oc adm policy add-scc-to-user mongo-custom-scc system:serviceaccount:<namespace>:default
# In my environment, I run:
@ -465,15 +465,15 @@ deployment.apps/mongo scaled
deployment.apps/mongo scaled
````
In the real-world, the first port of call should always be to work to ensure your containers and applications run with the least privileges necessary, and therefore don't need to run as root.
In the real world, the first port of call should always be to work to ensure your containers and applications run with the least privileges necessary and therefore don't need to run as root.
If they do need some sort of privilege, then defining tight RBAC and SCC control in-place is key.
If they do need some sort of privilege, then defining tight RBAC and SCC control in place is key.
# Summary
In this post, we discussed how the default security context constraints in OpenShift can prevent deployments from running as expected. We provided a solution to the specific issue of running an initContainer as root for a MongoDB application. Understanding and managing SCCs in OpenShift is essential for maintaining secure and compliant applications within your cluster.
In [Day 60](/day60.md), we will look at RBAC in a cluster in more details, such as the accounts used to access a cluster, the service accounts used by container, and how you tie it all together to areas such as consuming SCC and other features of Red Hat OpenShift.
On [Day 60](/day60.md), we will look at OpenShift Projects - Creation, Configuration and Governance, for example consuming SCC at the project level, and other features of Red Hat OpenShift.
## Resources
@ -484,4 +484,5 @@ In [Day 60](/day60.md), we will look at RBAC in a cluster in more details, such
- [Pods fail to create due to "allowPrivilegeEscalation: true" in OpenShift 4.11](https://access.redhat.com/solutions/6976492)
- [Using the legacy restricted SCC in OCP 4.11+](https://access.redhat.com/articles/6973044)
- [Role-based access to security context constraints](https://docs.openshift.com/container-platform/4.12/authentication/managing-security-context-constraints.html#role-based-access-to-ssc_configuring-internal-oauth)
- You can specify SCCs as resources that are handled by RBAC. This allows you to scope access to your SCCs to a certain project or to the entire cluster.
- Kubernetes.io - [Configure a Security Context for a Pod or Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)


@ -0,0 +1,272 @@
# OpenShift Projects - Creation, Configuration and Governance
## Understanding OpenShift Projects: How They Differ from Kubernetes Namespaces
Red Hat OpenShift adds many features to simplify and enhance the management of Kubernetes clusters. One such feature is OpenShift Projects, which are similar to Kubernetes Namespaces but with added benefits tailored to the enterprise environment. In this post, we will explore the concept of OpenShift Projects, how they differ from Kubernetes Namespaces, and provide examples of creating and configuring Projects.
### OpenShift Projects: A Brief Overview
OpenShift Projects are an abstraction layer built on top of Kubernetes Namespaces. They provide a convenient way to organize and manage resources within an OpenShift cluster, and they offer additional features such as:
- Simplified multi-tenancy: Projects enable better isolation between users and teams, ensuring that each group works within its own environment without impacting others.
- Access control: Projects facilitate role-based access control (RBAC), allowing administrators to define and manage user permissions at the project level.
- Resource quotas and limits: Projects support setting resource quotas and limits to prevent overconsumption of cluster resources by individual projects.
## Creating and Configuring an OpenShift Project
Let's walk through the process of creating and configuring an OpenShift Project.
1. Create a new project:
To create a new project, use the ```oc new-project``` command:
````sh
$ oc new-project my-sample-project --description="My Sample OpenShift Project" --display-name="Sample Project"
````
This command creates a new project called my-sample-project with a description and display name.
2. Switch between projects:
You can switch between projects using the ```oc project``` command:
````sh
$ oc project my-sample-project
````
This command sets the active project to my-sample-project.
3. Configure resource quotas:
You can apply resource quotas to your project to limit the consumption of resources. Create a file called ```resource-quota.yaml``` with the following content:
````yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: my-resource-quota
spec:
hard:
requests.cpu: "2"
requests.memory: 2Gi
limits.cpu: "4"
limits.memory: 4Gi
````
4. Apply the resource quota to your project:
````sh
$ oc apply -f resource-quota.yaml -n my-sample-project
````
This command applies the resource quota to the my-sample-project, limiting the total CPU and memory consumption for the project.
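A ResourceQuota caps the project's total consumption, but it doesn't set per-container defaults. As a companion sketch (the name and values here are examples, not part of the walkthrough above), a ```LimitRange``` in the same project provides default requests and limits for containers that don't declare their own:

````yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: my-limit-range
spec:
  limits:
  - type: Container
    defaultRequest: # applied when a container sets no requests
      cpu: 100m
      memory: 128Mi
    default:        # applied when a container sets no limits
      cpu: 500m
      memory: 256Mi
    max:            # ceiling any single container may request
      cpu: "1"
      memory: 1Gi
````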
5. Configure role-based access control (RBAC):
To manage access control for your project, you can define and assign roles to users. For example, create a file called ```developer-role.yaml``` with the following content:
````yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: developer
rules:
- apiGroups: [""]
resources: ["pods", "services", "configmaps", "persistentvolumeclaims"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
````
Apply the role to your project:
````sh
$ oc apply -f developer-role.yaml -n my-sample-project
````
Now, you can grant the developer role to a specific user:
````sh
$ oc policy add-role-to-user developer my-user -n my-sample-project
````
This command grants the ```developer``` role to ```my-user``` in the ```my-sample-project``` project.
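Behind the scenes, ```oc policy add-role-to-user``` creates a RoleBinding in the project. A roughly equivalent manifest, using the names from the example above (the binding name is my own choice for illustration), would be:

````yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: my-sample-project
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: developer # the Role created earlier
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: my-user
````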
## Adding SCC to a project
Remember in [Day 59](/2023/day59.md), we covered Security Context Constraints and how they provide security for the workloads we run inside the cluster. In the examples I provided, we fixed the security violation of the workload (pod) by ensuring the service account it uses is added to the correct SCC policy.
In this example, I'm going to set the SCC at the project level, so that any workloads deployed to this project conform to the correct policy.
1. Create a new project and change to that project's context
````sh
$ oc new-project scc-ns-test
$ oc project scc-ns-test
````
2. Create a file called ```nginx.yaml``` with the below content
````yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
````
3. Deploy the Nginx Deployment to this project, and watch for the failure
````sh
$ oc apply -f nginx.yaml
Warning: would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
deployment.apps/nginx-deployment created
````
As per Day 59's example, the deployment is created, but the pod will not be running.
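The warning lists exactly what the restricted profile expects from each container. Instead of relaxing the whole project (as this walkthrough goes on to do), a pod-level alternative is to give the container a compliant security context. A sketch is below; note the stock ```nginx``` image may still fail at runtime when forced to run as non-root, so an unprivileged image variant may also be needed:

````yaml
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        securityContext:
          allowPrivilegeEscalation: false
          runAsNonRoot: true
          capabilities:
            drop: ["ALL"]
          seccompProfile:
            type: RuntimeDefault
````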
4. Let's inspect the project's configuration before we continue further
````sh
$ oc get project scc-ns-test -o json
````
````json
{
"apiVersion": "project.openshift.io/v1",
"kind": "Project",
"metadata": {
"annotations": {
"openshift.io/description": "",
"openshift.io/display-name": "",
"openshift.io/requester": "system:admin",
"openshift.io/sa.scc.mcs": "s0:c27,c4",
"openshift.io/sa.scc.supplemental-groups": "1000710000/10000",
"openshift.io/sa.scc.uid-range": "1000710000/10000"
},
"creationTimestamp": "2023-03-29T09:23:18Z",
"labels": {
"kubernetes.io/metadata.name": "scc-ns-test",
"pod-security.kubernetes.io/audit": "restricted",
"pod-security.kubernetes.io/audit-version": "v1.24",
"pod-security.kubernetes.io/warn": "restricted",
"pod-security.kubernetes.io/warn-version": "v1.24"
},
"name": "scc-ns-test",
"resourceVersion": "11247602",
"uid": "3f720113-1e30-4a3f-b97e-48f88735e510"
},
"spec": {
"finalizers": [
"kubernetes"
]
},
"status": {
"phase": "Active"
}
}
````
Note that under the ```labels``` section, we have several pod-security settings specified by default. That is because the most restrictive policy is applied to namespaces and their workloads by default.
5. Now let's delete our deployment
````sh
$ oc delete -f nginx.yaml
````
6. Let's alter the configuration of this project to consume the ````privileged```` SCC, allowing us to brute-force our pod to run. (In the real world, we would create an appropriate SCC and use that, rather than giving workloads god-mode access.)
We are going to use the `oc patch` command and pass in the modifications to the labels. There are two ways to achieve this using the patch argument: we can either pass in the changes on the command line in JSON format, or pass in a file containing JSON or YAML content. I'll detail both options below.
This first command passes the JSON content on the command line to alter the configuration.
````sh
$ oc patch namespace/scc-ns-test -p '{"metadata":{"labels":{"pod-security.kubernetes.io/audit":"privileged","pod-security.kubernetes.io/enforce":"privileged","pod-security.kubernetes.io/warn":"privileged","security.openshift.io/scc.podSecurityLabelSync":"false"}}}'
````
To break this down further: from the earlier example showing the Project configuration in JSON, we are altering the "audit", "warn" and "enforce" pod-security settings to the "privileged" value, and we also add a new label, "security.openshift.io/scc.podSecurityLabelSync", with a value of "false". This stops the security admission controller from overwriting our changes, as the default SCC enforced is "restricted".
Rather than including the JSON changes in the same command line, which can get very long if you have a lot of changes, you can simply create a JSON or YAML file containing content such as the below, and then apply it using the ```--patch-file``` argument.
````yaml
metadata:
labels:
pod-security.kubernetes.io/audit: privileged
pod-security.kubernetes.io/enforce: privileged
pod-security.kubernetes.io/warn: privileged
    security.openshift.io/scc.podSecurityLabelSync: "false"
````
````sh
oc patch namespace/scc-ns-test --patch-file ns-patch.yaml
````
7. Now if we inspect our Project, we will see the changes in effect.
````sh
oc get project scc-ns-test -o json
````
````json
{
"apiVersion": "project.openshift.io/v1",
"kind": "Project",
"metadata": {
"annotations": {
"openshift.io/description": "",
"openshift.io/display-name": "",
"openshift.io/requester": "system:admin",
"openshift.io/sa.scc.mcs": "s0:c27,c4",
"openshift.io/sa.scc.supplemental-groups": "1000710000/10000",
"openshift.io/sa.scc.uid-range": "1000710000/10000"
},
"creationTimestamp": "2023-03-29T09:23:18Z",
"labels": {
"kubernetes.io/metadata.name": "scc-ns-test",
"pod-security.kubernetes.io/audit": "privileged",
"pod-security.kubernetes.io/audit-version": "v1.24",
"pod-security.kubernetes.io/enforce": "privileged",
"pod-security.kubernetes.io/warn": "privileged",
"pod-security.kubernetes.io/warn-version": "v1.24",
"security.openshift.io/scc.podSecurityLabelSync": "false"
},
"name": "scc-ns-test",
"resourceVersion": "11479286",
"uid": "3f720113-1e30-4a3f-b97e-48f88735e510"
},
"spec": {
"finalizers": [
"kubernetes"
]
},
"status": {
"phase": "Active"
}
}
````
8. Redeploy the nginx instances or other containers you've been working with.
# Summary
There is just so much to cover, but hopefully you've now learned that Projects are more than just a Kubernetes Namespace with a different name. One of the areas we didn't cover is the ability to [control Project creation](https://docs.openshift.com/container-platform/4.12/applications/projects/configuring-project-creation.html) by OpenShift users, either from a governed default template, or simply removing self-service access to create projects.
On [Day 61](/2023/day61.md), we shall cover the larger subject of RBAC within the cluster, and bring it back to applying access to projects.
## Resources
- Red Hat OpenShift Documentation - Building Applications - [Projects](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/building_applications/projects#doc-wrapper)


@ -0,0 +1,3 @@
## Resources
-
- Red Hat OpenShift Documentation - [Using RBAC to define and apply permissions](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html/authentication_and_authorization/using-rbac)
