Merge branch 'MichaelCade:main' into main

Mau Ha Quang 2023-02-15 15:22:55 +09:00 committed by GitHub
commit 75c6c8e913
173 changed files with 3624 additions and 778 deletions

13
.github/FUNDING.yml vendored Normal file
View File

@ -0,0 +1,13 @@
# These are supported funding model platforms
github: [MichaelCade]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # michaelcade1
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

View File

@ -1,24 +0,0 @@
name: Add contributors
on:
schedule:
- cron: '0 12 * * *'
# push:
# branches:
# - master
jobs:
add-contributors:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: BobAnkh/add-contributors@master
with:
REPO_NAME: 'MichaelCade/90DaysOfDevOps'
CONTRIBUTOR: '### Other Contributors'
COLUMN_PER_ROW: '6'
ACCESS_TOKEN: ${{secrets.GITHUB_TOKEN}}
IMG_WIDTH: '100'
FONT_SIZE: '14'
PATH: '/Contributors.md'
COMMIT_MESSAGE: 'docs(Contributors): update contributors'
AVATAR_SHAPE: 'round'

18
.github/workflows/welcome_workflow.yaml vendored Normal file
View File

@ -0,0 +1,18 @@
name: 'Welcome New Contributors'
on:
issues:
types: [opened]
pull_request_target:
types: [opened]
jobs:
welcome-new-contributor:
runs-on: ubuntu-latest
steps:
- name: 'Greet the contributor'
uses: garg3133/welcome-new-contributors@v1.2
with:
token: ${{ secrets.GITHUB_TOKEN }}
issue-message: 'Hello there, thanks for opening your first issue here. We welcome you to the #90DaysOfDevOps community!'
pr-message: 'Hello there, thanks for opening your first Pull Request. Someone will review it soon. Welcome to the #90DaysOfDevOps community!'

BIN
2022.jpg

Binary file not shown.

Before

Width:  |  Height:  |  Size: 51 KiB

View File

@ -77,14 +77,14 @@ The ones we want to learn more about are the build, install and run.
![](Images/Day10_Go8.png)
- `go run` - This command compiles and runs the main package comprised of the .go files specified on the command line. The command is compiled to a temporary folder.
- `go build` - To compile packages and dependencies, compile the package in the current directory. If the `main` package, will place the executable in the current directory if not then it will place the executable in the `pkg` folder. `go build` also enables you to build an executable file for any Go Supported OS platform.
- `go install` - The same as go build but will place the executable in the `bin` folder
- `go build` - To compile packages and dependencies, compile the package in the current directory. If the Go project contains a `main` package, it will create and place the executable in the current directory; if not, it will place the executable in the `pkg` folder, where it can be imported and used by other Go programs. `go build` also enables you to build an executable file for any Go Supported OS platform.
- `go install` - The same as go build but will place the executable in the `bin` folder.
We have run through go build and go run but feel free to run through them again here if you wish; `go install`, as stated above, puts the executable in our bin folder.
![](Images/Day10_Go9.png)
Hopefully, if you are following along you are watching one of the playlists or videos below, I am taking bits of all of these and translating these into my notes so that I can understand the foundational knowledge of the Golang language. The resources below are likely going to give you a much better understanding of a lot of the areas you need overall but I am trying to document the 7 days or 7 hours worth of the journey with interesting things that I have found.
Hopefully, if you are following along, you are watching one of the playlists or videos below. I am taking bits of all of these and translating these into my notes so that I can understand the foundational knowledge of the Golang language. The resources below are likely going to give you a much better understanding of a lot of the areas you need overall, but I am trying to document the 7 days or 7 hours worth of the journey with interesting things that I have found.
## Resources

View File

@ -51,7 +51,7 @@ You will then see from the below that we built our code with the above example a
![](Images/Day11_Go1.png)
We also know that our challenge is 90 days at least for this challenge, but next, maybe it's 100 so we want to define a variable to help us here as well. However, for our program, we want to define this as a constant. Constants are like variables, except that their value cannot be changed within code (we can still create a new app later on down the line with this code and change this constant but this 90 will not change whilst we are running our application)
We also know that our challenge is at least 90 days this time around, but maybe next time it's 100, so we want to define a variable to help us here as well. However, for our program, we want to define this as a constant. Constants are like variables, except that their value cannot be changed within code (we can still create a new app later on down the line with this code and change this constant, but this 90 will not change while we are running our application).
Adding the `const` to our code and adding another line of code to print this.

View File

@ -84,7 +84,7 @@ If we create an additional file called `samplecode.ps1`, the status would become
![](Images/Day35_Git10.png)
Add our new file using the `git add sample code.ps1` command and then we can run `git status` again and see our file is ready to be committed.
Add our new file using the `git add samplecode.ps1` command and then we can run `git status` again and see our file is ready to be committed.
![](Images/Day35_Git11.png)

View File

@ -44,7 +44,7 @@ Now we can choose additional components that we would like to also install but a
![](Images/Day36_Git4.png)
We can then choose which SSH Executable we wish to use. IN leave this as the bundled OpenSSH that you might have seen in the Linux section.
We can then choose which SSH Executable we wish to use. I leave this as the bundled OpenSSH that you might have seen in the Linux section.
![](Images/Day36_Git5.png)

View File

@ -1,22 +1,22 @@
## Getting Hands-On with Python & Network
## Manos a la obra con Python y Redes
In this final section of Networking fundamentals, we are going to cover some automation tasks and tools with our lab environment created on [Day 26](day26.md)
En esta sección final de Fundamentos de Redes, vamos a cubrir algunas tareas y herramientas de automatización con nuestro entorno de laboratorio creado el [Día 26](day26.md).
We will be using an SSH tunnel to connect to our devices from our client vs telnet. The SSH tunnel created between client and device is encrypted. We also covered SSH in the Linux section on [Day 18](day18.md)
Utilizaremos un túnel SSH para conectarnos a nuestros dispositivos desde nuestro cliente, en lugar de telnet. El túnel SSH creado entre el cliente y el dispositivo está cifrado. También cubrimos SSH en la sección de Linux, el [Día 18](day18.md).
## Access our virtual emulated environment
## Acceder a nuestro entorno virtual emulado
For us to interact with our switches we either need a workstation inside the EVE-NG network or you can deploy a Linux box there with Python installed to perform your automation ([Resource for setting up Linux inside EVE-NG](https://www.youtube.com/watch?v=3Qstk3zngrY)) or you can do something like me and define a cloud for access from your workstation.
Para interactuar con nuestros switches necesitamos una workstation dentro de la red EVE-NG o puedes desplegar una caja Linux allí con Python instalado para realizar tu automatización ([Recurso para configurar Linux dentro de EVE-NG](https://www.youtube.com/watch?v=3Qstk3zngrY)) o puedes hacer algo como yo y definir una nube para acceder desde tu estación de trabajo.
![](Images/Day27_Networking3.png)
To do this, we have right-clicked on our canvas and we have selected network and then selected "Management(Cloud0)" this will bridge out to our home network.
Para hacer esto, hemos hecho click con el botón derecho del ratón en nuestro lienzo y hemos seleccionado red y luego "Gestión(Nube0)" esto hará de puente con nuestra red doméstica.
![](Images/Day27_Networking4.png)
However, we do not have anything inside this network so we need to add connections from the new network to each of our devices. (My networking knowledge needs more attention and I feel that you could just do this next step to the top router and then have connectivity to the rest of the network through this one cable?)
Sin embargo, no tenemos nada dentro de esta red por lo que necesitamos añadir conexiones desde la nueva red a cada uno de nuestros dispositivos. (Mis conocimientos de redes necesitan más atención y me parece que sólo podría hacer este paso siguiente al router superior y luego tener conectividad con el resto de la red a través de este único cable...).
I have then logged on to each of our devices and I have run through the following commands for the interfaces applicable to where the cloud comes in.
A continuación, he iniciado sesión en cada uno de nuestros dispositivos y he ejecutado los siguientes comandos en las interfaces correspondientes por donde entra la nube.
```
enable
@ -29,7 +29,7 @@ exit
sh ip int br
```
The final step gives us the DHCP address from our home network. My device network list is as follows:
El último paso nos da la dirección DHCP de nuestra red doméstica. La lista de red de mi dispositivo es la siguiente:
| Node | IP Address | Home Network IP |
| ------- | ------------ | --------------- |
@ -39,81 +39,81 @@ The final step gives us the DHCP address from our home network. My device networ
| Switch3 | 10.10.88.113 | 192.168.169.125 |
| Switch4 | 10.10.88.114 | 192.168.169.197 |
### SSH to a network device
### SSH a un dispositivo de red
With the above in place, we can now connect to our devices on our home network using our workstation. I am using Putty but also have access to other terminals such as git bash that give me the ability to SSH to our devices.
Con lo anterior en su lugar, ahora podemos conectarnos a nuestros dispositivos en nuestra red doméstica utilizando nuestra estación de trabajo. Estoy usando Putty pero también tengo acceso a otras terminales como git bash que me dan la capacidad de SSH a nuestros dispositivos.
Below you can see we have an SSH connection to our router device. (R1)
A continuación se puede ver que tenemos una conexión SSH a nuestro dispositivo router. (R1)
![](Images/Day27_Networking5.png)
### Using Python to gather information from our devices
### Usando Python para recopilar información de nuestros dispositivos
The first example of how we can leverage Python is to gather information from all of our devices and in particular, I want to be able to connect to each one and run a simple command to provide me with interface configuration and settings. I have stored this script here [netmiko_con_multi.py](Networking/netmiko_con_multi.py)
El primer ejemplo de cómo podemos aprovechar Python es para recopilar información de todos nuestros dispositivos y, en particular, quiero ser capaz de conectarme a cada uno y ejecutar un comando simple que me proporcione la configuración de la interfaz y los ajustes. He almacenado este script aquí [netmiko_con_multi.py](Networking/netmiko_con_multi.py)
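The repository links the actual script above, but as an illustration, a minimal sketch of what a multi-device collection script like that can look like with netmiko is below. The host list, credentials and the exact command are assumptions for this example, not the contents of `netmiko_con_multi.py` itself.

```
# Hypothetical sketch: connect to each device over SSH with netmiko and collect
# interface details. Hosts and credentials here are placeholders.
from netmiko import ConnectHandler

hosts = ["10.10.88.111", "10.10.88.112", "10.10.88.113", "10.10.88.114"]

for host in hosts:
    device = {
        "device_type": "cisco_ios",
        "host": host,
        "username": "admin",      # placeholder credentials
        "password": "password",
    }
    connection = ConnectHandler(**device)
    print(f"--- {host} ---")
    print(connection.send_command("show ip interface brief"))
    connection.disconnect()
```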
Now when I run this I can see each port configuration over all of my devices.
Ahora cuando ejecuto esto puedo ver la configuración de cada puerto sobre todos mis dispositivos.
![](Images/Day27_Networking6.png)
This could be handy if you have a lot of different devices, create this one script so that you can centrally control and understand quickly all of the configurations in one place.
Esto puede ser útil si tienes muchos dispositivos diferentes, crea este script para que puedas controlar de forma centralizada y entender rápidamente todas las configuraciones en un solo lugar.
### Using Python to configure our devices
### Usando Python para configurar nuestros dispositivos
The above is useful but what about using Python to configure our devices, in our scenario we have a trunked port between `SW1` and `SW2` again imagine if this was to be done across many of the same switches we want to automate that and not have to manually connect to each switch to make the configuration change.
Lo anterior es útil, pero ¿qué pasa con el uso de Python para configurar nuestros dispositivos? En nuestro escenario tenemos un puerto troncal entre `SW1` y `SW2`; de nuevo, imagina que esto tuviera que hacerse en muchos switches iguales: queremos automatizarlo y no tener que conectarnos manualmente a cada switch para hacer el cambio de configuración.
We can use [netmiko_sendchange.py](Networking/netmiko_sendchange.py) to achieve this. This will connect over SSH and perform that change on our `SW1` which will also change to `SW2`.
Podemos usar [netmiko_sendchange.py](Networking/netmiko_sendchange.py) para lograr esto. Esto se conectará por SSH y realizará ese cambio en nuestro `SW1` que también cambiará al `SW2`.
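As a hedged sketch rather than the repository's exact file, a configuration push with netmiko generally looks like the following; the interface and trunk commands, host and credentials are placeholders.

```
# Hypothetical sketch: push a small trunk-port configuration to SW1 over SSH.
from netmiko import ConnectHandler

switch = {
    "device_type": "cisco_ios",
    "host": "10.10.88.111",   # SW1 (placeholder address)
    "username": "admin",
    "password": "password",
}

config_commands = [
    "interface gigabitEthernet0/0",
    "switchport trunk encapsulation dot1q",
    "switchport mode trunk",
]

connection = ConnectHandler(**switch)
print("sending configuration to device")
print(connection.send_config_set(config_commands))
connection.disconnect()
```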
![](Images/Day27_Networking7.png)
Now for those that look at the code, you will see the message appears and tells us `sending configuration to device` but there is no confirmation that this has happened we could add additional code to our script to perform that check and validation on our switch or we could modify our script before to show us this. [netmiko_con_multi_vlan.py](Networking/netmiko_con_multi_vlan.py)
Ahora, para los que miren el código, verán que aparece el mensaje `sending configuration to device`, pero no hay confirmación de que esto haya ocurrido. Podríamos añadir código adicional a nuestro script para realizar esa comprobación y validación en nuestro switch, o podríamos modificar nuestro script anterior para que nos muestre esto. [netmiko_con_multi_vlan.py](Networking/netmiko_con_multi_vlan.py)
![](Images/Day27_Networking8.png)
### backing up your device configurations
### copia de seguridad de las configuraciones de tus dispositivos
Another use case would be to capture our network configurations and make sure we have those backed up, but again we don't want to be connecting to every device we have on our network so we can also automate this using [backup.py](Networking/backup.py). You will also need to populate the [backup.txt](Networking/backup.txt) with the IP addresses you want to backup.
Otro caso de uso sería capturar nuestras configuraciones de red y asegurarnos de que las tenemos respaldadas, pero de nuevo no queremos estar conectándonos a cada dispositivo que tenemos en nuestra red así que también podemos automatizar esto usando [backup.py](Networking/backup.py). También necesitarás rellenar [backup.txt](Networking/backup.txt) con las direcciones IP de las que quieres hacer copia de seguridad.
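A minimal sketch of that backup flow is below, assuming `backup.txt` holds one IP address per line; the credentials and output file naming are placeholders rather than the repo's exact script.

```
# Hypothetical sketch: read device IPs from backup.txt and write each running
# configuration out to its own backup file.
from netmiko import ConnectHandler

with open("backup.txt") as f:
    hosts = [line.strip() for line in f if line.strip()]

for host in hosts:
    device = {
        "device_type": "cisco_ios",
        "host": host,
        "username": "admin",      # placeholder credentials
        "password": "password",
    }
    connection = ConnectHandler(**device)
    running_config = connection.send_command("show running-config")
    with open(f"backup_{host}.txt", "w") as backup_file:
        backup_file.write(running_config)
    connection.disconnect()
    print(f"Backed up {host}")
```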
Run your script and you should see something like the below.
Ejecute su script y debería ver algo como lo siguiente.
![](Images/Day27_Networking9.png)
That could be me just writing a simple print script in python so I should show you the backup files as well.
Eso podría ser simplemente un script de Python que imprime texto, así que también os muestro los archivos de respaldo.
![](Images/Day27_Networking10.png)
### Paramiko
A widely used Python module for SSH. You can find out more at the official GitHub link [here](https://github.com/paramiko/paramiko)
Un módulo de Python ampliamente utilizado para SSH. Puedes encontrar más información en el enlace oficial de GitHub [aquí](https://github.com/paramiko/paramiko)
We can install this module using the `pip install paramiko` command.
Podemos instalar este módulo usando el comando `pip install paramiko`.
![](Images/Day27_Networking1.png)
We can verify the installation by entering the Python shell and importing the paramiko module.
Podemos verificar la instalación entrando en la shell de Python e importando el módulo paramiko.
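For completeness, a quick sanity check of the install plus a bare-bones paramiko connection might look like the sketch below; the host and credentials are placeholders, and note that many network devices prefer an interactive shell, which is exactly what netmiko wraps for you.

```
# Verify paramiko imports, then open a simple SSH session with SSHClient.
import paramiko

print(paramiko.__version__)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("10.10.88.110", username="admin", password="password", look_for_keys=False)

stdin, stdout, stderr = client.exec_command("show ip interface brief")
print(stdout.read().decode())
client.close()
```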
![](Images/Day27_Networking2.png)
### Netmiko
The netmiko module targets network devices specifically whereas paramiko is a broader tool for handling SSH connections overall.
El módulo netmiko apunta específicamente a dispositivos de red mientras que paramiko es una herramienta más amplia para manejar conexiones SSH en general.
Netmiko which we have used above alongside paramiko can be installed using `pip install netmiko`
Netmiko que hemos usado arriba junto con paramiko puede ser instalado usando `pip install netmiko`.
Netmiko supports many network vendors and devices, you can find a list of supported devices on the [GitHub Page](https://github.com/ktbyers/netmiko#supports)
Netmiko soporta muchos proveedores y dispositivos de red, puedes encontrar una lista de dispositivos soportados en la [Página GitHub](https://github.com/ktbyers/netmiko#supports)
### Other modules
### Otros módulos
It is also worth mentioning a few other modules that we have not had the chance to look at but they give a lot more functionality when it comes to network automation.
También vale la pena mencionar algunos otros módulos que no hemos tenido la oportunidad de ver pero que dan mucha más funcionalidad cuando se trata de automatización de redes.
`netaddr` is used for working with and manipulating IP addresses, again the installation is simple with `pip install netaddr`
`netaddr` se utiliza para trabajar y manipular direcciones IP, de nuevo la instalación es sencilla con `pip install netaddr`
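As a small illustration of the sort of IP maths netaddr gives you (the addresses are just examples):

```
from netaddr import IPNetwork, IPAddress

network = IPNetwork("10.10.88.0/24")
print(network.network, network.broadcast, network.netmask, network.size)
print(IPAddress("10.10.88.114") in network)  # True - this address sits inside the range
```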
you might find yourself wanting to store a lot of your switch configuration in an excel spreadsheet, the `xlrd` will allow your scripts to read the excel workbook and convert rows and columns into a matrix. `pip install xlrd` to get the module installed.
Puede que quieras almacenar gran parte de la configuración de tu switch en una hoja de cálculo excel, `xlrd` permitirá a tus scripts leer el libro de excel y convertir las filas y columnas en una matriz. `pip install xlrd` para instalar el módulo.
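A short sketch of that with xlrd follows; the workbook name and layout are assumptions, and recent xlrd releases only read the older `.xls` format.

```
# Read a workbook of switch data and turn rows/columns into a simple matrix.
import xlrd

workbook = xlrd.open_workbook("switch_config.xls")
sheet = workbook.sheet_by_index(0)

matrix = [sheet.row_values(row) for row in range(sheet.nrows)]
for row in matrix:
    print(row)
```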
Some more use cases where network automation can be used that I have not had the chance to look into can be found [here](https://github.com/ktbyers/pynet/tree/master/presentations/dfwcug/examples)
Algunos otros casos de uso en los que la automatización de redes puede ser utilizada y que no he tenido la oportunidad de mirar se pueden encontrar [aquí](https://github.com/ktbyers/pynet/tree/master/presentations/dfwcug/examples)
I think this wraps up our Networking section of the #90DaysOfDevOps, Networking is one area that I have not touched for a while really and there is so much more to cover but I am hoping between my notes and the resources shared throughout it is helpful for some.
Aquí terminamos nuestra sección de Redes de los #90DaysOfDevOps. Las redes es un área muy extensa, espero que estos apuntes y recursos compartidos sean útiles para tener una base de conocimientos.
## Resources
## Recursos
- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg)
- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8)
@ -122,8 +122,8 @@ I think this wraps up our Networking section of the #90DaysOfDevOps, Networking
- [Practical Networking](http://www.practicalnetworking.net/)
- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126)
Most of the examples I am using here as I am not a Network Engineer have come from this extensive book which is not free but I am using some of the scenarios to help understand Network Automation.
La mayoría de los ejemplos que utilizo aquí, ya que no soy ingeniero de redes, provienen de este extenso libro, que no es gratuito, pero he utilizado algunos de sus escenarios para ayudar a entender la automatización de redes.
- [Hands-On Enterprise Automation with Python (Book)](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512)
See you on [Day 28](day28.md) where will start looking into cloud computing and get a good grasp and foundational knowledge of the topic and what is available.
Nos vemos el [Día 28](day28.md), donde empezaremos a ver la computación en la nube para obtener una buena base de conocimientos sobre el tema y lo que está disponible.

View File

@ -1,95 +1,95 @@
## The Big Picture: DevOps & The Cloud
## El panorama: DevOps & The Cloud
When it comes to cloud computing and what is offered, it goes very nicely with the DevOps ethos and processes. We can think of Cloud Computing as bringing the technology and services whilst DevOps as we have mentioned many times before is about the process and process improvement.
Cuando se trata de la computación en nube y lo que se ofrece, va muy bien con la ética y los procesos DevOps. Podemos pensar que la computación en nube aporta tecnología y servicios, mientras que DevOps, como ya hemos mencionado muchas veces, trata del proceso y de la mejora del proceso.
But to start with that cloud learning journey is a steep one and making sure you know and understand all elements or the best service to choose for the right price point is confusing.
Pero para empezar, el viaje de aprendizaje de la nube es empinado y asegurarse de conocer y entender todos los elementos o el mejor servicio a elegir para el punto de precio correcto es confuso.
![](Images/Day28_Cloud1.png)
Does the public cloud require a DevOps mindset? My answer here is not, but to really take advantage of cloud computing and possibly avoid those large cloud bills that so many people have been hit with then it is important to think of Cloud Computing and DevOps together.
¿Requiere la nube pública una mentalidad DevOps? Mi respuesta aquí es que no, pero para aprovechar realmente la computación en la nube y, posiblemente, evitar esas grandes facturas con las que tanta gente se ha encontrado, es importante pensar en Cloud Computing y DevOps juntos.
If we look at what we mean by the Public Cloud at a 40,000ft view, it is about removing some responsibility to a managed service to enable you and your team to focus on more important aspects which name should be the application and the end-users. After all the Public Cloud is just someone else's computer.
Si nos fijamos en lo que entendemos por nube pública a vista de pájaro (40.000 pies), se trata de delegar parte de la responsabilidad en un servicio gestionado para que usted y su equipo puedan centrarse en los aspectos más importantes, que deberían ser la aplicación y los usuarios finales. Al fin y al cabo, la nube pública no es más que el ordenador de otra persona.
![](Images/Day28_Cloud2.png)
In this first section, I want to get into and describe a little more of what a Public Cloud is and some of the building blocks that get referred to as the Public Cloud overall.
En esta primera sección, quiero entrar y describir un poco más de lo que es una Nube Pública y algunos de los bloques de construcción que se refieren a la Nube Pública en general.
### SaaS
The first area to cover is Software as a service, this service is removing almost all of the management overhead of a service that you may have once run on-premises. Let's think about Microsoft Exchange for our email, this used to be a physical box that lived in your data centre or maybe in the cupboard under the stairs. You would need to feed and water that server. By that I mean you would need to keep it updated and you would be responsible for buying the server hardware, most likely installing the operating system, installing the applications required and then keeping that patched, if anything went wrong you would have to troubleshoot and get things back up and running.
La primera área a tratar es el software como servicio, que elimina casi toda la sobrecarga de gestión de un servicio que antes se ejecutaba in situ. Pensemos en Microsoft Exchange para nuestro correo electrónico: antes era una caja física que vivía en el centro de datos o quizá en el armario de debajo de las escaleras. Había que alimentar y regar ese servidor. Con esto quiero decir que tendrías que mantenerlo actualizado y que serías responsable de comprar el hardware del servidor, probablemente instalar el sistema operativo, instalar las aplicaciones necesarias y luego mantenerlo parcheado, si algo fuera mal tendrías que solucionar los problemas y hacer que las cosas volvieran a funcionar.
Oh, and you would also have to make sure you were backing up your data, although this doesn't change with SaaS for the most part either.
Ah, y también habría que asegurarse de hacer copias de seguridad de los datos, aunque esto tampoco cambia con SaaS en su mayor parte.
What SaaS does and in particular Microsoft 365, because I mentioned Exchange is removing that administration overhead and they provide a service that delivers your exchange functionality by way of mail but also much other productivity (Office 365) and storage options (OneDrive) that overall gives a great experience to the end-user.
Lo que hace SaaS y, en particular, Microsoft 365, ya que he mencionado Exchange, es eliminar esa sobrecarga de administración y ofrecer un servicio que proporciona la funcionalidad de Exchange a través del correo, pero también muchas otras opciones de productividad (Office 365) y almacenamiento (OneDrive) que, en general, ofrecen una gran experiencia al usuario final.
Other SaaS applications are widely adopted, such as Salesforce, SAP, Oracle, Google, and Apple. All removing that burden of having to manage more of the stack.
Otras aplicaciones SaaS son ampliamente adoptadas, como Salesforce, SAP, Oracle, Google y Apple. Todas ellas eliminan esa carga de tener que gestionar más de la pila.
I am sure there is a story with DevOps and SaaS-based applications but I am struggling to find out what they may be. I know Azure DevOps has some great integrations with Microsoft 365 that I might have a look into and report back to.
Estoy seguro de que hay una historia con DevOps y aplicaciones basadas en SaaS, pero estoy luchando para averiguar lo que pueden ser. Sé que Azure DevOps tiene algunas grandes integraciones con Microsoft 365 que podría echar un vistazo e informar.
![](Images/Day28_Cloud3.png)
### Public Cloud
### Cloud público
Next up we have the public cloud, most people would think of this in a few different ways, some would see this as the hyper scalers only such as Microsoft Azure, Google Cloud Platform and AWS.
A continuación tenemos la nube pública; la mayoría de la gente puede pensar en esto de varias maneras: algunos lo verían únicamente como los hiperescaladores, como Microsoft Azure, Google Cloud Platform y AWS.
![](Images/Day28_Cloud4.png)
Some will also see the public cloud as a much wider offering that includes those hyper scalers but also the thousands of MSPs all over the world as well. For this post, we are going to consider Public Cloud including hyper scalers and MSPs, although later on, we will specifically dive into one or more of the hyper scalers to get that foundational knowledge.
Algunos también verán la nube pública como una oferta mucho más amplia que incluye a los hiperescaladores, pero también a los miles de MSP de todo el mundo. Para este post, vamos a considerar la nube pública incluyendo hiperescaladores y MSPs, aunque más adelante, nos sumergiremos específicamente en uno o más de los hiperescaladores para obtener ese conocimiento fundacional.
![](Images/Day28_Cloud5.png)
_thousands more companies could land on this, I am merely picking from local, regional, telco and global brands I have worked with and am aware of._
_Podría haber miles de empresas más en esta lista, sólo estoy seleccionando las marcas locales, regionales, de telecomunicaciones y globales con las que he trabajado y que conozco._
We mentioned in the SaaS section that Cloud removed the responsibility or the burden of having to administer parts of a system. If SaaS we see a lot of the abstraction layers removed i.e the physical systems, network, storage, operating system, and even application to some degree. When it comes to the cloud there are various levels of abstraction we can remove or keep depending on your requirements.
Mencionamos en la sección SaaS que Cloud eliminaba la responsabilidad o la carga de tener que administrar partes de un sistema. Si hablamos de SaaS, vemos que se eliminan muchas de las capas de abstracción, es decir, los sistemas físicos, la red, el almacenamiento, el sistema operativo e incluso la aplicación hasta cierto punto. Cuando se trata de la nube, hay varios niveles de abstracción que podemos eliminar o mantener en función de nuestros requisitos.
We have already mentioned SaaS but there are at least two more to mention regarding the public cloud.
Ya hemos mencionado SaaS, pero hay al menos dos más que mencionar en relación con la nube pública.
Infrastructure as a service - You can think of this layer as a virtual machine but whereas on-premises you will be having to look after the physical layer in the cloud this is not the case, the physical is the cloud provider's responsibility and you will manage and administer the Operating System, the data and the applications you wish to run.
Infraestructura como servicio: puede pensar en esta capa como una máquina virtual, pero mientras que en las instalaciones tendrá que ocuparse de la capa física, en la nube no es así, la física es responsabilidad del proveedor de la nube y usted gestionará y administrará el sistema operativo, los datos y las aplicaciones que desee ejecutar.
Platform as a service - This continues to remove the responsibility of layers and this is really about you taking control of the data and the application but not having to worry about the underpinning hardware or operating system.
Plataforma como servicio: sigue eliminando la responsabilidad de las capas y en realidad se trata de que usted tome el control de los datos y la aplicación, pero sin tener que preocuparse por el hardware o el sistema operativo subyacentes.
There are many other aaS offerings out there but these are the two fundamentals. You might see offerings around StaaS (Storage as a service) which provide you with your storage layer but without having to worry about the hardware underneath. Or you might have heard CaaS for Containers as a service which we will get onto, later on, another aaS we will look to cover over the next 7 days is FaaS (Functions as a Service) where maybe you do not need a running system up all the time and you just want a function to be executed as and when.
Existen muchas otras ofertas de aaS, pero éstas son las dos fundamentales. Puede que haya ofertas de StaaS (almacenamiento como servicio) que le proporcionan la capa de almacenamiento sin tener que preocuparse por el hardware subyacente. O puede que hayas oído hablar de CaaS (Containers as a Service), del que hablaremos más adelante. Otro aaS que trataremos en los próximos 7 días es FaaS (Functions as a Service), en el que puede que no necesites un sistema en funcionamiento todo el tiempo y sólo quieras que una función se ejecute cuando y como quieras.
There are many ways in which the public cloud can provide abstraction layers of control that you wish to pass up and pay for.
Hay muchas maneras en que la nube pública puede proporcionar capas de abstracción de control que usted desea pasar y pagar.
![](Images/Day28_Cloud6.png)
### Private Cloud
### Cloud privado
Having your own data centre is not a thing of the past I would think that this has become a resurgence among a lot of companies that have found the OPEX model difficult to manage as well as skill sets in just using the public cloud.
Tener un centro de datos propio no es cosa del pasado; creo que está resurgiendo entre muchas empresas a las que les ha resultado difícil gestionar el modelo OPEX, así como las habilidades necesarias para usar únicamente la nube pública.
The important thing to note here is the public cloud is likely now going to be your responsibility and it is going to be on your premises.
Lo importante a tener en cuenta aquí es que esta nube probablemente va a ser ahora su responsabilidad y va a estar en sus propias instalaciones.
We have some interesting things happening in this space not only with VMware that dominated the virtualisation era and on-premises infrastructure environments. We also have the hyper scalers offering an on-premises version of their public clouds.
En este espacio están ocurriendo cosas interesantes, no sólo con VMware, que dominó la era de la virtualización y los entornos de infraestructura locales. También tenemos a los hiperescaladores que ofrecen una versión local de sus nubes públicas.
![](Images/Day28_Cloud7.png)
### Hybrid Cloud
### Cloud híbrida
To follow on from the Public and Private cloud mentions we also can span across both of these environments to provide flexibility between the two, maybe take advantage of services available in the public cloud but then also take advantage of features and functionality of being on-premises or it might be a regulation that dictates you having to store data locally.
Para continuar con las menciones a la nube pública y privada, también podemos abarcar ambos entornos para proporcionar flexibilidad entre los dos, tal vez aprovechando los servicios disponibles en la nube pública, pero también aprovechando las características y la funcionalidad de estar en las instalaciones o podría ser una regulación que dicta que tienes que almacenar los datos localmente.
![](Images/Day28_Cloud8.png)
Putting this all together we have a lot of choices for where we store and run our workloads.
Si juntamos todo esto, tenemos muchas opciones para elegir dónde almacenar y ejecutar nuestras cargas de trabajo.
![](Images/Day28_Cloud9.png)
Before we get into a specific hyper-scale, I have asked the power of Twitter where we should go?
Antes de entrar en una hiperescala específica, he preguntado al poder de Twitter ¿dónde deberíamos ir?
![](Images/Day28_Cloud10.png)
[Link to Twitter Poll](https://twitter.com/MichaelCade1/status/1486814904510259208?s=20&t=x2n6QhyOXSUs7Pq0itdIIQ)
Whichever one gets the highest percentage we will take a deeper dive into the offerings, I think the important to mention though is that services from all of these are quite similar which is why I say to start with one because I have found that in knowing the foundation of one and how to create virtual machines, set up networking etc. I have been able to go to the others and quickly ramp up in those areas.
Sea cual sea el que obtenga el porcentaje más alto, profundizaremos en sus ofertas. Aun así, es importante mencionar que los servicios de todos ellos son bastante similares, por lo que recomiendo empezar con uno: he comprobado que, conociendo la base de uno y sabiendo cómo crear máquinas virtuales, configurar la red, etc., he podido pasar a los otros y ponerme al día rápidamente en esas áreas.
Either way, I am going to share some great **FREE** resources that cover all three of the hyper scalers.
De cualquier manera, voy a compartir algunos recursos **GRATIS** que cubren los tres hiperescaladores.
I am also going to build out a scenario as I have done in the other sections where we can build something as we move through the days.
También voy a construir un escenario como lo he hecho en las otras secciones donde podemos construir algo a medida que avanzamos a través de los días.
## Resources
## Recursos
- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)
See you on [Day 29](day29.md)
Nos vemos en el [Día 29](day29.md).

View File

@ -1,131 +1,131 @@
## Microsoft Azure Fundamentals
## Fundamentos de Microsoft Azure
Before we get going, the winner of the Twitter poll was Microsoft Azure, hence the title of the page. It was close and also quite interesting to see the results come in over the 24 hours.
Antes de empezar, el ganador de la encuesta de Twitter fue Microsoft Azure, de ahí el título de la página. Ha estado reñido y también ha sido muy interesante ver los resultados a lo largo de las 24 horas.
![](Images/Day29_Cloud1.png)
I would say in terms of covering this topic is going to give me a better understanding and update around the services available on Microsoft Azure, I lean towards Amazon AWS when it comes to my day today. I have however left resources I had lined up for all three of the major cloud providers.
Yo diría que cubrir este tema me va a dar una mejor comprensión y una puesta al día de los servicios disponibles en Microsoft Azure, ya que en mi día a día me inclino hacia Amazon AWS. Sin embargo, he dejado los recursos que tenía preparados para los tres principales proveedores de nube.
I do appreciate that there are more and the poll only included these 3 and in particular, there were some comments about Oracle Cloud. I would love to hear more about other cloud providers being used out in the wild.
Me doy cuenta de que hay más y la encuesta sólo incluía estos 3 y, en particular, hubo algunos comentarios sobre Oracle Cloud. Me encantaría saber más acerca de otros proveedores de nube que se utilizan, podéis dejar comentarios.
### The Basics
### Lo básico
- Provides public cloud services
- Geographically distributed (60+ Regions worldwide)
- Accessed via the internet and/or private connections
- Multi-tenant model
- Consumption-based billing - (Pay as you go | Pay as you grow)
- A large number of service types and offerings for different requirements.
- Proporciona servicios de nube pública
- Distribuidos geográficamente (más de 60 regiones en todo el mundo)
- Acceso a través de Internet y/o conexiones privadas
- Modelo multiinquilino
- Facturación basada en el consumo - Pay as you go (Pague a medida que avanza) | Pay as you grow (Pague a medida que crece)
- Un gran número de tipos de servicio y ofertas para diferentes requisitos.
- [Microsoft Azure Global Infrastructure](https://infrastructuremap.microsoft.com/explore)
- [Microsoft Azure Global Infrastructure](https://infrastructuremap.microsoft.com/explore)
As much as we spoke about SaaS and Hybrid Cloud we are not planning on covering those topics here.
Aunque ya hemos hablado de SaaS y de la nube híbrida, no vamos a tratar esos temas aquí.
The best way to get started and follow along is by clicking the link, which will enable you to spin up a [Microsoft Azure Free Account](https://azure.microsoft.com/en-gb/free/)
La mejor manera de empezar es haciendo clic en el siguiente enlace que permite crear una [Cuenta gratuita de Microsoft Azure](https://azure.microsoft.com/en-gb/free/)
### Regions
### Regiones
I linked the interactive map above, but we can see the image below the breadth of regions being offered in the Microsoft Azure platform worldwide.
He enlazado el mapa interactivo más arriba, pero podemos ver en la imagen de abajo la amplitud de regiones que se ofrecen en la plataforma Microsoft Azure en todo el mundo.
![](Images/Day29_Cloud2.png)
_image taken from [Microsoft Docs - 01/05/2021](https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network)_
_imagen tomada de [Microsoft Docs - 01/05/2021](https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network)_
You will also see several "sovereign" clouds meaning they are not linked or able to speak to the other regions, for example, these would be associated with governments such as the `AzureUSGovernment` also `AzureChinaCloud` and others.
También verás varias nubes "soberanas", lo que significa que no están vinculadas o no pueden hablar con las otras regiones, por ejemplo, éstas estarían asociadas con gobiernos como `AzureUSGovernment`, `AzureChinaCloud` y otras.
When we are deploying our services within Microsoft Azure we will choose a region for almost everything. However, it is important to note that not every service is available in every region. You can see [Products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=all) at the time of my writing this that in West Central US we cannot use Azure Databricks.
Cuando despleguemos nuestros servicios en Microsoft Azure, elegiremos una región para casi todo. Sin embargo, es importante tener en cuenta que no todos los servicios están disponibles en todas las regiones. Puedes consultar los [Productos disponibles por región](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=all); en el momento de escribir esto, en West Central US no podemos usar Azure Databricks.
I also mentioned "almost everything" above, there are certain services that are linked to the region such as Azure Bot Services, Bing Speech, Azure Virtual Desktop, Static Web Apps, and some more.
También se mencionó arriba que hay ciertos servicios que están ligados a la región como Azure Bot Services, Bing Speech, Azure Virtual Desktop, Static Web Apps, y algunos más.
Behind the scenes, a region may be made up of more than one data centre. These will be referred to as Availability Zones.
Entre bastidores, una región puede estar formada por más de un centro de datos. Estos se denominarán Zonas de Disponibilidad.
In the below image you will see and again this is taken from the Microsoft official documentation it describes what a region is and how it is made up of Availability Zones. However not all regions have multiple Availability Zones.
En la siguiente imagen, extraída de la documentación oficial de Microsoft, se describe qué es una región y cómo se compone de zonas de disponibilidad. Sin embargo no todas las regiones tienen múltiples Zonas de Disponibilidad.
![](Images/Day29_Cloud3.png)
The Microsoft Documentation is very good, and you can read up more on [Regions and Availability Zones](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview) here.
La documentación de Microsoft es muy buena, y puedes obtener mucha más información sobre [Regiones y zonas de disponibilidad](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview).
### Subscriptions
### Suscripciones
Remember we mentioned that Microsoft Azure is a consumption model cloud you will find that all major cloud providers follow this model.
Recuerda que mencionamos que Microsoft Azure es una nube con modelo de consumo; encontrarás que todos los principales proveedores de nube siguen este modelo.
If you are an Enterprise then you might want or have an Enterprise Agreement set up with Microsoft to enable your company to consume these Azure Services.
Si eres una empresa (Enterprise), es posible que quieras o ya tengas un Enterprise Agreement con Microsoft para permitir que tu compañía consuma estos servicios de Azure.
If you are like me and you are using Microsoft Azure for education then we have a few other options.
Si usted es como yo y está utilizando Microsoft Azure para la educación, entonces tenemos algunas otras opciones.
We have the [Microsoft Azure Free Account](https://azure.microsoft.com/en-gb/free/) which generally gives you several free cloud credits to spend in Azure over some time.
Tenemos la [Cuenta gratuita de Microsoft Azure](https://azure.microsoft.com/en-gb/free/) que generalmente te da varios créditos de nube gratuitos para gastar en Azure durante algún tiempo.
There is also the ability to use a Visual Studio subscription which gives you maybe some free credits each month alongside your annual subscription to Visual Studio, this was commonly known as the MSDN years ago. [Visual Studio](https://azure.microsoft.com/en-us/pricing/member-offers/credit-for-visual-studio-subscribers/)
También existe la posibilidad de utilizar una suscripción a Visual Studio que te da algunos créditos gratuitos cada mes junto con tu suscripción anual a Visual Studio, esto era comúnmente conocido como MSDN hace años. [Visual Studio](https://azure.microsoft.com/en-us/pricing/member-offers/credit-for-visual-studio-subscribers/)
Then finally there is the hand over a credit card and have a pay as you go, model. [Pay-as-you-go](https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go/)
Por último, está el modelo de pago por uso con tarjeta de crédito. [Pago por uso](https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go/)
A subscription can be seen as a boundary between different subscriptions potentially cost centres but completely different environments. A subscription is where the resources are created.
Una suscripción puede verse como un límite entre diferentes suscripciones: potencialmente centros de coste, pero entornos completamente diferentes. Una suscripción es donde se crean los recursos.
### Management Groups
### Grupos de gestión
Management groups give us the ability to segregate control across our Azure Active Directory (AD) or our tenant environment. Management groups allow us to control policies, Role Based Access Control (RBAC), and budgets.
Los grupos de gestión nos dan la capacidad de segregar el control a través de nuestro Azure Active Directory (AD) o nuestro entorno de inquilinos. Los grupos de gestión nos permiten controlar las políticas, el control de acceso basado en roles (RBAC) y los presupuestos.
Subscriptions belong to these management groups so you could have many subscriptions in your Azure AD Tenant, these subscriptions then can also control policies, RBAC, and budgets.
Las suscripciones pertenecen a estos grupos de gestión, por lo que podrías tener muchas suscripciones en tu Azure AD Tenant; estas suscripciones, a su vez, también pueden controlar políticas, RBAC y presupuestos.
### Resource Manager and Resource Groups
### Administrador de recursos y grupos de recursos
#### Azure Resource Manager
#### Gestor de Recursos Azure
- JSON based API that is built on resource providers.
- Resources belong to a resource group and share a common life cycle.
- Parallelism
- JSON-Based deployments are declarative, idempotent and understand dependencies between resources to govern creation and order.
- API basada en JSON que se basa en proveedores de recursos.
- Los recursos pertenecen a un grupo de recursos y comparten un ciclo de vida común.
- Paralelismo
- Los despliegues basados en JSON son declarativos, idempotentes y comprenden las dependencias entre recursos para gobernar la creación y el orden.
#### Resource Groups
#### Grupos de recursos
- Every Azure Resource Manager resource exists in one and only one resource group!
- Resource groups are created in a region that can contain resources from outside the region.
- Resources can be moved between resource groups
- Resource groups are not walled off from other resource groups, there can be communication between resource groups.
- Resource Groups can also control policies, RBAC, and budgets.
- Cada recurso de Azure Resource Manager existe en uno y sólo un grupo de recursos.
- Los grupos de recursos se crean en una región que puede contener recursos de fuera de la región.
- Los recursos pueden moverse entre grupos de recursos
- Los grupos de recursos no están aislados de otros grupos de recursos, puede haber comunicación entre grupos de recursos.
- Los grupos de recursos también pueden controlar políticas, RBAC y presupuestos.
### Hands-On
### Manos a la obra
Let's go and get connected and make sure we have a **Subscription** available to us. We can check our simple out of the box **Management Group**, We can then go and create a new dedicated **Resource Group** in our preferred **Region**.
Vamos a conectarnos y a asegurarnos de que tenemos una **Suscripción** disponible. Podemos comprobar nuestro **Grupo de Gestión** básico que viene de serie, y luego crear un nuevo **Grupo de Recursos** dedicado en nuestra **Región** preferida.
When we first login to our [Azure portal](https://portal.azure.com/#home) you will see at the top the ability to search for resources, services and docs.
La primera vez que iniciemos sesión en nuestro [portal Azure](https://portal.azure.com/#home) veremos en la parte superior la posibilidad de buscar recursos, servicios y documentos.
![](Images/Day29_Cloud4.png)
We are going to first look at our subscription, you will see here that I am using a Visual Studio Professional subscription which gives me some free credit each month.
Vamos a ver primero nuestra suscripción, verás aquí que estoy usando una suscripción Visual Studio Professional que me da algo de crédito gratis cada mes.
![](Images/Day29_Cloud5.png)
If we go into that you will get a wider view and a look into what is happening or what can be done with the subscription, we can see billing information with control functions on the left where you can define IAM Access Control and further down there are more resources available.
Si entramos en ella obtendremos una visión más amplia a lo que está sucediendo y a lo que se puede hacer con la suscripción, podemos ver información de facturación con funciones de control a la izquierda donde se puede definir el Control de Acceso IAM y más abajo hay más recursos disponibles.
![](Images/Day29_Cloud6.png)
There might be a scenario where you have multiple subscriptions and you want to manage them all under one, this is where management groups can be used to segregate responsibility groups. In mine below, you can see there is just my tenant root group with my subscription.
Podría haber un escenario en el que tengas varias suscripciones y desees gestionarlas todas bajo una misma cuenta; aquí es donde los grupos de gestión pueden utilizarse para segregar grupos de responsabilidad. En el mío, abajo, puedes ver que sólo está mi grupo raíz de inquilino (tenant root group) con mi suscripción.
You will also see in the previous image that the parent management group is the same id used on the tenant root group.
También verás en la imagen anterior que el grupo de gestión padre es el mismo ID utilizado en el grupo raíz del inquilino.
![](Images/Day29_Cloud7.png)
Next up we have Resource groups, this is where we combine our resources and we can easily manage them in one place. I have a few created for various other projects.
A continuación tenemos los grupos de recursos, aquí es donde combinamos nuestros recursos y podemos gestionarlos fácilmente en un solo lugar. Hay algunos creados para otros proyectos.
![](Images/Day29_Cloud8.png)
With what we are going to be doing over the next few days, we want to create our resource group. This is easily done in this console by hitting the create option on the previous image.
En los próximos días vamos a crear un grupo de recursos. Esto se hace fácilmente en esta consola pulsando la opción crear de la imagen anterior.
![](Images/Day29_Cloud9.png)
A validation step takes place and then you have the chance to review your creation and then create. You will also see down the bottom "Download a template for automation" this allows us to grab the JSON format so that we can perform this simple in an automated fashion later on if we wanted, we will cover this later on as well.
Se produce un paso de validación y luego tienes la oportunidad de revisar tu creación antes de crearla. También verás abajo "Descargar una plantilla para automatización"; esto nos permite obtener la plantilla en formato JSON para poder realizar este sencillo paso de forma automatizada si quisiéramos, algo que cubriremos más adelante.
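Beyond downloading the JSON template, the same step can also be automated programmatically. Below is a hedged Python sketch using the Azure SDK (azure-identity and azure-mgmt-resource); the subscription ID is a placeholder, the group name mirrors this walkthrough, and the region is just an example.

```
# Create the "90DaysOfDevOps" resource group with the Azure SDK for Python.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

resource_group = client.resource_groups.create_or_update(
    "90DaysOfDevOps",
    {"location": "uksouth"},  # example region
)
print(resource_group.name, resource_group.location)
```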
![](Images/Day29_Cloud10.png)
Hit create, then in our list of resource groups, we now have our "90DaysOfDevOps" group ready for what we do in the next session.
Pulsamos crear. Ahora en nuestra lista de grupos de recursos tenemos nuestro grupo "90DaysOfDevOps" listo para lo que hagamos en las siguientes sesiones.
![](Images/Day29_Cloud11.png)
## Resources
## Recursos
- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)
See you on [Day 30](day30.md)
Nos vemos en el [Día 30](day30.md)

View File

@ -100,7 +100,7 @@ Empecemos con lo que vas a poder ver en estos 90 días.
- [✔️] 🌐 24 > [Automatización de la red](Days/day24.md)
- [✔️] 🌐 25 > [Python para la automatización de la red](Days/day25.md)
- [✔️] 🌐 26 > [Construir nuestro Lab](Days/day26.md)
- [✔️] 🌐 27 > [Ponerse a trabajar con Python y la red](Days/day27.md)
- [✔️] 🌐 27 > [Manos a la obra con Python y Redes](Days/day27.md)
### Quédate con solo un Cloud Provider

BIN
2023.jpg

Binary file not shown.

Before

Width:  |  Height:  |  Size: 6.0 KiB

76
2023.md
View File

@ -16,14 +16,14 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
## List of Topics
| Topic | Author | Date | Twitter Handle |
| Topic | Author | Date | Twitter Handle |
| -------------------------------------- | ----------------------------------- | ------------------- | ----------------------------------------------------------------------------------------------- |
| DevSecOps | Michael Cade | 1st Jan - 6th Jan | [@MichaelCade1](https://twitter.com/MichaelCade1) |
| Secure Coding | Prateek Jain | 7th Jan - 13th Jan | [@PrateekJainDev](https://twitter.com/PrateekJainDev) |
| Secure Coding | Prateek Jain | 7th Jan - 13th Jan | [@PrateekJainDev](https://twitter.com/PrateekJainDev) |
| Continuous Build, Integration, Testing | Anton Sankov and Svetlomir Balevski | 14th Jan - 20th Jan | [@a_sankov](https://twitter.com/a_sankov) |
| Continuous Delivery & Deployment | Anton Sankov | 21st Jan - 27th Jan | [@a_sankov](https://twitter.com/a_sankov) |
| Runtime Defence & Monitoring | Ben Hirschberg | 28th Jan - 3rd Feb | [@slashben81](https://twitter.com/slashben81) |
| Secrets Management | Bryan Krausen | 4th Feb - 10th Feb | [@btkrausen](https://twitter.com/btkrausen) |
| Runtime Defence & Monitoring | Ben Hirschberg | 28th Jan - 3rd Feb | [@slashben81](https://twitter.com/slashben81) |
| Secrets Management | Bryan Krausen | 4th Feb - 10th Feb | [@btkrausen](https://twitter.com/btkrausen) |
| Python | Rishab Kumar | 11th Feb - 17th Feb | [@rishabk7](https://twitter.com/rishabk7) |
| AWS | Chris Williams | 18th Feb - 24th Feb | [@mistwire](https://twitter.com/mistwire) |
| OpenShift | Dean Lewis | 25th Feb - 3rd Mar | [@saintdle](https://twitter.com/saintdle) |
@ -34,55 +34,55 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
## Progress
- [] ♾️ 1 > [2022 Reflection & Welcome 2023](2023/day01.md)
- [✔️] ♾️ 1 > [2022 Reflection & Welcome 2023](2023/day01.md)
### DevSecOps
- [] ♾️ 2 > [The Big Picture: DevSecOps](2023/day02.md)
- [] ♾️ 3 > [Think like an Attacker](2023/day03.md)
- [] ♾️ 4 > [Red Team vs. Blue Team](2023/day04.md)
- [] ♾️ 5 > [OpenSource Security](2023/day05.md)
- [] ♾️ 6 > [Hands-On: Building a weak app](2023/day06.md)
- [✔️] ♾️ 2 > [The Big Picture: DevSecOps](2023/day02.md)
- [✔️] ♾️ 3 > [Think like an Attacker](2023/day03.md)
- [✔️] ♾️ 4 > [Red Team vs. Blue Team](2023/day04.md)
- [✔️] ♾️ 5 > [OpenSource Security](2023/day05.md)
- [✔️] ♾️ 6 > [Hands-On: Building a weak app](2023/day06.md)
### Secure Coding
- [] ⌨️ 7 > [](2023/day07.md)
- [] ⌨️ 8 > [](2023/day08.md)
- [] ⌨️ 9 > [](2023/day09.md)
- [] ⌨️ 10 > [](2023/day10.md)
- [] ⌨️ 11 > [](2023/day11.md)
- [] ⌨️ 12 > [](2023/day12.md)
- [] ⌨️ 13 > [](2023/day13.md)
- [✔️] ⌨️ 7 > [Secure Coding Overview](2023/day07.md)
- [✔️] ⌨️ 8 > [SAST Overview](2023/day08.md)
- [✔️] ⌨️ 9 > [SAST Implementation with SonarCloud](2023/day09.md)
- [✔️] ⌨️ 10 > [Software Composition Analysis Overview](2023/day10.md)
- [✔️] ⌨️ 11 > [SCA Implementation with OWASP Dependency Check](2023/day11.md)
- [✔️] ⌨️ 12 > [Secure Coding Practices](2023/day12.md)
- [✔️] ⌨️ 13 > [Additional Secure Coding Practices](2023/day13.md)
### Continuous Build, Integration, Testing
- [] 🐧 14 > [](2023/day14.md)
- [] 🐧 15 > [](2023/day15.md)
- [] 🐧 16 > [](2023/day16.md)
- [] 🐧 17 > [](2023/day17.md)
- [] 🐧 18 > [](2023/day18.md)
- [] 🐧 19 > [](2023/day19.md)
- [] 🐧 20 > [](2023/day20.md)
- [✔️] 🐧 14 > [Container Image Scanning](2023/day14.md)
- [✔️] 🐧 15 > [Container Image Scanning Advanced](2023/day15.md)
- [✔️] 🐧 16 > [Fuzzing](2023/day16.md)
- [✔️] 🐧 17 > [Fuzzing Advanced](2023/day17.md)
- [✔️] 🐧 18 > [DAST](2023/day18.md)
- [✔️] 🐧 19 > [IAST](2023/day19.md)
- [✔️] 🐧 20 > [Practical Lab on IAST and DAST](2023/day20.md)
### Continuous Delivery & Deployment
- [] 🌐 21 > [](2023/day21.md)
- [] 🌐 22 > [](2023/day22.md)
- [] 🌐 23 > [](2023/day23.md)
- [] 🌐 24 > [](2023/day24.md)
- [] 🌐 25 > [](2023/day25.md)
- [] 🌐 26 > [](2023/day26.md)
- [] 🌐 27 > [](2023/day27.md)
- [✔️] 🌐 21 > [Continuous Image Repository Scan](2023/day21.md)
- [✔️] 🌐 22 > [Continuous Image Repository Scan - Container Registries](2023/day22.md)
- [✔️] 🌐 23 > [Artifacts Scan](2023/day23.md)
- [✔️] 🌐 24 > [Signing](2023/day24.md)
- [✔️] 🌐 25 > [Systems Vulnerability Scanning](2023/day25.md)
- [✔️] 🌐 26 > [Containers Vulnerability Scanning](2023/day26.md)
- [✔️] 🌐 27 > [Network Vulnerability Scan](2023/day27.md)
### Runtime Defence & Monitoring
- [] ☁️ 28 > [](2023/day28.md)
- [] ☁️ 29 > [](2023/day29.md)
- [] ☁️ 30 > [](2023/day30.md)
- [] ☁️ 31 > [](2023/day31.md)
- [] ☁️ 32 > [](2023/day32.md)
- [] ☁️ 33 > [](2023/day33.md)
- [] ☁️ 34 > [](2023/day34.md)
- [✔️] ☁️ 28 > [System monitoring and auditing](2023/day28.md)
- [] ☁️ 29 > [Application level monitoring](2023/day29.md)
- [] ☁️ 30 > [Intrusion detection and anti-malware software](2023/day30.md)
- [] ☁️ 31 > [Firewalls and network protection](2023/day31.md)
- [] ☁️ 32 > [Vulnerability and patch management](2023/day32.md)
- [] ☁️ 33 > [Application whitelisting and software trust management](2023/day33.md)
- [] ☁️ 34 > [Runtime access control](2023/day34.md)
### Secrets Management

View File

@ -0,0 +1,94 @@
## Think Like an Attacker
Yesterday we covered what is DevSecOps, in this post we are going to look at some of the characteristics of an attacker. For us to think about the attacker we must think like an attacker.
### Characteristics of an Attacker
First and foremost, all businesses and software are attack vectors to an attacker; there is no safe place, we can only make places safer and less attractive for people to attack.
![](images/day03-2.jpg)
***[image from this source](https://www.trainerize.me/articles/outrun-bear/)***
With that in mind, attackers are a constant threat!
Attackers will identify gaps in security by running attacks in a specific order to gain access, pull data and be successful in their mission.
Attackers can be lucky, but they will absolutely work on targeted attacks.
Compromises can be slow and persistent or fast to get to a breach. Not all attacks are going to be the same.
### Motivations of an Attacker
As a DevOps team, you are going to be provisioning infrastructure and software and protecting these environments, which likely span multiple clouds, virtualisation, and containerisation platforms.
We must consider the following:
- **How** would they attack us?
- **Why** would they attack us?
- **What** do we have that is valuable to an attacker?
The motivations of an attacker will also be different depending on the attacker. I mean it could just be for fun... We have probably all been there, in school and just gone a little too deep into the network looking for more information. Who has a story to tell?
But as we have seen in the media attacks are more aligned to monetary, fraud or even political attacks on businesses and organisations.
In the Kubernetes space, we have even seen attackers leveraging and using the computing power of an environment to mine cryptocurrency.
At the heart of this attack is likely going to be **DATA**.
A company's data is likely to be extremely valuable to the company, and just as valuable to an attacker if it ends up out in the wild. That is why we put so much emphasis on protecting this data, ensuring that the data is secure and encrypted.
### Attack Maps
We now have a motive and some of the characteristics of an attacker or a group of attackers, if this is a planned attack then you are going to need a plan, you are going to need to identify what services and data you are targeting.
An attack map is a visual representation of an attack on a computer network. It shows the various stages of the attack, the tools and techniques used by the attacker, and the points of entry and exit into the network. Attack maps can be used to analyse the details of past attacks, identify vulnerabilities in a network, and plan defences against future attacks. They can also be used to communicate information about an attack to non-technical stakeholders, such as executives or legal teams.
You can see from the above description that an Attack Map should be created by both sides or both teams (team-wise, this is something I am going to cover in a later post).
If you were to create an Attack Map of your home network or your business, some of the things you would want to capture would be:
- Capture a graphical representation of your app including all communication flows and technologies being used.
- A list of potential vulnerabilities and areas of attack.
- Consider confidentiality, integrity and availability for each connection/interaction within the app.
- Map the attacks/vulnerabilities
An attack map might look something like this with a key explaining what each number represents.
![](images/day03-1.png)
From this map we might consider threats such as a denial of service, or a malicious insider gaining access to the S3 bucket to prevent the application from saving data or to make it save bad data.
This map is never final; in the same way that your application continuously moves forward through feedback, this attack map also needs to be continuously tested against, which provides feedback that in turn strengthens the security posture against these attacks. You could call this "Continuous Response" in the security feedback loop.
At a bare minimum, we should be following a good, better, best model to better the security posture.
- **Good** - Identify security design constraints and controls that need to be built into the software to reduce an attack.
- **Better** - Prioritise and build security in for issues found later in the software cycle.
- **Best** - Build automation into scripted deployment to detect issues, covering unit testing, security testing and black box testing.
Security is a design constraint - albeit an inconvenient one.
## Resources
- [devsecops.org](https://www.devsecops.org/)
- [TechWorld with Nana - What is DevSecOps? DevSecOps explained in 8 Mins](https://www.youtube.com/watch?v=nrhxNNH5lt0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=1&t=19s)
- [What is DevSecOps?](https://www.youtube.com/watch?v=J73MELGF6u0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=2&t=1s)
- [freeCodeCamp.org - Web App Vulnerabilities - DevSecOps Course for Beginners](https://www.youtube.com/watch?v=F5KJVuii0Yw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=3&t=67s)
- [The Importance of DevSecOps and 5 Steps to Doing it Properly (DevSecOps EXPLAINED)](https://www.youtube.com/watch?v=KaoPQLyWq_g&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=4&t=13s)
- [Continuous Delivery - What is DevSecOps?](https://www.youtube.com/watch?v=NdvMUcWNlFw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=5&t=6s)
- [Cloud Advocate - What is DevSecOps?](https://www.youtube.com/watch?v=a2y4Oj5wrZg&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=6)
- [Cloud Advocate - DevSecOps Pipeline CI Process - Real world example!](https://www.youtube.com/watch?v=ipe08lFQZU8&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=7&t=204s)
See you on [Day 4](day04.md)

View File

@ -0,0 +1,82 @@
## <span style="color:red">Red Team</span> vs. <span style="color:blue">Blue Team</span>
Something I mentioned in the last session, was referring to <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> teams. In the security space <span style="color:red">**Red**</span> teams and <span style="color:blue">**Blue**</span> teams work as attackers and defenders to improve an organisation's security.
Both teams work toward improving an organisation's security posture but in different ways.
The <span style="color:red">**Red**</span> team has the role of the attacker by trying to find vulnerabilities in code or infrastructure and attempting to break through cybersecurity defences.
The <span style="color:blue">**Blue**</span> team defends against those attacks and responds to incidents when they occur.
![](images/day04-2.jpg)
***[image from this source](https://hackernoon.com/introducing-the-infosec-colour-wheel-blending-developers-with-red-and-blue-security-teams-6437c1a07700)***
### The Benefits
A very good way to understand and better a company's security posture is to run these exercises between the <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> teams. The whole idea is that this scenario is there to mimic a real attack. Some of the areas that this approach will help are the following:
- Vulnerabilities
- Hardening network security
- Gaining experience in detecting and isolating attacks
- Build detailed response plans
- Raise overall company security awareness
### <span style="color:red">Red Team</span>
NIST (the National Institute of Standards and Technology) describes the <span style="color:red">**Red**</span> Team as:
“a group of people authorized and organized to emulate a potential adversarys attack or exploitation capabilities against an enterprises security posture.”
They are playing the bad actor in the scenario or simulation of the attack.
When we speak about both <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> teams, the topic is possibly wider than the DevSecOps process and principles of a software lifecycle, but knowing this is not going to hurt, and practices from DevSecOps will ensure overall that you have a better security posture.
The <span style="color:red">**Red**</span> team, is tasked with thinking like the attacker which we covered in the last session. Think about social engineering and including the wider teams within the business to manipulate and gain access to the network and services.
A key fundamental of the <span style="color:red">**Red**</span> team is understanding software development. By understanding and knowing how applications are built, you are going to be able to identify possible weaknesses, then write your programs to try and gain access and exploit them. On top of this, you may have heard the term "penetration testing" or "pen testing"; the overall aim for the <span style="color:red">**Red**</span> team is to identify and try to exploit known vulnerabilities within an environment. With the rise of Open Source software, this is another area that I want to cover in a few sessions' time.
### <span style="color:blue">Blue Team</span>
NIST (the National Institute of Standards and Technology) describes the <span style="color:blue">**Blue**</span> Team as:
“the group responsible for defending an enterprises use of information systems by maintaining its security posture against a group of mock attackers.”
The <span style="color:blue">**Blue**</span> team is playing defence; they are going to analyse the security posture currently in the business and then take action on improving it to stop those external attacks. In the <span style="color:blue">**Blue**</span> team you are also going to be focused on continuous monitoring (something we covered at the end of 2022 regarding DevOps), monitoring for breaches and responding to them when they occur.
As part of the <span style="color:blue">**Blue**</span> team you are going to have to understand the assets you are protecting and how best to protect them. In the IT landscape today we have lots of diverse options to run our workloads, applications and data.
- Assessing Risk - running risk assessments is going to give you a good understanding of what the most critical assets are within the business.
- Threat Intelligence - What threats are out there? There are thousands of vulnerabilities out there, possibly without a resolution; how can you mitigate the risk to those services without damaging the use case and the business need?
### Cybersecurity colour wheel
As Cybersecurity grows in importance with all the big brands getting hit there is a need for more than just the <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> teams when it comes to security within a business.
![](images\day04-1.png)
***[image from this source](https://hackernoon.com/introducing-the-infosec-colour-wheel-blending-developers-with-red-and-blue-security-teams-6437c1a07700)***
- The <span style="color:yellow">**Yellow Team**</span> are our builders, the engineers and developers who develop the security systems and applications.
"We have our <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> Teams just as we always have, but now with the introduction of a <span style="color:yellow">**Yellow**</span> Team, we can have secondary coloured teams (Orange, Green and Purple) dedicated to mixing skills between attackers, defenders and codersmaking code more secure and the organisation more secure."
The above abstract was taken from the top resource listed at the end of the post.
<span style="color:red">**Red**</span>, <span style="color:blue">**Blue**</span>, <span style="color:yellow">**Yellow**</span> are primary colours, combine them and we start to understand where the other colours or secondary colours come into play, again really great explanation in that first link.
- <span style="color:purple">**Purple Team**</span> - The special team! If the you take <span style="color:blue">**Blue**</span> and <span style="color:red">**Red**</span> you get <span style="color:purple">**Purple**</span>. If you integrate defence with offence and you collaborate and share knowledge between the teams you overall provide a better posture throughout.
- <span style="color:green">**Green Team**</span> - Feedback loop, the <span style="color:green">**Green**</span> team are going to take insights from the <span style="color:blue">**Blue**</span> team and work closely with the <span style="color:yellow">**Yellow**</span> team to be more effcient. Mix <span style="color:blue">**Blue**</span> and <span style="color:green">**Green**</span> and what do you <span style="color:purple">**get**</span>?
- <span style="color:orange">**Orange Team**</span> - Much like the <span style="color:green">**Green**</span> team working with the <span style="color:blue">**Blue**</span> team for feedback, the <span style="color:orange">**Orange**</span> team works with the <span style="color:red">**Red**</span> team and pass on what they have learnt to the <span style="color:yellow">**Yellow**</span> team to build better security into their code.
When I got into researching this I realised that maybe I was moving away from the DevOps topics, but to anyone in the DevSecOps space: is this useful? Is it correct? Do you have anything to add?
Obviously, throughout we plan to dive into more specifics around DevSecOps and the different stages, so I was mindful not to cover areas that will be covered in future sessions.
Also please add any additional resources.
## Resources
- [Introducing the InfoSec colour wheelblending developers with red and blue security teams.](https://hackernoon.com/introducing-the-infosec-colour-wheel-blending-developers-with-red-and-blue-security-teams-6437c1a07700)

View File

@ -0,0 +1,55 @@
## Open Source Security
Open-source software has become widely used over the past few years due to its collaborative and community/public nature.
The term Open Source refers to software in the public domain that people can freely use, modify, and share.
The main reason for this surge of adoption and interest in Open Source is the speed at which it lets teams augment proprietary code developed in-house, and this in turn can accelerate time to market. In other words, leveraging OSS can speed up application development and help get your commercial product to market faster.
### What is Open-Source Security?
Open-source security refers to the practice of ensuring the safety and security of computer systems and networks that use open-source software. As we said above, open-source software is software that is freely available to use, modify, and distribute, and it is typically developed by a community of volunteers. However, there is also huge uptake from big software vendors that contribute back to open source; you only need to look at the Kubernetes repository to see which vendors are heavily invested there.
Because open-source software is freely available, it can be widely used and studied, which can help to improve its security. However, it is important to ensure that open-source software is used responsibly and that any vulnerabilities are addressed in a timely manner to maintain its security.
### Understanding OSS supply chain security
I would normally condense my findings from a longer-form video into a paragraph here, but as this one is only 10 minutes I thought it made sense to link the resource instead: [Understanding Open-Source Supply Chain Security](https://www.youtube.com/watch?v=pARGj6j0-ZY)
Be it a commercial product leveraging OSS or an OSS project using packages or other OSS code we must have an awareness from top to bottom and provide better visibility between projects.
### 3 As of OSS Security
Another resource I found useful here from IBM, will be linked below in the resources section.
- **Assess** - Look at the project health: how active is the repository, and how responsive are the maintainers? If these show bad signs, then you are not going to be happy about the security of the project.
At this stage, we can also check the security model, code reviews, data validations, and test coverage for security. How does the project handle CVEs?
What dependencies does this project have? Explore the health of these in turn as you need to be sure the whole stack is good.
- **Adopt** - If you are going to take this on within your software or as a standalone app within your own stack, who is going to manage and maintain it? Set some policies on who internally will oversee the project and support the community.
- **Act** - Security is the responsibility of everyone, not just the maintainers, as a user you should also act and assist with the project.
### Log4j Vulnerability
In late 2021 and into early 2022 we had a vulnerability that massively hit the headlines (the Log4j (CVE-2021-44228) RCE vulnerability, also known as Log4Shell).
Log4j is a very common library for logging within Java. The vulnerability would in turn affect millions of Java-based applications.
A malicious actor could use this vulnerability within the application to gain access to a system.
Two big things I mentioned,
- **millions** of applications will have this package being used.
- **malicious actors** could leverage this to gain access or plant malware into an environment.
The reason I am raising this is that security never stops; the growth of open-source adoption has increased this attack vector on applications, and this is why there needs to be an overall effort on security from day 0.
## Resources
- [Open Source Security Foundation](https://openssf.org/)
- [Snyk - State of open source security 2022](https://snyk.io/reports/open-source-security/)
- [IBM - The 3 A's of Open Source Security](https://www.youtube.com/watch?v=baZH6CX6Zno)
- [Log4j (CVE-2021-44228) RCE Vulnerability Explained](https://www.youtube.com/watch?v=0-abhd-CLwQ)

View File

@ -0,0 +1,244 @@
## Hands-On: Building a weak app
Nobody really sets out to build a weak or vulnerable app... do they?
No is the correct answer: nobody should or does set out to build a weak application, and nobody intends to use packages or other open-source software that brings its own vulnerabilities.
In this final introduction section into DevSecOps, I want to attempt to build and raise awareness of some of the misconfigurations and weaknesses that might fall by the wayside. Then later over the next 84 days or even sooner we are going to hear from some subject matter experts in the security space on how to prevent bad things and weak applications from being created.
### Building our first weak application
<span style="color:red">**Important Message: This exercise is to highlight bad and weaknesses in an application, Please do try this at home but beware this is bad practice**</span>
At this stage, I am not going to run through my software development environment in any detail. I would generally be using VSCode on Windows with WSL2 enabled. We might then use Vagrant to provision dedicated compute instances in VirtualBox, all of which I covered throughout the 2022 sections of #90DaysOfDevOps, mostly in the Linux section.
### Bad Coding Practices or Coding Bad Practices
It is very easy to copy and paste into GitHub!
How many people check end-to-end the packages that they include in their code?
We also must consider:
- Do we trust the user/maintainer?
- Not validating input in our code
- Hardcoding secrets vs using environment variables or secrets management (there is a short sketch of this one just after this list)
- Trusting code without validation
- Adding your secrets to public repositories (How many people have done this?)
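To make the hardcoded-secrets point concrete, here is a minimal Ruby sketch; the variable name and the `DB_PASSWORD` environment variable are purely illustrative and not part of the app we build below.

```ruby
# BAD: a credential hardcoded in source ends up in every clone of the repository
db_password = "SuperSecret123!"

# BETTER: read the secret from the environment (or a dedicated secrets manager)
# and fail loudly if it has not been provided
db_password = ENV.fetch("DB_PASSWORD") { raise "DB_PASSWORD is not set" }
```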
Now going back to the overall topic, DevSecOps, everything we are doing or striving towards is faster iterations of our application or software, but this means we can introduce defects and risks faster.
We will also likely be deploying our infrastructure with code, another risk is including bad code here that lets bad actors in via defects.
Deployments will also include application configuration management, another level of possible defects.
However! Faster iterations can and do mean faster fixes as well.
### OWASP - Open Web Application Security Project
*"[OWASP](https://owasp.org/) is a non-profit foundation that works to improve the security of software. Through community-led open-source software projects, hundreds of local chapters worldwide, tens of thousands of members, and leading educational and training conferences, the OWASP Foundation is the source for developers and technologists to secure the web."*
If we look at their most recent data set and their [top 10](https://owasp.org/www-project-top-ten/) we can see the following big ticket items for why things go bad and wrong.
1. Broken Access Control
2. Cryptographic Failures
3. Injection (2020 #1)
4. Insecure Design (New for 2021)
5. Security Misconfiguration
6. Vulnerable and Outdated Components (2020 #9)
7. Identification and authentication failures (2020 #2)
8. Software and Data integrity failures (New for 2021)
9. Security logging and monitoring failures (2020 #10)
10. Server-side request forgery (SSRF)
### Back to the App
<span style="color:red">**The warning above still stands, I will deploy this to a local VirtualBox VM IF you do decide to deploy this to a cloud instance then please firstly be careful and secondly know how to lock down your cloud provider to only your own remote IP!**</span>
OK, I think that is enough warnings; I am sure we will see some more of these red warnings over the next few weeks as we get deeper into discussing this topic.
The application that I am going to be using is from [DevSecOps.org](https://github.com/devsecops/bootcamp/blob/master/Week-2/README.md). This was one of their bootcamps from years ago, but it still allows us to show what a bad app looks like.
Having the ability to see a bad or a weak application means we can start to understand how to secure it.
Once again, I will be using VirtualBox on my local machine and I will be using the following vagrantfile (link here to intro on vagrant)
The first alarm bell is that this vagrant box was created over 2 years ago!
```
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.provider :virtualbox do |v|
v.memory = 8096
v.cpus = 4
end
end
```
If you navigate to this folder, you can use `vagrant up` to spin up your CentOS 7 machine in your environment.
![](images/day06-1.png)
Then we will need to access our machine, you can do this with `vagrant ssh`
We are then going to install MariaDB as a local database to use in our application.
`sudo yum -y install mariadb mariadb-server mariadb-devel`
Start the service with:
`sudo systemctl start mariadb.service`
We have to install some dependencies; this is also where I had to change what the Bootcamp suggested, as NodeJS was not available in the current repositories.
`sudo yum -y install links`
`sudo yum install --assumeyes epel-release`
`sudo yum install --assumeyes nodejs`
You can confirm you have node installed with `node -v` and `npm -v` (npm should be installed as a dependency)
For this app we will be using Ruby, a language we have not covered at all yet; we will not really get into much detail about it, but I will try to find some good resources and add them below.
Install with
`curl -L https://get.rvm.io | bash -s stable`
With the above command you might be asked to import GPG keys; follow those steps if prompted.
For us to use rvm we need to do the following:
`source /home/vagrant/.rvm/scripts/rvm`
and finally, install it with
`rvm install ruby-2.7`
The reason for this long-winded process is basically that the CentOS 7 box we are using is old, and only an old version of Ruby ships in its standard repositories.
Check installation and version with
`ruby --version`
We next need the Ruby on Rails framework which can be gathered using the following command.
`gem install rails`
Next, we need git and we can get this with
`sudo yum install git`
Just for the record, and I am not sure if it is required, I also had Redis installed on my machine because I was doing something else, but it might still be needed, so here are the steps.
```
sudo yum install epel-release
sudo yum install redis
```
The above could be related to Turbo Streams, but I did not have time to learn more about Ruby on Rails.
Now let's finally create our application (for the record, I went through a lot to make sure these steps worked on my system, so I am sending you all the luck).
Create the app with the following, calling it what you wish:
`rails new myapp --skip-turbolinks --skip-spring --skip-test-unit -d mysql `
Next, we will create the database and schema:
```
cd myapp
bundle exec rake db:create
bundle exec rake db:migrate
```
We can then run our app with `bundle exec rails server -b 0.0.0.0`
![](images/day06-2.png)
Then open a browser to hit that box. I had to change my VirtualBox VM networking to bridged instead of NAT so that I could navigate to it from my host rather than using `vagrant ssh`.
![](images/day06-3.png)
Now we need to **scaffold** a basic model
A scaffold is a set of automatically generated files which forms the basic structure of a Rails project.
We do this with the following commands:
```
bundle exec rails generate scaffold Bootcamp name:string description:text dates:string
bundle exec rake db:migrate
```
![](images/day06-4.png)
Add a default route to config/routes.rb
`root 'bootcamps#index'`
![](images/day06-5.png)
Now edit app/views/bootcamps/show.html.erb and make the description field a raw field. Add the below.
```
<p>
<strong>Description:</strong>
<%=raw @bootcamp.description %>
</p>
```
Now, why is this all relevant? Using `raw` on the description field means that this field becomes a potential XSS (cross-site scripting) target.
This can be explained better with a video [What is Cross-Site Scripting?](https://youtu.be/DxsmEXicXEE)
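To see why `raw` matters, here is a small illustrative sketch in plain Ruby (outside Rails) showing what ERB escaping normally does to a stored payload; the payload string is made up for the example.

```ruby
require "erb"

# Attacker-controlled input saved into the description field
description = "<script>alert('xss')</script>"

# What <%= @bootcamp.description %> would emit: HTML-escaped, rendered as harmless text
puts ERB::Util.html_escape(description)
# => &lt;script&gt;alert(&#39;xss&#39;)&lt;/script&gt;

# What <%=raw @bootcamp.description %> emits: the markup untouched,
# so the browser executes the script, giving us a stored XSS sink
puts description
```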
The rest of the Bootcamp goes on to add search functionality, which also increases the capabilities around an XSS attack, and this is another great example of a demo attack you could try out on a [vulnerable app](https://www.softwaretestinghelp.com/cross-site-scripting-xss-attack-test/).
### Create search functionality
In app/controllers/bootcamps_controller.rb, we'll add the following logic to the index method:
```
def index
  @bootcamps = Bootcamp.all
  if params[:search].to_s != ''
    @bootcamps = Bootcamp.where("name LIKE '%#{params[:search]}%'")
  else
    @bootcamps = Bootcamp.all
  end
end
```
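As an aside, and purely for contrast with the deliberately weak code above, interpolating `params[:search]` straight into the SQL string is also a classic SQL injection risk. A safer parameterised sketch might look something like this (we are intentionally not using it here, since we want a weak app):

```ruby
def index
  if params[:search].to_s != ''
    # The ? placeholder lets ActiveRecord bind the value safely instead of
    # splicing raw user input into the SQL string. Note that % and _ in the
    # input still act as LIKE wildcards, but they cannot break out of the query.
    @bootcamps = Bootcamp.where("name LIKE ?", "%#{params[:search]}%")
  else
    @bootcamps = Bootcamp.all
  end
end
```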
In app/views/bootcamps/index.html.erb, we'll add the search field:
```
<h1>Search</h1>

<%= form_tag(bootcamps_path, method: "get", id: "search-form") do %>
  <%= text_field_tag :search, params[:search], placeholder: "Search Bootcamps" %>
  <%= submit_tag "Search Bootcamps" %>
<% end %>

<h1>Listing Bootcamps</h1>
```
Massive thanks to [DevSecOps.org](https://www.devsecops.org/); this is where I found the old but great walkthrough that I tweaked slightly above, and there is so much more information to be found there.
With that much longer walkthrough than anticipated, I am going to hand over to the next sections and authors to highlight how to avoid all of this and how to make sure we are not releasing bad code or vulnerabilities out into the wild.
## Resources
- [devsecops.org](https://www.devsecops.org/)
- [TechWorld with Nana - What is DevSecOps? DevSecOps explained in 8 Mins](https://www.youtube.com/watch?v=nrhxNNH5lt0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=1&t=19s)
- [What is DevSecOps?](https://www.youtube.com/watch?v=J73MELGF6u0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=2&t=1s)
- [freeCodeCamp.org - Web App Vulnerabilities - DevSecOps Course for Beginners](https://www.youtube.com/watch?v=F5KJVuii0Yw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=3&t=67s)
- [The Importance of DevSecOps and 5 Steps to Doing it Properly (DevSecOps EXPLAINED)](https://www.youtube.com/watch?v=KaoPQLyWq_g&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=4&t=13s)
- [Continuous Delivery - What is DevSecOps?](https://www.youtube.com/watch?v=NdvMUcWNlFw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=5&t=6s)
- [Cloud Advocate - What is DevSecOps?](https://www.youtube.com/watch?v=a2y4Oj5wrZg&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=6)
- [Cloud Advocate - DevSecOps Pipeline CI Process - Real world example!](https://www.youtube.com/watch?v=ipe08lFQZU8&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=7&t=204s)
See you on [Day 7](day07.md) Where we will start a new section on Secure Coding.

View File

@ -0,0 +1,42 @@
# Day 7: Secure Coding Overview
Secure coding is the practice of writing software in a way that ensures the security of the system and the data it processes. It involves designing, coding, and testing software with security in mind to prevent vulnerabilities and protect against potential attacks.
There are several key principles of secure coding that developers should follow:
1. Input validation: It is important to validate all user input to ensure that it is in the expected format and does not contain any malicious code or unexpected characters. This can be achieved through the use of regular expressions, data type checks, and other validation techniques (a small sketch of this and the next principle follows the list).
2. Output encoding: Output data should be properly encoded to prevent any potential injection attacks. For example, HTML output should be properly escaped to prevent cross-site scripting (XSS) attacks, and SQL queries should be parameterized to prevent SQL injection attacks.
3. Access control: Access control involves restricting access to resources or data to only those users who are authorized to access them. This can include implementing authentication and authorization protocols, as well as enforcing least privilege principles to ensure that users have only the access rights they need to perform their job duties.
4. Error handling: Error handling is the process of properly handling errors and exceptions that may occur during the execution of a program. This can include logging errors, displaying appropriate messages to users, and mitigating the impact of errors on system security.
5. Cryptography: Cryptography should be used to protect sensitive data and communications, such as passwords, financial transactions, and sensitive documents. This can be achieved through the use of encryption algorithms and secure key management practices.
6. Threat Modeling: Document, locate, address, and validate are the four steps to threat modeling. To securely code, you need to examine your software for areas susceptible to increased threats of attack. Threat modeling is a multi-stage process that should be integrated into the software lifecycle from development, testing, and production.
7. Secure storage: Secure storage involves properly storing and handling sensitive data, such as passwords and personal information, to prevent unauthorized access or tampering. This can include using encryption, hashing, and other security measures to protect data at rest and in transit.
8. Secure architecture: Secure architecture is the foundation of a secure system. This includes designing systems with security in mind, using secure frameworks and libraries, and following secure design patterns.
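As a small, illustrative sketch of the first two principles in plain Ruby (the method names and rules are made up for the example, not taken from any particular framework):

```ruby
require "cgi"

# Input validation: accept only usernames made of 3-20 letters, digits or underscores
def valid_username?(name)
  name.match?(/\A[A-Za-z0-9_]{3,20}\z/)
end

# Output encoding: HTML-escape user-supplied content before it reaches the page,
# so "<script>" arrives as text rather than executable markup
def render_comment(comment)
  "<p>#{CGI.escapeHTML(comment)}</p>"
end

valid_username?("alice_90")            # => true
valid_username?("alice; DROP TABLE")   # => false
render_comment("<script>hi</script>")  # => "<p>&lt;script&gt;hi&lt;/script&gt;</p>"
```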
There are several tools and techniques that can be used to help ensure that code is secure, including Static Application Security Testing (SAST), Software Composition Analysis (SCA), and Secure Code Review.
### Static Application Security Testing (SAST)
SAST is a method of testing software code for security vulnerabilities during the development phase. It involves analyzing the source code of a program without executing it, looking for vulnerabilities such as injection attacks, cross-site scripting (XSS), and other common security issues. SAST tools can be integrated into the software development process to provide ongoing feedback and alerts about potential vulnerabilities as the code is being written.
### Software Composition Analysis (SCA)
SCA is a method of analyzing the third-party components and libraries that are used in a software application. It helps to identify any vulnerabilities or security risks that may be present in these components, and can alert developers to the need to update or replace them. SCA can be performed manually or with the use of automated tools.
### Secure Code Reviews
Secure Code Review is a process of reviewing software code with the goal of identifying and addressing potential security vulnerabilities. It is typically performed by a team of security experts who are familiar with common coding practices and security best practices. Secure Code Review can be done manually or with the use of automated tools, and may involve a combination of SAST and SCA techniques.
In summary, secure coding is a crucial practice that helps protect software and its users from security vulnerabilities and attacks. By following best practices and keeping software up to date, developers can help ensure that their software is as secure as possible.
### Resources
- [Secure Coding Best Practices | OWASP Top 10 Proactive Control](https://www.youtube.com/watch?v=8m1N2t-WANc)
- [Secure coding practices every developer should know](https://snyk.io/learn/secure-coding-practices/)
- [10 Secure Coding Practices You Can Implement Now](https://codesigningstore.com/secure-coding-practices-to-implement)
- [Secure Coding Guidelines And Best Practices For Developers](https://www.softwaretestinghelp.com/guidelines-for-secure-coding/)
In the next part [Day 8](day08.md), we will discuss Static Application Security Testing (SAST) in more detail.

View File

@ -0,0 +1,54 @@
# Day 8: SAST Overview
Static Application Security Testing (SAST) is a method of evaluating the security of an application by analyzing the source code of the application without executing the code. SAST is also known as white-box testing as it involves testing the internal structure and workings of an application.
SAST is performed early in the software development lifecycle (SDLC) as it allows developers to identify and fix vulnerabilities before the application is deployed. This helps prevent security breaches and minimizes the risk of costly security incidents.
One of the primary benefits of SAST is that it can identify vulnerabilities that may not be detected by other testing methods such as dynamic testing or manual testing. This is because SAST analyzes the entire codebase and can identify vulnerabilities that may not be detectable by other testing methods.
There are several types of vulnerabilities that SAST can identify, including:
- **Input validation vulnerabilities**: These vulnerabilities occur when an application does not adequately validate user input, allowing attackers to input malicious code or data that can compromise the security of the application.
- **Cross-site scripting (XSS) vulnerabilities**: These vulnerabilities allow attackers to inject malicious scripts into web applications, allowing them to steal sensitive information or manipulate the application for their own gain.
- **Injection vulnerabilities**: These vulnerabilities allow attackers to inject malicious code or data into the application, allowing them to gain unauthorized access to sensitive information or execute unauthorized actions.
- **Unsafe functions and libraries**: These vulnerabilities occur when an application uses unsafe functions or libraries that can be exploited by attackers.
- **Security misconfigurations**: These vulnerabilities occur when an application is not properly configured, allowing attackers to gain access to sensitive information or execute unauthorized actions.
### SAST Tools (with free tier plan)
- **[SonarCloud](https://www.sonarsource.com/products/sonarcloud/)**: SonarCloud is a cloud-based code analysis service designed to detect code quality issues in 25+ different programming languages, continuously ensuring the maintainability, reliability and security of your code.
- **[Snyk](https://snyk.io/)**: Snyk is a platform allowing you to scan, prioritize, and fix security vulnerabilities in your own code, open source dependencies, container images, and Infrastructure as Code (IaC) configurations.
- **[Semgrep](https://semgrep.dev/)**: Semgrep is a fast, open source, static analysis engine for finding bugs, detecting dependency vulnerabilities, and enforcing code standards.
## How SAST Works
SAST tools typically use a variety of techniques to analyze the source code, including pattern matching, rule-based analysis, and data flow analysis.
Pattern matching involves looking for specific patterns in the code that may indicate a vulnerability, such as the use of a known vulnerable library or the execution of user input without proper sanitization.
Rule-based analysis involves the use of a set of predefined rules to identify potential vulnerabilities, such as the use of weak cryptography or the lack of input validation.
Data flow analysis involves tracking the flow of data through the application and identifying potential vulnerabilities that may arise as a result, such as the handling of sensitive data in an insecure manner.
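To make the source-to-sink idea concrete, here is a small Ruby sketch of the kind of pattern a data flow rule would flag; the `params` hash here is just a stand-in for any untrusted input, not code from a real application.

```ruby
require "shellwords"

# Stand-in for untrusted user input (e.g. a request parameter)
params = { file: "notes.txt; id" }

# Source: the tainted value...
filename = params[:file]

# ...reaches a sink: a shell command. Data flow analysis flags this as
# potential command injection, because "; id" runs as a second command.
system("cat #{filename}")

# The finding goes away once the tainted value is escaped before the sink
# (or, better, by avoiding the shell entirely).
system("cat #{Shellwords.escape(filename)}")
```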
## Considerations while using SAST Tools
1. It is important to ensure that the tool is properly configured and that it is being used in a way that is consistent with best practices. This may include setting the tool's sensitivity level to ensure that it is properly identifying vulnerabilities, as well as configuring the tool to ignore certain types of vulnerabilities that are known to be benign.
2. SAST tools are not a replacement for manual code review. While these tools can identify many potential vulnerabilities, they may not be able to identify all of them, and it is important for developers to manually review the code to ensure that it is secure.
3. SAST is just one aspect of a comprehensive application security program. While it can be an important tool for identifying potential vulnerabilities, it is not a replacement for other security measures, such as secure coding practices, testing in the production environment, and ongoing monitoring and maintenance.
### Challenges associated with SAST
- **False positives**: Automated SAST tools can sometimes identify potential vulnerabilities that are not actually vulnerabilities. This can lead to a large number of false positives that need to be manually reviewed, increasing the time and cost of the testing process.
- **Limited coverage**: SAST can only identify vulnerabilities in the source code that is analyzed. If an application uses external libraries or APIs, these may not be covered by the SAST process.
- **Code complexity**: SAST can be more challenging for larger codebases or codebases that are written in languages that are difficult to analyze.
- **Limited testing**: SAST does not execute the code and therefore cannot identify vulnerabilities that may only occur when the code is executed.
Despite these challenges, SAST is a valuable method of evaluating the security of an application and can help organizations prevent security breaches and minimize the risk of costly security incidents. By identifying and fixing vulnerabilities early in the SDLC, organizations can build more secure applications and improve the overall security of their systems.
### Resources
- [SAST- Static Analysis with lab by Practical DevSecOps](https://www.youtube.com/watch?v=h37zp5g5tO4)
- [SAST All About Static Application Security Testing](https://www.mend.io/resources/blog/sast-static-application-security-testing/)
- [SAST Tools : 15 Top Free and Paid Tools](https://www.appsecsanta.com/sast-tools)
In the next part [Day 9](day09.md), we will discuss SonarCloud and integrate it with different CI/CD tools.

View File

@ -0,0 +1,132 @@
# Day 9: SAST Implementation with SonarCloud
SonarCloud is a cloud-based platform that provides static code analysis to help developers find and fix code quality issues in their projects. It is designed to work with a variety of programming languages and tools, including Java, C#, JavaScript, and more.
SonarCloud offers a range of features to help developers improve the quality of their code, including:
- **Static code analysis**: SonarCloud analyzes the source code of a project and checks for issues such as coding style violations, potential bugs, security vulnerabilities, and other problems. It provides developers with a detailed report of the issues it finds, along with suggestions for how to fix them.
- **Code review**: SonarCloud integrates with code review tools like GitHub pull requests, allowing developers to receive feedback on their code from their peers before it is merged into the main branch. This helps to catch issues early on in the development process, reducing the risk of bugs and other issues making it into production.
- **Continuous integration**: SonarCloud can be integrated into a continuous integration (CI) pipeline, allowing it to automatically run static code analysis on every code commit. This helps developers catch issues early and fix them quickly, improving the overall quality of their codebase.
- **Collaboration**: SonarCloud includes tools for team collaboration, such as the ability to assign issues to specific team members and track the progress of code review and issue resolution.
- **Customization**: SonarCloud allows developers to customize the rules and configurations used for static code analysis, so they can tailor the analysis to fit the specific needs and coding standards of their team.
Overall, SonarCloud is a valuable tool for developers looking to improve the quality of their code and reduce the risk of issues making it into production. It helps teams collaborate and catch problems early on in the development process, leading to faster, more efficient development and fewer bugs in the final product.
Read more about SonarCloud [here](https://docs.sonarcloud.io/)
### Integrate SonarCloud with GitHub Actions
- Sign up for a [SonarCloud](https://sonarcloud.io/) account with your GitHub Account.
- From the dashboard, click on “Import an organization from GitHub”
![](images/day09-1.png)
- Authorise and install SonarCloud app to access your GitHub account.
![](images/day09-2.png)
- Select the repository (free tier supports only public repositories) you want to analyze and click "Install"
![](images/day09-3.png)
- In SonarCloud you can now create an organisation.
![](images/day09-4.png)
![](images/day09-5.png)
- Now click on “Analyze a new Project”
![](images/day09-6.png)
- Click on setup to add the Project.
![](images/day09-7.png)
- Now on the SonarCloud dashboard you can see the project.
![](images/day09-8.png)
- To setup the GitHub Actions, click on the project, then on **Information** > **Last analysis method**
![](images/day09-9.png)
- Click on **GitHub Actions**
![](images/day09-10.png)
- This will show some steps to integrate SonarCloud with GitHub Actions. At the top you will see SONAR_TOKEN; we will add that as a GitHub secret later.
![](images/day09-11.png)
- Next thing you will see is the yaml file for the GitHub Workflow
![](images/day09-12.png)
- You will also see a configuration file that we will have to add in the source code repo
![](images/day09-13.png)
![](images/day09-14.png)
- At the bottom of page, disable the Automatic Analysis
![](images/day09-15.png)
- Now go to the source code repo and add the following configuration file `sonar-project.properties` in the root directory.
```yaml
sonar.projectKey=prateekjaindev_nodejs-todo-app-demo
sonar.organization=prateekjaindev
# This is the name and version displayed in the SonarCloud UI.
#sonar.projectName=nodejs-todo-app-demo
#sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
#sonar.sources=.
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
```
- Update or add the GitHub actions workflow with the following job in the `.github/workflows` directory
```yaml
name: SonarScan
on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  sonarcloud:
    name: SonarCloud
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0 # Shallow clones should be disabled for a better relevancy of analysis
      - name: SonarCloud Scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information, if any
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```
- Now go to GitHub and add a GitHub secret named SONAR_TOKEN.
![](images/day09-16.png)
- As soon as you commit the changes, the workflow will trigger.
![](images/day09-17.png)
- Now after every commit, you can check the updated reports on the SonarCloud dashboard.
![](images/day09-18.png)
### Quality Gates
A quality gate is an indicator that tells you whether your code meets the minimum level of quality required for your project. It consists of a set of conditions that are applied to the results of each analysis. If the analysis results meet or exceed the quality gate conditions then it shows a **Passed** status otherwise, it shows a **Failed** status.
By default, SonarCloud comes with a built-in quality gate called “Sonar way”. You can edit it or create a new one in the Organisation Settings.
![](images/day09-19.png)
### Resources
- [SonarCloud Documentation](https://docs.sonarcloud.io/)
- [How to create Quality gates on SonarQube](https://www.youtube.com/watch?v=8_Xt9vchlpY)
- [Source Code of the repo I used for SAST implementation](https://github.com/prateekjaindev/nodejs-todo-app-demo)
In the next part [Day 10](day10.md), we will discuss Software Composition Analysis (SCA).

View File

@ -0,0 +1,33 @@
# Day 10: Software Composition Analysis Overview
Software composition analysis (SCA) is a process that helps developers identify the open source libraries, frameworks, and components that are included in their software projects. SCA tools scan the codebase of a software project and provide a report that lists all the open source libraries, frameworks, and components that are being used. This report includes information about the licenses and vulnerabilities of these open source libraries and components, as well as any security risks that may be associated with them.
There are several benefits to using SCA tools in software development projects. These benefits include:
1. **Improved security**: By identifying the open source libraries and components that are being used in a project, developers can assess the security risks associated with these libraries and components. This allows them to take appropriate measures to fix any vulnerabilities and protect their software from potential attacks.
2. **Enhanced compliance**: SCA tools help developers ensure that they are using open source libraries and components that are compliant with the appropriate licenses. This is particularly important for companies that have strict compliance policies and need to ensure that they are not infringing on any third-party intellectual property rights.
3. **Improved efficiency**: SCA tools can help developers save time and effort by automating the process of identifying and tracking open source libraries and components. This allows developers to focus on more important tasks, such as building and testing their software.
4. **Reduced risk**: By using SCA tools, developers can identify and fix vulnerabilities in open source libraries and components before they become a problem. This helps to reduce the risk of security breaches and other issues that could damage the reputation of the software and the company.
5. **Enhanced quality**: By identifying and addressing any vulnerabilities in open source libraries and components, developers can improve the overall quality of their software. This leads to a better user experience and a higher level of customer satisfaction.
In addition to these benefits, SCA tools can also help developers to identify any potential legal issues that may arise from the use of open source libraries and components. For example, if a developer is using a library that is licensed under a copyleft license, they may be required to share any changes they make to the library with the community.
Despite these benefits, there are several challenges associated with SCA:
1. **Scale**: As the use of open source software has become more widespread, the number of components that need to be analyzed has grown exponentially. This can make it difficult for organizations to keep track of all the components they are using and to identify any potential issues.
2. **Complexity**: Many software applications are made up of a large number of components, some of which may have been added years ago and are no longer actively maintained. This can make it difficult to understand the full scope of an application and to identify any potential issues.
3. **False positives**: SCA tools can generate a large number of alerts, some of which may be false positives. This can be frustrating for developers who have to review and dismiss these alerts, and it can also lead to a lack of trust in the SCA tool itself.
4. **Lack of standardization**: There is no standard way to conduct SCA, and different tools and approaches can produce different results. This can make it difficult for organizations to compare the results of different SCA tools and to determine which one is best for their needs.
Overall, SCA tools provide a number of benefits to software developers and can help to improve the security, compliance, efficiency, risk management, and quality of software projects. By using these tools, developers can ensure that they are using open source libraries and components that are compliant with the appropriate licenses, free of vulnerabilities, and of high quality. This helps to protect the reputation of their software and the company, and leads to a better user experience.
### SCA Tools (Opensource or Free Tier)
- **[OWASP Dependency Check](https://owasp.org/www-project-dependency-check/)**: Dependency-Check is a Software Composition Analysis (SCA) tool that attempts to detect publicly disclosed vulnerabilities contained within a project's dependencies. It does this by determining if there is a Common Platform Enumeration (CPE) identifier for a given dependency. If found, it will generate a report linking to the associated CVE entries.
- **[Snyk](https://snyk.io/product/open-source-security-management/)**: Snyk Open Source provides a developer-first SCA solution, helping developers find, prioritize, and fix security vulnerabilities and license issues in open source dependencies.
### Resources
- [Software Composition Analysis (SCA): What You Should Know](https://www.aquasec.com/cloud-native-academy/supply-chain-security/software-composition-analysis-sca/)
- [Software Composition Analysis 101: Knowing whats inside your apps - Magno Logan](https://www.youtube.com/watch?v=qyVDHH4T1oo)
In the next part [Day 11](day11.md), we will discuss Dependency Check and integrate it with GitHub Actions.

View File

@ -0,0 +1,69 @@
# Day 11: SCA Implementation with OWASP Dependency Check
### OWASP Dependency Check
OWASP Dependency Check is an open-source tool that checks project dependencies for known vulnerabilities. It can be used to identify dependencies with known vulnerabilities and determine if any of those vulnerabilities are exposed in the application.
The tool works by scanning the dependencies of a project and checking them against a database of known vulnerabilities. If a vulnerability is found, the tool will report the vulnerability along with the associated CVE (Common Vulnerabilities and Exposures) identifier, a standardized identifier for publicly known cybersecurity vulnerabilities.
To use OWASP Dependency Check, you will need to include it as a part of your build process. There are integrations available for a variety of build tools, including Maven, Gradle, and Ant. You can also use the command-line interface to scan your dependencies.
OWASP Dependency Check is particularly useful for identifying vulnerabilities in third-party libraries and frameworks that your application depends on. These types of dependencies can introduce vulnerabilities into your application if they are not properly managed. By regularly scanning your dependencies, you can ensure that you are aware of any vulnerabilities and take steps to address them.
It is important to note that OWASP Dependency Check is not a replacement for secure coding practices and should be used in conjunction with other security measures. It is also important to regularly update dependencies to ensure that you are using the most secure version available.
### Integrate Dependency Check with GitHub Actions
To use Dependency Check with GitHub Actions, you can create a workflow file in your repository's `.github/workflows` directory. Here is an example workflow that runs Dependency Check on every push to the `main` branch:
```yaml
name: Dependency-Check
on:
  push:
    branches:
      - main
  pull_request:
    types: [opened, synchronize, reopened]
jobs:
  dependency-check:
    name: Dependency-Check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2 # check out the repository so there is code for Dependency Check to scan
      - name: Download OWASP Dependency Check
        run: |
          VERSION=$(curl -s https://jeremylong.github.io/DependencyCheck/current.txt)
          curl -sL "https://github.com/jeremylong/DependencyCheck/releases/download/v$VERSION/dependency-check-$VERSION-release.zip" --output dependency-check.zip
          unzip dependency-check.zip
      - name: Run Dependency Check
        run: |
          ./dependency-check/bin/dependency-check.sh --out report.html --scan .
          rm -rf dependency-check*
      - name: Upload Artifacts
        uses: actions/upload-artifact@v2
        with:
          name: artifacts
          path: report.html
```
This workflow does the following:
1. Defines a workflow called `Dependency-Check` that runs on every push to the `main` branch.
2. Specifies that the workflow should run on the `ubuntu-latest` runner.
3. Checks out the repository, then downloads and extracts Dependency Check.
4. Runs Dependency Check on the current directory (`.`) and generates a report in the `report.html` file.
5. Removes the downloaded Dependency Check files.
6. Uploads the report file as an artifact.
You can download the report from the Artifacts and open it in the Browser.
![](images/day11-1.png)
You can customize this workflow to fit your needs. For example, you can specify different branches to run the workflow on, or specify different dependencies to check. You can also configure Dependency Check to generate a report in a specific format (e.g., HTML, XML, JSON) and save it to the repository.
### Resources
- [Dependency Check Documentation](https://jeremylong.github.io/DependencyCheck/)
- [Source Code of the repo I used for SCA implementation](https://github.com/prateekjaindev/nodejs-todo-app-demo)
In the next part [Day 12](day12.md), we will discuss Secure Coding Review.

View File

@ -0,0 +1,33 @@
# Day 12: Secure Coding Review
Secure code review is the process of examining and evaluating the security of a software application or system by reviewing the source code for potential vulnerabilities or weaknesses. This process is an essential part of ensuring that an application is secure and can withstand attacks from cyber criminals.
There are several steps involved in a secure code review process:
1. **Identify the scope of the review**: The first step is to identify the scope of the review, including the type of application being reviewed and the specific security concerns that need to be addressed.
2. **Set up a review team**: A review team should be composed of individuals with expertise in different areas, such as security, coding, and testing. The team should also include individuals who are familiar with the application being reviewed.
3. **Prepare the code for review**: Before the review can begin, the code needs to be prepared for review by organizing it in a way that makes it easier to understand and review. This may include breaking the code down into smaller chunks or adding comments to explain the purpose of specific sections.
4. **Conduct the review**: During the review, the team will examine the code for vulnerabilities and weaknesses. This may include checking for insecure coding practices, such as hardcoded passwords or unencrypted data, or looking for vulnerabilities in the application's architecture (a short example follows after this list).
5. **Document findings**: As the team identifies potential vulnerabilities or weaknesses, they should document their findings in a report. The report should include details about the vulnerability, the potential impact, and recommendations for how to fix the issue.
6. **Remediate vulnerabilities**: Once the review is complete, the team should work with the development team to fix any vulnerabilities or weaknesses that were identified. This may involve updating the code, implementing additional security controls, or both.
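To make step 4 concrete, here is a small, hypothetical Go example of one of the issues mentioned above - a hardcoded password - together with a common remediation of reading the secret from the environment at runtime:
```go
package main

import (
	"fmt"
	"os"
)

// A reviewer (or a static analysis tool) should flag a hardcoded credential like this one:
// const dbPassword = "SuperSecret123!"

// A common remediation is to read the secret from the environment (or a secret manager) at runtime.
func dbPassword() (string, error) {
	pw := os.Getenv("DB_PASSWORD")
	if pw == "" {
		return "", fmt.Errorf("DB_PASSWORD is not set")
	}
	return pw, nil
}

func main() {
	pw, err := dbPassword()
	if err != nil {
		fmt.Println("error:", err)
		os.Exit(1)
	}
	fmt.Println("loaded a database password of length", len(pw))
}
```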
There are several tools and techniques that can be used to facilitate a secure code review. These may include:
1. **Static analysis tools**: These tools analyze the code without executing it, making them useful for identifying vulnerabilities such as buffer overflows, SQL injection, and cross-site scripting.
2. **Dynamic analysis tools**: These tools analyze the code while it is being executed, allowing the review team to identify vulnerabilities that may not be detectable through static analysis alone.
3. **Code review guidelines**: Many organizations have developed guidelines for conducting code reviews, which outline the types of vulnerabilities that should be looked for and the best practices for remediation.
4. **Peer review**: Peer review is a process in which other developers review the code, providing a second set of eyes to identify potential vulnerabilities.
Secure code review is an ongoing process that should be conducted at various stages throughout the development lifecycle. This includes reviewing code before it is deployed to production, as well as conducting periodic reviews to ensure that the application remains secure over time.
Overall, secure code review is a critical component of ensuring that an application is secure. By identifying and addressing vulnerabilities early in the development process, organizations can reduce the risk of attacks and protect their systems and data from potential threats.
I highly recommend watching this video to understand how source code analysis can lead to finding vulnerabilities in large enterprise codebases.
[![Final video of fixing issues in your code in VS Code](https://img.youtube.com/vi/fb-t3WWHsMQ/maxresdefault.jpg)](https://www.youtube.com/watch?v=fb-t3WWHsMQ)
### Resources
- [How to Analyze Code for Vulnerabilities](https://www.youtube.com/watch?v=A8CNysN-lOM&t)
- [What Is A Secure Code Review And Its Process?](https://valuementor.com/blogs/source-code-review/what-is-a-secure-code-review-and-its-process/)
In the next part [Day 13](day13.md), we will discuss Additional Secure Coding Practices with some more hands-on.


@ -0,0 +1,89 @@
# Day 13: Additional Secure Coding Practices
## Git Secret Scan
Scanning repositories for secrets refers to the process of searching through a code repository, such as on GitHub or GitLab, for sensitive information that may have been inadvertently committed and pushed to the repository. This can include sensitive data such as passwords, API keys, and private encryption keys.
The process is usually done using automated tools that scan the code for specific patterns or keywords that indicate the presence of sensitive information. The goal of this process is to identify and remove any secrets that may have been exposed in the repository, in order to protect against potential breaches or unauthorized access.
### Git Secret Scan with Gitleaks
Gitleaks is a tool that can be added to your GitHub repository as a GitHub Action, which scans your codebase for sensitive information such as credentials, tokens, and other secrets. The action runs the gitleaks tool on your codebase, which checks for any sensitive information that may have been accidentally committed to your repository.
To set up Gitleaks GitHub Action, you need to create a new workflow file in your repository's `.github/workflows/git-secret-scan.yml` directory. The workflow file should contain the following:
```yaml
name: gitleaks
on:
pull_request:
push:
jobs:
scan:
name: gitleaks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: gitleaks/gitleaks-action@v2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
This workflow does the following:
1. Defines a workflow called `gitleaks` that runs on every push and on every pull request.
2. Specifies that the workflow should run on the `ubuntu-latest` runner.
3. Checks out the repository with full history and runs a Gitleaks scan over the entire repository.
4. Fails the action if any secret is detected.
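If you want to catch secrets before they ever reach GitHub, you can also run the same scan locally, assuming you have the `gitleaks` binary installed (the commands below are from the v8 CLI):
```shell
# Scan the current repository, including its git history, for secrets
gitleaks detect --source . --verbose

# Scan only the staged changes before committing
gitleaks protect --staged --verbose
```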
In my demo, I have added AWS keys to a `.env` file, and because of that the pipeline failed.
![](images/day13-1.png)
Other Git secret scanning tools:
- [**AWS git-secrets**](https://github.com/awslabs/git-secrets)
- **[GitGuardian ggshield](https://github.com/GitGuardian/ggshield)**
- **[TruffleHog](https://github.com/trufflesecurity/trufflehog)**
### Resources
- [Gitleaks GitHub](https://github.com/zricethezav/gitleaks)
- [Gitleaks GitHub Action](https://github.com/gitleaks/gitleaks-action)
## Create better Dockerfile with Hadolint
Hadolint is a linter for Dockerfiles that checks for common mistakes and provides suggestions for improvement. It can be used directly from the command line, integrated into a CI/CD pipeline, or integrated into code editors and IDEs for real-time linting.
To set up linting with hadolint in GitHub Actions, you can use the following steps:
1. Create a new workflow file in your repository, for example `.github/workflows/dockerfile-lint.yml`
2. In this file, add the following code to set up the GitHub Actions workflow:
```yaml
name: Lint Dockerfile
on:
push:
branches:
- main
jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: hadolint/hadolint-action@v2.1.0
with:
dockerfile: Dockerfile
```
3. This workflow will run on every push to the `main` branch and will run the hadolint command on the `Dockerfile` file.
4. Commit the new workflow file and push it to your repository.
5. Next time you push changes to the `main` branch, GitHub Actions will run the linting job and provide feedback if any issues are found with your Dockerfile.
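As an illustration, a hypothetical Dockerfile like the one below would produce typical Hadolint warnings, such as the rules about unpinned base images and unpinned apt packages:
```dockerfile
# Hadolint would typically warn about the unpinned "latest" base image here (rule DL3007)
FROM ubuntu:latest

# ...and about unpinned apt packages here (rule DL3008)
RUN apt-get update && apt-get install -y curl
```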
### Resources
- [Hadolint GitHub](https://github.com/hadolint/hadolint)
- [Hadolint Online](https://hadolint.github.io/hadolint/)
- [Top 20 Dockerfile best practices](https://sysdig.com/blog/dockerfile-best-practices/)
Next up we will be starting our **Continuous Build, Integration, Testing** with [Day 14](day14.md) covering Container Image Scanning from [Anton Sankov](https://twitter.com/a_sankov).


@ -228,3 +228,6 @@ It is between 0 and 10.
<https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity>
<https://www.aquasec.com/cloud-native-academy/supply-chain-security/sbom/>
On [Day 16](day16.md) we will take a look into "Fuzzing" or Fuzz Testing.


@ -1,7 +1,7 @@
# Fuzzing
Fuzzing, also known as "fuzz testing," is a software testing technique that involves providing invalid, unexpected, or random data as input to a computer program.
The goal of fuzzing is to identify security vulnerabilities and other bugs in the program by causing it to crash or exhibit unintended behaviour.
Fuzzing can be performed manually or by using a testing library/framework to craft the inputs for us.
@ -32,13 +32,13 @@ However, in more complex systems such fail points may not be obvious, and may be
This is where fuzzing comes in handy.
The Go Fuzzing library (part of the standard language library since Go 1.18) generates many inputs for a test case, and then based on the coverage and the results determines which inputs are "interesting".
If we write a fuzz test for this function what will happen is:
1. The fuzzing library will start providing random strings starting from smaller strings and increasing their size.
2. Once the library provides a string of length 4 it will notice a change in the test-coverage (`if (len(s) == 4)` is now `true`) and will continue to generate inputs with this length.
3. Once the library provides a string of length 4 that starts with `f` it will notice another change in the test-coverage (`if s[0] == "f"` is now `true`) and will continue to generate inputs that start with `f`.
4. The same thing will repeat for `u` and the double `z`.
5. Once it provides `fuzz` as input the function will panic and the test will fail.
6. We have _fuzzed_ successfully!
@ -56,7 +56,7 @@ Fuzzing is a useful technique, but there are situations in which it might not be
For example, if the input that fails our code is too specific and there are no clues to help, the fuzzing library might not be able to guess it.
If we change the example code from the previous paragraph to something like this:
```go
func DontPanic(s input) {


@ -1,26 +1,242 @@
# Fuzzing Advanced
Yesterday we learned what fuzzing is and how to write fuzz tests (unit tests with fuzzy inputs).
However, fuzz testing goes beyond just unit testing.
We can use this methodology to test our web application by fuzzing the requests sent to our server.
Today, we will take a practical approach to fuzz testing a web server.
Different tools can help us do this.
Such tools are [Burp Intruder](https://portswigger.net/burp/documentation/desktop/tools/intruder) and [SmartBear](https://smartbear.com/).
However, these are proprietary tools that require a paid license to use them.
That is why for our demonstration today we are going to use a simple open-source CLI written in Go that was inspired by Burp Intruder and provides similar functionality.
It is called [httpfuzz](https://github.com/JonCooperWorks/httpfuzz).
## Getting started
This tool is quite simple.
We provide it a template for our requests (in which we have defined placeholders for the fuzzy data), a wordlist (the fuzzy data) and `httpfuzz` will render the requests and send them to our server.
First, we need to define a template for our requests.
Create a file named `request.txt` with the following content:
```text
POST / HTTP/1.1
Content-Type: application/json
User-Agent: PostmanRuntime/7.26.3
Accept: */*
Cache-Control: no-cache
Host: localhost:8000
Accept-Encoding: gzip, deflate
Connection: close
Content-Length: 35
{
"name": "`S9`",
}
```
This is a valid HTTP `POST` request to the `/` route with JSON body.
The "\`" symbol in the body defines a placeholder that will be substituted with the data we provide.
`httpfuzz` can also fuzz the headers, path, and URL params.
Next, we need to provide a wordlist of inputs that will be placed in the request.
Create a file named `data.txt` with the following content:
```text
SOME_NAME
Mozilla/5.0 (Linux; Android 7.0; SM-G930VC Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36
```
In this file, we defined two inputs that will be substituted inside the body.
In a real-world scenario, you should put much more data here for proper fuzz testing.
Now that we have our template and our inputs, let's run the tool.
Unfortunately, this tool is not distributed as a binary, so we will have to build it from source.
Clone the repo and run:
```shell
go build -o httpfuzz cmd/httpfuzz.go
```
(this requires a recent version of Go to be installed on your machine).
Now that we have the binary let's run it:
```shell
./httpfuzz \
--wordlist data.txt \
--seed-request request.txt \
--target-header User-Agent \
--target-param fuzz \
--delay-ms 50 \
--skip-cert-verify \
--proxy-url http://localhost:8080 \
```
- `httpfuzz` is the binary we are invoking.
- `--wordlist data.txt` is the file with inputs we provided.
- `--seed-request request.txt` is the request template.
- `--target-header User-Agent` tells `httpfuzz` to use the provided inputs in the place of the `User-Agent` header.
- `--target-param fuzz` tells `httpfuzz` to use the provided inputs as values for the `fuzz` URL parameter.
- `--delay-ms 50` tells `httpfuzz` to wait 50 ms between the requests.
- `--skip-cert-verify` tells `httpfuzz` to not do any TLS verification.
- `--proxy-url http://localhost:8080` tells `httpfuzz` where our HTTP server is.
We have 2 inputs and 3 places to place them (in the body, the `User-Agent` header, and the `fuzz` parameter).
This means that `httpfuzz` will generate 6 requests and send them to our server.
Let's run it and see what happens.
I wrote a simple web server that logs all requests so that we can see what is coming into our server:
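The server code itself is not included in this post, but a minimal sketch of such a logging server (listening on port 8000, matching the `Host` header in our request template) might look like this:
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type body struct {
	Name string `json:"name"`
}

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Println("-----")
	fmt.Printf("Got request to http://%s%s\n", r.Host, r.URL.RequestURI())
	fmt.Printf("User-Agent header = %v\n", r.Header["User-Agent"])

	// The seed request body is not strict JSON (it has a trailing comma),
	// so we ignore decoding errors and print whatever was parsed.
	var b body
	_ = json.NewDecoder(r.Body).Decode(&b)
	fmt.Printf("Name = %s\n", b.Name)

	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```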
```shell
$ ./httpfuzz \
--wordlist data.txt \
--seed-request request.txt \
--target-header User-Agent \
--target-param fuzz \
--delay-ms 50 \
--skip-cert-verify \
--proxy-url http://localhost:8080 \
httpfuzz: httpfuzz.go:164: Sending 6 requests
```
and the server logs:
```text
-----
Got request to http://localhost:8000/
User-Agent header = [SOME_NAME]
Name = S9
-----
Got request to http://localhost:8000/?fuzz=SOME_NAME
User-Agent header = [PostmanRuntime/7.26.3]
Name = S9
-----
Got request to http://localhost:8000/
User-Agent header = [PostmanRuntime/7.26.3]
Name = SOME_NAME
-----
Got request to http://localhost:8000/
User-Agent header = [Mozilla/5.0 (Linux; Android 7.0; SM-G930VC Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36]
Name = S9
-----
Got request to http://localhost:8000/?fuzz=Mozilla%2F5.0+%28Linux%3B+Android+7.0%3B+SM-G930VC+Build%2FNRD90M%3B+wv%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Version%2F4.083+Mobile+Safari%2F537.36
User-Agent header = [PostmanRuntime/7.26.3]
Name = S9
-----
Got request to http://localhost:8000/
User-Agent header = [PostmanRuntime/7.26.3]
Name = Mozilla/5.0 (Linux; Android 7.0; SM-G930VC Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36
```
We see that we have received 6 HTTP requests.
Two of them have a value from our values file for the `User-Agent` header, and 4 have the default header from the template.
Two of them have a value from our values file for the `fuzz` query parameter, and 4 have no `fuzz` parameter at all.
Two of them have a value from our values file for the `Name` body property, and 4 have the default value (`S9`) from the template.
A slight improvement of the tool could be to make different permutations of these requests (for example, a request that has both `?fuzz=` and `User-Agent` as values from the values file).
Notice how `httpfuzz` does not give us any information about the outcome of the requests.
To figure that out, we need to either set up some sort of monitoring for our server or write an `httpfuzz` plugin that will process the results in a way that is meaningful for us.
Let's do that.
To write a custom plugin, we need to implement the [`Listener`](https://github.com/JonCooperWorks/httpfuzz/blob/master/plugin.go#L13) interface:
```go
// Listener must be implemented by a plugin to allow users to hook the request - response transaction.
// The Listen method will be run in its own goroutine, so plugins cannot block the rest of the program, however panics can take down the entire process.
type Listener interface {
Listen(results <-chan *Result)
}
```
```go
package main
import (
	"log"

	"github.com/joncooperworks/httpfuzz"
)
type logResponseCodePlugin struct {
logger *log.Logger
}
func (b *logResponseCodePlugin) Listen(results <-chan *httpfuzz.Result) {
for result := range results {
b.logger.Printf("Got %d response from the server\n", result.Response.StatusCode)
}
}
// New returns a logResponseCodePlugin plugin that simply logs the status code of the response.
func New(logger *log.Logger) (httpfuzz.Listener, error) {
return &logResponseCodePlugin{logger: logger}, nil
}
```
Now we need to build our plugin first:
```shell
go build -buildmode=plugin -o log exampleplugins/log/log.go
```
and then we can plug it into `httpfuzz` via the `--post-request` flag:
```shell
$ ./httpfuzz \
--wordlist data.txt \
--seed-request request.txt \
--target-header User-Agent \
--target-param fuzz \
--delay-ms 50 \
--skip-cert-verify \
--proxy-url http://localhost:8080 \
--post-request log
httpfuzz: httpfuzz.go:164: Sending 6 requests
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
```
Voila!
Now we can at least see what the response code from the server was.
Of course, we can write much more sophisticated plugins that output much more data, but for the purpose of this exercise, that is enough.
## Summary
Fuzzing is a really powerful testing technique that goes way beyond unit testing.
Fuzzing can be extremely useful for testing HTTP servers by substituting parts of valid HTTP requests with data that could potentially expose vulnerabilities or deficiencies in our server.
There are many tools that can help us in fuzz testing our web applications, both free and paid ones.
## Resources
[OWASP: Fuzzing](https://owasp.org/www-community/Fuzzing)
[OWASP: Fuzz Vectors](https://owasp.org/www-project-web-security-testing-guide/v41/6-Appendix/C-Fuzz_Vectors)
[Hacking HTTP with HTTPfuzz](https://medium.com/swlh/hacking-http-with-httpfuzz-67cfd061b616)
[Fuzzing the Stack for Fun and Profit at DefCamp 2019](https://www.youtube.com/watch?v=qCMfrbpuCBk&list=PLnwq8gv9MEKiUOgrM7wble1YRsrqRzHKq&index=33)
[HTTP Fuzzing Scan with SmartBear](https://support.smartbear.com/readyapi/docs/security/scans/types/fuzzing-http.html)
[Fuzzing Session: Finding Bugs and Vulnerabilities Automatically](https://youtu.be/DSJePjhBN5E)
[Fuzzing the CNCF Landscape](https://youtu.be/zIyIZxAZLzo)


@ -1,33 +1,26 @@
# DAST
DAST, or Dynamic Application Security Testing, is a technique that is used to evaluate the security of an application by simulating attacks from external sources.
The idea is to automate black-box penetration testing as much as possible.
It can be used for acquiring the low-hanging fruit so that a real human's time is spared, and additionally for generating traffic to other security tools (e.g. IAST).
Nevertheless, it is an essential component of the SSDLC, as it helps organizations uncover potential vulnerabilities early in the development process, before the application is deployed to production. By conducting DAST testing, organizations can prevent security incidents and protect their data and assets from being compromised by attackers.
## Tools
There are various open-source tools available for conducting DAST, such as ZAP, Burp Suite, and Arachni. These tools can simulate different types of attacks on the application, such as SQL injection, cross-site scripting, and other common vulnerabilities. For example, if an application is vulnerable to SQL injection, a DAST tool can send a malicious SQL query to the application, such as ' OR 1=1 --, and evaluate its response to determine if it is vulnerable. If the application is vulnerable, it may return all records from the database, indicating that the SQL injection attack was successful.
As some of the tests could be quite invasive (for example, they may include `DROP TABLE` or something similar), or at least put a good amount of test data into the databases, or even DoS the app,
__DAST tools should never run against a production environment!!!__
All tools have the possibility for authentication into the application, and this could lead to production credentials being compromised. Also, when running authenticated scans against the testing environment, use suitable roles (if an RBAC model exists for the application, of course), e.g. DAST shouldn't use a role that has the possibility to delete or modify other users, because this way the whole environment can become unusable.
As with other testing methodologies, it is necessary to analyze the scope, so that unneeded targets are not scanned.
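To get a feel for how such a scan is started, ZAP ships a baseline scan script in its Docker image; a run against a test environment might look roughly like this (the image and script names are taken from the ZAP project's documentation, and the target URL is a placeholder):
```shell
# Passive baseline scan of a test environment - never point this at production
docker run --rm -t owasp/zap2docker-stable zap-baseline.py -t https://staging.example.com -r zap-report.html
```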
## Usage
A common error is scanning compensating security controls (e.g. a WAF) instead of the real application. DAST is at its core an application security testing tool and should be used against actual applications, not against security mitigations. As it uses fairly standardized attacks, external controls can block the attacking traffic and in this way mask potentially exploitable flows (by definition, an adversary would eventually be able to bypass such measures).
Actual scans are quite slow, so sometimes they should be run outside of the DevOps pipeline. A good example is running them nightly or during the weekend. Some of the simpler tools (ZAP, Arachni, …) could be used in pipelines, but often, due to the nature of the scan, they can slow down the whole development process.
Once the DAST testing is complete, the results are analyzed to identify any vulnerabilities that were discovered. The organization can then take appropriate remediation steps to address the vulnerabilities and improve the overall security of the application. This may involve fixing the underlying code, implementing additional security controls, such as input validation and filtering, or both.
In conclusion, the use of DAST in the SSDLC is essential for ensuring the security of an application. By conducting DAST testing and identifying vulnerabilities early in the development process, organizations can prevent security incidents and protect their assets from potential threats. Open-source tools, such as ZAP, Burp Suite, and Arachni, can be used to conduct DAST testing and help organizations improve their overall security posture.
As with all other tools that are part of the DevSecOps pipeline, DAST should not be the only scanner in place, and as with all others, it is not a substitute for penetration tests and good development practices.
## Some useful links and open-source tools:
- https://github.com/zaproxy/zaproxy
- https://www.arachni-scanner.com/
- https://owasp.org/www-project-devsecops-guideline/latest/02b-Dynamic-Application-Security-Testing


@ -0,0 +1,33 @@
# IAST (Interactive Application Security Testing)
IAST is a type of security testing tool that is designed to identify vulnerabilities in web applications and help developers fix them. It works by injecting a small agent into the application's runtime environment and monitoring its behaviour in real-time. This allows IAST tools to identify vulnerabilities as they occur, rather than relying on static analysis or simulated attacks.
IAST works through software instrumentation, or the use of instruments to monitor an application as it runs and gather information about what it does and how it performs. IAST solutions instrument applications by deploying agents and sensors in running applications and continuously analyzing all application interactions initiated by manual tests, automated tests, or a combination of both to identify vulnerabilities in real time.
The IAST agent runs inside the application and monitors for known attack patterns. As it is part of the application, it can monitor traffic between different components (both in classic MVC deployments and in microservices deployments).
## For IAST to be used, there are a few prerequisites.
- Application should be instrumented (inject the agent).
- Traffic should be generated - via manual or automated tests. Another possible approach is via DAST tools (OWASP ZAP can be used for example).
## Advantages
One of the main advantages of IAST tools is that they can provide detailed and accurate information about vulnerabilities and how to fix them. This can save developers a lot of time and effort, as they don't have to manually search for vulnerabilities or try to reproduce them in a testing environment. IAST tools can also identify vulnerabilities that might be missed by other testing methods, such as those that require user interaction or are triggered under certain conditions. Testing time depends on the tests used (as IAST is not a standalone system), and with faster tests (automated tests) it can be included in CI/CD pipelines. It can be used to detect different kinds of vulnerabilities, and due to the nature of the tools (they look at "real" traffic only), false positive/negative findings are relatively rare compared to other testing types.
IAST can be used in two flavours - as a typical testing tool and as real-time protection (in this case it is usually called RASP, Runtime Application Self-Protection). Both work on the same principles and can be used together.
## There are several disadvantages of the technology as well:
- It is a relatively new technology, so there is not a lot of knowledge and experience, both for the security teams and for the tool builders (open-source or commercial).
- The solution cannot be used alone - something (or someone) should generate traffic patterns. It is important that all possible endpoints are queried during the tests.
- Findings are based on traffic. This is especially true if used for testing alone - if there is no traffic to a portion of the app / site it would not be tested so no findings are going to be generated.
- Due to the need to instrument the app, it can be fairly complex, especially compared to source-scanning tools (SAST or SCA).
There are several different IAST tools available, each with its own features and capabilities.
## Some common features of IAST tools include:
- Real-time monitoring: IAST tools monitor the application's behaviour in real-time, allowing them to identify vulnerabilities as they occur.
- Vulnerability identification: IAST tools can identify a wide range of vulnerabilities, including injection attacks, cross-site scripting (XSS), and cross-site request forgery (CSRF).
- Remediation guidance: IAST tools often provide detailed information about how to fix identified vulnerabilities, including code snippets and recommendations for secure coding practices.
- Integration with other tools: IAST tools can often be integrated with other security testing tools, such as static code analysis or penetration testing tools, to provide a more comprehensive view of an application's security.
IAST tools can be a valuable addition to a developer's toolkit, as they can help identify and fix vulnerabilities in real-time, saving time and effort. If you are a developer and are interested in using an IAST tool, there are many options available, so it is important to research and compare different tools to find the one that best fits your needs.
## Tool example
There are almost no open-source tools on the market. An example is the commercial tool Contrast Community Edition (CE) - a fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java and .NET only.
Can be found here - https://www.contrastsecurity.com/contrast-community-edition


@ -0,0 +1,153 @@
# IAST and DAST in conjunction - lab time
After learning what IAST and DAST are it's time to get our hands dirty and perform an exercise in which we use these processes to find vulnerabilities in real applications.
**NOTE:** There are no open-source IAST implementations, so we will have to use a commercial solution.
Don't worry, there is a free-tier, so you will be able to follow the lab without paying anything.
This lab is based on this [repo](https://github.com/rstatsinger/contrast-java-webgoat-docker).
It contains a vulnerable Java application to be tested and exploited, Docker and Docker Compose for easy setup and [Contrast Community Edition](https://www.contrastsecurity.com/contrast-community-edition?utm_campaign=ContrastCommunityEdition&utm_source=GitHub&utm_medium=WebGoatLab) for IAST solution.
## Prerequisites
- [Docker](https://www.docker.com/products/docker-desktop/)
- [Docker Compose](https://docs.docker.com/compose/)
- Contrast CE account. Sign up for free [here](https://www.contrastsecurity.com/contrast-community-edition?utm_campaign=ContrastCommunityEdition&utm_source=GitHub&utm_medium=WebGoatLab).
**NOTE:** The authors of this article and of the 90 Days of DevOps program are in no way associated or affiliated with Contrast Security.
We are using this commercial solution, because there is not an open-source one, and because this one has a free-tier that does not require paying or providing a credit card.
As there are no open-source IAST implementations, we will use a commercial one with some free licenses. For this purpose, you will need two components: the IAST solution from <https://github.com/rstatsinger/contrast-java-webgoat-docker>, and Docker and Docker Compose installed in a Mac or Linux environment (this lab is tested on Linux Mint). Please follow the README to create an account in Contrast.
## Getting started
To start, clone the [repository](https://github.com/rstatsinger/contrast-java-webgoat-docker).
Get your credentials from Contrast Security.
Click on your name in the top-right corner -> `Organization Settings` -> `Agent`.
Get the values for `Agent Username`, `Agent Service Key` and `API Key`.
Replace these values in the `.env.template` file in the newly cloned repository.
**NOTE:** These values are secret.
Do not commit them to Git.
It's best to put the `.env.template` under `.gitignore` so that you don't commit these values by mistake.
## Running the vulnerable application
To run the vulnerable application, run:
```sh
./run.sh
```
or
```sh
docker compose up
```
Once ready, the application UI will be accessible on <http://localhost:8080/WebGoat>.
## Do some damage
Now that we have a vulnerable application let's try to exploit it.
1. Install ZAP Proxy from [here](https://www.zaproxy.org/download/)
An easy way to do that is via a DAST scanner.
One such scanner is [ZAP Proxy](https://www.zaproxy.org/).
It is a free and open-source web app scanner.
2. Install `zap-cli` from [here](https://github.com/Grunny/zap-cli)
Next, install `zap-cli`.
`zap-cli` is an open-source CLI for ZAP Proxy.
3. Run ZAP proxy
Run ZAP Proxy from its installed location.
In Linux Mint it is by default in `/opt/zaproxy`.
In MacOS it is in `Applications`.
4. Set env variables for `ZAP_API_KEY` and `ZAP_PORT`
Get these values from ZAP Proxy.
Go to `Options...` -> `API` to get the API Key.
Go to `Options...` -> `Network` -> `Local Servers/Proxies` to configure and obtain the port.
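For example (the values below are placeholders - use the ones shown in your own ZAP instance):
```sh
export ZAP_API_KEY=<api-key-from-zap-options>
export ZAP_PORT=8080
```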
5. Run several commands with `zap-cli`
For example:
```sh
zap-cli quick-scan -s all --ajax-spider -r http://127.0.0.1:8080/WebGoat/login.mvc
```
Alternatively, you can follow the instructions in the [repo](https://github.com/rstatsinger/contrast-java-webgoat-docker/blob/master/Lab-WebGoat.pdf)
to cause some damage to the vulnerable application.
6. Observe findings in Contrast
Either way, if you go to the **Vulnerabilities** tab for your application in Contrast you should be able to see that Contrast detected the vulnerabilities
and is warning you to take some action.
## Bonus: Image Scanning
We saw how an IAST solution helped us detect attacks by observing the behaviour of the application.
Let's see whether we could have done something to prevent these attacks in the first place.
The vulnerable application we used for this demo was packaged as a container.
Let's scan this container via the `grype` scanner we learned about in Days [14](day14.md) and [15](day15.md) and see the results.
```sh
$ grype contrast-java-webgoat-docker-webgoat
✔ Vulnerability DB [no update available]
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [316 packages]
✔ Scanned image [374 vulnerabilities]
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
apt 1.8.2.3 deb CVE-2011-3374 Negligible
axis 1.4 java-archive GHSA-55w9-c3g2-4rrh Medium
axis 1.4 java-archive GHSA-96jq-75wh-2658 Medium
bash 5.0-4 deb CVE-2019-18276 Negligible
bash 5.0-4 (won't fix) deb CVE-2022-3715 High
bsdutils 1:2.33.1-0.1 deb CVE-2022-0563 Negligible
bsdutils 1:2.33.1-0.1 (won't fix) deb CVE-2021-37600 Low
commons-beanutils 1.8.3 java-archive CVE-2014-0114 High
commons-beanutils 1.8.3 java-archive CVE-2019-10086 High
commons-beanutils 1.8.3 1.9.2 java-archive GHSA-p66x-2cv9-qq3v High
commons-beanutils 1.8.3 1.9.4 java-archive GHSA-6phf-73q6-gh87 High
commons-collections 3.2.1 java-archive CVE-2015-6420 High
commons-collections 3.2.1 3.2.2 java-archive GHSA-6hgm-866r-3cjv High
commons-collections 3.2.1 3.2.2 java-archive GHSA-fjq5-5j5f-mvxh Critical
commons-fileupload 1.3.1 java-archive CVE-2016-1000031 Critical
commons-fileupload 1.3.1 java-archive CVE-2016-3092 High
commons-fileupload 1.3.1 1.3.2 java-archive GHSA-fvm3-cfvj-gxqq High
commons-fileupload 1.3.1 1.3.3 java-archive GHSA-7x9j-7223-rg5m Critical
commons-io 2.4 java-archive CVE-2021-29425 Medium
commons-io 2.4 2.7 java-archive GHSA-gwrp-pvrq-jmwv Medium
coreutils 8.30-3 deb CVE-2017-18018 Negligible
coreutils 8.30-3 (won't fix) deb CVE-2016-2781 Low
curl 7.64.0-4+deb10u3 deb CVE-2021-22922 Negligible
curl 7.64.0-4+deb10u3 deb CVE-2021-22923 Negligible
<truncated>
```
As we can see, this image is full of vulnerabilities.
If we dive into each one we will see we have vulnerabilities like RCE (Remote Code Execution), SQL Injection, XML External Entity Vulnerability, etc.
## Week Summary
IAST and DAST are important methods that can help us find vulnerabilities in our application via monitoring its behaviour.
This is done once the application is already deployed.
Container Image Scanning can help us find vulnerabilities in our application based on the libraries that are present inside the container.
Image Scanning and IAST/DAST are not mutually-exclusive.
They both have their place in a Secure SDLC and can help us find different problems before the attackers do.


@ -0,0 +1,230 @@
# Continuous Image Repository Scan
In [Day 14](day14.md), we learned what container image scanning is and why it's important.
We also learned about tools like Grype and Trivy that help us scan our container images.
However, in modern SDLCs, a DevSecOps engineer would rarely scan container images by hand, e.g., they would not be running Grype and Trivy locally and looking at every single vulnerability.
Instead, they would have the image scanning configured as part of the CI/CD pipeline.
This way, they would be sure that all the images that are being built by the pipelines are also scanned by the image scanner.
These results could then be sent to another system, where the DevSecOps engineers could look at them and take some action depending on the result.
A sample CI/CD pipeline could look like this:
0. _Developer pushes code_
1. Lint the code
2. Build the code
3. Test the code
4. Build the artifacts (container images, helm charts, etc.)
5. Scan the artifacts
6. (Optional) Send the scan results somewhere
7. (Optional) Verify the scan results and fail the pipeline if the verification fails
8. Push the artifacts to a repository
A failure in the scan or verify steps (steps 5 and 7) would mean that our container will not be pushed to our repository, and we cannot use the code we submitted.
Today, we are going to take a look at how we can set up such a pipeline and what would be a sensible configuration for one.
## Setting up a CI/CD pipeline with Grype
Let's take a look at the [Grype](https://github.com/anchore/grype) scanner.
Grype is an open-source scanner maintained by the company [Anchore](https://anchore.com/).
### Scanning an image with Grype
Scanning a container image with Grype is as simple as running:
```shell
grype <IMAGE>
```
For example, if we want to scan the `ubuntu:20.04` image, we can run:
```shell
$ grype ubuntu:20.04
✔ Vulnerability DB [no update available]
✔ Pulled image
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [92 packages]
✔ Scanned image [19 vulnerabilities]
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
coreutils 8.30-3ubuntu2 deb CVE-2016-2781 Low
gpgv 2.2.19-3ubuntu2.2 deb CVE-2022-3219 Low
libc-bin 2.31-0ubuntu9.9 deb CVE-2016-20013 Negligible
libc6 2.31-0ubuntu9.9 deb CVE-2016-20013 Negligible
libncurses6 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
libncurses6 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
libncursesw6 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
libncursesw6 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
libpcre3 2:8.39-12ubuntu0.1 deb CVE-2017-11164 Negligible
libsystemd0 245.4-4ubuntu3.19 deb CVE-2022-3821 Medium
libtinfo6 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
libtinfo6 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
libudev1 245.4-4ubuntu3.19 deb CVE-2022-3821 Medium
login 1:4.8.1-1ubuntu5.20.04.4 deb CVE-2013-4235 Low
ncurses-base 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
ncurses-base 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
ncurses-bin 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
ncurses-bin 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
passwd 1:4.8.1-1ubuntu5.20.04.4 deb CVE-2013-4235 Low
```
Of course, you already know that because we did it on [Day 14](day14.md).
However, this command will only output the vulnerabilities and exit with a success code.
So if this were in a CI/CD pipeline, the pipeline would be successful even if we have many vulnerabilities.
The person running the pipeline would have to open it, see the logs and manually determine whether the results are OK.
This is tedious and error prone.
Let's see how we can enforce some rules for the results that come out of the scan.
### Enforcing rules for the scanned images
As we already established, just scanning the image does not do much except for giving us visibility into the number of vulnerabilities we have inside the image.
But what if we want to enforce a set of rules for our container images?
For example, a good rule would be "an image should not have critical vulnerabilities" or "an image should not have vulnerabilities with available fixes."
Fortunately for us, this is also something that Grype supports out of the box.
We can use the `--fail-on <SEVERITY>` flag to tell Grype to exit with a non-zero exit code if, during the scan, it found vulnerabilities with a severity higher or equal to the one we specified.
This will fail our pipeline, and the engineer would have to look at the results and fix something in order to make it pass.
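In a GitHub Actions pipeline (the CI system used in the previous days) this could look roughly like the sketch below; it assumes Grype's documented install script and a locally built image called `my-app:latest`, both of which you would adapt to your own project:
```yaml
jobs:
  scan-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build the image
        run: docker build -t my-app:latest .
      - name: Install Grype
        run: curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
      - name: Scan the image and fail on critical vulnerabilities
        run: grype my-app:latest --fail-on critical
```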
Let's try it out.
We are going to use the `springio/petclinic:latest` image, which we already found has many vulnerabilities.
You can go back to [Day 14](day14.md) or scan it yourself to see exactly how many.
We want to fail the pipeline if the image has `CRITICAL` vulnerabilities.
We are going to run the scan like this:
```shell
$ grype springio/petclinic:latest --fail-on critical
✔ Vulnerability DB [no update available]
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [212 packages]
✔ Scanned image [168 vulnerabilities]
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
spring-core 5.3.6 java-archive CVE-2016-1000027 Critical
spring-core 5.3.6 java-archive CVE-2022-22965 Critical
...
1 error occurred:
* discovered vulnerabilities at or above the severity threshold
$ echo $?
1
```
We see two things here:
- apart from the results, Grype also outputted an error that is telling us that this scan violated the rule we had defined (no CRITICAL vulnerabilities)
- Grype exited with exit code 1, which indicates failure.
If this were a CI pipeline, it would have failed.
When this happens, we will be blocked from merging our code and pushing our container to the registry.
This means that we need to take some action to fix the failure so that we can finish our task and push our change.
Let's see what our options are.
### Fixing the pipeline
Once we encounter a vulnerability that is preventing us from publishing our container, we have a few ways we can go depending on the vulnerability.
#### 1. The vulnerability has a fix
The best-case scenario is when this vulnerability is already fixed in a newer version of the library we depend on.
One such vulnerability is this one:
```text
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
snakeyaml 1.27 1.31 java-archive GHSA-3mc7-4q67-w48m High
```
This is a `High` severity vulnerability.
It's coming from the Java package `snakeyaml`, version `1.27`.
Grype is telling us that this vulnerability is fixed in version `1.31` of the same library.
In this case, we can just upgrade the version of this library in our `pom.xml` or `build.gradle` file,
test our code to make sure nothing breaks with the new version,
and submit the code again.
This will build a new version of our container, re-scan it, and hopefully, this time, the vulnerability will not come up, and our scan will be successful.
#### 2. The vulnerability does not have a fix, but it's not dangerous
Sometimes a vulnerability we encounter will not have a fix available.
These are so-called zero-day vulnerabilities that are disclosed before a fix is available.
We can see two of those in the initial scan results:
```text
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
spring-core 5.3.6 java-archive CVE-2016-1000027 Critical
spring-core 5.3.6 java-archive CVE-2022-22965 Critical
```
When we encounter such a vulnerability, we need to evaluate how severe it is and calculate the risk of releasing our software with that vulnerability in it.
We can determine that the vulnerability does not constitute any danger to our software and its consumers.
One such case might be when a vulnerability requires physical access to the servers to be exploited.
If we are sure that our physical servers are secure enough and an attacker cannot get access to them, we can safely ignore this vulnerability.
In this case, we can tell Grype to ignore this vulnerability and not fail the scan because of it.
We can do this via the `grype.yaml` configuration file, where we can list vulnerabilities we want to ignore:
```yaml
ignore:
# This is the full set of supported rule fields:
- vulnerability: CVE-2016-1000027
fix-state: unknown
package:
name: spring-core
version: 5.3.6
type: java-archive
# We can list as many of these as we want
- vulnerability: CVE-2022-22965
# Or list whole packages which we want to ignore
- package:
type: gem
```
Putting this in our configuration file and re-running the scan will make our pipeline green.
However, it is crucial that we keep track of this file and not ignore vulnerabilities that have a fix.
For example, when a fix for this vulnerability is released, it's best we upgrade our dependency and remove this vulnerability from our application.
That way, we will ensure that our application is as secure as possible and there are no vulnerabilities that can turn out to be more severe than we initially thought.
#### 3. The vulnerability does not have a fix, and IT IS dangerous
The worst-case scenario is if we encounter a vulnerability that does not have a fix, and it is indeed dangerous, and there is a possibility to be exploited.
In that case, there is no right move.
The best thing we can do is sit down with our security team and come up with an action plan.
We might decide it's best to do nothing while the vulnerability is fixed.
We might decide to manually patch some stuff so that we remove at least some part of the danger.
It really depends on the situation.
Sometimes, a zero-day vulnerability is already in your application that is deployed.
In that case, freezing deploys won't help because your app is already vulnerable.
That was the case with the Log4Shell vulnerability that was discovered in late 2021 but has been present in Log4j since 2013.
Luckily, there was a fix available within hours, but next time we might not be this lucky.
## Summary
As we already learned in [Day 14](day14.md), scanning your container images for vulnerabilities is important as it can give you valuable insights about
the security posture of your images.
Today we learned that it's even better to have it as part of your CI/CD pipeline and to enforce some basic rules about what vulnerabilities you have inside your images.
Finally, we discussed the steps we can take when we find a vulnerability.
Tomorrow we are going to take a look at container registries that enable this scanning out of the box and also at scanning other types of artifacts.
See you on [Day 22](day22.md).


@ -0,0 +1,77 @@
# Continuous Image Repository Scan - Container Registries
Yesterday we learned how to integrate container image vulnerability scanning into our CI/CD pipelines.
Today, we are going to take a look at how to enforce that our images are scanned on another level - the container registry.
There are container registries that will automatically scan your container images once you push them.
This ensures that we will have visibility into the number of vulnerabilities for every container image produced by our team.
Let's take a look at a few different registries that provide this capability and how we can use it.
## Docker Hub
[Docker Hub](https://hub.docker.com/) is the first container registry.
It was built by the team that created Docker and is still very popular today.
Docker Hub has an automatic vulnerability scanner, powered by [Snyk](https://snyk.io/).
This means that, if enabled, when you push an image to Docker Hub it will be automatically scanned and the results will be visible to you in the UI.
You can learn more about how to enable and use this feature from the Docker Hub [docs](https://docs.docker.com/docker-hub/vulnerability-scanning/).
**NOTE:** This feature is not free.
In order to use it you need to have a subscription.
## Harbor
[Harbor](https://goharbor.io/) is an open-source container registry.
Originally developed in VMware, it is now part of the CNCF.
It supports image scanning via [Trivy](https://github.com/aquasecurity/trivy) and/or [Clair](https://github.com/quay/clair).
This is configured during installation.
(Even if you don't enable image scanning during installation, it can always be configured afterwards).
For more info, check out the [docs](https://goharbor.io/docs/2.0.0/administration/vulnerability-scanning/).
## AWS ECR
[AWS ECR](https://aws.amazon.com/ecr/) also supports [image scanning via Clair](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-basic.html).
## Azure Container Registry
[Azure Container Registry](https://azure.microsoft.com/en-us/products/container-registry) supports [image scanning via Qualys](https://azure.microsoft.com/en-us/updates/vulnerability-scanning-for-images-in-azure-container-registry-is-now-generally-available/).
## GCP
[GCP Container Registry](https://cloud.google.com/container-registry) also supports [automatic image scanning](https://cloud.google.com/container-analysis/docs/automated-scanning-howto).
## Policy Enforcement
Just scanning the images and having the results visible in your registry is a nice thing to have,
but it would be even better if we had a way to enforce some standards for these images.
In [Day 14](day14.md) we saw how to make `grype` fail a scan if an image has vulnerabilities above a certain severity.
Something like this can also be enforced on the container registry level.
For example, [Harbor](https://goharbor.io/) has the **Prevent vulnerable images from running** option, which, when enabled, does not allow you to pull an image that has vulnerabilities above a certain severity.
If you cannot pull the image, you cannot run it, so this is a good rule to have if you don't want to be running vulnerable images.
Of course, a rule like that can effectively prevent you from deploying something to your environment, so you need to use it carefully.
You can read more about this option and how to enable it in Harbor [here](https://goharbor.io/docs/2.3.0/working-with-projects/project-configuration/).
For more granular control and for unblocking deployments you can configure a [per-project CVE allowlist](https://goharbor.io/docs/2.3.0/working-with-projects/project-configuration/configure-project-allowlist/).
This will allow certain images to run even though they have vulnerabilities.
However, these vulnerabilities would be manually curated and allow-listed by the repo admin.
## Summary
Scanning your container images and having visibility into the number of vulnerabilities inside them is critical for a secure SDLC.
One place to do that is your CI pipeline (as seen in [Day 21](day21.md)).
Another place is your container registry (as seen today).
Both are good options, both have their pros and cons.
It is up to the DevSecOps architect to decide which approach works better for them and their threat model.

View File

@ -0,0 +1,161 @@
# Artifacts Scan
In the previous two days we learned why and how to scan container images.
However, usually our infrastructure consists of more than just container images.
Yes, our services will run as containers, but around them we can also have other artifacts like:
- Kubernetes manifests
- Helm templates
- Terraform code
For maximum security, you would be scanning all the artifacts that you use for your environment, not only your container images.
The reason for that is that even if you have the most secure Docker images with no CVEs,
but run them on insecure infrastructure with a bad Kubernetes configuration,
then your environment will not be secure.
**Each system is as secure as its weakest link.**
Today we are going to take a look at different tools for scanning artifacts different than container images.
## Kubernetes manifests
Scanning Kubernetes manifests can expose misconfigurations and security bad practices like:
- running containers as root
- running containers with no resource limits
- giving too many or too powerful capabilities to the containers
- hardcoding secrets in the templates, etc.
All of these are part of the security posture of our Kubernetes workloads, and having a bad security posture is just as bad as having a bad posture in real life.
One popular open-source tool for scanning Kubernetes manifests is [KubeSec](https://kubesec.io/).
It outputs a list of misconfigurations.
For example, this Kubernetes manifest taken from their docs has a lot of misconfigurations like missing memory limits, running as root, etc.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: kubesec-demo
spec:
containers:
- name: kubesec-demo
image: gcr.io/google-samples/node-hello:1.0
securityContext:
runAsNonRoot: false
```
Let's scan it and look at the results.
```shell
$ kubesec scan kubesec-test.yaml
[
{
"object": "Pod/kubesec-demo.default",
"valid": true,
"message": "Passed with a score of 0 points",
"score": 0,
"scoring": {
"advise": [
{
"selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
"reason": "Seccomp profiles set minimum privilege and secure against unknown threats"
},
{
"selector": ".spec .serviceAccountName",
"reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege"
},
{
"selector": "containers[] .securityContext .runAsNonRoot == true",
"reason": "Force the running image to run as a non-root user to ensure least privilege"
},
{
"selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
"reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY"
},
{
"selector": "containers[] .resources .requests .memory",
"reason": "Enforcing memory requests aids a fair balancing of resources across the cluster"
},
{
"selector": "containers[] .securityContext .runAsUser -gt 10000",
"reason": "Run as a high-UID user to avoid conflicts with the host's user table"
},
{
"selector": "containers[] .resources .limits .cpu",
"reason": "Enforcing CPU limits prevents DOS via resource exhaustion"
},
{
"selector": "containers[] .resources .requests .cpu",
"reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster"
},
{
"selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
"reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost"
},
{
"selector": "containers[] .securityContext .capabilities .drop",
"reason": "Reducing kernel capabilities available to a container limits its attack surface"
},
{
"selector": "containers[] .resources .limits .memory",
"reason": "Enforcing memory limits prevents DOS via resource exhaustion"
},
{
"selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
"reason": "Drop all capabilities and add only those required to reduce syscall attack surface"
}
]
}
}
]
```
As we can see, it produced 12 warnings about things in this manifest that we would want to change.
Each warning has an explanation telling us WHY we need to fix it.
### Others
Other such tools include [kube-bench](https://github.com/aquasecurity/kube-bench), [kubeaudit](https://github.com/Shopify/kubeaudit) and [kube-score](https://github.com/zegl/kube-score).
They work in the same or similar manner.
You give them a resource to analyze and they output a list of things to fix.
They can be used in a CI setup.
Some of them can also be used as [Kubernetes validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), and can block resources from being created if they violate a policy.
## Helm templates
[Helm](https://helm.sh/) templates are basically templated Kubernetes resources that can be reused and configured with different values.
There are some tools like [Snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-kubernetes-configuration-files/scan-and-fix-security-issues-in-helm-charts) that have *some* support for scanning Helm templates for misconfigurations the same way we are scanning Kubernetes resources.
However, the best way to approach this problem is to just scan the final templated version of your Helm charts.
E.g. use the `helm template` command to substitute the templated values with actual ones and then scan the result with the tools listed above.
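A minimal sketch of that approach (the release and chart names are illustrative):

```shell
# Render the chart into plain Kubernetes manifests, then scan the output
helm template my-release ./my-chart > rendered.yaml
kubesec scan rendered.yaml
```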
## Terraform
The most popular tool for scanning misconfigurations in Terraform code is [tfsec](https://github.com/aquasecurity/tfsec).
It uses static analysis to spot potential issues in your code.
It supports multiple cloud providers and points out issues specific to the one you are using.
For example, it has checks for [using the default VPC in AWS](https://aquasecurity.github.io/tfsec/v1.28.1/checks/aws/ec2/no-default-vpc/),
[hardcoding secrets in the EC2 user data](https://aquasecurity.github.io/tfsec/v1.28.1/checks/aws/ec2/no-secrets-in-launch-template-user-data/),
or [allowing public access to your ECR container images](https://aquasecurity.github.io/tfsec/v1.28.1/checks/aws/ecr/no-public-access/).
It allows you to enable/disable checks and to ignore warnings via inline comments.
It also allows you to define your own policies via [Rego](https://www.openpolicyagent.org/docs/latest/policy-language/).
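A minimal sketch of running it (the `--minimum-severity` flag is shown as an assumption of how to filter results; check the tfsec docs for your version):

```shell
# Scan the Terraform code in the current directory
tfsec .

# Only report findings of severity HIGH or above
tfsec . --minimum-severity HIGH
```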
## Summary
A Secure SDLC would include scanning of all artifacts that end up in our production environment, not just the container images.
Today we learned how to scan non-container artifacts like Kubernetes manifests, Helm charts and Terraform code.
The tools we looked at are free and open-source and can be integrated into any workflow or CI pipeline.

View File

@ -0,0 +1,147 @@
# Signing
The process of signing involves... well, signing an artifact with a key, and later verifying that this artifact has not been tampered with.
An "artifact" in this scenario can be anything
- [code](https://venafi.com/machine-identity-basics/what-is-code-signing/#item-1)
- [git commit](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
- [container images](https://docs.sigstore.dev/cosign/overview/)
Signing and verifying the signature ensures that the artifact (container) we pulled from the registry is the same one that we pushed.
This secures us from supply-chain and man-in-the-middle attacks where we download something different from what we wanted.
The CI workflow would look like this:
0. Developer pushes code to Git
1. CI builds the code into a container
2. **CI signs the container with our private key**
3. CI pushes the signed container to our registry
And then when we want to deploy this image:
1. Pull the image
2. **Verify the signature with our public key**
1. If signature does not match, fail the deploy - image is probably compromised
3. If signature does match, proceed with the deploy
This workflow is based on public-private key cryptography.
When you sign something with your private key, everyone that has access to your public key can verify that this was signed by you.
And since the public key is... well, public, that means everyone.
## The danger of NOT signing your images
If you are not signing your container images, there is the danger that someone will replace an image in your repository with another image that is malicious.
For example, you can push the `my-repo/my-image:1.0.0` image to your repository, but image tags, even versioned ones (like `1.0.0`) are mutable.
So an attacker that has access to your repo can push another image, tag it the same way, and this way it will override your image.
Then, when you go and deploy this image, the image that gets deployed is the one that the attacker forged.
This will probably be a malicious one.
For example, one that contains malware, steals data, or uses your infrastructure for mining cryptocurrencies.
This problem can be solved by signing your images, because when you sign an image, you can later verify that what you pull is what you uploaded in the first place.
So let's take a look at how we can do this via a tool called [cosign](https://docs.sigstore.dev/cosign/overview/).
## Signing container images
First, download the tool, following the instructions for your OS [here](https://docs.sigstore.dev/cosign/installation/).
Generate a key-pair if you don't have one:
```console
cosign generate-key-pair
```
This will output two files in the current folder:
- `cosign.key` - your private key.
DO NOT SHARE WITH ANYONE.
- `cosign.pub` - your public key.
Share with whoever needs it.
We can use the private key to sign an image:
```console
$ cosign sign --key cosign.key asankov/signed
Enter password for private key:
Pushing signature to: index.docker.io/asankov/signed
```
This command signed the `asankov/signed` container image and pushed the signature to the container repo.
## Verifying signatures
Now that we have signed the image, let's verify the signature.
For that, we need our public key:
```console
$ cosign verify --key=cosign.pub asankov/signed | jq
Verification for index.docker.io/asankov/signed:latest --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- The signatures were verified against the specified public key
[
{
"critical": {
"identity": {
"docker-reference": "index.docker.io/asankov/signed"
},
"image": {
"docker-manifest-digest": "sha256:93d62c92b70efc512379cf89317eaf41b8ce6cba84a5e69507a95a7f15708506"
},
"type": "cosign container image signature"
},
"optional": null
}
]
```
The output of this command showed us that the image is signed by the key we expected.
Since we are the only ones that have access to our private key, this means that no one except us could have pushed this image and signature to the container repo.
Hence, the contents of this image have not been tampered with since we pushed it.
Let's try to verify an image that we have NOT signed.
```console
$ cosign verify --key=cosign.pub asankov/not-signed
Error: no matching signatures:
main.go:62: error during command execution: no matching signatures:
```
Just as expected, `cosign` could not verify the signature of this image (because there was not one).
In this example, this image (`asankov/not-signed`) is not signed at all, but we would have gotten the same error if someone had signed this image with a different key than the one we are using to verify it.
### Verifying signatures in Kubernetes
In the previous example, we were verifying the signatures by hand.
However, that is good only for demo purposes or for playing around with the tool.
In a real-world scenario, you would want this verification to be done automatically at the time of deploy.
Fortunately, there are many `cosign` integrations for doing that.
For example, if we are using Kubernetes, we can deploy a validating webhook that will audit all new deployments and verify that the container images used by them are signed.
For Kubernetes you can choose from 3 existing integrations - [Gatekeeper](https://github.com/sigstore/cosign-gatekeeper-provider), [Kyverno](https://kyverno.io/docs/writing-policies/verify-images/) or [Connaisseur](https://github.com/sse-secure-systems/connaisseur#what-is-connaisseur).
You can choose one of the three depending on your preference, or if you are already using them for something else.
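For illustration, here is a rough sketch of what a Kyverno policy verifying cosign signatures might look like (the policy name, image pattern and key are illustrative; consult the Kyverno docs for the exact schema of your version):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signature
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "docker.io/asankov/*"
          attestors:
            - entries:
                - keys:
                    # Contents of the cosign.pub file generated earlier
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```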
## Dangers to be aware of
As with everything else, signing images is not a silver bullet and will not solve all your security problems.
There is still the problem that your private keys might leak, in which case anyone can sign anything and it will still pass your signature check.
However, integrating signing into your workflow adds yet another layer of defence and one more hoop for attackers to jump over.
## Summary
Signing artifacts prevents supply-chain and man-in-the-middle attacks, by allowing you to verify the integrity of your artifacts.
[Sigstore](https://sigstore.dev/) and [cosign](https://docs.sigstore.dev/cosign/overview/) are useful tools to sign your artifacts and they come with many integrations to choose from.

View File

@ -0,0 +1,84 @@
# Systems Vulnerability Scanning
## What is systems vulnerability scanning?
Vulnerability scanning is the process of scanning a network or system to identify any existing security vulnerabilities.
It is a proactive measure used to detect any weaknesses that an attacker may exploit to gain unauthorised access to a system or network.
Vulnerability scanning can be either manual or automated.
It can involve scanning for known vulnerabilities, analysing the configuration of a system or network, or using an automated tool to detect any possible vulnerabilities.
## How do you perform a vulnerability scan?
A vulnerability scan is typically performed with specialised software that searches for known weaknesses and security issues in the system.
The scan typically looks for missing patches, known malware, open ports, weak passwords, and other security risks.
Once the scan is complete, the results are analysed to determine which areas of the system need to be addressed to improve its overall security.
## What are the types of vulnerability scans?
There are two main types of vulnerability scan: unauthenticated and authenticated.
Unauthenticated scans are conducted without any credentials and, as such, can only provide limited information about potential vulnerabilities.
This type of scan helps identify low-hanging fruit, such as unpatched systems or open ports.
Authenticated scans, on the other hand, are conducted with administrative credentials.
This allows the scanning tool to provide much more comprehensive information about potential vulnerabilities, including those that may not be easily exploitable.
In the next two days we are going to take a look at container and network vulnerability scans, which are more specific subsets of systems vulnerability scanning.
## Why are vulnerability scans important?
Vulnerabilities are widespread across organisations of all sizes.
New ones are discovered constantly or can be introduced due to system changes.
Criminal hackers use automated tools to identify and exploit known vulnerabilities and access unsecured systems, networks or data.
Exploiting vulnerabilities with automated tools is simple: attacks are cheap, easy to run and indiscriminate, so every Internet-facing organisation is at risk.
All it takes is one vulnerability for an attacker to access your network.
This is why applying patches to fix these security vulnerabilities is essential.
Updating your software, firmware and operating systems to the newest versions will help protect your organisation from potential vulnerabilities.
Worse, most intrusions are not discovered until it is too late: the global median dwell time between the start of a cyber intrusion and its identification is 24 days.
## What does a vulnerability scan test?
Automated vulnerability scanning tools scan for open ports and detect common services running on those ports.
They identify any configuration issues or other vulnerabilities on those services and look at whether best practice is being followed, such as using TLSv1.2 or higher and strong cipher suites.
A vulnerability scanning report is then generated to highlight the items that have been identified.
By acting on these findings, an organisation can improve its security posture.
## Who conducts vulnerability scans?
IT departments usually undertake vulnerability scanning if they have the expertise and software to do so, or they can call on a third-party security service provider.
Vulnerability scans are also performed by attackers who scour the Internet to find entry points into systems and networks.
Many companies have bug bounty programs that allow ethical hackers to report vulnerabilities and get paid for doing so.
Usually bug bounty programs have boundaries, e.g. they define what is allowed and what is not.
Participating in bug bounty programs must be done responsibly.
Hacking is a crime, and if you are caught you cannot just claim that you did it for good, or that you were not going to exploit your findings.
## How often should you conduct a vulnerability scan?
Vulnerability scans should be performed regularly so you can detect new vulnerabilities quickly and take appropriate action.
This will help identify your security weaknesses and the extent to which you are open to attack.
## Penetration testing
Penetration testing is the next step after vulnerability scanning.
In penetration testing professional ethical hackers combine the results of automated scans with their expertise to reveal vulnerabilities that may not be identified by scans alone.
Penetration testers will also consider your environment (a significant factor in determining a vulnerability's true severity) and upgrade or downgrade the score as appropriate.
A scan can detect something that is a vulnerability but cannot be actively exploited because of the way it is incorporated into our system.
This makes the vulnerability a low-priority one, because why fix something that presents no danger to you?
If an issue comes up in penetration testing, that means the issue is exploitable and probably a high priority - if the penetration testers managed to exploit it, so will the attackers.

View File

@ -0,0 +1,129 @@
# Containers Vulnerability Scanning
[Yesterday](day25.md) we learned that vulnerability scanning is the process of scanning a network or system to identify any existing security vulnerabilities.
We also learned that Containers Vulnerability Scanning is a subset of Systems Vulnerability Scanning, e.g. we are only scanning the "containers" part of our system.
In [Day 14](day14.md) we learned what container image vulnerability scanning is and how it makes us more secure.
Then in [Day 15](day15.md) we learned more about that and on Days [21](day21.md) and [22](day22.md) we learned how to integrate the scanning process into our CI/CD pipelines
so that it is automatic and enforced.
Today, we are going to look at other techniques of scanning and securing containers.
Vulnerability scanning is important, but is not a silver bullet and not a guarantee that you are secure.
There are a few reasons for that.
First, image scanning only shows you the list of _known_ vulnerabilities.
There might be many vulnerabilities which have not been discovered, but are still there and could be exploited.
Second, the security of our deployments depends not only on the image and number of vulnerabilities, but also on the way we deploy that image.
For example, if we deploy an insecure application on the open internet where everyone has access to it, or leave the default SSH port and password of our VM,
then it does not matter whether our container has vulnerabilities or not, because the attackers will use the other holes in our system to get in.
That is why today we are going to take a look at a few other aspects of containers vulnerability scanning.
## Host Security
Containers run on hosts.
Docker containers run on hosts that have the Docker Daemon installed.
Same is true for containerd, podman, cri-o, and other container runtimes.
If your host is not secured, and someone manages to break into it, they will probably have access to your containers and be able to start, stop, modify them, etc.
That is why it's important to secure the host and secure it well.
Securing VMs is a deep topic I will not go into today, but the most basic things you can do are:
- limit the visibility of the machine on the public network
- if possible use a Load Balancer to access your containers, and make the host machine not visible on the public internet
- close all unnecessary ports
- use strong password for SSH and RDP
At the bottom of this article I will link two articles from AWS and VMware about VM security.
## Network Security
Network security is another deep topic, which we will look into in better detail [tomorrow](day27.md).
At a minimum, you should not have network exposure you don't need.
E.g. if Container A does not need to make network calls to Container B, it should not be able to make these calls in the first place.
In Docker you can define [different network drivers](https://docs.docker.com/network/) that can help you with this.
In Kubernetes there are [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) that limit which container has access to what.
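For example, here is a minimal sketch of a Kubernetes NetworkPolicy that denies all ingress traffic to the Pods in a namespace (you would then add more policies to allow only the traffic you actually need):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {} # applies to all Pods in the namespace
  policyTypes:
    - Ingress
```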
## Security misconfiguration
When working with containers, there are a few security misconfigurations you can make that can put you in danger of being hacked.
### Capabilities
One such thing is giving your container excessive capabilities.
[Linux capabilities](https://man7.org/linux/man-pages/man7/capabilities.7.html) determine what syscalls your container can execute.
The best practice is to be aware of the capabilities your containers need and assign only those.
That way you can be sure that a leftover capability that was never needed cannot be abused by an attacker.
In practice, it is hard to know exactly which capabilities your containers need, because that involves complex monitoring of your container over time.
Even the developers that wrote the code are probably not aware of exactly which capabilities are needed to perform the actions their code is doing.
That is because capabilities are a low-level construct and developers usually write higher-level code.
However, it is good to know which capabilities you should avoid assigning to your containers, because they are too overpowered and give it too many permissions.
One such capability is `CAP_SYS_ADMIN` which is way overpowered and can do a lot of things.
Even the Linux docs of this capability warn you that you should not be using this capability if you can avoid it.
### Running as root
Running containers as root is a really bad practice and it should be avoided as much as possible.
Of course, there might be situations in which you _must_ run containers as root.
One such example are the core components of Kubernetes, which run as root containers because they need a lot of privileges on the host.
However, if you are running a simple web server, or something like this, you should not have the need to run the container as root.
Running a container as root basically means that you are throwing away all the isolation containers give you, as a root container has almost full control over the host.
A lot of container runtime vulnerabilities are only applicable if containers are running as root.
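A minimal sketch of a container securityContext that avoids these pitfalls (the Pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-demo
spec:
  containers:
    - name: app
      image: my-registry/my-image:1.0.0
      securityContext:
        runAsNonRoot: true   # refuse to start if the image runs as root
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL            # drop all capabilities, add back only what is needed
```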
Tools like [falco](https://github.com/falcosecurity/falco) and [kube-bench](https://github.com/aquasecurity/kube-bench) will warn you if you are running containers as root, so that you can take actions and change that.
### Resource limits
Not defining resource limits for your containers can lead to a DDoS attack that brings down your whole infrastructure.
When you are being DDoS-ed the workload starts consuming more memory and CPU.
If that workload is a container with no limits, at some point it will drain all the available resources from the host and there will be none left for the other containers on that host.
At some point, the whole host might go down, which will lead to more pressure on your other hosts and can have a domino effect on your whole infra.
If you have sensible limits for your container, it will consume up to them, but the orchestrator will not give it more.
At some point, the container will die due to lack of resources, but nothing else will happen.
Your host and other containers will be safe.
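A minimal sketch of setting such limits on a Kubernetes container (the values are illustrative and should be tuned to your workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-demo
spec:
  containers:
    - name: app
      image: my-registry/my-image:1.0.0
      resources:
        requests:
          cpu: 100m      # guaranteed share, used for scheduling
          memory: 128Mi
        limits:
          cpu: 500m      # the container is throttled above this
          memory: 256Mi  # the container is OOM-killed above this
```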
## Summary
Containers Vulnerability Scanning is more than just scanning for CVEs.
It includes things like proper configuration, host security, network configuration, etc.
There is not one tool that can help with this, but there are open source solutions that you can combine to achieve the desired results.
Most of these lessons are useful no matter the orchestrator you are using.
You can be using Kubernetes, OpenShift, AWS ECS, Docker Compose, VMs with Docker, etc.
The basics are the same, and you should adapt them to the platform you are using.
Some orchestrators give you more features than others.
For example, Kubernetes has [dynamic admission controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that lets you define custom checks for your resources.
As far as I am aware, Docker Compose does not have something like this, but if you know what you want to achieve it should not be difficult to write your own.
## Resources
[This article](https://sysdig.com/blog/container-security-best-practices/) by Sysdig contains many best practices for containers vulnerability scanning.
Some of them like container image scanning and Infrastructure-as-Code scanning we already mentioned in previous days.
It also includes other useful things like [Host scanning](https://sysdig.com/blog/vulnerability-assessment/#host), [real-time logging and monitoring](https://sysdig.com/blog/container-security-best-practices/#13) and [security misconfigurations](https://sysdig.com/blog/container-security-best-practices/#11).
More on VM security:
<https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security.html>
<https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-60025A18-8FCF-42D4-8E7A-BB6E14708787.html>

View File

@ -0,0 +1,84 @@
# Network Vulnerability Scan
On [Day 25](day25.md) we learned that vulnerability scanning is the process of scanning a network or system to identify any existing security vulnerabilities.
We also learned that Network Vulnerability Scanning is a subset of Systems Vulnerability Scanning, e.g. we are only scanning the network part of our system.
Today we are going to dive deeper into what Network Vulnerability Scanning is and how we can do it.
## Network Vulnerability Scanning
**Network vulnerability scanning** is the process of identifying weaknesses on a network that is a potential target for exploitation by threat actors.
Once upon a time, before the cloud, network security was easy (sort of, good security is never easy).
You build a huge firewall around your data center, allow traffic only to the proper entrypoints and assume that everything that managed to get inside is legitimate.
This approach has one huge flaw - if an attacker managed to get through the wall, there are no more lines of defence to stop them.
Nowadays, such an approach would work even less.
With the cloud and microservices architectures, the number of actors in a network has grown exponentially.
This requires us to change our mindset and adopt new processes and tools in building secure systems.
One such process is **Network Vulnerability Scanning**.
The tool that does that is called **Network Vulnerability Scanner**.
## How does network vulnerability scanning work?
Vulnerability scanning software relies on a database of known vulnerabilities and automated tests for them.
A scanner would scan a wide range of devices and hosts on your networks, identifying the device type and operating system, and probing for relevant vulnerabilities.
A scan may be purely network-based, conducted from the wider internet (external scan) or from inside your local intranet (internal scan).
It may be a deep inspection that is possible when the scanner has been provided with credentials to authenticate itself as a legitimate user of the host or device.
## Vulnerability management
After a scan has been performed and has found vulnerabilities, the next step is to address them.
This is the vulnerability management phase.
A vulnerability could be marked as false positive, e.g. the scanner reported something that is not true.
It could be acknowledged and then assessed by the security team.
Many vulnerabilities can be addressed by patching, but not all.
A cost/benefit analysis should be part of the process because not all vulnerabilities are security risks in every environment, and there may be business reasons why you can't install a given patch.
It would be useful if the scanner reports alternative means to remediate the vulnerability (e.g., disabling a service or blocking a port via firewall).
## Caveats
Similar to container image vulnerability scanning, network vulnerability scanning tests your system for _known_ vulnerabilities.
So it will not find anything that has not already been reported.
Also, it will not protect you from something like exposing your admin panel to the internet and using the default password.
(Although I would assume that some network scanners are smart enough to test for well-known endpoints that should not be exposed).
At the end of the day, it's up to you to know your system, and to know the way to test it, and protect it.
Tools only go so far.
## Network Scanners
Here is a list of network scanners that can be used for that purpose.
**NOTE:** The tools on this list are not free and open-source, but most of them have free trials, which you can use to evaluate them.
- [Intruder Network Vulnerability Scanner](https://www.intruder.io/network-vulnerability-scanner)
- [SecPod SanerNow Vulnerability Management](https://www.secpod.com/vulnerability-management/)
- [ManageEngine Vulnerability Manager Plus](https://www.manageengine.com/vulnerability-management/)
- [Domotz](https://www.domotz.com/features/network-security.php)
- [Microsoft Defender for Endpoint](https://www.microsoft.com/en-us/security/business/endpoint-security/microsoft-defender-endpoint)
- [Rapid7 InsightVM](https://www.rapid7.com/products/insightvm/)
## Summary
As with all the security processes we talked about in the previous day, network scanning is not a silver bullet.
Utilizing a network scanner would not make you secure if you are not taking care of the other aspects of systems security.
Also, using a tool like a network scanner does not mean that you don't need a security team.
Quite the opposite: a good Secure SDLC starts with enabling the security team to run that kind of tool against the system.
Then they would also be responsible for triaging the results and working with the relevant teams that need to fix the vulnerabilities.
That will be done by either patching the system, closing a hole that is not necessary, or re-architecting the system in a more secure manner.
## Resources
<https://www.comparitech.com/net-admin/free-network-vulnerability-scanners/>
<https://www.rapid7.com/solutions/network-vulnerability-scanner/>

View File

@ -0,0 +1,147 @@
# Introduction to Runtime Defence & Monitoring
Welcome to all the DevOps and DevSecOps enthusiasts! 🙌
We are here to learn about "Runtime defence". This is a huge subject, but we are not deterred by it and will learn about it together in the next 7 days.
![](images/day28-0.png)
This subject is split into these major parts:
* Monitoring (1st and 2nd day)
* Intrusion detection
* Network defense
* Access control
* Application defense subjects (6th and 7th days)
The goal is to get you up to a level in these subjects, where you can start to work on your own.
Let's start 😎
# System monitoring and auditing
## Why is this the first subject of the "Runtime defense and monitoring" topic?
Monitoring computer systems is a fundamental tool for security teams, providing visibility into what is happening within the system. Without monitoring, security teams would be unable to detect and respond to security incidents.
To illustrate this point, consider physical security. If you want to protect a building, you must have security personnel 24/7 at every entrance to control who is entering the building. In the same example, you are also tasked with the security of everyone inside the building, so you must have personnel all around. Of course, this does not scale well, which is why installing CCTV cameras at key places is a much better solution today.
While scaling such physical security measures is difficult, for computer systems, it is easier to achieve through the installation of monitoring tools. Monitoring provides a basic level of control over the system, allowing security teams to detect problems, understand attack patterns, and maintain overall security. Beyond monitoring, there are additional security measures such as detection systems, which we can discuss further.
Elaborating on this, the key reasons why monitoring is important for runtime security include:
* Identifying security incidents: Monitoring can help organizations detect potential security incidents such as malware infections, unauthorized access attempts, and data breaches.
* Mitigating risks: By monitoring for signs of security threats, organizations can take action to mitigate those risks before they lead to a breach or other security incident.
* Complying with regulations: Many industries are subject to regulatory requirements that mandate certain security controls, including monitoring and incident response.
* Improving incident response: Monitoring provides the necessary data to quickly identify and respond to security incidents, reducing the impact of a breach and allowing organizations to recover more quickly.
* Gaining visibility: Monitoring provides insight into system activity, which can be used to optimize performance, troubleshoot issues, and identify opportunities for improvement.
## What to monitor and record?
In theory, the ideal solution would be to log everything that is happening in the system and keep the data forever.
However, this is not practical. Let's take a look at what needs to be monitored and what events need to be recorded.
When monitoring cloud-based computer services, there are several key components that should be closely monitored to ensure the system is secure and operating correctly. These components include:
Control plane logging: all the orchestration of the infrastructure goes through the control plane, so it is crucial to always know who did what at the infrastructure level. This does not only enable the identification of malicious activity but also enables troubleshooting of the system.
Operating system level logs: log operating-system-level events to track system activity and detect any errors or security-related events, such as failed login attempts or system changes. Deeper logs contain information about which user does what at the machine level, which is important for identifying malicious behavior.
Network activity: Monitor network traffic to identify any unusual or unauthorized activity that could indicate an attack or compromise of the network.
Application activity and performance: Monitor application activity to detect misbehavior in case the attack is coming from the application level. Performance monitoring is important to ensure that services are running smoothly and to respond to any performance issues that may arise.
Resource utilization: Monitor the use of system resources such as CPU, memory, and disk space to identify bottlenecks or other performance issues. Unusual activity can be a result of denial-of-service-like attacks or attackers using your computation resources for their own gain.
Security configurations: Monitor security configurations, such as firewall rules and user access controls, to ensure that they are correctly configured and enforced.
Backup and disaster recovery systems: Monitor backup and disaster recovery systems to ensure that they are operating correctly and data can be recovered in the event of a failure or disaster.
## A practical implementation
In this part, we move from theory to practice.
There isn't a silver bullet here; every system has its own tools. We will work with Kubernetes as the infrastructure and the [Microservices demo](https://github.com/GoogleCloudPlatform/microservices-demo) application.
### Control plane monitoring
Kubernetes has an event auditing infrastructure called [audit logs](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/).
Kubernetes API server has a configuration called `Audit Policy` which tells the API server what to log. The log can either be stored in a file or sent to a webhook.
We are using Minikube in our example, and for the sake of testing this, we will send the audit logs to the `stdout` of the API server (which is its log).
```bash
mkdir -p ~/.minikube/files/etc/ssl/certs
cat <<EOF > ~/.minikube/files/etc/ssl/certs/audit-policy.yaml
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
EOF
minikube start --extra-config=apiserver.audit-policy-file=/etc/ssl/certs/audit-policy.yaml --extra-config=apiserver.audit-log-path=-
```
You can follow the logs with this Kubectl command:
```bash
kubectl logs kube-apiserver-minikube -n kube-system | grep audit.k8s.io/v1
```
Every API operation is logged to the stream.
Here is an example of an event "getting all secrets in default namespace":
```json
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"8e526e77-1fd9-43c3-9714-367fde233c99","stage":"RequestReceived","requestURI":"/api/v1/namespaces/default/secrets?limit=500","verb":"list","user":{"username":"minikube-user","groups":["system:masters","system:authenticated"]},"sourceIPs":["192.168.49.1"],"userAgent":"kubectl/v1.25.4 (linux/amd64) kubernetes/872a965","objectRef":{"resource":"secrets","namespace":"default","apiVersion":"v1"},"requestReceivedTimestamp":"2023-02-11T20:34:11.015389Z","stageTimestamp":"2023-02-11T20:34:11.015389Z"}
```
As you can see, all key aspects of the infrastructure request are logged here (who, what, when).
Storing this in a file is not practical. Audit logs are usually shipped to a logging system and database for later use. Managed Kubernetes services use their own "cloud logging" service to capture Kubernetes audit logs. In native Kubernetes, you could use a log shipper like Promtail to collect the audit logs, as described [here](https://www.bionconsulting.com/blog/monitoring-and-gathering-metrics-from-kubernetes-auditlogs).
### Resource monitoring
The Kubernetes ecosystem enables multiple ways to monitor resources and logs; however, the most common example is Prometheus (metrics and event database) and Grafana (UI and dashboards). These two open-source tools are an easy one-stop shop for multiple tasks around monitoring.
Out of the box, we will get resource monitoring for the Kubernetes nodes.
Here is how we install it on the Minikube we started in the previous part. Make sure you have `helm` installed first.
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm install prometheus prometheus-community/prometheus
helm install grafana grafana/grafana
kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-np
```
Now, these services should be installed.
To access the Grafana UI, first get the admin password:
```bash
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```
Then get the service URL and log in to the UI:
```bash
minikube service grafana-np --url
```
![](images/day28-1.png)
After you have logged in, go to "Data sources/Prometheus" and add our Prometheus service as a source. Set the URL to `http://prometheus-server` and click "Save & test".
Now, to set up resource dashboards, go to the "Dashboards" side menu and choose "Import". Here you can import a premade dashboard. For example, the node metrics dashboard can be imported by putting the number `6126` in the `Import via grafana.com` field and clicking the `Load` button.
![](images/day28-2.png)
Browse Grafana for more dashboards [here](https://grafana.com/grafana/dashboards/).
# Next...
Tomorrow we will continue to the application level. Application logs and behavior monitoring will be in focus. We will continue to use the same setup and go deeper into the rabbit hole 😄

View File

@ -0,0 +1,131 @@
# Recap
Yesterday we discussed why monitoring, logging and auditing are the basics of runtime defense. In short: you cannot protect a live system without knowing what is happening. We built a Minikube cluster yesterday with Prometheus and Grafana. We are continuing to build on this stack today.
Let's start 😎
# Application logging
Application logs are important from many perspectives. They are the way operators know what is happening inside the applications they run on their infrastructure. For the same reason, keeping application logs is important from a security perspective, because they provide a detailed record of the system's activity, which can be used to detect and investigate security incidents.
By analyzing application logs, security teams can identify unusual or suspicious activity, such as failed login attempts, access attempts to sensitive data, or other potentially malicious actions. Logs can also help track down the source of security breaches, including when and how an attacker gained access to the system, and what actions they took once inside.
In addition, application logs can help with compliance requirements, such as those related to data protection and privacy. By keeping detailed logs, organizations can demonstrate that they are taking the necessary steps to protect sensitive data and comply with regulations.
Loki is a component in the Grafana stack that collects logs (shipped by Promtail) from Pods running in the Kubernetes cluster and stores them, just as Prometheus does for metrics.
To install Loki with Promtail on your cluster, install the following Helm chart.
```bash
helm install loki --namespace=monitoring grafana/loki-stack
```
This will put a Promtail and a Loki instance in your Minikube and will start collecting logs. Note that this installation is not production grade and is here only to demonstrate the capabilities.
You should see that the Pods are ready:
```bash
$ kubectl get pods | grep loki
loki-0 1/1 Running 0 8m25s
loki-promtail-mpwgq 1/1 Running 0 8m25s
```
Now go to your Grafana UI (just as we did yesterday):
```bash
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
minikube service grafana-np --url
```
Take the secret of the admin password (if you haven't changed it already) and print the URL of the service, then go to the URL and log in.
In order to see the logs in Grafana, we need to hook up Loki as a "data source" just as we did yesterday with Prometheus.
![](images/day29-1.gif)
Now add a new Loki data source here.
The only thing that needs to be changed in the default configuration is the endpoint of the Loki service; in our case it is `http://loki:3100`, as shown below:
![](images/day29-2.png)
Now click "Save & test" and your Grafana should be now connected to Loki.
You can explore your logs in the "Explore" screen (click Explore in the left menu).
To try our centralized logging system, we are going to check when the etcd container did compaction in the last hour.
Choose the Loki source at the top of the screen (left of the Explore title) and switch from the query builder mode (visual builder) to code.
Add the following line in the query field:
```
{container="etcd"} |= `compaction`
```
and click "run query" on the top right part of the screen.
You should see logs in your browser, like this:
![](images/day29-3.png)
Voila! You have a logging system ;-)
# Application behavior monitoring
We now move from general monitoring needs to low-level application monitoring for security purposes. A modern way to do this is to monitor fine-grained application behavior using eBPF.
Monitoring applications with eBPF (extended Berkeley Packet Filter) is important from a security perspective because it provides a powerful and flexible way to monitor and analyze the behavior of applications and the underlying system. Here are some reasons why eBPF is important for application monitoring and security:
1. Fine-grained monitoring: eBPF allows for fine-grained monitoring of system and application activity, including network traffic, system calls, and other events. This allows you to identify and analyze security threats and potential vulnerabilities in real-time.
2. Relatively low overhead: eBPF has very low overhead, making it ideal for use in production environments. It can be used to monitor and analyze system and application behavior without impacting performance or reliability at scale.
3. Customizable analysis: eBPF allows you to create custom analysis and monitoring tools that are tailored to the specific needs of your application and environment. This allows you to identify and analyze security threats and potential vulnerabilities in a way that is tailored to your unique needs.
4. Real-time analysis: eBPF provides real-time analysis and monitoring, allowing you to detect and respond to security threats and potential vulnerabilities as they occur. This helps you to minimize the impact of security incidents and prevent data loss or other negative outcomes.
Falco is a well-respected project which installs agents on your Kubernetes nodes and monitors applications at the eBPF level.
In this part, we will install Falco in our Minikube and channel the data it collects to Prometheus (and eventually, Grafana). This part is based on this great [tutorial](https://falco.org/blog/falco-kind-prometheus-grafana/).
In order to install Falco, you need to create private keys and certificates for client-server communication between Falco and its exporter.
We will use `falcoctl` for this, however you could generate your certificates and keys with `openssl` if you want.
To install `falcoctl`, run the following command (if you are running Linux on amd64 CPU, otherwise check out [here](https://github.com/falcosecurity/falcoctl#installation)):
```bash
LATEST=$(curl -sI https://github.com/falcosecurity/falcoctl/releases/latest | awk '/location: /{gsub("\r","",$2);split($2,v,"/");print substr(v[8],2)}')
curl --fail -LS "https://github.com/falcosecurity/falcoctl/releases/download/v${LATEST}/falcoctl_${LATEST}_linux_amd64.tar.gz" | tar -xz
sudo install -o root -g root -m 0755 falcoctl /usr/local/bin/falcoctl
```
Now generate key pair:
```bash
FALCOCTL_NAME=falco-grpc.default.svc.cluster.local FALCOCTL_PATH=$PWD falcoctl tls install
```
We need to add Falco Helm repo and install the Falco services and the exporter:
```bash
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco --set driver.kind=ebpf --set-file certs.server.key=$PWD/server.key,certs.server.crt=$PWD/server.crt,certs.ca.crt=$PWD/ca.crt --set falco.grpc.enabled=true,falco.grpcOutput.enabled=true
helm install falco-exporter --set-file certs.ca.crt=$PWD/ca.crt,certs.client.key=$PWD/client.key,certs.client.crt=$PWD/client.crt falcosecurity/falco-exporter
```
Make sure that all Falco Pods are running OK
```bash
$ kubectl get pods | grep falco
falco-exporter-mlc5h 1/1 Running 3 (32m ago) 38m
falco-mlvc4 2/2 Running 0 31m
```
Since Prometheus detects the exporter automatically and we already added the Prometheus data source, we can go directly to Grafana and install the [Falco dashboard](https://grafana.com/grafana/dashboards/11914-falco-dashboard/).
Go to "Dashboard" left side menu and click import. In "Import via grfana.com" insert the ID `11914` and click "load".
Now you should see Falco events in your Grafana! 😎
# Next...
Next, we will look into how to detect attacks at runtime. See you tomorrow 😃

View File

@ -0,0 +1,73 @@
# Day 42 - Programming Language:Introduction to Python
Guido van Rossum created Python, a high-level, interpreted and dynamic programming language, in the late 1980s. It is widely used in a range of applications, including web development, DevOps, data analysis, artificial intelligence and machine learning.
## Installation and Setting up the Environment:
Python is available for download and installation on a variety of platforms, including Windows, Mac, and Linux. Python can be downloaded from [the official website](https://www.python.org/).
![Python Website](/2023/images/day42-01.png)
Following the installation of Python, you can configure your environment with an Integrated Development Environment (IDE) such as [PyCharm](https://www.jetbrains.com/pycharm/), [Visual Studio Code](https://code.visualstudio.com/), or IDLE (the default IDE that comes with Python).
I personally use Visual Studio Code.
You can also use a cloud environment like [Replit](https://replit.com/), where you will not have to configure and install Python locally.
![Replit Website](/2023/images/day42-02.png)
## Basic Data Types:
Python includes a number of built-in data types for storing and manipulating data. The following are the most common ones:
- Numbers: integers, floating-point numbers, and complex numbers
- Strings: sequences of characters
- Lists: ordered collections of elements
- Tuples: ordered, immutable collections of elements
- Dictionaries: unordered collections of key-value pairs
## Operations and Expressions:
With the above data types, you can perform a variety of operations in Python, including arithmetic, comparison, and logical operations.
Expressions can also be used to manipulate data, such as combining multiple values into a new value.
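Here is a short sketch of such operations (the variable names are illustrative):

``` python
x = 10
y = 3

# Arithmetic operations
print(x + y)   # 13
print(x % y)   # 1 (remainder of the division)

# Comparison operations
print(x > y)   # True
print(x == y)  # False

# Logical operations
print(x > 5 and y > 5)  # False
print(x > 5 or y > 5)   # True
```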
## Variables:
A variable is declared and assigned a value in Python by using the assignment operator =. The variable is on the left side of the operator, and the value being assigned is on the right, which can be an expression like `2 + 2` or even other variables. As an example:
``` python
a = 7 # assign variable a the value 7
b = a + 3 # assign variable b the value of a plus 3
c = b # assign variable c the value of b
```
These examples assign numbers to variables, but numbers are only one of the data types supported by Python. Notice that there is no type declaration for the variables. This is because Python is a dynamically typed language, which means that the variable type is determined by the data assigned to it. The a, b, and c variables in the preceding examples are integer types, which can store both positive and negative whole numbers.
Variable names are case-sensitive and can contain any letter, number, or underscore (_). They cannot, however, begin with a number.
Along with numbers, strings are among the most commonly used data types. A string is a sequence of one or more characters. Strings are typically declared with single quotation marks, but they can also be declared with double quotation marks:
``` python
a = 'My name is Rishab'
b = "This is also a string"
```
You can add strings to other strings — an operation known as "concatenation" — with the same + operator that adds two numbers:
``` python
x = 'My name is' + ' ' + 'Rishab'
print(x) # outputs: My name is Rishab
```
## Printing to the console:
The print function in Python is one of more than 60 built-in functions. It outputs text to the screen.
Let's see an example of the most famous "Hello World!":
``` python
print('Hello World!')
```
The print argument is a string, which is one of Python's basic data types for storing and managing text. Print outputs a newline character at the end of the line by default, so subsequent calls to print will begin on the next line.
## Resources:
[Learn Python - Full course by freeCodeCamp](https://youtu.be/rfscVS0vtbw)
[Python tutorial for beginners by Nana](https://youtu.be/t8pPdKYpowI)
[Python Crash Course book](https://amzn.to/40NfY45)

View File

@ -0,0 +1,114 @@
# Day 43 - Programming Language: Python
Welcome to the second day of Python, and today we will cover some more concepts:
- Loops
- Functions
- Modules and libraries
- File I/O
## Loops (for/while):
Loops are used to repeatedly run a block of code.
### for loop
Using the `for` loop, a piece of code is executed once for each element of a sequence (such as a list, string, or tuple). Here is an example of a for loop that prints each programming language in a list:
``` python
languages = ['Python', 'Go', 'JavaScript']
# for loop
for language in languages:
print(language)
```
Output
```
Python
Go
JavaScript
```
### while loop
The `while` loop is used to execute a block of code repeatedly as long as a condition is True. Here's an example of a while loop that prints the numbers from 1 to 5:
``` python
i = 1
n = 5
# while loop from i = 1 to 5
while i <= n:
print(i)
i = i + 1
```
Output:
```
1
2
3
4
5
```
## Functions
Functions are reusable chunks of code with arguments/parameters and return values.
Using the `def` keyword in Python, you can define a function. In your program, functions can be used to encapsulate complex logic and can be called several times.
Functions can also be used to simplify code and make it easier to read. Here is an illustration of a function that adds two numbers:
``` python
# function has two arguments num1 and num2
def add_numbers(num1, num2):
sum = num1 + num2
print('The sum is: ',sum)
```
``` python
# calling the function with arguments to add 5 and 2
add_numbers(5, 2)
# Output: The sum is:  7
```
## Understanding Modules and Importing Libraries:
A module is a file in Python that contains definitions and statements. Modules let you arrange your code and reuse it across many apps.
The Standard Library, a sizable collection of Python modules, offers a wide range of capabilities, such as file I/O, regular expressions, and more.
Additional libraries can be installed using package managers like pip.
You must import a module or library using the import statement in order to use it in your program. Here is an illustration of how to load the math module and calculate a number's square root using the sqrt() function:
``` python
import math
print(math.sqrt(16)) # 4.0
```
## File I/O
File I/O is used to read and write data to and from files on disk.
The built-in Python function open() can be used to open a file, after which you can read from and write to it using methods like read() and write().
To save system resources, you should always close the file after you are done with it.
An example of reading from a file and printing its content:
``` python
f = open("90DaysOfDevOps.txt", "r")
print(f.read())
f.close()
```
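And here is a short sketch of writing to a file (the filename is illustrative); opening a file with mode "w" creates it if it does not exist and overwrites its contents if it does:

``` python
f = open("new-file.txt", "w")
f.write("90 Days of DevOps")
f.close()
```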
## Exception Handling:
Exceptions are runtime errors that happen when your program runs into unexpected circumstances, such as dividing by zero or attempting to access a list element that doesn't exist.
Using a try/except block, you can manage exceptions in Python. The code in the try block is run, and if an exception arises, the code in the except block is run to handle it. In the example below, the file is opened in read mode, so writing to it raises an exception that the inner except block handles, while the outer except handles a failure to open the file at all.
``` python
try:
    f = open("90DaysOfDevOps.txt")
    try:
        f.write("Python is great")
    except:
        print("Something went wrong when writing to the file")
    finally:
        f.close()
except:
    print("Something went wrong when opening the file")
```
## Conclusion
That is it for today, I will see you tomorrow in Day 3 of Python!

BIN
2023/images/day03-1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 110 KiB

BIN
2023/images/day03-2.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 58 KiB

BIN
2023/images/day04-1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 372 KiB

BIN
2023/images/day04-2.jpg Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 215 KiB

BIN
2023/images/day06-1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 98 KiB

BIN
2023/images/day06-2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 54 KiB

BIN
2023/images/day06-3.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 40 KiB

BIN
2023/images/day06-4.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 75 KiB

BIN
2023/images/day06-5.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 16 KiB

BIN
2023/images/day09-1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 273 KiB

BIN
2023/images/day09-10.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 404 KiB

BIN
2023/images/day09-11.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 265 KiB

BIN
2023/images/day09-12.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 66 KiB

BIN
2023/images/day09-13.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 210 KiB

BIN
2023/images/day09-14.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 131 KiB

BIN
2023/images/day09-15.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 103 KiB

BIN
2023/images/day09-16.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 66 KiB

BIN
2023/images/day09-17.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 180 KiB

BIN
2023/images/day09-18.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 439 KiB

BIN
2023/images/day09-19.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 143 KiB

BIN
2023/images/day09-2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 90 KiB

BIN
2023/images/day09-3.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 254 KiB

BIN
2023/images/day09-4.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 237 KiB

BIN
2023/images/day09-5.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 193 KiB

BIN
2023/images/day09-6.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 165 KiB

BIN
2023/images/day09-7.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 175 KiB

BIN
2023/images/day09-8.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 222 KiB

BIN
2023/images/day11-1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 356 KiB

BIN
2023/images/day13-1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 196 KiB

BIN
2023/images/day28-0.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 199 KiB

BIN
2023/images/day28-1.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 134 KiB

BIN
2023/images/day28-2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 133 KiB

BIN
2023/images/day29-1.gif Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 21 KiB

BIN
2023/images/day29-2.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 43 KiB

BIN
2023/images/day29-3.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 183 KiB

BIN
2023/images/day42-01.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 419 KiB

BIN
2023/images/day42-02.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 602 KiB

File diff suppressed because it is too large

View File

@ -14,7 +14,9 @@ This will **not cover all things** "DevOps" but it will cover the areas that I f
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/N4N33YRCS)
[![](https://dcbadge.vercel.app/api/server/vqwPrNQsyK)](https://discord.gg/vqwPrNQsyK)
[![Discord Invite Link](https://dcbadge.vercel.app/api/server/vqwPrNQsyK)](https://discord.gg/vqwPrNQsyK)
![GitHub Repo Stars](https://img.shields.io/github/stars/michaelcade/90daysofdevops?style=social?)
The two images below will take you to the 2022 and 2023 editions of the learning journey.
@ -30,6 +32,8 @@ The two images below will take you to the 2022 and 2023 edition of the learning
</p>
</a>
From this year, we have built a website for the 90DaysOfDevOps Challenge :rocket: :technologist: - [Link to the website](https://www.90daysofdevops.com/#/2023)
The quickest way to get in touch is going to be via Twitter, my handle is [@MichaelCade1](https://twitter.com/MichaelCade1)

View File

@ -3,9 +3,10 @@
<head>
<meta charset="UTF-8">
<title>Document</title>
<title>90DaysOfDevOps</title>
<meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1" />
<meta name="description" content="Description">
<meta name="description"
content="A learning resource for the community to pick up a foundational theory and hands-on knowledge and understanding of the key areas of DevOps. Follow along and join the community!">
<meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0">
<link rel="stylesheet" href="//cdn.jsdelivr.net/npm/docsify/themes/dark.css" />
</head>

Some files were not shown because too many files have changed in this diff