Merge branch 'MichaelCade:main' into main

This commit is contained in:
SvetlomirBalevski 2023-01-09 22:58:08 +02:00 committed by GitHub
commit 8cfd25f11b
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
152 changed files with 2291 additions and 675 deletions

.github/FUNDING.yml (vendored, new file, 13 lines changed)

@ -0,0 +1,13 @@
# These are supported funding model platforms
github: [MichaelCade]
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # michaelcade1
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.github/workflows/welcome_workflow.yaml (vendored, new file, 18 lines changed)

@ -0,0 +1,18 @@
name: 'Welcome New Contributors'
on:
issues:
types: [opened]
pull_request_target:
types: [opened]
jobs:
welcome-new-contributor:
runs-on: ubuntu-latest
steps:
- name: 'Greet the contributor'
uses: garg3133/welcome-new-contributors@v1.2
with:
token: ${{ secrets.GITHUB_TOKEN }}
issue-message: 'Hello there, thanks for opening your first issue here. We welcome you to the #90DaysOfDevOps community!'
pr-message: 'Hello there, thanks for opening your first Pull Request. Someone will review it soon. Welcome to the #90DaysOfDevOps community!'

BIN 2022.jpg (binary file not shown; before: 51 KiB)
BIN 2022.png (new binary file, not shown; after: 238 KiB)


@ -77,14 +77,14 @@ The ones we want to learn more about are the build, install and run.
![](Images/Day10_Go8.png)
- `go run` - This command compiles and runs the main package comprised of the .go files specified on the command line. The executable is built in a temporary folder.
- `go build` - Compiles the packages and their dependencies in the current directory. If the project contains a `main` package, it places the executable in the current directory; if not, it places the compiled package in the `pkg` folder, where it can be imported and used by other Go programs. `go build` also enables you to build an executable file for any Go-supported OS platform.
- `go install` - The same as `go build` but places the executable in the `bin` folder.
We have run through `go build` and `go run`, but feel free to run through them again here if you wish; `go install`, as stated above, puts the executable in our `bin` folder.
![](Images/Day10_Go9.png)
Hopefully, if you are following along, you are watching one of the playlists or videos below. I am taking bits of all of these and translating them into my notes so that I can understand the foundational knowledge of the Golang language. The resources below are likely going to give you a much better understanding of a lot of the areas you need overall, but I am trying to document the 7 days' or 7 hours' worth of the journey with interesting things that I have found.
## Resources


@ -1,22 +1,22 @@
## Getting Hands-On with Python & Network
In this final section of Networking fundamentals, we are going to cover some automation tasks and tools with our lab environment created on [Day 26](day26.md).
We will be using an SSH tunnel to connect to our devices from our client instead of telnet; the SSH tunnel created between client and device is encrypted. We also covered SSH in the Linux section on [Day 18](day18.md).
## Access our virtual emulated environment
For us to interact with our switches, we either need a workstation inside the EVE-NG network, or you can deploy a Linux box there with Python installed to perform your automation ([resource for setting up Linux inside EVE-NG](https://www.youtube.com/watch?v=3Qstk3zngrY)), or you can do something like me and define a cloud for access from your workstation.
![](Images/Day27_Networking3.png)
To do this, we have right-clicked on our canvas, selected network, and then selected "Management(Cloud0)"; this will bridge out to our home network.
![](Images/Day27_Networking4.png)
However, we do not have anything inside this network, so we need to add connections from the new network to each of our devices. (My networking knowledge needs more attention, and I feel that you could just do this next step to the top router and then have connectivity to the rest of the network through this one cable.)
I have then logged on to each of our devices and run through the following commands for the interfaces applicable to where the cloud comes in.
```
enable
...
exit
sh ip int br
```
The final step gives us the DHCP address from our home network. My device network list is as follows:
| Node    | IP Address   | Home Network IP |
| ------- | ------------ | --------------- |
| Switch3 | 10.10.88.113 | 192.168.169.125 |
| Switch4 | 10.10.88.114 | 192.168.169.197 |
### SSH to a network device
With the above in place, we can now connect to our devices on our home network using our workstation. I am using PuTTY but also have access to other terminals, such as git bash, that give me the ability to SSH to our devices.
Below you can see we have an SSH connection to our router device (R1).
![](Images/Day27_Networking5.png)
### Using Python to gather information from our devices
The first example of how we can leverage Python is to gather information from all of our devices; in particular, I want to be able to connect to each one and run a simple command to provide me with interface configuration and settings. I have stored this script here: [netmiko_con_multi.py](Networking/netmiko_con_multi.py)
Now when I run this I can see each port configuration across all of my devices.
![](Images/Day27_Networking6.png)
This could be handy if you have a lot of different devices: create this one script so that you can centrally control and quickly understand all of the configurations in one place.
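The linked repository script does the real work; as a rough sketch of what such a script can look like (the IP addresses, credentials and `cisco_ios` device type below are placeholders for your own lab values, not the repo's exact contents):

```python
# A hedged sketch of gathering interface info from several devices with Netmiko.
# IPs and credentials are placeholders - substitute your own lab values.

def make_device(ip, username="admin", password="password"):
    """Build the connection dictionary that Netmiko's ConnectHandler expects."""
    return {
        "device_type": "cisco_ios",  # assumption: Cisco IOS images in the EVE-NG lab
        "host": ip,
        "username": username,
        "password": password,
    }

def gather_output(ips, command="show ip interface brief"):
    # Import deferred so make_device stays usable without netmiko installed.
    from netmiko import ConnectHandler
    results = {}
    for ip in ips:
        connection = ConnectHandler(**make_device(ip))
        results[ip] = connection.send_command(command)
        connection.disconnect()
    return results

# Example usage (requires live lab devices):
#   for ip, output in gather_output(["10.10.88.111", "10.10.88.112"]).items():
#       print(ip, output)
```

Looping over a list of devices like this is what lets one script collect every port configuration in a single run.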
### Using Python to configure our devices
The above is useful, but what about using Python to configure our devices? In our scenario, we have a trunked port between `SW1` and `SW2`; again, imagine if this was to be done across many of the same switches. We want to automate that and not have to manually connect to each switch to make the configuration change.
We can use [netmiko_sendchange.py](Networking/netmiko_sendchange.py) to achieve this. This will connect over SSH and perform that change on our `SW1`, which will also apply to `SW2`.
![](Images/Day27_Networking7.png)
Now, for those that look at the code, you will see that the message appears and tells us `sending configuration to device`, but there is no confirmation that this has happened. We could add additional code to our script to perform that check and validation on our switch, or we could modify our earlier script to show us this: [netmiko_con_multi_vlan.py](Networking/netmiko_con_multi_vlan.py)
![](Images/Day27_Networking8.png)
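That push-then-verify idea can be sketched as follows; this is a hedged illustration in the spirit of the linked scripts, not their exact code, and the interface name and trunk commands are example Cisco IOS lines:

```python
# Hedged sketch: push a config change with Netmiko, then read it back so we
# have confirmation rather than blind trust in "sending configuration to device".

def trunk_commands(interface="GigabitEthernet0/0"):
    """Return illustrative IOS lines that set an interface to trunk mode."""
    return [
        f"interface {interface}",
        "switchport trunk encapsulation dot1q",
        "switchport mode trunk",
    ]

def apply_and_verify(device, interface="GigabitEthernet0/0"):
    # Import deferred so trunk_commands stays usable without netmiko installed.
    from netmiko import ConnectHandler
    connection = ConnectHandler(**device)
    print("sending configuration to device")
    connection.send_config_set(trunk_commands(interface))
    # Verification step: read the interface config back from the switch.
    verification = connection.send_command(f"show run interface {interface}")
    connection.disconnect()
    return verification
```

The extra `send_command` at the end is the validation the paragraph above asks for: the script returns what the switch actually has, not just what we sent.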
### Backing up your device configurations
Another use case would be to capture our network configurations and make sure we have those backed up, but again, we don't want to be connecting to every device we have on our network, so we can also automate this using [backup.py](Networking/backup.py). You will also need to populate [backup.txt](Networking/backup.txt) with the IP addresses you want to back up.
Run your script and you should see something like the below.
![](Images/Day27_Networking9.png)
That could just be me writing a simple print script in Python, so I should show you the backup files as well.
![](Images/Day27_Networking10.png)
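As a hedged sketch of what a backup script like this can do (the output filename pattern and credentials are my own illustrative choices, not necessarily what backup.py uses):

```python
# Hedged sketch of a config backup script: read target IPs from backup.txt
# (one per line) and save each device's running configuration to its own file.
from pathlib import Path

def read_targets(text):
    """Parse backup.txt contents into a list of IP addresses, skipping blank lines."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def backup_devices(ip_file="backup.txt", username="admin", password="password"):
    # Import deferred so read_targets stays usable without netmiko installed.
    from netmiko import ConnectHandler
    for ip in read_targets(Path(ip_file).read_text()):
        connection = ConnectHandler(
            device_type="cisco_ios", host=ip,
            username=username, password=password,
        )
        config = connection.send_command("show running-config")
        # Illustrative filename pattern, e.g. 10.10.88.111_backup.txt
        Path(f"{ip}_backup.txt").write_text(config)
        connection.disconnect()
```

One backup file per IP means a single scheduled run captures the whole lab without touching any device by hand.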
### Paramiko
A widely used Python module for SSH. You can find out more at the official GitHub link [here](https://github.com/paramiko/paramiko).
We can install this module using the `pip install paramiko` command.
![](Images/Day27_Networking1.png)
We can verify the installation by entering the Python shell and importing the paramiko module.
![](Images/Day27_Networking2.png)
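The same verification can be done from a small script rather than the interactive shell:

```python
# Verify the paramiko installation: same idea as typing "import paramiko"
# at the >>> prompt, but usable in a script.
try:
    import paramiko
    print("paramiko", paramiko.__version__, "is installed")
except ImportError:
    print("paramiko is not installed - run: pip install paramiko")
```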
### Netmiko
The netmiko module targets network devices specifically, whereas paramiko is a broader tool for handling SSH connections overall.
Netmiko, which we have used above alongside paramiko, can be installed using `pip install netmiko`.
Netmiko supports many network vendors and devices; you can find a list of supported devices on the [GitHub page](https://github.com/ktbyers/netmiko#supports).
### Other modules
It is also worth mentioning a few other modules that we have not had the chance to look at, but they give a lot more functionality when it comes to network automation.
`netaddr` is used for working with and manipulating IP addresses; again, the installation is simple with `pip install netaddr`.
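The kind of IP address work `netaddr` is used for can be illustrated with Python's built-in `ipaddress` module (a standard-library alternative shown here because it needs no install; this is not netaddr's API):

```python
# IP address and subnet manipulation with the stdlib ipaddress module,
# the same category of task netaddr is used for.
import ipaddress

network = ipaddress.ip_network("10.10.88.0/24")  # the lab management subnet
print(network.netmask)        # 255.255.255.0
print(network.num_addresses)  # 256
print(ipaddress.ip_address("10.10.88.111") in network)  # True

# First three usable host addresses in the subnet:
print(list(network.hosts())[:3])
```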
You might find yourself wanting to store a lot of your switch configuration in an Excel spreadsheet; `xlrd` will allow your scripts to read the Excel workbook and convert rows and columns into a matrix. Run `pip install xlrd` to get the module installed.
Some more use cases where network automation can be used, which I have not had the chance to look into, can be found [here](https://github.com/ktbyers/pynet/tree/master/presentations/dfwcug/examples).
I think this wraps up our Networking section of #90DaysOfDevOps. Networking is one area that I have not touched for a while, and there is so much more to cover, but I am hoping that between my notes and the resources shared throughout, it is helpful for some.
## Resources
- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg)
- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8)
@ -122,8 +122,8 @@ I think this wraps up our Networking section of the #90DaysOfDevOps, Networking
- [Practical Networking](http://www.practicalnetworking.net/)
- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126)
Most of the examples I am using here, as I am not a Network Engineer, have come from this extensive book, which is not free, but I am using some of the scenarios to help understand Network Automation.
- [Hands-On Enterprise Automation with Python (Book)](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512)
See you on [Day 28](day28.md) where we will start looking into cloud computing and get a good grasp and foundational knowledge of the topic and what is available.


@ -1,95 +1,95 @@
## The Big Picture: DevOps & The Cloud
When it comes to cloud computing and what is offered, it goes very nicely with the DevOps ethos and processes. We can think of cloud computing as bringing the technology and services, whilst DevOps, as we have mentioned many times before, is about the process and process improvement.
But to start with, the cloud learning journey is a steep one, and making sure you know and understand all the elements, or the best service to choose for the right price point, is confusing.
![](Images/Day28_Cloud1.png)
Does the public cloud require a DevOps mindset? My answer here is no, but to really take advantage of cloud computing, and possibly avoid those large cloud bills that so many people have been hit with, it is important to think of cloud computing and DevOps together.
If we look at what we mean by the public cloud at a 40,000ft view, it is about handing off some responsibility to a managed service to enable you and your team to focus on more important aspects, which should be the application and the end-users. After all, the public cloud is just someone else's computer.
![](Images/Day28_Cloud2.png)
In this first section, I want to get into and describe a little more of what a public cloud is and some of the building blocks that get referred to as the public cloud overall.
### SaaS
The first area to cover is Software as a Service. This service removes almost all of the management overhead of a service that you may have once run on-premises. Let's think about Microsoft Exchange for our email: this used to be a physical box that lived in your data centre or maybe in the cupboard under the stairs. You would need to feed and water that server. By that I mean you would need to keep it updated, and you would be responsible for buying the server hardware, most likely installing the operating system, installing the applications required and then keeping that patched; if anything went wrong, you would have to troubleshoot and get things back up and running.
Oh, and you would also have to make sure you were backing up your data, although this doesn't change with SaaS for the most part either.
What SaaS, and in particular Microsoft 365 (because I mentioned Exchange), does is remove that administration overhead and provide a service that delivers your Exchange functionality by way of mail, but also many other productivity (Office 365) and storage (OneDrive) options that overall give a great experience to the end-user.
Other SaaS applications are widely adopted, such as Salesforce, SAP, Oracle, Google, and Apple. All remove that burden of having to manage more of the stack.
I am sure there is a story with DevOps and SaaS-based applications, but I am struggling to find out what it may be. I know Azure DevOps has some great integrations with Microsoft 365 that I might have a look into and report back on.
![](Images/Day28_Cloud3.png)
### Public Cloud
Next up we have the public cloud. Most people would think of this in a few different ways: some would see it as only the hyperscalers, such as Microsoft Azure, Google Cloud Platform and AWS.
![](Images/Day28_Cloud4.png)
Some will also see the public cloud as a much wider offering that includes those hyperscalers but also the thousands of MSPs all over the world as well. For this post, we are going to consider the public cloud as including hyperscalers and MSPs, although later on, we will specifically dive into one or more of the hyperscalers to get that foundational knowledge.
![](Images/Day28_Cloud5.png)
_Thousands more companies could land on this; I am merely picking from local, regional, telco and global brands I have worked with and am aware of._
We mentioned in the SaaS section that the cloud removed the responsibility or the burden of having to administer parts of a system. With SaaS, we see a lot of the abstraction layers removed, i.e. the physical systems, network, storage, operating system, and even the application to some degree. When it comes to the cloud, there are various levels of abstraction we can remove or keep depending on your requirements.
We have already mentioned SaaS, but there are at least two more to mention regarding the public cloud.
Infrastructure as a Service - You can think of this layer as a virtual machine, but whereas on-premises you will have to look after the physical layer, in the cloud this is not the case: the physical is the cloud provider's responsibility, and you will manage and administer the operating system, the data and the applications you wish to run.
Platform as a Service - This continues to remove the responsibility of layers; this is really about you taking control of the data and the application but not having to worry about the underpinning hardware or operating system.
There are many other aaS offerings out there, but these are the two fundamentals. You might see offerings around STaaS (Storage as a Service), which provides you with your storage layer without having to worry about the hardware underneath. Or you might have heard of CaaS (Containers as a Service), which we will get onto later on. Another aaS we will look to cover over the next 7 days is FaaS (Functions as a Service), where maybe you do not need a running system up all the time and you just want a function to be executed as and when needed.
There are many ways in which the public cloud can provide abstraction layers of control that you wish to pass up and pay for.
![](Images/Day28_Cloud6.png)
### Private Cloud
Having your own data centre is not a thing of the past. I would think that this has seen a resurgence among a lot of companies that have found the OPEX model difficult to manage, as well as the skill sets required in just using the public cloud.
The important thing to note here is that the private cloud is likely now going to be your responsibility, and it is going to be on your premises.
We have some interesting things happening in this space, not only with VMware, which dominated the virtualisation era and on-premises infrastructure environments. We also have the hyperscalers offering an on-premises version of their public clouds.
![](Images/Day28_Cloud7.png)
### Hybrid Cloud
### Cloud híbrida
To follow on from the Public and Private cloud mentions we also can span across both of these environments to provide flexibility between the two, maybe take advantage of services available in the public cloud but then also take advantage of features and functionality of being on-premises or it might be a regulation that dictates you having to store data locally.
Para continuar con las menciones a la nube pública y privada, también podemos abarcar ambos entornos para proporcionar flexibilidad entre los dos, tal vez aprovechando los servicios disponibles en la nube pública, pero también aprovechando las características y la funcionalidad de estar en las instalaciones o podría ser una regulación que dicta que tienes que almacenar los datos localmente.
![](Images/Day28_Cloud8.png)
Putting this all together we have a lot of choices for where we store and run our workloads.
Si juntamos todo esto, tenemos muchas opciones para elegir dónde almacenar y ejecutar nuestras cargas de trabajo.
![](Images/Day28_Cloud9.png)
Before we get into a specific hyper-scale, I have asked the power of Twitter where we should go?
Antes de entrar en una hiperescala específica, he preguntado al poder de Twitter ¿dónde deberíamos ir?
![](Images/Day28_Cloud10.png)
[Link to Twitter Poll](https://twitter.com/MichaelCade1/status/1486814904510259208?s=20&t=x2n6QhyOXSUs7Pq0itdIIQ)
Whichever one gets the highest percentage we will take a deeper dive into the offerings, I think the important to mention though is that services from all of these are quite similar which is why I say to start with one because I have found that in knowing the foundation of one and how to create virtual machines, set up networking etc. I have been able to go to the others and quickly ramp up in those areas.
Sea cual sea la opción que obtenga el porcentaje más alto, profundizaremos en sus ofertas. Creo que es importante mencionar que los servicios de todos ellos son bastante similares, por eso digo que empieces con uno: he comprobado que, conociendo los fundamentos de uno y cómo crear máquinas virtuales, configurar la red, etc., he podido pasar a los otros y ponerme al día rápidamente en esas áreas.
Either way, I am going to share some great **FREE** resources that cover all three of the hyper scalers.
De cualquier manera, voy a compartir algunos recursos **GRATIS** que cubren los tres hiperescaladores.
I am also going to build out a scenario as I have done in the other sections where we can build something as we move through the days.
También voy a construir un escenario como lo he hecho en las otras secciones donde podemos construir algo a medida que avanzamos a través de los días.
## Resources
## Recursos
- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)
See you on [Day 29](day29.md)
Nos vemos en el [Día 29](day29.md).


@@ -1,131 +1,131 @@
## Microsoft Azure Fundamentals
## Fundamentos de Microsoft Azure
Before we get going, the winner of the Twitter poll was Microsoft Azure, hence the title of the page. It was close and also quite interesting to see the results come in over the 24 hours.
Antes de empezar, el ganador de la encuesta de Twitter fue Microsoft Azure, de ahí el título de la página. Ha estado reñido y también ha sido muy interesante ver los resultados a lo largo de las 24 horas.
![](Images/Day29_Cloud1.png)
I would say in terms of covering this topic is going to give me a better understanding and update around the services available on Microsoft Azure, I lean towards Amazon AWS when it comes to my day today. I have however left resources I had lined up for all three of the major cloud providers.
Yo diría que en términos de cubrir este tema me va a dar una mejor comprensión y actualización en torno a los servicios disponibles en Microsoft Azure, me inclino hacia Amazon AWS cuando se trata de mi día a día. Sin embargo, he dejado recursos que había alineado para los tres principales proveedores de nube.
I do appreciate that there are more and the poll only included these 3 and in particular, there were some comments about Oracle Cloud. I would love to hear more about other cloud providers being used out in the wild.
Me doy cuenta de que hay más y la encuesta sólo incluía estos 3 y, en particular, hubo algunos comentarios sobre Oracle Cloud. Me encantaría saber más acerca de otros proveedores de nube que se utilizan, podéis dejar comentarios.
### The Basics
### Lo básico
- Provides public cloud services
- Geographically distributed (60+ Regions worldwide)
- Accessed via the internet and/or private connections
- Multi-tenant model
- Consumption-based billing - (Pay as you go | Pay as you grow)
- A large number of service types and offerings for different requirements.
- Proporciona servicios de nube pública
- Distribuidos geográficamente (más de 60 regiones en todo el mundo)
- Acceso a través de Internet y/o conexiones privadas
- Modelo multiinquilino
- Facturación basada en el consumo - Pay as you go (Pague a medida que avanza) | Pay as you grow (Pague a medida que crece)
- Un gran número de tipos de servicio y ofertas para diferentes requisitos.
- [Microsoft Azure Global Infrastructure](https://infrastructuremap.microsoft.com/explore)
- [Microsoft Azure Global Infrastructure](https://infrastructuremap.microsoft.com/explore)
As much as we spoke about SaaS and Hybrid Cloud we are not planning on covering those topics here.
Aunque ya hemos hablado de SaaS y de la nube híbrida, no vamos a tratar esos temas aquí.
The best way to get started and follow along is by clicking the link, which will enable you to spin up a [Microsoft Azure Free Account](https://azure.microsoft.com/en-gb/free/)
La mejor manera de empezar es haciendo clic en el siguiente enlace que permite crear una [Cuenta gratuita de Microsoft Azure](https://azure.microsoft.com/en-gb/free/)
### Regions
### Regiones
I linked the interactive map above, but we can see the image below the breadth of regions being offered in the Microsoft Azure platform worldwide.
He enlazado el mapa interactivo más arriba, pero podemos ver en la imagen de abajo la amplitud de regiones que se ofrecen en la plataforma Microsoft Azure en todo el mundo.
![](Images/Day29_Cloud2.png)
_image taken from [Microsoft Docs - 01/05/2021](https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network)_
_imagen tomada de [Microsoft Docs - 01/05/2021](https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network)_
You will also see several "sovereign" clouds meaning they are not linked or able to speak to the other regions, for example, these would be associated with governments such as the `AzureUSGovernment` also `AzureChinaCloud` and others.
También verás varias nubes "soberanas", lo que significa que no están vinculadas o no pueden hablar con las otras regiones, por ejemplo, éstas estarían asociadas con gobiernos como `AzureUSGovernment`, `AzureChinaCloud` y otras.
When we are deploying our services within Microsoft Azure we will choose a region for almost everything. However, it is important to note that not every service is available in every region. You can see [Products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=all) at the time of my writing this that in West Central US we cannot use Azure Databricks.
Cuando despleguemos nuestros servicios dentro de Microsoft Azure, elegiremos una región para casi todo. Sin embargo, es importante tener en cuenta que no todos los servicios están disponibles en todas las regiones. Puedes consultar los [Productos disponibles por región](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=all): en el momento de escribir esto, en West Central US no podemos usar Azure Databricks.
I also mentioned "almost everything" above, there are certain services that are linked to the region such as Azure Bot Services, Bing Speech, Azure Virtual Desktop, Static Web Apps, and some more.
También mencioné antes "casi todo": hay ciertos servicios que están ligados a una región, como Azure Bot Services, Bing Speech, Azure Virtual Desktop, Static Web Apps y algunos más.
Behind the scenes, a region may be made up of more than one data centre. These will be referred to as Availability Zones.
Entre bastidores, una región puede estar formada por más de un centro de datos. Estos se denominarán Zonas de Disponibilidad.
In the below image you will see and again this is taken from the Microsoft official documentation it describes what a region is and how it is made up of Availability Zones. However not all regions have multiple Availability Zones.
En la siguiente imagen, extraída de la documentación oficial de Microsoft, se describe qué es una región y cómo se compone de zonas de disponibilidad. Sin embargo no todas las regiones tienen múltiples Zonas de Disponibilidad.
![](Images/Day29_Cloud3.png)
The Microsoft Documentation is very good, and you can read up more on [Regions and Availability Zones](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview) here.
La documentación de Microsoft es muy buena, y puedes obtener mucha más información sobre [Regiones y zonas de disponibilidad](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview).
### Subscriptions
### Suscripciones
Remember we mentioned that Microsoft Azure is a consumption model cloud you will find that all major cloud providers follow this model.
Recuerda que mencionamos que Microsoft Azure es una nube con modelo de consumo; verás que todos los principales proveedores de nube siguen este modelo.
If you are an Enterprise then you might want or have an Enterprise Agreement set up with Microsoft to enable your company to consume these Azure Services.
Si trabajas en una gran empresa, es posible que quieras o ya tengas un Acuerdo Enterprise establecido con Microsoft para permitir que tu compañía consuma estos servicios de Azure.
If you are like me and you are using Microsoft Azure for education then we have a few other options.
Si usted es como yo y está utilizando Microsoft Azure para la educación, entonces tenemos algunas otras opciones.
We have the [Microsoft Azure Free Account](https://azure.microsoft.com/en-gb/free/) which generally gives you several free cloud credits to spend in Azure over some time.
Tenemos la [Cuenta gratuita de Microsoft Azure](https://azure.microsoft.com/en-gb/free/) que generalmente te da varios créditos de nube gratuitos para gastar en Azure durante algún tiempo.
There is also the ability to use a Visual Studio subscription which gives you maybe some free credits each month alongside your annual subscription to Visual Studio, this was commonly known as the MSDN years ago. [Visual Studio](https://azure.microsoft.com/en-us/pricing/member-offers/credit-for-visual-studio-subscribers/)
También existe la posibilidad de utilizar una suscripción a Visual Studio que te da algunos créditos gratuitos cada mes junto con tu suscripción anual a Visual Studio, esto era comúnmente conocido como MSDN hace años. [Visual Studio](https://azure.microsoft.com/en-us/pricing/member-offers/credit-for-visual-studio-subscribers/)
Then finally there is the hand over a credit card and have a pay as you go, model. [Pay-as-you-go](https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go/)
Por último, está el modelo de pago por uso con tarjeta de crédito. [Pago por uso](https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go/)
A subscription can be seen as a boundary between different subscriptions potentially cost centres but completely different environments. A subscription is where the resources are created.
Una suscripción puede verse como un límite entre entornos completamente diferentes, potencialmente distintos centros de coste. Una suscripción es donde se crean los recursos.
### Management Groups
### Grupos de gestión
Management groups give us the ability to segregate control across our Azure Active Directory (AD) or our tenant environment. Management groups allow us to control policies, Role Based Access Control (RBAC), and budgets.
Los grupos de gestión nos dan la capacidad de segregar el control a través de nuestro Azure Active Directory (AD) o nuestro entorno de inquilinos. Los grupos de gestión nos permiten controlar las políticas, el control de acceso basado en roles (RBAC) y los presupuestos.
Subscriptions belong to these management groups so you could have many subscriptions in your Azure AD Tenant, these subscriptions then can also control policies, RBAC, and budgets.
Las suscripciones pertenecen a estos grupos de gestión, por lo que podrías tener muchas suscripciones en tu tenant de Azure AD; estas suscripciones, a su vez, también pueden controlar políticas, RBAC y presupuestos.
### Resource Manager and Resource Groups
### Administrador de recursos y grupos de recursos
#### Azure Resource Manager
#### Gestor de Recursos Azure
- JSON based API that is built on resource providers.
- Resources belong to a resource group and share a common life cycle.
- Parallelism
- JSON-Based deployments are declarative, idempotent and understand dependencies between resources to govern creation and order.
- API basada en JSON que se basa en proveedores de recursos.
- Los recursos pertenecen a un grupo de recursos y comparten un ciclo de vida común.
- Paralelismo
- Los despliegues basados en JSON son declarativos, idempotentes y comprenden las dependencias entre recursos para gobernar la creación y el orden.
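Para hacer más concreta la naturaleza declarativa y consciente de dependencias de los despliegues basados en JSON, aquí hay un pequeño esbozo en Python (no es una herramienta oficial de Azure; los nombres y tipos de recursos son sólo ilustrativos, y las plantillas ARM reales declaran dependencias con `resourceId`) que resuelve el orden de creación a partir de las entradas `dependsOn`:

```python
# Plantilla de ejemplo al estilo ARM (nombres y tipos inventados, no desplegable).
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "resources": [
        {"name": "myVM", "type": "Microsoft.Compute/virtualMachines",
         "dependsOn": ["myNIC"]},
        {"name": "myVNet", "type": "Microsoft.Network/virtualNetworks",
         "dependsOn": []},
        {"name": "myNIC", "type": "Microsoft.Network/networkInterfaces",
         "dependsOn": ["myVNet"]},
    ],
}

def creation_order(resources):
    """Resuelve un orden de creación válido a partir de las dependencias declaradas."""
    done, order = set(), []
    pending = {r["name"]: set(r["dependsOn"]) for r in resources}
    while pending:
        # Listos: recursos cuyas dependencias ya fueron creadas.
        ready = [n for n, deps in pending.items() if deps <= done]
        if not ready:
            raise ValueError("dependencia circular")
        for n in sorted(ready):
            order.append(n)
            done.add(n)
            del pending[n]
    return order

print(creation_order(template["resources"]))  # ['myVNet', 'myNIC', 'myVM']
```

Nótese que el orden no depende del orden en que aparecen los recursos en la plantilla: eso es lo que significa "declarativo".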
#### Resource Groups
#### Grupos de recursos
- Every Azure Resource Manager resource exists in one and only one resource group!
- Resource groups are created in a region that can contain resources from outside the region.
- Resources can be moved between resource groups
- Resource groups are not walled off from other resource groups, there can be communication between resource groups.
- Resource Groups can also control policies, RBAC, and budgets.
- Cada recurso de Azure Resource Manager existe en uno y sólo un grupo de recursos.
- Los grupos de recursos se crean en una región que puede contener recursos de fuera de la región.
- Los recursos pueden moverse entre grupos de recursos
- Los grupos de recursos no están aislados de otros grupos de recursos, puede haber comunicación entre grupos de recursos.
- Los grupos de recursos también pueden controlar políticas, RBAC y presupuestos.
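La jerarquía descrita (grupo raíz del inquilino, grupos de gestión, suscripciones, grupos de recursos, recursos) se puede modelar con un pequeño ejemplo en Python; todos los nombres son inventados y sirven sólo para ilustrar que cada recurso existe en un único grupo de recursos:

```python
# Modelo de juguete de la jerarquía de ámbitos de Azure (nombres inventados):
# tenant root group -> grupos de gestión -> suscripciones -> grupos de recursos -> recursos.
hierarchy = {
    "tenant_root_group": {
        "mg-plataforma": {
            "sub-dev": {"rg-90daysofdevops": ["vm1", "vnet1"]},
            "sub-prod": {"rg-web": ["appservice1"]},
        }
    }
}

def find_resource_group(resource):
    """Cada recurso vive en uno y sólo un grupo de recursos."""
    matches = []
    for mgs in hierarchy.values():
        for subs in mgs.values():
            for rgs in subs.values():
                for rg, resources in rgs.items():
                    if resource in resources:
                        matches.append(rg)
    assert len(matches) == 1, "un recurso pertenece a un único grupo de recursos"
    return matches[0]

print(find_resource_group("vm1"))  # rg-90daysofdevops
```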
### Hands-On
### Manos a la obra
Let's go and get connected and make sure we have a **Subscription** available to us. We can check our simple out of the box **Management Group**, We can then go and create a new dedicated **Resource Group** in our preferred **Region**.
Vamos a conectarnos y a asegurarnos de que tenemos una **Suscripción** disponible. Podemos comprobar nuestro **Grupo de Gestión** básico por defecto y, a continuación, crear un nuevo **Grupo de Recursos** dedicado en nuestra **Región** preferida.
When we first login to our [Azure portal](https://portal.azure.com/#home) you will see at the top the ability to search for resources, services and docs.
La primera vez que iniciemos sesión en nuestro [portal Azure](https://portal.azure.com/#home) veremos en la parte superior la posibilidad de buscar recursos, servicios y documentos.
![](Images/Day29_Cloud4.png)
We are going to first look at our subscription, you will see here that I am using a Visual Studio Professional subscription which gives me some free credit each month.
Vamos a ver primero nuestra suscripción, verás aquí que estoy usando una suscripción Visual Studio Professional que me da algo de crédito gratis cada mes.
![](Images/Day29_Cloud5.png)
If we go into that you will get a wider view and a look into what is happening or what can be done with the subscription, we can see billing information with control functions on the left where you can define IAM Access Control and further down there are more resources available.
Si entramos en ella, obtendremos una visión más amplia de lo que está sucediendo y de lo que se puede hacer con la suscripción: podemos ver información de facturación, con funciones de control a la izquierda donde se puede definir el control de acceso (IAM) y, más abajo, más recursos disponibles.
![](Images/Day29_Cloud6.png)
There might be a scenario where you have multiple subscriptions and you want to manage them all under one, this is where management groups can be used to segregate responsibility groups. In mine below, you can see there is just my tenant root group with my subscription.
Podría haber un escenario en el que tengas varias suscripciones y quieras gestionarlas todas desde un mismo lugar; aquí es donde se pueden utilizar los grupos de gestión para segregar grupos de responsabilidad. Abajo puedes ver que sólo está el grupo raíz del inquilino con mi suscripción.
You will also see in the previous image that the parent management group is the same id used on the tenant root group.
También verás en la imagen anterior que el grupo de gestión padre es el mismo ID utilizado en el grupo raíz del inquilino.
![](Images/Day29_Cloud7.png)
Next up we have Resource groups, this is where we combine our resources and we can easily manage them in one place. I have a few created for various other projects.
A continuación tenemos los grupos de recursos, aquí es donde combinamos nuestros recursos y podemos gestionarlos fácilmente en un solo lugar. Hay algunos creados para otros proyectos.
![](Images/Day29_Cloud8.png)
With what we are going to be doing over the next few days, we want to create our resource group. This is easily done in this console by hitting the create option on the previous image.
En los próximos días vamos a crear un grupo de recursos. Esto se hace fácilmente en esta consola pulsando la opción crear de la imagen anterior.
![](Images/Day29_Cloud9.png)
A validation step takes place and then you have the chance to review your creation and then create. You will also see down the bottom "Download a template for automation" this allows us to grab the JSON format so that we can perform this simple in an automated fashion later on if we wanted, we will cover this later on as well.
Se produce un paso de validación y después tienes la oportunidad de revisar tu configuración antes de crearla. También verás abajo "Descargar una plantilla para automatización": esto nos permite obtener la plantilla en formato JSON para poder realizar esta misma operación de forma automatizada más adelante, algo que también cubriremos en su momento.
![](Images/Day29_Cloud10.png)
Hit create, then in our list of resource groups, we now have our "90DaysOfDevOps" group ready for what we do in the next session.
Pulsamos crear. Ahora en nuestra lista de grupos de recursos tenemos nuestro grupo "90DaysOfDevOps" listo para lo que hagamos en las siguientes sesiones.
![](Images/Day29_Cloud11.png)
## Resources
## Recursos
- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)
See you on [Day 30](day30.md)
Nos vemos en el [Día 30](day30.md)


@@ -100,7 +100,7 @@ Empecemos con lo que vas a poder ver en estos 90 días.
- [✔️] 🌐 24 > [Automatización de la red](Days/day24.md)
- [✔️] 🌐 25 > [Python para la automatización de la red](Days/day25.md)
- [✔️] 🌐 26 > [Construir nuestro Lab](Days/day26.md)
- [✔️] 🌐 27 > [Ponerse a trabajar con Python y la red](Days/day27.md)
- [✔️] 🌐 27 > [Manos a la obra con Python y Redes](Days/day27.md)
### Quédate con solo un Cloud Provider


@@ -106,9 +106,9 @@ Trong vài ngày tới, chúng ta sẽ tìm hiểu thêm về:
- DHCP
- Mạng con
## Tài nguyên
## Tài liệu tham khảo
* [Các nguyên tắc cơ bản về mạng](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi)
* [Toàn bộ khóa học Mạng máy tính](https://www.youtube.com/watch?v=IPvYjXCsTg8)
Hẹn gặp lại các bạn vào [Day22](day22.md)
Hẹn gặp lại bạn vào [Ngày 22](day22.md)

2022/vi/Days/day22.md Normal file

@@ -0,0 +1,108 @@
---
title: '#90DaysOfDevOps - Mô hình 7 Lớp OSI - Ngày 22'
published: false
description: 90DaysOfDevOps - Mô hình 7 Lớp OSI
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1049037
---
Nội dung của phần này chủ yếu từ sê-ri [Networking Fundamentals series](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi). Nếu bạn thích học bằng video, hãy tham khảo 2 video sau:
* [The OSI Model: A Practical Perspective - Layers 1 / 2 / 3](https://www.youtube.com/watch?v=LkolbURrtTs&list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi&index=3)
* [The OSI Model: A Practical Perspective - Layers 4 / 5+](https://www.youtube.com/watch?v=0aGqGKrRE0g&list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi&index=4)
## Mô hình 7 lớp (tầng) OSI
Mục đích cơ bản của mạng máy tính là cho phép hai máy tính chia sẻ dữ liệu. Trước khi có mạng máy tính, nếu ta muốn chuyển dữ liệu từ một máy tính này đến một máy tính khác, ta cần phải gắn một thiết bị lưu trữ vào một máy tính, sao chép dữ liệu và đưa nó sang máy tính khác.
Mạng máy tính cho phép làm việc này một cách tự động bằng cách cho phép máy tính chia sẻ dữ liệu qua dây mạng (hoặc kết nối không dây). Để cho các máy tính có thể thực hiện việc đó, chúng cần phải tuân thủ một bộ quy tắc.
Nguyên tắc này cũng tương tự như trong giao tiếp bằng ngôn ngữ. Tiếng Anh có một bộ quy tắc mà hai người nói tiếng Anh phải tuân theo. Tiếng Tây Ban Nha hay tiếng Pháp cũng có bộ quy tắc riêng, và mạng máy tính cũng có bộ quy tắc riêng của nó.
Các quy tắc để giao tiếp trong mạng máy tính được chia thành bảy lớp khác nhau và được gọi là mô hình OSI.
### Giới thiệu về mô hình OSI
Mô hình OSI (Mô hình kết nối hệ thống mở) là một khuôn khổ được sử dụng để mô tả các chức năng của một hệ thống mạng. Mô hình OSI mô tả các chức năng tính toán và tạo thành một tập hợp các quy tắc và yêu cầu chung để hỗ trợ khả năng giao tiếp giữa các thiết bị và phần mềm khác nhau. Trong mô hình tham chiếu OSI, giao tiếp giữa một hệ thống máy tính được chia thành bảy lớp trừu tượng khác nhau: **Lớp vật lý (Physical), Lớp liên kết dữ liệu (Data Link), Lớp mạng (Network), Lớp giao vận (Transport), Lớp phiên (Session), Lớp trình diễn (Presentation), và Lớp ứng dụng (Application)**.
![](../../Days/Images/Day22_Networking1.png)
### Lớp vật lý (Physical)
Đây là lớp thứ 1 trong mô hình OSI, quy định cách mà chúng ta có thể chuyển dữ liệu từ một máy tính này thông qua máy tính khác về mặt vật lý (ví dụ dây mạng hoặc sóng Wi-Fi). Chúng ta cũng có thể bắt gặp một số thiết bị phần cứng cũ hoạt động ở lớp này như hub hoặc repeater (bộ lặp).
![](../../Days/Images/Day22_Networking2.png)
### Lớp liên kết dữ liệu (Data Link)
Lớp thứ 2 là lớp liên kết dữ liệu, nó cho phép đóng gói dữ liệu dưới dạng các frame để truyền từ thiết bị này sang thiết bị khác. Lớp này có thể cung cấp tính năng cho phép sửa lỗi xảy ra ở lớp vật lý. Địa chỉ MAC (Media Access Control) cũng được giới thiệu ở lớp này.
Các thiết bị chuyển mạch (switch) mà chúng ta đã đề cập trong ngày 21 hoạt động ở lớp này [Ngày 21](day21.md)
![](../../Days/Images/Day22_Networking3.png)
### Lớp mạng (Network)
Bạn có thể đã nghe đến thuật ngữ thiết bị chuyển mạch (switch) lớp 3 hoặc thiết bị chuyển mạch (switch) lớp 2. Trong mô hình OSI, Lớp mạng có nhiệm vụ phân phối dữ liệu từ điểm đầu đến điểm cuối. Đây là nơi chúng ta thấy các địa chỉ IP của các thiết bị như chúng ta đã đề cập trong [Ngày 21](day21.md).
Bộ định tuyến (router) và máy tính (host) làm việc ở lớp mạng, hãy nhớ bộ định tuyến cung cấp chức năng định tuyến giữa nhiều mạng. Bất kỳ thứ gì có địa chỉ IP đều có thể được coi là thiết bị của lớp 3.
![](../../Days/Images/Day22_Networking4.png)
Tại sao chúng ta cần sử dụng địa chỉ ở cả lớp 2 và 3? (địa chỉ MAC và địa chỉ IP)
Nếu chúng ta nghĩ về việc truyền dữ liệu từ máy tính này sang một máy tính khác, mỗi máy tính có một địa chỉ IP riêng nhưng sẽ có một số thiết bị chuyển mạch (switch) và định tuyến (router) nằm giữa hai máy tính. Mỗi thiết bị đó đều có địa chỉ MAC lớp 2.
Địa chỉ MAC lớp 2 chỉ được dùng để liên lạc giữa hai thiết bị kết nối trực tiếp với nhau trong quá trình chuyền dữ liệu, nó chỉ tập trung vào truyền tải đến trạm kế tiếp, trong khi địa chỉ IP lớp 3 sẽ ở lại với gói dữ liệu đó cho đến khi nó đến máy tính cuối của nó. (Điểm đầu đến điểm cuối)
Địa chỉ IP - Lớp 3 = Vận chuyển từ điểm đầu đến điểm cuối
Địa chỉ MAC - Lớp 2 = Vận chuyển đến trạm kế tiếp
Có một giao thức mạng mà chúng ta sẽ tìm hiểu vào các ngày sau có tên là ARP (Address Resolution Protocol, Giao thức phân giải địa chỉ), nhằm giúp liên kết địa chỉ của lớp 2 và lớp 3 trong mạng.
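Ý tưởng "địa chỉ IP giữ nguyên từ điểm đầu đến điểm cuối, còn địa chỉ MAC thay đổi theo từng trạm" có thể được minh họa bằng một đoạn Python ngắn dưới đây (đây chỉ là mô phỏng đơn giản, mọi địa chỉ đều là giả định, không phải cách cài đặt giao thức thật):

```python
def frames_along_path(packet, path_macs):
    """Đóng lại frame cho cùng một gói tin IP tại mỗi trạm kế tiếp (mô hình thu nhỏ)."""
    # Mỗi trạm tạo một frame mới: địa chỉ MAC đích thay đổi, phần IP giữ nguyên.
    return [{"dst_mac": mac, **packet} for mac in path_macs]

# Gói tin đi qua 3 thiết bị trung gian với các địa chỉ MAC giả định.
frames = frames_along_path(
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.9.9"},
    ["aa:aa", "bb:bb", "cc:cc"],
)
for f in frames:
    print(f)
```

Chạy đoạn mã trên, bạn sẽ thấy cặp địa chỉ IP không đổi trong khi địa chỉ MAC đích thay đổi ở mỗi frame.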
### Lớp giao vận (Transport)
Lớp thứ 4 (lớp giao vận) được tạo ra để phân biệt các luồng dữ liệu, cho phép vận chuyển dữ liệu từ dịch vụ (ứng dụng) đến dịch vụ giữa các máy tính. Theo cách tương tự mà lớp 3 và lớp 2 đều có các cơ chế địa chỉ, trong lớp 4 chúng ta có các cổng (port).
![](../../Days/Images/Day22_Networking5.png)
### Lớp phiên, trình diễn, ứng dụng (Session, Presentation, Application)
Sự tách biệt giữa các lớp 5,6,7 có thể hơi mơ hồ.
Bạn nên xem [Mô hình TCP IP](https://www.geeksforgeeks.org/tcp-ip-model/) để hiểu rõ hơn.
Bây giờ chúng ta hãy thử giải thích điều gì sẽ xảy ra khi các máy tính trong mạng giao tiếp với nhau bằng mô hình nhiều lớp này. Máy tính này có một ứng dụng sẽ tạo ra dữ liệu và gửi đến một máy tính khác.
Máy tính nguồn sẽ trải qua quá trình được gọi là quá trình đóng gói dữ liệu (lớp 7 --> 5). Dữ liệu sau đó sẽ được gửi đến lớp 4.
Lớp 4 sẽ thêm một header vào dữ liệu đó, điều này giúp cho việc truyền tải dữ liệu ở lớp 4 (từ ứng dụng đến ứng dụng). Một cổng sẽ được sử dụng để truyền dữ liệu dựa trên TCP hoặc UDP. Header sẽ bao gồm thông tin cổng nguồn và cổng đích.
Thông tin về dữ liệu (data) và cổng (port) có thể được gọi là một segment.
Segment này sẽ được chuyển xuống cho lớp 3 (lớp mạng). Lớp mạng sẽ thêm một header khác vào dữ liệu này.
Header này sẽ chứa thông tin giúp lớp 3 vận chuyển dữ liệu từ điểm đầu đến điểm cuối. Trong tiêu đề này, bạn sẽ có địa chỉ IP nguồn và IP đích, header ở lới 3 cộng với dữ liệu lớp trên cũng có thể được gọi là một packet (gói tin).
Lớp 3 sau đó sẽ lấy gói tin đó và giao nó cho lớp 2, lớp 2 một lần nữa sẽ thêm một header khác vào dữ liệu đó để thực hiện chuyển tiếp dữ liệu đến trạm kế tiếp trong mạng. Header ở lớp 2 sẽ bao gồm địa chỉ MAC nguồn và đích. Header và dữ liệu lớp 2 được gọi là một frame.
Frame sau đó sẽ được chuyển đổi thành những tín hiệu 0 và 1 được gửi qua cáp vật lý hoặc sóng không dây thuộc lớp 1.
![](../../Days/Images/Day22_Networking6.png)
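Quá trình đóng gói dữ liệu được mô tả ở trên có thể được phác họa bằng Python như sau (các giá trị cổng, địa chỉ IP và MAC đều là ví dụ giả định, không phải cách cài đặt giao thức thật):

```python
# Đóng gói từng lớp theo các bước ở trên (mọi giá trị đều là ví dụ).
data = b"hello"

# Lớp 4: thêm header chứa cổng nguồn/đích -> segment
segment = {"src_port": 50000, "dst_port": 80, "payload": data}
# Lớp 3: thêm header chứa IP nguồn/đích -> packet (gói tin)
packet = {"src_ip": "10.0.0.1", "dst_ip": "10.0.9.9", "payload": segment}
# Lớp 2: thêm header chứa MAC nguồn/đích -> frame
frame = {"src_mac": "aa:aa", "dst_mac": "bb:bb", "payload": packet}

# Máy nhận bóc các header theo thứ tự ngược lại để lấy dữ liệu gốc:
received = frame["payload"]["payload"]["payload"]
print(received)  # b'hello'
```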
Tôi đã đề cập ở trên về việc đặt tên cho dữ liệu + header ở mỗi lớp, bạn có thể tham khảo hình ảnh tóm lược bên dưới.
![](../../Days/Images/Day22_Networking7.png)
Quá trình gửi và nhận dữ liệu của ứng dụng ở hai máy tính nguồn và đích.
![](../../Days/Images/Day22_Networking8.png)
## Tài liệu tham khảo
* [Networking Fundamentals](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi)
* [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
Hẹn gặp lại các bạn vào [Ngày 23](day23.md)

2022/vi/Days/day23.md Normal file

@@ -0,0 +1,118 @@
---
title: '#90DaysOfDevOps - Giao thức mạng - Ngày 23'
published: false
description: 90DaysOfDevOps - Giao thức mạng
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048704
---
Nội dung của phần này chủ yếu từ sê-ri [Networking Fundamentals series](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi). Nếu bạn thích học thông qua video, bạn có thể xem video sau:
* [Network Protocols - ARP, FTP, SMTP, HTTP, SSL, TLS, HTTPS, DNS, DHCP](https://www.youtube.com/watch?v=E5bSumTAHZE&list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi&index=12)
## Các giao thức mạng
Các giao thức mạng là một tập hợp các quy tắc giao tiếp tạo thành một tiêu chuẩn, tiêu chuẩn Internet.
- ARP (Address Resolution Protocol) - Giao thức phân giải địa chỉ
Nếu bạn muốn tìm hiểu sâu hơn về ARP, bạn có thể đọc về tiêu chuẩn Internet tại đây. [RFC 826](https://datatracker.ietf.org/doc/html/rfc826)
Một địa chỉ IP sẽ được gắn với một địa chỉ vật lý cố định, còn được gọi là địa chỉ MAC trên mạng lớp 2.
![](../../Days/Images/Day23_Networking1.png)
- FTP (File Transfer Protocol) - Giao thức truyền tải file
Cho phép truyền tải các tập tin từ một máy nguồn đến máy đích. Về cơ bản, quá trình này được xác thực nhưng vẫn có thể cấu hình để cho phép quyền truy cập ẩn danh. Bạn sẽ thấy FTPS được sử dụng thường xuyên hơn vì nó cung cấp kết nối SSL/TLS từ máy khách tới các máy chủ FTP để đảm bảo bảo mật tốt hơn. Giao thức này hoạt động ở lớp Ứng dụng của mô hình OSI.
![](../../Days/Images/Day23_Networking2.png)
- SMTP (Simple Mail Transfer Protocol) - Giao thức chuyển thư đơn giản
Được sử dụng để truyền email: các máy chủ thư dùng SMTP để gửi và nhận thư. Bạn sẽ thấy SMTP vẫn đang được sử dụng ngay cả với Microsoft 365.
![](../../Days/Images/Day23_Networking3.png)
- HTTP (Hyper Text Transfer Protocol) - Giao thức truyền tải siêu văn bản
HTTP là giao thức nền tảng cho việc truy cập nội dung trên Internet. Nó cung cấp cho chúng ta khả năng để dễ dàng truy cập các trang web. HTTP vẫn được sử dụng nhiều nhưng HTTPS hiện được sử dụng nhiều hơn để tăng cường khả năng bảo mật.
![](../../Days/Images/Day23_Networking4.png)
- SSL (Secure Sockets Layer) - Lớp cổng bảo mật | TLS (Transport Layer Security) - Bảo mật tầng vận chuyển
TLS đã tiếp quản từ SSL, TLS là **Giao thức mật mã** cung cấp thông tin liên lạc an toàn qua mạng. Nó được sử dụng trong các ứng dụng email, tin nhắn, v.v., nhưng phổ biến nhất là để bảo mật cho HTTPS.
![](../../Days/Images/Day23_Networking5.png)
- HTTPS - HTTP được bảo mật bằng SSL/TLS
Phiên bản mở rộng của HTTP, được sử dụng để cung cấp liên lạc an toàn qua mạng, HTTPS được mã hóa bằng TLS như đã đề cập ở trên. Trọng tâm ở đây là mang lại tính xác thực, quyền riêng tư và tính toàn vẹn trong khi dữ liệu được trao đổi giữa các máy tính.
![](../../Days/Images/Day23_Networking6.png)
- DNS (Domain Name System) - Hệ thống tên miền
DNS được sử dụng để ánh xạ các tên miền theo cách thân thiện với con người, chẳng hạn như tất cả chúng ta đều biết [google.com](https://google.com) nhưng nếu bạn mở trình duyệt và nhập [8.8.8.8](https://8.8.8.8) bạn sẽ truy cập được Google như chúng ta vẫn làm. Tuy nhiên, bạn không thể nhớ tất cả các địa chỉ IP cho tất cả các trang web của bạn.
Đây là nơi DNS xuất hiện, nó đảm bảo rằng các máy tính, dịch vụ và các tài nguyên khác có thể truy cập được.
Trên tất cả các máy tính yêu cầu kết nối internet thì phải có DNS để phân giải được các tên miền. DNS là một lĩnh vực bạn có thể dành nhiều ngày và nhiều năm để tìm hiểu. Tôi cũng sẽ nói từ kinh nghiệm rằng DNS là nguyên nhân phổ biến của tất cả các lỗi khi nói đến Mạng. Tuy nhiên, không chắc liệu một kỹ sư mạng có đồng ý với quan điểm này hay không.
![](../../Days/Images/Day23_Networking7.png)
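Về mặt khái niệm, DNS hoạt động như một bảng tra cứu từ tên miền sang địa chỉ IP. Đoạn Python dưới đây chỉ là một mô hình thu nhỏ để minh họa (DNS thật là một hệ thống phân tán, phân cấp; bảng `records` ở đây là giả định, địa chỉ 8.8.8.8 lấy từ ví dụ ở trên):

```python
# Bảng phân giải tên thu nhỏ (mô hình minh họa, không phải máy chủ DNS thật).
records = {"google.com": "8.8.8.8"}

def resolve(name):
    """Trả về địa chỉ IP cho một tên miền, mô phỏng việc mà resolver làm thay chúng ta."""
    try:
        return records[name]
    except KeyError:
        # Tên không tồn tại trong bảng: tương tự câu trả lời NXDOMAIN của DNS.
        raise LookupError(f"NXDOMAIN: {name}")

print(resolve("google.com"))  # 8.8.8.8
```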
- DHCP (Dynamic Host Configuration Protocol) - Giao thức cấu hình máy tính tự động
Chúng ta đã thảo luận rất nhiều về các giao thức cần thiết để làm cho các máy tính của chúng ta hoạt động, có thể là truy cập internet hoặc truyền tải file giữa các máy tính với nhau.
Có 4 điều chúng ta cần trên mọi máy tính để nó có thể đạt được cả hai nhiệm vụ đó.
- Địa chỉ IP
- Subnet Mask
- Gateway mặc định
- DNS
Địa chỉ IP là địa chỉ duy nhất đại diện cho máy tính của chúng ta trên mạng mà nó tham gia, có thể coi đây là số nhà.
Chúng ta có thể coi subnet mask như là mã bưu điện hoặc mã zip.
Gateway mặc định là IP của bộ định tuyến đã cung cấp cho chúng ta kết nối đến Internet hoặc các mạng khác. Bạn có thể coi đây là con đường duy nhất cho phép chúng ta ra khỏi con phố của mình.
Sau đó, chúng ta có DNS để chuyển đổi các địa chỉ IP công khai phức tạp thành các tên miền phù hợp và dễ nhớ hơn. Chúng ta có thể coi đây là văn phòng phân loại khổng lồ để đảm bảo chúng ta nhận được đúng gói hàng của mình.
Như tôi đã nói, mỗi máy tính yêu cầu 4 cài đặt này, nếu bạn có 1000 hoặc 10.000 máy tính thì bạn sẽ mất rất nhiều thời gian để cấu hình tất cả. Chính vì vậy, DHCP xuất hiện và cho phép bạn xác định phạm vi cho mạng của mình và giao thức này sẽ cấp phát những thông tin trên cho tất cả các máy tính trong mạng của bạn.
Một ví dụ khác là bạn đi vào một quán cà phê, lấy một ly cà phê và ngồi xuống với máy tính xách tay hoặc điện thoại của bạn. Bạn kết nối máy tính của mình với Wi-Fi của quán cà phê và có quyền truy cập vào internet, tin nhắn và thư bắt đầu được gửi tới và bạn có thể duyệt web hay truy cập mạng xã hội. Khi bạn kết nối với Wi-Fi của quán cà phê, máy tính của bạn sẽ nhận một địa chỉ DHCP từ máy chủ DHCP chuyên dụng hoặc rất có thể là bộ định tuyến (router) của quán cũng xử lý DHCP.
![](../../Days/Images/Day23_Networking8.png)
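Có thể phác họa một "lời đề nghị" kiểu DHCP bằng Python như sau: cấp phát địa chỉ IP còn trống tiếp theo trong dải, kèm theo ba thiết lập dùng chung; đây chỉ là mô hình minh họa với các địa chỉ giả định, không phải giao thức DHCP thật:

```python
import ipaddress

# Dải địa chỉ cấp phát (ví dụ): 192.168.1.0/24, bỏ qua .1 vì dành cho router.
pool = ipaddress.ip_network("192.168.1.0/24").hosts()
next(pool)  # bỏ qua 192.168.1.1

def dhcp_offer():
    """Cấp địa chỉ trống tiếp theo cùng các thiết lập mạng dùng chung cho một máy khách."""
    return {
        "ip": str(next(pool)),
        "subnet_mask": "255.255.255.0",
        "default_gateway": "192.168.1.1",
        "dns": "8.8.8.8",
    }

print(dhcp_offer()["ip"])  # 192.168.1.2
print(dhcp_offer()["ip"])  # 192.168.1.3
```

Mỗi máy khách mới nhận được một địa chỉ IP riêng, còn subnet mask, gateway và DNS thì dùng chung cho cả mạng, đúng như 4 thiết lập đã liệt kê ở trên.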
### Mạng con (Subnet)
Mạng con là một phân khu về mặt logic của một mạng IP.
Mạng con chia các mạng lớn thành các mạng nhỏ hơn, dễ quản lý hơn và hoạt động hiệu quả hơn.
Mỗi mạng con là một phân khu về mặt logic của một mạng lớn hơn. Các thiết bị trong cùng một mạng con có cùng Subnet Mask, cho phép chúng có thể giao tiếp với nhau.
Bộ định tuyến quản lý giao tiếp giữa các mạng con.
Kích thước của mạng con phụ thuộc vào yêu cầu kết nối và công nghệ mạng được sử dụng.
Một tổ chức quốc tế chịu trách nhiệm xác định số lượng và kích thước của các mạng con trong không gian địa chỉ IP giới hạn hiện có. Các mạng con cũng có thể được phân đoạn thành các mạng con nhỏ hơn cho những trường hợp như liên kết điểm-tới-điểm hoặc mạng con chỉ hỗ trợ một vài thiết bị.
Bên cạnh một số lợi ích khác, việc phân chia một mạng lớn thành các mạng con cho phép tái sử dụng địa chỉ IP và giảm tắc nghẽn mạng, tăng hiệu quả sử dụng mạng.
Mạng con cũng có thể cải thiện tính bảo mật. Nếu một phần của mạng bị xâm phạm, nó có thể được cô lập khiến những kẻ tấn công khó có thể truy cập được các hệ thống mạng lớn hơn.
![](../../Days/Images/Day23_Networking9.png)
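These subnetting ideas are easy to experiment with using Python's built-in `ipaddress` module; a small illustrative sketch (all the addresses are made up):

```python
import ipaddress

# A company network, to be divided into smaller, easier-to-manage subnets.
network = ipaddress.ip_network("10.10.0.0/16")

# Split the /16 into /24 subnets (256 of them, 254 usable hosts each).
subnets = list(network.subnets(new_prefix=24))
print(len(subnets))        # 256
print(subnets[0])          # 10.10.0.0/24
print(subnets[0].netmask)  # 255.255.255.0

# Devices that share a subnet can talk directly;
# traffic between subnets goes through a router.
a = ipaddress.ip_address("10.10.1.20")
b = ipaddress.ip_address("10.10.1.200")
c = ipaddress.ip_address("10.10.2.20")
print(a in subnets[1] and b in subnets[1])  # True  - same subnet
print(c in subnets[1])                      # False - needs routing

# A subnet can itself be segmented further, e.g. a /30 for a
# point-to-point link that only needs two usable addresses.
p2p = ipaddress.ip_network("10.10.255.0/30")
print(p2p.num_addresses)   # 4 (2 usable hosts)
```

Try changing `new_prefix` to see the trade-off between the number of subnets and the number of hosts per subnet.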
## Resources
- [Networking Fundamentals](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi)
- [Subnetting Mastery](https://www.youtube.com/playlist?list=PLIFyRwBY_4bQUE4IB5c4VPRyDoLgOdExE)
- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
See you on [Day 24](day24.md)

149
2022/vi/Days/day24.md Normal file

@ -0,0 +1,149 @@
---
title: '#90DaysOfDevOps - Network Automation - Day 24'
published: false
description: 90DaysOfDevOps - Network Automation
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048805
---
## Network Automation

### Network automation basics

Purpose of network automation

- Achieve agility
- Reduce costs
- Eliminate errors
- Meet compliance requirements
- Centralised management

The process of adopting automation is specific to each business. No single solution fits every requirement when rolling out automation; the ability to identify and embrace the approach that works best for your organisation is critical to maintaining or creating a more agile environment, and the focus must always be on business value and the end goal: the user experience. (We said something similar at the very beginning about DevOps culture, and about the cultural change and automated processes it brings.)

To break this down, you need to identify how the task or process you are trying to automate is going to improve the end-user experience or the business value, while following a systematic, step-by-step approach.

"If you don't know where you are going, then any road will take you there."

Have a framework or design that you are working towards, know what your end goal is, and then work step by step towards achieving it, measuring the success of the automation at the various stages based on the business outcomes.

Build concepts modelled around existing applications. There is no need to design concepts around an imaginary model, because they need to be applied to your applications, services, and infrastructure; so start building the concepts and models around your existing infrastructure and applications.
### An approach to network automation

We should identify the tasks and perform discovery on the network change requests, so that you have a list of the most common issues and problems that need an automation solution.

- Make a list of all the change requests and workflows that are currently handled manually.
- Identify the most common, time-consuming and error-prone activities.
- Prioritise the requests by aligning them with the direction of the business.
- If this is the framework for building out an automation process, decide what must be automated and what should not be.

We should then divide up the tasks and analyse how the different network functions operate and interact with each other.

- The Infrastructure/Network team receives change requests at multiple layers in order to deploy applications.
- Based on the network services, divide them into different areas and understand how they interact with each other.
  - Application optimisation
  - ADC (Application Delivery Controller)
  - Firewall
  - DDI (DNS, DHCP, IPAM etc.)
  - Routing
  - Others
- Identify the various dependencies to address business and cultural differences and bring about collaboration between teams.

Reusable policies: define and simplify reusable service tasks, processes and their inputs/outputs.

- Define the different services, processes and inputs/outputs.
- Simplifying the deployment process will reduce the turnaround time for both new and existing workloads.
- Once you have a standard process, it can be sequenced and aligned to individual requests for a multi-threaded approach and delivery.
Combine the policies with specific business activities. How does implementing this policy help the business? Does it save time? Does it save money? Does it deliver a better business outcome?

- Ensure that service tasks are interoperable.
- Link the incremental service tasks so that they work together to create business services.
- Allow the flexibility to relink service tasks on demand.
- Deploy self-service capabilities and pave the way for improved operational efficiency.
- Allow the multiple technology skill sets to continue contributing to oversight and compliance.

**Iterate** on the policies and processes, adding and improving while maintaining service availability.

- Start by automating existing tasks.
- Get familiar with the automation process, so you can identify other areas that could benefit from automation.
- Iterate on your automation initiatives, incrementally adding agility while maintaining the required availability.
- Taking an incremental approach paves the way for success!

Orchestrate the network services!

- Automation of the deployment process is required for fast application delivery.
- Creating an agile service environment requires managing different elements across multiple technical skill sets.
- Preparing for end-to-end orchestration provides control over the automation and the order in which things are deployed.
## Network automation tools

The good news here is that, for the most part, the tools we use for network automation are the same tools we use for the other areas of automation that we have covered so far or will cover in later sections.

Operating system - As throughout this challenge, I am doing most of my learning on a Linux OS. The reasons for that were given in the Linux section, but almost all of the tools we will be using, although they may be multi-OS platforms today, started out as Linux-based applications or tools.

Integrated Development Environment (IDE) - Again, there is not much to say here other than that I would suggest Visual Studio Code as your IDE throughout; it provides extensive plugins for many different languages.

Configuration management - We have not reached the configuration-management section yet, but Ansible is clearly a favourite in this area for managing and automating configuration. Ansible is written in Python, but you do not need to know Python to use it.

- Agentless
- Only requires SSH
- Large community support
- Lots of network modules
- Push-only model
- Configured with YAML
- Open source!
[Link to Ansible Network Modules](https://docs.ansible.com/ansible/2.9/modules/list_of_network_modules.html)
We will also look at **Ansible Tower** in the configuration-management section; think of it as the GUI front end for Ansible.

CI/CD - Again, we will cover the concepts and tooling around this in more detail later, but it is important to at least mention it here, because this applies not just to networking but to all provisioning of services and platforms.

In particular, Jenkins seems to be a popular tool for network automation.

- Monitors a git repository for changes and then initiates them.
Version control - Again, something we will dig into in more detail later.

- Git provides version control of your code on your local machine - Cross-platform
- GitHub, GitLab, BitBucket etc. are online websites where you create your repositories and upload your code.

Programming languages | Scripting - Something we have not covered here is Python as a language; I decided to dive into Go instead, based on my circumstances. I would say it was a close call between Golang and Python, but Python looks to be the winner as the programming language for network automation.

- Nornir is worth mentioning here: an automation framework written in Python. It is similar to Ansible but focused specifically on network automation. [Nornir documentation](https://nornir.readthedocs.io/en/latest/)

API analysis - Postman is a great tool for analysing RESTful APIs. It helps with building, testing and modifying APIs.

- POST >>> To create resource objects.
- GET >>> To retrieve a resource.
- PUT >>> To create or replace a resource.
- PATCH >>> To create or update a resource object.
- DELETE >>> To remove a resource.
[Postman tool Download](https://www.postman.com/downloads/)
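The verbs above map directly onto the HTTP method of a request. As a rough sketch of what a tool like Postman builds for you, here is the same idea in plain Python using only the standard library (the endpoint is a made-up placeholder, and nothing is actually sent):

```python
from urllib.request import Request

# Each REST verb is simply the HTTP method on the request we build.
# The endpoint below is a hypothetical placeholder for illustration.
base = "https://router.example.com/api/v1/interfaces"

create = Request(base, data=b'{"name": "Gi0/1"}', method="POST")
read = Request(base + "/Gi0/1", method="GET")
replace = Request(base + "/Gi0/1", data=b'{"enabled": true}', method="PUT")
update = Request(base + "/Gi0/1", data=b'{"enabled": false}', method="PATCH")
delete = Request(base + "/Gi0/1", method="DELETE")

for r in (create, read, replace, update, delete):
    print(r.get_method(), r.full_url)

# To actually send one, you would pass it to urllib.request.urlopen(r),
# or use a client such as Postman, as above.
```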
### Other tools worth mentioning
[Cisco NSO (Network Services Orchestrator)](https://www.cisco.com/c/en/us/products/cloud-systems-management/network-services-orchestrator/index.html)
[NetYCE - Simplify Network Automation](https://netyce.com/)
[Network Test Automation](https://pubhub.devnetcloud.com/media/genie-feature-browser/docs/#/)
Over the next 3 days, I will get more hands-on with some of the content we have covered, and do some work around Python and network automation.

We have not managed to cover every networking topic so far, but I wanted to keep this broad enough for everyone to follow along, and you can continue learning from the resources I have added below.
## Resources
- [3 Necessary Skills for Network Automation](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s)
- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
- [Practical Networking](http://www.practicalnetworking.net/)
- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126)
See you on [Day 25](day25.md)

121
2022/vi/Days/day26.md Normal file

@ -0,0 +1,121 @@
---
title: '#90DaysOfDevOps - Building the Lab - Day 26'
published: false
description: 90DaysOfDevOps - Building the Lab
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048762
---
## Building the Lab

We are going to continue setting up our emulated network using the EVE-NG software, and then hopefully deploy some devices and start thinking about how we can automate the configuration of those devices. On [Day 25](day25.md) we covered installing EVE-NG on our machine using VMware Workstation.

### Installing the EVE-NG client

There is also a client pack that lets us choose which application is used when we SSH to the devices. It will also install Wireshark for capturing packets between our networks. You can grab the client pack for your operating system (Windows, macOS, Linux).
[EVE-NG Client Download](https://www.eve-ng.net/index.php/download/)
![](../../Days/Images/Day26_Networking1.png)
Tip: if you are using Linux, there is this [client pack](https://github.com/SmartFinn/eve-ng-integration).

Installation is quick and simple, and I suggest sticking with the defaults.
### Downloading network images

This step was a challenge. I followed some videos, linked at the end of this post, to download the switch and router images and to learn how to upload them to the devices.

It is important to note that I am using everything here for educational purposes only. I suggest downloading the official images from the network vendors.
[Blog & Links to YouTube videos](https://loopedback.com/2019/11/15/setting-up-eve-ng-for-ccna-ccnp-ccie-level-studies-includes-multiple-vendor-node-support-an-absolutely-amazing-study-tool-to-check-out-asap/)
[How To Add Cisco VIRL vIOS image to Eve-ng](https://networkhunt.com/how-to-add-cisco-virl-vios-image-to-eve-ng/)
Overall, the steps here are a little fiddly and could probably be much easier, but the blogs and videos above walk you through the process of adding the images to your EVE-NG.

I used FileZilla to transfer the qcow2 files to the VMs over SFTP.

For this lab we are using Cisco vIOS L2 switches and a Cisco vIOS router.

### Creating a lab

Inside the EVE-NG web interface, we are going to create our new network topology. We will have four switches and one router that acts as our gateway to outside networks.

| Node    | IP Address   |
| ------- | ------------ |
| Router | 10.10.88.110 |
| Switch1 | 10.10.88.111 |
| Switch2 | 10.10.88.112 |
| Switch3 | 10.10.88.113 |
| Switch4 | 10.10.88.114 |
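As a small aside, even an inventory as simple as this table can live in code, which is the first step towards the automation we are working towards. A quick Python sanity check of the plan (assuming the lab sits on a 10.10.88.0/24 management network):

```python
import ipaddress

# The lab topology from the table above, as a simple inventory.
lab = {
    "Router": "10.10.88.110",
    "Switch1": "10.10.88.111",
    "Switch2": "10.10.88.112",
    "Switch3": "10.10.88.113",
    "Switch4": "10.10.88.114",
}

# Assumed management subnet for the lab.
mgmt = ipaddress.ip_network("10.10.88.0/24")

# Sanity-check the plan: every node on the subnet, no duplicate addresses.
assert all(ipaddress.ip_address(ip) in mgmt for ip in lab.values())
assert len(set(lab.values())) == len(lab)
print(f"{len(lab)} nodes planned on {mgmt}")
```

Keeping the plan in code like this means later automation tooling can consume the same inventory, rather than a table in a document.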
#### Adding nodes in EVE-NG

When you first log in to EVE-NG you will see a screen like the one below; we want to start by creating our first lab.
![](../../Days/Images/Day26_Networking2.png)
Give your lab a name; the other fields are optional.
![](../../Days/Images/Day26_Networking3.png)
You will then be greeted with a blank canvas where you can start creating your network. Right-click on the canvas and choose add node.

From there you will have a long list of node options. If you have followed the steps above, you will have the two blue nodes shown below; the others will be grey and cannot be selected.
![](../../Days/Images/Day26_Networking4.png)
We are going to add the following to our lab:

- 1 x Cisco vIOS router
- 4 x Cisco vIOS switches

Run through the wizard to add the nodes to your lab, and it should end up looking something like this.
![](../../Days/Images/Day26_Networking5.png)
#### Connecting the nodes

Now we need to add connections between our router and switches. We can do this quite easily by hovering over a device until the connection icon shown below appears, and then connecting it to the device we want it linked to.
![](../../Days/Images/Day26_Networking6.png)
Once you have connected up your environment, you may also want to add boxes or circles, also found in the right-click menu, to mark out boundaries or physical locations. You can also add notes, which is useful when we want to record names or IP addresses in our lab.

I went ahead and made my lab look like the one below.
![](../../Days/Images/Day26_Networking7.png)
You will also notice that everything in the lab above is powered off; we can start the lab by selecting everything, right-clicking, and choosing "start selected".
![](../../Days/Images/Day26_Networking8.png)
Once we have the lab up and running, you can console into each device, and you will notice at this stage that they are pretty dumb, with no configuration. We can add some configuration to each node by pasting in, or writing our own, configuration in each terminal.

I will leave my configurations in the Networking folder of the repository for you to reference.
| Node | Configuration |
| ------- | -------------------------------- |
| Router | [R1](../../Days/Networking/R1) |
| Switch1 | [SW1](../../Days/Networking/SW1) |
| Switch2 | [SW2](../../Days/Networking/SW2) |
| Switch3 | [SW3](../../Days/Networking/SW3) |
| Switch4 | [SW4](../../Days/Networking/SW4) |
## Resources
- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg)
- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8)
- [3 Necessary Skills for Network Automation](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s)
- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
- [Practical Networking](http://www.practicalnetworking.net/)
- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126)
Since I am not a network engineer, most of the examples I used above come from this (not free) book:
- [Hands-On Enterprise Automation with Python (Book)](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512)
See you on [Day 27](day27.md)

BIN
2023.jpg

Binary file not shown.

Before

Width:  |  Height:  |  Size: 6.0 KiB

18
2023.md

@ -34,21 +34,21 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
## Progress
- [] ♾️ 1 > [2022 Reflection & Welcome 2023](2023/day01.md)
- [✔️] ♾️ 1 > [2022 Reflection & Welcome 2023](2023/day01.md)
### DevSecOps
- [] ♾️ 2 > [The Big Picture: DevSecOps](2023/day02.md)
- [] ♾️ 3 > [Think like an Attacker](2023/day03.md)
- [] ♾️ 4 > [Red Team vs. Blue Team](2023/day04.md)
- [] ♾️ 5 > [OpenSource Security](2023/day05.md)
- [] ♾️ 6 > [Hands-On: Building a weak app](2023/day06.md)
- [✔️] ♾️ 2 > [The Big Picture: DevSecOps](2023/day02.md)
- [✔️] ♾️ 3 > [Think like an Attacker](2023/day03.md)
- [✔️] ♾️ 4 > [Red Team vs. Blue Team](2023/day04.md)
- [✔️] ♾️ 5 > [OpenSource Security](2023/day05.md)
- [✔️] ♾️ 6 > [Hands-On: Building a weak app](2023/day06.md)
### Secure Coding
- [] ⌨️ 7 > [](2023/day07.md)
- [] ⌨️ 8 > [](2023/day08.md)
- [] ⌨️ 9 > [](2023/day09.md)
- [✔️] ⌨️ 7 > [Secure Coding Overview](2023/day07.md)
- [✔️] ⌨️ 8 > [SAST Overview](2023/day08.md)
- [✔️] ⌨️ 9 > [SAST Implementation with SonarCloud](2023/day09.md)
- [] ⌨️ 10 > [](2023/day10.md)
- [] ⌨️ 11 > [](2023/day11.md)
- [] ⌨️ 12 > [](2023/day12.md)

BIN
2023.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 159 KiB


@ -1 +1,65 @@
This is a test
## 2022 Reflection & Welcome 2023
Hey everyone and welcome to the 2023 edition of #90DaysOfDevOps in this Day 1 post the plan is to reflect on the 2022 edition and some statistics, feedback, and ideas that we have had during the year.
### 2022 Recap
First, WOW! To think that the mission I thought up on New Year's Eve 2021 was to spend the first 90 days of 2022 learning and documenting that learning, basically writing some notes after watching some much smarter people than me on YouTube.
Fast forward a year, and we have some amazing numbers on the repository. I think I mentioned it at least somewhere in the repository, and I know I have said elsewhere many times, that any content is worth doing if it helps even just one person; to have the numbers we have here, from stars to forks, is incredible.
![](images/day01-1.jpg)
Also, nearly **500** watchers of the repository!
First, I want to thank everyone for sharing the repository with the community. Hearing that Microsoft and other massive tech vendors have shared this with their teams is humbling.
Secondly, I would like to thank the contributors. This started out as a place to take notes and learn in public, and it wasn't until a few days in that I saw people correcting my poor spelling and grammar. (I am sure the same will happen this year.) But the biggest and most amazing thing was the community that started to translate the repository into their native languages! How amazing to think this was happening and helping non-native English speakers learn more about the power of DevOps.
![](images/day01-2.png)
If you would like to find the amazing contributors on the repository, then you can head to the [Contributors](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/Contributors.md)
### Continuous Learning
I have mentioned, many times, that we are never done learning; if you think you are, then you have picked the wrong industry, as things are changing all the time and at a rapid pace.

It is for that reason that we must keep learning. Learning is a challenge for some, and for those people I urge you to find a medium that you enjoy. I have always enjoyed documenting something I learn, like this, and then getting hands-on. The premise of this project is exactly that: it is about foundational knowledge of some of the key areas of DevOps and the tooling that achieves it. You are not going to be a graduated DevOps engineer by following along, but you are going to have a better understanding of the terminology and get hands-on with some technologies that maybe you do not see on a day-to-day basis.

I also want to add that everyone is constantly evolving and learning. It doesn't matter if you are the CTO of a software company or a systems administrator wanting to learn more about automation; everyone is learning, and that little imposter-syndrome feeling is normal. My advice is to run towards it rather than away from it, and you will absolutely reap the rewards. Also, learn what you enjoy, as this makes learning more enjoyable.
### Security focused
For those that have been following along, you will know that the biggest area we missed in the 2022 edition was security, aptly named DevSecOps: how we integrate security into that infinite DevOps cycle to ensure we are always thinking about security.

In this edition, we will be diving headfirst into security processes and principles as they pertain to DevSecOps, and getting to some more topics that we missed in the first round.
### A little help from my friends
The 2022 edition was the equivalent of writing a blog post each day. We were well over 100k words, and if we were to spin this into an eBook (which is an option; instructions can be found in the repository if you so wish), you would find over 700 pages of A4 paper in total. The book idea is not dead and buried, and I am working on a smaller version behind the scenes that might be a nice giveaway at a conference near you, along with our amazing stickers.

Another gap for me, and maybe this was part of the authenticity of the project, was that I was just starting to learn and documenting that learning journey in some of these areas. This time around I have asked some friends in the community to help.
There are two reasons for this:
1. I think it is important to get different perspectives across topics, and also we are all going to learn best if we hear from subject-matter experts in those specific topic areas.
2. Some of the friends helping here will have the opportunity to grow their brand and potentially even speak at events about their topics and the overall project.
You can find the 2023 authors on the opening 2023.md page with links to their bios and contact details.
I think it is also time to be very clear about the project. Nobody is being paid to write, nobody is being paid to talk about the project. I was approached about sponsorship several times, but the premise of this project is for it to remain impartial, free and for the community. Yes, we have used some projects and products throughout but none of the companies have sponsored or had a say in what has been written.
Finally, my employer, Veeam Software: I am extremely lucky to have a company that enables me to be part of the community and document my learnings without interference. I don't work a traditional 9-5, and I am sure many people reading this do not either, but I am free to create content like this project.
### Resources
Throughout this project, and the previous 2022 edition, you will find the resources section: a list of content that I or my fellow authors have been through. If you want to learn more than what you are reading here, go and grab this content.
You can find the 2022 edition [here](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/2022.md)
But some community members have also been busy at work transforming the content and creating a new look and feel through [GitHub Pages](https://www.90daysofdevops.com/#/)
On the [2023 page](https://www.90daysofdevops.com/#/2023) you will also find ways to interact and join the community.
With that said let's get into things with [Day 2](day02.md).


@ -0,0 +1,85 @@
## The Big Picture: DevSecOps
Welcome to Day 2 of the 2023 edition. In this first module, over the next 6 days, we are going to take a foundational look at DevSecOps.
### What is DevSecOps?
DevSecOps is a software development approach that aims to bring together development, security, and operations teams to build and maintain secure software applications. It is based on the principles of continuous integration, continuous delivery, and continuous deployment, which aim to deliver software updates and features more quickly and frequently. In DevSecOps, security is an integral part of the software development process, rather than an afterthought. This means that security testing, monitoring, and other security measures are built into the software development life cycle (SDLC) from the beginning, rather than being added later. DevSecOps aims to improve collaboration and communication between development, security, and operations teams, to create a more efficient and effective software development process.
### DevSecOps vs DevOps
I use the "vs" lightly here again, but if we think back to the 2022 edition, the goal of DevOps is to improve the speed, reliability, and quality of software releases.
DevSecOps is an extension of the DevOps philosophy that emphasizes the integration of security practices into the software development process. The goal of DevSecOps is to build security measures into the software development process so that security is an integral part of the software from the start, rather than an afterthought. This helps to reduce the risk of security vulnerabilities being introduced into the software and makes it easier to identify and fix any issues that do arise.
DevOps focuses on improving collaboration and communication between developers and operations staff to improve the speed, reliability, and quality of software releases, while DevSecOps focuses on integrating security practices into the software development process to reduce the risk of security vulnerabilities and improve the overall security of the software.
### Automated Security
Automated security refers to the use of technology to perform security tasks without the need for human intervention. This can include things like security software that monitors a network for threats and takes action to block them, or systems that use artificial intelligence to analyse security footage and identify unusual activity. Automated security systems are designed to make security processes more efficient and effective, and to help reduce the workload on security personnel.
A key component of all things DevSecOps is the ability to automate a lot of the tasks at hand when creating and delivering software, when we add security from the start it means we also need to consider the automation aspect of security.
### Security at Scale (Containers and Microservices)
We know that the scale and dynamic infrastructure that has been enabled by containerisation and microservices have changed the way that most organisations do business.
This is also why we must bring that automated security into our DevOps principles to ensure that specific container security guidelines are met.
What I mean by this is that with cloud-native technologies we cannot have only static security policies and posture; our security model must also be dynamic with the workload in hand and how it is running.
DevOps teams will need to include automated security to protect the overall environment and data, as well as continuous integration and continuous delivery processes.
The below list is taken from a [RedHat blog post](https://www.redhat.com/en/topics/devops/what-is-devsecops)
- Standardise and automate the environment: Each service should have the least privilege possible to minimize unauthorized connections and access.
- Centralise user identity and access control capabilities: Tight access control and centralised authentication mechanisms are essential for securing microservices since authentication is initiated at multiple points.
- Isolate containers running microservices from each other and the network: This includes both in-transit and at-rest data since both can represent high-value targets for attackers.
- Encrypt data between apps and services: A container orchestration platform with integrated security features helps minimize the chance of unauthorized access.
- Introduce secure API gateways: Secure APIs increase authorization and routing visibility. By reducing exposed APIs, organizations can reduce surfaces of attacks.
### Security is HOT right now
One thing you will have seen, regardless of your background, is that security is hot all over the industry. This is partly down to security breaches appearing in global news, and big brands being affected by security vulnerabilities or by bad practices that let bad actors into the networks of these companies. It is fair to say, at least from my perspective, that creating software is much more achievable and obtainable now than it has ever been. But the software being created is increasingly exposed to vulnerabilities and the like, which allow bad actors to cause havoc and sometimes hold data to ransom or shut businesses down, causing mayhem. We have discussed what DevSecOps is, but I think it is also worthwhile exploring the cybersecurity side of the attack vector, and why we protect our software supply chain to help avoid these cyber-attacks.
### Cybersecurity vs DevSecOps
As the heading suggests, it is not really a "vs" but more a matter of the differences between the two topics. But I think it is important to raise this, as it explains why security must be part of the DevOps process, principles, and methodology.
Cybersecurity is the practice of protecting computer systems and networks from digital attacks, theft, and damage. It involves identifying and addressing vulnerabilities, implementing security measures, and monitoring systems for threats.
DevSecOps, on the other hand, is a combination of development, security, and operations practices. It is a philosophy that aims to integrate security into the development process, rather than treating it as a separate step. This involves collaboration between development, security, and operations teams throughout the entire software development lifecycle (SDLC).
Some key differences between cybersecurity and DevSecOps include:
**Focus**: Cybersecurity is primarily focused on protecting systems from external threats, while DevSecOps focuses on integrating security into the development process.
**Scope**: Cybersecurity covers a wider range of topics, including network security, data security, application security, and more. DevSecOps, on the other hand, is specifically focused on improving the security of software development and deployment.
**Approach**: Cybersecurity typically involves implementing security measures after the development process is complete, while DevSecOps involves integrating security into the development process from the start.
**Collaboration**: Cybersecurity often involves collaboration between IT and security teams, while DevSecOps involves collaboration between development, security, and operations teams.
## Resources
Over the course of the 90 Days, we will have a daily resources list that will bring relevant content that will help continue the topics and where you can go to find out more.
- [TechWorld with Nana - What is DevSecOps? DevSecOps explained in 8 Mins](https://www.youtube.com/watch?v=nrhxNNH5lt0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=1&t=19s)
- [What is DevSecOps?](https://www.youtube.com/watch?v=J73MELGF6u0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=2&t=1s)
- [freeCodeCamp.org - Web App Vulnerabilities - DevSecOps Course for Beginners](https://www.youtube.com/watch?v=F5KJVuii0Yw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=3&t=67s)
- [The Importance of DevSecOps and 5 Steps to Doing it Properly (DevSecOps EXPLAINED)](https://www.youtube.com/watch?v=KaoPQLyWq_g&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=4&t=13s)
- [Continuous Delivery - What is DevSecOps?](https://www.youtube.com/watch?v=NdvMUcWNlFw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=5&t=6s)
- [Cloud Advocate - What is DevSecOps?](https://www.youtube.com/watch?v=a2y4Oj5wrZg&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=6)
- [Cloud Advocate - DevSecOps Pipeline CI Process - Real world example!](https://www.youtube.com/watch?v=ipe08lFQZU8&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=7&t=204s)
Hopefully this gave you a taster of what you can expect from this module, and some of the resources above will help provide more depth on the topic. In the post on [Day 3](day03.md) we will take a look at how an attacker thinks, which is why we have to protect from the start.

## Think Like an Attacker
Yesterday we covered what is DevSecOps, in this post we are going to look at some of the characteristics of an attacker. For us to think about the attacker we must think like an attacker.
### Characteristics of an Attacker
First and foremost, every business and every piece of software is an attack vector to an attacker. There is no safe place; we can only make places safer and less attractive to attack.
![](images/day03-2.jpg)
***[image from this source](https://www.trainerize.me/articles/outrun-bear/)***
With that in mind, attackers are a constant threat!
Attackers will identify gaps in security by running attacks in a specific order to gain access, pull data and be successful in their mission.
Attackers can be lucky, but they will absolutely work on targeted attacks.
Compromises can be slow and persistent or fast to get to a breach. Not all attacks are going to be the same.
### Motivations of an Attacker
As a DevOps team, you are going to be provisioning infrastructure and software, and protecting environments that likely span multiple clouds, virtualisation, and containerisation platforms.
We must consider the following:
- **How** would they attack us?
- **Why** would they attack us?
- **What** do we have that is valuable to an attacker?
The motivations of an attacker will also be different depending on the attacker. I mean it could just be for fun... We have probably all been there, in school and just gone a little too deep into the network looking for more information. Who has a story to tell?
But as we have seen in the media, attacks are more often aligned to monetary gain, fraud, or even political motives against businesses and organisations.
In the Kubernetes space, we have even seen attackers leveraging and using the computing power of an environment to mine cryptocurrency.
At the heart of this attack is most likely going to be **DATA**.
A company's data is likely extremely valuable to the company, and potentially valuable out in the wild too. That is why we put so much emphasis on protecting this data, ensuring that it is secure and encrypted.
### Attack Maps
We now have a motive and some of the characteristics of an attacker or a group of attackers. If this is a planned attack, then you are going to need a plan: you need to identify which services and data you are targeting.
An attack map is a visual representation of an attack on a computer network. It shows the various stages of the attack, the tools and techniques used by the attacker, and the points of entry and exit into the network. Attack maps can be used to analyse the details of past attacks, identify vulnerabilities in a network, and plan defences against future attacks. They can also be used to communicate information about an attack to non-technical stakeholders, such as executives or legal teams.
You can see from the above description that an Attack Map should be created by both sides or both teams (team-wise, this is something I am going to cover in a later post).
If you were to create an Attack Map of your home network or your business, some of the things you would want to capture are:
- Capture a graphical representation of your app including all communication flows and technologies being used.
- A list of potential vulnerabilities and areas of attack.
- Consider confidentiality, integrity and availability for each connection/interaction within the app.
- Map the attacks/vulnerabilities
An attack map might look something like this with a key explaining what each number represents.
![](images/day03-1.png)
From this map, we might consider a denial of service attack or some malicious insider attack with access to the S3 bucket, preventing the application from saving data or causing it to save bad data.
This map is never final; in the same way that your application continually moves forward through feedback, this attack map also needs to be tested against, which provides feedback, which in turn means the security posture is strengthened against these attacks. You could call this "Continuous Response" in the Security Feedback loop.
At a bare minimum, we should follow a good, better, best model to improve the security posture.
- **Good** - Identify security design constraints and controls that need to be built into the software to reduce the risk of an attack.
- **Better** - Prioritise and build security in for issues found later in the software cycle.
- **Best** - Build automation into script deployment to detect issues: unit testing, security testing, black box testing.
Security is a design constraint - albeit an inconvenient one.
## Resources
- [devsecops.org](https://www.devsecops.org/)
- [TechWorld with Nana - What is DevSecOps? DevSecOps explained in 8 Mins](https://www.youtube.com/watch?v=nrhxNNH5lt0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=1&t=19s)
- [What is DevSecOps?](https://www.youtube.com/watch?v=J73MELGF6u0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=2&t=1s)
- [freeCodeCamp.org - Web App Vulnerabilities - DevSecOps Course for Beginners](https://www.youtube.com/watch?v=F5KJVuii0Yw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=3&t=67s)
- [The Importance of DevSecOps and 5 Steps to Doing it Properly (DevSecOps EXPLAINED)](https://www.youtube.com/watch?v=KaoPQLyWq_g&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=4&t=13s)
- [Continuous Delivery - What is DevSecOps?](https://www.youtube.com/watch?v=NdvMUcWNlFw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=5&t=6s)
- [Cloud Advocate - What is DevSecOps?](https://www.youtube.com/watch?v=a2y4Oj5wrZg&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=6)
- [Cloud Advocate - DevSecOps Pipeline CI Process - Real world example!](https://www.youtube.com/watch?v=ipe08lFQZU8&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=7&t=204s)
See you on [Day 4](day04.md)

## <span style="color:red">Red Team</span> vs. <span style="color:blue">Blue Team</span>
Something I mentioned in the last session was <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> teams. In the security space, <span style="color:red">**Red**</span> teams and <span style="color:blue">**Blue**</span> teams work as attackers and defenders to improve an organisation's security.
Both teams work toward improving an organisation's security posture but in different ways.
The <span style="color:red">**Red**</span> team has the role of the attacker by trying to find vulnerabilities in code or infrastructure and attempting to break through cybersecurity defences.
The <span style="color:blue">**Blue**</span> team defends against those attacks and responds to incidents when they occur.
![](images/day04-2.jpg)
***[image from this source](https://hackernoon.com/introducing-the-infosec-colour-wheel-blending-developers-with-red-and-blue-security-teams-6437c1a07700)***
### The Benefits
A very good way to understand and improve a company's security posture is to run these exercises between the <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> teams. The whole idea is that this scenario mimics a real attack. Some of the areas this approach will help with are the following:
- Finding vulnerabilities
- Hardening network security
- Gaining experience in detecting and isolating attacks
- Building detailed response plans
- Raising overall company security awareness
### <span style="color:red">Red Team</span>
NIST (the National Institute of Standards and Technology) describes the <span style="color:red">**Red**</span> Team as:
“a group of people authorized and organized to emulate a potential adversarys attack or exploitation capabilities against an enterprises security posture.”
They are playing the bad actor in the scenario or simulation of the attack.
When we speak about both <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> teams, it is possibly wider than the DevSecOps process and principles of a software lifecycle, but knowing this is not going to hurt, and practices from DevSecOps will ensure overall that you have a better security posture.
The <span style="color:red">**Red**</span> team is tasked with thinking like the attacker, which we covered in the last session. Think about social engineering, and including the wider teams within the business, to manipulate and gain access to the network and services.
A key fundamental of the <span style="color:red">**Red**</span> team is understanding software development. By understanding and knowing how applications are built, you are going to be able to identify possible weaknesses, then write your programs to try to gain access and exploit them. On top of this, you may have heard the term "penetration testing" or "pen testing"; the overall aim for the <span style="color:red">**Red**</span> team is to identify and try to exploit known vulnerabilities within an environment. With the rise of Open Source software, this is another area that I want to cover in a few sessions' time.
### <span style="color:blue">Blue Team</span>
NIST (the National Institute of Standards and Technology) describes the <span style="color:blue">**Blue**</span> Team as:
“the group responsible for defending an enterprises use of information systems by maintaining its security posture against a group of mock attackers.”
The <span style="color:blue">**Blue**</span> team plays defence; they are going to analyse the security posture currently in the business and then take action to improve it and stop those external attacks. In the <span style="color:blue">**Blue**</span> team you are also going to be focused on continuous monitoring (something we covered at the end of 2022 regarding DevOps), monitoring for breaches and responding to them when they occur.
As part of the <span style="color:blue">**Blue**</span> team you are going to have to understand the assets you are protecting and how best to protect them. In the IT landscape today we have lots of diverse options to run our workloads, applications and data.
- Assessing Risk - running risk assessments is going to give you a good understanding of the most critical assets within the business.
- Threat Intelligence - What threats are out there? There are thousands of vulnerabilities, possibly without a resolution; how can you mitigate the risk of those services without damaging the use case and the business need?
### Cybersecurity colour wheel
As Cybersecurity grows in importance, with all the big brands getting hit, there is a need for more than just the <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> teams when it comes to security within a business.
![](images/day04-1.png)
***[image from this source](https://hackernoon.com/introducing-the-infosec-colour-wheel-blending-developers-with-red-and-blue-security-teams-6437c1a07700)***
- The <span style="color:yellow">**Yellow Team**</span> are our builders, the engineers and developers who develop the security systems and applications.
"We have our <span style="color:red">**Red**</span> and <span style="color:blue">**Blue**</span> Teams just as we always have, but now with the introduction of a <span style="color:yellow">**Yellow**</span> Team, we can have secondary coloured teams (Orange, Green and Purple) dedicated to mixing skills between attackers, defenders and codersmaking code more secure and the organisation more secure."
The above abstract was taken from the top resource listed at the end of the post.
<span style="color:red">**Red**</span>, <span style="color:blue">**Blue**</span>, <span style="color:yellow">**Yellow**</span> are primary colours, combine them and we start to understand where the other colours or secondary colours come into play, again really great explanation in that first link.
- <span style="color:purple">**Purple Team**</span> - The special team! If you take <span style="color:blue">**Blue**</span> and <span style="color:red">**Red**</span> you get <span style="color:purple">**Purple**</span>. If you integrate defence with offence, and you collaborate and share knowledge between the teams, you provide a better posture throughout.
- <span style="color:green">**Green Team**</span> - The feedback loop; the <span style="color:green">**Green**</span> team are going to take insights from the <span style="color:blue">**Blue**</span> team and work closely with the <span style="color:yellow">**Yellow**</span> team to be more efficient. Mix <span style="color:blue">**Blue**</span> and <span style="color:yellow">**Yellow**</span> and what do you get? <span style="color:green">**Green**</span>.
- <span style="color:orange">**Orange Team**</span> - Much like the <span style="color:green">**Green**</span> team working with the <span style="color:blue">**Blue**</span> team for feedback, the <span style="color:orange">**Orange**</span> team works with the <span style="color:red">**Red**</span> team and pass on what they have learnt to the <span style="color:yellow">**Yellow**</span> team to build better security into their code.
When I got into researching this, I realised that maybe I was moving away from the DevOps topics, but please, anyone in the DevSecOps space: is this useful? Is it correct? Do you have anything to add?
Obviously, throughout, we have a plan to dive into more specifics around DevSecOps and its different stages, so I was mindful not to cover areas here that will be covered in future sessions.
Also please add any additional resources.
## Resources
- [Introducing the InfoSec colour wheelblending developers with red and blue security teams.](https://hackernoon.com/introducing-the-infosec-colour-wheel-blending-developers-with-red-and-blue-security-teams-6437c1a07700)

## Open Source Security
Open-source software has become widely used over the past few years due to its collaborative and community/public nature.
The term Open Source refers to software in the public domain that people can freely use, modify, and share.
The main reason for this surge of adoption and interest in Open Source is the speed it adds when augmenting proprietary code developed in-house: leveraging OSS can accelerate application development and help get your commercial product to market faster.
### What is Open-Source Security?
Open-source security refers to the practice of ensuring the safety and security of computer systems and networks that use open-source software. As we said above, open-source software is freely available to use, modify, and distribute, and it is typically developed by a community of volunteers; however, there is also huge uptake from big software vendors that contribute back to open source. You only need to look at the Kubernetes repository to see which vendors are heavily invested there.
Because open-source software is freely available, it can be widely used and studied, which can help to improve its security. However, it is important to ensure that open-source software is used responsibly and that any vulnerabilities are addressed in a timely manner to maintain its security.
### Understanding OSS supply chain security
I would normally distil a longer-form video into a paragraph here, but as this one is only 10 minutes I thought it made sense to link the resource instead: [Understanding Open-Source Supply Chain Security](https://www.youtube.com/watch?v=pARGj6j0-ZY)
Be it a commercial product leveraging OSS or an OSS project using packages or other OSS code we must have an awareness from top to bottom and provide better visibility between projects.
### 3 As of OSS Security
Another resource I found useful here from IBM, will be linked below in the resources section.
- **Assess** - Look at the project health: how active is the repository, and how responsive are the maintainers? If these show bad signs, then you are not going to be happy about the security of the project.
At this stage, we can also check the security model, code reviews, data validations, and test coverage for security. How does the project handle CVEs?
What dependencies does this project have? Explore the health of these in turn as you need to be sure the whole stack is good.
- **Adopt** - If you are going to take this on within your software or as a standalone app within your own stack, who is going to manage and maintain it? Set some policies on who internally will overlook the project and support the community.
- **Act** - Security is the responsibility of everyone, not just the maintainers, as a user you should also act and assist with the project.
### Log4j Vulnerability
In late 2021, a vulnerability massively hit the headlines: the Log4j (CVE-2021-44228) RCE vulnerability.
Log4j is a very common logging library in Java, so the vulnerability in turn affected millions of Java-based applications.
A malicious actor could use this vulnerability within the application to gain access to a system.
Two big things I mentioned:
- **millions** of applications have this package in use.
- **malicious actors** could leverage this to gain access or plant malware into an environment.
The reason I am raising this is that security never stops; the growth of open-source adoption has increased this attack vector on applications, and this is why there needs to be an overall effort on security from day 0.
## Resources
- [Open Source Security Foundation](https://openssf.org/)
- [Snyk - State of open source security 2022](https://snyk.io/reports/open-source-security/)
- [IBM - The 3 A's of Open Source Security](https://www.youtube.com/watch?v=baZH6CX6Zno)
- [Log4j (CVE-2021-44228) RCE Vulnerability Explained](https://www.youtube.com/watch?v=0-abhd-CLwQ)

## Hands-On: Building a weak app
Nobody really sets out to build a weak or vulnerable app... do they?
No is the correct answer: nobody should or does set out to build a weak application, and nobody intends to use packages or other open-source software that brings its own vulnerabilities.
In this final introduction section to DevSecOps, I want to attempt to build and raise awareness of some of the misconfigurations and weaknesses that might otherwise fall by the wayside. Then, over the next 84 days or even sooner, we are going to hear from some subject matter experts in the security space on how to prevent bad things and weak applications from being created.
### Building our first weak application
<span style="color:red">**Important Message: This exercise is to highlight bad practices and weaknesses in an application. Please do try this at home, but beware: this is bad practice!**</span>
At this stage, I am not going to run through my software development environment in any detail. I would generally be using VScode on Windows with WSL2 enabled. We might then use Vagrant to provision dedicated compute instances to VirtualBox all of which I covered throughout the 2022 sections of #90DaysOfDevOps mostly in the Linux section.
### Bad Coding Practices or Coding Bad Practices
It is very easy to copy and paste into GitHub!
How many people check, end-to-end, the packages that they include in their code?
We also must consider:
- Do we trust the user/maintainer?
- Not validating input on our code
- Hardcoding secrets vs env or secrets management
- Trusting code without validation
- Adding your secrets to public repositories (How many people have done this?)
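To make the hardcoded-secrets point concrete, here is a tiny Ruby sketch (the constant name and the secret value are invented for illustration):

```ruby
# BAD: a hardcoded secret lives in source control forever,
# visible to anyone who can read the repository or its history
DB_PASSWORD = "SuperSecret123!"

# BETTER: read the secret from the environment at runtime,
# so it never appears in the codebase
db_password = ENV["DB_PASSWORD"]
warn "DB_PASSWORD is not set" if db_password.nil?
```

In a real deployment the environment variable would be populated by a secrets manager rather than set by hand.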
Now, going back to the overall topic of DevSecOps: everything we are doing or striving towards is faster iterations of our application or software, but this means we can introduce defects and risks faster.
We will also likely be deploying our infrastructure with code, another risk is including bad code here that lets bad actors in via defects.
Deployments will also include application configuration management, another level of possible defects.
However! Faster iterations can and do mean faster fixes as well.
### OWASP - Open Web Application Security Project
*"[OWASP](https://owasp.org/) is a non-profit foundation that works to improve the security of software. Through community-led open-source software projects, hundreds of local chapters worldwide, tens of thousands of members, and leading educational and training conferences, the OWASP Foundation is the source for developers and technologists to secure the web."*
If we look at their most recent data set and their [top 10](https://owasp.org/www-project-top-ten/), we can see the following big-ticket items for why things go wrong:
1. Broken Access Control
2. Cryptographic Failures
3. Injection (2017 #1)
4. Insecure Design (New for 2021)
5. Security Misconfiguration
6. Vulnerable and Outdated Components (2017 #9)
7. Identification and Authentication Failures (2017 #2)
8. Software and Data Integrity Failures (New for 2021)
9. Security Logging and Monitoring Failures (2017 #10)
10. Server-Side Request Forgery (SSRF)
### Back to the App
<span style="color:red">**The warning above still stands. I will deploy this to a local VirtualBox VM; if you do decide to deploy this to a cloud instance then please firstly be careful, and secondly know how to lock down your cloud provider to only your own remote IP!**</span>
Ok, I think that is enough warnings; I am sure we will see the red warnings some more over the next few weeks as we get deeper into discussing this topic.
The application that I am going to be using is from [DevSecOps.org](https://github.com/devsecops/bootcamp/blob/master/Week-2/README.md). This was one of their bootcamps years ago, but it still allows us to show what a bad app looks like.
Having the ability to see a bad or a weak application means we can start to understand how to secure it.
Once again, I will be using VirtualBox on my local machine and I will be using the following vagrantfile (link here to intro on vagrant)
The first alarm bell is that this vagrant box was created over 2 years ago!
```
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provider :virtualbox do |v|
    v.memory = 8096
    v.cpus = 4
  end
end
```
If you navigate to this folder, you can use `vagrant up` to spin up your CentOS 7 machine in your environment.
![](images/day06-1.png)
Then we will need to access our machine; you can do this with `vagrant ssh`.
We are then going to install MariaDB as a local database to use in our application.
`sudo yum -y install mariadb mariadb-server mariadb-devel`
start the service with
`sudo systemctl start mariadb.service`
We have to install some dependencies; this is also where I had to change what the Bootcamp suggested, as NodeJS was not available in the current repositories.
`sudo yum -y install links`
`sudo yum install --assumeyes epel-release`
`sudo yum install --assumeyes nodejs`
You can confirm you have node installed with `node -v` and `npm -v` (npm should be installed as a dependency)
For this app we will be using Ruby, a language we have not covered at all yet. We will not really get into much detail about it, but I will try to find some good resources and add them below.
Install with
`curl -L https://get.rvm.io | bash -s stable`
You might be asked to add keys as part of the above; follow those steps.
For us to use rvm we need to do the following:
`source /home/vagrant/.rvm/scripts/rvm`
and finally, install it with
`rvm install ruby-2.7`
The reason for this long-winded process is that the CentOS 7 box we are using is old, and only an old Ruby is shipped in its normal repositories.
Check installation and version with
`ruby --version`
We next need the Ruby on Rails framework, which can be installed using the following command.
`gem install rails`
Next, we need git and we can get this with
`sudo yum install git`
Just for the record, and I am not sure if it is required, I also had Redis installed on my machine as I was doing something else, but it might actually still be needed, so these are the steps.
```
sudo yum install epel-release
sudo yum install redis
```
The above could be related to Turbo Streams, but I did not have time to learn more about Ruby on Rails.
Now let's finally create our application (for the record, I went through a lot to make sure these steps worked on my system, so I am sending you all the luck).
Create the app with the following, calling it what you wish:
`rails new myapp --skip-turbolinks --skip-spring --skip-test-unit -d mysql `
next, we will create the database and schema:
```
cd myapp
bundle exec rake db:create
bundle exec rake db:migrate
```
We can then run our app with `bundle exec rails server -b 0.0.0.0`
![](images/day06-2.png)
Then open a browser to hit that box. I had to change my VirtualBox VM networking to bridged vs NAT so that I could navigate to it instead of using `vagrant ssh`.
![](images/day06-3.png)
Now we need to **scaffold** a basic model
A scaffold is a set of automatically generated files which forms the basic structure of a Rails project.
We do this with the following commands:
```
bundle exec rails generate scaffold Bootcamp name:string description:text dates:string
bundle exec rake db:migrate
```
![](images/day06-4.png)
Add a default route to config/routes.rb
`root "bootcamps#index"`
![](images/day06-5.png)
Now edit app/views/bootcamps/show.html.erb and make the description field a raw field. Add the below.
```
<p>
  <strong>Description:</strong>
  <%= raw @bootcamp.description %>
</p>
```
Now, why is this all relevant? Using `raw` on the description field means that this field becomes a potential XSS, or cross-site scripting, target.
This can be explained better with a video [What is Cross-Site Scripting?](https://youtu.be/DxsmEXicXEE)
The rest of the Bootcamp goes on to add search functionality, which also increases the capabilities around an XSS attack; this is another great example of a demo attack you could try out on a [vulnerable app](https://www.softwaretestinghelp.com/cross-site-scripting-xss-attack-test/).
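The difference between the default escaped output and `raw` can be shown outside Rails with plain Ruby's CGI escaping (the payload string here is just an example):

```ruby
require "cgi"

payload = "<script>alert('xss')</script>"

# What `<%= raw @bootcamp.description %>` effectively emits: the markup
# reaches the browser intact and the script runs
raw_output = payload

# What the default `<%= @bootcamp.description %>` does: HTML-escape first,
# so the browser renders the text instead of executing it
escaped_output = CGI.escapeHTML(payload)
```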
### Create search functionality
In app/controllers/bootcamps_controller.rb, we'll add the following logic to the index method:
```
def index
  @bootcamps = Bootcamp.all
  if params[:search].to_s != ''
    # NOTE: interpolating user input straight into the SQL string is the
    # deliberate weakness here - a classic SQL injection vector
    @bootcamps = Bootcamp.where("name LIKE '%#{params[:search]}%'")
  else
    @bootcamps = Bootcamp.all
  end
end
```
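To see why that interpolated `where` is dangerous, here is a standalone Ruby sketch (no Rails needed; the table name and attacker input are made up) of how a crafted search term rewrites the query, and how a bound parameter keeps the input out of the SQL text:

```ruby
# A hypothetical attacker's search-box input
search = "%' OR '1'='1"

# Vulnerable: string interpolation lets the quote break out of the
# LIKE literal, and the OR '1'='1' clause then matches every row
unsafe_sql = "SELECT * FROM bootcamps WHERE name LIKE '%#{search}%'"

# Safer sketch: the SQL text stays fixed and the value is bound
# separately (in Rails: Bootcamp.where("name LIKE ?", "%#{search}%"))
safe_sql   = "SELECT * FROM bootcamps WHERE name LIKE ?"
bind_value = "%#{search}%"
```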
In app/views/bootcamps/index.html.erb, we'll add the search field:
```
<h1>Search</h1>
<%= form_tag(bootcamps_path, method: "get", id: "search-form") do %>
<%= text_field_tag :search, params[:search], placeholder: "Search Bootcamps" %>
<%= submit_tag "Search Bootcamps"%>
<% end %>
<h1>Listing Bootcamps</h1>
```
Massive thanks to [DevSecOps.org](https://www.devsecops.org/); this is where I found the old but great walkthrough, with a few tweaks above. There is also so much more information to be found there.
With that much longer walkthrough than anticipated, I am going to hand over to the next sections and authors to highlight how not to do this, and how to make sure we are not releasing bad code or vulnerabilities out into the wild.
## Resources
- [devsecops.org](https://www.devsecops.org/)
- [TechWorld with Nana - What is DevSecOps? DevSecOps explained in 8 Mins](https://www.youtube.com/watch?v=nrhxNNH5lt0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=1&t=19s)
- [What is DevSecOps?](https://www.youtube.com/watch?v=J73MELGF6u0&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=2&t=1s)
- [freeCodeCamp.org - Web App Vulnerabilities - DevSecOps Course for Beginners](https://www.youtube.com/watch?v=F5KJVuii0Yw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=3&t=67s)
- [The Importance of DevSecOps and 5 Steps to Doing it Properly (DevSecOps EXPLAINED)](https://www.youtube.com/watch?v=KaoPQLyWq_g&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=4&t=13s)
- [Continuous Delivery - What is DevSecOps?](https://www.youtube.com/watch?v=NdvMUcWNlFw&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=5&t=6s)
- [Cloud Advocate - What is DevSecOps?](https://www.youtube.com/watch?v=a2y4Oj5wrZg&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=6)
- [Cloud Advocate - DevSecOps Pipeline CI Process - Real world example!](https://www.youtube.com/watch?v=ipe08lFQZU8&list=PLsKoqAvws1pvg7qL7u28_OWfXwqkI3dQ1&index=7&t=204s)
See you on [Day 7](day07.md), where we will start a new section on Secure Coding.

# Day 7: Secure Coding Overview
Secure coding is the practice of writing software in a way that ensures the security of the system and the data it processes. It involves designing, coding, and testing software with security in mind to prevent vulnerabilities and protect against potential attacks.
There are several key principles of secure coding that developers should follow:
1. Input validation: It is important to validate all user input to ensure that it is in the expected format and does not contain any malicious code or unexpected characters. This can be achieved through the use of regular expressions, data type checks, and other validation techniques.
2. Output encoding: Output data should be properly encoded to prevent any potential injection attacks. For example, HTML output should be properly escaped to prevent cross-site scripting (XSS) attacks, and SQL queries should be parameterized to prevent SQL injection attacks.
3. Access control: Access control involves restricting access to resources or data to only those users who are authorized to access them. This can include implementing authentication and authorization protocols, as well as enforcing least privilege principles to ensure that users have only the access rights they need to perform their job duties.
4. Error handling: Error handling is the process of properly handling errors and exceptions that may occur during the execution of a program. This can include logging errors, displaying appropriate messages to users, and mitigating the impact of errors on system security.
5. Cryptography: Cryptography should be used to protect sensitive data and communications, such as passwords, financial transactions, and sensitive documents. This can be achieved through the use of encryption algorithms and secure key management practices.
6. Threat Modeling: Document, locate, address, and validate are the four steps to threat modeling. To securely code, you need to examine your software for areas susceptible to increased threats of attack. Threat modeling is a multi-stage process that should be integrated into the software lifecycle from development, testing, and production.
7. Secure storage: Secure storage involves properly storing and handling sensitive data, such as passwords and personal information, to prevent unauthorized access or tampering. This can include using encryption, hashing, and other security measures to protect data at rest and in transit.
8. Secure architecture: Secure architecture is the foundation of a secure system. This includes designing systems with security in mind, using secure frameworks and libraries, and following secure design patterns.
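As a small illustration of the input-validation principle above, here is a Ruby sketch of allowlist validation (the username rule is an arbitrary example, not a standard):

```ruby
# Allowlist validation: accept only 3-20 word characters for a username.
# \A and \z anchor the whole string, so embedded newlines or markup
# cannot sneak past the check.
USERNAME_RE = /\A\w{3,20}\z/

def valid_username?(input)
  USERNAME_RE.match?(input.to_s)
end
```

Rejecting anything outside a known-good pattern is generally safer than trying to enumerate every bad input.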
There are several tools and techniques that can be used to help ensure that code is secure, including Static Application Security Testing (SAST), Software Composition Analysis (SCA), and Secure Code Review.
### Static Application Security Testing (SAST)
SAST is a method of testing software code for security vulnerabilities during the development phase. It involves analyzing the source code of a program without executing it, looking for vulnerabilities such as injection attacks, cross-site scripting (XSS), and other common security issues. SAST tools can be integrated into the software development process to provide ongoing feedback and alerts about potential vulnerabilities as the code is being written.
### Software Composition Analysis (SCA)
SCA is a method of analyzing the third-party components and libraries that are used in a software application. It helps to identify any vulnerabilities or security risks that may be present in these components, and can alert developers to the need to update or replace them. SCA can be performed manually or with the use of automated tools.
### Secure Code Reviews
Secure Code Review is a process of reviewing software code with the goal of identifying and addressing potential security vulnerabilities. It is typically performed by a team of security experts who are familiar with common coding practices and security best practices. Secure Code Review can be done manually or with the use of automated tools, and may involve a combination of SAST and SCA techniques.
In summary, secure coding is a crucial practice that helps protect software and its users from security vulnerabilities and attacks. By following best practices and keeping software up to date, developers can help ensure that their software is as secure as possible.
### Resources
- [Secure Coding Best Practices | OWASP Top 10 Proactive Control](https://www.youtube.com/watch?v=8m1N2t-WANc)
- [Secure coding practices every developer should know](https://snyk.io/learn/secure-coding-practices/)
- [10 Secure Coding Practices You Can Implement Now](https://codesigningstore.com/secure-coding-practices-to-implement)
- [Secure Coding Guidelines And Best Practices For Developers](https://www.softwaretestinghelp.com/guidelines-for-secure-coding/)
In the next part [Day 8](day08.md), we will discuss Static Application Security Testing (SAST) in more detail.

# Day 8: SAST Overview
Static Application Security Testing (SAST) is a method of evaluating the security of an application by analyzing the source code of the application without executing the code. SAST is also known as white-box testing as it involves testing the internal structure and workings of an application.
SAST is performed early in the software development lifecycle (SDLC) as it allows developers to identify and fix vulnerabilities before the application is deployed. This helps prevent security breaches and minimizes the risk of costly security incidents.
One of the primary benefits of SAST is that it can identify vulnerabilities that may not be detected by other testing methods such as dynamic testing or manual testing. This is because SAST analyzes the entire codebase and can identify vulnerabilities that may not be detectable by other testing methods.
There are several types of vulnerabilities that SAST can identify, including:
- **Input validation vulnerabilities**: These vulnerabilities occur when an application does not adequately validate user input, allowing attackers to input malicious code or data that can compromise the security of the application.
- **Cross-site scripting (XSS) vulnerabilities**: These vulnerabilities allow attackers to inject malicious scripts into web applications, allowing them to steal sensitive information or manipulate the application for their own gain.
- **Injection vulnerabilities**: These vulnerabilities allow attackers to inject malicious code or data into the application, allowing them to gain unauthorized access to sensitive information or execute unauthorized actions.
- **Unsafe functions and libraries**: These vulnerabilities occur when an application uses unsafe functions or libraries that can be exploited by attackers.
- **Security misconfigurations**: These vulnerabilities occur when an application is not properly configured, allowing attackers to gain access to sensitive information or execute unauthorized actions.
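As a small illustration of the first item, here is a hypothetical Python sketch of allowlist-based input validation: rather than trying to blocklist every dangerous character, it rejects anything that does not match an expected pattern (the username rule below is invented for illustration):

```python
import re

# Hypothetical allowlist: usernames may only contain letters, digits,
# and underscores, and must be 3-30 characters long.
USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,30}")

def is_valid_username(value: str) -> bool:
    """Accept only values that fully match the allowlist pattern."""
    return bool(USERNAME_RE.fullmatch(value))
```

An injection payload such as `<script>alert(1)</script>` simply fails the allowlist, so it never reaches the rest of the application.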
### SAST Tools (with free tier plan)
- **[SonarCloud](https://www.sonarsource.com/products/sonarcloud/)**: SonarCloud is a cloud-based code analysis service designed to detect code quality issues in 25+ different programming languages, continuously ensuring the maintainability, reliability and security of your code.
- **[Snyk](https://snyk.io/)**: Snyk is a platform allowing you to scan, prioritize, and fix security vulnerabilities in your own code, open source dependencies, container images, and Infrastructure as Code (IaC) configurations.
- **[Semgrep](https://semgrep.dev/)**: Semgrep is a fast, open source, static analysis engine for finding bugs, detecting dependency vulnerabilities, and enforcing code standards.
## How Does SAST Work?
SAST tools typically use a variety of techniques to analyze the source code, including pattern matching, rule-based analysis, and data flow analysis.
Pattern matching involves looking for specific patterns in the code that may indicate a vulnerability, such as the use of a known vulnerable library or the execution of user input without proper sanitization.
Rule-based analysis involves the use of a set of predefined rules to identify potential vulnerabilities, such as the use of weak cryptography or the lack of input validation.
Data flow analysis involves tracking the flow of data through the application and identifying potential vulnerabilities that may arise as a result, such as the handling of sensitive data in an insecure manner.
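As a rough illustration of the pattern-matching technique, a toy scanner might match source lines against a rule set of known-dangerous constructs. Real SAST engines are far more sophisticated (they parse code into ASTs and build data-flow graphs rather than using line-level regexes), and the rules below are invented for illustration:

```python
import re

# Toy rule set: regex pattern -> finding description.
RULES = {
    re.compile(r"\beval\s*\("): "Use of eval() on potentially untrusted input",
    re.compile(r"verify\s*=\s*False"): "TLS certificate verification disabled",
    re.compile(r"SELECT .* \+ "): "Possible SQL built by string concatenation",
}

def scan(source: str):
    """Return (line_number, finding) pairs for every rule match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if pattern.search(line):
                findings.append((lineno, message))
    return findings
```

Even this toy version shows why false positives happen: a regex cannot tell whether the matched line is reachable with attacker-controlled input, which is exactly what data flow analysis adds.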
## Considerations While Using SAST Tools
1. It is important to ensure that the tool is properly configured and that it is being used in a way that is consistent with best practices. This may include setting the tool's sensitivity level to ensure that it is properly identifying vulnerabilities, as well as configuring the tool to ignore certain types of vulnerabilities that are known to be benign.
2. SAST tools are not a replacement for manual code review. While these tools can identify many potential vulnerabilities, they may not be able to identify all of them, and it is important for developers to manually review the code to ensure that it is secure.
3. SAST is just one aspect of a comprehensive application security program. While it can be an important tool for identifying potential vulnerabilities, it is not a replacement for other security measures, such as secure coding practices, testing in the production environment, and ongoing monitoring and maintenance.
### Challenges associated with SAST
- **False positives**: Automated SAST tools can sometimes identify potential vulnerabilities that are not actually vulnerabilities. This can lead to a large number of false positives that need to be manually reviewed, increasing the time and cost of the testing process.
- **Limited coverage**: SAST can only identify vulnerabilities in the source code that is analyzed. If an application uses external libraries or APIs, these may not be covered by the SAST process.
- **Code complexity**: SAST can be more challenging for larger codebases or codebases that are written in languages that are difficult to analyze.
- **Limited testing**: SAST does not execute the code and therefore cannot identify vulnerabilities that may only occur when the code is executed.
Despite these challenges, SAST is a valuable method of evaluating the security of an application and can help organizations prevent security breaches and minimize the risk of costly security incidents. By identifying and fixing vulnerabilities early in the SDLC, organizations can build more secure applications and improve the overall security of their systems.
### Resources
- [SAST- Static Analysis with lab by Practical DevSecOps](https://www.youtube.com/watch?v=h37zp5g5tO4)
- [SAST: All About Static Application Security Testing](https://www.mend.io/resources/blog/sast-static-application-security-testing/)
- [SAST Tools : 15 Top Free and Paid Tools](https://www.appsecsanta.com/sast-tools)
In the next part [Day 9](day09.md), we will discuss SonarCloud and integrate it with different CI/CD tools.

# Day 9: SAST Implementation with SonarCloud
SonarCloud is a cloud-based platform that provides static code analysis to help developers find and fix code quality issues in their projects. It is designed to work with a variety of programming languages and tools, including Java, C#, JavaScript, and more.
SonarCloud offers a range of features to help developers improve the quality of their code, including:
- **Static code analysis**: SonarCloud analyzes the source code of a project and checks for issues such as coding style violations, potential bugs, security vulnerabilities, and other problems. It provides developers with a detailed report of the issues it finds, along with suggestions for how to fix them.
- **Code review**: SonarCloud integrates with code review tools like GitHub pull requests, allowing developers to receive feedback on their code from their peers before it is merged into the main branch. This helps to catch issues early on in the development process, reducing the risk of bugs and other issues making it into production.
- **Continuous integration**: SonarCloud can be integrated into a continuous integration (CI) pipeline, allowing it to automatically run static code analysis on every code commit. This helps developers catch issues early and fix them quickly, improving the overall quality of their codebase.
- **Collaboration**: SonarCloud includes tools for team collaboration, such as the ability to assign issues to specific team members and track the progress of code review and issue resolution.
- **Customization**: SonarCloud allows developers to customize the rules and configurations used for static code analysis, so they can tailor the analysis to fit the specific needs and coding standards of their team.
Overall, SonarCloud is a valuable tool for developers looking to improve the quality of their code and reduce the risk of issues making it into production. It helps teams collaborate and catch problems early on in the development process, leading to faster, more efficient development and fewer bugs in the final product.
Read more about SonarCloud [here](https://docs.sonarcloud.io/)
### Integrate SonarCloud with GitHub Actions
- Sign up for a [SonarCloud](https://sonarcloud.io/) account with your GitHub Account.
- From the dashboard, click on “Import an organization from GitHub”
![](images/day09-1.png)
- Authorise and install the SonarCloud app to access your GitHub account.
![](images/day09-2.png)
- Select the repository (free tier supports only public repositories) you want to analyze and click "Install"
![](images/day09-3.png)
- In SonarCloud you can now create an organisation.
![](images/day09-4.png)
![](images/day09-5.png)
- Now click on “Analyze a new Project”
![](images/day09-6.png)
- Click on setup to add the Project.
![](images/day09-7.png)
- Now on the SonarCloud dashboard you can see the project.
![](images/day09-8.png)
- To set up GitHub Actions, click on the project, then on **Information** > **Last analysis method**
![](images/day09-9.png)
- Click on **GitHub Actions**
![](images/day09-10.png)
- This will show some steps to integrate SonarCloud with GitHub Actions. At the top you will see SONAR_TOKEN; we will add that as a GitHub secret later.
![](images/day09-11.png)
- Next, you will see the YAML file for the GitHub workflow
![](images/day09-12.png)
- You will also see a configuration file that we will have to add to the source code repo
![](images/day09-13.png)
![](images/day09-14.png)
- At the bottom of the page, disable Automatic Analysis
![](images/day09-15.png)
- Now go to the source code repo and add the following `sonar-project.properties` configuration file in the root directory.
```properties
sonar.projectKey=prateekjaindev_nodejs-todo-app-demo
sonar.organization=prateekjaindev
# This is the name and version displayed in the SonarCloud UI.
#sonar.projectName=nodejs-todo-app-demo
#sonar.projectVersion=1.0
# Path is relative to the sonar-project.properties file. Replace "\" by "/" on Windows.
#sonar.sources=.
# Encoding of the source code. Default is default system encoding
#sonar.sourceEncoding=UTF-8
```
- Update or add the GitHub actions workflow with the following job in the `.github/workflows` directory
```yaml
name: SonarScan
on:
push:
branches:
- main
pull_request:
types: [opened, synchronize, reopened]
jobs:
sonarcloud:
name: SonarCloud
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
fetch-depth: 0 # Shallow clones should be disabled for a better relevancy of analysis
- name: SonarCloud Scan
uses: SonarSource/sonarcloud-github-action@master
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information, if any
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
```
- Now go to GitHub and add a repository secret named SONAR_TOKEN.
![](images/day09-16.png)
- As soon as you commit the changes, the workflow will trigger.
![](images/day09-17.png)
- Now after every commit, you can check the updated reports on the SonarCloud dashboard.
![](images/day09-18.png)
### Quality Gates
A quality gate is an indicator that tells you whether your code meets the minimum level of quality required for your project. It consists of a set of conditions that are applied to the results of each analysis. If the analysis results meet or exceed the quality gate conditions then it shows a **Passed** status otherwise, it shows a **Failed** status.
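The idea can be sketched as a set of threshold conditions evaluated against the metrics of an analysis. The metric names and thresholds below are illustrative, not SonarCloud's exact keys:

```python
# Hypothetical quality gate: metric -> (rule kind, threshold).
GATE = {
    "new_coverage": ("min", 80.0),      # % coverage on new code
    "new_bugs": ("max", 0),             # no new bugs allowed
    "new_vulnerabilities": ("max", 0),  # no new vulnerabilities allowed
}

def evaluate_gate(metrics: dict) -> str:
    """Return 'Passed' only if every condition holds."""
    for metric, (kind, threshold) in GATE.items():
        value = metrics.get(metric)
        if value is None:
            return "Failed"  # a missing metric counts as a failure
        if kind == "min" and value < threshold:
            return "Failed"
        if kind == "max" and value > threshold:
            return "Failed"
    return "Passed"
```

A single failing condition fails the whole gate, which is why teams often wire the gate status into the CI pipeline as a merge blocker.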
SonarCloud comes with a default quality gate, “Sonar way”. You can edit it or create a new one in the Organisation Settings.
![](images/day09-19.png)
### Resources
- [SonarCloud Documentation](https://docs.sonarcloud.io/)
- [How to create Quality gates on SonarQube](https://www.youtube.com/watch?v=8_Xt9vchlpY)
- [Source Code of the repo I used for SAST implementation](https://github.com/prateekjaindev/nodejs-todo-app-demo)
In the next part [Day 10](day10.md), we will discuss Software Composition Analysis (SCA).

# Day 10: Software Composition Analysis Overview
Software composition analysis (SCA) is a process that helps developers identify the open source libraries, frameworks, and components that are included in their software projects. SCA tools scan the codebase of a software project and provide a report that lists all the open source libraries, frameworks, and components that are being used. This report includes information about the licenses and vulnerabilities of these open source libraries and components, as well as any security risks that may be associated with them.
There are several benefits to using SCA tools in software development projects. These benefits include:
1. **Improved security**: By identifying the open source libraries and components that are being used in a project, developers can assess the security risks associated with these libraries and components. This allows them to take appropriate measures to fix any vulnerabilities and protect their software from potential attacks.
2. **Enhanced compliance**: SCA tools help developers ensure that they are using open source libraries and components that are compliant with the appropriate licenses. This is particularly important for companies that have strict compliance policies and need to ensure that they are not infringing on any third-party intellectual property rights.
3. **Improved efficiency**: SCA tools can help developers save time and effort by automating the process of identifying and tracking open source libraries and components. This allows developers to focus on more important tasks, such as building and testing their software.
4. **Reduced risk**: By using SCA tools, developers can identify and fix vulnerabilities in open source libraries and components before they become a problem. This helps to reduce the risk of security breaches and other issues that could damage the reputation of the software and the company.
5. **Enhanced quality**: By identifying and addressing any vulnerabilities in open source libraries and components, developers can improve the overall quality of their software. This leads to a better user experience and a higher level of customer satisfaction.
In addition to these benefits, SCA tools can also help developers to identify any potential legal issues that may arise from the use of open source libraries and components. For example, if a developer is using a library that is licensed under a copyleft license, they may be required to share any changes they make to the library with the community.
Despite these benefits, there are several challenges associated with SCA:
1. **Scale**: As the use of open source software has become more widespread, the number of components that need to be analyzed has grown exponentially. This can make it difficult for organizations to keep track of all the components they are using and to identify any potential issues.
2. **Complexity**: Many software applications are made up of a large number of components, some of which may have been added years ago and are no longer actively maintained. This can make it difficult to understand the full scope of an application and to identify any potential issues.
3. **False positives**: SCA tools can generate a large number of alerts, some of which may be false positives. This can be frustrating for developers who have to review and dismiss these alerts, and it can also lead to a lack of trust in the SCA tool itself.
4. **Lack of standardization**: There is no standard way to conduct SCA, and different tools and approaches can produce different results. This can make it difficult for organizations to compare the results of different SCA tools and to determine which one is best for their needs.
Overall, SCA tools provide a number of benefits to software developers and can help to improve the security, compliance, efficiency, risk management, and quality of software projects. By using these tools, developers can ensure that they are using open source libraries and components that are compliant with the appropriate licenses, free of vulnerabilities, and of high quality. This helps to protect the reputation of their software and the company, and leads to a better user experience.
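At its core, SCA is a lookup of your dependency list against a vulnerability database. A deliberately simplified Python sketch of that lookup (the package names and advisory IDs below are invented for illustration; real tools query databases such as the NVD):

```python
# Invented advisory data, keyed by (package name, version).
ADVISORIES = {
    ("leftpad", "1.0.0"): ["FAKE-2023-0001: prototype pollution"],
    ("oldssl", "0.9.8"): ["FAKE-2023-0002: weak default ciphers"],
}

def audit(dependencies: dict) -> dict:
    """Map each vulnerable dependency name to its list of advisories."""
    report = {}
    for name, version in dependencies.items():
        hits = ADVISORIES.get((name, version))
        if hits:
            report[name] = hits
    return report
```

The hard parts that real SCA tools solve are everything around this lookup: discovering transitive dependencies, matching fuzzy package identifiers to advisory entries, and keeping the advisory data current.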
### SCA Tools (Opensource or Free Tier)
- **[OWASP Dependency Check](https://owasp.org/www-project-dependency-check/)**: Dependency-Check is a Software Composition Analysis (SCA) tool that attempts to detect publicly disclosed vulnerabilities contained within a project's dependencies. It does this by determining if there is a Common Platform Enumeration (CPE) identifier for a given dependency. If found, it will generate a report linking to the associated CVE entries.
- **[Snyk](https://snyk.io/product/open-source-security-management/)**: Snyk Open Source provides a developer-first SCA solution, helping developers find, prioritize, and fix security vulnerabilities and license issues in open source dependencies.
### Resources
- [Software Composition Analysis (SCA): What You Should Know](https://www.aquasec.com/cloud-native-academy/supply-chain-security/software-composition-analysis-sca/)
- [Software Composition Analysis 101: Knowing whats inside your apps - Magno Logan](https://www.youtube.com/watch?v=qyVDHH4T1oo)
In the next part [Day 11](day11.md), we will discuss Dependency Check and integrate it with GitHub Actions.

# Day 11: SCA Implementation with OWASP Dependency Check
### OWASP Dependency Check
OWASP Dependency Check is an open-source tool that checks project dependencies for known vulnerabilities. It can be used to identify dependencies with known vulnerabilities and determine if any of those vulnerabilities are exposed in the application.
The tool works by scanning the dependencies of a project and checking them against a database of known vulnerabilities. If a vulnerability is found, the tool will report the vulnerability along with the associated CVE (Common Vulnerabilities and Exposures) identifier, a standardized identifier for publicly known cybersecurity vulnerabilities.
To use OWASP Dependency Check, you will need to include it as a part of your build process. There are integrations available for a variety of build tools, including Maven, Gradle, and Ant. You can also use the command-line interface to scan your dependencies.
OWASP Dependency Check is particularly useful for identifying vulnerabilities in third-party libraries and frameworks that your application depends on. These types of dependencies can introduce vulnerabilities into your application if they are not properly managed. By regularly scanning your dependencies, you can ensure that you are aware of any vulnerabilities and take steps to address them.
It is important to note that OWASP Dependency Check is not a replacement for secure coding practices and should be used in conjunction with other security measures. It is also important to regularly update dependencies to ensure that you are using the most secure version available.
### Integrate Dependency Check with GitHub Actions
To use Dependency Check with GitHub Actions, you can create a workflow file in your repository's `.github/workflows` directory. Here is an example workflow that runs Dependency Check on every push to the `main` branch:
```yaml
name: Dependency-Check
on:
push:
branches:
- main
pull_request:
types: [opened, synchronize, reopened]
jobs:
dependency-check:
name: Dependency-Check
runs-on: ubuntu-latest
steps:
- name: Download OWASP Dependency Check
run: |
VERSION=$(curl -s https://jeremylong.github.io/DependencyCheck/current.txt)
curl -sL "https://github.com/jeremylong/DependencyCheck/releases/download/v$VERSION/dependency-check-$VERSION-release.zip" --output dependency-check.zip
unzip dependency-check.zip
- name: Run Dependency Check
run: |
./dependency-check/bin/dependency-check.sh --out report.html --scan .
rm -rf dependency-check*
- name: Upload Artifacts
uses: actions/upload-artifact@v2
with:
name: artifacts
path: report.html
```
This workflow does the following:
1. Defines a workflow called `Dependency-Check` that runs on every push to the `main` branch and on pull requests.
2. Specifies that the workflow should run on the `ubuntu-latest` runner.
3. Downloads and installs Dependency Check.
4. Runs Dependency Check on the current directory (`.`) and generates a report in the `report.html` file.
5. Removes the downloaded Dependency Check files.
6. Uploads the report file as an artifact.
You can download the report from the workflow artifacts and open it in the browser.
![](images/day11-1.png)
You can customize this workflow to fit your needs. For example, you can specify different branches to run the workflow on, or specify different dependencies to check. You can also configure Dependency Check to generate a report in a specific format (e.g., HTML, XML, JSON) and save it to the repository.
### Resources
- [Dependency Check Documentation](https://jeremylong.github.io/DependencyCheck/)
- [Source Code of the repo I used for SCA implementation](https://github.com/prateekjaindev/nodejs-todo-app-demo)
In the next part [Day 12](day12.md), we will discuss Secure Coding Review.
