Merge branch 'main' into patch-1
.github/workflows/add-contributors.yml
@@ -1,24 +0,0 @@

```yaml
name: Add contributors
on:
  schedule:
    - cron: '0 12 * * *'
  # push:
  #   branches:
  #     - master

jobs:
  add-contributors:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: BobAnkh/add-contributors@master
        with:
          REPO_NAME: 'MichaelCade/90DaysOfDevOps'
          CONTRIBUTOR: '### Other Contributors'
          COLUMN_PER_ROW: '6'
          ACCESS_TOKEN: ${{secrets.GITHUB_TOKEN}}
          IMG_WIDTH: '100'
          FONT_SIZE: '14'
          PATH: '/Contributors.md'
          COMMIT_MESSAGE: 'docs(Contributors): update contributors'
          AVATAR_SHAPE: 'round'
```
@@ -84,7 +84,7 @@ If we create an additional file called `samplecode.ps1`, the status would become



Add our new file using the `git add sample code.ps1` command and then we can run `git status` again and see our file is ready to be committed.
Add our new file using the `git add samplecode.ps1` command and then we can run `git status` again and see our file is ready to be committed.


@@ -44,7 +44,7 @@ Now we can choose additional components that we would like to also install but a



We can then choose which SSH Executable we wish to use. IN leave this as the bundled OpenSSH that you might have seen in the Linux section.
We can then choose which SSH Executable we wish to use. I leave this as the bundled OpenSSH that you might have seen in the Linux section.


@@ -1,165 +1,165 @@

## Microsoft Azure Security Models

Following on from the Microsoft Azure Overview, we are going to start with Azure Security and see where this can help in our day-to-day. For the most part, I have found the built-in roles to be sufficient, but it is worth knowing that we can create and work with many different areas of authentication and configuration. I have found Microsoft Azure to be quite advanced with its Active Directory background compared to other public clouds.

This is one area in which Microsoft Azure seemingly works differently from other public cloud providers: in Azure there is ALWAYS Azure AD.

### Directory Services

- Azure Active Directory hosts the security principals used by Microsoft Azure and other Microsoft cloud services.
- Authentication is accomplished through protocols such as SAML, WS-Federation, OpenID Connect and OAuth2.
- Queries are accomplished through a REST API called the Microsoft Graph API.
- Tenants have a tenant.onmicrosoft.com default name but can also have custom domain names.
- Subscriptions are associated with an Azure Active Directory tenant.

If we compare this with AWS, the equivalent offering would be AWS IAM (Identity & Access Management), although the two are still very different.

Azure AD Connect provides the ability to replicate accounts from AD to Azure AD. This can also include groups and sometimes objects. Replication can be granular and filtered, and multiple forests and domains are supported.

It is possible to create cloud accounts in Microsoft Azure Active Directory (AD), but most organisations already have accounts for their users in their own on-premises Active Directory.

Azure AD Connect also allows you to see not only Windows AD servers but also other Azure ADs, Google and others. This also provides the ability to collaborate with external people and organisations; this is called Azure B2B.

Authentication between Active Directory Domain Services and Microsoft Azure Active Directory is possible by synchronising identities along with a password hash.



The passing of the password hash is optional; if it is not used, pass-through authentication is required.

There is a video linked below that goes into detail about pass-through authentication.
[User sign-in with Azure Active Directory Pass-through Authentication](https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-pta)



### Federation

It's fair to say that if you are using Microsoft 365, Microsoft Dynamics and on-premises Active Directory, it is quite easy to understand and integrate into Azure AD for federation. However, you might be using other services outside of the Microsoft ecosystem.

Azure AD can act as a federation broker to these other non-Microsoft apps and other directory services.

This will be seen in the Azure Portal as Enterprise Applications, of which there are a large number of options.



If you scroll down on the enterprise application page you are going to see a long list of featured applications.



This option also allows for "bring your own" integration, an application you are developing or a non-gallery application.

I have not looked into this before, but I can see that this is quite a feature set compared to the other cloud providers and their capabilities.

### Role-Based Access Control

We have already covered on [Day 29](day29.md) the scopes we are going to use here; we can set our role-based access control according to one of these areas:

- Subscriptions
- Management Group
- Resource Group
- Resources

There are many built-in roles in Microsoft Azure, but roles can be split into three general categories. Those three are:

- Owner
- Contributor
- Reader

Owner and Contributor are very similar in their boundaries of scope; however, the Owner can also change permissions.

Other roles are specific to certain types of Azure resources, and custom roles can also be created.

We should focus on assigning permissions to groups rather than individual users.

Permissions are inherited.

If we go back and look at the "90DaysOfDevOps" Resource group we created and check the Access Control (IAM) within, you can see we have a list of contributors and a custom User Access Administrator, and we do have a list of owners (but I cannot show this).



We can also check whether the roles we have assigned here are BuiltInRoles and which category they fall under.



We can also use the Check access tab if we want to check an account against this resource group, to make sure that the account we wish to grant that access to has the correct permissions, or maybe to check whether a user has too much access. A scripted equivalent follows below.


### Microsoft Defender for Cloud

- Microsoft Defender for Cloud (formerly known as Azure Security Center) provides insight into the security of the entire Azure environment.

- A single dashboard for visibility into the overall security health of all Azure and non-Azure resources (via Azure Arc) and security hardening guidance.

- The free tier includes continuous assessment and security recommendations.

- Paid plans cover protected resource types (e.g. Servers, AppService, SQL, Storage, Containers, KeyVault).

I have switched to another subscription to view the Azure Security Center, and you can see here, based on very few resources, that I have some recommendations in one place.



### Azure Policy

- Azure Policy is an Azure-native service that helps to enforce organizational standards and assess compliance at scale.

- Integrated into Microsoft Defender for Cloud. Azure Policy audits non-compliant resources and applies remediation.

- Commonly used for governing resource consistency, regulatory compliance, security, cost, and management standards.

- Uses JSON format to store the evaluation logic and determine whether a resource is compliant or not, along with any actions to take for non-compliance (e.g. Audit, AuditIfNotExists, Deny, Modify, DeployIfNotExists). See the sketch after this list.

- Free to use. The exception is Azure Arc-connected resources, which are charged per server/month for Azure Policy guest configuration usage.

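
To make that JSON evaluation logic a little more concrete, here is a minimal sketch of assigning one of the built-in policy definitions at resource group scope with the Az PowerShell module. The definition display name, scope and parameter value are illustrative, and the exact property path on the definition object can vary between Az module versions.

```powershell
# Sketch: assign the built-in "Allowed locations" policy definition
# to the 90DaysOfDevOps resource group (names are placeholders).
$definition = Get-AzPolicyDefinition |
    Where-Object { $_.Properties.DisplayName -eq "Allowed locations" }

$rg = Get-AzResourceGroup -Name "90DaysOfDevOps"

New-AzPolicyAssignment -Name "allowed-locations" `
                       -PolicyDefinition $definition `
                       -Scope $rg.ResourceId `
                       -PolicyParameterObject @{ listOfAllowedLocations = @("uksouth") }
```
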
### Hands-On

I have gone out and purchased www.90DaysOfDevOps.com and I would like to add this domain to my Azure Active Directory portal: [Add your custom domain name using the Azure Active Directory Portal](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/add-custom-domain)



With that in place, we can create a new user on our new Active Directory domain.



Now we want to gather all of our new 90DaysOfDevOps users in one group. We can create a group as per the below; notice that I am using "Dynamic User", which means Azure AD will query user accounts and add them dynamically, versus "Assigned", where you manually add each user to your group.



There are lots of options when it comes to creating your query; I plan to simply find the principal name and make sure that the name contains @90DaysOfDevOps.com.


Now, because we have already created our user account for michael.cade@90DaysOfDevOps.com, we can validate that the rules are working. For comparison, I have also added another account associated with another domain here, and you can see that because of this rule our user will not land in this group.



I have since added a new user1@90DaysOfDevOps.com and if we go and check the group we can see our members.



If we have this requirement x100 then we are not going to want to do this all in the console; we are going to want to take advantage of either the bulk options to create, invite, and delete users, or to look into PowerShell to achieve this automated approach at scale. A sketch of the latter follows below.

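
As a rough illustration of that PowerShell route, here is a minimal sketch using the Az module; `users.csv` is a hypothetical file with DisplayName, MailNickname and UserPrincipalName columns, and parameters can differ slightly between Az module versions.

```powershell
# Sketch: bulk-create Azure AD users from a hypothetical users.csv file.
Connect-AzAccount

$password = Read-Host -AsSecureString -Prompt "Initial password for the new users"

Import-Csv -Path .\users.csv | ForEach-Object {
    New-AzADUser -DisplayName $_.DisplayName `
                 -MailNickname $_.MailNickname `
                 -UserPrincipalName $_.UserPrincipalName `
                 -Password $password
}
```
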
Now we can go to our Resource Group and specify that on the 90DaysOfDevOps resource group we want the owner to be the group we just created.



We can equally go in here and use deny assignments to block access to our resource group as well.

Now if we log in to the Azure Portal with our new user account, you can see that we only have access to our 90DaysOfDevOps resource group and not the others seen in previous pictures, because we do not have the access.



The above is great if this is a user that has access to resources inside of your Azure portal. Not every user needs to be aware of the portal, but to check access we can use the [Apps Portal](https://myapps.microsoft.com/), a single sign-on portal for us to test.



You can customise this portal with your branding, and this might be something we come back to later on.

## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

See you on [Day 31](day31.md)
@@ -1,104 +1,93 @@

## Microsoft Azure Compute Models

Following on from covering the basics around security models within Microsoft Azure yesterday, today we are going to look into the various compute services available to us in Azure.

### Service Availability Options

This section is close to my heart given my role in Data Management. As with on-premises, it is critical to ensure the availability of your services.

- High Availability (Protection within a region)
- Disaster Recovery (Protection between regions)
- Backup (Recovery from a point in time)

Microsoft deploys multiple regions within a geopolitical boundary. Azure has two concepts for service availability: sets and zones.

- **Availability Sets** - Provide resiliency within a data centre.
- **Availability Zones** - Provide resiliency between data centres within a region.

### Virtual Machines

Most likely the starting point for anyone in the public cloud. A scripted example of creating one follows the list below.

- Provides a VM from a variety of series and sizes with different capabilities (sometimes an overwhelming choice). [Sizes for Virtual Machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes)
- There are many different options and focuses for VMs, from high-performance and low-latency to high-memory VMs.
- We also have a burstable VM type, which can be found under the B-Series. This is great for workloads that have a low CPU requirement for the most part but require the occasional, maybe once-a-month, performance spike.
- Virtual Machines are placed on a virtual network that can provide connectivity to any network.
- Windows and Linux guest OS support.
- There are also Azure-tuned kernels when it comes to specific Linux distributions. [Azure Tuned Kernels](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros#azure-tuned-kernels)
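
As a rough sketch of what standing up one of these VMs looks like outside of the portal, here is a minimal Az PowerShell example; the resource names, location, image alias and size are illustrative assumptions.

```powershell
# Sketch: create a small burstable (B-Series) Linux VM using the
# simplified parameter set; you are prompted for admin credentials.
New-AzResourceGroup -Name "90DaysOfDevOps" -Location "uksouth"

New-AzVM -ResourceGroupName "90DaysOfDevOps" `
         -Name "vm90days" `
         -Location "uksouth" `
         -Image "UbuntuLTS" `
         -Size "Standard_B1s" `
         -Credential (Get-Credential)
```
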
### Templating

I have mentioned before that everything behind or underneath Microsoft Azure is JSON.

There are several different management portals and consoles we can use to create our resources, but the preferred route is going to be via JSON templates.

Idempotent deployments in incremental or complete mode - i.e. repeatable desired state.

There is a large selection of templates, and templates can also be exported from deployed resource definitions. I like to compare this templating feature to something like AWS CloudFormation, or to Terraform for a multi-cloud option. We will cover Terraform more in the Infrastructure as Code section.
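
To make those incremental and complete modes concrete, here is a minimal sketch of deploying a JSON template with the Az PowerShell module; `template.json` is a hypothetical template file.

```powershell
# Sketch: deploy a JSON (ARM) template to a resource group.
# Incremental mode leaves resources not in the template untouched;
# Complete mode removes them, giving a repeatable desired state.
New-AzResourceGroupDeployment -ResourceGroupName "90DaysOfDevOps" `
                              -TemplateFile ".\template.json" `
                              -Mode Incremental
```
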
### Scaling

Automatic scaling is a large feature of the public cloud: being able to spin down resources you are not using or spin them up when you need them.

In Azure, we have something called Virtual Machine Scale Sets (VMSS) for IaaS. This enables the automatic creation and scaling from a gold-standard image based on schedules and metrics.

This is ideal for update windows, so that you can update your images and roll those out with the least impact.

Other services, such as Azure App Services, have auto-scaling built in.

### Containers

We have not yet covered containers as a use case, or what and how they can and should be used in our DevOps learning journey, but we need to mention that Azure has some specific container-focused services worth noting:

- **Azure Kubernetes Service (AKS)** - Provides a managed Kubernetes solution; there is no need to worry about the control plane or the management of the underpinning cluster. More on Kubernetes later on.
- **Azure Container Instances** - Containers as a service with per-second billing. Run an image and integrate it with your virtual network, with no need for container orchestration.
- **Service Fabric** - Has many capabilities, but includes orchestration for container instances.

Azure also has the Container Registry, which provides a private registry for Docker images, Helm charts and OCI artifacts. More on this when we reach the containers section.

We should also mention that a lot of these services may indeed also leverage containers under the hood, but this is abstracted away from your requirement to manage them.

These container-focused services have similar counterparts in all the other public clouds.

### Application Services

- Azure Application Services offers an application-hosting solution that provides an easy method to establish services (see the sketch after this list).
- Automatic deployment and scaling.
- Supports Windows and Linux-based solutions.
- Services run in an App Service Plan, which has a type and a size.
- A number of different services, including web apps, API apps and mobile apps.
- Support for deployment slots for reliable testing and promotion.
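
A minimal sketch of standing up one of these services with the Az PowerShell module, assuming hypothetical plan and app names:

```powershell
# Sketch: create an App Service Plan and a web app inside it.
New-AzAppServicePlan -ResourceGroupName "90DaysOfDevOps" `
                     -Name "90days-plan" `
                     -Location "uksouth" `
                     -Tier "Basic"

New-AzWebApp -ResourceGroupName "90DaysOfDevOps" `
             -Name "90days-webapp" `
             -Location "uksouth" `
             -AppServicePlan "90days-plan"
```
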
### Serverless Computing

The goal with serverless is that we only pay for the runtime of the function and do not have to have virtual machines or PaaS applications running all the time. We simply run our function when we need it and then it goes away.

**Azure Functions** - Provides serverless code. If we remember back to our first look into the public cloud, we will remember the abstraction layer of management; with serverless functions you are only going to be managing the code.

Event-driven with massive scale. Provides input and output bindings to many Azure and third-party services.

Supports many different programming languages (C#, NodeJS, Python, PHP, batch, bash, Golang and Rust, or any executable).

**Azure Event Grid** enables logic to be triggered from services and events.

**Azure Logic Apps** provides graphical-based workflows and integration.

We can also look at Azure Batch, which can run large-scale jobs on both Windows and Linux nodes with consistent management and scheduling.

## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

See you on [Day 32](day32.md)
@@ -1,181 +1,179 @@

## Microsoft Azure Storage Models

### Storage Services

- Azure storage services are provided by storage accounts.
- Storage accounts are primarily accessed via a REST API.
- A storage account must have a unique name that is part of a DNS name `<Storage Account name>.core.windows.net`
- Various replication and encryption options.
- Sits within a resource group.

We can create our storage account by simply searching for Storage Account in the search bar at the top of the Azure Portal.



We can then run through the steps to create our storage account, remembering that this name needs to be unique and all lower case, with no spaces, but it can include numbers.



We can also choose the level of redundancy we would like for our storage account and anything we store here. The further down the list, the more expensive the option, but also the wider the spread of your data.

Even the default redundancy option gives us 3 copies of our data.

[Azure Storage Redundancy](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy)

A summary of the above link is below:

- **Locally-redundant storage** - replicates your data three times within a single data centre in the primary region.
- **Geo-redundant storage** - copies your data synchronously three times within a single physical location in the primary region using LRS, then copies it asynchronously to a single physical location in the secondary region.
- **Zone-redundant storage** - replicates your Azure Storage data synchronously across three Azure availability zones in the primary region.
- **Geo-zone-redundant storage** - combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three Azure availability zones in the primary region and is also replicated to a second geographic region for protection from regional disasters.



Just moving back up to the performance options, we have Standard and Premium to choose from. We have chosen Standard in our walkthrough, but Premium gives you some specific options.



Then in the drop-down, you can see we have these three options to choose from.



There are lots more advanced options available for your storage account, but for now, we do not need to get into these areas. These options are around encryption and data protection.

### Managed Disks

Storage access can be achieved in a few different ways.

Authenticated access via:

- A shared key for full control.
- Shared Access Signature for delegated, granular access.
- Azure Active Directory (where available).

Public access:

- Public access can also be granted to enable anonymous access, including via HTTP.
- An example of this could be to host basic content and files in a block blob so a browser can view and download this data.

If you are accessing your storage from another Azure service, traffic stays within Azure.

When it comes to storage performance we have two different types:

- **Standard** - Maximum number of IOPS
- **Premium** - Guaranteed number of IOPS

IOPS => Input/output operations per second.

There is also a difference between unmanaged and managed disks to consider when choosing the right storage for the task at hand.
### Virtual Machine Storage

- Virtual Machine OS disks are typically stored on persistent storage.
- Some stateless workloads do not require persistent storage, and reduced latency is a larger benefit.
- There are VMs that support ephemeral OS-managed disks that are created on the node-local storage.
- These can also be used with VM Scale Sets.

Managed Disks are durable block storage that can be used with Azure Virtual Machines. You can have Ultra Disk Storage, Premium SSD, Standard SSD, or Standard HDD. They also carry some characteristics:

- Snapshot and image support
- Simple movement between SKUs
- Better availability when combined with availability sets
- Billed based on disk size, not on consumed storage.

## Archive Storage

- **Cool Tier** - A cool tier of storage is available for block and append blobs.
  - Lower storage cost
  - Higher transaction cost
- **Archive Tier** - Archive storage is available for block blobs.
  - This is configured on a per-blob basis.
  - Cheaper cost, longer data retrieval latency.
  - Same data durability as regular Azure Storage.
  - Custom data tiering can be enabled as required.
### File Sharing

From the above creation of our storage account, we can now create file shares.



This will provide SMB2.1 and 3.0 file shares in Azure.

Usable within Azure and externally via SMB3, with port 445 open to the internet.

Provides shared file storage in Azure.

Can be mapped using standard SMB clients in addition to the REST API.

You might also notice [Azure NetApp Files](https://vzilla.co.uk/vzilla-blog/azure-netapp-files-how) (SMB and NFS).
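
Creating one of these shares can also be scripted; here is a minimal sketch with the Az PowerShell module, using placeholder names (cmdlet availability can vary with the Az.Storage version):

```powershell
# Sketch: create a 100 GiB file share in the storage account made earlier.
New-AzRmStorageShare -ResourceGroupName "90DaysOfDevOps" `
                     -StorageAccountName "90daysofdevopssa" `
                     -Name "90daysshare" `
                     -QuotaGiB 100
```
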
### Caching & Media Services

The Azure Content Delivery Network provides a cache of static web content with locations throughout the world.

Azure Media Services provides media transcoding technologies in addition to playback services.

## Microsoft Azure Database Models

Back on [Day 28](day28.md), we covered the various service options. One of these was PaaS (Platform as a Service), where you abstract a large amount of the infrastructure and operating system away and are left with control of the application or, in this case, the database models.

### Relational Databases

Azure SQL Database provides a relational database as a service based on Microsoft SQL Server.

This is SQL running the latest SQL branch, with database compatibility levels available where a specific functionality version is required.

There are a few options for how this can be configured: we can provision a single database, which provides one database in the instance, while an elastic pool enables multiple databases that share a pool of capacity and scale collectively.

These database instances can be accessed like regular SQL instances.

Additional managed offerings exist for MySQL, PostgreSQL and MariaDB.


### NoSQL Solutions

Azure Cosmos DB is a schema-agnostic NoSQL implementation.

99.99% SLA

A globally distributed database with single-digit latencies at the 99th percentile anywhere in the world, with automatic homing.

A partition key is leveraged for the partitioning/sharding/distribution of data.

Supports various data models (documents, key-value, graph, column-friendly).

Supports various APIs (DocumentDB SQL, MongoDB, Azure Table Storage and Gremlin).



Various consistency models are available, based around the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem).



### Caching

Without getting into the weeds about caching systems such as Redis, I wanted to include that Microsoft Azure has a service called Azure Cache for Redis.

Azure Cache for Redis provides an in-memory data store based on the Redis software.

- It is an implementation of the open-source Redis Cache.
- A hosted, secure Redis cache instance.
- Different tiers are available.
- The application must be updated to leverage the cache.
- Aimed at applications that have high read requirements compared to writes.
- Key-value store based.



I appreciate the last few days have been a lot of note-taking and theory on Microsoft Azure, but I wanted to cover the building blocks before we get into the hands-on aspects of how these components come together and work.

We have one more bit of theory remaining around networking before we can get some scenario-based deployments of services up and running. We also want to take a look at some of the different ways we can interact with Microsoft Azure vs just using the portal that we have been using so far.

## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

See you on [Day 33](day33.md)
@@ -1,180 +1,174 @@

## Microsoft Azure Networking Models + Azure Management

Today we are going to cover the networking models within Microsoft Azure and some of the management options for Azure. So far we have only used the Azure portal, but we have mentioned other areas that can be used to drive and create our resources within the platform.

## Azure Network Models

### Virtual Networks

- A virtual network is a construct created in Azure.
- A virtual network has one or more IP ranges assigned to it.
- Virtual networks live within a subscription within a region.
- Virtual subnets are created in the virtual network to break up the network range.
- Virtual machines are placed in virtual subnets.
- All virtual machines within a virtual network can communicate.
- 65,536 private IPs per virtual network.
- You only pay for egress traffic from a region (data leaving the region).
- IPv4 & IPv6 supported.
- IPv6 for public-facing endpoints and within virtual networks.

A minimal scripted example of creating one of these follows below.
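
The sketch assumes the resource group from earlier days and uses placeholder names and address ranges, with the Az PowerShell module:

```powershell
# Sketch: create a virtual network with a single subnet.
$subnet = New-AzVirtualNetworkSubnetConfig -Name "subnet1" -AddressPrefix "10.0.1.0/24"

New-AzVirtualNetwork -Name "90days-vnet" `
                     -ResourceGroupName "90DaysOfDevOps" `
                     -Location "uksouth" `
                     -AddressPrefix "10.0.0.0/16" `
                     -Subnet $subnet
```
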
We can liken Azure Virtual Networks to AWS VPCs. However, there are some differences to note:

- In AWS, a default VPC is created; that is not the case in Microsoft Azure, where you have to create your first virtual network to your requirements.
- All virtual machines in Azure have NAT access to the internet by default. No NAT gateways as per AWS.
- In Microsoft Azure, there is no concept of private or public subnets.
- Public IPs are a resource that can be assigned to vNICs or load balancers.
- The virtual network and subnets have their own ACLs, enabling subnet-level delegation.
- Subnets span Availability Zones, whereas in AWS you have subnets per Availability Zone.

We also have Virtual Network Peering. This enables virtual networks across tenants and regions to be connected using the Azure backbone. It is not transitive, but transitivity can be enabled via Azure Firewall in the hub virtual network. Using gateway transit allows peered virtual networks to use the connectivity of the connected network; an example of this could be [ExpressRoute](https://learn.microsoft.com/es-es/azure/expressroute/expressroute-introduction) to on-premises.
|
||||
### Access Control
- Azure utilises Network Security Groups; these are stateful.
- They enable rules to be created and then assigned to a network security group.
- Network security groups are applied to subnets or VMs.
- When applied to a subnet it is still enforced at the Virtual Machine NIC; it is not an "Edge" device.

![](Images/Day33_Cloud1.png)
- Rules are combined in a Network Security Group.
- Based on the priority, flexible configurations are possible.
- A lower priority number means a higher priority.
- Most logic is built with IP addresses, but some tags and labels can also be used.
| Description      | Priority | Source Address     | Source Port | Destination Address | Destination Port | Action |
| ---------------- | -------- | ------------------ | ----------- | ------------------- | ---------------- | ------ |
| Inbound 443      | 1005     | \*                 | \*          | \*                  | 443              | Allow  |
| ILB              | 1010     | Azure LoadBalancer | \*          | \*                  | 10000            | Allow  |
| Deny All Inbound | 4000     | \*                 | \*          | \*                  | \*               | DENY   |
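As a minimal sketch, the first rule in the table could be built with the Az PowerShell module like this (the resource group, location and NSG name here are hypothetical, not from the lab):

```
# Rule matching the "Inbound 443" row above: allow HTTPS from anywhere, priority 1005
$rule = New-AzNetworkSecurityRuleConfig -Name "Inbound443" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 1005 `
    -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 443

# Create an NSG containing that rule (hypothetical resource group and location)
New-AzNetworkSecurityGroup -ResourceGroupName "90DaysOfDevOps" `
    -Location "eastus" -Name "nsg-web" -SecurityRules $rule
```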
We also have Application Security Groups (ASGs):

- Where NSGs focus on IP address ranges, which may be difficult to maintain for growing environments,
- ASGs enable real names (monikers) for different application roles to be defined (WebServers, DB servers, WebApp1 etc.)
- The Virtual Machine NIC is made a member of one or more ASGs.

The ASGs can then be used in rules that are part of Network Security Groups to control the flow of communication, and can still use NSG features like service tags.
| Action | Name               | Source     | Destination | Port         |
| ------ | ------------------ | ---------- | ----------- | ------------ |
| Allow  | AllowInternettoWeb | Internet   | WebServers  | 443(HTTPS)   |
| Allow  | AllowWebToApp      | WebServers | AppServers  | 443(HTTPS)   |
| Allow  | AllowAppToDB       | AppServers | DbServers   | 1433 (MSSQL) |
| Deny   | DenyAllinbound     | Any        | Any         | Any          |
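A hedged sketch of how the ASG-based rules above could be created, again with the Az module and hypothetical resource group/location names:

```
# Create ASGs for two of the application roles named in the table above
$web = New-AzApplicationSecurityGroup -ResourceGroupName "90DaysOfDevOps" `
    -Name "WebServers" -Location "eastus"
$app = New-AzApplicationSecurityGroup -ResourceGroupName "90DaysOfDevOps" `
    -Name "AppServers" -Location "eastus"

# Rule equivalent to the "AllowWebToApp" row: web tier to app tier over HTTPS
$webToApp = New-AzNetworkSecurityRuleConfig -Name "AllowWebToApp" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 1010 `
    -SourceApplicationSecurityGroup $web -SourcePortRange "*" `
    -DestinationApplicationSecurityGroup $app -DestinationPortRange 443
```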
### Load Balancing

Microsoft Azure has two first-party load-balancing solutions (there are also third-party options available in the Azure Marketplace). Both can operate with externally facing or internally facing endpoints.
- Load Balancer (Layer 4) supports hash-based distribution and port-forwarding.
- App Gateway (Layer 7) supports features such as SSL offload, cookie-based session affinity and URL-based content routing.

Also with the App Gateway, you can optionally use the Web Application Firewall component.
## Azure Management Tools

We have spent most of our theory time walking through the Azure Portal. I would suggest that when it comes to following a DevOps culture and process, a lot of these tasks, especially around provisioning, will be done via an API or a command-line tool. I wanted to touch on some of those other management tools that we have available to us, as we need to know this for when we are automating the provisioning of our Azure environments.

### Azure Portal
The Microsoft Azure Portal is a web-based console that provides an alternative to command-line tools. You can manage your subscriptions within the Azure Portal. Build, manage, and monitor everything from a simple web app to complex cloud deployments. Another thing you will find within the portal are breadcrumbs. As mentioned before, JSON is the underpinning of all Azure resources. It might be that you start in the Portal to understand the features, services and functionality, but later you will need to understand the JSON underneath to incorporate it into your automated workflows.

![](Images/Day33_Cloud2.png)

There is also the Azure Preview portal, which can be used to view and test new and upcoming services and enhancements.

![](Images/Day33_Cloud3.png)
### PowerShell

Before we get into Azure PowerShell it is worth introducing PowerShell first. PowerShell is a task-automation and configuration-management framework, a command-line shell and a scripting language. We might, dare I say it, liken this to what we covered in the Linux section around shell scripting. PowerShell was first found on the Windows OS, but it is now cross-platform.

Azure PowerShell is a set of cmdlets for managing Azure resources directly from the PowerShell command line.
We can see below that you can connect to your subscription using the PowerShell command `Connect-AzAccount`.

![](Images/Day33_Cloud4.png)

Then if we wanted to find some specific commands associated with Azure VMs we can run the following command. You could spend hours learning and understanding more about this PowerShell programming language.

![](Images/Day33_Cloud5.png)
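If you want to follow along at the command line, the equivalent session might look something like this (a sketch assuming the Az modules, including Az.Compute, are installed):

```
# Log in interactively; this opens a browser window for authentication
Connect-AzAccount

# Discover VM-related cmdlets shipped in the Az.Compute module
Get-Command -Module Az.Compute -Noun AzVM*
```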
There are some great quickstarts from Microsoft on getting started and provisioning services from PowerShell [here](https://docs.microsoft.com/en-us/powershell/azure/get-started-azureps?view=azps-7.1.0)
### Visual Studio Code

Like many, and as you have all seen, my go-to IDE is Visual Studio Code.

Visual Studio Code is a free source-code editor made by Microsoft for Windows, Linux and macOS.

You will see below that there are lots of integrations and tools built into Visual Studio Code that you can use to interact with Microsoft Azure and the services within.

![](Images/Day33_Cloud6.png)
### Cloud Shell

Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work.

![](Images/Day33_Cloud7.png)
You can see from the below that when we first launch Cloud Shell within the portal, we can choose between Bash and PowerShell.

![](Images/Day33_Cloud8.png)

To use Cloud Shell you will have to provide a bit of storage in your subscription.

When you select to use Cloud Shell it spins up a machine; these machines are temporary, but your files are persisted in two ways: through a disk image and a mounted file share.

![](Images/Day33_Cloud9.png)
- Cloud Shell runs on a temporary host provided on a per-session, per-user basis
- Cloud Shell times out after 20 minutes without interactive activity
- Cloud Shell requires an Azure file share to be mounted
- Cloud Shell uses the same Azure file share for both Bash and PowerShell
- Cloud Shell is assigned one machine per user account
- Cloud Shell persists $HOME using a 5-GB image held in your file share
- Permissions are set as a regular Linux user in Bash

The above was copied from [Cloud Shell Overview](https://docs.microsoft.com/en-us/azure/cloud-shell/overview)
### Azure CLI

Finally, I want to cover the Azure CLI. The Azure CLI can be installed on Windows, Linux and macOS. Once installed you can type `az` followed by other commands to create, update, delete and view Azure resources.
When I initially came into my Azure learning I was a little confused by there being both Azure PowerShell and the Azure CLI.

I would love some feedback from the community on this as well. But the way I see it is that Azure PowerShell is a module added to Windows PowerShell or PowerShell Core (also available on other OSes, but not all), whereas the Azure CLI is a cross-platform command-line program that connects to Azure and executes those commands.

Both of these options have a different syntax, although from what I can see and what I have done they can do very similar tasks.
For example, creating a virtual machine from PowerShell would use the `New-AzVM` cmdlet, whereas the Azure CLI would use `az vm create`.

You saw previously that I have the Azure PowerShell module installed on my system, but then I also have the Azure CLI installed, which can be called through PowerShell on my Windows machine.

![](Images/Day33_Cloud10.png)
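To make the syntax difference concrete, here is the same hypothetical VM created both ways. The resource group, VM name and image alias are assumptions for illustration (image aliases change over time, so check what is currently available):

```
# Azure PowerShell: the simplified New-AzVM parameter set prompts for anything missing
New-AzVM -ResourceGroupName "90DaysOfDevOps" -Name "vm90days" -Image "Win2019Datacenter"

# Azure CLI: the same task in az syntax (runs in PowerShell, Cmd, Bash, ...)
az vm create --resource-group 90DaysOfDevOps --name vm90days --image Win2019Datacenter
```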
The takeaway here, as we already mentioned, is about choosing the right tool. Azure runs on automation. Every action you take inside the portal translates somewhere to code being executed to read, create, modify, or delete resources.
Azure CLI

- Cross-platform command-line interface, installable on Windows, macOS, Linux
- Runs in Windows PowerShell, Cmd, Bash and other Unix shells.
Azure PowerShell

- Cross-platform PowerShell module, runs on Windows, macOS, Linux
- Requires Windows PowerShell or PowerShell
If there is a reason you cannot use PowerShell in your environment but you can use Cmd or Bash, then the Azure CLI is going to be your choice.
Next up we take all the theory we have been through, create some scenarios and get hands-on in Azure.
## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

See you on [Day 34](day34.md)
## Microsoft Azure Hands-On Scenarios

The last 6 days have been focused on Microsoft Azure and the public cloud in general. A lot of this foundation had to contain a lot of theory to understand the building blocks of Azure, but this will also translate nicely to the other major cloud providers; you just have to learn what each one calls the equivalent component or service.

I mentioned at the very beginning about getting foundational knowledge of the public cloud and choosing one provider to at least begin with. If you are dancing between different clouds then I believe you can get lost quite easily, whereas by choosing one you get to understand the fundamentals, and once you have those it is quite easy to jump into the other clouds and accelerate your learning.

In this final session, I am going to be picking and choosing my hands-on scenarios from this page here, which is a reference created by Microsoft and is used for preparations for the [AZ-104 Microsoft Azure Administrator](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/)

There are some here such as Containers and Kubernetes that we have not covered in any detail as of yet, so I don't want to jump in there just yet.

In previous posts, we have created most of Modules 1, 2 and 3.
### Virtual Networking

Following [Module 04](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_04-Implement_Virtual_Networking.html):

I went through the above and changed a few namings for #90DaysOfDevOps. Also, instead of using Cloud Shell, I went ahead and logged in with my new user created on previous days with the Azure CLI on my Windows machine.

You can do this using `az login`, which will open a browser and let you authenticate to your account.
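A minimal session could look like this:

```
# Opens a browser window for interactive authentication
az login

# Confirm which subscription the CLI is now using
az account show
```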
I have then created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in the folder [Cloud/01VirtualNetworking](Cloud/01VirtualNetworking/).

Please make sure you change the file location in the script to suit your environment.
At this first stage, we have no virtual network or virtual machines created in our environment; I only have a Cloud Shell storage location configured in my resource group.

I first of all run my [PowerShell script](Cloud/01VirtualNetworking/Module4_90DaysOfDevOps.ps1)

![](Images/Day34_Cloud1.png)
- Task 1: Create and configure a virtual network

![](Images/Day34_Cloud2.png)

- Task 2: Deploy virtual machines into the virtual network

![](Images/Day34_Cloud3.png)

- Task 3: Configure private and public IP addresses of Azure VMs

![](Images/Day34_Cloud4.png)

- Task 4: Configure network security groups

![](Images/Day34_Cloud5.png)
![](Images/Day34_Cloud6.png)

- Task 5: Configure Azure DNS for internal name resolution

![](Images/Day34_Cloud7.png)
![](Images/Day34_Cloud8.png)
### Network Traffic Management

Following [Module 06](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_06-Implement_Network_Traffic_Management.html):

Next walkthrough. From the last one, we have gone into our resource group and deleted our resources. If you had not set up the user account like me to only have access to that one resource group, you could follow the module, changing the name to `90Days*`; this will delete all resources and the resource group. This will be my process for each of the following labs.
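As a sketch with the Az module, that cleanup between labs can be a single line (double-check the name first; this deletes the group and everything in it):

```
# Deletes the resource group and every resource inside it; -Confirm prompts before acting
Remove-AzResourceGroup -Name "90DaysOfDevOps" -Confirm
```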
For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in the folder [Cloud/02TrafficManagement](Cloud/02TrafficManagement/).
- Task 1: Provision the lab environment

I first of all run my [PowerShell script](Cloud/02TrafficManagement/Mod06_90DaysOfDevOps.ps1)

![](Images/Day34_Cloud10.png)

- Task 2: Configure the [hub and spoke network topology](https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology)

![](Images/Day34_Cloud11.png)

- Task 3: Test transitivity of virtual network peering
For this, my 90DaysOfDevOps group did not have access to the Network Watcher because of permissions. I expect this is because Network Watchers are one of those resources that are not tied to a resource group, which is where our RBAC was scoped for this user. I added the East US Network Watcher contributor role to the 90DaysOfDevOps group.

![](Images/Day34_Cloud12.png)
![](Images/Day34_Cloud13.png)
![](Images/Day34_Cloud14.png)

^ This is expected since the two spoke virtual networks do not peer with each other (virtual network peering is not transitive).
- Task 4: Configure routing in the hub and spoke topology

I had another issue here with my account not being able to run the script as my user within the group 90DaysOfDevOps, which I am unsure about, so I jumped back into my main admin account. The 90DaysOfDevOps group is an owner of everything in the 90DaysOfDevOps resource group, so I would love to understand why I cannot run a command inside the VM?

![](Images/Day34_Cloud15.png)
![](Images/Day34_Cloud16.png)

I was then able to go back into my michael.cade@90DaysOfDevOps.com account and continue this section. Here we are running the same test again, but now with the result being reachable.

![](Images/Day34_Cloud17.png)
- Task 5: Implement Azure Load Balancer

![](Images/Day34_Cloud18.png)
![](Images/Day34_Cloud19.png)

- Task 6: Implement Azure Application Gateway

![](Images/Day34_Cloud20.png)
![](Images/Day34_Cloud21.png)
### Azure Storage

Following [Module 07](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_07-Manage_Azure_Storage.html):

For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in the folder [Cloud/03Storage](Cloud/03Storage/).
- Task 1: Provision the lab environment

I first of all run my [PowerShell script](Cloud/03Storage/Mod07_90DaysOfDeveOps.ps1)

![](Images/Day34_Cloud22.png)

- Task 2: Create and configure Azure Storage accounts

![](Images/Day34_Cloud23.png)

- Task 3: Manage blob storage

![](Images/Day34_Cloud24.png)

- Task 4: Manage authentication and authorization for Azure Storage

![](Images/Day34_Cloud25.png)
![](Images/Day34_Cloud26.png)

I was a little impatient waiting for this to be allowed, but it did work eventually.
- Task 5: Create and configure an Azure Files share

On the run command, this would not work with michael.cade@90DaysOfDevOps.com, so I used my elevated account.

![](Images/Day34_Cloud27.png)
![](Images/Day34_Cloud28.png)
![](Images/Day34_Cloud29.png)
- Task 6: Manage network access for Azure Storage

![](Images/Day34_Cloud30.png)
### Serverless (Implement Web Apps)

Following [Module 09a](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_09a-Implement_Web_Apps.html):

- Task 1: Create an Azure web app

![](Images/Day34_Cloud31.png)

- Task 2: Create a staging deployment slot

![](Images/Day34_Cloud32.png)

- Task 3: Configure web app deployment settings

![](Images/Day34_Cloud33.png)

- Task 4: Deploy code to the staging deployment slot

![](Images/Day34_Cloud34.png)

- Task 5: Swap the staging slots

![](Images/Day34_Cloud35.png)

- Task 6: Configure and test autoscaling of the Azure web app

The script I am using can be found in the folder [Cloud/05Serverless](Cloud/05Serverless)

![](Images/Day34_Cloud36.png)
This wraps up the section on Microsoft Azure and the public cloud in general. I will say that I had lots of fun attacking and working through these scenarios.

## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

Next, we will be diving into version control systems, specifically around git, and then also code repository overviews, and we will be choosing GitHub as this is my preferred option.

See you on [Day 35](day35.md)
## The Big Picture: Git - Version Control

Before we get into git, we need to understand what version control is and why. In this opener for Git, we will take a look at what version control is, and the basics of git.

### What is Version Control?

Git is not the only version control system, so here we want to cover what options and what methodologies are available around version control.
The most obvious and biggest benefit of Version Control is the ability to track a project's history. We can look back over this repository using `git log` and see that we have many commits and many comments, and what has happened so far in the project. Don't worry, we will get into the commands later. Now think if this was an actual software project full of source code, with multiple people committing to our software at different times; different authors and then reviewers are all logged here, so that we know what has happened, when, by whom, and who reviewed.

![](Images/Day35_Git1.png)

Version Control, before it was cool, would have been something like manually creating a copy of your version before you made changes, and manually noting those changes in a document typically called a changelog. It might be that you also comment out old useless code with the just-in-case mentality.

![](Images/Day35_Git2.png)
Once you realise the benefits, you start using version control over not just source code but pretty much anything, for projects like this (90DaysOfDevOps). Why not take advantage of features like rollback and a log of everything that has gone on?

However, a big disclaimer: ⚠️ **Version Control is not a Backup!** ⚠️
Another benefit of Version Control is the ability to manage multiple versions of a project. Let's create an example: we have a free app that is available on all operating systems, and a paid-for app also available on all operating systems. The majority of the code is shared between both applications. We could copy and paste our code in each commit to each app, but that is going to be very messy, especially as you scale your development to more than just one person; mistakes will also be made.

The premium app is where we are going to have additional features, let's call them premium commits; the free edition will just contain the normal commits.
The way this is achieved in Version Control is through branching.

![](Images/Day35_Git3.png)

Branching allows two code streams for the same app, as we stated above. But we will still want new features that land in our free version to be in our premium version, and to achieve this we have something called merging.

![](Images/Day35_Git4.png)
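As a minimal sketch of the free/premium example under assumed branch names (the free edition on `main`, the paid edition on a `premium` branch):

```
# Hypothetical branch layout for the example above
git branch premium        # create a branch for the paid edition
git switch premium        # add premium-only commits here
git switch main           # back to the free edition for shared features
git switch premium
git merge main            # bring the new free features into premium
```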
Now, this sounds easy, but merging can be complicated, because you could have a team working on the free edition and another team working on the premium paid-for version, and what if both change code that affects aspects of the overall code? Maybe a variable gets updated and breaks something. Then you have a conflict that breaks one of the features. Version Control cannot fix the conflicts; that is down to you. But Version Control allows this to be easily managed.

The primary reason, if you have not picked up on it so far, for version control in general is the ability to collaborate. The ability to share code amongst developers; and when I say code, as I said before, more and more we are seeing many more use cases for source control. Maybe it's a joint presentation you are working on with a colleague, or a 90DaysOfDevOps challenge where you have the community offering their corrections and updates throughout the project.
Without version control, how did teams of software developers even handle this? I find it hard enough when I am working on my projects to keep track of things. I expect they would split out the code into functional modules, then, like a puzzle, bring the pieces together and resolve the problems and issues before anything would get released. ([Waterfall development](https://en.wikipedia.org/wiki/Waterfall_model).)

With version control, we have a single source of truth. We might all still work on different modules, but it enables us to collaborate better because we can see each other's work.

![](Images/Day35_Git5.png)
Another thing to mention here is that it's not just developers that can benefit from Version Control; it gives all members of the team visibility, and tools can be aware of and leverage it too. Project management tools can be linked here, tracking the work. We might also have a build machine, for example Jenkins, which we will talk about in another module: a tool that builds and packages the system, automating deployment tests and metrics.
### What is Git?

Git is a tool that tracks changes to source code or any file; or we could also say Git is an open-source distributed version control system.

There are many ways in which git can be used on our systems; most commonly, or at least for me, I have seen it at the command line, but we also have graphical user interfaces and tools like Visual Studio Code that have git-aware operations we can take advantage of.

Now we are going to run through a high-level overview before we even get Git installed on our local machine.

Let's take the folder we created earlier.

![](Images/Day35_Git6.png)
To use this folder with version control, we first need to initiate this directory using the `git init` command. For now, just think that this command puts our directory, as a repository, in a database somewhere on our computer.

![](Images/Day35_Git7.png)

Now we can create some files and folders and our source code can begin, or maybe it already has and we have something in here already. We can use the `git add .` command, which puts all files and folders in our directory into a snapshot, but we have not yet committed anything to that database. We are just saying all files with the `.` are ready to be added.

![](Images/Day35_Git8.png)
Then we want to go ahead and commit our files. We do this with the `git commit -m "My First Commit"` command. We can give a reason for our commit with the `-m` (message) option, and this is suggested so we know what has happened for each commit.

![](Images/Day35_Git9.png)

We can now see what has happened within the history of the project, using the `git log` command.

![](Images/Day35_Git10.png)
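Pulling those first commands together, the whole first-commit flow from the screenshots above looks like this as one console session:

```
git init                           # turn the current folder into a repository
git add .                          # stage every file in the directory
git commit -m "My First Commit"    # record the snapshot with a message
git log                            # review the project history
```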
If we create an additional file called `samplecode.ps1`, the status would become different. We can check the status of our repository by using `git status`; previously it showed we had nothing to commit. After creating the new `samplecode.ps1` file, running the same `git status` shows that we have a file to be committed.

![](Images/Day35_Git11.png)
Add our new file using the `git add samplecode.ps1` command, and then we can run `git status` again and see our file is ready to be committed.

![](Images/Day35_Git12.png)
Then issue the `git commit -m "My Second Commit"` command.

![](Images/Day35_Git13.png)

Another `git status` now shows everything is clean again.

![](Images/Day35_Git14.png)
We can then use the `git log` command, which shows the latest changes and the first commit.

![](Images/Day35_Git15.png)
If we wanted to see the changes between our commits, i.e. what files have been added or modified, we can use `git diff b8f8 709a`.

![](Images/Day35_Git16.png)

This then displays what has changed; in our case, we added a new file.

![](Images/Day35_Git17.png)
We will go deeper into this later on, but we can jump around our commits, i.e. we can go time travelling! By using our commit number, we can use the `git checkout 709a` command to jump back in time without losing our new file.

![](Images/Day35_Git18.png)

But then equally we will want to move forward as well, and we can do this the same way with the commit number, or you can see here we are using the `git switch -` command to undo our operation.

![](Images/Day35_Git19.png)
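The comparing and time-travelling commands above, as one session (the short hashes will differ in your repository):

```
git log --oneline      # list commits with their short hashes
git diff b8f8 709a     # show what changed between two commits
git checkout 709a      # jump back to an earlier commit (detached HEAD)
git switch -           # return to the branch you were on
```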
The TLDR;

- Tracking a project's history
- Managing multiple versions of a project
- Sharing code amongst developers and a wider scope of teams and tools
- Coordinating teamwork
- Oh, and there is some time travel!
This might have seemed like a jump around, but hopefully, without really knowing the commands used, you can see the powers and the big picture behind Version Control.

Next up we will be getting git installed and set up on your local machine, and diving a little deeper into some other use cases and commands that we can achieve in Git.
## Resources

- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [In Spanish] [Comandos Git](https://gitea.vergaracarmona.es/man-linux/comandos-git)
- [In Spanish] [Apuntes Curso de Git](https://vergaracarmona.es/wp-content/uploads/2022/10/Curso-git_vergaracarmona.es_.pdf)
- [In Spanish] In the translator's [notes](https://vergaracarmona.es/apuntes/):
  - ["Instalar git en ubuntu"](https://vergaracarmona.es/instalar-git-en-ubuntu/)
  - ["Comandos de git"](https://vergaracarmona.es/comandos-de-git/)
  - ["Estrategias de fusión en git: Ship / Show / Ask"](https://vergaracarmona.es/estrategias-bifurcacion-git-ship-show-ask/)
  - ["Resolver conflictos en Git. Merge, Squash, Rebase o Pull"](https://vergaracarmona.es/merge-squash-rebase-pull/)
  - ["Borrar commits de git: reset, rebase y cherry-pick"](https://vergaracarmona.es/reset-rebase-cherry-pick/)

See you on [Day 36](day36.md)
2022/vi/Days/day25.md
---
title: '#90DaysOfDevOps - Python for Network Automation - Day 25'
published: false
description: 90DaysOfDevOps - Python for Network Automation
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1049038
---

## Python for Network Automation
Python is the standard programming language used for network configuration automation.

Although Python is not exclusively for network automation, it seems to be everywhere whenever you are searching for tooling. As mentioned previously, if it's not a Python program, then it is probably Ansible (which is itself written in Python).

I think I have mentioned this already: during the "Learn a programming language" section, I chose Golang over Python for reasons around my company developing in Go, so that was a good reason for me to learn Go; but if not for that, Python would have been the choice.

- Readability and ease of use: this is why Python is such a popular programming language. Python does not require `{}` in a program to begin and end blocks of code. Combine this with a strong IDE like VS Code, and you have a pretty easy start when you want to run some Python code.

PyCharm might be another IDE worth mentioning here.

- Libraries: Python's extensibility is the real goldmine here. I mentioned before that Python is not just for network automation; in fact, there are plenty of libraries for all sorts of devices and configurations. You can see the sheer volume at [PyPi](https://pypi.python.org/pypi)

When you want to download a library to your workstation, you use a tool called `pip` to connect to PyPI and download it locally. Network vendors such as Cisco, Juniper and Arista have developed libraries to facilitate access to their devices.

- Powerful & efficient: remember during the Go days I did the "Hello World" program in 6 lines of code? In Python it is

```
print('hello world')
```

Put all of the above together, and it is easy to see why Python is generally mentioned as the standard language when working on automation.
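For instance, pulling one of those device libraries from PyPI is a single `pip` command (netmiko is one well-known community option; any vendor library installs the same way):

```
# Install a multi-vendor SSH library for network devices from PyPI
pip install netmiko
```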
I think it's important to note that a few years ago there may have been programs to interact with your network devices to, say, automate configuration backups or gather logs and other insights about your devices. The automation we are talking about here is a little different, and that's because the overall networking landscape has also changed to better suit this way of thinking and enable more automation.

- Software-Defined Networking - the SDN controller takes responsibility for delivering the control-plane configuration to all devices on the network, meaning just a single point of contact for any network changes; no more telnetting or SSHing into every device, and no relying on humans to do this, which is likely to cause errors or misconfiguration.
- High-Level Orchestration - sits a level above the SDN controller and allows orchestration at the level of services, as well as the integration of this orchestration layer into your platforms of choice: VMware, Kubernetes, public cloud services, etc.
- Policy-based management - what policy do you want in place? What is the desired state of the service? You describe this, and the system has all the details of how to make it become the desired state.
## Cài đặt môi trường lab
|
||||
|
||||
Không phải ai cũng có thể sở hữu các thiết bị router, swith, và các thiết bị mạng khác.
|
||||
|
||||
Chúng ta có thể sử dụng một số phần mềm cho phép chúng ta có thể thực hành và tìm hiểu cách tự động hóa cấu hình mạng của chúng ta.
|
||||
|
||||
Có một vài phần mềm mà chúng ta có thể chọn.
|
||||
|
||||
- [GNS3 VM](https://www.gns3.com/software/download-vm)
|
||||
- [Eve-ng](https://www.eve-ng.net/)
|
||||
- [Unimus](https://unimus.net/) (Không phải công cụ tạo lab nhưng cung cấp các khái niệm thú vị).
|
||||
|
||||
Chúng ta sẽ xây dựng lab với [Eve-ng](https://www.eve-ng.net/). Như đã đề cập trước đây, bạn có thể sử dụng thiết bị vật lý nhưng thành thật mà nói, môi trường ảo có nghĩa là chúng ta có thể có môi trường an toàn để thử nghiệm nhiều tình huống khác nhau. Ngoài ra việc có thể thực hành các thiết bị và cấu trúc mạng khác nhau cũng rất thú vị.
|
||||
|
||||
Chúng ta sẽ thực hành mọi thứ trên EVE-NG phiên bản cộng đồng.
|
||||
|
||||
### Bắt đầu
|
||||
|
||||
Bạn có thể tải phiên bản cộng dồng dưới định dạng ISO và OVF tại đây. [download](https://www.eve-ng.net/index.php/download/)
|
||||
|
||||
Chúng ta sẽ sử dụng bản tải xuống định dạng OVF, với định dạng ISO, bạn có thể cài đặt trực tiếp trên server của bạn mà không cần chương trình tạo máy ảo.
|
||||
|
||||

|
||||
|
||||
Đối với hướng dẫn này, chúng ta sẽ sử dụng VMware Workstation vì tôi có giấy phép sử dụng thông qua vExpert nhưng bạn cũng có thể sử dụng VMware Player hoặc bất kỳ tùy chọn nào khác được đề cập trong [documentation](https://www.eve-ng.net/index.php/documentation/installation/system-requirement/). Rất tiếc, chúng ta không thể sử dụng Virtual Box!
|
||||
|
||||
Đây cũng là lúc tôi gặp vấn đề khi sử dụng GNS3 với Virtual Box.
|
||||
|
||||
[Download VMware Workstation Player - FREE](https://www.vmware.com/uk/products/workstation-player.html)
|
||||
|
||||
[VMware Workstation PRO](https://www.vmware.com/uk/products/workstation-pro.html) (Lưu ý rằng nó chỉ miễn phí trong thời gian dùng thử!)
|
||||
|
||||
### Cài đặt VMware Workstation PRO
|
||||
|
||||
Bây giờ chúng ta đã tải xuống và cài đặt phần mềm ảo hóa và chúng ta cũng đã tải xuống EVE-NG OVF. Nếu bạn đang sử dụng VMware Player, vui lòng cho tôi biết quy trình này có giống như vậy không.
|
||||
|
||||
Bây giờ chúng ta đã sẵn sàng để cấu hình mọi thứ.
|
||||
|
||||
Mở VMware Workstation rồi chọn `file` và `open`
|
||||
|
||||

|
||||
|
||||
Khi bạn tải xuống file EVE-NG OVF, nó sẽ nằm trong một tệp nén. Giải nén nội dung vào thư mục và nó trông như thế này.
|
||||
|
||||

|
||||
|
||||
Chọn thư mục mà bạn đã tải xuống hình ảnh EVE-NG OVF và bắt đầu import.
|
||||
|
||||
Đặt cho nó một cái tên dễ nhận biết và lưu trữ máy ảo ở đâu đó trên máy tính của bạn.
|
||||
|
||||

|
||||
|
||||
Khi quá trình import hoàn tất, hãy tăng số lượng bộ xử lý (CPU) lên 4 và bộ nhớ (RAM) được phân bổ lên 8 GB. (Đây là cài đặt khi bạn import phiên bản mới nhất, nhưng nếu không đúng thì hãy chỉnh sửa lại như vậy).
|
||||
|
||||
Ngoài ra, hãy đảm bảo tùy chọn Virtualise Intel VT-x/EPT hoặc AMD-V/RVI đã được bật. Tùy chọn này hướng dẫn VMware chuyển các cờ ảo hóa cho HĐH khách (ảo hóa lồng nhau) Đây là vấn đề tôi gặp phải khi sử dụng GNS3 với Virtual Box mặc dù CPU của tôi hỗ trợ tính năng này.
|
||||
|
||||

|
||||
|
||||
### Khởi động và truy cập
|
||||
|
||||
Hãy nhớ rằng tôi đã đề cập rằng điều này sẽ không hoạt động với VirtualBox! Vâng, có cùng một vấn đề với VMware Workstation và EVE-NG nhưng đó không phải là lỗi của nền tảng ảo hóa!
|
||||
|
||||
Tôi có WSL2 đang chạy trên Máy Windows của mình và điều này dường như loại bỏ khả năng chạy bất kỳ thứ gì được lồng trong môi trường ảo của bạn. Tôi thắc mắc không biết tại sao Ubuntu VM lại chạy vì nó dường như vô hiệu hóa tính năng Intel VT-d của CPU khi sử dụng WSL2.
|
||||
|
||||
Để giải quyết vấn đề này, chúng ta có thể chạy lệnh sau trên máy Windows của mình và khởi động lại hệ thống, lưu ý rằng trong khi lệnh này tắt thì bạn sẽ không thể sử dụng WSL2.
|
||||
|
||||
`bcdedit /set hypervisorlaunchtype off`
|
||||
|
||||
Khi bạn muốn quay lại và sử dụng WSL2, bạn sẽ cần chạy lệnh này và khởi động lại.
|
||||
|
||||
`bcdedit /set hypervisorlaunchtype auto`
|
||||
|
||||
Cả hai lệnh này nên được chạy với quyền administrator!
|
||||
|
||||
Ok quay lại hướng dẫn, bây giờ bạn sẽ có một máy ảo đang được chạy trong VMware Workstation và bạn sẽ có một lời nhắc tương tự như thế này trên màn hình.
|
||||
|
||||

|
||||
|
||||
Trên lời nhắc ở trên, bạn có thể sử dụng:
|
||||
|
||||
username = root
|
||||
password = eve
|
||||
|
||||
Sau đó, bạn sẽ được yêu cầu cung cấp lại mật khẩu root, mật khẩu này sẽ được sử dụng để SSH vào máy chủ sau này.
|
||||
|
||||
Sau đó chúng ta có thể thay đổi hostname của máy chủ.
|
||||
|
||||

|
||||
|
||||
Tiếp theo, chúng ta thiết lập DNS Domain Name, tôi đã sử dụng tên bên dưới nhưng tôi không chắc liệu điều này có cần thay đổi sau này hay không.
|
||||
|
||||

|
||||
|
||||
Sau đó, chúng ta cấu hình mạng, tôi chọn sử dụng địa chỉ IP tĩnh (static) để nó không thay đổi sau khi khởi động lại.
|
||||
|
||||

|
||||
|
||||
Bước cuối cùng, thiết lập một địa chỉ IP tĩnh trong mạng mà bạn có thể truy cập được từ máy tính của mình.
|
||||
|
||||

|
||||
|
||||
Có một số bước bổ sung ở đây, trong đó bạn sẽ phải cung cấp subnet mask, default gateway và DNS.
|
||||
|
||||
Sau khi hoàn tất, máy ảo sẽ khởi động lại, lúc này bạn có thể điền địa chỉ IP tĩnh đã thiết lập vào trình duyệt của mình để truy cập.
|
||||
|
||||

|
||||
|
||||
Tên người dùng mặc định cho GUI là `admin` và mật khẩu là `eve` trong khi tên người dùng mặc định cho SSH là `root` và mật khẩu là `eve` nhưng bạn có thể thay đổi trong quá trình thiết lập.
|
||||
|
||||

|
||||
|
||||
Tôi đã chọn HTML5 cho bảng điều khiển thay vì native vì nó cho phép sẽ mở một tab mới trong trình duyệt của bạn khi bạn điều hướng qua các bảng điều khiển khác nhau.
|
||||
|
||||
Phần tiếp theo chúng ta sẽ tìm hiểu:
|
||||
|
||||
- Cài đặt gói ứng dụng EVE-NG
|
||||
- Tải một số file hệ điều hành vào EVE-NG
|
||||
- Xây dựng mô hình mạng
|
||||
- Thêm node
|
||||
- Kết nối các node
|
||||
- Bắt đầu viết chương trình Python
|
||||
- Tìm hiểu các thư viện telnetlib, Netmiko, Paramiko và Pexpect
|
||||
|
||||
## Tài nguyên tham khảo
|
||||
|
||||
- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg)
|
||||
- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8)
|
||||
- [3 Necessary Skills for Network Automation](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s)
|
||||
- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
|
||||
- [Practical Networking](http://www.practicalnetworking.net/)
|
||||
- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126)
|
||||
|
||||
Hẹn gặp lại các bạn ngày [Ngày 26](day26.md)
|
140
2022/vi/Days/day27.md
Normal file
@ -0,0 +1,140 @@
---
title: '#90DaysOfDevOps - Hands-on with Python - Day 27'
published: false
description: 90DaysOfDevOps - Hands-on with Python
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048735
---

## Hands-on with Python

In this final part of the networking series, we will walk through some automation tasks and tools using the lab environment built on [Day 26](day26.md).

We will use SSH to connect to the devices in our network. SSH traffic is encrypted, as covered earlier in the Linux part of the series; see [Day 18](day18.md).

## Accessing the virtual emulated environment

To interact with the switches, you can set up a workstation inside the EVE-NG network, you can deploy a Linux box with Python installed inside EVE-NG ([Resource for setting up Linux inside EVE-NG](https://www.youtube.com/watch?v=3Qstk3zngrY)), or you can do as I did and create management access from your own machine.



To set this up, right-click in the application's workspace, choose Network, and then choose "Management(Cloud0)"; this creates a new network bridged to the machine you are working on (the host).



However, we still need to connect the existing devices to this new network. (My networking knowledge is still limited, and I suspect you could do this next step differently, by connecting the router to the switches and letting that provide connectivity to the rest of the network?)

Next, log on to each device and run the following commands on the interface that connects to "Management(Cloud0)".

```
enable
config t
int gi0/0
IP add DHCP
no sh
exit
exit
sh ip int br
```

These commands assign an IP address to the interface connected to the home network. The device addresses are listed in the following table:

| Node    | IP Address   | Home Network IP |
| ------- | ------------ | --------------- |
| Router  | 10.10.88.110 | 192.168.169.115 |
| Switch1 | 10.10.88.111 | 192.168.169.178 |
| Switch2 | 10.10.88.112 | 192.168.169.193 |
| Switch3 | 10.10.88.113 | 192.168.169.125 |
| Switch4 | 10.10.88.114 | 192.168.169.197 |

### SSH to a network device

With the addresses above, we can connect to the devices in the network from our host machine. I am using Putty, but you can use any SSH client.

Below you can see me making an SSH connection to my router. (R1)



### Using Python to gather information from our devices

The first example uses Python to gather information from all of my devices. Specifically, I connect to each one and run a simple command to retrieve the interface configuration. I have saved this script here: [netmiko_con_multi.py](../../Days/Networking/netmiko_con_multi.py)

When I run it, I can see each port configuration across all of my devices.



This is handy if you have a lot of different devices: build one script like this so you can centrally control and quickly inspect all of their configurations in a single run.

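The full script lives at the link above; a minimal hedged sketch of the same pattern with Netmiko looks like this (the IPs come from the table above, while the `device_type` and credentials are placeholder assumptions, not the repo's actual values):

```python
# Sketch: connect to each lab device and print its interface summary.
from netmiko import ConnectHandler

devices = ["10.10.88.110", "10.10.88.111", "10.10.88.112", "10.10.88.113", "10.10.88.114"]

for ip in devices:
    connection = ConnectHandler(
        device_type="cisco_ios",  # assumed platform for these lab nodes
        host=ip,
        username="admin",         # placeholder credentials
        password="password",
    )
    print(f"----- {ip} -----")
    print(connection.send_command("show ip interface brief"))
    connection.disconnect()
```
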
### Using Python to configure our devices

The previous example is useful, but what about using Python to configure our devices? In this scenario there is a trunk port between `SW1` and `SW2`. Again, imagine this change had to be made across many switches: we want to automate it rather than connecting to each switch by hand to make the configuration change.

We can use [netmiko_sendchange.py](../../Days/Networking/netmiko_sendchange.py) to achieve this. It connects over SSH and makes the required change on `SW1` and `SW2`.


If you look at the code, you will see the message `sending configuration to device` appear, but there is no confirmation that the change was actually made. We could add code to the script to perform that check and validation on the switches, or we could modify our first example to show it to us: [netmiko_con_multi_vlan.py](../../Days/Networking/netmiko_con_multi_vlan.py)



### Backing up your device configurations

Another use case is backing up our network configurations. If you do not want to connect to every device on your network, you can specify only the ones you want to back up. You can automate this using [backup.py](../../Days/Networking/backup.py). You will need to fill [backup.txt](../../Days/Networking/backup.txt) with the IP addresses you want to back up.

Run the script and you should see something like the output below.



That is just some simple output printed to the screen; here are the backup files themselves.


### Paramiko

A widely used Python library for SSH connections. You can find out more [here](https://github.com/paramiko/paramiko)

We can install it using the `pip install paramiko` command.



We can verify the installation by importing the paramiko module in a Python shell.


### Netmiko

The netmiko library targets network devices specifically, whereas paramiko is a broader library for handling SSH connections in general.

Netmiko, which I used above alongside paramiko, can be installed with `pip install netmiko`.

Netmiko supports many network vendors and devices; you can find the list of supported devices on the [GitHub Page](https://github.com/ktbyers/netmiko#supports)

### Other libraries

Also worth mentioning are a few other libraries that we have not had the chance to look at, but which provide a lot of functionality for network automation.

The `netaddr` library is used for working with and manipulating IP addresses; it can be installed with `pip install netaddr`.
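A quick taste of what `netaddr` gives you:

```python
# Sketch: basic address and subnet arithmetic with netaddr.
from netaddr import IPAddress, IPNetwork

network = IPNetwork("10.10.88.0/24")        # the lab management subnet
print(network.netmask)                       # 255.255.255.0
print(network.broadcast)                     # 10.10.88.255
print(IPAddress("10.10.88.110") in network)  # True
```
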
If you keep a lot of your switch configuration in an Excel spreadsheet, the `xlrd` library gives you methods for working with workbooks and converting rows and columns into a matrix. Install it with `pip install xlrd`.
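A hedged sketch of reading such a spreadsheet (the file name `switches.xls` and its layout are assumptions for illustration; note that recent versions of xlrd only read the older `.xls` format):

```python
# Sketch: turn every spreadsheet row into a plain list of cell values.
import xlrd

workbook = xlrd.open_workbook("switches.xls")
sheet = workbook.sheet_by_index(0)

for row_index in range(sheet.nrows):
    print(sheet.row_values(row_index))
```
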
You can also find some other network automation examples that I have not had the chance to cover [here](https://github.com/ktbyers/pynet/tree/master/presentations/dfwcug/examples)

This brings the networking section of #90DaysOfDevOps to a close. Networking is an area I have not worked in for a while, and there is a lot more that could be covered, but I hope my notes and the resources shared throughout these days are helpful to some of you.

## Resources

- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg)
- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8)
- [3 Necessary Skills for Network Automation](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s)
- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
- [Practical Networking](http://www.practicalnetworking.net/)
- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126)

As I am not a networking engineer, most of the examples I have used above come from this book:

- [Hands-On Enterprise Automation with Python (Book)](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512)

See you on [Day 28](day28.md), where we will look at cloud computing and get a good grounding in the fundamentals of the topic.

52
2023.md
@ -16,7 +16,7 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
## List of Topics

| Topic | Author | Date | Twitter Handle |
| Topic | Author | Date | Twitter Handle |
| -------------------------------------- | ----------------------------------- | ------------------- | ----------------------------------------------------------------------------------------------- |
| DevSecOps | Michael Cade | 1st Jan - 6th Jan | [@MichaelCade1](https://twitter.com/MichaelCade1) |
| Secure Coding | Prateek Jain | 7th Jan - 13th Jan | [@PrateekJainDev](https://twitter.com/PrateekJainDev) |
@ -59,30 +59,30 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich

- [✔️] 🐧 14 > [Container Image Scanning](2023/day14.md)
- [✔️] 🐧 15 > [Container Image Scanning Advanced](2023/day15.md)
- [✔️] 🐧 16 > [Fuzzing](2023/day16.md)
- [] 🐧 17 > [](2023/day17.md)
- [] 🐧 18 > [](2023/day18.md)
- [] 🐧 19 > [](2023/day19.md)
- [] 🐧 20 > [](2023/day20.md)
- [✔️] 🐧 17 > [Fuzzing Advanced](2023/day17.md)
- [✔️] 🐧 18 > [DAST](2023/day18.md)
- [✔️] 🐧 19 > [IAST](2023/day19.md)
- [✔️] 🐧 20 > [Practical Lab on IAST and DAST](2023/day20.md)

### Continuous Delivery & Deployment

- [] 🌐 21 > [](2023/day21.md)
- [] 🌐 22 > [](2023/day22.md)
- [] 🌐 23 > [](2023/day23.md)
- [] 🌐 24 > [](2023/day24.md)
- [] 🌐 25 > [](2023/day25.md)
- [] 🌐 26 > [](2023/day26.md)
- [] 🌐 27 > [](2023/day27.md)
- [✔️] 🌐 21 > [Continuous Image Repository Scan](2023/day21.md)
- [✔️] 🌐 22 > [Continuous Image Repository Scan - Container Registries](2023/day22.md)
- [✔️] 🌐 23 > [Artifacts Scan](2023/day23.md)
- [✔️] 🌐 24 > [Signing](2023/day24.md)
- [✔️] 🌐 25 > [Systems Vulnerability Scanning](2023/day25.md)
- [✔️] 🌐 26 > [Containers Vulnerability Scanning](2023/day26.md)
- [✔️] 🌐 27 > [Network Vulnerability Scan](2023/day27.md)

### Runtime Defence & Monitoring

- [] ☁️ 28 > [](2023/day28.md)
- [] ☁️ 29 > [](2023/day29.md)
- [] ☁️ 30 > [](2023/day30.md)
- [] ☁️ 31 > [](2023/day31.md)
- [] ☁️ 32 > [](2023/day32.md)
- [] ☁️ 33 > [](2023/day33.md)
- [] ☁️ 34 > [](2023/day34.md)
- [✔️] ☁️ 28 > [System monitoring and auditing](2023/day28.md)
- [✔️] ☁️ 29 > [Application level monitoring](2023/day29.md)
- [✔️] ☁️ 30 > [Detecting suspicious application behavior](2023/day30.md)
- [] ☁️ 31 > [Firewalls and network protection](2023/day31.md)
- [] ☁️ 32 > [Vulnerability and patch management](2023/day32.md)
- [] ☁️ 33 > [Application whitelisting and software trust management](2023/day33.md)
- [] ☁️ 34 > [Runtime access control](2023/day34.md)

### Secrets Management

@ -96,19 +96,19 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich

### Python

- [] 🏗️ 42 > [](2023/day42.md)
- [] 🏗️ 43 > [](2023/day43.md)
- [] 🏗️ 44 > [](2023/day44.md)
- [] 🏗️ 45 > [](2023/day45.md)
- [] 🏗️ 42 > [Programming Language: Introduction to Python](2023/day42.md)
- [] 🏗️ 43 > [Python Loops, functions, modules and libraries](2023/day43.md)
- [] 🏗️ 44 > [Data Structures and OOP in Python](2023/day44.md)
- [] 🏗️ 45 > [Debugging, testing and Regular expression](2023/day45.md)
- [] 🏗️ 46 > [](2023/day46.md)
- [] 🏗️ 47 > [](2023/day47.md)
- [] 🏗️ 48 > [](2023/day48.md)

### AWS

- [] ☸ 49 > [](2023/day49.md)
- [] ☸ 50 > [](2023/day50.md)
- [] ☸ 51 > [](2023/day51.md)
- [✔️] ☸ 49 > [AWS Cloud Overview](2023/day49.md)
- [✔️] ☸ 50 > [Get a Free Tier Account & Enable Billing Alarms](2023/day50.md)
- [✔️] ☸ 51 > [Infrastructure as Code (IaC) and CloudFormation](2023/day51.md)
- [] ☸ 52 > [](2023/day52.md)
- [] ☸ 53 > [](2023/day53.md)
- [] ☸ 54 > [](2023/day54.md)

@ -1,7 +1,7 @@
# Fuzzing

Fuzzing, also known as "fuzz testing," is a software testing technique that involves providing invalid, unexpected, or random data as input to a computer program.
The goal of fuzzing is to identify security vulnerabilities and other bugs in the program by causing it to crash or exhibit unintended behavior.
The goal of fuzzing is to identify security vulnerabilities and other bugs in the program by causing it to crash or exhibit unintended behaviour.

Fuzzing can be performed manually or by using a testing library/framework to craft the inputs for us.

@ -32,12 +32,12 @@ However, in more complex systems such fail points may not be obvious, and may be

This is where fuzzing comes in handy.

The Go Fuzzing library (part of the standard language library since Go 1.18) generates many inputs for a test case, and then based on the coverage and the results determines which inputs are "interesting".
The Go Fuzzing library (part of the standard language library since Go 1.18) generates many inputs for a test case, and then based on the coverage and the results determine which inputs are "interesting".

If we write a fuzz test for this function what will happen is:

1. The fuzzing library will start providing random strings starting from smaller strings and increasing their size.
2. Once the library provides a string of lenght 4 it will notice a change in the test-coverage (`if (len(s) == 4)` is now `true`) and will continue to generate inputs with this length.
2. Once the library provides a string of length 4 it will notice a change in the test-coverage (`if (len(s) == 4)` is now `true`) and will continue to generate inputs with this length.
3. Once the library provides a string of length 4 that starts with `f` it will notice another change in the test-coverage (`if s[0] == "f"` is now `true`) and will continue to generate inputs that start with `f`.
4. The same thing will repeat for `u` and the double `z`.
5. Once it provides `fuzz` as input the function will panic and the test will fail.
@ -56,7 +56,7 @@ Fuzzing is a useful technique, but there are situations in which it might not be

For example, if the input that fails our code is too specific and there are no clues to help, the fuzzing library might not be able to guess it.

If we change the example code from the previoud paragraph to something like this:
If we change the example code from the previous paragraph to something like this:

```go
func DontPanic(s input) {
242
2023/day17.md
@ -0,0 +1,242 @@
# Fuzzing Advanced

Yesterday we learned what fuzzing is and how to write fuzz tests (unit tests with fuzzy inputs).
However, fuzz testing goes beyond just unit testing.
We can use this methodology to test our web application by fuzzing the requests sent to our server.

Today, we will take a practical approach to fuzz testing a web server.

Different tools can help us do this.

Such tools are [Burp Intruder](https://portswigger.net/burp/documentation/desktop/tools/intruder) and [SmartBear](https://smartbear.com/).
However, these are proprietary tools that require a paid license.

That is why for our demonstration today we are going to use a simple open-source CLI written in Go that was inspired by Burp Intruder and provides similar functionality.
It is called [httpfuzz](https://github.com/JonCooperWorks/httpfuzz).

## Getting started

This tool is quite simple.
We provide it with a template for our requests (in which we have defined placeholders for the fuzzy data) and a wordlist (the fuzzy data), and `httpfuzz` will render the requests and send them to our server.

First, we need to define a template for our requests.
Create a file named `request.txt` with the following content:

```text
POST / HTTP/1.1
Content-Type: application/json
User-Agent: PostmanRuntime/7.26.3
Accept: */*
Cache-Control: no-cache
Host: localhost:8000
Accept-Encoding: gzip, deflate
Connection: close
Content-Length: 35

{
  "name": "`S9`",
}
```

This is a valid HTTP `POST` request to the `/` route with a JSON body.
The "`" symbol in the body defines a placeholder that will be substituted with the data we provide.

`httpfuzz` can also fuzz the headers, path, and URL params.

Next, we need to provide a wordlist of inputs that will be placed in the request.
Create a file named `data.txt` with the following content:

```text
SOME_NAME
Mozilla/5.0 (Linux; Android 7.0; SM-G930VC Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36
```

In this file, we defined two inputs that will be substituted into the placeholders.
In a real-world scenario, you should put much more data here for proper fuzz testing.

Now that we have our template and our inputs, let's run the tool.
Unfortunately, this tool is not distributed as a binary, so we will have to build it from source.
Clone the repo and run:

```shell
go build -o httpfuzz cmd/httpfuzz.go
```

(this requires a recent version of Go installed on your machine).

Now that we have the binary let's run it:

```shell
./httpfuzz \
  --wordlist data.txt \
  --seed-request request.txt \
  --target-header User-Agent \
  --target-param fuzz \
  --delay-ms 50 \
  --skip-cert-verify \
  --proxy-url http://localhost:8080
```

- `httpfuzz` is the binary we are invoking.
- `--wordlist data.txt` is the file with inputs we provided.
- `--seed-request request.txt` is the request template.
- `--target-header User-Agent` tells `httpfuzz` to use the provided inputs in the place of the `User-Agent` header.
- `--target-param fuzz` tells `httpfuzz` to use the provided inputs as values for the `fuzz` URL parameter.
- `--delay-ms 50` tells `httpfuzz` to wait 50 ms between the requests.
- `--skip-cert-verify` tells `httpfuzz` to not do any TLS verification.
- `--proxy-url http://localhost:8080` tells `httpfuzz` where our HTTP server is.

We have 2 inputs and 3 places to place them (in the body, the `User-Agent` header, and the `fuzz` parameter).
This means that `httpfuzz` will generate 6 requests and send them to our server.

Let's run it and see what happens.
I wrote a simple web server that logs all requests so that we can see what is coming into our server:

```shell
$ ./httpfuzz \
  --wordlist data.txt \
  --seed-request request.txt \
  --target-header User-Agent \
  --target-param fuzz \
  --delay-ms 50 \
  --skip-cert-verify \
  --proxy-url http://localhost:8080

httpfuzz: httpfuzz.go:164: Sending 6 requests
```

and the server logs:

```text
-----
Got request to http://localhost:8000/
User-Agent header = [SOME_NAME]
Name = S9
-----
Got request to http://localhost:8000/?fuzz=SOME_NAME
User-Agent header = [PostmanRuntime/7.26.3]
Name = S9
-----
Got request to http://localhost:8000/
User-Agent header = [PostmanRuntime/7.26.3]
Name = SOME_NAME
-----
Got request to http://localhost:8000/
User-Agent header = [Mozilla/5.0 (Linux; Android 7.0; SM-G930VC Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36]
Name = S9
-----
Got request to http://localhost:8000/?fuzz=Mozilla%2F5.0+%28Linux%3B+Android+7.0%3B+SM-G930VC+Build%2FNRD90M%3B+wv%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Version%2F4.083+Mobile+Safari%2F537.36
User-Agent header = [PostmanRuntime/7.26.3]
Name = S9
-----
Got request to http://localhost:8000/
User-Agent header = [PostmanRuntime/7.26.3]
Name = Mozilla/5.0 (Linux; Android 7.0; SM-G930VC Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36
```
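The code of that logging server is not included here; as a reference, a tiny Python stand-in that produces this style of log could look like the following (hypothetical, not the author's actual server; the port is assumed from the template's `Host` header):

```python
# Hypothetical stand-in for the request-logging server used above.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        print("-----")
        print(f"Got request to http://{self.headers['Host']}{self.path}")
        print(f"User-Agent header = [{self.headers['User-Agent']}]")
        try:
            print("Name =", json.loads(body).get("name"))
        except json.JSONDecodeError:
            print("Name = <body was not valid JSON>")
        self.send_response(200)
        self.end_headers()

HTTPServer(("localhost", 8000), LoggingHandler).serve_forever()
```
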
We see that we have received 6 HTTP requests.

Two of them have a value from our values file for the `User-Agent` header, and 4 have the default header from the template.
Two of them have a value from our values file for the `fuzz` query parameter, and 4 have the default value from the template.
Two of them have a value from our values file for the `Name` body property, and 4 have the default value from the template.

A slight improvement of the tool could be to make different permutations of these requests (for example, a request that has both `?fuzz=` and `User-Agent` as values from the values file).

Notice how `httpfuzz` does not give us any information about the outcome of the requests.
To figure that out, we need to either set up some sort of monitoring for our server or write a `httpfuzz` plugin that will process the results in a way that is meaningful for us.
Let's do that.

To write a custom plugin, we need to implement the [`Listener`](https://github.com/JonCooperWorks/httpfuzz/blob/master/plugin.go#L13) interface:

```go
// Listener must be implemented by a plugin to users to hook the request - response transaction.
// The Listen method will be run in its own goroutine, so plugins cannot block the rest of the program, however panics can take down the entire process.
type Listener interface {
	Listen(results <-chan *Result)
}
```

Here is a simple plugin that logs the status code of each response:

```go
package main

import (
	"log"

	"github.com/joncooperworks/httpfuzz"
)

type logResponseCodePlugin struct {
	logger *log.Logger
}

func (b *logResponseCodePlugin) Listen(results <-chan *httpfuzz.Result) {
	for result := range results {
		b.logger.Printf("Got %d response from the server\n", result.Response.StatusCode)
	}
}

// New returns a logResponseCodePlugin plugin that simply logs the response code of the response.
func New(logger *log.Logger) (httpfuzz.Listener, error) {
	return &logResponseCodePlugin{logger: logger}, nil
}
```

Now we need to build our plugin:

```shell
go build -buildmode=plugin -o log exampleplugins/log/log.go
```

and then we can plug it into `httpfuzz` via the `--post-request` flag:

```shell
$ ./httpfuzz \
  --wordlist data.txt \
  --seed-request request.txt \
  --target-header User-Agent \
  --target-param fuzz \
  --delay-ms 50 \
  --skip-cert-verify \
  --proxy-url http://localhost:8080 \
  --post-request log

httpfuzz: httpfuzz.go:164: Sending 6 requests
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
```

Voila!
Now we can at least see what the response code from the server was.

Of course, we can write much more sophisticated plugins that output much more data, but for the purpose of this exercise, that is enough.

## Summary

Fuzzing is a really powerful testing technique that goes way beyond unit testing.

Fuzzing can be extremely useful for testing HTTP servers by substituting parts of valid HTTP requests with data that could potentially expose vulnerabilities or deficiencies in our server.

There are many tools, both free and paid, that can help us fuzz test our web applications.

## Resources

[OWASP: Fuzzing](https://owasp.org/www-community/Fuzzing)

[OWASP: Fuzz Vectors](https://owasp.org/www-project-web-security-testing-guide/v41/6-Appendix/C-Fuzz_Vectors)

[Hacking HTTP with HTTPfuzz](https://medium.com/swlh/hacking-http-with-httpfuzz-67cfd061b616)

[Fuzzing the Stack for Fun and Profit at DefCamp 2019](https://www.youtube.com/watch?v=qCMfrbpuCBk&list=PLnwq8gv9MEKiUOgrM7wble1YRsrqRzHKq&index=33)

[HTTP Fuzzing Scan with SmartBear](https://support.smartbear.com/readyapi/docs/security/scans/types/fuzzing-http.html)

[Fuzzing Session: Finding Bugs and Vulnerabilities Automatically](https://youtu.be/DSJePjhBN5E)

[Fuzzing the CNCF Landscape](https://youtu.be/zIyIZxAZLzo)

@ -1,17 +1,17 @@
# IAST (Interactive Application Security Testing)

IAST is a type of security testing tool that is designed to identify vulnerabilities in web applications and help developers fix them. It works by injecting a small agent into the application's runtime environment and monitoring its behavior in real-time. This allows IAST tools to identify vulnerabilities as they occur, rather than relying on static analysis or simulated attacks.
IAST is a type of security testing tool that is designed to identify vulnerabilities in web applications and help developers fix them. It works by injecting a small agent into the application's runtime environment and monitoring its behaviour in real-time. This allows IAST tools to identify vulnerabilities as they occur, rather than relying on static analysis or simulated attacks.

IAST works through software instrumentation, or the use of instruments to monitor an application as it runs and gather information about what it does and how it performs. IAST solutions instrument applications by deploying agents and sensors in running applications and continuously analyzing all application interactions initiated by manual tests, automated tests, or a combination of both to identify vulnerabilities in real time.
IAST agent is running inside the application and monitor for known attack patterns. As it is part of the application, it can monitor traffic between different components (either as classic MVC deployments and in microservices deployment).
The IAST agent runs inside the application and monitors for known attack patterns. As it is part of the application, it can monitor traffic between different components (both in classic MVC deployments and in microservices deployments).

## For IAST to be used, there are a few prerequisites.

- The application should be instrumented (inject the agent).
- Traffic should be generated - via manual or automated tests. Another possible approach is via DAST tools (OWASP ZAP can be used, for example).

## Advantages

One of the main advantages of IAST tools is that they can provide detailed and accurate information about vulnerabilities and how to fix them. This can save developers a lot of time and effort, as they don't have to manually search for vulnerabilities or try to reproduce them in a testing environment. IAST tools can also identify vulnerabilities that might be missed by other testing methods, such as those that require user interaction or are triggered under certain conditions. Testing time depends on the tests used (as IAST is not a standalone system) and with faster tests (automated tests) can be include into CI/CD pipelines. It can be used to detect different kind of vulnerabilities and due to the nature of the tools (it looks for “real traffic only) false positives/negatives findings are relatively rear compared to other testing types.
IAST can be used in two flavors - as a typical testing tool and as real-time protection (it is called RAST in this case). Both work at the same principals and can be used together.
One of the main advantages of IAST tools is that they can provide detailed and accurate information about vulnerabilities and how to fix them. This can save developers a lot of time and effort, as they don't have to manually search for vulnerabilities or try to reproduce them in a testing environment. IAST tools can also identify vulnerabilities that might be missed by other testing methods, such as those that require user interaction or are triggered under certain conditions. Testing time depends on the tests used (as IAST is not a standalone system), and with faster (automated) tests it can be included in CI/CD pipelines. It can be used to detect different kinds of vulnerabilities, and due to the nature of the tools (they look at “real” traffic only), false positive/negative findings are relatively rare compared to other testing types.
IAST can be used in two flavours - as a typical testing tool and as real-time protection (called RASP in this case). Both work on the same principles and can be used together.

## There are several disadvantages of the technology as well:

- It is a relatively new technology, so there is not a lot of knowledge and experience, both for security teams and for tool builders (open-source or commercial).
@ -21,7 +21,7 @@ IAST can be used in two flavors - as a typical testing tool and as real-time pro

There are several different IAST tools available, each with its own features and capabilities.

## Some common features of IAST tools include:

- Real-time monitoring: IAST tools monitor the application's behavior in real-time, allowing them to identify vulnerabilities as they occur.
- Real-time monitoring: IAST tools monitor the application's behaviour in real-time, allowing them to identify vulnerabilities as they occur.
- Vulnerability identification: IAST tools can identify a wide range of vulnerabilities, including injection attacks, cross-site scripting (XSS), and cross-site request forgery (CSRF).
- Remediation guidance: IAST tools often provide detailed information about how to fix identified vulnerabilities, including code snippets and recommendations for secure coding practices.
- Integration with other tools: IAST tools can often be integrated with other security testing tools, such as static code analysis or penetration testing tools, to provide a more comprehensive view of an application's security.

157
2023/day20.md
@ -1,10 +1,153 @@
# IAST and DAST in conjunction - lab time

After learning what IAST and DAST are, it's time to get our hands dirty and perform an exercise in which we use these processes to find vulnerabilities in real applications.

**NOTE:** There are no open-source IAST implementations, so we will have to use a commercial solution.
Don't worry, there is a free tier, so you will be able to follow the lab without paying anything.

This lab is based on this [repo](https://github.com/rstatsinger/contrast-java-webgoat-docker).

It contains a vulnerable Java application to be tested and exploited, Docker and Docker Compose for easy setup and [Contrast Community Edition](https://www.contrastsecurity.com/contrast-community-edition?utm_campaign=ContrastCommunityEdition&utm_source=GitHub&utm_medium=WebGoatLab) as the IAST solution.

## Prerequisites

- [Docker](https://www.docker.com/products/docker-desktop/)
- [Docker Compose](https://docs.docker.com/compose/)
- Contrast CE account. Sign up for free [here](https://www.contrastsecurity.com/contrast-community-edition?utm_campaign=ContrastCommunityEdition&utm_source=GitHub&utm_medium=WebGoatLab).

**NOTE:** The authors of this article and of the 90 Days of DevOps program are in no way associated or affiliated with Contrast Security.
We are using this commercial solution because there is no open-source one, and because this one has a free tier that does not require payment or providing a credit card.

1. As there are no open-source IAST implementation will use a commercial one with some free licenses. For this purpose, you will need 2 componenets:
IAST solution from here - https://github.com/rstatsinger/contrast-java-webgoat-docker . You need docker and docker-compose installed in mac or linux enviroment (this lab is tested on Mint). Please follow the README to create account in Contrast.
2. For running the IAST there are few ways to do it- manually via a DAST scanner, ...
- Easiest way to do it is to use ZAP proxy. For this purpose install ZAP from here - https://www.zaproxy.org/download/
- Install zap-cli - https://github.com/Grunny/zap-cli
- Run ZAP proxy (from installed location, in Mint it is by default in /opt/zaproxy)
- Set env variables for ZAP_API_KEY and ZAP_PORT
- Run several commands with zap cli. For example: zap-cli quick-scan -s all --ajax-spider -r http://127.0.0.1:8080/WebGoat/login.mvc . You should see some results in contrast UI.
IAST solution from here - <https://github.com/rstatsinger/contrast-java-webgoat-docker>. You need docker and docker-compose installed in mac or linux enviroment (this lab is tested on Mint). Please follow the README to create account in Contrast.

## Getting started

To start, clone the [repository](https://github.com/rstatsinger/contrast-java-webgoat-docker).

Get your credentials from Contrast Security.
Click on your name in the top-right corner -> `Organization Settings` -> `Agent`.
Get the values for `Agent Username`, `Agent Service Key` and `API Key`.

Replace these values in the `.env.template` file in the newly cloned repository.

**NOTE:** These values are secret.
Do not commit them to Git.
It's best to put the `.env.template` under `.gitignore` so that you don't commit these values by mistake.

## Running the vulnerable application

To run the vulnerable application, run:

```sh
./run.sh
```

or

```sh
docker compose up
```

Once ready, the application UI will be accessible on <http://localhost:8080/WebGoat>.

## Do some damage

Now that we have a vulnerable application let's try to exploit it.

1. Install ZAP Proxy from [here](https://www.zaproxy.org/download/)

An easy way to do that is via a DAST scanner.
One such scanner is [ZAP Proxy](https://www.zaproxy.org/).
It is a free and open-source web app scanner.

2. Install `zap-cli` from [here](https://github.com/Grunny/zap-cli)

Next, install `zap-cli`.
`zap-cli` is an open-source CLI for ZAP Proxy.

3. Run ZAP proxy

Run ZAP Proxy from its installed location.
In Linux Mint it is by default in `/opt/zaproxy`.
In macOS it is in `Applications`.

4. Set env variables for `ZAP_API_KEY` and `ZAP_PORT`

Get these values from ZAP Proxy.
Go to `Options...` -> `API` to get the API Key.

Go to `Options...` -> `Network` -> `Local Servers/Proxies` to configure and obtain the port.

5. Run several commands with `zap-cli`

For example:

```sh
zap-cli quick-scan -s all --ajax-spider -r http://127.0.0.1:8080/WebGoat/login.mvc
```

Alternatively, you can follow the instructions in the [repo](https://github.com/rstatsinger/contrast-java-webgoat-docker/blob/master/Lab-WebGoat.pdf)
to cause some damage to the vulnerable application.

6. Observe findings in Contrast

Either way, if you go to the **Vulnerabilities** tab for your application in Contrast you should be able to see that Contrast detected the vulnerabilities
and is warning you to take some action.

## Bonus: Image Scanning

We saw how an IAST solution helped us detect attacks by observing the behaviour of the application.
Let's see whether we could have done something to prevent these attacks in the first place.

The vulnerable application we used for this demo was packaged as a container.
Let's scan this container via the `grype` scanner we learned about in Days [14](day14.md) and [15](day15.md) and see the results.

```sh
$ grype contrast-java-webgoat-docker-webgoat
 ✔ Vulnerability DB [no update available]
 ✔ Loaded image
 ✔ Parsed image
 ✔ Cataloged packages [316 packages]
 ✔ Scanned image [374 vulnerabilities]
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
apt 1.8.2.3 deb CVE-2011-3374 Negligible
axis 1.4 java-archive GHSA-55w9-c3g2-4rrh Medium
axis 1.4 java-archive GHSA-96jq-75wh-2658 Medium
bash 5.0-4 deb CVE-2019-18276 Negligible
bash 5.0-4 (won't fix) deb CVE-2022-3715 High
bsdutils 1:2.33.1-0.1 deb CVE-2022-0563 Negligible
bsdutils 1:2.33.1-0.1 (won't fix) deb CVE-2021-37600 Low
commons-beanutils 1.8.3 java-archive CVE-2014-0114 High
commons-beanutils 1.8.3 java-archive CVE-2019-10086 High
commons-beanutils 1.8.3 1.9.2 java-archive GHSA-p66x-2cv9-qq3v High
commons-beanutils 1.8.3 1.9.4 java-archive GHSA-6phf-73q6-gh87 High
commons-collections 3.2.1 java-archive CVE-2015-6420 High
commons-collections 3.2.1 3.2.2 java-archive GHSA-6hgm-866r-3cjv High
commons-collections 3.2.1 3.2.2 java-archive GHSA-fjq5-5j5f-mvxh Critical
commons-fileupload 1.3.1 java-archive CVE-2016-1000031 Critical
commons-fileupload 1.3.1 java-archive CVE-2016-3092 High
commons-fileupload 1.3.1 1.3.2 java-archive GHSA-fvm3-cfvj-gxqq High
commons-fileupload 1.3.1 1.3.3 java-archive GHSA-7x9j-7223-rg5m Critical
commons-io 2.4 java-archive CVE-2021-29425 Medium
commons-io 2.4 2.7 java-archive GHSA-gwrp-pvrq-jmwv Medium
coreutils 8.30-3 deb CVE-2017-18018 Negligible
coreutils 8.30-3 (won't fix) deb CVE-2016-2781 Low
curl 7.64.0-4+deb10u3 deb CVE-2021-22922 Negligible
curl 7.64.0-4+deb10u3 deb CVE-2021-22923 Negligible
<truncated>
```

As we can see this image is full of vulnerabilities.

If we dive into each one we will see we have vulnerabilities like RCE (Remote Code Execution), SQL Injection, XML External Entity Vulnerability, etc.

## Week Summary

IAST and DAST are important methods that can help us find vulnerabilities in our application by monitoring its behaviour.
This is done once the application is already deployed.

Container Image Scanning can help us find vulnerabilities in our application based on the libraries that are present inside the container.

Image Scanning and IAST/DAST are not mutually exclusive.
They both have their place in a Secure SDLC and can help us find different problems before the attackers do.

230
2023/day21.md
@ -0,0 +1,230 @@
|
||||
# Continuous Image Repository Scan
|
||||
|
||||
In [Day 14](day14.md), we learned what container image scanning is and why it's important.
|
||||
We also learned about tools like Grype and Trivy that help us scan our container images.
|
||||
|
||||
However, in modern SDLCs, a DevSecOps engineer would rarely scan container images by hand, e.g., they would not be running Grype and Trivy locally and looking at every single vulnerability.
|
||||
Instead, they would have the image scanning configured as part of the CI/CD pipeline.
|
||||
This way, they would be sure that all the images that are being built by the pipelines are also scanned by the image scanner.
|
||||
These results could then be sent by another system, where the DevSecOps engineers could look at them and take some action depending on the result.
|
||||
|
||||
A sample CI/CD pipeline could look like this:
|
||||
|
||||
0. _Developer pushes code_
|
||||
1. Lint the code
|
||||
2. Build the code
|
||||
3. Test the code
|
||||
4. Build the artifacts (container images, helm charts, etc.)
|
||||
5. Scan the artifacts
|
||||
6. (Optional) Send the scan results somewhere
|
||||
7. (Optional) Verify the scan results and fail the pipeline if the verification fails
|
||||
8. Push the artifacts to a repository
|
||||
|
||||
A failure in the scan or verify steps (steps 6 and 7) would mean that our container will not be pushed to our repository, and we cannot use the code we submitted.
|
||||
|
||||
Today, we are going to take a look at how we can set up such a pipeline and what would be a sensible configuration for one.
|
||||
|
||||
## Setting up a CI/CD pipeline with Grype
|
||||
|
||||
Let's take a look at the [Grype](https://github.com/anchore/grype) scanner.
|
||||
Grype is an open-source scanner maintained by the company [Anchore](https://anchore.com/).
|
||||
|
||||
### Scanning an image with Grype
|
||||
|
||||
Scanning a container image with Grype is as simple as running:
|
||||
|
||||
```shell
|
||||
grype <IMAGE>
|
||||
```
|
||||
|
||||
For example, if we want to scan the `ubuntu:20.04` image, we can run:
|
||||
|
||||
```shell
|
||||
$ grype ubuntu:20.04
|
||||
|
||||
✔ Vulnerability DB [no update available]
|
||||
✔ Pulled image
|
||||
✔ Loaded image
|
||||
✔ Parsed image
|
||||
✔ Cataloged packages [92 packages]
|
||||
✔ Scanned image [19 vulnerabilities]
|
||||
|
||||
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
|
||||
coreutils 8.30-3ubuntu2 deb CVE-2016-2781 Low
|
||||
gpgv 2.2.19-3ubuntu2.2 deb CVE-2022-3219 Low
|
||||
libc-bin 2.31-0ubuntu9.9 deb CVE-2016-20013 Negligible
|
||||
libc6 2.31-0ubuntu9.9 deb CVE-2016-20013 Negligible
|
||||
libncurses6 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
|
||||
libncurses6 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
|
||||
libncursesw6 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
|
||||
libncursesw6 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
|
||||
libpcre3 2:8.39-12ubuntu0.1 deb CVE-2017-11164 Negligible
|
||||
libsystemd0 245.4-4ubuntu3.19 deb CVE-2022-3821 Medium
|
||||
libtinfo6 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
|
||||
libtinfo6 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
|
||||
libudev1 245.4-4ubuntu3.19 deb CVE-2022-3821 Medium
|
||||
login 1:4.8.1-1ubuntu5.20.04.4 deb CVE-2013-4235 Low
|
||||
ncurses-base 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
|
||||
ncurses-base 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
|
||||
ncurses-bin 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
|
||||
ncurses-bin 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
|
||||
passwd 1:4.8.1-1ubuntu5.20.04.4 deb CVE-2013-4235 Low
|
||||
```
|
||||
|
||||
Of course, you already know that because we did it on [Day 14](day14.md).
|
||||
|
||||
However, this command will only output the vulnerabilities and exit with a success code.
|
||||
So if this were in a CI/CD pipeline, the pipeline would be successful even if we have many vulnerabilities.
|
||||
|
||||
The person running the pipeline would have to open it, see the logs and manually determine whether the results are OK.
|
||||
This is tedious and error prone.
|
||||
|
||||
Let's see how we can enforce some rules for the results that come out of the scan.
|
||||
|
||||
### Enforcing rules for the scanned images
|
||||
|
||||
As we already established, just scanning the image does not do much except for giving us visibility into the number of vulnerabilities we have inside the image.
|
||||
But what if we want to enforce a set of rules for our container images?
|
||||
|
||||
For example, a good rule would be "an image should not have critical vulnerabilities" or "an image should not have vulnerabilities with available fixes."
|
||||
|
||||
Fortunately for us, this is also something that Grype supports out of the box.
|
||||
We can use the `--fail-on <SEVERITY>` flag to tell Grype to exit with a non-zero exit code if, during the scan, it found vulnerabilities with a severity higher or equal to the one we specified.
|
||||
This will fail our pipeline, and the engineer would have to look at the results and fix something in order to make it pass.
|
||||
|
||||
Let's tried it out.
|
||||
We are going to use the `springio/petclinic:latest` image, which we already found has many vulnerabilities.
|
||||
You can go back to [Day 14](day14.md) or scan it yourself to see how much exactly.
|
||||
|
||||
We want to fail the pipeline if the image has `CRITICAL` vulnerabilities.
|
||||
We are going to run the can like this:
|
||||
|
||||
```shell
|
||||
$ grype springio/petclinic:latest --fail-on critical
|
||||
✔ Vulnerability DB [no update available]
|
||||
✔ Loaded image
|
||||
✔ Parsed image
|
||||
✔ Cataloged packages [212 packages]
|
||||
✔ Scanned image [168 vulnerabilities]
|
||||
|
||||
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
|
||||
spring-core 5.3.6 java-archive CVE-2016-1000027 Critical
|
||||
spring-core 5.3.6 java-archive CVE-2022-22965 Critical
|
||||
...
|
||||
1 error occurred:
|
||||
* discovered vulnerabilities at or above the severity threshold
|
||||
|
||||
$ echo $?
|
||||
1
|
||||
```
|
||||
|
||||
We see two things here:
|
||||
|
||||
- apart from the results, Grype also outputted an error that is telling us that this scan violated the rule we had defined (no CRITICAL vulnerabilities)
|
||||
- Grype exited with exit code 1, which indicates failure.
|
||||
If this were a CI pipeline, it would have failed.
|
||||
|
||||
When this happens, we will be blocked from merging our code and pushing our container to the registry.
|
||||
This means that we need to take some action to fix the failure so that we can finish our task and push our change.
|
||||
|
||||
Let's see what our options are.
|
||||
|
||||
### Fixing the pipeline
|
||||
|
||||
Once we encounter a vulnerability that is preventing us from publishing our container, we have a few ways we can go depending on the vulnerability.
|
||||
|
||||
#### 1. The vulnerability has a fix
|
||||
|
||||
The best-case scenario is when this vulnerability is already fixed in a newer version of the library we depend on.
|
||||
|
||||
One such vulnerability is this one:
|
||||
|
||||
```text
|
||||
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
|
||||
snakeyaml 1.27 1.31 java-archive GHSA-3mc7-4q67-w48m High
|
||||
```
|
||||
|
||||
This is a `High` severity vulnerability.
|
||||
It's coming from the Java package `snakeyaml`, version `1.27`.
|
||||
Grype is telling us that this vulnerability is fixed in version `1.31` of the same library.
|
||||
|
||||
In this case, we can just upgrade the version of this library in our `pom.xml` or `build.gradle` file,
|
||||
test our code to make sure nothing breaks with the new version,
|
||||
and submit the code again.
|
||||
|
||||
This will build a new version of our container, re-scan it, and hopefully, this time, the vulnerability will not come up, and our scan will be successful.
|
||||
|
||||
### 2. The vulnerability does not have a fix, but it's not dangerous
|
||||
|
||||
Sometimes a vulnerability we encounter will not have a fix available.
|
||||
These are so-called zero-day vulnerabilities that are disclosed before a fix is available.
|
||||
|
||||
We can see two of those in the initial scan results:
|
||||
|
||||
```text
|
||||
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
|
||||
spring-core 5.3.6 java-archive CVE-2016-1000027 Critical
|
||||
spring-core 5.3.6 java-archive CVE-2022-22965 Critical
|
||||
```
|
||||
|
||||
When we encounter such a vulnerability, we need to evaluate how severe it is and calculate the risk of releasing our software with that vulnerability in it.

We may determine that the vulnerability does not constitute any danger to our software and its consumers.
One such case might be when a vulnerability requires physical access to the servers to be exploited.
If we are sure that our physical servers are secure enough and an attacker cannot get access to them, we can safely ignore this vulnerability.

In this case, we can tell Grype to ignore this vulnerability and not fail the scan because of it.

We can do this via the `grype.yaml` configuration file, where we can list vulnerabilities we want to ignore:

```yaml
ignore:
  # This is the full set of supported rule fields:
  - vulnerability: CVE-2016-1000027
    fix-state: unknown
    package:
      name: spring-core
      version: 5.3.6
      type: java-archive
  # We can list as many of these as we want
  - vulnerability: CVE-2022-22965
  # Or list whole packages which we want to ignore
  - package:
      type: gem
```

Putting this in our configuration file and re-running the scan will make our pipeline green.

However, it is crucial that we keep track of this file and not ignore vulnerabilities that have a fix.
For example, when a fix for such a vulnerability is released, it's best to upgrade our dependency and remove the vulnerability from our application.

That way, we will ensure that our application is as secure as possible and that no vulnerability turns out to be more severe than we initially thought.

#### 3. The vulnerability does not have a fix, and IT IS dangerous

The worst-case scenario is encountering a vulnerability that does not have a fix, is genuinely dangerous, and can realistically be exploited.

In that case, there is no single right move.
The best thing we can do is sit down with our security team and come up with an action plan.

We might decide it's best to freeze deployments and do nothing until the vulnerability is fixed.
We might decide to manually patch some things to remove at least part of the danger.
It really depends on the situation.

Sometimes, a zero-day vulnerability is already in your application that is deployed.
In that case, freezing deploys won't help, because your app is already vulnerable.

That was the case with the Log4Shell vulnerability, which was discovered in late 2021 but had been present in Log4j since 2013.
Luckily, there was a fix available within hours, but next time we might not be this lucky.

## Summary

As we already learned in [Day 14](day14.md), scanning your container images for vulnerabilities is important, as it can give you valuable insights about the security posture of your images.

Today we learned that it's even better to have scanning as part of your CI/CD pipeline and to enforce some basic rules about what vulnerabilities you allow inside your images.

Finally, we discussed the steps we can take when we find a vulnerability.

Tomorrow we are going to take a look at container registries that enable this scanning out of the box, and also at scanning other types of artifacts.
See you on [Day 22](day22.md).

# Continuous Image Repository Scan - Container Registries

Yesterday we learned how to integrate container image vulnerability scanning into our CI/CD pipelines.

Today, we are going to take a look at how to enforce that our images are scanned on another level - the container registry.

There are container registries that will automatically scan your container images once you push them.
This ensures that we have visibility into the number of vulnerabilities in every container image produced by our team.

Let's take a look at a few different registries that provide this capability and how we can use them.

## Docker Hub

[Docker Hub](https://hub.docker.com/) is the first container registry.
It was built by the team that created Docker and is still very popular today.

Docker Hub has an automatic vulnerability scanner, powered by [Snyk](https://snyk.io/).

This means that, if enabled, when you push an image to Docker Hub, it will be automatically scanned and the results will be visible to you in the UI.

You can learn more about how to enable and use this feature in the Docker Hub [docs](https://docs.docker.com/docker-hub/vulnerability-scanning/).

**NOTE:** This feature is not free.
In order to use it, you need to have a subscription.

## Harbor

[Harbor](https://goharbor.io/) is an open-source container registry.
Originally developed at VMware, it is now part of the CNCF.

It supports image scanning via [Trivy](https://github.com/aquasecurity/trivy) and/or [Clair](https://github.com/quay/clair).

This is configured during installation.
(Even if you don't enable image scanning during installation, it can always be configured afterwards.)
For more info, check out the [docs](https://goharbor.io/docs/2.0.0/administration/vulnerability-scanning/).

## AWS ECR

[AWS ECR](https://aws.amazon.com/ecr/) also supports [image scanning via Clair](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-basic.html).

## Azure Container Registry

[Azure Container Registry](https://azure.microsoft.com/en-us/products/container-registry) supports [image scanning via Qualys](https://azure.microsoft.com/en-us/updates/vulnerability-scanning-for-images-in-azure-container-registry-is-now-generally-available/).

## GCP

[GCP Container Registry](https://cloud.google.com/container-registry) also supports [automatic image scanning](https://cloud.google.com/container-analysis/docs/automated-scanning-howto).

## Policy Enforcement

Just scanning the images and having the results visible in your registry is a nice thing to have,
but it would be even better if we had a way to enforce some standards for these images.

In [Day 14](day14.md) we saw how to make `grype` fail a scan if an image has vulnerabilities above a certain severity.

Something like this can also be enforced at the container registry level.

For example, [Harbor](https://goharbor.io/) has the **Prevent vulnerable images from running** option, which, when enabled, does not allow you to pull an image that has vulnerabilities above a certain severity.
If you cannot pull the image, you cannot run it, so this is a good rule to have if you don't want to be running vulnerable images.
Of course, a rule like that can effectively prevent you from deploying something to your environment, so you need to use it carefully.

You can read more about this option and how to enable it in Harbor [here](https://goharbor.io/docs/2.3.0/working-with-projects/project-configuration/).

For more granular control and for unblocking deployments, you can configure a [per-project CVE allowlist](https://goharbor.io/docs/2.3.0/working-with-projects/project-configuration/configure-project-allowlist/).
This allows certain images to run even though they have vulnerabilities.
However, these vulnerabilities have to be manually curated and allow-listed by the repo admin.

## Summary

Scanning your container images and having visibility into the number of vulnerabilities inside them is critical for a secure SDLC.

One place to do that is your CI pipeline (as seen in [Day 21](day21.md)).

Another place is your container registry (as seen today).

Both are good options, and both have their pros and cons.
It is up to the DevSecOps architect to decide which approach works better for them and their threat model.

# Artifacts Scan

In the previous two days, we learned why and how to scan container images.

However, our infrastructure usually consists of more than just container images.
Yes, our services will run as containers, but around them we can also have other artifacts like:

- Kubernetes manifests
- Helm templates
- Terraform code

For maximum security, you should scan all the artifacts that you use for your environment, not only your container images.

The reason is that even if you have the most secure Docker images with no CVEs,
but run them on insecure infrastructure with a bad Kubernetes configuration,
then your environment will not be secure.

**Each system is as secure as its weakest link.**

Today we are going to take a look at different tools for scanning artifacts other than container images.

## Kubernetes manifests

Scanning Kubernetes manifests can expose misconfigurations and security bad practices like:

- running containers as root
- running containers with no resource limits
- giving too many and too powerful capabilities to the containers
- hardcoding secrets in the templates, etc.

All of these are part of the security posture of our Kubernetes workloads, and having a bad security posture is just as bad as having a bad posture in real life.

One popular open-source tool for scanning Kubernetes manifests is [KubeSec](https://kubesec.io/).

It outputs a list of misconfigurations.

For example, this Kubernetes manifest taken from their docs has a lot of misconfigurations, like missing memory limits, running as root, etc.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
    - name: kubesec-demo
      image: gcr.io/google-samples/node-hello:1.0
      securityContext:
        runAsNonRoot: false
```

Let's scan it and look at the results.

```shell
$ kubesec scan kubesec-test.yaml
[
  {
    "object": "Pod/kubesec-demo.default",
    "valid": true,
    "message": "Passed with a score of 0 points",
    "score": 0,
    "scoring": {
      "advise": [
        {
          "selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
          "reason": "Seccomp profiles set minimum privilege and secure against unknown threats"
        },
        {
          "selector": ".spec .serviceAccountName",
          "reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege"
        },
        {
          "selector": "containers[] .securityContext .runAsNonRoot == true",
          "reason": "Force the running image to run as a non-root user to ensure least privilege"
        },
        {
          "selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
          "reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY"
        },
        {
          "selector": "containers[] .resources .requests .memory",
          "reason": "Enforcing memory requests aids a fair balancing of resources across the cluster"
        },
        {
          "selector": "containers[] .securityContext .runAsUser -gt 10000",
          "reason": "Run as a high-UID user to avoid conflicts with the host's user table"
        },
        {
          "selector": "containers[] .resources .limits .cpu",
          "reason": "Enforcing CPU limits prevents DOS via resource exhaustion"
        },
        {
          "selector": "containers[] .resources .requests .cpu",
          "reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster"
        },
        {
          "selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
          "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost"
        },
        {
          "selector": "containers[] .securityContext .capabilities .drop",
          "reason": "Reducing kernel capabilities available to a container limits its attack surface"
        },
        {
          "selector": "containers[] .resources .limits .memory",
          "reason": "Enforcing memory limits prevents DOS via resource exhaustion"
        },
        {
          "selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
          "reason": "Drop all capabilities and add only those required to reduce syscall attack surface"
        }
      ]
    }
  }
]
```

As we can see, it produced 12 warnings about things in this manifest we would want to change.
Each warning has an explanation telling us WHY we need to fix it.

### Others

Other such tools include [kube-bench](https://github.com/aquasecurity/kube-bench), [kubeaudit](https://github.com/Shopify/kubeaudit) and [kube-score](https://github.com/zegl/kube-score).

They work in the same or a similar manner.
You give them a resource to analyze, and they output a list of things to fix.

They can be used in a CI setup.
Some of them can also be used as a [Kubernetes validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) and can block resources from being created if they violate a policy.

## Helm templates

[Helm](https://helm.sh/) templates are basically templated Kubernetes resources that can be reused and configured with different values.

There are some tools like [Snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-kubernetes-configuration-files/scan-and-fix-security-issues-in-helm-charts) that have *some* support for scanning Helm templates for misconfigurations, the same way we scan Kubernetes resources.

However, the best way to approach this problem is to just scan the final, rendered version of your Helm charts.
That is, use the `helm template` command to substitute the templated values with actual ones, and scan the result with the tools listed above, as shown below.

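For example, a minimal sketch, assuming a local chart in `./my-chart` and a release named `my-release` (both names are placeholders):

```shell
# Render the chart into plain Kubernetes manifests
helm template my-release ./my-chart > rendered.yaml

# Scan the rendered manifests the same way we scanned the Pod above
kubesec scan rendered.yaml
```
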
## Terraform

The most popular tool for scanning for misconfigurations in Terraform code is [tfsec](https://github.com/aquasecurity/tfsec).

It uses static analysis to spot potential issues in your code.

It supports multiple cloud providers and points out issues specific to the one you are using.

For example, it has checks for [using the default VPC in AWS](https://aquasecurity.github.io/tfsec/v1.28.1/checks/aws/ec2/no-default-vpc/),
[hardcoding secrets in the EC2 user data](https://aquasecurity.github.io/tfsec/v1.28.1/checks/aws/ec2/no-secrets-in-launch-template-user-data/),
or [allowing public access to your ECR container images](https://aquasecurity.github.io/tfsec/v1.28.1/checks/aws/ecr/no-public-access/).

It also allows you to define your own policies via [Rego](https://www.openpolicyagent.org/docs/latest/policy-language/).

Finally, it allows you to enable/disable checks and to ignore warnings via inline comments, as sketched below.

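A minimal sketch of how a run could look (the directory layout is an assumption; the check ID is one of the documented ones). Inline, you can also suppress a single finding by putting a `#tfsec:ignore:aws-ec2-no-default-vpc` comment on the offending resource:

```shell
# Scan the Terraform code in the current directory
tfsec .

# Exclude a specific check for the whole run
tfsec . --exclude aws-ec2-no-default-vpc
```
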
## Summary

A secure SDLC includes scanning all the artifacts that end up in our production environment, not just the container images.

Today we learned how to scan non-container artifacts like Kubernetes manifests, Helm charts and Terraform code.
The tools we looked at are free and open-source and can be integrated into any workflow or CI pipeline.

# Signing

The process of signing involves... well, signing an artifact with a key, and later verifying that this artifact has not been tampered with.

An "artifact" in this scenario can be anything:

- [code](https://venafi.com/machine-identity-basics/what-is-code-signing/#item-1)
- [git commits](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
- [container images](https://docs.sigstore.dev/cosign/overview/)

Signing and verifying the signature ensures that the artifact (container) we pull from the registry is the same one that we pushed.
This protects us from supply-chain and man-in-the-middle attacks, where we download something different from what we wanted.

The CI workflow would look like this:

0. Developer pushes code to Git
1. CI builds the code into a container
2. **CI signs the container with our private key**
3. CI pushes the signed container to our registry

And then, when we want to deploy this image:

1. Pull the image
2. **Verify the signature with our public key**
   1. If the signature does not match, fail the deploy - the image is probably compromised
3. If the signature does match, proceed with the deploy

Below is a sketch of what the signing step (step 2 of the build workflow) might look like in CI.

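In GitHub Actions, for instance, it could look roughly like this (a sketch, assuming the [cosign-installer](https://github.com/sigstore/cosign-installer) action and repository secrets named `COSIGN_PRIVATE_KEY` and `COSIGN_PASSWORD`; the image and secret names are placeholders):

```yaml
steps:
  # Install the cosign CLI on the runner
  - uses: sigstore/cosign-installer@v3

  # Sign the image we just built and pushed;
  # cosign can read the private key from an environment variable
  - name: Sign container image
    run: cosign sign --key env://COSIGN_PRIVATE_KEY my-registry/my-image:1.0.0
    env:
      COSIGN_PRIVATE_KEY: ${{ secrets.COSIGN_PRIVATE_KEY }}
      COSIGN_PASSWORD: ${{ secrets.COSIGN_PASSWORD }}
```
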
This workflow is based on public-private key cryptography.
When you sign something with your private key, everyone that has access to your public key can verify that it was signed by you.

And since the public key is... well, public, that means everyone.

## The danger of NOT signing your images

If you are not signing your container images, there is the danger that someone will replace an image in your repository with another image that is malicious.

For example, you can push the `my-repo/my-image:1.0.0` image to your repository, but image tags, even versioned ones (like `1.0.0`), are mutable.
So an attacker that has access to your repo can push another image, tag it the same way, and this way override your image.
Then, when you go and deploy this image, the image that gets deployed is the one the attacker forged.
This will probably be a malicious one, for example, one that contains malware, steals data, or uses your infrastructure for mining cryptocurrencies.

This problem can be solved by signing your images, because when you sign an image, you can later verify that what you pull is what you uploaded in the first place.

So let's take a look at how we can do this via a tool called [cosign](https://docs.sigstore.dev/cosign/overview/).

## Signing container images

First, download the tool, following the instructions for your OS [here](https://docs.sigstore.dev/cosign/installation/).

Generate a key pair if you don't have one:

```console
cosign generate-key-pair
```

This will output two files in the current folder:

- `cosign.key` - your private key.
  DO NOT SHARE WITH ANYONE.
- `cosign.pub` - your public key.
  Share it with whoever needs it.

We can use the private key to sign an image:

```console
$ cosign sign --key cosign.key asankov/signed
Enter password for private key:

Pushing signature to: index.docker.io/asankov/signed
```

This command signed the `asankov/signed` container image and pushed the signature to the container repo.

## Verifying signatures

Now that we have signed the image, let's verify the signature.

For that, we need our public key:

```console
$ cosign verify --key=cosign.pub asankov/signed | jq

Verification for index.docker.io/asankov/signed:latest --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - The signatures were verified against the specified public key
[
  {
    "critical": {
      "identity": {
        "docker-reference": "index.docker.io/asankov/signed"
      },
      "image": {
        "docker-manifest-digest": "sha256:93d62c92b70efc512379cf89317eaf41b8ce6cba84a5e69507a95a7f15708506"
      },
      "type": "cosign container image signature"
    },
    "optional": null
  }
]
```

The output of this command shows us that the image is signed by the key we expected.
Since we are the only ones that have access to our private key, this means that no one except us could have pushed this image and signature to the container repo.
Hence, the contents of this image have not been tampered with since we pushed it.

Let's try to verify an image that we have NOT signed:

```console
$ cosign verify --key=cosign.pub asankov/not-signed
Error: no matching signatures:

main.go:62: error during command execution: no matching signatures:
```

Just as expected, `cosign` could not verify the signature of this image (because there isn't one).

In this example, the image (`asankov/not-signed`) is not signed at all, but we would have gotten the same error if someone had signed the image with a different key than the one we are using to verify it.

### Verifying signatures in Kubernetes

In the previous example, we were verifying the signatures by hand.
However, that is good only for demo purposes or for playing around with the tool.

In a real-world scenario, you would want this verification to be done automatically at deploy time.

Fortunately, there are many `cosign` integrations for doing that.

For example, if we are using Kubernetes, we can deploy a validating webhook that will audit all new deployments and verify that the container images used by them are signed.

For Kubernetes, you can choose from three existing integrations - [Gatekeeper](https://github.com/sigstore/cosign-gatekeeper-provider), [Kyverno](https://kyverno.io/docs/writing-policies/verify-images/) or [Connaisseur](https://github.com/sse-secure-systems/connaisseur#what-is-connaisseur).
You can choose one of the three depending on your preference, or on whether you are already using one of them for something else.

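As an illustration, a Kyverno policy that requires images from our repo to be signed with our key might look roughly like this. This is a sketch based on the Kyverno docs; the policy name, image pattern and key are placeholders, and the exact schema depends on your Kyverno version:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
    - name: require-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        # Only images matching these references are checked
        - imageReferences:
            - "index.docker.io/asankov/*"
          attestors:
            - entries:
                - keys:
                    # The contents of our cosign.pub
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```
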
## Dangers to be aware of

As with everything else, signing images is not a silver bullet and will not solve all your security problems.

There is still the risk that your private keys might leak, in which case anyone can sign anything and it will still pass your signature check.

However, integrating signing into your workflow adds yet another layer of defence and one more hoop for attackers to jump through.

## Summary

Signing artifacts prevents supply-chain and man-in-the-middle attacks by allowing you to verify the integrity of your artifacts.

[Sigstore](https://sigstore.dev/) and [cosign](https://docs.sigstore.dev/cosign/overview/) are useful tools for signing your artifacts, and they come with many integrations to choose from.

# Systems Vulnerability Scanning

## What is systems vulnerability scanning?

Vulnerability scanning is the process of scanning a network or system to identify any existing security vulnerabilities.

It is a proactive measure used to detect any weaknesses that an attacker may exploit to gain unauthorised access to a system or network.

Vulnerability scanning can be either manual or automated.
It can involve scanning for known vulnerabilities, analysing the configuration of a system or network, or using an automated tool to detect any possible vulnerabilities.

## How do you perform a vulnerability scan?

A vulnerability scan is typically performed with specialised software that searches for known weaknesses and security issues in the system.

The scan typically looks for missing patches, known malware, open ports, weak passwords, and other security risks.

Once the scan is complete, the results are analysed to determine which areas of the system need to be addressed to improve its overall security.

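To get a hands-on feel for what such a scan does, here is a minimal sketch using the open-source `nmap` scanner (the target host is a placeholder; only scan systems you are authorised to scan):

```shell
# Detect open ports and service versions on the target,
# and run nmap's built-in vulnerability-detection scripts
nmap -sV --script vuln target.example.com
```
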
## What are the types of vulnerability scans?

There are two main types of vulnerability scan: unauthenticated and authenticated.

Unauthenticated scans are conducted without any credentials and, as such, can only provide limited information about potential vulnerabilities.
This type of scan helps identify low-hanging fruit, such as unpatched systems or open ports.

Authenticated scans, on the other hand, are conducted with administrative credentials.
This allows the scanning tool to provide much more comprehensive information about potential vulnerabilities, including those that may not be easily exploitable.

In the next two days, we are going to take a look at container and network vulnerability scanning, which are more specific subsets of systems vulnerability scanning.

## Why are vulnerability scans important?

Vulnerabilities are widespread across organisations of all sizes.
New ones are discovered constantly or can be introduced due to system changes.

Criminal hackers use automated tools to identify and exploit known vulnerabilities and access unsecured systems, networks or data.

Exploiting vulnerabilities with automated tools is simple: attacks are cheap, easy to run and indiscriminate, so every Internet-facing organisation is at risk.

All it takes is one vulnerability for an attacker to access your network.

This is why applying patches to fix these security vulnerabilities is essential.
Updating your software, firmware and operating systems to the newest versions will help protect your organisation from potential vulnerabilities.

Worse, most intrusions are not discovered until it is too late: the global median dwell time between the start of a cyber intrusion and its identification is 24 days.

## What does a vulnerability scan test?

Automated vulnerability scanning tools scan for open ports and detect common services running on those ports.
They identify any configuration issues or other vulnerabilities on those services and check whether best practice is being followed, such as using TLSv1.2 or higher and strong cipher suites.

A vulnerability scanning report is then generated to highlight the items that have been identified.
By acting on these findings, an organisation can improve its security posture.

## Who conducts vulnerability scans?

IT departments usually undertake vulnerability scanning if they have the expertise and software to do so, or they can call on a third-party security service provider.

Vulnerability scans are also performed by attackers who scour the Internet to find entry points into systems and networks.

Many companies have bug bounty programs that allow ethical hackers to report vulnerabilities and get paid for doing so.
Usually, bug bounty programs have boundaries, i.e. they define what is allowed and what is not.

Participating in bug bounty programs must be done responsibly.
Hacking is a crime, and if you are caught, you cannot just claim that you did it for good or that you were not going to exploit your findings.

## How often should you conduct a vulnerability scan?

Vulnerability scans should be performed regularly so you can detect new vulnerabilities quickly and take appropriate action.

This will help identify your security weaknesses and the extent to which you are open to attack.

## Penetration testing

Penetration testing is the next step after vulnerability scanning.
In penetration testing, professional ethical hackers combine the results of automated scans with their expertise to reveal vulnerabilities that may not be identified by scans alone.

Penetration testers will also consider your environment (a significant factor in determining vulnerabilities' true severity) and upgrade or downgrade the score as appropriate.

A scan can detect something that is a vulnerability but cannot be actively exploited because of the way it is incorporated into our system.
This makes it a low-priority vulnerability, because why fix something that presents no danger to you?

If an issue comes up in penetration testing, that means it is exploitable and probably high priority: the penetration testers managed to exploit it, so the hackers can too.

# Containers Vulnerability Scanning

[Yesterday](day25.md) we learned that vulnerability scanning is the process of scanning a network or system to identify any existing security vulnerabilities.
We also learned that containers vulnerability scanning is a subset of systems vulnerability scanning, i.e. we are only scanning the "containers" part of our system.

In [Day 14](day14.md) we learned what container image vulnerability scanning is and how it makes us more secure.
Then in [Day 15](day15.md) we learned more about that, and on Days [21](day21.md) and [22](day22.md) we learned how to integrate the scanning process into our CI/CD pipelines
so that it is automatic and enforced.

Today, we are going to look at other techniques for scanning and securing containers.
Vulnerability scanning is important, but it is not a silver bullet and not a guarantee that you are secure.

There are a few reasons for that.

First, image scanning only shows you the list of _known_ vulnerabilities.
There might be many vulnerabilities that have not yet been discovered, but are still there and could be exploited.

Second, the security of our deployments depends not only on the image and its number of vulnerabilities, but also on the way we deploy that image.
For example, if we deploy an insecure application on the open internet where everyone has access to it, or leave the default SSH port and password on our VM,
then it does not matter whether our container has vulnerabilities or not, because the attackers will use the other holes in our system to get in.

That is why today we are going to take a look at a few other aspects of containers vulnerability scanning.

## Host Security

Containers run on hosts.

Docker containers run on hosts that have the Docker daemon installed.
The same is true for containerd, Podman, CRI-O, and other container runtimes.

If your host is not secured and someone manages to break into it, they will probably have access to your containers and be able to start, stop, and modify them.

That is why it's important to secure the host, and secure it well.

Securing VMs is a deep topic I will not go into today, but the most basic things you can do are:

- limit the visibility of the machine on the public network
- if possible, use a load balancer to access your containers, and make the host machine not visible on the public internet
- close all unnecessary ports
- use strong passwords for SSH and RDP

At the bottom of the article, I will link two articles from AWS and VMware about VM security.

## Network Security

Network security is another deep topic, which we will look into in more detail [tomorrow](day27.md).

At a minimum, you should not have network exposure you don't need.
E.g. if Container A does not need to make network calls to Container B, it should not be able to make those calls in the first place.

In Docker, you can define [different network drivers](https://docs.docker.com/network/) that can help you with this.
In Kubernetes, there are [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) that limit which container has access to what, as sketched below.

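For example, here is a minimal sketch of a Kubernetes NetworkPolicy that denies all ingress traffic to the Pods in a namespace, so that any access has to be allowed explicitly (the namespace name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace
spec:
  # An empty podSelector selects all Pods in the namespace
  podSelector: {}
  policyTypes:
    - Ingress
```
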
## Security misconfiguration

When working with containers, there are a few security misconfigurations you can make that put you in danger of being hacked.

### Capabilities

One such thing is giving your container excessive capabilities.

[Linux capabilities](https://man7.org/linux/man-pages/man7/capabilities.7.html) determine what syscalls your container can execute.

The best practice is to be aware of the capabilities your containers need and to assign them only those.
That way, you can be sure that a left-over capability that was never needed is not abused by an attacker.

In practice, it is hard to know exactly which capabilities your containers need, because that involves complex monitoring of your container over time.
Even the developers that wrote the code are probably not aware of exactly which capabilities are needed to perform the actions their code is doing.
That is because capabilities are a low-level construct, while developers usually write higher-level code.

However, it is good to know which capabilities you should avoid assigning to your containers, because they are too overpowered and grant too many permissions.

One such capability is `CAP_SYS_ADMIN`, which is way overpowered and can do a lot of things.
Even the Linux docs for this capability warn you that you should not use it if you can avoid it.

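A common pattern is to drop all capabilities and add back only the ones the workload actually needs. For example, with Docker, for a hypothetical web server image that only needs to bind to a port below 1024:

```shell
# Drop every capability, then add back only NET_BIND_SERVICE
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-web-server
```
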
### Running as root

Running containers as root is a really bad practice, and it should be avoided as much as possible.

Of course, there might be situations in which you _must_ run containers as root.
One such example is the core components of Kubernetes, which run as root containers because they need a lot of privileges on the host.

However, if you are running a simple web server or something similar, you should not need to run the container as root.

Running a container as root basically means you are throwing away all the isolation containers give you, as a root container has almost full control over the host.

A lot of container runtime vulnerabilities are only applicable if containers are running as root.

Tools like [falco](https://github.com/falcosecurity/falco) and [kube-bench](https://github.com/aquasecurity/kube-bench) will warn you if you are running containers as root, so that you can take action and change that.

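In Kubernetes, for example, you can have the kubelet refuse to start a container as root via the Pod's security context. A minimal sketch (the Pod and image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-demo
spec:
  containers:
    - name: web
      image: my-web-server:1.0.0
      securityContext:
        # The container will not start if its image tries to run as UID 0
        runAsNonRoot: true
        runAsUser: 10001
```
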
### Resource limits

Not defining resource limits for your containers can lead to a DDoS attack that brings down your whole infrastructure.

When you are being DDoS-ed, the workload starts consuming more memory and CPU.
If that workload is a container with no limits, at some point it will drain all the available resources from the host, and there will be none left for the other containers on that host.
At some point, the whole host might go down, which will lead to more pressure on your other hosts and can have a domino effect on your whole infra.

If you have sensible limits for your container, it will consume them, but the orchestrator will not give it more.
At some point, the container might die due to lack of resources, but nothing else will happen.
Your host and other containers will be safe.

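A sketch of what sensible requests and limits look like on a Kubernetes container (the numbers are placeholders; tune them to your workload):

```yaml
spec:
  containers:
    - name: web
      image: my-web-server:1.0.0
      resources:
        # What the scheduler reserves for the container
        requests:
          cpu: 100m
          memory: 128Mi
        # The hard cap the container cannot exceed
        limits:
          cpu: 500m
          memory: 256Mi
```
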
## Summary

Containers vulnerability scanning is more than just scanning for CVEs.
It includes things like proper configuration, host security, network configuration, etc.

There is no single tool that can help with all of this, but there are open-source solutions that you can combine to achieve the desired results.

Most of these lessons are useful no matter which orchestrator you are using.
You can be using Kubernetes, OpenShift, AWS ECS, Docker Compose, VMs with Docker, etc.
The basics are the same, and you should adapt them to the platform you are using.

Some orchestrators give you more features than others.
For example, Kubernetes has [dynamic admission controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that let you define custom checks for your resources.
As far as I am aware, Docker Compose does not have something like this, but if you know what you want to achieve, it should not be difficult to write your own.

## Resources

[This article](https://sysdig.com/blog/container-security-best-practices/) by Sysdig contains many best practices for containers vulnerability scanning.

Some of them, like container image scanning and Infrastructure-as-Code scanning, we already mentioned in previous days.
It also includes other useful things like [host scanning](https://sysdig.com/blog/vulnerability-assessment/#host), [real-time logging and monitoring](https://sysdig.com/blog/container-security-best-practices/#13) and [security misconfigurations](https://sysdig.com/blog/container-security-best-practices/#11).

More on VM security:

<https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security.html>

<https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-60025A18-8FCF-42D4-8E7A-BB6E14708787.html>

# Network Vulnerability Scan

On [Day 25](day25.md) we learned that vulnerability scanning is the process of scanning a network or system to identify any existing security vulnerabilities.
We also learned that network vulnerability scanning is a subset of systems vulnerability scanning, i.e. we are only scanning the network part of our system.

Today we are going to dive deeper into what network vulnerability scanning is and how we can do it.

## Network Vulnerability Scanning

**Network vulnerability scanning** is the process of identifying weaknesses in a network that make it a potential target for exploitation by threat actors.

Once upon a time, before the cloud, network security was easy (sort of; good security is never easy).
You built a huge firewall around your data center, allowed traffic only through the proper entrypoints, and assumed that everything that managed to get inside was legitimate.

This approach has one huge flaw: if an attacker manages to get through the wall, there are no more lines of defence to stop them.

Nowadays, such an approach would work even less well.
With the cloud and microservices architectures, the number of actors in a network has grown exponentially.

This requires us to change our mindset and adopt new processes and tools for building secure systems.

One such process is **network vulnerability scanning**.
The tool that does it is called a **network vulnerability scanner**.

## How does network vulnerability scanning work?

Vulnerability scanning software relies on a database of known vulnerabilities and automated tests for them.
A scanner will scan a wide range of devices and hosts on your networks, identifying the device type and operating system and probing for relevant vulnerabilities.

A scan may be purely network-based, conducted from the wider internet (external scan) or from inside your local intranet (internal scan).
It may be a deep inspection, which is possible when the scanner has been provided with credentials to authenticate itself as a legitimate user of the host or device.

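As a rough sketch with the open-source `nmap` scanner (the host and subnet are placeholders; only scan networks you are authorised to scan), the same tool can serve as the basis for both kinds of scan:

```shell
# External scan: probe all TCP ports of the public-facing host
nmap -Pn -p- gateway.example.com

# Internal scan: sweep the local subnet for live hosts and common services
nmap -sV 10.0.0.0/24
```
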
## Vulnerability management

After a scan has been performed and has found vulnerabilities, the next step is to address them.
This is the vulnerability management phase.

A vulnerability could be marked as a false positive, i.e. the scanner reported something that is not true.
Or it could be acknowledged and then assessed by the security team.

Many vulnerabilities can be addressed by patching, but not all.
A cost/benefit analysis should be part of the process, because not all vulnerabilities are security risks in every environment, and there may be business reasons why you can't install a given patch.
It is useful if the scanner reports alternative means of remediating the vulnerability (e.g., disabling a service or blocking a port via a firewall).

## Caveats

Similar to container image vulnerability scanning, network vulnerability scanning tests your system for _known_ vulnerabilities.
It will not find anything that has not already been reported.

Also, it will not protect you from something like exposing your admin panel to the internet and using the default password.
(Although I would assume that some network scanners are smart enough to test for well-known endpoints that should not be exposed.)

At the end of the day, it's up to you to know your system, to know how to test it, and to protect it.
Tools only go so far.

## Network Scanners

Here is a list of network scanners that can be used for this purpose.

**NOTE:** The tools on this list are not free and open-source, but most of them have free trials, which you can use to evaluate them.

- [Intruder Network Vulnerability Scanner](https://www.intruder.io/network-vulnerability-scanner)
- [SecPod SanerNow Vulnerability Management](https://www.secpod.com/vulnerability-management/)
- [ManageEngine Vulnerability Manager Plus](https://www.manageengine.com/vulnerability-management/)
- [Domotz](https://www.domotz.com/features/network-security.php)
- [Microsoft Defender for Endpoint](https://www.microsoft.com/en-us/security/business/endpoint-security/microsoft-defender-endpoint)
- [Rapid7 InsightVM](https://www.rapid7.com/products/insightvm/)

## Summary

As with all the security processes we talked about in the previous days, network scanning is not a silver bullet.
Utilising a network scanner will not make you secure if you are not taking care of the other aspects of systems security.

Also, using a tool like a network scanner does not mean that you don't need a security team.

Quite the opposite: a good secure SDLC starts with enabling the security team to run that kind of tool against the system.
They would then also be responsible for triaging the results and working with the relevant teams that need to fix the vulnerabilities.
That is done by either patching the system, closing a hole that is not necessary, or re-architecting the system in a more secure manner.

## Resources

<https://www.comparitech.com/net-admin/free-network-vulnerability-scanners/>

<https://www.rapid7.com/solutions/network-vulnerability-scanner/>

# Introduction to Runtime Defence & Monitoring

Welcome to all the DevOps and DevSecOps enthusiasts! 🙌

We are here to learn about "runtime defence". This is a huge subject, but we are not deterred by it and will learn about it together over the next 7 days.

![Microscope in production](images/day28-0.png)

This subject is split into major parts:
* Monitoring (1st and 2nd day)
* Intrusion detection
* Network defence
* Access control
* Application defence subjects (6th and 7th days)

The goal is to get you to a level in these subjects where you can start to work on your own.

Let's start 😎

# System monitoring and auditing

## Why is this the first subject of the "Runtime defence and monitoring" topic?

Monitoring computer systems is a fundamental tool for security teams, providing visibility into what is happening within the system. Without monitoring, security teams would be unable to detect and respond to security incidents.

To illustrate this point, consider physical security. If you want to protect a building, you must have security personnel at every entrance, 24/7, to control who is entering the building. If you are also tasked with the security of everyone inside the building, you must have personnel all around it as well. Of course, this does not scale well, which is why installing CCTV cameras at key places is a much better solution today.

While scaling such physical security measures is difficult, for computer systems it is easier to achieve through the installation of monitoring tools. Monitoring provides a basic level of control over the system, allowing security teams to detect problems, understand attack patterns, and maintain overall security. Beyond monitoring, there are additional security measures, such as detection systems, which we will discuss later.

Elaborating on this, here are the key reasons why monitoring is important for runtime security:

* Identifying security incidents: Monitoring can help organizations detect potential security incidents such as malware infections, unauthorized access attempts, and data breaches.

* Mitigating risks: By monitoring for signs of security threats, organizations can take action to mitigate those risks before they lead to a breach or other security incident.

* Complying with regulations: Many industries are subject to regulatory requirements that mandate certain security controls, including monitoring and incident response.

* Improving incident response: Monitoring provides the necessary data to quickly identify and respond to security incidents, reducing the impact of a breach and allowing organizations to recover more quickly.

* Gaining visibility: Monitoring provides insight into system activity, which can be used to optimize performance, troubleshoot issues, and identify opportunities for improvement.

## What to monitor and record?

In theory, the ideal solution would be to log everything that happens in the system and keep the data forever.

However, this is not practical. Let's take a look at what needs to be monitored and which events need to be recorded.

When monitoring cloud-based computer services, there are several key components that should be closely monitored to ensure the system is secure and operating correctly. These components include:

Control plane logging: all orchestration of the infrastructure goes through the control plane, so it is crucial to always know who did what at the infrastructure level. This does not just enable the identification of malicious activity, it also enables troubleshooting of the system.

Operating system logs: log operating-system-level events to track system activity and detect any errors or security-related events, such as failed login attempts or system changes. Deeper logs contain information about which user does what at the machine level, which is important for identifying malicious behavior.

Network activity: monitor network traffic to identify any unusual or unauthorized activity that could indicate an attack or a compromise of the network.

Application activity and performance: monitor application activity to detect misbehavior in case the attack comes from the application level. Performance monitoring is important to ensure that services are running smoothly and to respond to any performance issues that may arise.

Resource utilization: monitor the use of system resources such as CPU, memory, and disk space to identify bottlenecks or other performance issues. Unusual activity can be the result of denial-of-service-like attacks, or of attackers using your computation resources for their own benefit.

Security configurations: monitor security configurations, such as firewall rules and user access controls, to ensure that they are correctly configured and enforced.

Backup and disaster recovery systems: monitor backup and disaster recovery systems to ensure that they are operating correctly and data can be recovered in the event of a failure or disaster.

## A practical implementation

In this part, we move from theory to practice.

There isn't a silver bullet here; every system has its own tools. We will work on Kubernetes as the infrastructure, with the [Microservices demo](https://github.com/GoogleCloudPlatform/microservices-demo) application on top.

### Control plane monitoring

Kubernetes has an event auditing infrastructure called [audit logs](https://kubernetes.io/docs/tasks/debug/debug-cluster/audit/).

The Kubernetes API server has a configuration called `Audit Policy`, which tells the API server what to log. The log can either be stored in a file or sent to a webhook.

We are using Minikube in our example and, for the sake of testing this, we will send the audit logs to the `stdout` of the API server (which is its log).

```bash
mkdir -p ~/.minikube/files/etc/ssl/certs
cat <<EOF > ~/.minikube/files/etc/ssl/certs/audit-policy.yaml
# Log all requests at the RequestResponse level.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
EOF
minikube start --extra-config=apiserver.audit-policy-file=/etc/ssl/certs/audit-policy.yaml --extra-config=apiserver.audit-log-path=-
```

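Logging everything at the `RequestResponse` level is very verbose. As a sketch of a more selective policy (standard audit-policy syntax; tune the rules to your needs), you could log Secret access in full and everything else only at the `Metadata` level:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Log full request and response bodies for Secret access
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["secrets"]
  # Log only metadata (who, what, when) for everything else
  - level: Metadata
```
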
You can follow the logs with this kubectl command:

```bash
kubectl logs kube-apiserver-minikube -n kube-system | grep audit.k8s.io/v1
```

Every API operation is logged to the stream.

Here is an example of an event for "getting all secrets in the default namespace":

```json
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"8e526e77-1fd9-43c3-9714-367fde233c99","stage":"RequestReceived","requestURI":"/api/v1/namespaces/default/secrets?limit=500","verb":"list","user":{"username":"minikube-user","groups":["system:masters","system:authenticated"]},"sourceIPs":["192.168.49.1"],"userAgent":"kubectl/v1.25.4 (linux/amd64) kubernetes/872a965","objectRef":{"resource":"secrets","namespace":"default","apiVersion":"v1"},"requestReceivedTimestamp":"2023-02-11T20:34:11.015389Z","stageTimestamp":"2023-02-11T20:34:11.015389Z"}
```

As you can see, all the key aspects of the infrastructure request are logged here (who, what, when).

Storing this in a file is not practical. Audit logs are usually shipped to a logging system and database for later use. Managed Kubernetes services use their own "cloud logging" service to capture Kubernetes audit logs. On native Kubernetes, you could use Promtail to ship the logs to Loki, as described [here](https://www.bionconsulting.com/blog/monitoring-and-gathering-metrics-from-kubernetes-auditlogs).

### Resource monitoring

The Kubernetes ecosystem enables multiple ways to monitor resources and logs; however, the most common example is Prometheus (a metrics and events database) and Grafana (UI and dashboards). These two open-source tools are an easy one-stop shop for multiple tasks around monitoring.

Out of the box, we get resource monitoring for the Kubernetes nodes.

Here is how to install it on the Minikube we started in the previous part. Make sure you have `helm` installed first.

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add grafana https://grafana.github.io/helm-charts
helm install prometheus prometheus-community/prometheus
helm install grafana grafana/grafana
kubectl expose service grafana --type=NodePort --target-port=3000 --name=grafana-np
```

Now these services should be installed.

To access the Grafana UI, first get the admin password:

```bash
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
```

Then log in to the UI:

```bash
minikube service grafana-np --url
```


|
||||
|
||||
After you have logged in, go to "Data sources/Prometheus" and add our Prometheus service as a source. The URL has to be set to `http://prometheus-server` and click "save & test".
|
||||
|
||||
Now, to set up resource dashboards, go to the "Dashboards" side menu and choose "Import". Here you can import premade dashboard. For example node metrics can be imported by putting the number `6126` in the field `Import via grafana.com` and clicking the `Load` button.
|
||||
|
||||

|
||||
|
||||
Browse Grafana for more dashboards [here](https://grafana.com/grafana/dashboards/).
|
||||
|
||||
# Next...

Tomorrow we will continue to the application level. Application logs and behavior monitoring will be in focus. We will continue to use the same setup and go deeper down the rabbit hole 😄

# Recap

Yesterday we discussed why monitoring, logging and auditing are the basics of runtime defence. In short: you cannot protect a live system without knowing what is happening. We built a Minikube cluster yesterday with Prometheus and Grafana, and we continue to build on this stack today.
Let's start 😎

# Application logging

Application logs are important from many perspectives. They are how operators know what is happening inside the applications they run on their infrastructure. For the same reason, keeping application logs is important from a security perspective: they provide a detailed record of the system's activity, which can be used to detect and investigate security incidents.

By analyzing application logs, security teams can identify unusual or suspicious activity, such as failed login attempts, access attempts to sensitive data, or other potentially malicious actions. Logs can also help track down the source of security breaches, including when and how an attacker gained access to the system, and what actions they took once inside.

In addition, application logs can help with compliance requirements, such as those related to data protection and privacy. By keeping detailed logs, organizations can demonstrate that they are taking the necessary steps to protect sensitive data and comply with regulations.

Loki is the component in the Grafana stack that stores logs, which Promtail collects from the Pods running in the Kubernetes cluster, just as Prometheus does for metrics.

To install Loki with Promtail on your cluster, install the following Helm chart:

```bash
helm install loki grafana/loki-stack
```

This will put a Promtail and a Loki instance in your Minikube and will start collecting logs. Note that this installation is not production grade; it is here to demonstrate the capabilities.

You should see that the Pods are ready:

```bash
$ kubectl get pods | grep loki
loki-0                1/1     Running   0          8m25s
loki-promtail-mpwgq   1/1     Running   0          8m25s
```

Now go to your Grafana UI (just as we did yesterday):

```bash
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
minikube service grafana-np --url
```

Take the admin password from the secret (if you haven't changed it already), print the URL of the service, then go to the URL and log in.

In order to see the logs in Grafana, we need to hook up Loki as a "data source", just as we did yesterday with Prometheus.

![](images/day29-1.png)

Now add a new Loki data source here.

The only thing that needs to be changed from the default configuration is the endpoint of the Loki service; in our case it is `http://loki:3100`, as seen below:

![](images/day29-2.png)

Now click "Save & test" and your Grafana should be connected to Loki.

You can explore your logs in the "Explore" screen (click Explore in the left menu).
|
||||
|
||||
To try our centralized logging system, we are going to check when Etcd container did compactization in the last hour.
|
||||
|
||||
Choose the Loki source at the top of the screen (left of the Explore title) and switch from the visual query builder mode to code mode.
|
||||
|
||||
Add the following line in the query field:
|
||||
```
|
||||
{container="etcd"} |= `compaction`
|
||||
```
|
||||
and click "Run query" at the top right of the screen. Here the `{container="etcd"}` selector picks the log stream, and the `|=` filter keeps only the lines that contain "compaction".
|
||||
|
||||
You should see logs in your browser, like this:
|
||||
|
||||

|
||||
|
||||
|
||||
Voila! You have a logging system ;-)
|
||||
|
||||
|
||||
# Application behavior monitoring
|
||||
|
||||
We now move from general monitoring needs to low-level application monitoring for security purposes. A modern way to do this is to monitor fine-grained application behavior using eBPF.
|
||||
|
||||
Monitoring applications with eBPF (extended Berkeley Packet Filter) is important from a security perspective because it provides a powerful and flexible way to monitor and analyze the behavior of applications and the underlying system. Here are some reasons why eBPF is important for application monitoring and security:
|
||||
|
||||
1. Fine-grained monitoring: eBPF allows for fine-grained monitoring of system and application activity, including network traffic, system calls, and other events. This allows you to identify and analyze security threats and potential vulnerabilities in real-time.
|
||||
|
||||
2. Relatively low overhead: eBPF has very low overhead, making it ideal for use in production environments. It can be used to monitor and analyze system and application behavior without impacting performance or reliability at scale.
|
||||
|
||||
3. Customizable analysis: eBPF allows you to create custom analysis and monitoring tools that are tailored to the specific needs of your application and environment. This allows you to identify and analyze security threats and potential vulnerabilities in a way that is tailored to your unique needs.
|
||||
|
||||
4. Real-time analysis: eBPF provides real-time analysis and monitoring, allowing you to detect and respond to security threats and potential vulnerabilities as they occur. This helps you to minimize the impact of security incidents and prevent data loss or other negative outcomes.
|
||||
|
||||
Falco is a well-respected project that installs agents on your Kubernetes nodes and monitors applications at the eBPF level.
|
||||
|
||||
In this part, we will install Falco in our Minikube and channel the data it collects to Prometheus (and eventually, Grafana). This part is based on this great [tutorial](https://falco.org/blog/falco-kind-prometheus-grafana/).
|
||||
|
||||
In order to install Falco, you need to create private keys and certificates for client-server communication between Falco and its exporter.
|
||||
|
||||
We will use `falcoctl` for this; however, you could generate your certificates and keys with `openssl` if you want.
|
||||
|
||||
To install `falcoctl`, run the following command (if you are running Linux on an amd64 CPU; otherwise check out [here](https://github.com/falcosecurity/falcoctl#installation)):
|
||||
```bash
|
||||
LATEST=$(curl -sI https://github.com/falcosecurity/falcoctl/releases/latest | awk '/location: /{gsub("\r","",$2);split($2,v,"/");print substr(v[8],2)}')
|
||||
curl --fail -LS "https://github.com/falcosecurity/falcoctl/releases/download/v${LATEST}/falcoctl_${LATEST}_linux_amd64.tar.gz" | tar -xz
|
||||
sudo install -o root -g root -m 0755 falcoctl /usr/local/bin/falcoctl
|
||||
```
|
||||
|
||||
Now generate the key pair:
|
||||
```bash
|
||||
FALCOCTL_NAME=falco-grpc.default.svc.cluster.local FALCOCTL_PATH=$PWD falcoctl tls install
|
||||
```
|
||||
|
||||
We need to add the Falco Helm repo and install the Falco services and the exporter:
|
||||
```bash
|
||||
helm repo add falcosecurity https://falcosecurity.github.io/charts
|
||||
helm repo update
|
||||
helm install falco falcosecurity/falco --set driver.kind=ebpf --set-file certs.server.key=$PWD/server.key,certs.server.crt=$PWD/server.crt,certs.ca.crt=$PWD/ca.crt --set falco.grpc.enabled=true,falco.grpcOutput.enabled=true,falco.grpc_output.enabled=true
|
||||
helm install falco-exporter --set-file certs.ca.crt=$PWD/ca.crt,certs.client.key=$PWD/client.key,certs.client.crt=$PWD/client.crt falcosecurity/falco-exporter
|
||||
```
|
||||
|
||||
Make sure that all the Falco Pods are running OK:
|
||||
```bash
|
||||
$ kubectl get pods | grep falco
|
||||
falco-exporter-mlc5h 1/1 Running 3 (32m ago) 38m
|
||||
falco-mlvc4 2/2 Running 0 31m
|
||||
```
|
||||
|
||||
Since Prometheus detects the exporter automatically and we already added the Prometheus data source, we can go directly to Grafana and install the [Falco dashboard](https://grafana.com/grafana/dashboards/11914-falco-dashboard/).
|
||||
|
||||
Go to the "Dashboards" menu on the left side and click "Import". In "Import via grafana.com" enter the ID `11914` and click "Load".
|
||||
|
||||
Now you should see Falco events in your Grafana! 😎
|
||||
|
||||

|
||||
|
||||
|
||||
# Next...
|
||||
|
||||
Next day we will look into how to detect attacks in runtime. See you tomorrow 😃
|
||||
|
||||
|
116
2023/day30.md
@ -0,0 +1,116 @@
|
||||
# Recap
|
||||
|
||||
Yesterday we went deep into setting up Falco in our Minikube. It is a great tool for detecting application and container behavior at runtime. We exported its output to our Prometheus instance in the cluster and viewed the results in a dedicated Grafana dashboard.
|
||||
|
||||
Today, we are going to set up some rules and alerts in Falco and see how detection and alerting work.
|
||||
|
||||
Is your coffee around? Have your hacker hoodie on you? Let's do it 😈
|
||||
|
||||
# Runtime detection with Falco
|
||||
|
||||
Falco is a powerful open-source tool designed for Kubernetes runtime security, and here are some reasons why it is a good choice for securing your environment. Falco provides real-time detection of security threats and potential vulnerabilities in your Kubernetes environment, using a rule-based engine to detect and alert on suspicious activity so you can respond quickly to security incidents.
|
||||
|
||||
Falco allows you to create custom rules tailored to the specific needs of your environment, so you can detect and respond to security threats and potential vulnerabilities in a way that matches your unique setup. Falco also provides rich metadata about security events, including information about the container, pod, namespace, and other details, which makes it easy to investigate and respond to security incidents.
|
||||
|
||||
## Using built-in rules to detect malicious events
|
||||
|
||||
By this time you should have all the moving parts in place:
|
||||
* Prometheus
|
||||
* Grafana
|
||||
* Falco
|
||||
|
||||
Let's do something that is somewhat unusual for a production system. We will open a shell on a workload and install a package at runtime inside the container.
|
||||
|
||||
Let's install a minimalistic Nginx deployment:
|
||||
```bash
|
||||
kubectl create deployment nginx --image=nginx:1.19
|
||||
```
|
||||
|
||||
Now open a shell inside the Pod of the Nginx deployment:
|
||||
```bash
|
||||
kubectl exec -it `kubectl get pod | grep nginx | awk '{print $1}'` -- bash
|
||||
```
|
||||
|
||||
And install a "curl" on the Pod using APT:
|
||||
```bash
|
||||
apt update && apt install -y curl
|
||||
```
|
||||
|
||||
Since we are using Falco to monitor application behavior, it should see all these activities, and it does! Let's go back to our Grafana (see the previous days for how to reconnect).
|
||||
|
||||
In Grafana, go to the "explore" screen. Make sure that you use the Prometheus data source.
|
||||
|
||||
In the query builder, select the metric "falco_events", add the label filter "k8s_pod_name", and set it to your Nginx Pod name.
|
||||
|
||||
You will now see all the Falco events from this Pod:
|
||||
|
||||

|
||||
|
||||
Note the rules that caused the events; among them you'll see the "Launch Package Management Process in Container" rule that was triggered. This event was generated by our `apt install` command above.
|
||||
|
||||
|
||||
Take a moment to appreciate the potential here. By installing this well-proven open-source stack you can create a complete runtime monitoring system and know what is happening in real time in the systems you want to monitor and protect!
|
||||
|
||||
|
||||
## Creating custom rules
|
||||
|
||||
|
||||
Let's say you or your security team wants to know if the CLI tool `curl` has been invoked in one of the Pods (which should rarely happen in a production cluster, but which an attacker would use to report information back to themselves).
|
||||
|
||||
We need to write a "Falco rule" to detect it.
|
||||
|
||||
Here are the basic steps to add a custom Falco rule:
|
||||
|
||||
### Create the rule
|
||||
First, create a new rule file that defines the behavior you want to detect. Falco rules are written in YAML format and typically include a description of the behavior, a set of conditions that trigger the rule, and an output message that is generated when the rule is triggered.
|
||||
|
||||
To detect the execution of the "curl" command with a Falco rule, you could create a new rule file with the following content:
|
||||
|
||||
```yaml
|
||||
customRules:
|
||||
rules-curl.yaml: |-
|
||||
- rule: DetectCurlCommandExecution
|
||||
desc: Detects the execution of the "curl" command
|
||||
condition: spawned_process and proc.name == curl
|
||||
output: "Curl command executed: %proc.cmdline"
|
||||
priority: WARNING
|
||||
```
|
||||
|
||||
Let's dive a little bit into what we have here.
|
||||
|
||||
Falco instruments events in the Linux kernel and sends them to its rule engine. The rule engine goes over all the rules and tries to match them to the event. If a matching rule is found, Falco fires a rule-based event. These are the entries we see in Prometheus/Grafana. In our custom rule, the `condition` field is the "heart" of the rule: it is what matches the rule to the event.
|
||||
|
||||
In this case, we have used a macro called `spawned_process`, which evaluates to `true` if the event is a system call from user space to the kernel for spawning a new process (`execve` and friends). The second condition is on the name of the new process, which must match `curl`.
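As an illustrative variation (not from the tutorial), the condition can be narrowed with other Falco fields, for example to alert only on `curl` executions in a particular namespace; `k8s.ns.name`, `%proc.cmdline` and `%k8s.pod.name` are standard Falco fields:

```yaml
# Hypothetical variant of the rule above, scoped to the "default" namespace
- rule: DetectCurlCommandExecutionInDefaultNs
  desc: Detects "curl" executions, but only in the default namespace
  condition: spawned_process and proc.name = curl and k8s.ns.name = default
  output: "Curl command executed: %proc.cmdline (pod=%k8s.pod.name)"
  priority: WARNING
```

For the rest of this walkthrough we'll stick with the original rule above.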
|
||||
|
||||
To install this new rule, use the following Helm command to add it to our current deployment:
|
||||
```bash
|
||||
helm upgrade --install falco falcosecurity/falco --set driver.kind=ebpf --set-file certs.server.key=$PWD/server.key,certs.server.crt=$PWD/server.crt,certs.ca.crt=$PWD/ca.crt --set falco.grpc.enabled=true,falco.grpcOutput.enabled=true,falco.grpc_output.enabled=true -f <PATH_TO_RULE_YAML>
|
||||
```
|
||||
|
||||
Make sure that the Falco Pod restarted and is running correctly.
|
||||
|
||||
Let's return to our shell inside the Nginx pod.
|
||||
```bash
|
||||
kubectl exec -it `kubectl get pod | grep nginx | awk '{print $1}'` -- bash
|
||||
```
|
||||
|
||||
We installed `curl` here earlier, so we can invoke it now and simulate malicious behavior.
|
||||
```bash
|
||||
curl https://google.com
|
||||
```
|
||||
|
||||
Falco with our new rule should have picked up this event, so go back to Grafana and check the Falco dashboard:
|
||||
|
||||
|
||||

|
||||
|
||||
Voila!
|
||||
|
||||
You have implemented and applied a custom rule in Falco!!!
|
||||
|
||||
I hope this part gave you an insight into how this system works.
|
||||
|
||||
# Next
|
||||
|
||||
Tomorrow we will move away from the world of applications and go to the network layer, see you then!
|
||||
|
@ -0,0 +1,73 @@
|
||||
# Day 42 - Programming Language: Introduction to Python
|
||||
|
||||
Guido van Rossum created Python, a high-level, interpreted, dynamic programming language, in the late 1980s. It is widely used in a range of applications, including web development, DevOps, data analysis, and artificial intelligence and machine learning.
|
||||
|
||||
## Installation and Setting up the Environment:
|
||||
|
||||
Python is available for download and installation on a variety of platforms, including Windows, Mac, and Linux. Python can be downloaded from [the official website](https://www.python.org/).
|
||||

|
||||
|
||||
Following the installation of Python, you can configure your environment with an Integrated Development Environment (IDE) such as [PyCharm](https://www.jetbrains.com/pycharm/), [Visual Studio Code](https://code.visualstudio.com/), or IDLE (the default IDE that comes with Python).
|
||||
I personally use Visual Studio Code.
|
||||
|
||||
You can also use a cloud environment such as [Replit](https://replit.com/), where you will not have to install and configure Python locally.
|
||||

|
||||
|
||||
## Basic Data Types:
|
||||
|
||||
Python includes a number of built-in data types for storing and manipulating data. The following are the most common ones:
|
||||
|
||||
- Numbers: integers, floating-point numbers, and complex numbers
- Strings: sequences of characters
- Lists: ordered, mutable collections of elements
- Tuples: ordered, immutable collections of elements
- Dictionaries: unordered collections of key-value pairs
|
||||
|
||||
## Operations and Expressions:
|
||||
|
||||
With the above data types, you can perform a variety of operations in Python, including arithmetic, comparison, and logical operations.
|
||||
Expressions can also be used to manipulate data, such as combining multiple values into a new value.
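A minimal sketch of each kind of operation (the values are arbitrary):

``` python
print(7 + 3 * 2)        # 13 -- arithmetic; * binds tighter than +
print(10 / 4)           # 2.5 -- division always returns a float
print(10 // 4)          # 2 -- floor (integer) division
print(3 > 2)            # True -- comparison
print(3 > 2 and 1 > 2)  # False -- logical operators combine expressions
```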
|
||||
|
||||
## Variables:
|
||||
|
||||
A variable is declared and assigned a value in Python by using the assignment operator `=`. The variable is on the left side of the operator, and the value being assigned is on the right, which can be an expression like `2 + 2` or even another variable. As an example:
|
||||
|
||||
``` python
|
||||
a = 7 # assign variable a the value 7
|
||||
b = a + 3 # assign variable b the value of a plus 3
|
||||
c = b # assign variable c the value of b
|
||||
```
|
||||
|
||||
These examples assign numbers to variables, but numbers are only one of the data types supported by Python. Notice there is no type declaration for the variables. This is because Python is a dynamically typed language, which means that the variable type is determined by the data assigned to it. The `a`, `b`, and `c` variables in the preceding examples are integer types, which can store both positive and negative whole numbers.
|
||||
|
||||
Variable names are case sensitive and can contain any letter, number, or underscore (`_`). They cannot, however, begin with a number.
|
||||
Along with numbers, strings are among the most commonly used data types. A string is a sequence of one or more characters. Strings are typically declared with single quotation marks, but they can also be declared with double quotation marks:
|
||||
|
||||
``` python
|
||||
a = 'My name is Rishab'
|
||||
b = "This is also a string"
|
||||
```
|
||||
|
||||
You can add strings to other strings — an operation known as "concatenation" — with the same + operator that adds two numbers:
|
||||
|
||||
``` python
|
||||
x = 'My name is' + ' ' + 'Rishab'
|
||||
print(x) # outputs: My name is Rishab
|
||||
```
|
||||
|
||||
## Printing to the console:
|
||||
|
||||
The print function in Python is one of more than 60 built-in functions. It outputs text to the screen.
|
||||
Let's see an example of the most famous "Hello World!":
|
||||
|
||||
``` python
|
||||
print('Hello World!')
|
||||
```
|
||||
|
||||
The `print` argument is a string, which is one of Python's basic data types for storing and managing text. By default, `print` outputs a newline character at the end of the line, so subsequent calls to print will begin on the next line.
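For example, two consecutive calls print on two lines, and the optional `end` parameter overrides the default newline:

``` python
print('Hello,')
print('World!')          # begins on a new line
print('90Days', end='')  # end='' suppresses the trailing newline
print('OfDevOps')        # continues on the same line: 90DaysOfDevOps
```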
|
||||
|
||||
## Resources:
|
||||
|
||||
- [Learn Python - Full course by freeCodeCamp](https://youtu.be/rfscVS0vtbw)
|
||||
- [Python tutorial for beginners by Nana](https://youtu.be/t8pPdKYpowI)
|
||||
- [Python Crash Course book](https://amzn.to/40NfY45)
|
114
2023/day43.md
@ -0,0 +1,114 @@
|
||||
# Day 43 - Programming Language: Python Loops, functions, modules and libraries
|
||||
|
||||
Welcome to the second day of Python, and today we will cover some more concepts:
|
||||
- Loops
|
||||
- Functions
|
||||
- Modules and libraries
|
||||
- File I/O
- Exception handling
|
||||
|
||||
## Loops (for/while):
|
||||
|
||||
Loops are used to repeatedly run a block of code.
|
||||
|
||||
### for loop
|
||||
|
||||
Using the `for` loop, a piece of code is executed once for each element of a sequence (such as a list, string, or tuple). Here is an example of a for loop that prints each programming language in a list:
|
||||
|
||||
``` python
|
||||
languages = ['Python', 'Go', 'JavaScript']
|
||||
|
||||
# for loop
|
||||
for language in languages:
|
||||
print(language)
|
||||
```
|
||||
|
||||
Output:
|
||||
```
|
||||
Python
|
||||
Go
|
||||
JavaScript
|
||||
```
|
||||
|
||||
### while loop
|
||||
|
||||
The `while` loop is used to execute a block of code repeatedly as long as a condition is True. Here's an example of a while loop that prints the numbers from 1 to 5:
|
||||
|
||||
``` python
|
||||
i = 1
|
||||
n = 5
|
||||
|
||||
# while loop from i = 1 to 5
|
||||
while i <= n:
|
||||
print(i)
|
||||
i = i + 1
|
||||
```
|
||||
|
||||
Output:
|
||||
```
|
||||
1
|
||||
2
|
||||
3
|
||||
4
|
||||
5
|
||||
```
|
||||
|
||||
## Functions
|
||||
Functions are reusable chunks of code with arguments/parameters and return values.
|
||||
You can define a function in Python using the `def` keyword. Functions can be used to encapsulate complex logic and can be called several times in your program.
|
||||
Functions can also be used to simplify code and make it easier to read. Here is an illustration of a function that adds two numbers:
|
||||
|
||||
``` python
|
||||
# function has two arguments num1 and num2
|
||||
def add_numbers(num1, num2):
|
||||
sum = num1 + num2
|
||||
print('The sum is: ',sum)
|
||||
```
|
||||
|
||||
``` python
|
||||
# calling the function with arguments to add 5 and 2
|
||||
add_numbers(5, 2)
|
||||
|
||||
# Output: The sum is: 7
|
||||
```
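The function above prints its result directly; here is a small sketch of the same idea using a `return` statement, so the caller can reuse the value:

``` python
# a variant of add_numbers that returns the sum instead of printing it
def add(num1, num2):
    return num1 + num2

result = add(5, 2)
print('The sum is:', result)  # The sum is: 7
```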
|
||||
|
||||
## Understanding Modules and Importing Libraries:
|
||||
A module in Python is a file that contains definitions and statements. Modules let you organize your code and reuse it across many applications.
|
||||
The Standard Library, a sizable collection of Python modules, offers a wide range of capabilities, such as file I/O, regular expressions, and more.
|
||||
Additional libraries can be installed using package managers like pip.
|
||||
You must import a module or library using the import statement in order to use it in your program. Here is an illustration of how to import the math module and calculate a number's square root using the sqrt() function:
|
||||
|
||||
``` python
|
||||
import math
|
||||
|
||||
print(math.sqrt(16)) # 4.0
|
||||
```
|
||||
|
||||
## File I/O
|
||||
File I/O is used to read and write data to and from files on disk.
|
||||
The built-in Python function open() can be used to open a file, after which you can read from and write to it using methods like read() and write().
|
||||
To save system resources, you should always close the file after you are done with it.
|
||||
An example of reading from a file and printing its content:
|
||||
|
||||
``` python
|
||||
f = open("90DaysOfDevOps.txt", "r")
|
||||
print(f.read())
|
||||
f.close()
|
||||
```
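A slightly more idiomatic sketch uses a `with` block (a context manager), which closes the file for you automatically:

``` python
# the with block closes the file automatically, even if an error occurs
with open("90DaysOfDevOps.txt", "r") as f:
    print(f.read())
```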
|
||||
|
||||
## Exception Handling:
|
||||
|
||||
Exceptions are runtime errors that happen when your program runs into unexpected circumstances, such as dividing by zero or attempting to access a list element that doesn't exist.
|
||||
Using a try/except block, you can manage exceptions in Python. The try block's code is run, and if an exception arises, the except block's code is run to handle the exception.
|
||||
|
||||
``` python
try:
    f = open("90DaysOfDevOps.txt")
    try:
        f.write("Python is great")  # fails: the file was opened in read-only mode
    except:
        print("Something went wrong when writing to the file")
    finally:
        f.close()  # close the file whether or not the write succeeded
except:
    print("Something went wrong when opening the file")
```
|
||||
|
||||
## Conclusion
|
||||
|
||||
That is it for today, I will see you tomorrow in Day 3 of Python!
|
125
2023/day44.md
@ -0,0 +1,125 @@
|
||||
# Day 44 - Programming Language: Python Data Structures and OOP
|
||||
|
||||
Welcome to the third day of Python, and today we will cover some more advanced concepts:
|
||||
|
||||
- Data Structures
|
||||
- Object Oriented Programming (OOP)
|
||||
|
||||
## Data Structures:
|
||||
|
||||
Python includes a number of data structures for storing and organizing data. The following are some of the most common ones:
|
||||
|
||||
### Lists:
|
||||
|
||||
Lists are used to store multiple items in a single variable. They can hold items of any type (including other lists), and their elements can be accessed via an index.
|
||||
Lists are mutable, which means they can be changed by adding, removing, or changing elements.
|
||||
Here's an example of how to make a list and access its elements:
|
||||
|
||||
``` python
|
||||
thislist = ["apple", "banana", "orange"]
|
||||
print(thislist[0]) # OUTPUT apple
|
||||
print(thislist[2]) # OUTPUT orange
|
||||
```
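Because lists are mutable, you can also modify them in place; a short sketch continuing the example above:

``` python
thislist.append("mango")   # add an element to the end
thislist[1] = "kiwi"       # change an element in place
thislist.remove("apple")   # remove an element by value
print(thislist)            # OUTPUT ['kiwi', 'orange', 'mango']
```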
|
||||
|
||||
### Tuples:
|
||||
|
||||
Tuples are similar to lists, but they are immutable, which means they cannot be **changed** once created. They are frequently used to represent fixed sets of data.
|
||||
Tuples can be created with or without parentheses, but parentheses are typically used to make the code more readable. Here's an example of a tuple and how to access its elements:
|
||||
|
||||
``` python
|
||||
my_tuple = (1, 2, "three", [4, 5])
print(my_tuple[0])    # OUTPUT 1
print(my_tuple[2])    # OUTPUT three
print(my_tuple[3][0]) # OUTPUT 4
|
||||
```
|
||||
|
||||
### Dictionaries:
|
||||
|
||||
Dictionaries are yet another versatile Python data structure that stores a collection of key-value pairs. The keys must be unique and unchangeable (strings and numbers are common), and the values can be of any type.
|
||||
Dictionaries can be changed by adding, removing, or changing key-value pairs.
|
||||
Here's an example of creating and accessing a dictionary's values:
|
||||
|
||||
``` python
|
||||
my_dict = {"name": "Rishab", "project": "90DaysOfDevOps", "country": "Canada"}
|
||||
print(my_dict["name"]) # OUTPUT "Rishab"
|
||||
print(my_dict["project"]) # OUTPUT "90DaysOfDevOps"
|
||||
print(my_dict["country"]) # OUTPUT "Canada"
|
||||
```
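Dictionaries are mutable too; a quick sketch continuing the example above (the new key and values are made up for the illustration):

``` python
my_dict["role"] = "author"    # add a new key-value pair
my_dict["country"] = "India"  # change an existing value
del my_dict["project"]        # remove a key-value pair
print(my_dict)  # OUTPUT {'name': 'Rishab', 'country': 'India', 'role': 'author'}
```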
|
||||
|
||||
### Sets:
|
||||
|
||||
Sets store unordered collections of unique items in a single variable. They are frequently used in mathematical operations such as union, intersection, and difference.
|
||||
Sets are mutable, which means elements can be added or removed, but the elements themselves must be immutable, and a set cannot have two items with the same value.
|
||||
Here's an example of how to make a set and then perform operations on it:
|
||||
|
||||
``` python
|
||||
my_set = {1, 2, 3, 4, 5}
|
||||
other_set = {3, 4, 5, 6, 7}
|
||||
print(my_set.union(other_set)) # {1, 2, 3, 4, 5, 6, 7}
|
||||
print(my_set.intersection(other_set)) # {3, 4, 5}
|
||||
print(my_set.difference(other_set)) # {1, 2}
|
||||
```
|
||||
|
||||
## Object Oriented Programming:
|
||||
|
||||
I also want to talk about object-oriented programming (OOP) concepts in Python, which are used to structure code into reusable and modular components, in addition to data structures. Here are some of the most important OOP concepts to understand:
|
||||
|
||||
### Class
|
||||
|
||||
A class is a template for creating objects. A class specifies the attributes (data) and methods (functions) that a class's objects can have. Classes are defined using the `class` keyword, and objects are created using the class constructor. Here's an example of defining a `Person` class and creating an object of that class:
|
||||
|
||||
``` python
|
||||
class Person:
|
||||
def __init__(self, name, country):
|
||||
self.name = name
|
||||
self.country = country
|
||||
person = Person("Rishab", "Canada")
|
||||
print(person.name) # OUTPUT "Rishab"
|
||||
print(person.country) # OUTPUT "Canada"
|
||||
```
|
||||
|
||||
### Inheritance:
|
||||
|
||||
Inheritance is a technique for creating a new class from an existing one. The new class, known as a subclass, inherits the attributes and methods of the existing superclass.
|
||||
Subclasses can extend or override the superclass's attributes and methods to create new functionality. Here's an example of defining a `Person` subclass called `Student`:
|
||||
|
||||
``` python
|
||||
class Student(Person):
|
||||
def __init__(self, name, country, major):
|
||||
super().__init__(name, country)
|
||||
self.major = major
|
||||
|
||||
student = Student("Rishab", "Canada", "Computer Science")
|
||||
print(student.name) # OUTPUT "Rishab"
|
||||
print(student.country) # OUTPUT "Canada"
|
||||
print(student.major) # OUTPUT "Computer Science"
|
||||
```
|
||||
|
||||
### Polymorphism:
|
||||
|
||||
Polymorphism refers to the ability of objects to take on different forms or behaviors depending on their context.
|
||||
Polymorphism can be achieved using inheritance and method overriding, as well as abstract classes and interfaces. Here's an example of a `speak()` method being implemented in both the `Person` and `Student` classes:
|
||||
|
||||
``` python
|
||||
class Person:
|
||||
def __init__(self, name, country):
|
||||
self.name = name
|
||||
self.country = country
|
||||
|
||||
def speak(self):
|
||||
print("Hello, my name is {} and I am from {}.".format(self.name, self.country))
|
||||
|
||||
class Student(Person):
|
||||
def __init__(self, name, country, major):
|
||||
super().__init__(name, country)
|
||||
self.major = major
|
||||
|
||||
def speak(self):
|
||||
print("Hello, my name is {} and I am a {} major.".format(self.name, self.major))
|
||||
|
||||
person = Person("Rishab", "Canada")
|
||||
student = Student("John", "Canada", "Computer Science")
|
||||
|
||||
person.speak() # "Hello, my name is Rishab and I am from Canada."
|
||||
student.speak() # "Hello, my name is John and I am a Computer Science major."
|
||||
```
|
124
2023/day45.md
@ -0,0 +1,124 @@
|
||||
# Day 45 - Python: Debugging, testing and Regular expressions
|
||||
|
||||
Welcome to Day 4 of Python!
|
||||
Today we will learn about:
|
||||
|
||||
- Debugging and testing
|
||||
- Regular expressions
|
||||
- Datetime library
|
||||
|
||||
Let's start!
|
||||
|
||||
## Debugging and testing
|
||||
|
||||
Debugging is the process of finding and correcting errors or bugs in code. Python includes a debugger called `pdb` that allows you to step through your code and inspect variables as you go. You can use `pdb` to help you figure out where your code is going wrong and how to fix it.
|
||||
|
||||
``` python
|
||||
import pdb
|
||||
|
||||
def add_numbers(x, y):
|
||||
result = x + y
|
||||
pdb.set_trace() # Start the debugger at this point in the code
|
||||
return result
|
||||
|
||||
result = add_numbers(2, 3)
|
||||
print(result)
|
||||
```
|
||||
|
||||
In this example, we define the `add_numbers` function, which adds two numbers and returns the result. To start the debugger at a specific point in the code, we use the `pdb.set_trace()` function (in this case, after the result has been calculated). This enables us to inspect variables and step through the code to figure out what's going on.
|
||||
|
||||
In addition to debugging, testing is an important part of programming. It entails creating test cases to ensure that your code is working properly. Python includes a `unittest` module that provides a framework for writing and running test cases.
|
||||
|
||||
``` python
|
||||
import unittest
|
||||
|
||||
def is_prime(n):
|
||||
if n < 2:
|
||||
return False
|
||||
for i in range(2, n):
|
||||
if n % i == 0:
|
||||
return False
|
||||
return True
|
||||
|
||||
class TestIsPrime(unittest.TestCase):
|
||||
def test_is_prime(self):
|
||||
self.assertTrue(is_prime(2))
|
||||
self.assertTrue(is_prime(3))
|
||||
self.assertTrue(is_prime(5))
|
||||
self.assertFalse(is_prime(4))
|
||||
|
||||
if __name__ == '__main__':
|
||||
unittest.main()
|
||||
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
``` bash
|
||||
----------------------------------------------------------------------
|
||||
Ran 1 test in 0.000s
|
||||
|
||||
OK
|
||||
```
|
||||
|
||||
## Regular expressions:
|
||||
|
||||
In Python, regular expressions are a powerful tool for working with text data. They enable you to search for and match specific character patterns within a string. Python's `re` module includes functions for working with regular expressions.
|
||||
For example, you can use regular expressions to search for email addresses within a larger block of text, or to extract specific data from a string that follows a particular pattern.
|
||||
|
||||
``` python
|
||||
import re
|
||||
|
||||
# Search for a phone number in a string
|
||||
text = 'My phone number is 555-7777'
|
||||
match = re.search(r'\d{3}-\d{4}', text)
|
||||
if match:
|
||||
print(match.group(0))
|
||||
|
||||
# Extract email addresses from a string
|
||||
text = 'My email is example@devops.com, but I also use other@cloud.com'
|
||||
matches = re.findall(r'\S+@\S+', text)  # note: \S+ is greedy, so the trailing comma after the first address is captured too
|
||||
print(matches)
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
``` bash
|
||||
555-7777
|
||||
['example@devops.com,', 'other@cloud.com']
|
||||
```
|
||||
|
||||
## Datetime library:
|
||||
|
||||
As the name suggests, Python's `datetime` module allows you to work with dates and times in your code. It includes functions for formatting and manipulating date and time data, as well as classes for representing dates, times, and time intervals.
|
||||
The datetime module, for example, can be used to get the current date and time, calculate the difference between two dates, or convert between different date and time formats.
|
||||
|
||||
``` python
|
||||
from datetime import datetime, timedelta
|
||||
|
||||
# Get the current date and time
|
||||
now = datetime.now()
|
||||
print(now) # Output: 2023-02-17 11:33:27.257712
|
||||
|
||||
# Create a datetime object for a specific date and time
|
||||
date = datetime(2023, 2, 1, 12, 0)
|
||||
print(date) # Output: 2023-02-01 12:00:00
|
||||
|
||||
# Calculate the difference between two dates
|
||||
delta = now - date
|
||||
print(delta) # Output: 15 days, 23:33:27.257712
|
||||
```
|
||||
|
||||
Output:
|
||||
|
||||
``` bash
|
||||
2023-02-17 11:33:27.257712
|
||||
2023-02-01 12:00:00
|
||||
15 days, 23:33:27.257712
|
||||
```
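To convert between formats, here is a brief sketch continuing the example above, using `strftime` to format and `strptime` to parse:

``` python
# strftime formats a datetime into a string; strptime parses a string into a datetime
print(now.strftime('%d/%m/%Y %H:%M'))                 # e.g. 17/02/2023 11:33
parsed = datetime.strptime('01-02-2023', '%d-%m-%Y')
print(parsed)                                          # 2023-02-01 00:00:00
```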
|
||||
|
||||
## Resources
|
||||
|
||||
- [pdb - The Python Debugger](https://docs.python.org/3/library/pdb.html)
|
||||
- [re - Regular expressions operations](https://docs.python.org/3/library/re.html)
|
||||
- [datetime - Basic date and time types](https://docs.python.org/3/library/datetime.html)
|
@ -1,15 +1,38 @@
|
||||
# Day 49: AWS Cloud Overview
|
||||
|
||||
AWS Cloud is a cloud computing platform provided by Amazon Web Services (AWS). It offers a wide range of services, including computing, storage, networking, database, analytics, machine learning, security, and more. AWS Cloud allows businesses and organizations to access these services on a pay-as-you-go basis, which means they only pay for what they use and can scale their resources up or down as needed.
|
||||
|
||||

|
||||
|
||||
## Flexibility
|
||||
|
||||
One of the main benefits of AWS Cloud is its flexibility. You can choose the services that best meet your needs and only pay for what you use. This makes it an ideal solution for small businesses, startups, and enterprises, as it allows them to access the resources they need without having to make a significant upfront investment in infrastructure.
|
||||
|
||||
## Security
|
||||
|
||||
Another benefit of AWS Cloud is its security. AWS has a number of security measures in place to protect your data and resources, including encryption, identity and access management, and network security. It also has a number of compliance programs in place, including HIPAA, PCI DSS, and GDPR, to ensure that your data is secure and compliant with relevant regulations.
|
||||
|
||||
AWS Cloud also offers a range of tools and services to help you manage your resources and infrastructure. For example, the AWS Management Console allows you to monitor and control your resources from a single, centralized dashboard. The AWS Command Line Interface (CLI) allows you to manage your resources from the command line, making it easier to automate tasks and integrate with other tools.
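As a quick sketch (assuming the AWS CLI is installed and credentials are configured), a couple of everyday commands look like this:

```bash
aws s3 ls        # list your S3 buckets
aws ec2 describe-instances \
    --query 'Reservations[].Instances[].InstanceId'   # list your EC2 instance IDs
```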
|
||||
|
||||
## EC2
|
||||
|
||||
One of the most popular services offered by AWS Cloud is Amazon Elastic Compute Cloud (EC2). EC2 allows you to easily launch and manage virtual servers in the cloud, making it easy to scale your resources up or down as needed. You can choose from a range of instance types and sizes, and you only pay for the resources you use.
|
||||
|
||||

|
||||
|
||||
## S3
|
||||
|
||||
Another popular service offered by AWS Cloud is Amazon Simple Storage Service (S3). S3 is an object storage service that allows you to store and retrieve large amounts of data from anywhere on the internet. It is highly scalable, durable, and secure, making it an ideal solution for storing and managing data in the cloud.
|
||||
|
||||

|
||||
|
||||
## Databases
|
||||
|
||||
AWS Cloud also offers a range of other services, including Amazon Relational Database Service (RDS) for managing databases, Amazon Redshift for data warehousing and analytics, and Amazon Elasticsearch Service for search and analytics. These services make it easy to build and manage complex applications in the cloud, without having to worry about infrastructure or scaling.
|
||||
|
||||

|
||||
|
||||
Overall, AWS Cloud is a powerful and flexible cloud computing platform that offers a wide range of services and tools for businesses and organizations of all sizes. Whether you are a small business, startup, or enterprise, AWS Cloud has something to offer you. With its pay-as-you-go pricing, security, and management tools, it is an ideal solution for anyone looking to take advantage of the benefits of cloud computing.
|
||||
|
||||
## Resources
|
||||
|
||||
|
@ -0,0 +1,44 @@
|
||||
# Day 50: Get a Free Tier Account & Enable Billing Alarms
|
||||
|
||||
AWS offers a free tier account that allows users to access and experiment with various AWS services without incurring any charges for a limited period of time. In this article, we will guide you through the steps to sign up for a free tier AWS account.
|
||||
|
||||
## Step 1: Go to the AWS website
|
||||
|
||||
The first step to signing up for a free tier AWS account is to go to the AWS website. You can access the website at https://aws.amazon.com. On the website, click on the "Create an AWS Account" button on the top right corner of the page:
|
||||

|
||||
|
||||
## Step 2: Create an AWS account
|
||||
|
||||
Once you click on the "Create an AWS Account" button, you will be directed to the AWS sign-in page. If you already have an AWS account, you can sign in using your email address and password. If you do not have an account, provide an email address and an AWS account name, and click the "Verify email address" button. You will then be sent an email with a verification code to enter.
|
||||

|
||||

|
||||
|
||||
## Step 3: Provide your account information
|
||||
|
||||
On the next page, you will be asked for your account information: a password, your full name, company name, and phone number. After entering your information, click on the "Continue" button.
|
||||

|
||||

|
||||
|
||||
|
||||
## Step 4: Provide your payment information
|
||||
|
||||
To sign up for the free tier account, you will need to provide your payment information. AWS requires this information to verify your identity and prevent fraud. However, you will not be charged for the free tier services, as they are provided at no cost for 1 year. After providing your payment information, click on the "Verify and Continue" button. The next page will send an SMS or voice call to your phone to verify your identity.
|
||||

|
||||

|
||||
|
||||
## Step 5: Select a support plan
|
||||
|
||||
After providing your payment information, you will be directed to the support plan page. Here you can choose what level of support you want; for our needs we will use the *Basic support - Free* option. Once you have made your selection, click on the "Complete sign up" button.
|
||||

|
||||
|
||||
## Next steps:
|
||||
|
||||
Once you have access to your free tier account, there are a few additional steps you'll want to perform. Of these steps, I'd argue that creating a billing alarm is THE most important... *so don't skip it!!*
|
||||
1. [Create a billing alarm](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/monitor_estimated_charges_with_cloudwatch.html)
|
||||
2. [Enable MFA on your root user](https://docs.aws.amazon.com/accounts/latest/reference/root-user-mfa.html)
|
||||
3. [Create an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) for regular tasks and *never* use the root user account except for tasks that can only be performed by it.
|
||||
|
||||
## Resources
|
||||
[Create your free AWS account](https://youtu.be/uZT8dA3G-S4)
|
||||
|
||||
[Generate credentials, budget, and billing alarms via CLI](https://youtu.be/OdUnNuKylHg)
|
@ -0,0 +1,25 @@
|
||||
# Day 51: Infrastructure as Code (IaC) and CloudFormation
|
||||
|
||||
Infrastructure as code (IaC) is a process that allows developers and operations teams to manage and provision infrastructure through code rather than manual processes. With IaC, infrastructure resources can be managed using configuration files and automation tools, resulting in faster, more consistent, and more reliable infrastructure deployments.
|
||||
|
||||
One of the most popular IaC tools is AWS CloudFormation, which allows operations, DevOps, and development teams to define infrastructure resources using templates in YAML or JSON format. These templates can be version-controlled and shared across teams, allowing for easy collaboration and reducing the likelihood of configuration drift.
|
||||
|
||||

|
||||
|
||||
CloudFormation offers a number of benefits for those looking to implement IaC. One key advantage is the ability to automate infrastructure deployment and management, which saves time and reduces the risk of human error. By using CloudFormation, developers and operations teams can define infrastructure resources such as virtual machines, databases, and networking configurations, and then deploy them in a repeatable and consistent way.
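To make this concrete, here is a minimal illustrative template (the logical resource name is made up for this sketch) that declares a single S3 bucket:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack that declares one S3 bucket
Resources:
  ExampleBucket:            # logical name, chosen for this sketch
    Type: AWS::S3::Bucket
```

It could be deployed with something like `aws cloudformation deploy --template-file template.yaml --stack-name example-stack`.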
|
||||
|
||||
Another benefit of using CloudFormation is the ability to track changes to infrastructure resources. When a change is made to a CloudFormation template, the service can automatically update the resources to reflect the new configuration. This ensures that all resources are kept in sync and reduces the likelihood of configuration errors.
|
||||
|
||||
CloudFormation also provides the ability to manage dependencies between resources. This means that resources can be provisioned in the correct order and with the correct configuration, reducing the likelihood of errors and making the deployment process more efficient.
|
||||
|
||||
In addition to these benefits, CloudFormation also offers a range of other features, such as the ability to roll back changes and the ability to create templates that can be used to deploy entire applications. These features make it easier to manage infrastructure resources and ensure that deployments are consistent and reliable.
|
||||
|
||||
## Resources:
|
||||
|
||||
[What is AWS CloudFormation? Pros & Cons?](https://youtu.be/0Sh9OySCyb4)
|
||||
|
||||
[CloudFormation Tutorial](https://www.youtube.com/live/gJjHK28b0cM)
|
||||
|
||||
[AWS CloudFormation User Guide](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)
|
||||
|
||||
[AWS CloudFormation Getting Started step-by-step guides](https://aws.amazon.com/cloudformation/getting-started/)
|
BIN
2023/images/day28-0.png
Normal file
After Width: | Height: | Size: 199 KiB |
BIN
2023/images/day28-1.png
Normal file
After Width: | Height: | Size: 134 KiB |
BIN
2023/images/day28-2.png
Normal file
After Width: | Height: | Size: 133 KiB |
BIN
2023/images/day29-1.gif
Normal file
After Width: | Height: | Size: 21 KiB |
BIN
2023/images/day29-2.png
Normal file
After Width: | Height: | Size: 43 KiB |
BIN
2023/images/day29-3.png
Normal file
After Width: | Height: | Size: 183 KiB |
BIN
2023/images/day29-4.png
Normal file
After Width: | Height: | Size: 173 KiB |
BIN
2023/images/day30-1.png
Normal file
After Width: | Height: | Size: 199 KiB |
BIN
2023/images/day30-2.png
Normal file
After Width: | Height: | Size: 142 KiB |
BIN
2023/images/day42-01.png
Normal file
After Width: | Height: | Size: 419 KiB |
BIN
2023/images/day42-02.png
Normal file
After Width: | Height: | Size: 602 KiB |
BIN
2023/images/day49-1.png
Normal file
After Width: | Height: | Size: 296 KiB |
BIN
2023/images/day49-2.png
Normal file
After Width: | Height: | Size: 180 KiB |
BIN
2023/images/day49-3.png
Normal file
After Width: | Height: | Size: 264 KiB |
BIN
2023/images/day49-4.png
Normal file
After Width: | Height: | Size: 160 KiB |
BIN
2023/images/day50-1.png
Normal file
After Width: | Height: | Size: 275 KiB |
BIN
2023/images/day50-2.png
Normal file
After Width: | Height: | Size: 182 KiB |
BIN
2023/images/day50-3.png
Normal file
After Width: | Height: | Size: 62 KiB |
BIN
2023/images/day50-4.png
Normal file
After Width: | Height: | Size: 88 KiB |
BIN
2023/images/day50-5.png
Normal file
After Width: | Height: | Size: 82 KiB |
BIN
2023/images/day50-6.png
Normal file
After Width: | Height: | Size: 92 KiB |
BIN
2023/images/day50-7.png
Normal file
After Width: | Height: | Size: 87 KiB |
BIN
2023/images/day50-8.png
Normal file
After Width: | Height: | Size: 102 KiB |
BIN
2023/images/day51-1.png
Normal file
After Width: | Height: | Size: 33 KiB |