diff --git a/2022/es/Days/day31.md b/2022/es/Days/day31.md index 20c27cf..3140695 100644 --- a/2022/es/Days/day31.md +++ b/2022/es/Days/day31.md @@ -1,104 +1,93 @@ -## Microsoft Azure Compute Models +## Modelos de computación de Microsoft Azure -Following on from covering the basics around security models within Microsoft Azure yesterday today we are going to look into the various compute services available to us in Azure. +Siguiendo con los conceptos básicos sobre los modelos de seguridad dentro de Microsoft Azure vamos a ver los diferentes servicios de computación disponibles en Azure. -### Service Availability Options +### Opciones de Disponibilidad de Servicio -This section is close to my heart given my role in Data Management. As with on-premises, it is critical to ensure the availability of your services. +Esta sección es muy importante el autor por su papel en la gestión de datos. Al igual que en el on-premises, es crítico asegurar la disponibilidad de tus servicios. -- High Availability (Protection within a region) -- Disaster Recovery (Protection between regions) -- Backup (Recovery from a point in time) +- Alta Disponibilidad (Protección dentro de una región) +- Recuperación ante desastres (protección entre regiones) +- Copia de seguridad (Recuperación desde un punto en el tiempo) -Microsoft deploys multiple regions within a geopolitical boundary. +Microsoft despliega múltiples regiones dentro de una frontera geopolítica. Dos conceptos con Azure para la Disponibilidad de Servicios con los conjuntos y zonas: +- **Conjuntos de Disponibilidad** - Proporcionan resiliencia dentro de un centro de datos. +- **Zonas de Disponibilidad** - Proporcionan resiliencia entre centros de datos dentro de una región. -Two concepts with Azure for Service Availability. Both sets and zones. +### Máquinas virtuales -Availability Sets - Provide resiliency within a datacenter +Muy probablemente es el punto de partida para cualquier persona en la nube pública. -Availability Zones - Provide resiliency between data centres within a region. +- Proporciona una variedad de series y tamaños de MV con diferentes capacidades (Algunas abrumadoras). [Tamaños para máquinas virtuales en Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes) +- Hay muchas opciones y enfoques diferentes para MVs desde alto rendimiento, y baja latencia hasta MVs con opciones de alta memoria. +- También tenemos un tipo de MV burstable que se puede encontrar bajo la Serie B. Esto es ideal para las cargas de trabajo en las que puede tener un bajo requerimiento de CPU en su mayor parte, pero pueden requerir una vez al mes el requisito de un pico de rendimiento. +- Las máquinas virtuales se colocan en una red virtual que puede proporcionar conectividad a cualquier red. +- Compatibilidad con sistemas operativos invitados como Windows y Linux. +- También hay kernels ajustados a Azure cuando se trata de distribuciones específicas de Linux. [Azure Tuned Kernals](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros#azure-tuned-kernels) -### Virtual Machines +### Plantillas -Most likely the starting point for anyone in the public cloud. +Ya se ha mencionado antes que todo lo que hay detrás o debajo de Microsoft Azure es JSON. 
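+A modo de referencia añadida (el grupo de recursos `90DaysOfDevOps` y las rutas son nombres de ejemplo), un posible esbozo con Azure PowerShell (módulo Az.Resources) de cómo exportar esa definición JSON y volver a desplegarla:
+
+```powershell
+# Exportar la plantilla JSON (ARM) de los recursos ya desplegados en un grupo de recursos
+Export-AzResourceGroup -ResourceGroupName "90DaysOfDevOps" -Path "./plantilla.json"
+
+# Volver a desplegar esa misma plantilla de forma idempotente (modo incremental)
+New-AzResourceGroupDeployment -ResourceGroupName "90DaysOfDevOps" `
+    -TemplateFile "./plantilla.json" `
+    -Mode Incremental
+```
+
+En modo `Complete`, el despliegue eliminaría además los recursos del grupo que no aparezcan en la plantilla; de ahí la idea de "estado deseado repetible" que se comenta a continuación.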
-- Provides a VM from a variety of series and sizes with different capabilities (Sometimes an overwhelming) [Sizes for Virtual machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes) -- There are many different options and focuses for VMs from high performance, and low latency to high memory options VMs. -- We also have a burstable VM type which can be found under the B-Series. This is great for workloads where you can have a low CPU requirement for the most part but require that maybe once a month performance spike requirement. -- Virtual Machines are placed on a virtual network that can provide connectivity to any network. -- Windows and Linux guest OS support. -- There are also Azure-tuned kernels when it comes to specific Linux distributions. [Azure Tuned Kernals](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros#azure-tuned-kernels) +Hay varios portales de gestión y consolas diferentes que podemos utilizar para crear nuestros recursos, la ruta preferida va a ser a través de plantillas JSON. -### Templating +Despliegues idempotentes en modo incremental o completo, es decir, estado deseado repetible. -I have mentioned before that everything behind or underneath Microsoft Azure is JSON. +Hay una gran selección de plantillas que pueden exportar definiciones de recursos desplegados. Me gusta pensar en esta característica de plantillas como AWS CloudFormation o podría ser Terraform para una opción multi-nube. Cubriremos más sobre la potencia de Terraform en la sección de Infraestructura como código. -There are several different management portals and consoles we can use to create our resources the preferred route is going to be via JSON templates. +### Escalado -Idempotent deployments in incremental or complete mode - i.e repeatable desired state. +El escalado automático es una gran característica de la nube pública, siendo capaz de reducir los recursos no utilizados o aumentarlos cuando se necesiten. -There is a large selection of templates that can export deployed resource definitions. I like to think about this templating feature to something like AWS CloudFormation or could be Terraform for a multi-cloud option. We will cover Terraform more in the Infrastructure as code section. +En Azure, existe algo llamado Virtual Machine Scale Sets (VMSS) para IaaS. Esto permite la creación automática y la escala de una imagen estándar de oro basado en horarios y métricas. -### Scaling +Esto es ideal para actualizar ventanas de modo que pueda actualizar sus imágenes y desplegarlas con el menor impacto. -Automatic scaling is a large feature of the Public Cloud, being able to spin down resources you are not using or spin up when you need them. +Otros servicios como Azure App Services tienen autoescalado integrado. -In Azure, we have something called Virtual Machine Scale Sets (VMSS) for IaaS. This enables the automatic creation and scale from a gold standard image based on schedules and metrics. +### Contenedores -This is ideal for updating windows so that you can update your images and roll those out with the least impact. +No hemos cubierto los contenedores como un caso de uso, ni qué ni cómo deben ser necesarios en nuestro viaje de aprendizaje DevOps, pero tenemos que mencionar que Azure tiene algunos servicios específicos centrados en contenedores que son dignos de mención. 
+- **Azure Kubernetes Service (AKS)** - Proporciona una solución Kubernetes gestionada, sin necesidad de preocuparse por el plano de control o la gestión de clústeres subyacentes. También veremos más sobre Kubernetes más adelante. +- **Azure Container Instances** - Contenedores como servicio con facturación por segundos. Ejecute una imagen e intégrela con su red virtual, sin necesidad de Container Orchestration. +- **Service Fabric** - Tiene muchas capacidades pero incluye orquestación para instancias de contenedor. -Other services such as Azure App Services have auto-scaling built in. +Azure también tiene el Container Registry que proporciona un registro privado para Docker Images, Helm charts, OCI Artifacts e imágenes. Más sobre esto de nuevo cuando lleguemos a la sección correspondiente de contenedores. -### Containers +También debemos mencionar que muchos de los servicios de contenedores también pueden aprovechar los contenedores bajo el capó, pero esto se abstrae de su necesidad de gestionar. -We have not covered containers as a use case and what and how they can and should be needed in our DevOps learning journey but we need to mention that Azure has some specific container-focused services to mention. +Estos servicios centrados en contenedores mencionados también encontramos servicios similares en todas las demás nubes públicas. -Azure Kubernetes Service (AKS) - Provides a managed Kubernetes solution, no need to worry about the control plane or management of the underpinning cluster management. More on Kubernetes also later on. +### Servicios de aplicaciones -Azure Container Instances - Containers as a service with Per-Second Billing. Run an image and integrate it with your virtual network, no need for Container Orchestration. +- Azure Application Services ofrece una solución de alojamiento de aplicaciones que proporciona un método sencillo para establecer servicios. +- Despliegue y escalado automáticos. +- Admite soluciones basadas en Windows y Linux. +- Los servicios se ejecutan en un App Service Plan que tiene un tipo y un tamaño. +- Número de servicios diferentes que incluyen aplicaciones web, aplicaciones API y aplicaciones móviles. +- Soporte para ranuras de Despliegue para pruebas y promoción fiables. -Service Fabric - Has many capabilities but includes orchestration for container instances. +### Computación serverless -Azure also has the Container Registry which provides a private registry for Docker Images, Helm charts, OCI Artifacts and images. More on this again when we reach the containers section. +El objetivo con serverless es que sólo pagamos por el tiempo de ejecución de la función y no tenemos que tener máquinas virtuales o aplicaciones PaaS en ejecución todo el tiempo. Simplemente ejecutamos nuestra función cuando la necesitamos y luego desaparece. -We should also mention that a lot of the container services may indeed also leverage containers under the hood but this is abstracted away from your requirement to manage. +**Azure Functions** - Proporciona código serverless. Si nos remontamos a nuestro primer vistazo a la nube pública recordaremos la capa de abstracción de la gestión, con funciones serverless sólo vas a estar gestionando el código. -These mentioned container-focused services we also find similar services in all other public clouds. +**Event-Driven** con escala masiva. Proporciona enlace de entrada y salida a muchos servicios de Azure y de terceros. -### Application Services +Soporta muchos lenguajes de programación diferentes. 
(C#, NodeJS, Python, PHP, batch, bash, Golang y Rust. Cualquier ejecutable) -- Azure Application Services provides an application hosting solution that provides an easy method to establish services. -- Automatic Deployment and Scaling. -- Supports Windows & Linux-based solutions. -- Services run in an App Service Plan which has a type and size. -- Number of different services including web apps, API apps and mobile apps. -- Support for Deployment slots for reliable testing and promotion. +**Azure Event Grid** permite disparar la lógica desde servicios y eventos. -### Serverless Computing +**Azure Logic App** proporciona workflows e integración basado en gráficos. -Serverless for me is an exciting next step that I am extremely interested in learning more about. +También podemos echar un vistazo a Azure Batch, que puede ejecutar trabajos a gran escala en nodos Windows y Linux con una gestión y programación coherentes. -The goal with serverless is that we only pay for the runtime of the function and do not have to have running virtual machines or PaaS applications running all the time. We simply run our function when we need it and then it goes away. - -Azure Functions - Provides serverless code. If we remember back to our first look into the public cloud we will remember the abstraction layer of management, with serverless functions you are only going to be managing the code. - -Event-Driven with massive scale, I have a plan to build something when I get some hands-on here hopefully later on. - -Provides input and output binding to many Azure and 3rd Party Services. - -Supports many different programming languages. (C#, NodeJS, Python, PHP, batch, bash, Golang and Rust. Or any Executable) - -Azure Event Grid enables logic to be triggered from services and events. - -Azure Logic App provides a graphical-based workflow and integration. - -We can also look at Azure Batch which can run large-scale jobs on both Windows and Linux nodes with consistent management & scheduling. - -## Resources +## Recursos - [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw) - [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) - [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) - [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) -See you on [Day 32](day32.md) +Nos vemos en el [Día 32](day32.md). diff --git a/2022/es/Days/day32.md b/2022/es/Days/day32.md index 6448a01..22390f3 100644 --- a/2022/es/Days/day32.md +++ b/2022/es/Days/day32.md @@ -1,181 +1,179 @@ -## Microsoft Azure Storage Models +## Modelos de almacenamiento de Microsoft Azure -### Storage Services +### Servicios de almacenamiento -- Azure storage services are provided by storage accounts. -- Storage accounts are primarily accessed via REST API. -- A storage account must have a unique name that is part of a DNS name `.core.windows.net` -- Various replication and encryption options. -- Sits within a resource group +- Los servicios de almacenamiento de Azure se proporcionan mediante cuentas de almacenamiento. +- A las cuentas de almacenamiento se accede principalmente a través de la API REST. +- Una cuenta de almacenamiento debe tener un nombre único que forme parte de un nombre DNS `.core.windows.net`. +- Varias opciones de replicación y cifrado. 
+- Se encuentra dentro de un grupo de recursos

-We can create our storage group by simply searching for Storage Group in the search bar at the top of the Azure Portal.
+Podemos crear nuestra cuenta de almacenamiento simplemente buscando "Storage account" en la barra de búsqueda de la parte superior del Azure Portal.

![](Images/Day32_Cloud1.png)

-We can then run through the steps to create our storage account remembering that this name needs to be unique and it also needs to be all lower case, with no spaces but can include numbers.
+A continuación, podemos seguir los pasos para crear nuestra cuenta de almacenamiento, recordando que este nombre tiene que ser único y estar todo en minúsculas, sin espacios, aunque puede incluir números.

![](Images/Day32_Cloud2.png)

-We can also choose the level of redundancy we would like against our storage account and anything we store here. The further down the list the more expensive option but also the spread of your data.
+También podemos elegir el nivel de redundancia que queremos para nuestra cuenta de almacenamiento y para todo lo que guardemos en ella. Cuanto más abajo en la lista, más cara es la opción, pero también mayor es la dispersión de los datos.

-Even the default redundancy option gives us 3 copies of our data.
+Incluso la opción de redundancia por defecto nos da 3 copias de nuestros datos.

[Azure Storage Redundancy](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy)

-Summary of the above link down below:
+Resumen del enlace anterior:

-- **Locally-redundant storage** - replicates your data three times within a single data centre in the primary region.
-- **Geo-redundant storage** - copies your data synchronously three times within a single physical location in the primary region using LRS.
-- **Zone-redundant storage** - replicates your Azure Storage data synchronously across three Azure availability zones in the primary region.
-- **Geo-zone-redundant storage** - combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three Azure availability zones in the primary region and is also replicated to a second geographic region for protection from regional disasters.
+- **Almacenamiento con redundancia local (LRS)**: replica los datos tres veces en un único centro de datos de la región primaria.
+- **Almacenamiento con redundancia geográfica (GRS)**: copia los datos de forma sincrónica tres veces en una única ubicación física de la región principal mediante LRS y después los replica en una segunda región.
+- **Almacenamiento con redundancia de zona (ZRS)**: replica los datos de Azure Storage de forma sincrónica en tres zonas de disponibilidad de Azure en la región principal.
+- **Almacenamiento con redundancia de zona geográfica (GZRS)**: combina la alta disponibilidad proporcionada por la redundancia entre zonas de disponibilidad con la protección contra interrupciones regionales proporcionada por la georreplicación. Los datos de una cuenta de almacenamiento GZRS se copian en tres zonas de disponibilidad de Azure en la región principal y también se replican en una segunda región geográfica para protegerlos de desastres regionales.

![](Images/Day32_Cloud3.png)

-Just moving back up to performance options. We have Standard and Premium to choose from. We have chosen Standard in our walkthrough but premium gives you some specific options.
+Volviendo a las opciones de rendimiento. Podemos elegir entre Estándar y Premium.
Hemos elegido Estándar en el tutorial, pero Premium te da algunas opciones específicas. ![](Images/Day32_Cloud4.png) -Then in the drop-down, you can see we have these three options to choose from. +A continuación, en el menú desplegable, puedes ver que tenemos estas tres opciones para elegir. ![](Images/Day32_Cloud5.png) -There are lots more advanced options available for your storage account but for now, we do not need to get into these areas. These options are around encryption and data protection. +Hay muchas más opciones avanzadas disponibles para su cuenta de almacenamiento, pero por ahora, no necesitamos entrar en estas áreas. Estas opciones están relacionadas con el cifrado y la protección de datos. -### Managed Disks +### Discos gestionados -Storage access can be achieved in a few different ways. +El acceso al almacenamiento puede realizarse de varias formas. -Authenticated access via: +Acceso autenticado mediante: +- Una clave compartida para un control total. +- Firma de acceso compartido para un acceso delegado y granular. +- Azure Active Directory (cuando esté disponible) -- A shared key for full control. -- Shared Access Signature for delegated, granular access. -- Azure Active Directory (Where Available) +Acceso público: +- El acceso público también se puede conceder para permitir el acceso anónimo, incluso a través de HTTP. +- Un ejemplo de esto podría ser alojar contenido básico y archivos en un blob de bloques para que un navegador pueda ver y descargar estos datos. -Public Access: +Si accede a su almacenamiento desde otro servicio Azure, el tráfico permanece dentro de Azure. -- Public access can also be granted to enable anonymous access including via HTTP. -- An example of this could be to host basic content and files in a block blob so a browser can view and download this data. +Cuando se trata del rendimiento del almacenamiento tenemos dos tipos diferentes: -If you are accessing your storage from another Azure service, traffic stays within Azure. +- **Estándar** - Número máximo de IOPS +- **Premium** - Número garantizado de IOPS -When it comes to storage performance we have two different types: +IOPS => Operaciones de entrada/salida por segundo. -- **Standard** - Maximum number of IOPS -- **Premium** - Guaranteed number of IOPS +También hay que tener en cuenta la diferencia entre discos no gestionados y gestionados a la hora de elegir el almacenamiento adecuado para la tarea que tenemos. -IOPS => Input/Output operations per sec. +### Almacenamiento de Máquinas Virtuales -There is also a difference between unmanaged and managed disks to consider when choosing the right storage for the task you have. +- Los discos del sistema operativo de la máquina virtual suelen almacenarse en un almacenamiento persistente. +- Algunas cargas de trabajo sin estado no requieren almacenamiento persistente y la reducción de la latencia es un beneficio mayor. +- Hay máquinas virtuales que soportan discos efímeros gestionados por el SO que se crean en el almacenamiento local del nodo. + - Estos también se pueden utilizar con VM Scale Sets. -### Virtual Machine Storage +Los discos gestionados son un almacenamiento en bloque duradero que puede utilizarse con las máquinas virtuales Azure. Pueden tener Ultra Disk Storage, Premium SSD, Standard SSD o Standard HDD. También tienen algunas características. -- Virtual Machine OS disks are typically stored on persistent storage. -- Some stateless workloads do not require persistent storage and reduced latency is a larger benefit. 
-- There are VMs that support ephemeral OS-managed disks that are created on the node-local storage.
-  - These can also be used with VM Scale Sets.
+- Compatibilidad con instantáneas e imágenes
+- Movimiento sencillo entre SKUs
+- Mejor disponibilidad cuando se combinan con conjuntos de disponibilidad
+- Facturación basada en el tamaño del disco, no en el almacenamiento consumido.

-Managed Disks are durable block storage that can be used with Azure Virtual Machines. You can have Ultra Disk Storage, Premium SSD, Standard SSD, or Standard HDD. They also carry some characteristics.
+### Almacenamiento de archivo (Archive Storage)

-- Snapshot and Image support
-- Simple movement between SKUs
-- Better availability when combined with availability sets
-- Billed based on disk size not on consumed storage.
+- **Cool Tier** - Está disponible para blobs en bloques (block blobs) y blobs en anexos (append blobs).
+  - Menor coste de almacenamiento
+  - Mayor coste de transacción.
+- **Archive Tier** - Está disponible para blobs en bloques (block blobs).
+  - Se configura por cada blob.
+  - Coste más bajo, latencia de recuperación de datos más larga.
+  - Misma durabilidad de datos que el almacenamiento Azure normal.
+  - Se pueden habilitar niveles de datos personalizados según sea necesario.

-## Archive Storage

-- **Cool Tier** - A cool tier of storage is available to block and append blobs.
-  - Lower Storage cost
-  - Higher transaction cost.
-- **Archive Tier** - Archive storage is available for block BLOBs.
-  - This is configured on a per-BLOB basis.
-  - Cheaper cost, Longer Data retrieval latency.
-  - Same Data Durability as regular Azure Storage.
-  - Custom Data tiering can be enabled as required.
-
-### File Sharing
+### Compartir Archivos
-
-From the above creation of our storage account, we can now create file shares.
+A partir de la creación anterior de nuestra cuenta de almacenamiento, podemos crear recursos compartidos de archivos (file shares).

![](Images/Day32_Cloud6.png)

-This will provide SMB2.1 and 3.0 file shares in Azure.
+Esto proporcionará recursos compartidos de archivos SMB2.1 y 3.0 en Azure.

-Useable within the Azure and externally via SMB3 and port 445 open to the internet.
+Utilizables dentro de Azure y, externamente, a través de SMB3 con el puerto 445 abierto a Internet.

-Provides shared file storage in Azure.
+Proporciona almacenamiento compartido de archivos en Azure.

-Can be mapped using standard SMB clients in addition to REST API.
+Se puede montar utilizando clientes SMB estándar, además de la API REST.

-You might also notice [Azure NetApp Files](https://vzilla.co.uk/vzilla-blog/azure-netapp-files-how) (SMB and NFS)
+Consultar también [Azure NetApp Files](https://vzilla.co.uk/vzilla-blog/azure-netapp-files-how) (SMB y NFS).

-### Caching & Media Services
+### Almacenamiento en caché y servicios multimedia

-The Azure Content Delivery Network provides a cache of static web content with locations throughout the world.
+Azure Content Delivery Network proporciona una caché de contenido web estático con ubicaciones en todo el mundo.

-Azure Media Services, provides media transcoding technologies in addition to playback services.
+Azure Media Services proporciona tecnologías de transcodificación de medios además de servicios de reproducción.

-## Microsoft Azure Database Models
+## Modelos de bases de datos de Microsoft Azure

-Back on [Day 28](day28.md), we covered various service options.
One of these was PaaS (Platform as a Service) where you abstract a large amount of the infrastructure and operating system away and you are left with the control of the application or in this case the database models. +En el [Día 28](day28.md) vimos varias opciones de servicio. Una de ellas era PaaS (Platform as a Service), en la que se abstrae gran parte de la infraestructura y el sistema operativo y se deja el control de la aplicación o, en este caso, de los modelos de bases de datos. -### Relational Databases +### Bases de datos relacionales -Azure SQL Database provides a relational database as a service based on Microsoft SQL Server. +Azure SQL Database proporciona una base de datos relacional como servicio basada en Microsoft SQL Server. -This is SQL running the latest SQL branch with database compatibility level available where a specific functionality version is required. +Se trata de SQL que ejecuta la última rama de SQL con un nivel de compatibilidad de base de datos disponible cuando se requiere una versión de funcionalidad específica. -There are a few options on how this can be configured, we can provide a single database that provides one database in the instance, while an elastic pool enables multiple databases that share a pool of capacity and collectively scale. +Hay algunas opciones sobre cómo esto se puede configurar, podemos proporcionar una única base de datos que proporciona una base de datos en la instancia, mientras que un pool elástico permite múltiples bases de datos que comparten un pool de capacidad y escalan colectivamente. -These database instances can be accessed like regular SQL instances. +Se puede acceder a estas instancias de base de datos como a instancias SQL normales. -Additional managed offerings for MySQL, PostgreSQL and MariaDB. +Ofertas gestionadas adicionales para MySQL, PostgreSQL y MariaDB. ![](Images/Day32_Cloud7.png) ### NoSQL Solutions -Azure Cosmos DB is a scheme agnostic NoSQL implementation. +Azure Cosmos DB es una implementación NoSQL de esquema agnóstico. 99.99% SLA -Globally distributed database with single-digit latencies at the 99th percentile anywhere in the world with automatic homing. +Base de datos distribuida globalmente con latencias de un solo dígito en el porcentaje 99 en cualquier parte del mundo con homing automático. -Partition key leveraged for the partitioning/sharding/distribution of data. +Partition key aprovechada para la partición/sharding/distribución de datos. -Supports various data models (documents, key-value, graph, column-friendly) +Admite varios modelos de datos (documentos, clave-valor, gráfico, amigable con las columnas). -Supports various APIs (DocumentDB SQL, MongoDB, Azure Table Storage and Gremlin) +Soporta varias APIs (DocumentDB SQL, MongoDB, Azure Table Storage y Gremlin) ![](Images/Day32_Cloud9.png) -Various consistency models are available based around [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem). +Existen varios modelos de consistencia basados en el [teorema CAP](https://es.wikipedia.org/wiki/Teorema_CAP). ![](Images/Day32_Cloud8.png) -### Caching +### Caché -Without getting into the weeds about caching systems such as Redis I wanted to include that Microsoft Azure has a service called Azure Cache for Redis. +Sin entrar en la maleza sobre los sistemas de almacenamiento en caché como Redis quería incluir que Microsoft Azure tiene un servicio llamado Azure Cache para Redis. -Azure Cache for Redis provides an in-memory data store based on the Redis software. 
+Azure Cache for Redis proporciona un almacén de datos en memoria basado en el software Redis. -- It is an implementation of the open-source Redis Cache. - - A hosted, secure Redis cache instance. - - Different tiers are available - - Application must be updated to leverage the cache. - - Aimed for an application that has high read requirements compared to writes. - - Key-Value store based. +- Se trata de una implementación de la caché Redis de código abierto. + - Una instancia de caché Redis alojada y segura. + - Diferentes niveles disponibles + - La aplicación debe actualizarse para aprovechar la caché. + - Dirigido a aplicaciones que requieren más lecturas que escrituras. + - Basado en almacén clave-valor. ![](Images/Day32_Cloud10.png) -I appreciate the last few days have been a lot of note-taking and theory on Microsoft Azure but I wanted to cover the building blocks before we get into the hands-on aspects of how these components come together and work. +Los últimos días han sido un montón de teorías y tomar notas sobre Microsoft Azure, pero ahora ya tenemos cubierto los bloques de construcción antes de entrar en los aspectos prácticos de cómo estos componentes se unen y trabajan. -We have one more bit of theory remaining around networking before we can get some scenario-based deployments of services up and running. We also want to take a look at some of the different ways we can interact with Microsoft Azure vs just using the portal that we have been using so far. +Solo queda un poco más de teoría sobre redes para que podamos ponernos en marcha con despliegues de servicios basados en escenarios reales. También echaremos un vistazo a algunas de las diferentes formas en que podemos interactuar con Microsoft Azure. -## Resources +## Recursos - [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw) - [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) - [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) - [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) -See you on [Day 33](day33.md) +Nos vemos en el [Día 33](day33.md). diff --git a/2022/es/Days/day33.md b/2022/es/Days/day33.md index 018d401..a52b864 100644 --- a/2022/es/Days/day33.md +++ b/2022/es/Days/day33.md @@ -1,180 +1,174 @@ -## Microsoft Azure Networking Models + Azure Management +## Modelos de red de Microsoft Azure + Gestión de Azure -As if today marks the anniversary of Microsoft Azure and its 12th Birthday! (1st February 2022) Anyway, we are going to cover the networking models within Microsoft Azure and some of the management options for Azure. So far we have only used the Azure portal but we have mentioned other areas that can be used to drive and create our resources within the platform. +Vamos a cubrir los modelos de red dentro de Microsoft Azure y algunas de las opciones de gestión de Azure. Hasta ahora solo hemos utilizado el portal de Azure pero hemos mencionado otras áreas que pueden ser utilizadas para manejar y crear nuestros recursos dentro de la plataforma. -## Azure Network Models +## Modelos de Red Azure -### Virtual Networks +### Redes Virtuales -- A virtual network is a construct created in Azure. -- A virtual network has one or more IP ranges assigned to it. -- Virtual networks live within a subscription within a region. -- Virtual subnets are created in the virtual network to break up the network range. 
-- Virtual machines are placed in virtual subnets. -- All virtual machines within a virtual network can communicate. -- 65,536 Private IPs per Virtual Network. -- Only pay for egress traffic from a region. (Data leaving the region) -- IPv4 & IPv6 Supported. - - IPv6 for public-facing and within virtual networks. +- Una red virtual es una construcción creada en Azure. +- Una red virtual tiene uno o más rangos de IP asignados. +- Las redes virtuales viven dentro de una suscripción dentro de una región. +- Se crean subredes virtuales en la red virtual para dividir el rango de red. +- Las máquinas virtuales se colocan en subredes virtuales. +- Todas las máquinas virtuales dentro de una red virtual pueden comunicarse. +- 65.536 IPs privadas por red virtual. +- Sólo se paga por el tráfico de salida de una región. (Datos que salen de la región) +- Soporta IPv4 e IPv6. + - IPv6 para redes virtuales de cara al público y dentro de ellas. -We can liken Azure Virtual Networks to AWS VPCs. However, there are some differences to note: +Podemos comparar las redes virtuales de Azure con las VPC de AWS. Sin embargo, hay algunas diferencias a tener en cuenta: -- In AWS a default VNet is created that is not the case in Microsoft Azure, you have to create your first virtual network to your requirements. -- All Virtual Machines by default in Azure have NAT access to the internet. No NAT Gateways as per AWS. -- In Microsoft Azure, there is no concept of Private or Public subnets. -- Public IPs are a resource that can be assigned to vNICs or Load Balancers. -- The Virtual Network and Subnets have their own ACLs enabling subnet level delegation. -- Subnets across Availability Zones whereas in AWS you have subnets per Availability Zones. +- En AWS se crea una VNet por defecto que no es el caso en Microsoft Azure, tienes que crear tu primera red virtual a tu medida. +- Todas las máquinas virtuales por defecto en Azure tienen acceso NAT a Internet. No hay NAT Gateways como en AWS. +- En Microsoft Azure no existe el concepto de subredes Privadas o Públicas. +- Las IPs Públicas son un recurso que puede ser asignado a vNICs o Balanceadores de Carga. +- La red virtual y las subredes tienen sus propias ACL que permiten la delegación a nivel de subred. +- Subredes a través de Zonas de Disponibilidad mientras que en AWS tienes subredes por Zonas de Disponibilidad. -We also have Virtual Network Peering. This enables virtual networks across tenants and regions to be connected using the Azure backbone. Not transitive but can be enabled via Azure Firewall in the hub virtual network. Using a gateway transit allows peered virtual networks to the connectivity of the connected network and an example of this could ExpressRoute to On-Premises. +También tenemos Virtual Network Peering. Esto permite la conexión de redes virtuales entre inquilinos y regiones utilizando la red troncal de Azure. No es transitivo, pero puede activarse a través de Azure Firewall en la red virtual central. El uso de una pasarela de tránsito permite a las redes virtuales peered la conectividad de la red conectada y un ejemplo de esto podría ser [ExpressRoute](https://learn.microsoft.com/es-es/azure/expressroute/expressroute-introduction) a On-Premises. -### Access Control +### Control de acceso -- Azure utilises Network Security Groups, these are stateful. -- Enable rules to be created and then assigned to a network security group -- Network security groups applied to subnets or VMs. 
-- When applied to a subnet it is still enforced at the Virtual Machine NIC that it is not an "Edge" device. +- Azure utiliza Grupos de Seguridad de Red, estos son de estado. +- Permiten crear reglas y luego asignarlas a un grupo de seguridad de red +- Los grupos de seguridad de red se aplican a subredes o máquinas virtuales. +- Cuando se aplica a una subred todavía se aplica en el NIC de la máquina virtual que no es un dispositivo "Edge". ![](Images/Day33_Cloud1.png) -- Rules are combined in a Network Security Group. -- Based on the priority, flexible configurations are possible. -- Lower priority number means high priority. -- Most logic is built by IP Addresses but some tags and labels can also be used. +- Las reglas se combinan en un Grupo de Seguridad de Red. +- En función de la prioridad, es posible realizar configuraciones flexibles. +- Un número de prioridad bajo significa una prioridad alta. +- La mayor parte de la lógica se construye por Direcciones IP pero también se pueden utilizar algunas etiquetas. -| Description | Priority | Source Address | Source Port | Destination Address | Destination Port | Action | -| ---------------- | -------- | ------------------ | ----------- | ------------------- | ---------------- | ------ | -| Inbound 443 | 1005 | \* | \* | \* | 443 | Allow | -| ILB | 1010 | Azure LoadBalancer | \* | \* | 10000 | Allow | -| Deny All Inbound | 4000 | \* | \* | \* | \* | DENY | +| Descripción | Prioridad | Dirección origen | Puerto de origen | Dirección de destino | Puerto de destino | Acción | +| -------------------------- | --------- | ------------------ | ---------------- | -------------------- | ----------------- | -------- | +| Entrada 443 | 1005 | \* | \* | \* | 443 | Permitir | +| ILB | 1010 | Azure LoadBalancer | \* | \* | 10000 | Permitir | +| Denegar todas las entradas | 4000 | \* | \* | \* | \* | Denegar | -We also have Application Security Groups (ASGs) +También tenemos Grupos de Seguridad de Aplicaciones (ASG - Application Security Groups) -- Where NSGs are focused on the IP address ranges which may be difficult to maintain for growing environments. -- ASGs enable real names (Monikers) for different application roles to be defined (Webservers, DB servers, WebApp1 etc.) -- The Virtual Machine NIC is made a member of one or more ASGs. +- Los NSGs se centran en los rangos de direcciones IPs, que pueden ser difíciles de mantener para entornos en crecimiento. +- Los ASGs permiten definir nombres reales (Monikers) para diferentes roles de aplicación (Webservers, DB servers, WebApp1 etc.) +- La NIC de la máquina virtual se convierte en miembro de uno o más ASG. -The ASGs can then be used in rules that are part of Network Security Groups to control the flow of communication and can still use NSG features like service tags. +Los ASG se pueden utilizar en reglas que forman parte de Grupos de Seguridad de Red para controlar el flujo de comunicación y pueden seguir utilizando funciones de NSG como las etiquetas de servicio. 
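+Como referencia añadida (nombres y región son de ejemplo, y se asume el módulo Az.Network), un esbozo de cómo podría crearse un NSG con una regla de entrada para el puerto 443 y prioridad 1005, similar a la primera fila de la tabla de NSG anterior:
+
+```powershell
+# Regla de entrada: permitir TCP 443 desde cualquier origen, con prioridad 1005
+$regla443 = New-AzNetworkSecurityRuleConfig -Name "Entrada-443" -Priority 1005 `
+    -Direction Inbound -Access Allow -Protocol Tcp `
+    -SourceAddressPrefix * -SourcePortRange * `
+    -DestinationAddressPrefix * -DestinationPortRange 443
+
+# Crear el NSG con esa regla; después se asociaría a una subred o a una NIC
+New-AzNetworkSecurityGroup -ResourceGroupName "90DaysOfDevOps" -Location "eastus" `
+    -Name "nsg-web" -SecurityRules $regla443
+```
+
+La tabla que sigue muestra cómo quedarían varias reglas combinadas usando los ASG como origen y destino.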
-| Action | Name | Source | Destination | Port |
-| ------ | ------------------ | ---------- | ----------- | ------------ |
-| Allow | AllowInternettoWeb | Internet | WebServers | 443(HTTPS) |
-| Allow | AllowWebToApp | WebServers | AppServers | 443(HTTPS) |
-| Allow | AllowAppToDB | AppServers | DbServers | 1443 (MSSQL) |
-| Deny | DenyAllinbound | Any | Any | Any |
+| Acción | Nombre | Origen | Destino | Puerto |
+| -------- | ------------------ | ---------- | ---------- | ------------ |
+| Permitir | AllowInternettoWeb | Internet | WebServers | 443(HTTPS) |
+| Permitir | AllowWebToApp | WebServers | AppServers | 443(HTTPS) |
+| Permitir | AllowAppToDB | AppServers | DbServers | 1443 (MSSQL) |
+| Denegar | DenyAllinbound | Any | Any | Any |

-### Load Balancing
+### Balanceo de carga

-Microsoft Azure has two separate load balancing solutions. (the first party, there are third parties available in the Azure marketplace.) Both can operate with externally facing or internally facing endpoints.
+Microsoft Azure tiene dos soluciones propias (first party) de balanceo de carga; además, hay opciones de terceros disponibles en el Azure Marketplace. Ambas pueden funcionar con endpoints orientados al exterior o internos.

-- Load Balancer (Layer 4) supporting hash-based distribution and port-forwarding.
-- App Gateway (Layer 7) supports features such as SSL offload, cookie-based session affinity and URL-based content routing.
+- Balanceador de carga (capa 4), que admite la distribución basada en hash y el reenvío de puertos.
+- App Gateway (capa 7), que admite funciones como la descarga de SSL, la afinidad de sesión basada en cookies y el enrutamiento de contenido basado en URL.

-Also with the App Gateway, you can optionally use the Web Application firewall component.
+Además, con App Gateway se puede utilizar opcionalmente el componente Web Application Firewall (WAF).

-## Azure Management Tools
+## Herramientas de gestión de Azure

-We have spent most of our theory time walking through the Azure Portal, I would suggest that when it comes to following a DevOps culture and process a lot of these tasks, especially around provisioning will be done via an API or a command-line tool. I wanted to touch on some of those other management tools that we have available to us as we need to know this for when we are automating the provisioning of our Azure environments.
+Hemos pasado la mayor parte del tiempo de teoría recorriendo el Portal de Azure, pero cuando se sigue una cultura y un proceso DevOps muchas de estas tareas (especialmente las de aprovisionamiento) se harán a través de una API o de una herramienta de línea de comandos. Conviene revisar algunas de estas otras herramientas de gestión que tenemos a nuestra disposición, ya que necesitamos conocerlas para cuando estemos automatizando el aprovisionamiento de nuestros entornos Azure.

-### Azure Portal
+### Portal de Azure

-The Microsoft Azure Portal is a web-based console, that provides an alternative to command-line tools. You can manage your subscriptions within the Azure Portal. Build, Manage, and Monitor everything from a simple web app to complex cloud deployments. Another thing you will find within the portal are these breadcrumbs, JSON as mentioned before is the underpinning of all Azure Resources, It might be that you start in the Portal to understand the features, services and functionality but then later understand the JSON underneath to incorporate into your automated workflows.
+El Microsoft Azure Portal es una consola basada en web, que proporciona una alternativa a las herramientas de línea de comandos. Puedes gestionar tus suscripciones dentro del Portal Azure. Construya, Gestione y Monitorice todo, desde una simple aplicación web hasta complejos despliegues en la nube. Otra cosa que encontrarás en el portal son las migas de pan. Como ya se mencionó, JSON es la base de todos los recursos de Azure. Puede ser que comiences en el Portal para entender las características, servicios y funcionalidad, pero tarde o temprano tendrás que entender el JSON para incorporar flujos de trabajo automatizados. ![](Images/Day33_Cloud2.png) -There is also the Azure Preview portal, this can be used to view and test new and upcoming services and enhancements. +También existe el portal Azure Preview, que puede utilizarse para ver y probar servicios y mejoras. ![](Images/Day33_Cloud3.png) ### PowerShell -Before we get into Azure PowerShell it is worth introducing PowerShell first. PowerShell is a task automation and configuration management framework, a command-line shell and a scripting language. We might and dare I say this liken this to what we have covered in the Linux section around shell scripting. PowerShell was very much first found on Windows OS but it is now cross-platform. +Antes de adentrarnos en Azure PowerShell, conviene presentar PowerShell. PowerShell es un marco de automatización de tareas y gestión de la configuración, un shell de línea de comandos y un lenguaje de scripting. Podríamos decir que esto se asemeja a lo que hemos visto en la sección de Linux sobre shell scripting. PowerShell se utilizó por primera vez en el sistema operativo Windows, pero ahora es multiplataforma. -Azure PowerShell is a set of cmdlets for managing Azure resources directly from the PowerShell command line. +Azure PowerShell es un conjunto de cmdlets para gestionar los recursos de Azure directamente desde la línea de comandos de PowerShell. -We can see below that you can connect to your subscription using the PowerShell command `Connect-AzAccount` +Podemos ver a continuación que te puedes conectar a una suscripción mediante el comando PowerShell `Connect-AzAccount`. ![](Images/Day33_Cloud4.png) -Then if we wanted to find some specific commands associated with Azure VMs we can run the following command. You could spend hours learning and understanding more about this PowerShell programming language. +Luego, si quisiéramos encontrar algunos comandos específicos asociados a las VMs de Azure podemos ejecutar el siguiente comando. Podrías pasarte horas aprendiendo y entendiendo más sobre este lenguaje de programación. ![](Images/Day33_Cloud5.png) -There are some great quickstarts from Microsoft on getting started and provisioning services from PowerShell [here](https://docs.microsoft.com/en-us/powershell/azure/get-started-azureps?view=azps-7.1.0) +Hay algunos buenos quickstarts de Microsoft para empezar a aprovisionar servicios desde PowerShell [aquí](https://docs.microsoft.com/en-us/powershell/azure/get-started-azureps?view=azps-7.1.0) ### Visual Studio Code -Like many, and as you have all seen my go-to IDE is Visual Studio Code. +Como habréis visto la IDE de cabecera en el tutorial es Visual Studio Code. Visual Studio Code es un editor de código fuente gratuito creado por Microsoft para Windows, Linux y macOS. -Visual Studio Code is a free source-code editor made by Microsoft for Windows, Linux and macOS. 
- -You will see below that there are lots of integrations and tools built into Visual Studio Code that you can use to interact with Microsoft Azure and the services within. +Verás a continuación que hay un montón de integraciones y herramientas integradas en Visual Studio Code que puedes utilizar para interactuar con Microsoft Azure y los servicios que contiene. ![](Images/Day33_Cloud6.png) ### Cloud Shell -Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work. +Azure Cloud Shell es un shell interactivo, autenticado y accesible desde el navegador para gestionar los recursos de Azure. Proporciona la flexibilidad de elegir la experiencia de shell que mejor se adapte a su forma de trabajar. ![](Images/Day33_Cloud7.png) -You can see from the below when we first launch Cloud Shell within the portal we can choose between Bash and PowerShell. +Puedes ver en la siguiente imagen que cuando lanzamos Cloud Shell por primera vez dentro del portal podemos elegir entre Bash y PowerShell. ![](Images/Day33_Cloud8.png) -To use the cloud shell you will have to provide a bit of storage in your subscription. +Para utilizar Cloud Shell tendrás que proporcionar un poco de almacenamiento en tu suscripción. -When you select to use the cloud shell it is spinning up a machine, these machines are temporary but your files are persisted in two ways; through a disk image and a mounted file share. +Cuando seleccionas el intérprete de comandos en la nube, se pone en marcha una máquina. Estas máquinas son temporales, pero tus archivos se conservan de dos maneras: a través de una imagen de disco y en un archivo compartido montado. ![](Images/Day33_Cloud9.png) -- Cloud Shell runs on a temporary host provided on a per-session, per-user basis -- Cloud Shell times out after 20 minutes without interactive activity -- Cloud Shell requires an Azure file share to be mounted -- Cloud Shell uses the same Azure file share for both Bash and PowerShell -- Cloud Shell is assigned one machine per user account -- Cloud Shell persists $HOME using a 5-GB image held in your file share -- Permissions are set as a regular Linux user in Bash +- Cloud Shell se ejecuta en un host temporal proporcionado por sesión y por usuario. +- Cloud Shell se desconecta después de 20 minutos sin actividad interactiva. +- Cloud Shell requiere que se monte un archivo compartido de Azure. +- Cloud Shell utiliza el mismo recurso compartido de archivos de Azure para Bash y PowerShell. +- Cloud Shell tiene asignada una máquina por cuenta de usuario. +- Cloud Shell persiste $HOME utilizando una imagen de 5 GB guardada en su recurso compartido de archivos. +- Los permisos se establecen como un usuario normal de Linux en Bash. -The above was copied from [Cloud Shell Overview](https://docs.microsoft.com/en-us/azure/cloud-shell/overview) +Lo anterior fue copiado de [Cloud Shell Overview](https://docs.microsoft.com/en-us/azure/cloud-shell/overview). ### Azure CLI -Finally, I want to cover the Azure CLI, The Azure CLI can be installed on Windows, Linux and macOS. Once installed you can type `az` followed by other commands to create, update, delete and view Azure resources. +Por último vamos a echar un ojo a Azure CLI. Azure CLI se puede instalar en Windows, Linux y macOS. Una vez instalado se puede escribir `az` seguido de otros comandos para crear, actualizar, eliminar y ver los recursos de Azure. 
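+Como ejemplo añadido (la suscripción y el grupo de recursos son hipotéticos), unos primeros comandos típicos; se pueden ejecutar igual desde PowerShell, Cmd o Bash:
+
+```powershell
+# Iniciar sesión (abre el navegador para autenticarse)
+az login
+
+# Listar los grupos de recursos de la suscripción en formato tabla
+az group list --output table
+
+# Listar las máquinas virtuales de un grupo de recursos concreto
+az vm list --resource-group 90DaysOfDevOps --output table
+```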
-When I initially came into my Azure learning I was a little confused by there being Azure PowerShell and the Azure CLI.
+Al empezar con Azure puede resultar confusa la coexistencia de Azure PowerShell y de Azure CLI. Estaría bien algún comentario de la comunidad sobre esto, pero una visión objetiva es que Azure PowerShell es un módulo que se añade a Windows PowerShell o a PowerShell Core (disponible también en otros sistemas operativos, aunque no en todos), mientras que Azure CLI es un programa de línea de comandos multiplataforma que se conecta a Azure y ejecuta los comandos.

-I would love some feedback from the community on this as well. But the way I see it is that Azure PowerShell is a module added to Windows PowerShell or PowerShell Core (Also available on other OS but not all) Whereas Azure CLI is a cross-platform command-line program that connects to Azure and executes those commands.
+Ambas opciones tienen una sintaxis diferente, aunque pueden hacer tareas muy similares.

-Both of these options have a different syntax, although they can from what I can see and what I have done do very similar tasks.
-For example, creating a virtual machine from PowerShell would use the `New-AzVM` cmdlet whereas Azure CLI would use `az VM create`.
-You saw previously that I have the Azure PowerShell module installed on my system but then I also have the Azure CLI installed that can be called through PowerShell on my Windows machine.
+Por ejemplo, para crear una máquina virtual desde PowerShell se usaría el cmdlet `New-AzVM`, mientras que con Azure CLI se usaría `az vm create` (ver el esbozo al final de esta sección).

![](Images/Day33_Cloud10.png)

-The takeaway here as we already mentioned is about choosing the right tool. Azure runs on automation. Every action you take inside the portal translates somewhere to code being executed to read, create, modify, or delete resources.
+Como ya hemos mencionado, lo importante aquí es elegir la herramienta adecuada para cada tarea. Azure se basa en la automatización: cada acción que realizas dentro del portal se traduce, en algún lugar, en código que se ejecuta para leer, crear, modificar o eliminar recursos.

Azure CLI

-- Cross-platform command-line interface, installable on Windows, macOS, Linux
-- Runs in Windows PowerShell, Cmd, Bash and other Unix shells.
+- Interfaz de línea de comandos multiplataforma, instalable en Windows, macOS y Linux.
+- Se ejecuta en Windows PowerShell, Cmd, Bash y otros shells Unix.

Azure PowerShell

-- Cross-platform PowerShell module, runs on Windows, macOS, Linux
-- Requires Windows PowerShell or PowerShell
+- Módulo PowerShell multiplataforma, ejecutable en Windows, macOS y Linux.
+- Requiere Windows PowerShell o PowerShell.

-If there is a reason you cannot use PowerShell in your environment but you can use .mdor bash then the Azure CLI is going to be your choice.
+Si por alguna razón no puedes utilizar PowerShell en tu entorno, pero sí Cmd o Bash, entonces Azure CLI será tu elección.

-Next up we take all the theories we have been through and create some scenarios and get hands-on in Azure.
+A continuación tomaremos toda la teoría que hemos repasado, crearemos algunos escenarios y nos pondremos manos a la obra en Azure.
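+Antes de cerrar, y como referencia del ejemplo de `New-AzVM` frente a `az vm create` mencionado más arriba, un esbozo añadido (nombres, región e imagen son de ejemplo; el alias de imagen "UbuntuLTS" puede variar según la versión de las herramientas):
+
+```powershell
+# Con el módulo Az de PowerShell
+$cred = Get-Credential
+New-AzVM -ResourceGroupName "90DaysOfDevOps" -Name "vmdemo" -Location "eastus" `
+    -Image "UbuntuLTS" -Credential $cred
+
+# El equivalente aproximado con Azure CLI (también se puede lanzar desde PowerShell)
+az vm create --resource-group 90DaysOfDevOps --name vmdemo `
+    --image UbuntuLTS --admin-username azureuser --generate-ssh-keys
+```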
-## Resources
+## Recursos

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

-See you on [Day 34](day34.md)
+Nos vemos en el [Día 34](day34.md).
diff --git a/2022/es/Days/day34.md b/2022/es/Days/day34.md
index 85806b3..4b037c4 100644
--- a/2022/es/Days/day34.md
+++ b/2022/es/Days/day34.md
@@ -1,128 +1,126 @@
-## Microsoft Azure Hands-On Scenarios
+## Escenarios prácticos de Microsoft Azure

-The last 6 days have been focused on Microsoft Azure and the public cloud in general, a lot of this foundation had to contain a lot of theory to understand the building blocks of Azure but also this will nicely translate to the other major cloud providers as well.
+En los últimos 6 días nos hemos centrado en Microsoft Azure y en la nube pública en general para construir una base mínima; había que pasar por la teoría para entender los bloques de construcción de Azure. Lo bueno es que esto se traslada muy bien a los demás grandes proveedores de nube: tan solo hay que saber cómo llama cada uno a su componente o servicio homólogo.

-I mentioned at the very beginning about getting a foundational knowledge of the public cloud and choosing one provider to at least begin with, if you are dancing between different clouds then I believe you can get lost quite easily whereas choosing one you get to understand the fundamentals and when you have those it is quite easy to jump into the other clouds and accelerate your learning.
+Al principio se mencionó la necesidad de obtener un conocimiento básico de la nube pública y la importancia de elegir un proveedor, al menos para empezar, porque si vas saltando entre diferentes nubes es fácil confundirse y perderse. En cambio, al elegir una en concreto se entienden mejor los fundamentos y, cuando surja la necesidad de saltar a otras nubes, el aprendizaje será mucho más ágil.

-In this final session, I am going to be picking and choosing my hands-on scenarios from this page here which is a reference created by Microsoft and is used for preparations for the [AZ-104 Microsoft Azure Administrator](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/)
+En esta última sesión veremos escenarios prácticos de la página que se enlaza a continuación, una referencia creada por Microsoft para la preparación del examen [AZ-104 Microsoft Azure Administrator](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/).

-There are some here such as Containers and Kubernetes that we have not covered in any detail as of yet so I don't want to jump in there just yet.
+Algunos temas, como contenedores y Kubernetes, aún no están cubiertos en detalle en este viaje, así que no adelantemos acontecimientos todavía.

-In previous posts, we have created most of Modules 1,2 and 3.
+En los días anteriores sí que se ha visto gran parte de los Módulos 1,2 y 3 👍 -### Virtual Networking +### Redes Virtuales -Following [Module 04](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_04-Implement_Virtual_Networking.html): +Siguiendo y revisando el [Módulo 04](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_04-Implement_Virtual_Networking.html) se han cambiado algunos nombres para #90DaysOfDevOps. -I went through the above and changed a few namings for #90DaysOfDevOps. I also instead of using the Cloud Shell went ahead and logged in with my new user created on previous days with the Azure CLI on my Windows machine. +También, en lugar de utilizar el Cloud Shell iniciamos sesión con el nuevo usuario creado en días anteriores con el CLI de Azure. -You can do this using the `az login` which will open a browser and let you authenticate to your account. +Se puede hacer esto usando el `az login` que abrirá un navegador y permite autenticar en la cuenta. -I have then created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder. -(Cloud\01VirtualNetworking) +A continuación, veremos un script PowerShell y algunas referencias del módulo a utilizar para construir algunas de las tareas. Puedes encontrar los archivos asociados en esta carpeta [Cloud\01VirtualNetworking](Cloud/01VirtualNetworking/) -Please make sure you change the file location in the script to suit your environment. +Asegúrese de cambiar la ubicación del archivo en el script para adaptarlo a tu entorno. -At this first stage, we have no virtual network or virtual machines created in our environment, I only have a cloud shell storage location configured in my resource group. +En esta primera etapa, no tenemos ninguna red virtual o máquinas virtuales creadas en nuestro entorno, sólo tengo una ubicación de almacenamiento shell en la nube configurada en mi grupo de recursos. -I first of all run my [PowerShell script](Cloud/01VirtualNetworking/Module4_90DaysOfDevOps.ps1) +En primer lugar ejecuto mi [script PowerShell](Cloud/01VirtualNetworking/Module4_90DaysOfDevOps.ps1). ![](Images/Day34_Cloud1.png) -- Task 1: Create and configure a virtual network +- Tarea 1: Crear y configurar una red virtual. ![](Images/Day34_Cloud2.png) -- Task 2: Deploy virtual machines into the virtual network +- Tarea 2: Desplegar máquina virtual en la red virtual. ![](Images/Day34_Cloud3.png) -- Task 3: Configure private and public IP addresses of Azure VMs +- Tarea 3: Configurar las direcciones IP privadas y públicas de las máquinas virtuales Azure. ![](Images/Day34_Cloud4.png) -- Task 4: Configure network security groups +- Tarea 4: Configurar grupos de seguridad en red. ![](Images/Day34_Cloud5.png) ![](Images/Day34_Cloud6.png) -- Task 5: Configure Azure DNS for internal name resolution +- Tarea 5: Configurar Azure DNS para la resolución de nombres internos. 
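+Como apunte añadido (no es necesariamente lo que hace el script del laboratorio; se asume el módulo Az.PrivateDns y nombres de ejemplo, con `$vnet` apuntando a la red virtual creada antes), la idea de esta tarea se puede esbozar así; las capturas siguientes muestran el resultado en el portal:
+
+```powershell
+# Crear una zona DNS privada y vincularla a la red virtual para la resolución de nombres interna
+New-AzPrivateDnsZone -ResourceGroupName "90DaysOfDevOps" -Name "90days.local"
+
+New-AzPrivateDnsVirtualNetworkLink -ResourceGroupName "90DaysOfDevOps" `
+    -ZoneName "90days.local" -Name "enlace-vnet" `
+    -VirtualNetworkId $vnet.Id -EnableRegistration
+```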
![](Images/Day34_Cloud7.png)
![](Images/Day34_Cloud8.png)

-### Network Traffic Management
+### Gestión del tráfico de red

-Following [Module 06](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_06-Implement_Network_Traffic_Management.html):
+Siguiendo el [Módulo 06](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_06-Implement_Network_Traffic_Management.html):

-Next walkthrough, from the last one we have gone into our resource group and deleted our resources, if you had not set up the user account like me to only have access to that one resource group you could follow the module changing the name to `90Days*` this will delete all resources and resource group. This will be my process for each of the following labs.
+Antes de este tutorial hemos entrado en nuestro grupo de recursos y eliminado los recursos del laboratorio anterior. Si no has configurado la cuenta de usuario para que sólo tenga acceso a ese grupo de recursos, puedes seguir el módulo cambiando el nombre a `90Days*`; esto eliminará todos los recursos y el grupo de recursos. Este será el proceso para cada uno de los siguientes laboratorios.

-For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
-(Cloud\02TrafficManagement)
+Para este laboratorio, también se ha creado un script de PowerShell y algunas referencias del módulo a utilizar para construir algunas de las tareas. Puedes encontrar los archivos asociados en esta carpeta:
+[Cloud\02TrafficManagement](Cloud/02TrafficManagement/)

-- Task 1: Provision of the lab environment
+- Tarea 1: Provisión del entorno de laboratorio

-I first of all run my [PowerShell script](Cloud/02TrafficManagement/Mod06_90DaysOfDevOps.ps1)
+En primer lugar ejecuto mi [script PowerShell](Cloud/02TrafficManagement/Mod06_90DaysOfDevOps.ps1)

![](Images/Day34_Cloud9.png)

-- Task 2: Configure the hub and spoke network topology
+- Tarea 2: Configurar la [topología de red de concentrador y radio (Hub-and-spoke)](https://learn.microsoft.com/es-es/azure/cloud-adoption-framework/ready/azure-best-practices/hub-spoke-network-topology)

![](Images/Day34_Cloud10.png)

-- Task 3: Test transitivity of virtual network peering
+- Tarea 3: Probar la transitividad del peering de la red virtual

-For this my 90DaysOfDevOps group did not have access to the Network Watcher because of permissions, I expect this is because Network Watchers are one of those resources that are not tied to a resource group which is where our RBAC was covered for this user. I added the East US Network Watcher contributor role to the 90DaysOfDevOps group.
+El grupo 90DaysOfDevOps no tenía acceso al Network Watcher debido a los permisos; supongo que se debe a que los Network Watcher son uno de esos recursos que no están ligados a un grupo de recursos, que es el ámbito donde habíamos definido el RBAC para este usuario. Se ha añadido el rol Network Watcher Contributor de la región East US al grupo 90DaysOfDevOps.

![](Images/Day34_Cloud11.png)
![](Images/Day34_Cloud12.png)
![](Images/Day34_Cloud13.png)

-^ This is expected since the two spoke virtual networks do not peer with each other (virtual network peering is not transitive).
+^ Esto es lo esperado, ya que las dos redes virtuales spoke no tienen peering entre sí (el peering de redes virtuales no es transitivo).
-- Task 4: Configure routing in the hub and spoke topology
+- Tarea 4: Configurar el enrutamiento en la topología de red de concentrador y radio

-I had another issue here with my account not being able to run the script as my user within the group 90DaysOfDevOps which I am unsure of so I did jump back into my main admin account. The 90DaysOfDevOps group is an owner of everything in the 90DaysOfDevOps Resource Group so would love to understand why I cannot run a command inside the VM?
+Aquí tuve otro problema: mi cuenta no era capaz de ejecutar el script como usuario dentro del grupo 90DaysOfDevOps y no estoy seguro del motivo, así que volví a mi cuenta de administrador principal. El grupo 90DaysOfDevOps es propietario de todo lo que hay en el grupo de recursos 90DaysOfDevOps, así que me gustaría saber por qué no puedo ejecutar un comando dentro de la máquina virtual.

![](Images/Day34_Cloud14.png)
![](Images/Day34_Cloud15.png)

-I then was able to go back into my michael.cade@90DaysOfDevOps.com account and continue this section. Here we are running the same test again but now with the result being reachable.
+Entonces pude volver a entrar con la cuenta michael.cade@90DaysOfDevOps.com y continuar con esta sección. Aquí ejecutamos la misma prueba de nuevo, pero ahora con el resultado alcanzable.

![](Images/Day34_Cloud16.png)

-- Task 5: Implement Azure Load Balancer
+- Tarea 5: Implementar Azure Load Balancer

![](Images/Day34_Cloud17.png)
![](Images/Day34_Cloud18.png)

-- Task 6: Implement Azure Application Gateway
+- Tarea 6: Implementar Azure Application Gateway

![](Images/Day34_Cloud19.png)
![](Images/Day34_Cloud20.png)

-### Azure Storage
+### Almacenamiento de Azure

-Following [Module 07](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_07-Manage_Azure_Storage.html):
+Siguiendo el [Módulo 07](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_07-Manage_Azure_Storage.html):

-For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
-(Cloud\03Storage)
+Para el siguiente laboratorio, también tenemos un script de PowerShell y algunas referencias del módulo a utilizar para construir algunas de las tareas. Puedes encontrar los archivos asociados en la carpeta [Cloud\03Storage](Cloud/03Storage/).

-- Task 1: Provision of the lab environment
+- Tarea 1: Provisión del entorno de laboratorio.

-I first of all run my [PowerShell script](Cloud/03Storage/Mod07_90DaysOfDeveOps.ps1)
+En primer lugar se ejecuta el [script PowerShell](Cloud/03Storage/Mod07_90DaysOfDeveOps.ps1)

![](Images/Day34_Cloud21.png)

-- Task 2: Create and configure Azure Storage accounts
+- Tarea 2: Crear y configurar cuentas de Azure Storage.

![](Images/Day34_Cloud22.png)

-- Task 3: Manage blob storage
+- Tarea 3: Gestionar el almacenamiento blob

![](Images/Day34_Cloud23.png)

-- Task 4: Manage authentication and authorization for Azure Storage
+- Tarea 4: Gestionar la autenticación y autorización para Azure Storage

![](Images/Day34_Cloud24.png)
![](Images/Day34_Cloud25.png)

@@ -133,55 +131,55 @@ I was a little impatient waiting for this to be allowed but it did work eventual

- Task 5: Create and configure an Azure Files shares

-On the run command, this would not work with michael.cade@90DaysOfDevOps.com so I used my elevated account.
+En el comando de ejecución (run command), esto no funcionó con michael.cade@90DaysOfDevOps.com, así que usé mi cuenta con privilegios elevados.
![](Images/Day34_Cloud27.png) ![](Images/Day34_Cloud28.png) ![](Images/Day34_Cloud29.png) -- Task 6: Manage network access for Azure Storage +- Tarea 6: Gestionar acceso a la red para Azure Storage ![](Images/Day34_Cloud30.png) -### Serverless (Implement Web Apps) +### Serverless (Implementar Web Apps) -Following [Module 09a](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_09a-Implement_Web_Apps.html): +Ahora toca el [Módulo 09a](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_09a-Implement_Web_Apps.html): -- Task 1: Create an Azure web app +- Tarea 1: Crear un Azure web app ![](Images/Day34_Cloud31.png) -- Task 2: Create a staging deployment slot +- Tarea 2: Crear un slot de despliegue de preparación ![](Images/Day34_Cloud34.png) -- Task 3: Configure web app deployment settings +- Tarea 3: Configurar opciones del despliegue web app ![](Images/Day34_Cloud33.png) -- Task 4: Deploy code to the staging deployment slot +- Tarea 4: Desplegar el código en el slot de despliegue de preparación ![](Images/Day34_Cloud32.png) -- Task 5: Swap the staging slots +- Tarea 5: Intercambiar slots de preparación ![](Images/Day34_Cloud35.png) -- Task 6: Configure and test autoscaling of the Azure web app +- Tarea 6: Configurar y testear el autoescalado de Azure web app -This script I am using can be found in (Cloud/05Serverless) +Este script que estoy usando se puede encontrar en la carpeta [Cloud/05Serverless](Cloud/05Serverless) ![](Images/Day34_Cloud36.png) -This wraps up the section on Microsoft Azure and the public cloud in general. I will say that I had lots of fun attacking and working through these scenarios. +Con esto terminamos la sección sobre Microsoft Azure y la nube pública en general. Espero que te hayas divertido trabajando los distintos escenarios. -## Resources +## Recursos - [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw) - [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) - [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) - [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) -Next, we will be diving into version control systems, specifically around git and then also code repository overviews and we will be choosing GitHub as this is my preferred option. +A continuación, vamos a sumergirnos en los sistemas de control de versiones, en concreto en torno a git. Para los repositorios de código veremos GitHub, una de las opciones más utilizadas. -See you on [Day 35](day35.md) +Nos vemos en el [Día 35](day35.md) diff --git a/2022/es/Days/day35.md b/2022/es/Days/day35.md index b1bd91e..8905b14 100644 --- a/2022/es/Days/day35.md +++ b/2022/es/Days/day35.md @@ -1,124 +1,124 @@ -## The Big Picture: Git - Version Control +## El panorama: Git - Control de versiones -Before we get into git, we need to understand what version control is and why? In this opener for Git, we will take a look at what version control is, and the basics of git. +Antes de adentrarnos en git, necesitamos entender qué es el control de versiones y por qué. En esta introducción a Git, le echaremos un vistazo al control de versiones y a los fundamentos de git. -### What is Version Control? +### ¿Qué es el control de versiones? 
-Git is not the only version control system so here we want to cover what options and what methodologies are available around version control. +Git no es el único sistema de control de versiones, así que aquí queremos cubrir qué opciones y qué metodologías hay disponibles en torno al control de versiones. -The most obvious and a big benefit of Version Control is the ability to track a project's history. We can look back over this repository using `git log` and see that we have many commits and many comments and what has happened so far in the project. Don't worry we will get into the commands later. Now think if this was an actual software project full of source code and multiple people are committing to our software at different times, different authors and then reviewers all are logged here so that we know what has happened, when, by whom and who reviewed. +El más obvio y gran beneficio del Control de Versiones es la capacidad de rastrear la historia de un proyecto. Podemos mirar atrás en este repositorio usando `git log` y ver que tenemos muchos commits (Confirmaciones de cambios), muchos comentarios y analizar lo que ha pasado desde el principio del proyecto. No te preocupes, hablaremos de los comandos más tarde. Ahora piensa en un proyecto de software real lleno de código fuente y con varias personas haciendo commits a nuestro software en diferentes momentos, diferentes autores, luego revisores... todo se registra para que sepamos lo que ha sucedido, cuándo, por quién y quién revisó. ![](Images/Day35_Git1.png) -Version Control before it was cool, would have been something like manually creating a copy of your version before you made changes. It might be that you also comment out old useless code with the just-in-case mentality. +El control de versiones antes de que fuera cool, habría sido algo como crear manualmente una copia de tu versión antes de hacer cambios y, manualmente también, hacer anotaciones de los cambios en un documento típicamente llamado changelog. Podría ser también que comentaras código viejo inútil con la mentalidad del "por si acaso" y lo dejarás entre el código fuente haciendo bulto. ![](Images/Day35_Git2.png) -I have started using version control over not just source code but pretty much anything, talks about projects like this (90DaysOfDevOps). Why not accept the features that rollback and log of everything that has gone on. +Una vez te das cuenta de los beneficios del control de versiones no sólo lo utilizas sobre el código fuente, sino sobre prácticamente cualquier cosa, como proyectos como 90DaysOfDevOps. ¿Por qué no aprovechar las características que rollback y el registro de todo lo que ha pasado? -However, a big disclaimer **Version Control is not a Backup!** +Sin embargo, una gran advertencia: ⚠️ **¡Control de versiones no es una copia de seguridad!** ⚠️ -Another benefit of Version Control is the ability to manage multiple versions of a project, Let's create an example, we have a free app that is available on all operating systems and then we have a paid-for app also available on all operating systems. The majority of the code is shared between both applications. We could copy and paste our code each commit to each app but that is going to be very messy especially as you scale your development to more than just one person, also mistakes will be made. 
+Otro beneficio del Control de Versiones es la capacidad de gestionar múltiples versiones de un proyecto. Pongamos un ejemplo: tenemos una aplicación gratuita que está disponible en todos los sistemas operativos y luego tenemos una aplicación de pago, también disponible en todos los sistemas operativos. La mayor parte del código se comparte entre ambas aplicaciones. Podríamos copiar y pegar nuestro código en cada commit para cada aplicación, pero eso va a ser muy desordenado, especialmente a medida que escalas tu desarrollo a más de una persona, y además se cometerán errores.

-The premium app is where we are going to have additional features, let's call them premium commits, the free edition will just contain the normal commits.
+La aplicación premium es donde vamos a tener características adicionales, llamémoslos commits premium; la edición gratuita sólo contendrá los commits normales.

-The way this is achieved in Version Control is through branching.
+La forma en que esto se logra en el Control de Versiones es a través de la ramificación.

![](Images/Day35_Git3.png)

-Branching allows for two code streams for the same app as we stated above. But we will still want new features that land in our source code-free version to be in our premium and to achieve this we have something called merging.
+La ramificación (branching) permite dos flujos de código para la misma aplicación, como hemos dicho anteriormente. Pero seguiremos queriendo que las nuevas características que lleguen a nuestra versión gratuita estén también en la versión premium, y para lograr esto tenemos algo que se llama fusión (merging).

![](Images/Day35_Git4.png)

-Now, this same easy but merging can be complicated because you could have a team working on the free edition and you could have another team working on the premium paid-for version and what if both change code that affects aspects of the overall code. Maybe a variable gets updated and breaks something. Then you have a conflict that breaks one of the features. Version Control cannot fix the conflicts that are down to you. But version control allows this to be easily managed.
+Hacer esto es fácil, pero la fusión puede complicarse, porque podrías tener un equipo trabajando en la edición gratuita y otro equipo trabajando en la versión premium de pago, y ¿qué pasa si ambos equipos cambian código que afecta a aspectos del código general? Tal vez una variable se actualiza y rompe algo. Aquí se produce un conflicto que rompe una de las características. El control de versiones no puede arreglar los conflictos, eso te toca a ti, pero sí permite gestionarlos fácilmente.

-The primary reason if you have not picked up so far for version control, in general, is the ability to collaborate. The ability to share code amongst developers and when I say code as I said before more and more we are seeing much more use cases for other reasons to use source control, maybe its a joint presentation you are working on with a colleague or a 90DaysOfDevOps challenge where you have the community offering their corrections and updates throughout the project.
+La razón principal de utilizar el control de versiones, en general, es la capacidad de colaborar. Compartir código entre los desarrolladores es lo principal, pero cada vez se ven más casos de uso. Por ejemplo, una presentación conjunta en la que trabajas con un colega o un reto como 90DaysOfDevOps, donde la comunidad ofrece sus correcciones y actualizaciones a lo largo de todo el proyecto, como esta traducción.
-Without version control how did teams of software developers even handle this? I find it hard enough when I am working on my projects to keep track of things. I expect they would split out the code into each functional module. Maybe a little part of the puzzle then was bringing the pieces together and then problems and issues before anything would get released. +Sin el control de versiones, ¿cómo se las arreglaban los equipos de desarrolladores de software? Cuando trabajo en mis proyectos me resulta bastante difícil hacer un seguimiento de las cosas. Supongo que dividirían el código en módulos funcionales y luego, como un puzzle, iban juntando las piezas y resolviendo los problemas antes de que algo se publicara. [El desarrollo en cascada](https://es.wikipedia.org/wiki/Desarrollo_en_cascada). -With version control, we have a single source of truth. We might all still work on different modules but it enables us to collaborate better. +Con el control de versiones, tenemos una única fuente de verdad. Puede que todos sigamos trabajando en módulos diferentes, pero nos permite colaborar mejor porque vemos en tiempo real el trabajo de los demás. ![](Images/Day35_Git5.png) -Another thing to mention here is that it's not just developers that can benefit from Version Control, it's all members of the team to have visibility but also tools all having awareness or leverage, Project Management tools can be linked here, tracking the work. We might also have a build machine for example Jenkins which we will talk about in another module. A tool that Builds and Packages the system, automating the deployment tests and metrics. +Otra cosa importante a mencionar es que no son sólo los desarrolladores quienes pueden beneficiarse del Control de Versiones. Todos los miembros del equipo deben tener visibilidad, pero también las herramientas que todos deben conocer o aprovechar. Las herramientas de Gestión de Proyectos pueden estar vinculadas aquí, rastreando el trabajo. También podríamos tener una máquina de construcción, por ejemplo Jenkins, de la que hablaremos en otro módulo. Una herramienta que construye y empaqueta el sistema, automatizando las pruebas de despliegue y las métricas. Y mucho más... -### What is Git? +### ¿Qué es Git? -Git is a tool that tracks changes to source code or any file, or we could also say Git is an open-source distributed version control system. +Git es una herramienta que rastrea los cambios en el código fuente o en cualquier archivo, o también podríamos decir que Git es un sistema de control de versiones distribuido de código abierto. -There are many ways in which git can be used on our systems, most commonly or at least for me I have seen it at the command line, but we also have graphical user interfaces and tools like Visual Studio Code that have git-aware operations we can take advantage of. +Hay muchas formas de utilizar git en nuestros sistemas, lo más habitual es usarlo en la línea de comandos, pero también tenemos interfaces gráficas de usuario y herramientas como Visual Studio Code que tienen operaciones git-aware que podemos aprovechar. -Now we are going to run through a high-level overview before we even get Git installed on our local machine. +Ahora vamos a ejecutar a través de una visión general de alto nivel, incluso antes de tener Git instalado en nuestra máquina local. -Let's take the folder we created earlier. +Utilicemos la carpeta que hemos creado antes. 
![](Images/Day35_Git2.png) -To use this folder with version control we first need to initiate this directory using the `git init` command. For now, just think that this command puts our directory as a repository in a database somewhere on our computer. +Para usar esta carpeta con el control de versiones primero necesitamos iniciar este directorio usando el comando `git init`. Por ahora, piensa que este comando pone nuestro directorio como repositorio en una base de datos en algún lugar de nuestro ordenador. ![](Images/Day35_Git6.png) -Now we can create some files and folders and our source code can begin or maybe it already has and we have something in here already. We can use the `git add .` command which puts all files and folders in our directory into a snapshot but we have not yet committed anything to that database. We are just saying all files with the `.` are ready to be added. +Ahora podemos crear algunos archivos y carpetas y nuestro código fuente puede comenzar. Podemos usar el comando `git add .` que pone todos los archivos y carpetas de nuestro directorio en una instantánea pero todavía no hemos confirmado nada en esa base de datos. Sólo estamos diciendo que todos los archivos con el `.` están listos para ser añadidos. ![](Images/Day35_Git7.png) -Then we want to go ahead and commit our files, we do this with the `git commit -m "My First Commit"` command. We can give a reason for our commit and this is suggested so we know what has happened for each commit. +A continuación, queremos seguir adelante y confirmar nuestros archivos, lo hacemos con el comando `git commit -m "Mi primer commit"`. Podemos dar una razón para nuestro commit y es recomendable para que sepamos lo que ha sucedido en cada commit. Se hace con la opción de mensaje `-m`. ![](Images/Day35_Git8.png) -We can now see what has happened within the history of the project. Using the `git log` command. +Ahora podemos ver lo que ha pasado en la historia del proyecto. Usando el comando `git log`. ![](Images/Day35_Git9.png) -If we create an additional file called `samplecode.ps1`, the status would become different. We can also check the status of our repository by using `git status` this shows we have nothing to commit and we can add a new file called samplecode.ps1. If we then run the same `git status` you will see that we file to be committed. +Si creamos un fichero adicional llamado `samplecode.ps1` el estado de este será diferente. Podemos comprobar el estado de nuestro repositorio mediante el uso de `git status` esto muestra que no tenemos nada que confirmar y podemos añadir un nuevo archivo llamado `samplecode.ps1`. Ejecutamos el mismo `git status` y veremos que tenemos un fichero para añadir y confirmar (comitear, commit verborizado al español por los murcianos). ![](Images/Day35_Git10.png) -Add our new file using the `git add sample code.ps1` command and then we can run `git status` again and see our file is ready to be committed. +Añadimos nuestro nuevo fichero usando el comando `git add sample code.ps1` y entonces podemos ejecutar `git status` de nuevo y ver que nuestro fichero está listo para ser comiteado. ![](Images/Day35_Git11.png) -Then issue `git commit -m "My Second Commit"` command. +Pues a comitear se ha dicho, ejecutamos el comando `git commit -m "My Second Commit"`. ![](Images/Day35_Git12.png) -Another `git status` now shows everything is clean again. +Otro `git status` nos muestra que todo está limpio, lo tenemos subido al repositorio local. 
![](Images/Day35_Git13.png) -We can then use the `git log` command which shows the latest changes and first commit. +Podemos usar el comando `git log` que muestra los últimos cambios y el primer commit. ![](Images/Day35_Git14.png) -If we wanted to see the changes between our commits i.e what files have been added or modified we can use the `git diff b8f8 709a` +Si quisiéramos ver los cambios entre nuestras confirmaciones, es decir, qué archivos se han añadido o modificado, podemos usar `git diff b8f8 709a`. ![](Images/Day35_Git15.png) -Which then displays what has changed in our case we added a new file. +Nos mostrará lo que ha cambiado. En nuestro caso veremos el fichero añadido. ![](Images/Day35_Git16.png) -We will go deeper into this later on but we can jump around our commits i.e we can go time travelling! By using our commit number we can use the `git checkout 709a` command to jump back in time without losing our new file. +Profundizaremos en esto más adelante pero para empezar a degustar las delicias de git: podemos saltar entre nuestros commits, es decir, ¡podemos viajar en el tiempo! Usando nuestro número de commit con el comando `git checkout 709a` para saltar atrás en el tiempo sin perder nuestro nuevo archivo. ![](Images/Day35_Git17.png) -But then equally we will want to move forward as well and we can do this the same way with the commit number or you can see here we are using the `git switch -` command to undo our operation. +Igualmente podemos avanzar de la misma manera, con el número de commit. También puedes ver que estamos usando el comando `git switch -` para deshacer nuestra operación. ![](Images/Day35_Git18.png) -The TLDR; +El TLDR; -- Tracking a project's history -- Managing multiple versions of a project -- Sharing code amongst developers and a wider scope of teams and tools -- Coordinating teamwork -- Oh and there is some time travel! +- Seguimiento de la historia de un proyecto. +- Gestión de múltiples versiones de un proyecto. +- Compartir código entre desarrolladores. Un mayor número de equipos y herramientas. +- Coordinar el trabajo en equipo. +- Ah, ¡y hay algunos viajes en el tiempo! -This might have seemed a jump around but hopefully, you can see without really knowing the commands used the powers and the big picture behind Version Control. +Esto ha sido una introducción, espero que se pueda percibir los poderes y el panorama general detrás del Control de Versiones. -Next up we will be getting git installed and set up on your local machine and diving a little deeper into some other use cases and commands that we can achieve in Git. +A continuación vamos a instalar git y configurarlo en una máquina local y bucear un poco más profundo en algunos casos de uso y los comandos que podemos necesitar en Git. -## Resources +## Recursos - [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4) - [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ) @@ -126,5 +126,13 @@ Next up we will be getting git installed and set up on your local machine and di - [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) - [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) - [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics) +- [En español] [Comandos Git](https://gitea.vergaracarmona.es/man-linux/comandos-git) +- [En español] [Apuntes Curso de Git](https://vergaracarmona.es/wp-content/uploads/2022/10/Curso-git_vergaracarmona.es_.pdf). 
+- [En español] En los [apuntes](https://vergaracarmona.es/apuntes/) del traductor: + - ["Instalar git en ubuntu"](https://vergaracarmona.es/instalar-git-en-ubuntu/) + - ["Comandos de git"](https://vergaracarmona.es/comandos-de-git/) + - ["Estrategias de fusión en git: Ship / Show / Ask"](https://vergaracarmona.es/estrategias-bifurcacion-git-ship-show-ask/) + - ["Resolver conflictos en Git. Merge, Squash, Rebase o Pull"](https://vergaracarmona.es/merge-squash-rebase-pull/) + - ["Borrar commits de git: reset, rebase y cherry-pick"](https://vergaracarmona.es/reset-rebase-cherry-pick/) -See you on [Day 36](day36.md) +Nos vemos en el [Día 36](day36.md) diff --git a/2022/vi/Days/day25.md b/2022/vi/Days/day25.md new file mode 100644 index 0000000..b147dd4 --- /dev/null +++ b/2022/vi/Days/day25.md @@ -0,0 +1,175 @@ +--- +title: '#90DaysOfDevOps - Lập trình Python trong tự động hóa mạng - Ngày 25' +published: false +description: 90DaysOfDevOps - Lập trình Python trong tự động hóa mạng +tags: 'devops, 90daysofdevops, learning' +cover_image: null +canonical_url: null +id: 1049038 +--- + +## Lập trình Python trong tự động hóa mạng + +Python là ngôn ngữ lập trình tiêu chuẩn được sử dụng trong việc tự động hóa cấu hình mạng. + +Mặc dù Python không chỉ dành riêng cho việc tự động hóa mạng nhưng nó dường như được sử dụng ở khắp mọi nơi mỗi khi bạn tìm kiếm công cụ cho mình. Như đã đề cập trước đây nếu nó không phải là chương trình Python thì nó có thể là Ansible (vốn cũng được viết bằng Python). + +Tôi nghĩ rằng tôi đã đề cập đến điều này rồi, trong phần "Học ngôn ngữ lập trình", tôi đã chọn Golang thay vì Python vì những lý do xung quanh việc công ty của tôi đang phát triển Go nên đó là lý do chính đáng để tôi học Go, nhưng nếu không phải vì lí do đó thì Python sẽ là lựa chọn lúc đó. + +- Dễ đọc và dễ sử dụng: Đây là lí do Python là ngôn ngữ lập trình phổ biến. Python không yêu cầu sử dụng `{}` trong chương trình để bắt đầu và kết thúc các khối mã. Kết hợp điều này với một IDE mạnh như VS Code, bạn sẽ có một khởi đầu khá dễ dàng khi muốn chạy một số mã Python. + +Pycharm có thể là một IDE khác đáng được đề cập ở đây. + +- Thư viện: Khả năng mở rộng của Python là mỏ vàng thực sự ở đây, tôi đã đề cập trước đây rằng Python không chỉ dành cho tự động hóa mạng mà trên thực tế, có rất nhiều thư viện cho tất cả các loại thiết bị và cấu hình. Bạn có thể xem số lượng lớn tại đây [PyPi](https://pypi.python.org/pypi) + +Khi bạn muốn tải một thư viện xuống máy tính của mình, thì bạn sử dụng công cụ có tên `pip` để kết nối với PyPI và tải xuống máy của mình. Các nhà cung cấp mạng như Cisco, Juniper và Arista đã phát triển các thư viện để hỗ trợ việc truy cập vào thiết bị của họ. + +- Mạnh mẽ & hiệu quả: Bạn có nhớ trong những ngày học lập trình Go tôi đã viết chương trình "Hello World" với 6 dòng mã không? Trong Python nó là + +``` +print('hello world') +``` + +Tổng hợp tất cả các điểm trên lại với nhau bạn sẽ dễ dàng hiểu tại sao Python thường được nhắc đến như một ngôn ngữ tiêu chuẩn khi làm việc về tự động hóa. + +Tôi nghĩ có một điều quan trọng cần lưu ý là vài năm trước có thể đã có các chương trình để tương tác với các thiết bị mạng của bạn để có thể tự động thực hiện sao lưu cấu hình hoặc thu thập nhật ký và thông tin chi tiết khác về thiết bị của bạn. Quá trình tự động hóa mà chúng ta đang nói đến ở đây hơi khác một chút và đó là do bối cảnh mạng nói chung cũng đã thay đổi để phù hợp hơn với cách suy nghĩ này và cho phép tự động hóa nhiều hơn. 
+ +- Software-Defined Network/Mạng được điều khiển bằng phần mềm) - SDN Controller chịu trách nhiệm là nơi cung cấp cấu hình điều khiển cho tất cả các thiết bị trên mạng, nghĩa là chỉ cần một điểm liên hệ duy nhất cho bất kỳ thay đổi mạng nào, không còn phải telnet hoặc SSH vào mọi thiết bị và việc dựa vào con người để làm điều này có khả năng gây ra lỗi hoặc cấu hình sai. + +- High-Level Orchestration/Phối hợp ở mức cao - Thực hiện ở cấp cao hơn SDN Controller và nó cho phép sự điều phối ở cấp độ các dịch vụ, sau đó là sự tích hợp của lớp điều phối này vào các nền tảng bạn chọn, VMware, Kubernetes, dịch vụ điện toán đám mây, v.v. + +- Policy-based management/Quản lý dựa trên chính sách - Bạn muốn cài đặt chính sách gì? Trạng thái mong muốn của dịch vụ là gì? Bạn mô tả điều này và hệ thống có tất cả các chi tiết về cách thiết lập nó trở thành trạng thái bạn mong muốn. + +## Cài đặt môi trường lab + +Không phải ai cũng có thể sở hữu các thiết bị router, swith, và các thiết bị mạng khác. + +Chúng ta có thể sử dụng một số phần mềm cho phép chúng ta có thể thực hành và tìm hiểu cách tự động hóa cấu hình mạng của chúng ta. + +Có một vài phần mềm mà chúng ta có thể chọn. + +- [GNS3 VM](https://www.gns3.com/software/download-vm) +- [Eve-ng](https://www.eve-ng.net/) +- [Unimus](https://unimus.net/) (Không phải công cụ tạo lab nhưng cung cấp các khái niệm thú vị). + +Chúng ta sẽ xây dựng lab với [Eve-ng](https://www.eve-ng.net/). Như đã đề cập trước đây, bạn có thể sử dụng thiết bị vật lý nhưng thành thật mà nói, môi trường ảo có nghĩa là chúng ta có thể có môi trường an toàn để thử nghiệm nhiều tình huống khác nhau. Ngoài ra việc có thể thực hành các thiết bị và cấu trúc mạng khác nhau cũng rất thú vị. + +Chúng ta sẽ thực hành mọi thứ trên EVE-NG phiên bản cộng đồng. + +### Bắt đầu + +Bạn có thể tải phiên bản cộng dồng dưới định dạng ISO và OVF tại đây. [download](https://www.eve-ng.net/index.php/download/) + +Chúng ta sẽ sử dụng bản tải xuống định dạng OVF, với định dạng ISO, bạn có thể cài đặt trực tiếp trên server của bạn mà không cần chương trình tạo máy ảo. + +![](../../Days/Images/Day25_Networking1.png) + +Đối với hướng dẫn này, chúng ta sẽ sử dụng VMware Workstation vì tôi có giấy phép sử dụng thông qua vExpert nhưng bạn cũng có thể sử dụng VMware Player hoặc bất kỳ tùy chọn nào khác được đề cập trong [documentation](https://www.eve-ng.net/index.php/documentation/installation/system-requirement/). Rất tiếc, chúng ta không thể sử dụng Virtual Box! + +Đây cũng là lúc tôi gặp vấn đề khi sử dụng GNS3 với Virtual Box. + +[Download VMware Workstation Player - FREE](https://www.vmware.com/uk/products/workstation-player.html) + +[VMware Workstation PRO](https://www.vmware.com/uk/products/workstation-pro.html) (Lưu ý rằng nó chỉ miễn phí trong thời gian dùng thử!) + +### Cài đặt VMware Workstation PRO + +Bây giờ chúng ta đã tải xuống và cài đặt phần mềm ảo hóa và chúng ta cũng đã tải xuống EVE-NG OVF. Nếu bạn đang sử dụng VMware Player, vui lòng cho tôi biết quy trình này có giống như vậy không. + +Bây giờ chúng ta đã sẵn sàng để cấu hình mọi thứ. + +Mở VMware Workstation rồi chọn `file` và `open` + +![](../../Days/Images/Day25_Networking2.png) + +Khi bạn tải xuống file EVE-NG OVF, nó sẽ nằm trong một tệp nén. Giải nén nội dung vào thư mục và nó trông như thế này. + +![](../../Days/Images/Day25_Networking3.png) + +Chọn thư mục mà bạn đã tải xuống hình ảnh EVE-NG OVF và bắt đầu import. + +Đặt cho nó một cái tên dễ nhận biết và lưu trữ máy ảo ở đâu đó trên máy tính của bạn. 
+ +![](../../Days/Images/Day25_Networking4.png) + +Khi quá trình import hoàn tất, hãy tăng số lượng bộ xử lý (CPU) lên 4 và bộ nhớ (RAM) được phân bổ lên 8 GB. (Đây là cài đặt khi bạn import phiên bản mới nhất, nhưng nếu không đúng thì hãy chỉnh sửa lại như vậy). + +Ngoài ra, hãy đảm bảo tùy chọn Virtualise Intel VT-x/EPT hoặc AMD-V/RVI đã được bật. Tùy chọn này hướng dẫn VMware chuyển các cờ ảo hóa cho HĐH khách (ảo hóa lồng nhau) Đây là vấn đề tôi gặp phải khi sử dụng GNS3 với Virtual Box mặc dù CPU của tôi hỗ trợ tính năng này. + +![](../../Days/Images/Day25_Networking5.png) + +### Khởi động và truy cập + +Hãy nhớ rằng tôi đã đề cập rằng điều này sẽ không hoạt động với VirtualBox! Vâng, có cùng một vấn đề với VMware Workstation và EVE-NG nhưng đó không phải là lỗi của nền tảng ảo hóa! + +Tôi có WSL2 đang chạy trên Máy Windows của mình và điều này dường như loại bỏ khả năng chạy bất kỳ thứ gì được lồng trong môi trường ảo của bạn. Tôi thắc mắc không biết tại sao Ubuntu VM lại chạy vì nó dường như vô hiệu hóa tính năng Intel VT-d của CPU khi sử dụng WSL2. + +Để giải quyết vấn đề này, chúng ta có thể chạy lệnh sau trên máy Windows của mình và khởi động lại hệ thống, lưu ý rằng trong khi lệnh này tắt thì bạn sẽ không thể sử dụng WSL2. + +`bcdedit /set hypervisorlaunchtype off` + +Khi bạn muốn quay lại và sử dụng WSL2, bạn sẽ cần chạy lệnh này và khởi động lại. + +`bcdedit /set hypervisorlaunchtype auto` + +Cả hai lệnh này nên được chạy với quyền administrator! + +Ok quay lại hướng dẫn, bây giờ bạn sẽ có một máy ảo đang được chạy trong VMware Workstation và bạn sẽ có một lời nhắc tương tự như thế này trên màn hình. + +![](../../Days/Images/Day25_Networking6.png) + +Trên lời nhắc ở trên, bạn có thể sử dụng: + +username = root +password = eve + +Sau đó, bạn sẽ được yêu cầu cung cấp lại mật khẩu root, mật khẩu này sẽ được sử dụng để SSH vào máy chủ sau này. + +Sau đó chúng ta có thể thay đổi hostname của máy chủ. + +![](../../Days/Images/Day25_Networking7.png) + +Tiếp theo, chúng ta thiết lập DNS Domain Name, tôi đã sử dụng tên bên dưới nhưng tôi không chắc liệu điều này có cần thay đổi sau này hay không. + +![](../../Days/Images/Day25_Networking8.png) + +Sau đó, chúng ta cấu hình mạng, tôi chọn sử dụng địa chỉ IP tĩnh (static) để nó không thay đổi sau khi khởi động lại. + +![](../../Days/Images/Day25_Networking9.png) + +Bước cuối cùng, thiết lập một địa chỉ IP tĩnh trong mạng mà bạn có thể truy cập được từ máy tính của mình. + +![](../../Days/Images/Day25_Networking10.png) + +Có một số bước bổ sung ở đây, trong đó bạn sẽ phải cung cấp subnet mask, default gateway và DNS. + +Sau khi hoàn tất, máy ảo sẽ khởi động lại, lúc này bạn có thể điền địa chỉ IP tĩnh đã thiết lập vào trình duyệt của mình để truy cập. + +![](../../Days/Images/Day25_Networking11.png) + +Tên người dùng mặc định cho GUI là `admin` và mật khẩu là `eve` trong khi tên người dùng mặc định cho SSH là `root` và mật khẩu là `eve` nhưng bạn có thể thay đổi trong quá trình thiết lập. + +![](../../Days/Images/Day25_Networking12.png) + +Tôi đã chọn HTML5 cho bảng điều khiển thay vì native vì nó cho phép sẽ mở một tab mới trong trình duyệt của bạn khi bạn điều hướng qua các bảng điều khiển khác nhau. 
+ +Phần tiếp theo chúng ta sẽ tìm hiểu: + +- Cài đặt gói ứng dụng EVE-NG +- Tải một số file hệ điều hành vào EVE-NG +- Xây dựng mô hình mạng +- Thêm node +- Kết nối các node +- Bắt đầu viết chương trình Python +- Tìm hiểu các thư viện telnetlib, Netmiko, Paramiko và Pexpect + +## Tài nguyên tham khảo + +- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg) +- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8) +- [3 Necessary Skills for Network Automation](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s) +- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8) +- [Practical Networking](http://www.practicalnetworking.net/) +- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126) + +Hẹn gặp lại các bạn ngày [Ngày 26](day26.md) diff --git a/2022/vi/Days/day27.md b/2022/vi/Days/day27.md new file mode 100644 index 0000000..523ce25 --- /dev/null +++ b/2022/vi/Days/day27.md @@ -0,0 +1,140 @@ +--- +title: '#90DaysOfDevOps - Thực hành với Python - Ngày 27' +published: false +description: 90DaysOfDevOps - Thực hành với Python +tags: 'devops, 90daysofdevops, learning' +cover_image: null +canonical_url: null +id: 1048735 +--- + +## Thực hành với Python + +Trong phần cuối của loạt bài về mạng máy tính, chúng ta sẽ tìm hiểu một số tác vụ và công cụ tự động hóa dựa trên môi trường lab đã được tạo ra trong [Ngày 26](day26.md) + +Chúng ta sẽ sử dụng SSH để kết nối đến các thiết bị trong mạng. Giao tiếp dựa trên SSH sẽ được mã hóa như đã giới thiệu trước đây trong loạt bài về hệ điều hành Linux. Xem lại [Ngày 18](day18.md). + +## Truy cập môi trường giả lập ảo + +Để tương tác với các switch, bạn có thể thiết lập một máy chủ bên trong mạng EVE-NG hoặc bạn có thể thiết lập một máy tính chạy Linux có cài đặt Python trong EVE-NG ([Resource for setting up Linux inside EVE-NG](https://www.youtube.com/watch?v=3Qstk3zngrY)), hoặc bạn cũng có thể làm theo cách của tôi là tạo một server quản lý từ xa. + +![](../../Days/Images/Day27_Networking3.png) + +Để thiết lập như trên, chúng ta nhấp chuột phải vào giao diện ứng dụng, chọn Network, và sau đó chọn "Management(Cloud0)", thao tác này sẽ tạo ra một mạng riêng mới kết nối với máy tính đang dùng (máy host). + +![](../../Days/Images/Day27_Networking4.png) + +Tuy nhiên, chúng ta vẫn cần phải kết nối các thiết bị hiện tại với mạng mới này. (Kiến thức về mạng của tôi vẫn còn hạn chế và tôi cảm thấy rằng bạn có thể thực hiện bước tiếp theo này theo một cách khác bằng cách kết nối router với các switch và sau đó có kết nối với phần còn lại của mạng?) + +Tiếp theo bạn hãy truy cập vào từng thiết bị và chạy các lệnh sau trên card mạng được dùng để kết nối với "Management(Cloud0)". + + +``` +enable +config t +int gi0/0 +IP add DHCP +no sh +exit +exit +sh ip int br +``` + +Lệnh trên nhằm cấp phát địa chỉ IP cho card mạng kết nối với Home Network. Địa chỉ IP của các thiết bị được liệt kê trong bảng sau: + +| Node | IP Address | Home Network IP | +| ------- | ------------ | --------------- | +| Router | 10.10.88.110 | 192.168.169.115 | +| Switch1 | 10.10.88.111 | 192.168.169.178 | +| Switch2 | 10.10.88.112 | 192.168.169.193 | +| Switch3 | 10.10.88.113 | 192.168.169.125 | +| Switch4 | 10.10.88.114 | 192.168.169.197 | + +### Kết nối SSH đến thiết bị mạng + +Với các thông tin địa chỉ IP ở trên, chúng ta có thể kết nối đến các thiết bị trong mạng từ máy host. 
Tôi sử dụng Putty, tuy nhiên bạn cũng có thể sử dụng bất kì phần mềm hỗ trợ kết nối SSH nào khác. + +Bạn có thể thấy tôi đang kết nối SSH đến router của mình trong hình dưới. (R1) + +![](../../Days/Images/Day27_Networking5.png) + +### Sử dụng Python để thu thập thông tin từ các thiết bị + +Ví dụ đầu tiên là sử dụng Python để thu thập thông tin từ tất cả các thiết bị của mình. Cụ thể hơn, tôi sẽ kết nối đến từng thiết bị và chạy một lệnh đơn giản để lấy thông tin cấu hình của mỗi card mạng. Tôi đã lưu chương trình này tại đây [netmiko_con_multi.py](../../Days/Networking/netmiko_con_multi.py) + +Khi tôi chạy chương trình này, tôi có thể thấy cấu hình của mỗi cổng trên tất cả các thiết bị của mình. + +![](../../Days/Images/Day27_Networking6.png) + +Việc này rất hữu ích nếu bạn có nhiều thiết bị khác nhau, hãy tạo một chương trình tương tự để bạn có thể kiểm soát tập trung và tìm hiểu nhanh tất cả các cấu hình chỉ với một lần chạy. + +### Sử dụng Python để cấu hình các thiết bị + +Ví dụ trước đó là rất hữu ích nhưng còn việc sử dụng Python để định cấu hình thiết bị của chúng ta thì sao? Trong kịch bản này, chúng ta có một cổng trunk giữa `SW1` và `SW2`, một lần nữa hãy tưởng tượng nếu điều này được thực hiện trên nhiều switch và chúng ta muốn tự động hóa việc này mà không phải kết nối thủ công đến từng switch để thực hiện thay đổi cấu hình. + +Chúng ta có thể sử dụng chương trình [netmiko_sendchange.py](../../Days/Networking/netmiko_sendchange.py) để thực hiện điều này. Thao tác này sẽ kết nối qua SSH và thực hiện thay đổi cần thiết trên `SW1` và `SW2`. + +![](../../Days/Images/Day27_Networking7.png) + +Nếu bạn đã xem code, bạn sẽ thấy thông báo xuất hiện và cho chúng ta biết `sending configuration to device` nhưng không có xác nhận rằng điều này đã được thực hiện, chúng ta có thể thêm đoạn code bổ sung vào chương trình để thực hiện kiểm tra và xác thực việc cấu hình trên các switch hoặc chúng ta có thể sửa đổi đoạn code của ví dụ thứ nhất để cho chúng ta thấy điều đó. [netmiko_con_multi_vlan.py](../../Days/Networking/netmiko_con_multi_vlan.py) + +![](Images/Day27_Networking8.png) + +### Sao lưu cấu hình của các thiết bị + +Một ví dụ khác là sao lưu các cấu hình mạng của các thiết bị. Nếu bạn không muốn kết nối với mọi thiết bị có trên mạng của mình, bạn có thể chỉ định thiết bị mà bạn muốn sao lưu. Bạn có thể tự động hóa việc này bằng cách sử dụng chương trình [backup.py](../../Days/Networking/backup.py). Bạn sẽ cần điền vào file [backup.txt](../../Days/Networking/backup.txt) các địa chỉ IP mà bạn muốn sao lưu. + +Chạy chương trình trên và bạn sẽ thấy nội dung như bên dưới. + +![](../../Days/Images/Day27_Networking9.png) + +Đây chỉ là vài thông tin đơn giản được in ra màn hình, tôi sẽ cho bạn xem các file sao lưu. + +![](../../Days/Images/Day27_Networking10.png) + +### Paramiko + +Một thư viên Python được sử dụng rộng rãi cho kết nối SSH. Bạn có thể tìm hiểu thêm [tại đây](https://github.com/paramiko/paramiko) + +Chúng ta có thể cài đặt thư viện này bằng lệnh `pip install paramiko`. + +![](../../Days/Images/Day27_Networking1.png) + +Chúng ta có thể kiểm tra kết quả cài đặt bằng cách import thư viện paramiko trong Python. + +![](../../Days/Images/Day27_Networking2.png) + +### Netmiko + +Thực viện netmiko chỉ tập trung vào các thiết bị mạng trong khi paramiko là một thư viện lớn hơn nhằm phục vụ các thao tác trên SSH nói chung. 
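+
+Dưới đây là một ví dụ tối giản (chỉ mang tính minh họa, giả định thiết bị là Cisco IOS và dùng địa chỉ IP của Switch1 trong bảng ở trên; tên đăng nhập và mật khẩu chỉ là giá trị giả định, bạn cần thay bằng thông tin thật của môi trường lab) về cách dùng netmiko để kết nối SSH tới một switch và chạy một lệnh "show". Lệnh cài đặt `pip install netmiko` được nhắc ngay bên dưới.
+
+```python
+from netmiko import ConnectHandler
+
+# Thông tin kết nối tới Switch1 - thay IP, username, password theo môi trường lab của bạn
+switch1 = {
+    "device_type": "cisco_ios",
+    "host": "10.10.88.111",
+    "username": "admin",
+    "password": "admin123",
+}
+
+# Mở kết nối SSH, chạy lệnh "show ip interface brief" và in kết quả
+connection = ConnectHandler(**switch1)
+output = connection.send_command("show ip interface brief")
+print(output)
+connection.disconnect()
+```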
+ +Netmiko mà tôi đã sử dụng ở trên cùng với paramiko có thể được cài đặt bằng lệnh `pip install netmiko` + +Netmiko hỗ trợ thiết bị của nhiều nhà sản xuất, bạn có thể tìm thấy danh sách các thiết bị được hỗ trợ tại [GitHub Page](https://github.com/ktbyers/netmiko#supports) + +### Các thư viện khác + +Cũng cần đề cập đến một số thư viện khác mà chúng ta chưa có cơ hội xem xét nhưng chúng cung cấp nhiều tính năng liên quan đến tự động hóa các thiết lập mạng. + +Thư viện `netaddr` được sử dụng để làm việc với các địa chỉ IP, có thể được cài đặt bằng lệnh `pip install netaddr` + +Nếu bạn muốn lưu trữ cấu hình của nhiều switch trong một bảng tính excel, thư viện `xlrd` sẽ cung cấp các phương thức để làm việc với excel và chuyển đổi các hàng và cột thành ma trận. Cài đặt nó bằng lệnh `pip install xlrd`. + +Bạn cũng có thể tìm thấy một số ví dụ khác về tự động hóa mạng mà tôi chưa có cơ hội giới thiệu [tại đây](https://github.com/ktbyers/pynet/tree/master/presentations/dfwcug/examples) + +Tôi sẽ kết thúc phần loạt bài về Mạng máy tính trong sê-ri #90DaysOfDevOps tại đây. Mạng máy tính là một lĩnh vực mà tôi thực sự đã không làm đến trong một thời gian và còn rất nhiều điều cần đề cập nhưng tôi hy vọng các ghi chú của mình và các tài nguyên được chia sẻ trong những ngày qua sẽ hữu ích với một số bạn. + +## Tài nguyên tham khảo + +- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg) +- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8) +- [3 Necessary Skills for Network Automation](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s) +- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8) +- [Practical Networking](http://www.practicalnetworking.net/) +- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126) + +Vì tôi không phải là một kỹ sư mạng nên phần lớn các ví dụ tôi sử dụng ở trên đến từ cuốn sách này. + +- [Hands-On Enterprise Automation with Python (Book)](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512) + +Hẹn gặp lại các bạn vào [Ngày 28](day28.md), nơi mà chúng ta sẽ tìm hiểu về điện toán đám mây (cloud computing) và các kiến thức cơ bản xoay quanh chủ đề này. 
diff --git a/2023.md b/2023.md index 2963820..5396737 100644 --- a/2023.md +++ b/2023.md @@ -77,8 +77,8 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich ### Runtime Defence & Monitoring - [✔️] ☁️ 28 > [System monitoring and auditing](2023/day28.md) -- [] ☁️ 29 > [Application level monitoring](2023/day29.md) -- [] ☁️ 30 > [Intrusion detection and anti-malware software](2023/day30.md) +- [✔️] ☁️ 29 > [Application level monitoring](2023/day29.md) +- [✔️] ☁️ 30 > [Detecting suspicious application behavior](2023/day30.md) - [] ☁️ 31 > [Firewalls and network protection](2023/day31.md) - [] ☁️ 32 > [Vulnerability and patch management](2023/day32.md) - [] ☁️ 33 > [Application whitelisting and software trust management](2023/day33.md) @@ -96,10 +96,10 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich ### Python -- [] 🏗️ 42 > [](2023/day42.md) -- [] 🏗️ 43 > [](2023/day43.md) -- [] 🏗️ 44 > [](2023/day44.md) -- [] 🏗️ 45 > [](2023/day45.md) +- [] 🏗️ 42 > [Programming Language: Introduction to Python](2023/day42.md) +- [] 🏗️ 43 > [Python Loops, functions, modules and libraries](2023/day43.md) +- [] 🏗️ 44 > [Data Structures and OOP in Python](2023/day44.md) +- [] 🏗️ 45 > [Debugging, testing and Regular expression](2023/day45.md) - [] 🏗️ 46 > [](2023/day46.md) - [] 🏗️ 47 > [](2023/day47.md) - [] 🏗️ 48 > [](2023/day48.md) diff --git a/2023/day29.md b/2023/day29.md index 8e86971..7609bb3 100644 --- a/2023/day29.md +++ b/2023/day29.md @@ -5,7 +5,7 @@ Let's start 😎 # Application logging -Application logs are important from many perspective. This is the way operators know what is happening inside applications they run on their infrastrucutre. For the same reason, keeping application logs is important from a security perspective because they provide a detailed record of the system's activity, which can be used to detect and investigate security incidents. +Application logs are important from many perspectives. This is the way operators know what is happening inside applications they run on their infrastructure. For the same reason, keeping application logs is important from a security perspective because they provide a detailed record of the system's activity, which can be used to detect and investigate security incidents. By analyzing application logs, security teams can identify unusual or suspicious activity, such as failed login attempts, access attempts to sensitive data, or other potentially malicious actions. Logs can also help track down the source of security breaches, including when and how an attacker gained access to the system, and what actions they took once inside. 
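+
+As a tiny illustration (a hypothetical sketch using Python's standard `logging` module, not code taken from the lab itself), an application might record a security-relevant event such as a failed login like this:
+
+```python
+import logging
+
+# Basic logging setup: timestamp, level, logger name and message
+logging.basicConfig(
+    level=logging.INFO,
+    format="%(asctime)s %(levelname)s %(name)s %(message)s",
+)
+logger = logging.getLogger("auth")
+
+def login(username: str, password: str) -> bool:
+    # Dummy credential check purely for illustration
+    ok = password == "correct-horse-battery-staple"
+    if ok:
+        logger.info("login succeeded user=%s", username)
+    else:
+        # This is the kind of entry a security team would look for
+        logger.warning("login failed user=%s", username)
+    return ok
+
+login("alice", "wrong-password")
+```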
@@ -107,7 +107,7 @@ We need to add Falco Helm repo and install the Falco services and the exporter: ```bash helm repo add falcosecurity https://falcosecurity.github.io/charts helm repo update -helm install falco falcosecurity/falco --set driver.kind=ebpf --set-file certs.server.key=$PWD/server.key,certs.server.crt=$PWD/server.crt,certs.ca.crt=$PWD/ca.crt --set falco.grpc.enabled=true,falco.grpcOutput.enabled=true +helm install falco falcosecurity/falco --set driver.kind=ebpf --set-file certs.server.key=$PWD/server.key,certs.server.crt=$PWD/server.crt,certs.ca.crt=$PWD/ca.crt --set falco.grpc.enabled=true,falco.grpcOutput.enabled=true,falco.grpc_output.enabled=true helm install falco-exporter --set-file certs.ca.crt=$PWD/ca.crt,certs.client.key=$PWD/client.key,certs.client.crt=$PWD/client.crt falcosecurity/falco-exporter ``` @@ -124,6 +124,9 @@ Go to "Dashboard" left side menu and click import. In "Import via grfana.com" in Now you should see Falco events in your Grafana! 😎 +![](images/day29-4.png) + + # Next... Next day we will look into how to detect attacks in runtime. See you tomorrow 😃 diff --git a/2023/day30.md b/2023/day30.md index e69de29..b710e87 100644 --- a/2023/day30.md +++ b/2023/day30.md @@ -0,0 +1,116 @@ +# Recap + +We were deep yesterday in setting up Falco in our Minikube. It is a great tool for detecting application and container behavior during runtime. We took its output and exported it to our Prometheus instance in the cluster and viewed the results in a dedicated Grafana dashboard. + +Today, we are going to set up some rules and alerts in Falco and see how detection and alerting work. + +Is your coffee around? Have your hacker hoodie on you? Let's do it 😈 + +# Runtime detection with Falco + +Falco is a powerful open-source tool that is designed for Kubernetes runtime security. Here are some reasons why Falco is a good choice for securing your Kubernetes environment. Falco provides real-time detection of security threats and potential vulnerabilities in your Kubernetes environment. It uses a rule-based engine to detect and alert suspicious activity, allowing you to quickly respond to security incidents. + +Falco allows you to create custom rules that are tailored to the specific needs of your environment. This allows you to detect and respond to security threats and potential vulnerabilities in a way that is tailored to your unique needs. Falco provides rich metadata about security events, including information about the container, pod, namespace, and other details. This makes it easy to investigate and respond to security incidents. + +## Using built-in rules to detect malicious events + +By this time you should have all the moving parts in place: +* Prometheus +* Grafana +* Falco + +Let's do something that is somewhat unusual for a production system. We will open a shell on a workload and install a package during runtime of the container. + +Let's install a minimalistic Nginx deployment: +```bash +kubectl create deployment nginx --image=nginx:1.19 +``` + +Now open a shell inside the Pod of the Nginx deployment: +```bash +kubectl exec -it `kubectl get pod | grep nginx | awk '{print $1}'` -- bash +``` + +And install a "curl" on the Pod using APT: +```bash +apt update && apt install -y curl +``` + +Since we are using Falco to monitor application behavior it should see all these activities, and it does! Let's go to our Grafana back (see previous days to see how to reconnect). + +In Grafana, go to the "explore" screen. Make sure that you use the Prometheus data source. 
+
+In the query builder select the metric "falco_events" and the label filter "k8s_pod_name", and set the filter to your Nginx Pod name.
+
+You will now see all the Falco events from this Pod
+
+![](images/day30-1.png)
+
+Note the rules that cause the events, among them you'll see the "Launch Package Management Process in Container" rule that fired. This event was generated due to our `apt install` command above.
+
+
+Take a moment here to appreciate the potential. By installing this well proven open-source stack you can create a complete runtime monitoring system and know what is happening in real-time in the systems you want to monitor and protect!
+
+
+## Creating custom rules
+
+
+Let's say you or your security team wants to know if the CLI tool `curl` has been invoked in one of the Pods (which should rarely happen in a production cluster, but an attacker would use it to report back information to her/himself).
+
+We need to write a "Falco rule" to detect it.
+
+Here are the basic steps to add a custom Falco rule:
+
+### Create the rule
+First, create a new rule file that defines the behavior you want to detect. Falco rules are written in YAML format and typically include a description of the behavior, a set of conditions that trigger the rule, and an output message that is generated when the rule is triggered.
+
+To detect that the "curl" command is executed using a Falco rule, you could create a new rule file with the following content:
+
+```yaml
+customRules:
+  rules-curl.yaml: |-
+    - rule: DetectCurlCommandExecution
+      desc: Detects the execution of the "curl" command
+      condition: spawned_process and proc.name == curl
+      output: "Curl command executed: %proc.cmdline"
+      priority: WARNING
+```
+
+Let's dive a little bit into what we have here.
+
+Falco instruments events in the Linux kernel and sends them to its rule engine. The rule engine goes over all the rules and tries to match them to the event. If a matching rule is found, Falco itself fires a rule-based event. These are the entries we see in Prometheus/Grafana. In our custom rule, the `condition` field is the "heart" of the rule and it is used to match the rule to the event.
+
+In this case, we have used a macro called `spawned_process` which evaluates to `true` if the event is a system call from user space to the kernel for spawning a new process (`execve` and friends). The second condition is on the name of the new process, which matches `curl`.
+
+To install this new rule, use the following Helm command to add it to our current deployment:
+```bash
+helm upgrade --install falco falcosecurity/falco --set driver.kind=ebpf --set-file certs.server.key=$PWD/server.key,certs.server.crt=$PWD/server.crt,certs.ca.crt=$PWD/ca.crt --set falco.grpc.enabled=true,falco.grpcOutput.enabled=true,falco.grpc_output.enabled=true -f
+```
+
+Make sure that the Falco Pod restarted and is running correctly.
+
+Let's return to our shell inside the Nginx pod.
+```bash
+kubectl exec -it `kubectl get pod | grep nginx | awk '{print $1}'` -- bash
+```
+
+We have installed `curl` here before, so we can invoke it now and simulate malicious behavior.
+```bash
+curl https://google.com
+```
+
+Falco with our new rule should have picked up this event, so you should go back to Grafana and check the Falco dashboard:
+
+
+![](images/day30-2.png)
+
+Voila!
+
+You have implemented and applied a custom rule in Falco!!!
+
+I hope this part gave you an insight into how this system works. 
+
+# Next
+
+Tomorrow we will move away from the world of applications and go to the network layer, see you then!
+
diff --git a/2023/day42.md b/2023/day42.md
index 8e501c0..3d22b4a 100644
--- a/2023/day42.md
+++ b/2023/day42.md
@@ -68,6 +68,6 @@ The print argument is a string, which is one of Python's basic data types for st
 
 ## Resources:
 
-[Learn Python - Full course by freeCodeCamp](https://youtu.be/rfscVS0vtbw)
-[Python tutorial for beginners by Nana](https://youtu.be/t8pPdKYpowI)
-[Python Crash Course book](https://amzn.to/40NfY45)
\ No newline at end of file
+- [Learn Python - Full course by freeCodeCamp](https://youtu.be/rfscVS0vtbw)
+- [Python tutorial for beginners by Nana](https://youtu.be/t8pPdKYpowI)
+- [Python Crash Course book](https://amzn.to/40NfY45)
\ No newline at end of file
diff --git a/2023/day43.md b/2023/day43.md
index c769f98..54948f1 100644
--- a/2023/day43.md
+++ b/2023/day43.md
@@ -1,4 +1,4 @@
-# Day 43 - Programming Language: Python
+# Day 43 - Programming Language: Python Loops, functions, modules and libraries
 
 Welcome to the second day of Python, and today we will cover some more concepts:
 - Loops
diff --git a/2023/day44.md b/2023/day44.md
index e69de29..3f09435 100644
--- a/2023/day44.md
+++ b/2023/day44.md
@@ -0,0 +1,125 @@
+# Day 44 - Programming Language: Python Data Structures and OOP
+
+Welcome to the third day of Python, and today we will cover some more advanced concepts:
+
+- Data Structures
+- Object Oriented Programming (OOP)
+
+## Data Structures:
+
+Python includes a number of data structures for storing and organizing data. The following are some of the most common ones:
+
+### Lists:
+
+Lists are used to store multiple items in a single variable. They can hold items of any type (including other lists), and their elements can be accessed via an index.
+Lists are mutable, which means they can be changed by adding, removing, or changing elements.
+Here's an example of how to make a list and access its elements:
+
+``` python
+thislist = ["apple", "banana", "orange"]
+print(thislist[0]) # OUTPUT apple
+print(thislist[2]) # OUTPUT orange
+```
+
+### Tuples:
+
+Tuples are similar to lists, but they are immutable, which means they cannot be **changed** once created. They are frequently used to represent fixed sets of data.
+Tuples can be created with or without parentheses, but parentheses are typically used to make the code more readable. Here's an example of a tuple and how to access its elements:
+
+``` python
+my_tuple = (1, 2, "three", [4, 5])
+print(my_tuple[0]) # OUTPUT 1
+print(my_tuple[2]) # OUTPUT "three"
+print(my_tuple[3][0]) # OUTPUT 4
+```
+
+### Dictionaries:
+
+Dictionaries are yet another versatile Python data structure that stores a collection of key-value pairs. The keys must be unique and unchangeable (strings and numbers are common), and the values can be of any type.
+Dictionaries can be changed by adding, removing, or changing key-value pairs.
+Here's an example of creating and accessing a dictionary's values:
+
+``` python
+my_dict = {"name": "Rishab", "project": "90DaysOfDevOps", "country": "Canada"}
+print(my_dict["name"]) # OUTPUT "Rishab"
+print(my_dict["project"]) # OUTPUT "90DaysOfDevOps"
+print(my_dict["country"]) # OUTPUT "Canada"
+```
+
+### Sets:
+
+Sets are used to store multiple unique items in a single variable. They are frequently used in mathematical operations such as union, intersection, and difference. 
+Sets are mutable, which means elements can be added or removed, but the elements themselves must be immutable, and a set cannot contain two items with the same value.
+Here's an example of how to make a set and then perform operations on it:
+
+``` python
+my_set = {1, 2, 3, 4, 5}
+other_set = {3, 4, 5, 6, 7}
+print(my_set.union(other_set)) # {1, 2, 3, 4, 5, 6, 7}
+print(my_set.intersection(other_set)) # {3, 4, 5}
+print(my_set.difference(other_set)) # {1, 2}
+```
+
+## Object Oriented Programming:
+
+In addition to data structures, I also want to talk about object-oriented programming (OOP) concepts in Python, which are used to structure code into reusable and modular components. Here are some of the most important OOP concepts to understand:
+
+### Class:
+
+A class is a template for creating objects. It specifies the attributes (data) and methods (functions) that its objects can have. Classes are defined using the `class` keyword, and objects are created using the class constructor. Here's an example of defining a `Person` class and creating an object of that class:
+
+``` python
+class Person:
+    def __init__(self, name, country):
+        self.name = name
+        self.country = country
+
+person = Person("Rishab", "Canada")
+print(person.name) # OUTPUT "Rishab"
+print(person.country) # OUTPUT "Canada"
+```
+
+### Inheritance:
+
+Inheritance is a technique for creating a new class from an existing one. The new class, known as a subclass, inherits the attributes and methods of the existing superclass.
+Subclasses can extend or override the superclass's attributes and methods to create new functionality. Here's an example of defining a subclass of `Person` called `Student`:
+
+``` python
+class Student(Person):
+    def __init__(self, name, country, major):
+        super().__init__(name, country)
+        self.major = major
+
+student = Student("Rishab", "Canada", "Computer Science")
+print(student.name) # OUTPUT "Rishab"
+print(student.country) # OUTPUT "Canada"
+print(student.major) # OUTPUT "Computer Science"
+```
+
+### Polymorphism:
+
+Polymorphism refers to the ability of objects to take on different forms or behaviors depending on their context.
+Polymorphism can be achieved by using inheritance and method overriding, as well as abstract classes and interfaces. Here's an example of a `speak()` method being implemented in both the `Person` and `Student` classes:
+
+``` python
+class Person:
+    def __init__(self, name, country):
+        self.name = name
+        self.country = country
+
+    def speak(self):
+        print("Hello, my name is {} and I am from {}.".format(self.name, self.country))
+
+class Student(Person):
+    def __init__(self, name, country, major):
+        super().__init__(name, country)
+        self.major = major
+
+    def speak(self):
+        print("Hello, my name is {} and I am a {} major.".format(self.name, self.major))
+
+person = Person("Rishab", "Canada")
+student = Student("John", "Canada", "Computer Science")
+
+person.speak() # "Hello, my name is Rishab and I am from Canada."
+student.speak() # "Hello, my name is John and I am a Computer Science major."
+```
diff --git a/2023/day45.md b/2023/day45.md
index e69de29..9672bbb 100644
--- a/2023/day45.md
+++ b/2023/day45.md
@@ -0,0 +1,124 @@
+# Day 45 - Python: Debugging, testing and Regular expressions
+
+Welcome to Day 4 of Python!
+Today we will learn about:
+
+- Debugging and testing
+- Regular expressions
+- Datetime library
+
+Let's start!
+
+## Debugging and testing
+
+Debugging is the process of finding and correcting errors or bugs in code.
Python includes a debugger called `pdb` that allows you to step through your code and inspect variables as you go. You can use `pdb` to help you figure out where your code is going wrong and how to fix it.
+
+``` python
+import pdb
+
+def add_numbers(x, y):
+    result = x + y
+    pdb.set_trace() # Start the debugger at this point in the code
+    return result
+
+result = add_numbers(2, 3)
+print(result)
+```
+
+In this example, we define the `add_numbers` function, which adds two numbers and returns the result. To start the debugger at a specific point in the code, we use the `pdb.set_trace()` function (in this case, after the result has been calculated). This enables us to inspect variables and step through the code to figure out what's going on.
+
+In addition to debugging, testing is an important part of programming. It entails creating test cases to ensure that your code is working properly. Python includes a `unittest` module that provides a framework for writing and running test cases.
+
+``` python
+import unittest
+
+def is_prime(n):
+    if n < 2:
+        return False
+    for i in range(2, n):
+        if n % i == 0:
+            return False
+    return True
+
+class TestIsPrime(unittest.TestCase):
+    def test_is_prime(self):
+        self.assertTrue(is_prime(2))
+        self.assertTrue(is_prime(3))
+        self.assertTrue(is_prime(5))
+        self.assertFalse(is_prime(4))
+
+if __name__ == '__main__':
+    unittest.main()
+```
+
+Output:
+
+``` bash
+----------------------------------------------------------------------
+Ran 1 test in 0.000s
+
+OK
+```
+
+## Regular expressions:
+
+In Python, regular expressions are a powerful tool for working with text data. They enable you to search for and match specific character patterns within a string. Python's `re` module includes functions for working with regular expressions.
+For example, you can use regular expressions to search for email addresses within a larger block of text, or to extract specific data from a string that follows a particular pattern.
+
+``` python
+import re
+
+# Search for a phone number in a string
+text = 'My phone number is 555-7777'
+match = re.search(r'\d{3}-\d{4}', text)
+if match:
+    print(match.group(0))
+
+# Extract email addresses from a string
+text = 'My email is example@devops.com, but I also use other@cloud.com'
+matches = re.findall(r'\S+@\S+', text)
+print(matches)
+```
+
+Output:
+
+``` bash
+555-7777
+['example@devops.com,', 'other@cloud.com']
+```
+
+Note that the simple `\S+@\S+` pattern also captures the trailing comma after the first address; a stricter pattern would be needed to exclude it.
+
+## Datetime library:
+
+As the name suggests, Python's `datetime` module allows you to work with dates and times in your code. It includes functions for formatting and manipulating date and time data, as well as classes for representing dates, times, and time intervals.
+The `datetime` module, for example, can be used to get the current date and time, calculate the difference between two dates, or convert between different date and time formats.
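+
+As a small illustration of that last point, converting between `datetime` objects and formatted strings is done with `strftime` and `strptime`; here is a minimal sketch (the format string is just an example):
+
+``` python
+from datetime import datetime
+
+# Format a datetime object as a custom string
+stamp = datetime(2023, 2, 1, 12, 0)
+print(stamp.strftime("%d/%m/%Y %H:%M"))  # 01/02/2023 12:00
+
+# Parse a string back into a datetime object
+parsed = datetime.strptime("01/02/2023 12:00", "%d/%m/%Y %H:%M")
+print(parsed)  # 2023-02-01 12:00:00
+```
+
+The broader example below shows getting the current time, building a specific date, and computing the difference between them: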
+ +``` python +from datetime import datetime, timedelta + +# Get the current date and time +now = datetime.now() +print(now) # Output: 2023-02-17 11:33:27.257712 + +# Create a datetime object for a specific date and time +date = datetime(2023, 2, 1, 12, 0) +print(date) # Output: 2023-02-01 12:00:00 + +# Calculate the difference between two dates +delta = now - date +print(delta) # Output: 15 days, 23:33:27.257712 +``` + +Output: + +``` bash +2023-02-17 11:33:27.257712 +2023-02-01 12:00:00 +15 days, 23:33:27.257712 +``` + +## Resources + +- [pdb - The Python Debugger](https://docs.python.org/3/library/pdb.html) +- [re - Regular expressions operations](https://docs.python.org/3/library/re.html) +- [datetime - Basic date and time types](https://docs.python.org/3/library/datetime.html) diff --git a/2023/images/day29-4.png b/2023/images/day29-4.png new file mode 100644 index 0000000..b717eee Binary files /dev/null and b/2023/images/day29-4.png differ diff --git a/2023/images/day30-1.png b/2023/images/day30-1.png new file mode 100644 index 0000000..f19a847 Binary files /dev/null and b/2023/images/day30-1.png differ diff --git a/2023/images/day30-2.png b/2023/images/day30-2.png new file mode 100644 index 0000000..0545123 Binary files /dev/null and b/2023/images/day30-2.png differ