Markdown Formatting Run1

First run through of the markdown formatting for days 60 through 90 and Readme
This commit is contained in:
Danny Murphy 2022-06-26 14:06:43 +01:00
parent efa9d1c3cd
commit 73cb2561cd
32 changed files with 1168 additions and 1137 deletions

View File

@ -1,12 +1,13 @@
--- ---
title: '#90DaysOfDevOps - Docker Containers, Provisioners & Modules - Day 60' title: "#90DaysOfDevOps - Docker Containers, Provisioners & Modules - Day 60"
published: false published: false
description: '90DaysOfDevOps - Docker Containers, Provisioners & Modules' description: "90DaysOfDevOps - Docker Containers, Provisioners & Modules"
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1049052 id: 1049052
--- ---
## Docker Containers, Provisioners & Modules ## Docker Containers, Provisioners & Modules
On [Day 59](day59.md) we provisioned a virtual machine using Terraform to our local FREE VirtualBox environment. In this section we are going to deploy a Docker container with some configuration to our local Docker environment.
@ -50,7 +51,7 @@ We then run our `terraform apply` followed by `docker ps` and you can see we hav
![](Images/Day60_IAC2.png) ![](Images/Day60_IAC2.png)
If we then open a browser we can navigate to http://localhost:8000/ and you will see we have access to our NGINX container. If we then open a browser we can navigate to `http://localhost:8000/` and you will see we have access to our NGINX container.
![](Images/Day60_IAC3.png) ![](Images/Day60_IAC3.png)
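For reference, the configuration behind a demo like this looks something like the sketch below. It is a minimal example assuming the kreuzwerker/docker provider; the resource names are illustrative, with the external port matching the 8000 used above.

```Terraform
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 2.16"
    }
  }
}

provider "docker" {}

# Pull the NGINX image and run it, publishing container port 80 on localhost:8000
resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.latest
  name  = "nginxtest"
  ports {
    internal = 80
    external = 8000
  }
}
```

Running `terraform init` followed by `terraform apply` against something along these lines is what produces the container you then see listed in `docker ps`.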
@ -134,10 +135,9 @@ We can then also navigate to our WordPress front end. Much like when we went thr
Obviously, now that we have covered containers and Kubernetes in some detail, we probably know that this is ok for testing, but if you were really going to be running a website you would not do this with containers alone; you would look at using Kubernetes to achieve this. Next up we are going to take a look at using Terraform with Kubernetes.
### Provisioners ### Provisioners
Provisioners are there so that if something cannot be declartive we have a way in which to parse this to our deployment. Provisioners are there so that if something cannot be declarative we have a way in which to parse this to our deployment.
If you have no other alternative and adding this complexity to your code is acceptable, then you can do so by running something similar to the following block of code (a hedged sketch follows the provisioner list below).
@ -160,9 +160,9 @@ The remote-exec provisioner invokes a script on a remote resource after it is cr
- local-exec - local-exec
- remote-exec - remote-exec
- vendor - vendor
- ansible - ansible
- chef - chef
- puppet - puppet
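As an illustration only (not the exact block from the walkthrough), a `remote-exec` provisioner attached to a resource might look like this; the AWS instance, AMI, SSH user and packages are purely hypothetical:

```Terraform
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # hypothetical AMI ID
  instance_type = "t2.micro"

  # How Terraform should connect to the new machine to run the commands
  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
    host        = self.public_ip
  }

  # Imperative steps that run once the resource has been created
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }
}
```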
### Modules ### Modules
@ -177,15 +177,16 @@ Another benefit to modules is that you can take these modules and use them on ot
We are breaking down our infrastructure into components, components are known here as modules. We are breaking down our infrastructure into components, components are known here as modules.
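Consuming a module is just a matter of pointing at its source and passing in variables. A small sketch, where the module path and the `port`/`message` inputs are invented for illustration:

```Terraform
module "webserver" {
  # Could equally point at a Terraform Registry or git source
  source = "./modules/webserver"

  # Input variables defined by the module (names are illustrative)
  port    = 8000
  message = "Hello 90DaysOfDevOps"
}
```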
## Resources ## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) - [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
- [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/)
- [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks)
- [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform)

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - Kubernetes & Multiple Environments - Day 61' title: "#90DaysOfDevOps - Kubernetes & Multiple Environments - Day 61"
published: false published: false
description: 90DaysOfDevOps - Kubernetes & Multiple Environments description: 90DaysOfDevOps - Kubernetes & Multiple Environments
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048743 id: 1048743
--- ---
## Kubernetes & Multiple Environments ## Kubernetes & Multiple Environments
So far during this section on Infrastructure as Code we have looked at deploying virtual machines, albeit to VirtualBox, but the premise is the same: we define in code what we want our virtual machine to look like and then we deploy it. The same goes for Docker containers, and in this session we are going to take a look at how Terraform can be used to interact with resources supported by Kubernetes.
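Before looking at the deployed resources, here is a rough idea of what wiring Terraform up to a cluster can look like. This is a minimal sketch assuming a local kubeconfig (for example from minikube); the namespace and deployment mirror the NGINX demo but are illustrative rather than the exact files used here.

```Terraform
provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_namespace" "nginx" {
  metadata {
    name = "nginx"
  }
}

resource "kubernetes_deployment" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.nginx.metadata[0].name
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"
        }
      }
    }
  }
}
```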
@ -109,13 +110,12 @@ We can now take a look at the deployed resources within our cluster.
![](Images/Day61_IAC4.png) ![](Images/Day61_IAC4.png)
Because we are using minikube, as you will have seen in the previous section, this has its own limitations when we try and play with Docker networking for ingress. But if we simply issue the `kubectl port-forward -n nginx svc/nginx 30201:80` command and open a browser to `http://localhost:30201/` we should see our NGINX page.
![](Images/Day61_IAC5.png) ![](Images/Day61_IAC5.png)
If you want to try out more detailed demos with Terraform and Kubernetes then the [HashiCorp Learn site](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider) is fantastic to run through. If you want to try out more detailed demos with Terraform and Kubernetes then the [HashiCorp Learn site](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider) is fantastic to run through.
### Multiple Environments ### Multiple Environments
If we wanted to take any of the demos we have run through, but now wanted specific production, staging and development environments looking exactly the same and leveraging this code, there are two approaches to achieve this with Terraform.
@ -129,37 +129,42 @@ Each of the above do have their pros and cons though.
### terraform workspaces ### terraform workspaces
Pros Pros
- Easy to get started - Easy to get started
- Convenient terraform.workspace expression - Convenient terraform.workspace expression
- Minimises code duplication - Minimises code duplication
Cons Cons
- Prone to human error (we were trying to eliminate this by using TF) - Prone to human error (we were trying to eliminate this by using TF)
- State stored within the same backend - State stored within the same backend
- Codebase doesnt unambiguously show deployment configurations. - Codebase doesn't unambiguously show deployment configurations.
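As a quick illustration of the workspace approach, the flow looks roughly like this (the workspace names are examples):

```Shell
# Create and list workspaces
terraform workspace new staging
terraform workspace new production
terraform workspace list

# Switch to a workspace and apply; each workspace keeps its own state
terraform workspace select staging
terraform apply

# Inside your configuration you can branch on the workspace name,
# e.g. name = "web-${terraform.workspace}"
```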
### File Structure ### File Structure
Pros Pros
- Isolation of backends - Isolation of backends
- improved security - improved security
- decreased potential for human error - decreased potential for human error
- Codebase fully represents deployed state - Codebase fully represents deployed state
Cons Cons
- Multiple terraform apply required to provision environments - Multiple terraform apply required to provision environments
- More code duplication, but can be minimised with modules. - More code duplication, but can be minimised with modules.
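For the file structure approach, a repository layout along these lines is typical; the folder and file names here are only an example:

```Text
├── modules/
│   └── webserver/
│       ├── main.tf
│       └── variables.tf
├── staging/
│   ├── backend.tf
│   ├── main.tf
│   └── terraform.tfvars
└── production/
    ├── backend.tf
    ├── main.tf
    └── terraform.tfvars
```

Each environment folder gets its own backend and its own `terraform apply`, which is where both the extra duplication and the isolation come from.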
## Resources ## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) - [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
- [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/)
- [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks)
- [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform)

View File

@ -1,25 +1,26 @@
--- ---
title: '#90DaysOfDevOps - Testing, Tools & Alternatives - Day 62' title: "#90DaysOfDevOps - Testing, Tools & Alternatives - Day 62"
published: false published: false
description: '90DaysOfDevOps - Testing, Tools & Alternatives' description: "90DaysOfDevOps - Testing, Tools & Alternatives"
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1049053 id: 1049053
--- ---
## Testing, Tools & Alternatives ## Testing, Tools & Alternatives
As we close out this section on Infrastructure as Code we must mention testing our code, the various tools available and then some of the alternatives to Terraform for achieving this. As I said at the start of the section, my focus was on Terraform because it is firstly free and open source, and secondly it is cross platform and agnostic to environments. There are also alternatives out there that should be considered, but the overall goal is to make people aware that this is the way to deploy your infrastructure.
### Code Rot ### Code Rot
The first area I want to cover in this session is code rot. Unlike application code, infrastructure as code might get used and then not touched for a very long time. Let's take the example that we are going to be using Terraform to deploy our VM environment in AWS; perfect, it works first time and we have our environment, but this environment doesn't change too often, so the code gets left, the state possibly (or hopefully) stored in a central location, but the code does not change.
What if something changes in the infrastructure? But it is done out of band, or other things change in our environment. What if something changes in the infrastructure? But it is done out of band, or other things change in our environment.
- Out of band changes - Out of band changes
- Unpinned versions - Unpinned versions
- Deprecated dependancies - Deprecated dependencies
- Unapplied changes - Unapplied changes
### Testing ### Testing
@ -28,26 +29,26 @@ Another huge area that follows on from code rot and in general is the ability to
First up there are some built in testing commands we can take a look at: First up there are some built in testing commands we can take a look at:
| Command              | Description                                                                                 |
| -------------------- | ------------------------------------------------------------------------------------------- |
| `terraform fmt`      | Rewrite Terraform configuration files to a canonical format and style.                      |
| `terraform validate` | Validates the configuration files in a directory, referring only to the configuration.      |
| `terraform plan`     | Creates an execution plan, which lets you preview the changes that Terraform plans to make. |
| Custom validation    | Validation of your input variables to ensure they match what you would expect them to be.   |
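The "Custom validation" row refers to validation blocks on input variables. A small sketch (the variable name and allowed values are made up for illustration):

```Terraform
variable "environment" {
  type        = string
  description = "Which environment to deploy into"

  validation {
    condition     = contains(["dev", "staging", "prod"], var.environment)
    error_message = "The environment value must be one of dev, staging or prod."
  }
}
```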
We also have some testing tools available external to Terraform: We also have some testing tools available external to Terraform:
- [tflint](https://github.com/terraform-linters/tflint) - [tflint](https://github.com/terraform-linters/tflint)
- Find possible errors - Find possible errors
- Warn about deprecated syntax, unused declarations. - Warn about deprecated syntax, unused declarations.
- Enforce best practices, naming conventions. - Enforce best practices, naming conventions.
Scanning tools Scanning tools
- [checkov](https://www.checkov.io/) - scans cloud infrastructure configurations to find misconfigurations before they're deployed. - [checkov](https://www.checkov.io/) - scans cloud infrastructure configurations to find misconfigurations before they're deployed.
- [tfsec](https://aquasecurity.github.io/tfsec/v1.4.2/) - static analysis security scanner for your Terraform code. - [tfsec](https://aquasecurity.github.io/tfsec/v1.4.2/) - static analysis security scanner for your Terraform code.
- [terrascan](https://github.com/accurics/terrascan) - static code analyzer for Infrastructure as Code. - [terrascan](https://github.com/accurics/terrascan) - static code analyser for Infrastructure as Code.
- [terraform-compliance](https://terraform-compliance.com/) - a lightweight, security and compliance focused test framework against terraform to enable negative testing capability for your infrastructure-as-code. - [terraform-compliance](https://terraform-compliance.com/) - a lightweight, security and compliance focused test framework against terraform to enable negative testing capability for your infrastructure-as-code.
- [snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files) - scans your Terraform code for misconfigurations and security issues - [snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files) - scans your Terraform code for misconfigurations and security issues
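All of these are typically run from the root of your Terraform configuration; roughly as follows, although exact flags vary by tool version:

```Shell
# Lint the configuration in the current directory
tflint

# Static analysis security scan
tfsec .

# Scan a directory of Terraform files for misconfigurations
checkov -d .
```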
@ -65,7 +66,7 @@ Worth a mention
- [Terragrunt](https://terragrunt.gruntwork.io/) - Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state. - [Terragrunt](https://terragrunt.gruntwork.io/) - Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state.
- [Atlantis](https://www.runatlantis.io/) - Terraform Pull Request Automation - [Atlantis](https://www.runatlantis.io/) - Terraform Pull Request Automation
### Alternatives ### Alternatives
@ -83,7 +84,7 @@ I think an interesting next step for me is to take some time and learn more abou
From a Pulumi comparison on their site From a Pulumi comparison on their site
*"Both Terraform and Pulumi offer a desired state infrastructure as code model where the code represents the desired infrastructure state and the deployment engine compares this desired state with the stacks current state and determines what resources need to be created, updated or deleted."* > "Both Terraform and Pulumi offer a desired state infrastructure as code model where the code represents the desired infrastructure state and the deployment engine compares this desired state with the stacks current state and determines what resources need to be created, updated or deleted."
The biggest difference I can see is that unlike the HashiCorp Configuration Language (HCL) Pulumi allows for general purpose languages like Python, TypeScript, JavaScript, Go and .NET. The biggest difference I can see is that unlike the HashiCorp Configuration Language (HCL) Pulumi allows for general purpose languages like Python, TypeScript, JavaScript, Go and .NET.
@ -92,15 +93,16 @@ A quick overview [Introduction to Pulumi: Modern Infrastructure as Code](https:/
This wraps up the Infrastructure as Code section. Next we move on to that little bit of overlap with configuration management, and in particular, once we get past the big picture of configuration management, we are going to be using Ansible for some of those tasks and demos.
## Resources ## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) - [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
- [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/)
- [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks)
- [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform)

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - The Big Picture: Configuration Management - Day 63' title: "#90DaysOfDevOps - The Big Picture: Configuration Management - Day 63"
published: false published: false
description: 90DaysOfDevOps - The Big Picture Configuration Management description: 90DaysOfDevOps - The Big Picture Configuration Management
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048711 id: 1048711
--- ---
## The Big Picture: Configuration Management ## The Big Picture: Configuration Management
Coming straight off the back of the section covering Infrastructure as Code, there is likely going to be some crossover as we talk about Configuration Management or Application Configuration Management. Coming straight off the back of the section covering Infrastructure as Code, there is likely going to be some crossover as we talk about Configuration Management or Application Configuration Management.
@ -18,11 +19,10 @@ Configuration management keeps you from making small or large changes that go un
### Scenario: Why would you want to use Configuration Management ### Scenario: Why would you want to use Configuration Management
The scenario, or why you'd want to use Configuration Management: meet Dean. He's our system administrator and Dean is a happy camper, pretty much
working on all of the systems in his environment.
What happens if a system fails, if there's a fire, or a server goes down? Dean knows exactly what to do; he can fix that fire really easily. The problems become really difficult for Dean, however, if multiple servers start failing, particularly when you have large and expanding environments; this is why Dean really needs a configuration management tool. Configuration Management tools can help make Dean look like a rockstar: all he has to do is write the right configuration code that allows him to push out the instructions on how to set up each of the servers quickly, effectively and at scale.
### Configuration Management tools ### Configuration Management tools
There are a variety of configuration management tools available, and each has specific features that make it better for some situations than others. There are a variety of configuration management tools available, and each has specific features that make it better for some situations than others.
@ -32,31 +32,32 @@ There are a variety of configuration management tools available, and each has sp
At this stage we will take a quick fire look at the options in the above picture before making our choice on which one we will use and why. At this stage we will take a quick fire look at the options in the above picture before making our choice on which one we will use and why.
- **Chef** - **Chef**
- Chef ensures configuration is applied consistently in every environment, at any scale with infrastructure automation. - Chef ensures configuration is applied consistently in every environment, at any scale with infrastructure automation.
- Chef is an open-source tool developed by OpsCode written in Ruby and Erlang. - Chef is an open-source tool developed by OpsCode written in Ruby and Erlang.
- Chef is best suited for organisations that have a hetrogenous infrastructure and are looking for mature solutions. - Chef is best suited for organisations that have a heterogeneous infrastructure and are looking for mature solutions.
- Recipes and Cookbooks determine the configuration code for your systems. - Recipes and Cookbooks determine the configuration code for your systems.
- Pro - A large collection of recipes are available - Pro - A large collection of recipes are available
- Pro - Integrates well with Git which provides a strong version control - Pro - Integrates well with Git which provides a strong version control
- Con - Steep learning curve, a considerable amount of time required. - Con - Steep learning curve, a considerable amount of time required.
- Con - The main server doesn't have much control. - Con - The main server doesn't have much control.
- Architecture - Server / Clients - Architecture - Server / Clients
- Ease of setup - Moderate - Ease of setup - Moderate
- Language - Procedural - Specify how to do a task - Language - Procedural - Specify how to do a task
- **Puppet** - **Puppet**
- Puppet is a configuration management tool that supports automatic deployment. - Puppet is a configuration management tool that supports automatic deployment.
- Puppet is built in Ruby and uses DSL for writing manifests. - Puppet is built in Ruby and uses DSL for writing manifests.
- Puppet also works well with hetrogenous infrastructure where the focus is on scalability. - Puppet also works well with heterogeneous infrastructure where the focus is on scalability.
- Pro - Large community for support. - Pro - Large community for support.
- Pro - Well developed reporting mechanism. - Pro - Well developed reporting mechanism.
- Con - Advance tasks require knowledge of Ruby language. - Con - Advance tasks require knowledge of Ruby language.
- Con - The main server doesn't have much control. - Con - The main server doesn't have much control.
- Architecture - Server / Clients - Architecture - Server / Clients
- Ease of setup - Moderate - Ease of setup - Moderate
- Language - Declartive - Specify only what to do - Language - Declarative - Specify only what to do
- **Ansible** - **Ansible**
- Ansible is an IT automation tool that automates configuration management, cloud provisioning, deployment and orchestration. - Ansible is an IT automation tool that automates configuration management, cloud provisioning, deployment and orchestration.
- The core of Ansible playbooks are written in YAML. (Should really do a section on YAML as we have seen this a few times) - The core of Ansible playbooks are written in YAML. (Should really do a section on YAML as we have seen this a few times)
- Ansible works well when there are environments that focus on getting things up and running fast. - Ansible works well when there are environments that focus on getting things up and running fast.
@ -66,7 +67,7 @@ At this stage we will take a quick fire look at the options in the above picture
- Con - Performance speed is often less than other tools (Faster than Dean doing it himself manually) - Con - Performance speed is often less than other tools (Faster than Dean doing it himself manually)
- Con - YAML not as powerful as Ruby but less of a learning curve. - Con - YAML not as powerful as Ruby but less of a learning curve.
- Architecture - Client Only - Architecture - Client Only
- Ease of setup - Very Easy - Ease of setup - Very Easy
- Language - Procedural - Specify how to do a task - Language - Procedural - Specify how to do a task
- **SaltStack** - **SaltStack**
@ -78,8 +79,8 @@ At this stage we will take a quick fire look at the options in the above picture
- Con - Setup phase is tough - Con - Setup phase is tough
- Con - New web ui which is much less developed than the others. - Con - New web ui which is much less developed than the others.
- Architecture - Server / Clients - Architecture - Server / Clients
- Ease of setup - Moderate - Ease of setup - Moderate
- Language - Declartive - Specify only what to do - Language - Declarative - Specify only what to do
### Ansible vs Terraform ### Ansible vs Terraform
@ -87,16 +88,14 @@ The tool that we will be using for this section is going to be Ansible. (Easy to
I think it is important to touch on some of the differences between Ansible and Terraform before we look into the tooling a little further. I think it is important to touch on some of the differences between Ansible and Terraform before we look into the tooling a little further.
|                | Ansible                                                       | Terraform                                                           |
| -------------- | ------------------------------------------------------------- | ------------------------------------------------------------------- |
| Type           | Ansible is a configuration management tool                     | Terraform is an orchestration tool                                   |
| Infrastructure | Ansible provides support for mutable infrastructure            | Terraform provides support for immutable infrastructure              |
| Language       | Ansible follows procedural language                            | Terraform follows a declarative language                             |
| Provisioning   | Ansible provides partial provisioning (VM, Network, Storage)   | Terraform provides extensive provisioning (VM, Network, Storage)     |
| Packaging      | Ansible provides complete support for packaging & templating   | Terraform provides partial support for packaging & templating        |
| Lifecycle Mgmt | Ansible does not have lifecycle management                      | Terraform is heavily dependent on lifecycle and state management     |
## Resources ## Resources
@ -104,5 +103,4 @@ I think it is important to touch on some of the differences between Ansible and
- [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ)
- [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s)
See you on [Day 64](day64.md) See you on [Day 64](day64.md)

View File

@ -1,17 +1,19 @@
--- ---
title: '#90DaysOfDevOps - Ansible: Getting Started - Day 64' title: "#90DaysOfDevOps - Ansible: Getting Started - Day 64"
published: false published: false
description: '90DaysOfDevOps - Ansible: Getting Started' description: "90DaysOfDevOps - Ansible: Getting Started"
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1048765 id: 1048765
--- ---
## Ansible: Getting Started ## Ansible: Getting Started
We covered a little of what Ansible is in the [big picture session yesterday](day63.md), but we are going to get started with a little more information on top of that here. Firstly, Ansible comes from Red Hat. Secondly, it is agentless, connects via SSH and runs commands. Thirdly, it is cross platform (Linux & macOS, WSL2) and open source (there is also a paid-for enterprise option). Ansible pushes configuration vs other models.
### Ansible Installation ### Ansible Installation
As you might imagine, Red Hat and the Ansible team have done a fantastic job of documenting Ansible. This generally starts with the installation steps, which you can find [here](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html). Remember we said that Ansible is an agentless automation tool; the tool is deployed to a system referred to as a "Control Node" and from this control node it manages machines and other devices (possibly network) over SSH.
It does state in the above linked documentation that the Windows OS cannot be used as the control node. It does state in the above linked documentation that the Windows OS cannot be used as the control node.
@ -20,12 +22,13 @@ For my control node and for at least this demo I am going to use the Linux VM we
This system was running Ubuntu and the installation simply needs the following commands.
``` ```Shell
sudo apt update sudo apt update
sudo apt install software-properties-common sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible sudo apt install ansible
``` ```
Now we should have ansible installed on our control node, you can check this by running `ansible --version` and you should see something similar to this below. Now we should have ansible installed on our control node, you can check this by running `ansible --version` and you should see something similar to this below.
![](Images/Day64_config1.png) ![](Images/Day64_config1.png)
@ -81,8 +84,4 @@ Ad hoc commands use a declarative model, calculating and executing the actions r
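To make that concrete, here are a couple of ad hoc examples; the `webservers` group and the apache2 service are assumptions taken from the inventory and playbooks used later in this series:

```Shell
# Check connectivity to every host in the inventory
ansible all -m ping

# Declarative ad hoc command: only acts if the service is not already started
ansible webservers -m service -a "name=apache2 state=started" --become
```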
- [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ)
- [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s)
See you on [Day 65](day65.md) See you on [Day 65](day65.md)

View File

@ -1,12 +1,13 @@
--- ---
title: '#90DaysOfDevOps - Ansible Playbooks - Day 65' title: "#90DaysOfDevOps - Ansible Playbooks - Day 65"
published: false published: false
description: 90DaysOfDevOps - Ansible Playbooks description: 90DaysOfDevOps - Ansible Playbooks
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1049054 id: 1049054
--- ---
### Ansible Playbooks ### Ansible Playbooks
In this section we will take a look at the main reason, as I see it at least, for Ansible. I mean, it is great to take a single command and hit many different servers to perform simple commands such as rebooting a long list of servers, saving the hassle of having to connect to each one individually.
@ -25,7 +26,7 @@ These playbooks are written in YAML (YAML aint markup language) you will find
Let's take a look at a simple playbook called playbook.yml.
``` ```Yaml
- name: Simple Play - name: Simple Play
hosts: localhost hosts: localhost
connection: local connection: local
@ -47,7 +48,7 @@ Our second task was to set a ping, this is not an ICMP ping but a python script
Then our third task, or really our second defined task (as the first one will run unless you disable it), was the printing of a message telling us our OS. In this task we are using conditionals: we could run this playbook against all different types of operating systems and this would return the OS name. We are simply messaging this output for ease, but we could add a task to say something like:
``` ```Yaml
tasks: tasks:
- name: "shut down Debian flavoured systems" - name: "shut down Debian flavoured systems"
command: /sbin/shutdown -t now command: /sbin/shutdown -t now
@ -56,11 +57,11 @@ tasks:
### Vagrant to setup our environment ### Vagrant to setup our environment
We are going to use Vagrant to set up our node environment. I am going to keep this at a reasonable 4 nodes, but you can hopefully see that this could easily be 300 or 3000, and this is the power of Ansible and other configuration management tools: being able to configure your servers at scale.
You can find this file located here ([Vagrantfile](/Days/Configmgmt/Vagrantfile)) You can find this file located here ([Vagrantfile](/Days/Configmgmt/Vagrantfile))
``` ```Vagrant
Vagrant.configure("2") do |config| Vagrant.configure("2") do |config|
servers=[ servers=[
{ {
@ -121,7 +122,7 @@ Now that we have our environment ready, we can check ansible and for this we wil
I have added the following to the default hosts file. I have added the following to the default hosts file.
``` ```Text
[control] [control]
ansible-control ansible-control
@ -136,19 +137,21 @@ web02
db01 db01
``` ```
![](Images/Day65_config2.png) ![](Images/Day65_config2.png)
Before moving on we want to make sure we can run a command against our nodes. Let's run `ansible nodes -m command -a hostname`; this simple command will test that we have connectivity and report back our host names.
Also note that I have added these nodes and IPs to my Ubuntu control node within the /etc/hosts file to ensure connectivity. We might also need to do SSH configuration for each node from the Ubuntu box. Also note that I have added these nodes and IPs to my Ubuntu control node within the /etc/hosts file to ensure connectivity. We might also need to do SSH configuration for each node from the Ubuntu box.
``` ```Text
192.168.169.140 ansible-control 192.168.169.140 ansible-control
192.168.169.130 db01 192.168.169.130 db01
192.168.169.131 web01 192.168.169.131 web01
192.168.169.132 web02 192.168.169.132 web02
192.168.169.133 loadbalancer 192.168.169.133 loadbalancer
``` ```
![](Images/Day65_config3.png) ![](Images/Day65_config3.png)
At this stage we want to run through setting up SSH keys between your control node and your server nodes. This is what we are going to do next. Another way here could be to add variables into your hosts file to give a username and password; I would advise against this as it is never going to be a best practice.
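A rough outline of that key distribution, assuming the default vagrant user on the boxes and the host names from our inventory:

```Shell
# Generate a key pair on the control node (accept the defaults)
ssh-keygen -t rsa

# Copy the public key to each managed node
ssh-copy-id vagrant@web01
ssh-copy-id vagrant@web02
ssh-copy-id vagrant@db01
ssh-copy-id vagrant@loadbalancer
```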
@ -173,8 +176,8 @@ Before running any playbooks I like to make sure that I have simple connectivity
![](Images/Day65_config4.png) ![](Images/Day65_config4.png)
### Our First "real" Ansible Playbook ### Our First "real" Ansible Playbook
Our first Ansible playbook is going to configure our webservers, we have grouped these in our hosts file under the grouping [webservers]. Our first Ansible playbook is going to configure our webservers, we have grouped these in our hosts file under the grouping [webservers].
Before we run our playbook we can confirm that our web01 and web02 do not have apache installed. The top of the screenshot below is showing you the folder and file layout I have created within my ansible control to run this playbook, we have the `playbook1.yml`, then in the templates folder we have the `index.html.j2` and `ports.conf.j2` files. You can find these files in the folder listed above in the repository. Before we run our playbook we can confirm that our web01 and web02 do not have apache installed. The top of the screenshot below is showing you the folder and file layout I have created within my ansible control to run this playbook, we have the `playbook1.yml`, then in the templates folder we have the `index.html.j2` and `ports.conf.j2` files. You can find these files in the folder listed above in the repository.
@ -185,8 +188,7 @@ Then we SSH into web01 to check if we have apache installed?
You can see from the above that we have not got apache installed on our web01 so we can fix this by running the below playbook. You can see from the above that we have not got apache installed on our web01 so we can fix this by running the below playbook.
```Yaml
```
- hosts: webservers - hosts: webservers
become: yes become: yes
vars: vars:
@ -224,6 +226,7 @@ You can see from the above that we have not got apache installed on our web01 so
name: apache2 name: apache2
state: restarted state: restarted
``` ```
Breaking down the above playbook: Breaking down the above playbook:
- `- hosts: webservers` this is saying that our group to run this playbook on is a group called webservers - `- hosts: webservers` this is saying that our group to run this playbook on is a group called webservers

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - Ansible Playbooks Continued... - Day 66' title: "#90DaysOfDevOps - Ansible Playbooks Continued... - Day 66"
published: false published: false
description: 90DaysOfDevOps - Ansible Playbooks Continued... description: 90DaysOfDevOps - Ansible Playbooks Continued...
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,7 +7,8 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048712 id: 1048712
--- ---
## Ansible Playbooks Continued...
## Ansible Playbooks (Continued)
In our last section we started with creating our small lab using a Vagrantfile to deploy 4 machines, and we used the Linux machine we created in that section as our Ansible control system.
@ -21,7 +22,7 @@ Before we get into further automation and deployment we should cover the ability
we are basically going to copy our tasks into their own file within a folder. we are basically going to copy our tasks into their own file within a folder.
``` ```Yaml
- name: ensure apache is at the latest version - name: ensure apache is at the latest version
apt: name=apache2 state=latest apt: name=apache2 state=latest
@ -46,7 +47,7 @@ we are basically going to copy our tasks into their own file within a folder.
and the same for the handlers. and the same for the handlers.
``` ```Yaml
- name: restart apache - name: restart apache
service: service:
name: apache2 name: apache2
@ -85,7 +86,7 @@ Copy and paste is easy to move those files but we also need to make a change to
We also need to change our playbook now to refer to our new role. In the playbook1.yml and playbook2.yml we determine our tasks and handlers in different ways as we changed these between the two versions. We need to change our playbook to use this role as per below: We also need to change our playbook now to refer to our new role. In the playbook1.yml and playbook2.yml we determine our tasks and handlers in different ways as we changed these between the two versions. We need to change our playbook to use this role as per below:
``` ```Yaml
- hosts: webservers - hosts: webservers
become: yes become: yes
vars: vars:

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - Using Roles & Deploying a Loadbalancer - Day 67' title: "#90DaysOfDevOps - Using Roles & Deploying a Loadbalancer - Day 67"
published: false published: false
description: 90DaysOfDevOps - Using Roles & Deploying a Loadbalancer description: 90DaysOfDevOps - Using Roles & Deploying a Loadbalancer
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048713 id: 1048713
--- ---
## Using Roles & Deploying a Loadbalancer ## Using Roles & Deploying a Loadbalancer
In the last session we covered roles and used the `ansible-galaxy` command to help create our folder structures for some roles that we are going to use. We finished up with a much tidier working repository for our configuration code as everything is hidden away in our role folders. In the last session we covered roles and used the `ansible-galaxy` command to help create our folder structures for some roles that we are going to use. We finished up with a much tidier working repository for our configuration code as everything is hidden away in our role folders.
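For anyone who skipped that step, the role skeletons were created with commands along these lines; the role names are illustrative and should match whatever roles you are building:

```Shell
# Create the skeleton folder structure for each role
cd roles
ansible-galaxy init common
ansible-galaxy init apache2
ansible-galaxy init nginx
```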
@ -18,9 +19,10 @@ At this point if you have only used `vagrant up web01 web02` now is the time to
We have already defined this new machine in our hosts file, but we do not have the ssh key configured until it is available, so we need to also run `ssh-copy-id loadbalancer` when the system is up and ready. We have already defined this new machine in our hosts file, but we do not have the ssh key configured until it is available, so we need to also run `ssh-copy-id loadbalancer` when the system is up and ready.
### Common role ### Common role
At the end of yesterday's session I created the role of `common`; common will be used across all of our servers, whereas the other roles are specific to use cases. The applications I am going to install as common are somewhat spurious and I cannot see many reasons for this to be the case, but it shows the objective. In our common role folder structure, navigate to the tasks folder and you will have a main.yml. In this YAML we need to point to our install_tools.yml file, and we do this by adding a line `- import_tasks: install_tools.yml`; this used to be `include` but that is going to be deprecated soon enough, so we are using import_tasks.
``` ```Yaml
- name: "Install Common packages" - name: "Install Common packages"
apt: name={{ item }} state=latest apt: name={{ item }} state=latest
with_items: with_items:
@ -31,7 +33,7 @@ I created at the end of yesterdays session the role of `common`, common will be
In our playbook we then add in the common role for each host block. In our playbook we then add in the common role for each host block.
``` ```Yaml
- hosts: webservers - hosts: webservers
become: yes become: yes
vars: vars:
@ -51,7 +53,7 @@ First of all we are going to add a host block to our playbook. This block will i
The playbook can be found here. [playbook4.yml](Days/../Configmgmt/ansible-scenario4/playbook4.yml) The playbook can be found here. [playbook4.yml](Days/../Configmgmt/ansible-scenario4/playbook4.yml)
``` ```Yaml
- hosts: webservers - hosts: webservers
become: yes become: yes
vars: vars:
@ -87,7 +89,7 @@ Now that we have our webservers and loadbalancer configured we should now be abl
If you are following along and you do not have this state then it could be down to the server IP addresses you have in your environment. The file can be found in `templates\mysite.j2` and looks similar to the below; you would need to update it with your webserver IP addresses.
``` ```J2
upstream webservers { upstream webservers {
server 192.168.169.131:8000; server 192.168.169.131:8000;
server 192.168.169.132:8000; server 192.168.169.132:8000;
@ -101,6 +103,7 @@ If you are following along and you do not have this state then it could be down
} }
} }
``` ```
I am pretty confident that what we have installed is all good, but let's use an ad hoc command with Ansible to check the installation of these common tools.
`ansible loadbalancer -m command -a neofetch` `ansible loadbalancer -m command -a neofetch`

View File

@ -1,23 +1,24 @@
--- ---
title: '#90DaysOfDevOps - Tags, Variables, Inventory & Database Server config - Day 68' title: "#90DaysOfDevOps - Tags, Variables, Inventory & Database Server config - Day 68"
published: false published: false
description: '90DaysOfDevOps - Tags, Variables, Inventory & Database Server config' description: "90DaysOfDevOps - Tags, Variables, Inventory & Database Server config"
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1048780 id: 1048780
--- ---
## Tags, Variables, Inventory & Database Server config ## Tags, Variables, Inventory & Database Server config
### Tags ### Tags
As we left our playbook in the session yesterday, we would need to run every task and play within that playbook, which means we would have to run the webservers and loadbalancer plays and tasks to completion.
However tags can enable us to seperate these out if we want. This could be an effcient move if we have extra large and long playbooks in our environments. However tags can enable us to separate these out if we want. This could be an efficient move if we have extra large and long playbooks in our environments.
In our playbook file, in this case we are using [ansible-scenario5](Configmgmt/ansible-scenario5/playbook5.yml) In our playbook file, in this case we are using [ansible-scenario5](Configmgmt/ansible-scenario5/playbook5.yml)
``` ```Yaml
- hosts: webservers - hosts: webservers
become: yes become: yes
vars: vars:
@ -36,6 +37,7 @@ In our playbook file, in this case we are using [ansible-scenario5](Configmgmt/a
- nginx - nginx
tags: proxy tags: proxy
``` ```
We can then confirm this by using the `ansible-playbook playbook5.yml --list-tags` and the list tags is going to outline the tags we have defined in our playbook. We can then confirm this by using the `ansible-playbook playbook5.yml --list-tags` and the list tags is going to outline the tags we have defined in our playbook.
![](Images/Day68_config1.png) ![](Images/Day68_config1.png)
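To actually limit a run to one of those tags we can pass `--tags` to `ansible-playbook`; a short sketch, using the `proxy` tag defined on the loadbalancer play above.

```Shell
# Run only the plays and tasks tagged "proxy" (the nginx loadbalancer play above)
ansible-playbook playbook5.yml --tags proxy

# Or run everything except those tagged tasks
ansible-playbook playbook5.yml --skip-tags proxy
```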
@ -89,13 +91,14 @@ An idea would be to potentially use one of these variables within our nginx temp
} }
} }
``` ```
The outcome of the above will look the same as it does right now but if we added more webservers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured. The outcome of the above will look the same as it does right now but if we added more webservers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured.
### User created ### User created
User created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there. User created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there.
``` ```Yaml
- hosts: webservers - hosts: webservers
become: yes become: yes
vars: vars:
@ -117,7 +120,7 @@ User created variables are what we have created ourselves. If you take a look in
We can however keep our playbook clear of variables by moving them to their own file. We are going to do this but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. In the root of that folder we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file. Removing them from the playbook along with vars: as well. We can however keep our playbook clear of variables by moving them to their own file. We are going to do this but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. In the root of that folder we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file. Removing them from the playbook along with vars: as well.
``` ```Yaml
http_port: 8000 http_port: 8000
https_port: 4443 https_port: 4443
html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!" html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!"
@ -125,7 +128,7 @@ html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!"
Because we are associating this as a global variable we could also add in our NTP and DNS servers here as well. The variables are set from the folder structure that we have created. You can see below how clean our Playbook now looks. Because we are associating this as a global variable we could also add in our NTP and DNS servers here as well. The variables are set from the folder structure that we have created. You can see below how clean our Playbook now looks.
``` ```Yaml
- hosts: webservers - hosts: webservers
become: yes become: yes
roles: roles:
@ -143,10 +146,10 @@ Because we are associating this as a global variable we could also add in our NT
One of those variables was the http_port, we can use this again in our for loop within the mysite.j2 as per below: One of those variables was the http_port, we can use this again in our for loop within the mysite.j2 as per below:
``` ```J2
#Dynamic Config for server {{ ansible_facts['nodename'] }} #Dynamic Config for server {{ ansible_facts['nodename'] }}
upstream webservers { upstream webservers {
{% for host in groups['webservers'] %} {% for host in groups['webservers'] %}
server {{ hostvars[host]['ansible_facts']['nodename'] }}:{{ http_port }}; server {{ hostvars[host]['ansible_facts']['nodename'] }}:{{ http_port }};
{% endfor %} {% endfor %}
} }
@ -162,13 +165,14 @@ One of those variables was the http_port, we can use this again in our for loop
We can also define an ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on. We can also define an ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on.
``` ```J2
<html> <html>
<h1>{{ html_welcome_msg }}! I'm webserver {{ ansible_facts['nodename'] }} </h1> <h1>{{ html_welcome_msg }}! I'm webserver {{ ansible_facts['nodename'] }} </h1>
</html> </html>
``` ```
The result of running the `ansible-playbook playbook6.yml` command with our variable changes is that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group. The result of running the `ansible-playbook playbook6.yml` command with our variable changes is that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group.
![](Images/Day68_config5.png) ![](Images/Day68_config5.png)
@ -191,7 +195,7 @@ Let's then use `ansible-galaxy init roles/mysql` to create a new folder structur
In our playbook we are going to add a new play block for the database configuration. We have our group database defined in our /etc/ansible/hosts file. We then instruct our database group to have the role common and a new role called mysql which we created in the previous step. We are also tagging our database group with database, this means as we discussed earlier we can choose to only run against these tags if we wish. In our playbook we are going to add a new play block for the database configuration. We have our group database defined in our /etc/ansible/hosts file. We then instruct our database group to have the role common and a new role called mysql which we created in the previous step. We are also tagging our database group with database, this means as we discussed earlier we can choose to only run against these tags if we wish.
``` ```Yaml
- hosts: webservers - hosts: webservers
become: yes become: yes
roles: roles:
@ -220,7 +224,7 @@ Within our roles folder structure you will now have the tree automatically creat
Handlers - main.yml Handlers - main.yml
``` ```Yaml
# handlers file for roles/mysql # handlers file for roles/mysql
- name: restart mysql - name: restart mysql
service: service:
@ -232,7 +236,7 @@ Tasks - install_mysql.yml, main.yml & setup_mysql.yml
install_mysql.yml - this task is going to be there to install mysql and ensure that the service is running. install_mysql.yml - this task is going to be there to install mysql and ensure that the service is running.
``` ```Yaml
- name: "Install Common packages" - name: "Install Common packages"
apt: name={{ item }} state=latest apt: name={{ item }} state=latest
with_items: with_items:
@ -256,7 +260,7 @@ install_mysql.yml - this task is going to be there to install mysql and ensure t
main.yml is a pointer file that will suggest that we import_tasks from these files. main.yml is a pointer file that will suggest that we import_tasks from these files.
``` ```Yaml
# tasks file for roles/mysql # tasks file for roles/mysql
- import_tasks: install_mysql.yml - import_tasks: install_mysql.yml
- import_tasks: setup_mysql.yml - import_tasks: setup_mysql.yml
@ -264,7 +268,7 @@ main.yml is a pointer file that will suggest that we import_tasks from these fil
setup_mysql.yml - This task will create our database and database user. setup_mysql.yml - This task will create our database and database user.
``` ```Yaml
- name: Create my.cnf configuration file - name: Create my.cnf configuration file
template: src=templates/my.cnf.j2 dest=/etc/mysql/conf.d/mysql.cnf template: src=templates/my.cnf.j2 dest=/etc/mysql/conf.d/mysql.cnf
notify: restart mysql notify: restart mysql
@ -290,7 +294,7 @@ setup_mysql.yml - This task will create our database and database user.
You can see from the above we are using some variables to determine some of our configuration such as passwords, usernames and databases, this is all stored in our group_vars/all/common_variables.yml file. You can see from the above we are using some variables to determine some of our configuration such as passwords, usernames and databases, this is all stored in our group_vars/all/common_variables.yml file.
``` ```Yaml
http_port: 8000 http_port: 8000
https_port: 4443 https_port: 4443
html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!" html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!"
@ -301,9 +305,10 @@ db_user: devops
db_pass: DevOps90 db_pass: DevOps90
db_name: 90DaysOfDevOps db_name: 90DaysOfDevOps
``` ```
We also have the my.cnf.j2 file in the templates folder, which looks like below: We also have the my.cnf.j2 file in the templates folder, which looks like below:
``` ```J2
[mysql] [mysql]
bind-address = 0.0.0.0 bind-address = 0.0.0.0
``` ```
@ -328,7 +333,7 @@ When we have connected let's first make sure we have our user created called dev
![](Images/Day68_config8.png) ![](Images/Day68_config8.png)
Now we can issue the `SHOW DATABASES;` command to see our new database that has also been created. Now we can issue the `SHOW DATABASES;` command to see our new database that has also been created.
![](Images/Day68_config9.png) ![](Images/Day68_config9.png)
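For reference, the sequence to get to that point looked roughly like the following; treat it as a sketch, it assumes SSH access to the database host (the hostname below is only an example) and the `devops` user and `90DaysOfDevOps` database from our variables file.

```Shell
# SSH to the database server (hostname is an example, use your own inventory name)
ssh db01

# Connect to MySQL as root to check the objects the playbook created
sudo /usr/bin/mysql -u root -p

# Inside the mysql> prompt:
#   SELECT user, host FROM mysql.user;   -- confirm the devops user exists
#   SHOW DATABASES;                      -- confirm the 90DaysOfDevOps database exists
```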

View File

@ -1,12 +1,13 @@
--- ---
title: '#90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault - Day 69' title: "#90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault - Day 69"
published: false published: false
description: '90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault' description: "90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault"
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1048714 id: 1048714
--- ---
## All other things Ansible - Automation Controller (Tower), AWX, Vault ## All other things Ansible - Automation Controller (Tower), AWX, Vault
Rounding out the section on Configuration Management I wanted to have a look into the other areas that you might come across when dealing with Ansible. Rounding out the section on Configuration Management I wanted to have a look into the other areas that you might come across when dealing with Ansible.
@ -52,7 +53,7 @@ I forked the repo above and then ran `git clone https://github.com/MichaelCade/a
In the cloned repository you will find an awx-demo.yml file; we need to change `NodePort` for `ClusterIP` as per below: In the cloned repository you will find an awx-demo.yml file; we need to change `NodePort` for `ClusterIP` as per below:
``` ```Yaml
--- ---
apiVersion: awx.ansible.com/v1beta1 apiVersion: awx.ansible.com/v1beta1
kind: AWX kind: AWX

View File

@ -1,8 +1,8 @@
--- ---
title: '#90DaysOfDevOps - The Big Picture: CI/CD Pipelines - Day 70' title: "#90DaysOfDevOps - The Big Picture: CI/CD Pipelines - Day 70"
published: false published: false
description: 90DaysOfDevOps - The Big Picture CI/CD Pipelines description: 90DaysOfDevOps - The Big Picture CI/CD Pipelines
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1048836 id: 1048836
@ -10,13 +10,13 @@ id: 1048836
## The Big Picture: CI/CD Pipelines ## The Big Picture: CI/CD Pipelines
A CI/CD (Continous Integration/Continous Deployment) Pipeline implementation is the backbone of the modern DevOps environment. A CI/CD (Continuous Integration/Continuous Deployment) Pipeline implementation is the backbone of the modern DevOps environment.
It bridges the gap between development and operations by automating the build, test and deployment of applications. It bridges the gap between development and operations by automating the build, test and deployment of applications.
We covered a lot of this Continous mantra in the opening section of the challenge. But to reiterate: We covered a lot of this continuous mantra in the opening section of the challenge. But to reiterate:
Continous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliabily. Automated build and test workflow steps triggered by Contininous Integration ensures that code changes being merged into the repository are reliable. Continuous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliably. Automated build and test workflow steps triggered by Continuous Integration ensure that code changes being merged into the repository are reliable.
That code / Application is then delivered quickly and seamlessly as part of the Continuous Deployment process. That code / Application is then delivered quickly and seamlessly as part of the Continuous Deployment process.
@ -24,7 +24,7 @@ That code / Application is then delivered quickly and seamlessly as part of the
- Ship software quickly and efficiently - Ship software quickly and efficiently
- Facilitates an effective process for getting applications to market as fast as possible - Facilitates an effective process for getting applications to market as fast as possible
- A continous flow of bug fixes and new features without waiting months or years for version releases. - A continuous flow of bug fixes and new features without waiting months or years for version releases.
The ability for developers to make small impactful changes regularly means we get faster fixes and more features sooner. The ability for developers to make small impactful changes regularly means we get faster fixes and more features sooner.

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - What is Jenkins? - Day 71' title: "#90DaysOfDevOps - What is Jenkins? - Day 71"
published: false published: false
description: 90DaysOfDevOps - What is Jenkins? description: 90DaysOfDevOps - What is Jenkins?
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,15 +7,16 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048745 id: 1048745
--- ---
## What is Jenkins? ## What is Jenkins?
Jenkins is a continous integration tool that allows continous development, test and deployment of newly created code. Jenkins is a continuous integration tool that allows continuous development, test and deployment of newly created code.
There are two ways we can achieve this with either nightly builds or continous development. The first option is that our developers are developing throughout the day on their tasks and come the end of the set day they push their changes to the source code repository. Then during the night we run our unit tests and build of the software. This could be deemed as the old way to integrate all code. There are two ways we can achieve this with either nightly builds or continuous development. The first option is that our developers are developing throughout the day on their tasks and come the end of the set day they push their changes to the source code repository. Then during the night we run our unit tests and build of the software. This could be deemed as the old way to integrate all code.
![](Images/Day71_CICD1.png) ![](Images/Day71_CICD1.png)
The other option and the preferred way is that our developers are still committing their changes to source code, then when that code commit has been made there is a build process kicked off continously. The other option and the preferred way is that our developers are still committing their changes to source code, then when that code commit has been made there is a build process kicked off continuously.
![](Images/Day71_CICD2.png) ![](Images/Day71_CICD2.png)
@ -25,15 +26,12 @@ The above methods means that with distributed developers across the world we don
I know we are talking about Jenkins here, but I also want to add a few more to maybe look into later on down the line, to get an understanding of why I see Jenkins as the overall most popular, and what the others can do over Jenkins. I know we are talking about Jenkins here, but I also want to add a few more to maybe look into later on down the line, to get an understanding of why I see Jenkins as the overall most popular, and what the others can do over Jenkins.
- TravisCI - A hosted, distributed continous integration service used to build and test software projects hosted on GitHub. - TravisCI - A hosted, distributed continuous integration service used to build and test software projects hosted on GitHub.
- Bamboo - Can run multiple builds in parallel for faster compilation, built in functionality to connect with repositories and has build tasks for Ant, Maven. - Bamboo - Can run multiple builds in parallel for faster compilation, built in functionality to connect with repositories and has build tasks for Ant, Maven.
- Buildbot - is an open-source framework for automating software build, test and release processes. It is written in Python and supports distributed, parallel execution of jobs across multiple platforms. - Buildbot - is an open-source framework for automating software build, test and release processes. It is written in Python and supports distributed, parallel execution of jobs across multiple platforms.
- Apache Gump - Specific to Java projects, designed with the aim to build and test those Java projects every night. It ensures that all projects are compatible at both API and functionality level. - Apache Gump - Specific to Java projects, designed with the aim to build and test those Java projects every night. It ensures that all projects are compatible at both API and functionality level.
Because we are now going to focus on Jenkins - Jenkins is again open source like all of the above tools and is an automation server written in Java. It is used to automate the software development process via continous integration adn faciliates continous delivery. Because we are now going to focus on Jenkins - Jenkins is again open source like all of the above tools and is an automation server written in Java. It is used to automate the software development process via continuous integration and facilitates continuous delivery.
### Features of Jenkins ### Features of Jenkins
@ -81,7 +79,7 @@ Step 6 - If the test fails then feedback is passed to the developers.
Step 7 - If the tests are successful then we can release to production. Step 7 - If the tests are successful then we can release to production.
This cycle is continous, this is what allows applications to be updated in minutes vs hours, days, months, years! This cycle is continuous, this is what allows applications to be updated in minutes vs hours, days, months, years!
![](Images/Day71_CICD5.png) ![](Images/Day71_CICD5.png)

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - Getting hands on with Jenkins - Day 72' title: "#90DaysOfDevOps - Getting hands on with Jenkins - Day 72"
published: false published: false
description: 90DaysOfDevOps - Getting hands on with Jenkins description: 90DaysOfDevOps - Getting hands on with Jenkins
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048829 id: 1048829
--- ---
## Getting hands on with Jenkins ## Getting hands on with Jenkins
The plan today is to get some hands on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use. The plan today is to get some hands on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use.
@ -31,7 +32,7 @@ A jenkins pipeline, is written into a text file called a Jenkinsfile. Which itse
I had some fun deploying Jenkins. You will notice from the [documentation](https://www.jenkins.io/doc/book/installing/) that there are many options on where you can install Jenkins. I had some fun deploying Jenkins. You will notice from the [documentation](https://www.jenkins.io/doc/book/installing/) that there are many options on where you can install Jenkins.
Given that I have minikube on hand and we have used this a number of times I wanted to use this for this task also. (also it is free!) Although the steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) had me hitting a wall and not getting things up and running, you can compare the two when I document my steps here. Given that I have minikube on hand and we have used this a number of times I wanted to use this for this task also. (also it is free!) Although the steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) had me hitting a wall and not getting things up and running, you can compare the two when I document my steps here.
The first step is to get our minikube cluster up and running, we can simply do this with the `minikube start` command. The first step is to get our minikube cluster up and running, we can simply do this with the `minikube start` command.
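From there, my rough install steps looked something like the block below; treat this as a sketch rather than the exact commands from the walkthrough, it assumes the Jenkins community Helm chart and a dedicated `jenkins` namespace.

```Shell
# Create a namespace for Jenkins and add the Jenkins community Helm chart repository
kubectl create namespace jenkins
helm repo add jenkinsci https://charts.jenkins.io
helm repo update

# Install Jenkins into that namespace with the chart defaults
helm install jenkins jenkinsci/jenkins -n jenkins

# Watch the pods come up
kubectl get pods -n jenkins -w
```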
@ -67,7 +68,7 @@ In order to fix the above or resolve, we need to make sure we provide access or
![](Images/Day72_CICD8.png) ![](Images/Day72_CICD8.png)
The above process should fix the pods, however if not you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point you should have 2/2 running pods called jenkins-0. The above process should fix the pods, however if not you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point you should have 2/2 running pods called jenkins-0.
![](Images/Day72_CICD9.png) ![](Images/Day72_CICD9.png)
@ -79,7 +80,7 @@ Now open a new terminal as we are going to use the `port-forward` command to all
![](Images/Day72_CICD11.png) ![](Images/Day72_CICD11.png)
We should now be able to open a browser and login to http://localhost:8080 and authenticate with the username: admin and password we gathered in a previous step. We should now be able to open a browser and login to `http://localhost:8080` and authenticate with the username: admin and password we gathered in a previous step.
![](Images/Day72_CICD12.png) ![](Images/Day72_CICD12.png)
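For completeness, the port-forward and password retrieval were along these lines; a sketch only, the service and secret names assume the Helm chart defaults mentioned above.

```Shell
# Forward the Jenkins service to localhost:8080 (leave this running in its own terminal)
kubectl port-forward svc/jenkins -n jenkins 8080:8080

# Retrieve the generated admin password from the chart's secret
kubectl get secret jenkins -n jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode && echo
```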
@ -93,8 +94,8 @@ From here, I would suggest heading to "Manage Jenkins" and you will see "Manage
If you want to go even further and automate the deployment of Jenkins using a shell script this great repository was shared with me on twitter [mehyedes/nodejs-k8s](https://github.com/mehyedes/nodejs-k8s/blob/main/docs/automated-setup.md) If you want to go even further and automate the deployment of Jenkins using a shell script this great repository was shared with me on twitter [mehyedes/nodejs-k8s](https://github.com/mehyedes/nodejs-k8s/blob/main/docs/automated-setup.md)
### Jenkinsfile ### Jenkinsfile
Now that we have Jenkins deployed in our Kubernetes cluster, we can go back and think about this Jenkinsfile. Now that we have Jenkins deployed in our Kubernetes cluster, we can go back and think about this Jenkinsfile.
Every Jenkinsfile will likely start like this, which is where you would first define the steps of your pipeline; in this instance you have Build > Test > Deploy. But we are not really doing anything other than using the `echo` command to call out the specific stages. Every Jenkinsfile will likely start like this, which is where you would first define the steps of your pipeline; in this instance you have Build > Test > Deploy. But we are not really doing anything other than using the `echo` command to call out the specific stages.
@ -126,6 +127,7 @@ pipeline {
} }
``` ```
In our Jenkins dashboard, select "New Item" and give the item a name; I am going to call it "echo1", and I am going to suggest that this is a Pipeline. In our Jenkins dashboard, select "New Item" and give the item a name; I am going to call it "echo1", and I am going to suggest that this is a Pipeline.
![](Images/Day72_CICD15.png) ![](Images/Day72_CICD15.png)

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - Building a Jenkins Pipeline - Day 73' title: "#90DaysOfDevOps - Building a Jenkins Pipeline - Day 73"
published: false published: false
description: 90DaysOfDevOps - Building a Jenkins Pipeline description: 90DaysOfDevOps - Building a Jenkins Pipeline
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048766 id: 1048766
--- ---
## Building a Jenkins Pipeline ## Building a Jenkins Pipeline
In the last section we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline, that didn't do much at all other than echo out the stages of a Pipeline. In the last section we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline, that didn't do much at all other than echo out the stages of a Pipeline.
@ -15,9 +16,9 @@ You might have also seen that there are some example scripts available for us to
![](Images/Day73_CICD1.png) ![](Images/Day73_CICD1.png)
The first demo script is "Declartive (Kubernetes)" and you can see the stages below. The first demo script is "Declarative (Kubernetes)" and you can see the stages below.
``` ```Groovy
// Uses Declarative syntax to run commands inside a container. // Uses Declarative syntax to run commands inside a container.
pipeline { pipeline {
agent { agent {
@ -58,15 +59,16 @@ spec:
} }
} }
``` ```
You can see below the outcome of what happens when this Pipeline is run. You can see below the outcome of what happens when this Pipeline is run.
![](Images/Day73_CICD2.png) ![](Images/Day73_CICD2.png)
### Job creation ### Job creation
**Goals** #### Goals
- Create a simple app and store in GitHub public repository (https://github.com/scriptcamp/kubernetes-kaniko.git) - Create a simple app and store in GitHub public repository [https://github.com/scriptcamp/kubernetes-kaniko.git](https://github.com/scriptcamp/kubernetes-kaniko.git)
- Use Jenkins to build our docker Container image and push to docker hub. (for this we will use a private repository) - Use Jenkins to build our docker Container image and push to docker hub. (for this we will use a private repository)
@ -74,7 +76,7 @@ To achieve this in our Kubernetes cluster running in or using Minikube we need t
With the above in mind we are also going to deploy a secret into Kubernetes with our GitHub credentials. With the above in mind we are also going to deploy a secret into Kubernetes with our GitHub credentials.
``` ```Shell
kubectl create secret docker-registry dockercred \ kubectl create secret docker-registry dockercred \
--docker-server=https://index.docker.io/v1/ \ --docker-server=https://index.docker.io/v1/ \
--docker-username=<dockerhub-username> \ --docker-username=<dockerhub-username> \
@ -118,7 +120,7 @@ We have our DockerHub credentials deployed to as a secret into our Kubernetes cl
The pipeline script is what you can see below, this could in turn become our Jenkinsfile located in our GitHub repository which you can also see is listed in the Get the project stage of the pipeline. The pipeline script is what you can see below, this could in turn become our Jenkinsfile located in our GitHub repository which you can also see is listed in the Get the project stage of the pipeline.
``` ```Groovy
podTemplate(yaml: ''' podTemplate(yaml: '''
apiVersion: v1 apiVersion: v1
kind: Pod kind: Pod

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline - Day 74' title: "#90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline - Day 74"
published: false published: false
description: 90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline description: 90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048744 id: 1048744
--- ---
## Hello World - Jenkinsfile App Pipeline ## Hello World - Jenkinsfile App Pipeline
In the last section we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository. In the last section we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository.
@ -35,7 +36,7 @@ With the above this is what we were using as our source in our Pipeline, now we
Now back in our Jenkins dashboard, we are going to create a new pipeline, but instead of pasting our script we are going to use "Pipeline script from SCM". We are then going to use the configuration options below. Now back in our Jenkins dashboard, we are going to create a new pipeline, but instead of pasting our script we are going to use "Pipeline script from SCM". We are then going to use the configuration options below.
For reference we are going to use https://github.com/MichaelCade/Jenkins-HelloWorld.git as the repository URL. For reference we are going to use `https://github.com/MichaelCade/Jenkins-HelloWorld.git` as the repository URL.
![](Images/Day74_CICD3.png) ![](Images/Day74_CICD3.png)

View File

@ -1,12 +1,13 @@
--- ---
title: '#90DaysOfDevOps - GitHub Actions Overview - Day 75' title: "#90DaysOfDevOps - GitHub Actions Overview - Day 75"
published: false published: false
description: 90DaysOfDevOps - GitHub Actions Overview description: 90DaysOfDevOps - GitHub Actions Overview
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1049070 id: 1049070
--- ---
## GitHub Actions Overview ## GitHub Actions Overview
In this section I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is what we will focus on in this session. In this section I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is what we will focus on in this session.
@ -57,7 +58,7 @@ Before we get going with a real use case lets take a quick look at the above ima
I have added # comments to call out where we can find the components of the YAML workflow. I have added # comments to call out where we can find the components of the YAML workflow.
``` ```Yaml
#Workflow #Workflow
name: 90DaysOfDevOps name: 90DaysOfDevOps
#Event #Event
@ -90,7 +91,7 @@ One option is making sure your code is clean and tidy within your repository. Th
I am going to be using some example code linked in one of the resources for this section, we are going to use `github/super-linter` to check against our code. I am going to be using some example code linked in one of the resources for this section, we are going to use `github/super-linter` to check against our code.
``` ```Yaml
name: Super-Linter name: Super-Linter
on: push on: push

View File

@ -1,12 +1,13 @@
--- ---
title: '#90DaysOfDevOps - ArgoCD Overview - Day 76' title: "#90DaysOfDevOps - ArgoCD Overview - Day 76"
published: false published: false
description: 90DaysOfDevOps - ArgoCD Overview description: 90DaysOfDevOps - ArgoCD Overview
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1048809 id: 1048809
--- ---
## ArgoCD Overview ## ArgoCD Overview
“Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes” “Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes”
@ -21,7 +22,7 @@ From an Operations background but having played a lot around Infrastructure as C
We are going to be using our trusty minikube Kubernetes cluster locally again for this deployment. We are going to be using our trusty minikube Kubernetes cluster locally again for this deployment.
``` ```Shell
kubectl create namespace argocd kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
``` ```
@ -38,7 +39,7 @@ Also let's check everything that we deployed in the namespace with `kubectl get
When the above is looking good, we then should consider accessing this via the port forward. Using the `kubectl port-forward svc/argocd-server -n argocd 8080:443` command. Do this in a new terminal. When the above is looking good, we then should consider accessing this via the port forward. Using the `kubectl port-forward svc/argocd-server -n argocd 8080:443` command. Do this in a new terminal.
Then open a new web browser and head to https://localhost:8080 Then open a new web browser and head to `https://localhost:8080`
![](Images/Day76_CICD4.png) ![](Images/Day76_CICD4.png)
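To log in you will need the initial admin password that the install generates; retrieving it typically looks like the following, assuming the standard `argocd-initial-admin-secret` created by the install manifest.

```Shell
# Username is admin; the initial password is stored in a secret created at install time
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
```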

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - The Big Picture: Monitoring - Day 77' title: "#90DaysOfDevOps - The Big Picture: Monitoring - Day 77"
published: false published: false
description: 90DaysOfDevOps - The Big Picture Monitoring description: 90DaysOfDevOps - The Big Picture Monitoring
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048715 id: 1048715
--- ---
## The Big Picture: Monitoring ## The Big Picture: Monitoring
In this section we are going to talk about monitoring: what is it and why do we need it? In this section we are going to talk about monitoring: what is it and why do we need it?
@ -54,9 +55,9 @@ The difficult question for most monitoring engineers is what do we monitor? and
Every system has a number of resources; which of these should we keep a close eye on and which ones can we turn a blind eye to? For instance, is it necessary to monitor CPU usage? The answer is obviously yes, but it is still a decision that has to be made. Is it necessary to monitor the number of open ports in the system? We may or may not have to depending on the situation; if it is a general-purpose server we probably won't have to, but then again if it is a webserver we probably would have to. Every system has a number of resources; which of these should we keep a close eye on and which ones can we turn a blind eye to? For instance, is it necessary to monitor CPU usage? The answer is obviously yes, but it is still a decision that has to be made. Is it necessary to monitor the number of open ports in the system? We may or may not have to depending on the situation; if it is a general-purpose server we probably won't have to, but then again if it is a webserver we probably would have to.
### Continous Monitoring ### Continuous Monitoring
Monitoring is not a new item and even continous monitoring has been an ideal that many enterprises have adopted for many years. Monitoring is not a new item and even continuous monitoring has been an ideal that many enterprises have adopted for many years.
There are three key areas of focus when it comes to monitoring. There are three key areas of focus when it comes to monitoring.

View File

@ -1,15 +1,16 @@
--- ---
title: '#90DaysOfDevOps - Hands-On Monitoring Tools - Day 78' title: "#90DaysOfDevOps - Hands-On Monitoring Tools - Day 78"
published: false published: false
description: 90DaysOfDevOps - Hands-On Monitoring Tools description: 90DaysOfDevOps - Hands-On Monitoring Tools
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1049056 id: 1049056
--- ---
## Hands-On Monitoring Tools ## Hands-On Monitoring Tools
In the last session, I spoke about the big picture of monitoring and I took a look into Nagios, there was two reasons for doing this. The first was this is a peice of software I have heard a lot of over the years so wanted to know a little more about its capabilities. In the last session, I spoke about the big picture of monitoring and I took a look into Nagios; there were two reasons for doing this. The first was that this is a piece of software I have heard a lot about over the years, so I wanted to know a little more about its capabilities.
Today I am going to be going into Prometheus, I have seen more and more of Prometheus in the Cloud-Native landscape but it can also be used to look after those physical resources as well outside of Kubernetes and the like. Today I am going to be going into Prometheus, I have seen more and more of Prometheus in the Cloud-Native landscape but it can also be used to look after those physical resources as well outside of Kubernetes and the like.
@ -67,11 +68,12 @@ Once all the pods are running we can also take a look at all the deployed aspect
Now for us to access the Prometheus Server UI we can use the following command to port forward. Now for us to access the Prometheus Server UI we can use the following command to port forward.
``` ```Shell
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}") export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090 kubectl --namespace default port-forward $POD_NAME 9090
``` ```
When we first open our browser to http://localhost:9090 we see the following very blank screen.
When we first open our browser to `http://localhost:9090` we see the following very blank screen.
![](Images/Day78_Monitoring5.png) ![](Images/Day78_Monitoring5.png)

View File

@ -1,15 +1,16 @@
--- ---
title: '#90DaysOfDevOps - The Big Picture: Log Management - Day 79' title: "#90DaysOfDevOps - The Big Picture: Log Management - Day 79"
published: false published: false
description: 90DaysOfDevOps - The Big Picture Log Management description: 90DaysOfDevOps - The Big Picture Log Management
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1049057 id: 1049057
--- ---
## The Big Picture: Log Management ## The Big Picture: Log Management
A continuation to the infrastructure monitoring challenges and solutions, log management is another puzzle peice to the overall observability jigsaw. A continuation to the infrastructure monitoring challenges and solutions, log management is another puzzle piece to the overall observability jigsaw.
### Log Management & Aggregation ### Log Management & Aggregation
@ -33,7 +34,7 @@ The web application would connect to the frontend which then connects to the bac
### The components of ELK ### The components of ELK
With Elasticsearch, Logstash and Kibana, all of the services send logs to Logstash; Logstash takes these logs, which are text emitted by the application. For example, in the web application, when you visit a web page the web page might log this visitor's access to this page at this time, and that's an example of a log message; those logs would be sent to Logstash. With Elasticsearch, Logstash and Kibana, all of the services send logs to Logstash; Logstash takes these logs, which are text emitted by the application. For example, in the web application, when you visit a web page the web page might log this visitor's access to this page at this time, and that's an example of a log message; those logs would be sent to Logstash.
Logstash would then extract things from them; so for that log message, user did **thing** at **time**, it would extract the time, extract the message and extract the user, and include those all as tags, so the message would be an object of tags and message that you could search easily; you could find all of the requests made by a specific user. But Logstash doesn't store things itself; it stores things in Elasticsearch, which is an efficient database for querying text, and Elasticsearch exposes the results to Kibana. Kibana is a web server that connects to Elasticsearch and allows administrators, such as the devops person or other people on your team like the on-call engineer, to view the logs in production whenever there's a major fault. You as the administrator would connect to Kibana, and Kibana would query Elasticsearch for logs matching whatever you wanted. Logstash would then extract things from them; so for that log message, user did **thing** at **time**, it would extract the time, extract the message and extract the user, and include those all as tags, so the message would be an object of tags and message that you could search easily; you could find all of the requests made by a specific user. But Logstash doesn't store things itself; it stores things in Elasticsearch, which is an efficient database for querying text, and Elasticsearch exposes the results to Kibana. Kibana is a web server that connects to Elasticsearch and allows administrators, such as the devops person or other people on your team like the on-call engineer, to view the logs in production whenever there's a major fault. You as the administrator would connect to Kibana, and Kibana would query Elasticsearch for logs matching whatever you wanted.
@ -45,7 +46,7 @@ A user says i saw error code one two three four five six seven when i tried to d
### Security and Access to Logs ### Security and Access to Logs
An important peice of the puzzle is ensuring that logs are only visible to administrators (or the users and groups that absolutely need to have access), logs can contain sensitive information like tokens it's important that only authenticated users can access them you wouldn't want to expose Kibana to the internet without some way of authenticating. An important piece of the puzzle is ensuring that logs are only visible to administrators (or the users and groups that absolutely need to have access). Logs can contain sensitive information like tokens, so it's important that only authenticated users can access them; you wouldn't want to expose Kibana to the internet without some way of authenticating.
### Examples of Log Management Tools ### Examples of Log Management Tools
@ -61,7 +62,6 @@ Examples of log management platforms there's
Cloud providers also provide logging such as AWS CloudWatch Logs, Microsoft Azure Monitor and Google Cloud Logging. Cloud providers also provide logging such as AWS CloudWatch Logs, Microsoft Azure Monitor and Google Cloud Logging.
Log Management is a key aspect of the overall observability of your applications and infrastructure environment for diagnosing problems in production. It's relatively simple to install a turnkey solution like ELK or CloudWatch, and it makes diagnosing and triaging problems in production significantly easier. Log Management is a key aspect of the overall observability of your applications and infrastructure environment for diagnosing problems in production. It's relatively simple to install a turnkey solution like ELK or CloudWatch, and it makes diagnosing and triaging problems in production significantly easier.
## Resources ## Resources

View File

@ -1,18 +1,17 @@
--- ---
title: '#90DaysOfDevOps - ELK Stack - Day 80' title: "#90DaysOfDevOps - ELK Stack - Day 80"
published: false published: false
description: 90DaysOfDevOps - ELK Stack description: 90DaysOfDevOps - ELK Stack
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1048746 id: 1048746
--- ---
## ELK Stack ## ELK Stack
In this session, we are going to get a little more hands-on with some of the options we have mentioned. In this session, we are going to get a little more hands-on with some of the options we have mentioned.
### ELK Stack
ELK Stack is the combination of 3 separate tools: ELK Stack is the combination of 3 separate tools:
- [Elasticsearch](https://www.elastic.co/what-is/elasticsearch) is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. - [Elasticsearch](https://www.elastic.co/what-is/elasticsearch) is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.
@ -25,12 +24,11 @@ ELK stack lets us reliably and securely take data from any source, in any format
On top of the above mentioned components you might also see Beats which are lightweight agents that are installed on edge hosts to collect different types of data for forwarding into the stack. On top of the above mentioned components you might also see Beats which are lightweight agents that are installed on edge hosts to collect different types of data for forwarding into the stack.
- Logs: Server logs that need to be analysed are identified
- Logs: Server logs that need to be analyzed are identified
- Logstash: Collects logs and event data. It even parses and transforms data - Logstash: Collects logs and event data. It even parses and transforms data
- ElasticSearch: The transformed data from Logstash is stored, searched, and indexed. - ElasticSearch: The transformed data from Logstash is stored, searched, and indexed.
- Kibana uses Elasticsearch DB to Explore, Visualize, and Share - Kibana uses Elasticsearch DB to Explore, Visualize, and Share
@ -48,7 +46,7 @@ For the hands-on scenario there are many places you can deploy the Elastic Stack
![](Images/Day80_Monitoring1.png) ![](Images/Day80_Monitoring1.png)
You will find the original files and walkthrough that I used here [ deviantony/docker-elk](https://github.com/deviantony/docker-elk) You will find the original files and walkthrough that I used here [deviantony/docker-elk](https://github.com/deviantony/docker-elk)
Now we can run `docker-compose up -d`; the first time this has been run will require the pulling of images. Now we can run `docker-compose up -d`; the first time this has been run will require the pulling of images.
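End to end, getting the stack up locally looked roughly like this; a sketch based on the deviantony/docker-elk repository linked above (my own copy simply used a different password).

```Shell
# Clone the example stack and bring it up in the background
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
docker-compose up -d

# Confirm the elasticsearch, logstash and kibana containers are running
docker-compose ps
```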
@ -56,7 +54,7 @@ Now we can run `docker-compose up -d`, the first time this has been ran will req
If you follow either this repository or the one that I used, you will either have the password of "changeme" or, in my repository, the password of "90DaysOfDevOps". The username is "elastic". If you follow either this repository or the one that I used, you will either have the password of "changeme" or, in my repository, the password of "90DaysOfDevOps". The username is "elastic".
After a few minutes we can navigate to http://localhost:5601/ which is our Kibana server / Docker container. After a few minutes we can navigate to `http://localhost:5601/` which is our Kibana server / Docker container.
![](Images/Day80_Monitoring3.png) ![](Images/Day80_Monitoring3.png)
@ -78,7 +76,7 @@ As it states on the dashboard view:
**Sample Logs Data** **Sample Logs Data**
*This dashboard contains sample data for you to play with. You can view it, search it, and interact with the visualizations. For more information about Kibana, check our docs.* > This dashboard contains sample data for you to play with. You can view it, search it, and interact with the visualizations. For more information about Kibana, check our docs.
![](Images/Day80_Monitoring7.png) ![](Images/Day80_Monitoring7.png)

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - Fluentd & FluentBit - Day 81' title: "#90DaysOfDevOps - Fluentd & FluentBit - Day 81"
published: false published: false
description: 90DaysOfDevOps - Fluentd & FluentBit description: 90DaysOfDevOps - Fluentd & FluentBit
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048716 id: 1048716
--- ---
## Fluentd & FluentBit ## Fluentd & FluentBit
Another data collector that I wanted to explore as part of this observability section was [Fluentd](https://docs.fluentd.org/). An Open-Source unified logging layer. Another data collector that I wanted to explore as part of this observability section was [Fluentd](https://docs.fluentd.org/). An Open-Source unified logging layer.
@ -51,7 +52,6 @@ Fluent Bit in Kubernetes is deployed as a DaemonSet, which means it will run on
Kubernetes annotations can be used within the configuration YAML of our applications. Kubernetes annotations can be used within the configuration YAML of our applications.
First of all we can deploy from the fluent helm repository. `helm repo add fluent https://fluent.github.io/helm-charts` and then install using the `helm install fluent-bit fluent/fluent-bit` command. First of all we can deploy from the fluent helm repository. `helm repo add fluent https://fluent.github.io/helm-charts` and then install using the `helm install fluent-bit fluent/fluent-bit` command.
![](Images/Day81_Monitoring1.png) ![](Images/Day81_Monitoring1.png)
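Put together, and with a quick check that the DaemonSet pods have started, that looks something like the following sketch of the commands just mentioned.

```Shell
# Add the fluent Helm chart repository and install Fluent Bit with the default values
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit

# Fluent Bit is deployed as a DaemonSet, so expect one pod per node
kubectl get pods | grep fluent
```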
@ -141,7 +141,7 @@ fluent-bit.conf:
Events: <none> Events: <none>
``` ```
We can now port-forward our pod to our localhost to ensure that we have connectivity. Firstly get the name of your pod with `kubectl get pods | grep fluent` and then use `kubectl port-forward fluent-bit-8kvl4 2020:2020` open a web browser to http://localhost:2020/ We can now port-forward our pod to our localhost to ensure that we have connectivity. Firstly get the name of your pod with `kubectl get pods | grep fluent` and then use `kubectl port-forward fluent-bit-8kvl4 2020:2020`, then open a web browser to `http://localhost:2020/`
![](Images/Day81_Monitoring4.png) ![](Images/Day81_Monitoring4.png)
@ -161,8 +161,6 @@ I also found this really great medium article covering more about [Fluent Bit](h
- [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/)
- [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw)
- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) - [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s)
- [ Fluent Bit explained | Fluent Bit vs Fluentd ](https://www.youtube.com/watch?v=B2IS-XS-cc0) - [Fluent Bit explained | Fluent Bit vs Fluentd](https://www.youtube.com/watch?v=B2IS-XS-cc0)
See you on [Day 82](day82.md) See you on [Day 82](day82.md)

View File

@ -1,12 +1,13 @@
--- ---
title: '#90DaysOfDevOps - EFK Stack - Day 82' title: "#90DaysOfDevOps - EFK Stack - Day 82"
published: false published: false
description: 90DaysOfDevOps - EFK Stack description: 90DaysOfDevOps - EFK Stack
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1049059 id: 1049059
--- ---
### EFK Stack ### EFK Stack
In the previous section, we spoke about ELK Stack, which uses Logstash as the log collector in the stack, in the EFK Stack we are swapping that out for FluentD or FluentBit. In the previous section, we spoke about ELK Stack, which uses Logstash as the log collector in the stack, in the EFK Stack we are swapping that out for FluentD or FluentBit.
@ -23,9 +24,9 @@ The EFK stack is a collection of 3 software bundled together, including:
- Elasticsearch: NoSQL database used to store data, providing an interface for searching and querying logs. - Elasticsearch: NoSQL database used to store data, providing an interface for searching and querying logs.
- Fluentd: Fluentd is an open source data collector for a unified logging layer. Fluentd allows you to unify data collection and consumption for a better use and understanding of data. - Fluentd: Fluentd is an open source data collector for a unified logging layer. Fluentd allows you to unify data collection and consumption for a better use and understanding of data.
- Kibana : Interface for managing and statistics logs. Responsible for reading information from elasticsearch . - Kibana: Interface for managing and viewing log statistics; responsible for reading information from Elasticsearch.
### Deploying EFK on Minikube ### Deploying EFK on Minikube
@ -46,6 +47,7 @@ The above command lets us keep an eye on things but I like to clarify that thing
![](Images/Day82_Monitoring5.png) ![](Images/Day82_Monitoring5.png)
Once we have all our pods up and running, at this stage we should see Once we have all our pods up and running, at this stage we should see
- 3 pods associated to ElasticSearch - 3 pods associated to ElasticSearch
- 1 pod associated to Fluentd - 1 pod associated to Fluentd
- 1 pod associated to Kibana - 1 pod associated to Kibana
@ -58,11 +60,11 @@ Now all of our pods are up and running we can now issue in a new terminal the po
![](Images/Day82_Monitoring7.png) ![](Images/Day82_Monitoring7.png)
We can now open up a browser and navigate to this address, http://localhost:5601 you will be greeted with either the screen you see below or you might indeed see a sample data screen or continue and configure yourself. Either way and by all means look at that test data, it is what we covered when we looked at the ELK stack in a previous session. We can now open up a browser and navigate to this address, `http://localhost:5601` you will be greeted with either the screen you see below or you might indeed see a sample data screen or continue and configure yourself. Either way and by all means look at that test data, it is what we covered when we looked at the ELK stack in a previous session.
![](Images/Day82_Monitoring8.png) ![](Images/Day82_Monitoring8.png)
Next, we need to hit the "discover" tab on the left menu and add "*" to our index pattern. Continue to the next step by hitting "Next step". Next, we need to hit the "discover" tab on the left menu and add "\*" to our index pattern. Continue to the next step by hitting "Next step".
![](Images/Day82_Monitoring9.png) ![](Images/Day82_Monitoring9.png)
@ -92,7 +94,6 @@ There is also the option to gather APM (Application Performance Monitoring) whic
I am not going to get into APM here but you can find out more on the [Elastic site](https://www.elastic.co/observability/application-performance-monitoring) I am not going to get into APM here but you can find out more on the [Elastic site](https://www.elastic.co/observability/application-performance-monitoring)
## Resources ## Resources
- [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848) - [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848)
@ -109,4 +110,3 @@ I am not going to get into APM here but you can find out more on the [Elastic si
- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) - [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s)
See you on [Day 83](day83.md) See you on [Day 83](day83.md)

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - Data Visualisation - Grafana - Day 83' title: "#90DaysOfDevOps - Data Visualisation - Grafana - Day 83"
published: false published: false
description: 90DaysOfDevOps - Data Visualisation - Grafana description: 90DaysOfDevOps - Data Visualisation - Grafana
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048767 id: 1048767
--- ---
## Data Visualisation - Grafana ## Data Visualisation - Grafana
We saw a lot of Kibana over this section around Observability, but we also have to take some time to cover Grafana. They are not the same, and they are not completely competing against each other. We saw a lot of Kibana over this section around Observability, but we also have to take some time to cover Grafana. They are not the same, and they are not completely competing against each other.
@ -53,7 +54,7 @@ When everything is running we can check all pods are in a running and healthy st
![](Images/Day83_Monitoring5.png) ![](Images/Day83_Monitoring5.png)
With the deployment, we deployed a number of services that we are going to be using later on in the demo you can check these by using the `kubectl get svc -n monitoring` command. With the deployment, we deployed a number of services that we are going to be using later on in the demo. You can check these by using the `kubectl get svc -n monitoring` command.
![](Images/Day83_Monitoring6.png) ![](Images/Day83_Monitoring6.png)
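The next step is to get to the Grafana UI on port 3000. If you need the port forward command, something like the below should work, assuming the monitoring stack created a service simply named `grafana` in the `monitoring` namespace (check the output of the `kubectl get svc -n monitoring` command above if yours differs).

```Shell
# Assumes a service called "grafana" exposing port 3000 in the "monitoring" namespace
kubectl --namespace monitoring port-forward svc/grafana 3000:3000
```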
@ -69,10 +70,12 @@ Open a browser and navigate to http://localhost:3000 you will be prompted for a
![](Images/Day83_Monitoring9.png) ![](Images/Day83_Monitoring9.png)
The default username and password to access is The default username and password to access are:
``` ```
Username: admin Username: admin
Password: admin Password: admin
``` ```
However you will be asked to provide a new password at first login. The initial screen or home page you will see will give you some areas to explore as well as some useful resources to get up to speed with Grafana and its capabilities. Notice the "Add your first data source" and "create your first dashboard" widgets we will be using them later. However, you will be asked to provide a new password at first login. The initial screen or home page will give you some areas to explore, as well as some useful resources to get up to speed with Grafana and its capabilities. Notice the "Add your first data source" and "create your first dashboard" widgets; we will be using them later.
![](Images/Day83_Monitoring10.png) ![](Images/Day83_Monitoring10.png)
@ -123,10 +126,10 @@ We have chosen the Kubernetes API Server dashboard and changed the data source t
### Alerting ### Alerting
You could also leverage the alertmanager that we deployed to then send alerts out to slack or other integrations, in order to do this you would need to port foward the alertmanager service using the below details. You could also leverage the alertmanager that we deployed to send alerts out to Slack or other integrations. In order to do this, you would need to port forward the alertmanager service using the details below.
`kubectl --namespace monitoring port-forward svc/alertmanager-main 9093` `kubectl --namespace monitoring port-forward svc/alertmanager-main 9093`
http://localhost:9093 `http://localhost:9093`
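As a rough idea of what the Slack side of that integration looks like, a minimal Alertmanager configuration with a Slack receiver is sketched below. The webhook URL and channel are placeholders, and how you apply the file (secret, ConfigMap or Helm values) depends on how you deployed the stack, so treat this as a starting point rather than the exact method.

```Shell
# Minimal sketch of an Alertmanager config with a Slack receiver - webhook and channel are placeholders
cat <<'EOF' > alertmanager.yaml
global:
  slack_api_url: "https://hooks.slack.com/services/REPLACE/WITH/YOURS"
route:
  receiver: "slack-notifications"
receivers:
  - name: "slack-notifications"
    slack_configs:
      - channel: "#alerts"
        send_resolved: true
EOF
```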
That wraps up our section on all things observability, I have personally found that this section has highlighted how broad this topic is but equally how important this is for our roles and that be it metrics, logging or tracing you are going to need to have a good idea of what is happening in our broad environments moving forward, especially when they can change so dramatically with all the automation that we have already covered in the other sections. That wraps up our section on all things observability. I have personally found that this section has highlighted how broad this topic is, but equally how important it is for our roles. Be it metrics, logging or tracing, you are going to need a good idea of what is happening in our environments moving forward, especially when they can change so dramatically with all the automation that we have already covered in the other sections.

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - The Big Picture: Data Management - Day 84' title: "#90DaysOfDevOps - The Big Picture: Data Management - Day 84"
published: false published: false
description: 90DaysOfDevOps - The Big Picture Data Management description: 90DaysOfDevOps - The Big Picture Data Management
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048747 id: 1048747
--- ---
## The Big Picture: Data Management ## The Big Picture: Data Management
![](Images/Day84_Data1.png) ![](Images/Day84_Data1.png)
@ -50,7 +51,6 @@ My focus throughout this section is not going to be getting into Machine Learnin
Three key areas that we should consider along this journey with data are: Three key areas that we should consider along this journey with data are:
- Accuracy - Making sure that production data is accurate, equally we need to ensure that our data in the form of backups are also working and tested against recovery to be sure if a failure or a reason comes up we need to be able to get back up and running as fast as possible. - Accuracy - Making sure that production data is accurate. Equally, we need to ensure that our data in the form of backups is also working and tested against recovery, so that if a failure or another reason comes up we are able to get back up and running as fast as possible.
- Consistent - If our data services span multiple locations then for production we need to make sure we have consistency across all data locations so that we are getting accurate data, this also spans into data protection when it comes to protecting these data services especially data services we need to ensure consistency at different levels to make sure we are taking a good clean copy of that data for our backups, replicas etc. - Consistent - If our data services span multiple locations then for production we need to make sure we have consistency across all data locations so that we are getting accurate data. This also spans into data protection: when it comes to protecting these data services we need to ensure consistency at different levels, to make sure we are taking a good clean copy of that data for our backups, replicas etc.
- Secure - Access Control but equally just keeping data in general is a topical theme at the moment across the globe. Making sure the right people have access to your data is paramount, again this leads into data protection where we must make sure that only the required personnel have access to backups and the ability to restore from those as well clone and provide other versions of the business data. - Secure - Access control, but equally keeping data secure in general, is a topical theme at the moment across the globe. Making sure the right people have access to your data is paramount; again, this leads into data protection, where we must make sure that only the required personnel have access to backups and the ability to restore from those, as well as clone and provide other versions of the business data.
@ -70,7 +70,3 @@ During the next 6 sessions we are going to be taking a closer look at Databases,
- [Veeam Portability & Cloud Mobility](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s) - [Veeam Portability & Cloud Mobility](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s)
See you on [Day 85](day85.md) See you on [Day 85](day85.md)

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048781 id: 1048781
--- ---
## Data Services ## Data Services
Databases are going to be the most common data service that we come across in our environments. I wanted to take this session to explore some of those different types of Databases and some of the use cases they each have. Some we have used and seen throughout the course of the challenge. Databases are going to be the most common data service that we come across in our environments. I wanted to take this session to explore some of those different types of Databases and some of the use cases they each have. Some we have used and seen throughout the course of the challenge.
@ -21,13 +22,14 @@ A key-value database is a type of nonrelational database that uses a simple key-
An example of a Key-Value database is Redis. An example of a Key-Value database is Redis.
*Redis is an in-memory data structure store, used as a distributed, in-memory keyvalue database, cache and message broker, with optional durability. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indices.* _Redis is an in-memory data structure store, used as a distributed, in-memory key-value database, cache and message broker, with optional durability. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indices._
![](Images/Day85_Data1.png) ![](Images/Day85_Data1.png)
As you can see from the description of Redis this means that our database is fast but we are limited on space as a trade off. Also no queries or joins which means data modelling options are very limited. As you can see from the description of Redis, our database is fast but we are limited on space as a trade-off. There are also no queries or joins, which means data modelling options are very limited.
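To make the key-value idea concrete, here is a quick sketch using the Redis CLI against a throwaway local container; the key names are made up for the example.

```Shell
# Spin up a disposable Redis instance and store/read a couple of keys
docker run -d --name redis-demo -p 6379:6379 redis

redis-cli -h localhost SET user:42:name "Michael"   # write a value under a key
redis-cli -h localhost GET user:42:name             # read it straight back
redis-cli -h localhost INCR page:views              # atomic counters are a classic Redis use case
```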
Best for: Best for:
- Caching - Caching
- Pub/Sub - Pub/Sub
- Leaderboards - Leaderboards
@ -39,13 +41,14 @@ Generally used as a cache above another persistent data layer.
A wide-column database is a NoSQL database that organises data storage into flexible columns that can be spread across multiple servers or database nodes, using multi-dimensional mapping to reference data by column, row, and timestamp. A wide-column database is a NoSQL database that organises data storage into flexible columns that can be spread across multiple servers or database nodes, using multi-dimensional mapping to reference data by column, row, and timestamp.
*Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.* _Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure._
![](Images/Day85_Data2.png) ![](Images/Day85_Data2.png)
No schema which means can handle unstructured data however this can be seen as a benefit to some workloads. There is no schema, which means it can handle unstructured data; this can be seen as a benefit for some workloads.
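To give a feel for the wide-column model, a minimal sketch using `cqlsh` inside a throwaway Cassandra container is below; the keyspace, table and values are invented for the example, and a single node like this is for testing only.

```Shell
# Run a single-node Cassandra for testing (give it a minute or two to become ready)
docker run -d --name cassandra-demo -p 9042:9042 cassandra

# Create a keyspace and a time-series style table, then write and read a row
docker exec -it cassandra-demo cqlsh -e "
CREATE KEYSPACE IF NOT EXISTS demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE IF NOT EXISTS demo.sensor_readings (sensor_id text, reading_time timestamp, value double, PRIMARY KEY (sensor_id, reading_time));
INSERT INTO demo.sensor_readings (sensor_id, reading_time, value) VALUES ('sensor-1', toTimestamp(now()), 21.5);
SELECT * FROM demo.sensor_readings WHERE sensor_id = 'sensor-1';"
```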
Best for: Best for:
- Time-Series - Time-Series
- Historical Records - Historical Records
- High-Write, Low-Read - High-Write, Low-Read
@ -54,7 +57,7 @@ Best for:
A document database (also known as a document-oriented database or a document store) is a database that stores information in documents. A document database (also known as a document-oriented database or a document store) is a database that stores information in documents.
*MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License.* _MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License._
![](Images/Day85_Data3.png) ![](Images/Day85_Data3.png)
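To see the document model in action, a quick sketch with a disposable MongoDB container and `mongosh` is below; the database, collection and document are just examples.

```Shell
# Start MongoDB locally and insert/query a JSON-like document
docker run -d --name mongo-demo -p 27017:27017 mongo

docker exec -it mongo-demo mongosh --quiet --eval '
  db = db.getSiblingDB("demo");
  db.people.insertOne({ name: "Michael", role: "Technologist", tags: ["devops", "90daysofdevops"] });
  printjson(db.people.findOne({ name: "Michael" }));
'
```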
@ -72,7 +75,7 @@ If you are new to databases but you know of them my guess is that you have absol
A relational database is a digital database based on the relational model of data, as proposed by E. F. Codd in 1970. A system used to maintain relational databases is a relational database management system. Many relational database systems have an option of using the SQL for querying and maintaining the database. A relational database is a digital database based on the relational model of data, as proposed by E. F. Codd in 1970. A system used to maintain relational databases is a relational database management system. Many relational database systems have an option of using SQL for querying and maintaining the database.
*MySQL is an open-source relational database management system. Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language.* _MySQL is an open-source relational database management system. Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language._
MySQL is one example of a relational database there are lots of other options. MySQL is one example of a relational database there are lots of other options.
@ -81,6 +84,7 @@ MySQL is one example of a relational database there are lots of other options.
Whilst researching relational databases the term or abbreviation **ACID** has been mentioned a lot, (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction. Whilst researching relational databases, the term or abbreviation **ACID** has been mentioned a lot. ACID (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction.
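A tiny sketch of what that single logical transaction looks like in SQL is below; run it against any MySQL instance you have handy, noting that the `bank` database, the `Accounts` table and the names are all invented for the example.

```Shell
# The whole transfer commits as one unit or not at all - that is the atomicity in ACID
mysql -u root -p -D bank -e "
START TRANSACTION;
UPDATE Accounts SET balance = balance - 100 WHERE name = 'alice';
UPDATE Accounts SET balance = balance + 100 WHERE name = 'bob';
COMMIT;"
```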
Best for: Best for:
- Most Applications (It has been around for years, doesn't mean it is the best) - Most Applications (It has been around for years, doesn't mean it is the best)
It is not ideal for unstructured data or the ability to scale is where some of the other NoSQL mentions give a better ability to scale for certain workloads. It is not ideal for unstructured data, and when it comes to scaling, some of the other NoSQL options mentioned give a better ability to scale for certain workloads.
@ -89,7 +93,7 @@ It is not ideal for unstructured data or the ability to scale is where some of t
A graph database stores nodes and relationships instead of tables, or documents. Data is stored just like you might sketch ideas on a whiteboard. Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it. A graph database stores nodes and relationships instead of tables, or documents. Data is stored just like you might sketch ideas on a whiteboard. Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it.
*Neo4j is a graph database management system developed by Neo4j, Inc. Described by its developers as an ACID-compliant transactional database with native graph storage and processing* _Neo4j is a graph database management system developed by Neo4j, Inc. Described by its developers as an ACID-compliant transactional database with native graph storage and processing_
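A small sketch of the graph model using Cypher and a throwaway Neo4j container is below; the node labels, properties and relationship name are made up for the example.

```Shell
# Start Neo4j with a known password, then create two nodes joined by a relationship and query them back
docker run -d --name neo4j-demo -p 7687:7687 -e NEO4J_AUTH=neo4j/changeme123 neo4j

docker exec -it neo4j-demo cypher-shell -u neo4j -p changeme123 \
  "CREATE (a:Person {name: 'Michael'})-[:WORKS_ON]->(p:Project {name: '90DaysOfDevOps'})"

docker exec -it neo4j-demo cypher-shell -u neo4j -p changeme123 \
  "MATCH (a:Person)-[:WORKS_ON]->(p:Project) RETURN a.name, p.name"
```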
Best for: Best for:
@ -103,7 +107,7 @@ In the last section we actually used a Search Engine database in the way of Elas
A search-engine database is a type of non-relational database that is dedicated to the search of data content. Search-engine databases use indexes to categorise the similar characteristics among data and facilitate search capability. A search-engine database is a type of non-relational database that is dedicated to the search of data content. Search-engine databases use indexes to categorise the similar characteristics among data and facilitate search capability.
*Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.* _Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents._
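And to show the search side itself, a quick sketch against a local Elasticsearch instance is below; the index name and document are invented, and you would normally reach Elasticsearch through whatever port forward or ingress you have in place.

```Shell
# Index a document, then run a full-text search for it
curl -s -X POST "http://localhost:9200/blog/_doc" -H "Content-Type: application/json" \
  -d '{"title": "Observability on Kubernetes", "body": "Logging with the EFK stack"}'

# Allow the index a moment to refresh, then query
sleep 1
curl -s "http://localhost:9200/blog/_search?q=body:logging"
```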
Best for: Best for:
@ -115,7 +119,7 @@ Best for:
A multi-model database is a database management system designed to support multiple data models against a single, integrated backend. In contrast, most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated.Document, graph, relational, and keyvalue models are examples of data models that may be supported by a multi-model database. A multi-model database is a database management system designed to support multiple data models against a single, integrated backend. In contrast, most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated. Document, graph, relational, and key-value models are examples of data models that may be supported by a multi-model database.
*Fauna is a flexible, developer-friendly, transactional database delivered as a secure and scalable cloud API with native GraphQL.* _Fauna is a flexible, developer-friendly, transactional database delivered as a secure and scalable cloud API with native GraphQL._
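As a very rough sketch of the "cloud API" part, Fauna's GraphQL endpoint can be called over plain HTTPS as below; the secret is something you would generate in the Fauna dashboard, and the `allAccounts` query only exists if your uploaded GraphQL schema defines it, so treat every name here as hypothetical.

```Shell
# FAUNA_SECRET is a key generated in the Fauna dashboard; the query depends entirely on your schema
curl -s https://graphql.fauna.com/graphql \
  -H "Authorization: Bearer ${FAUNA_SECRET}" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ allAccounts { data { name balance } } }"}'
```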
Best for: Best for:
@ -145,5 +149,4 @@ There are a ton of resources I have linked below, you could honestly spend 90 ye
- [FaunaDB Basics - The Database of your Dreams](https://www.youtube.com/watch?v=2CipVwISumA) - [FaunaDB Basics - The Database of your Dreams](https://www.youtube.com/watch?v=2CipVwISumA)
- [Fauna Crash Course - Covering the Basics](https://www.youtube.com/watch?v=ihaB7CqJju0) - [Fauna Crash Course - Covering the Basics](https://www.youtube.com/watch?v=ihaB7CqJju0)
See you on [Day 86](day86.md) See you on [Day 86](day86.md)

View File

@ -1,12 +1,13 @@
--- ---
title: '#90DaysOfDevOps - Backup all the platforms - Day 86' title: "#90DaysOfDevOps - Backup all the platforms - Day 86"
published: false published: false
description: 90DaysOfDevOps - Backup all the platforms description: 90DaysOfDevOps - Backup all the platforms
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1049058 id: 1049058
--- ---
## Backup all the platforms ## Backup all the platforms
During this whole challenge we have discussed many different platforms and environments. One thing all of those have in common is the fact they all need some level of data protection! During this whole challenge we have discussed many different platforms and environments. One thing all of those have in common is the fact they all need some level of data protection!
@ -25,7 +26,7 @@ But we should be able to perform that protection of the data with automation in
If we look at what backup is: If we look at what backup is:
*In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup".* _In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup"._
If we break this down to the simplest form, a backup is a copy and paste of data to a new location. Simply put I could take a backup right now by copying a file from my C: drive to my D: drive and I would then have a copy in case something happened to the C: drive or something was edited wrongly within the files. I could revert back to the copy I have on the D: drive. Now if my computer dies where both the C & D drives live then I am not protected so I have to consider a solution or a copy of data outside of my system maybe onto a NAS drive in my house? But then what happens if something happens to my house, maybe I need to consider storing it on another system in another location, maybe the cloud is an option. Maybe I could store a copy of my important files in several locations to mitigate against the risk of failure? If we break this down to the simplest form, a backup is a copy and paste of data to a new location. Simply put, I could take a backup right now by copying a file from my C: drive to my D: drive; I would then have a copy in case something happened to the C: drive or something was edited wrongly within the files, as I could revert to the copy I have on the D: drive. Now, if my computer dies where both the C & D drives live then I am not protected, so I have to consider a solution or a copy of data outside of my system, maybe onto a NAS drive in my house. But then what happens if something happens to my house? Maybe I need to consider storing it on another system in another location; maybe the cloud is an option. Maybe I could store a copy of my important files in several locations to mitigate against the risk of failure?
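In its simplest scripted form that copy and paste backup might look something like the below; the paths are just examples, and in reality you would schedule this and also keep a copy somewhere off-site.

```Shell
# Naive local backup: copy important files to a second drive under a dated folder
SOURCE="/c/Users/michael/Documents"
DEST="/d/Backups/Documents-$(date +%Y-%m-%d)"

mkdir -p "${DEST}"
rsync -av "${SOURCE}/" "${DEST}/"   # on Windows you might reach for robocopy instead
```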

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048717 id: 1048717
--- ---
## Hands-On Backup & Recovery ## Hands-On Backup & Recovery
In the last session we touched on [Kopia](https://kopia.io/) an Open-Source backup tool that we used to get some important data off to a local NAS and off to some cloud based object storage. In the last session we touched on [Kopia](https://kopia.io/) an Open-Source backup tool that we used to get some important data off to a local NAS and off to some cloud based object storage.
@ -21,14 +22,14 @@ To set up our minikube cluster we will be issuing the `minikube start --addons v
At this point I know we have not deployed Kasten K10 yet but we want to issue the following command when your cluster is up, but we want to annotate the volumesnapshotclass so that Kasten K10 can use this. At this point I know we have not deployed Kasten K10 yet, but when your cluster is up we want to issue the following command to annotate the volumesnapshotclass so that Kasten K10 can use it.
``` ```Shell
kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ kubectl annotate volumesnapshotclass csi-hostpath-snapclass \
k10.kasten.io/is-snapshot-class=true k10.kasten.io/is-snapshot-class=true
``` ```
We are also going to change over the default storageclass from the standard default storageclass to the csi-hostpath storageclass using the following. We are also going to change over the default storageclass from the standard default storageclass to the csi-hostpath storageclass using the following.
``` ```Shell
kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
@ -66,7 +67,7 @@ The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
To authenticate with the dashboard we now need the token which we can get with the following commands. To authenticate with the dashboard we now need the token which we can get with the following commands.
``` ```Shell
TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1)
TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode)

View File

@ -1,5 +1,5 @@
--- ---
title: '#90DaysOfDevOps - Application Focused Backup - Day 88' title: "#90DaysOfDevOps - Application Focused Backup - Day 88"
published: false published: false
description: 90DaysOfDevOps - Application Focused Backups description: 90DaysOfDevOps - Application Focused Backups
tags: "devops, 90daysofdevops, learning" tags: "devops, 90daysofdevops, learning"
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null canonical_url: null
id: 1048749 id: 1048749
--- ---
## Application Focused Backups ## Application Focused Backups
We have already spent some time talking about data services or data intensive applications such as databases on [Day 85](day85.md). For these data services we have to consider how we manage consistency, especially when it comes application consistency. We have already spent some time talking about data services or data intensive applications such as databases on [Day 85](day85.md). For these data services we have to consider how we manage consistency, especially when it comes to application consistency.
@ -59,7 +60,7 @@ At the time of writing we are up to image version `0.75.0` with the following he
![](Images/Day88_Data7.png) ![](Images/Day88_Data7.png)
We can use `kubectl get pods -n kanister` to ensure the pod is up and runnnig and then we can also check our custom resource definitions are now available (If you have only installed Kanister then you will see the highlighted 3) We can use `kubectl get pods -n kanister` to ensure the pod is up and running and then we can also check our custom resource definitions are now available (If you have only installed Kanister then you will see the highlighted 3)
![](Images/Day88_Data8.png) ![](Images/Day88_Data8.png)
@ -67,18 +68,19 @@ We can use `kubectl get pods -n kanister` to ensure the pod is up and runnnig an
Deploying mysql via helm: Deploying mysql via helm:
``` ```Shell
APP_NAME=my-production-app APP_NAME=my-production-app
kubectl create ns ${APP_NAME} kubectl create ns ${APP_NAME}
helm repo add bitnami https://charts.bitnami.com/bitnami helm repo add bitnami https://charts.bitnami.com/bitnami
helm install mysql-store bitnami/mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace=${APP_NAME} helm install mysql-store bitnami/mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace=${APP_NAME}
kubectl get pods -n ${APP_NAME} -w kubectl get pods -n ${APP_NAME} -w
``` ```
![](Images/Day88_Data9.png) ![](Images/Day88_Data9.png)
Populate the mysql database with initial data, run the following: To populate the mysql database with initial data, run the following:
``` ```Shell
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace ${APP_NAME} mysql-store -o jsonpath="{.data.mysql-root-password}" | base64 --decode) MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace ${APP_NAME} mysql-store -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
MYSQL_HOST=mysql-store.${APP_NAME}.svc.cluster.local MYSQL_HOST=mysql-store.${APP_NAME}.svc.cluster.local
MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t"
@ -86,13 +88,15 @@ echo MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
``` ```
### Create a MySQL CLIENT ### Create a MySQL CLIENT
We will run another container image to act as our client We will run another container image to act as our client
``` ```Shell
APP_NAME=my-production-app APP_NAME=my-production-app
kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash
``` ```
```
```Shell
Note: if you already have an existing mysql client pod running, delete with the command Note: if you already have an existing mysql client pod running, delete with the command
kubectl delete pod -n ${APP_NAME} mysql-client kubectl delete pod -n ${APP_NAME} mysql-client
@ -100,7 +104,7 @@ kubectl delete pod -n ${APP_NAME} mysql-client
### Add Data to MySQL ### Add Data to MySQL
``` ```Shell
echo "create database myImportantData;" | mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} echo "create database myImportantData;" | mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD}
MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t"
echo "drop table Accounts" | ${MYSQL_EXEC} echo "drop table Accounts" | ${MYSQL_EXEC}
@ -116,11 +120,11 @@ echo "insert into Accounts values('rastapopoulos', 377);" | ${MYSQL_EXEC}
echo "select * from Accounts;" | ${MYSQL_EXEC} echo "select * from Accounts;" | ${MYSQL_EXEC}
exit exit
``` ```
You should be able to see some data as per below. You should be able to see some data as per below.
![](Images/Day88_Data10.png) ![](Images/Day88_Data10.png)
### Create Kanister Profile ### Create Kanister Profile
Kanister provides a CLI, `kanctl` and another utility `kando` that is used to interact with your object storage provider from blueprint and both of these utilities. Kanister provides a CLI, `kanctl`, and another utility, `kando`, which is used to interact with your object storage provider from within blueprints; we will make use of both of these utilities.
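For reference, creating an S3-compatible location profile with `kanctl` looks roughly like the below; the flags are taken from the Kanister documentation and the credential, bucket and region values are placeholders, so double check against the docs for the version you installed.

```Shell
# Placeholder values - replace with your own object storage details
kanctl create profile s3compliant \
  --access-key ${ACCESS_KEY} \
  --secret-key ${SECRET_KEY} \
  --bucket ${BUCKET_NAME} \
  --region ${REGION} \
  --namespace kanister
```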
@ -139,8 +143,7 @@ Don't worry you don't need to create your own one from scratch unless your data
The blueprint we will be using will be the below. The blueprint we will be using will be the below.
```Shell
```
apiVersion: cr.kanister.io/v1alpha1 apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint kind: Blueprint
metadata: metadata:
@ -250,14 +253,14 @@ We need to cause some damage before we can restore anything, we can do this by d
Connect to our MySQL pod. Connect to our MySQL pod.
``` ```Shell
APP_NAME=my-production-app APP_NAME=my-production-app
kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash
``` ```
You can see that our importantdata db is there with `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` You can see that our importantdata db is there with `echo "SHOW DATABASES;" | ${MYSQL_EXEC}`
Then to drop we ran `echo "DROP DATABASE myImportantData;" | ${MYSQL_EXEC}` Then to drop we ran `echo "DROP DATABASE myImportantData;" | ${MYSQL_EXEC}`
And confirmed that this was gone with a few attempts to show our database. And confirmed that this was gone with a few attempts to show our database.
@ -269,11 +272,12 @@ We can now use Kanister to get our important data back in business using the `ku
We can confirm our data is back by using the below command to connect to our database. We can confirm our data is back by using the below command to connect to our database.
``` ```Shell
APP_NAME=my-production-app APP_NAME=my-production-app
kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash
``` ```
Now we are inside the MySQL Client, we can issue the `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` and we can see the database is back. We can also issue the `echo "select * from Accounts;" | ${MYSQL_EXEC}` to check the contents of the database and our important data is restored. Now that we are inside the MySQL client, we can issue `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` and see that the database is back. We can also issue `echo "select * from Accounts;" | ${MYSQL_EXEC}` to check the contents of the database and confirm that our important data is restored.
![](Images/Day88_Data17.png) ![](Images/Day88_Data17.png)

View File

@ -1,12 +1,13 @@
--- ---
title: '#90DaysOfDevOps - Disaster Recovery - Day 89' title: "#90DaysOfDevOps - Disaster Recovery - Day 89"
published: false published: false
description: 90DaysOfDevOps - Disaster Recovery description: 90DaysOfDevOps - Disaster Recovery
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1048718 id: 1048718
--- ---
## Disaster Recovery ## Disaster Recovery
We have mentioned already how different failure scenarios will warrant different recovery requirements. When it comes to Fire, Flood and Blood scenarios we can consider these mostly disaster situations where we might need our workloads up and running in a completely different location as fast as possible or at least with near-zero recovery time objectives (RTO). We have mentioned already how different failure scenarios will warrant different recovery requirements. When it comes to Fire, Flood and Blood scenarios we can consider these mostly disaster situations where we might need our workloads up and running in a completely different location as fast as possible or at least with near-zero recovery time objectives (RTO).
@ -39,7 +40,7 @@ The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
To authenticate with the dashboard, we now need the token which we can get with the following commands. To authenticate with the dashboard, we now need the token which we can get with the following commands.
``` ```Shell
TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1)
TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode)
@ -113,7 +114,6 @@ Below, the screenshot is just to show the successful backup and export of our da
![](Images/Day89_Data14.png) ![](Images/Day89_Data14.png)
### Create a new MiniKube cluster & deploy K10 ### Create a new MiniKube cluster & deploy K10
We then need to deploy a second Kubernetes cluster and where this could be any supported version of Kubernetes including OpenShift, for the purpose of education we will use the very free version of MiniKube with a different name. We then need to deploy a second Kubernetes cluster. This could be any supported version of Kubernetes, including OpenShift; for the purpose of education we will use the free MiniKube again, with a different name.
@ -142,7 +142,7 @@ The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
To authenticate with the dashboard, we now need the token which we can get with the following commands. To authenticate with the dashboard, we now need the token which we can get with the following commands.
``` ```Shell
TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1)
TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode)

View File

@ -1,12 +1,13 @@
--- ---
title: '#90DaysOfDevOps - Data & Application Mobility - Day 90' title: "#90DaysOfDevOps - Data & Application Mobility - Day 90"
published: false published: false
description: 90DaysOfDevOps - Data & Application Mobility description: 90DaysOfDevOps - Data & Application Mobility
tags: 'devops, 90daysofdevops, learning' tags: "devops, 90daysofdevops, learning"
cover_image: null cover_image: null
canonical_url: null canonical_url: null
id: 1048748 id: 1048748
--- ---
## Data & Application Mobility ## Data & Application Mobility
Day 90 of the #90DaysOfDevOps Challenge! In this final session I am going to cover mobility of our data and applications. I am specifically going to focus on Kubernetes but the requirement across platforms and between platforms is something that is an ever-growing requirement and is seen in the field. Day 90 of the #90DaysOfDevOps Challenge! In this final session I am going to cover mobility of our data and applications. I am specifically going to focus on Kubernetes, but the need to move data and applications across and between platforms is an ever-growing requirement and one that is seen in the field.
@ -121,5 +122,6 @@ As always keep the issues and PRs coming.
Thanks! Thanks!
@MichaelCade1 @MichaelCade1
- [GitHub](https://github.com/MichaelCade) - [GitHub](https://github.com/MichaelCade)
- [Twitter](https://twitter.com/MichaelCade1) - [Twitter](https://twitter.com/MichaelCade1)

View File

@ -161,7 +161,6 @@ This work is licensed under a
[![Star History Chart](https://api.star-history.com/svg?repos=MichaelCade/90DaysOfDevOps&type=Timeline)](https://star-history.com/#MichaelCade/90DaysOfDevOps&Timeline) [![Star History Chart](https://api.star-history.com/svg?repos=MichaelCade/90DaysOfDevOps&type=Timeline)](https://star-history.com/#MichaelCade/90DaysOfDevOps&Timeline)
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg [cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg