Spelling & Grammar Day 61-70
This commit is contained in: parent 19357cb73d, commit 7edba3af8e

## Kubernetes & Multiple Environments

So far during this section on Infrastructure as Code, we have looked at deploying virtual machines, albeit to VirtualBox, but the premise is the same: we define in code what we want our virtual machine to look like and then we deploy it. The same goes for Docker containers, and in this session we are going to take a look at how Terraform can be used to interact with resources supported by Kubernetes.

I have been using Terraform to deploy my Kubernetes clusters for demo purposes across the 3 main cloud providers, and you can find the repository here: [tf_k8deploy](https://github.com/MichaelCade/tf_k8deploy)

However, you can also use Terraform to interact with objects within the Kubernetes cluster; this could be using the [Kubernetes provider](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs), or it could be using the [Helm provider](https://registry.terraform.io/providers/hashicorp/helm/latest) to manage your chart deployments.

Now, we could use `kubectl` as we have shown in previous sections, but there are some benefits to using Terraform in your Kubernetes environment:

- Unified workflow - if you have used Terraform to deploy your clusters, you could use the same workflow and tool to deploy within your Kubernetes clusters.
- Lifecycle management - Terraform is not just a provisioning tool; it's going to enable changes, updates and deletions.

### Simple Kubernetes Demo

Much like the demo we created in the last session, we can now deploy nginx into our Kubernetes cluster. I will be using minikube here again for demo purposes. We create our kubernetes.tf file, and you can find this in the [folder](/Days/IaC/Kubernetes/Kubernetes.tf)

In that file we are going to define our Kubernetes provider, we are going to point to our kubeconfig file, create a namespace called nginx, and then we will create a deployment which contains 2 replicas and, finally, a service.

```
terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}

# Everything below the terraform block is a reconstructed sketch; only the
# opening lines survived here, so see the linked kubernetes.tf for the exact file.

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_namespace" "nginx" {
  metadata {
    name = "nginx"
  }
}

resource "kubernetes_deployment" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.nginx.metadata.0.name
  }
  spec {
    replicas = 2
    selector {
      match_labels = {
        app = "nginx"
      }
    }
    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }
      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"
          port {
            container_port = 80
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace.nginx.metadata.0.name
  }
  spec {
    selector = {
      app = "nginx"
    }
    port {
      port        = 80
      target_port = 80
    }
  }
}
```

We can now take a look at the deployed resources within our cluster.

![](images/Day61_IaC1.png)

Now, because we are using minikube, as you will have seen in the previous section, this has its limitations when we try to play with the Docker networking for ingress. But if we simply issue the `kubectl port-forward -n nginx svc/nginx 30201:80` command and open a browser to `http://localhost:30201/`, we should see our NGINX page.

![](images/Day61_IaC2.png)

### Multiple Environments

If we wanted to take any of the demos we have run through, but now wanted specific production, staging and development environments looking exactly the same and leveraging this code, there are two approaches to achieve this with Terraform:

- `terraform workspaces` - multiple named sections within a single backend
- file structure - directory layout provides separation, modules provide reuse

Each of the above has its pros and cons, though.

### terraform workspaces
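
As a hedged sketch of the workspace approach (these are standard Terraform CLI commands; the workspace names are just examples), you keep one configuration and switch between named states:

```Shell
terraform workspace new staging      # create and switch to a staging workspace
terraform workspace new production   # create one for production
terraform workspace list             # show all workspaces, * marks the current one
terraform workspace select staging   # switch back before planning/applying
```

Each workspace gets its own state within the same backend, so the same code can be applied per environment.
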
## Testing, Tools & Alternatives

As we close out this section on Infrastructure as Code, we must mention testing our code, the various tools available, and then some of the alternatives to Terraform. As I said at the start of the section, my focus was on Terraform because it is, firstly, free and open source, and secondly, cross-platform and agnostic to environments. But there are also alternatives out there that should be considered; the overall goal is to make people aware that this is the way to deploy your infrastructure.

### Code Rot

The first area I want to cover in this session is code rot. Unlike application code, infrastructure as code might get used and then not touched for a very long time. Let's take the example that we are going to be using Terraform to deploy our VM environment in AWS: it works the first time and we have our environment, but this environment doesn't change too often, so the code gets left alone, the state possibly (or hopefully) stored in a central location, but the code does not change.

What if something changes in the infrastructure, but it is done out of band? Or other things change in our environment.

Another huge area that follows on from code rot, and in general, is the ability to test your IaC and make sure all areas are working the way they should.

First up, there are some built-in testing commands we can take a look at:

| Command | Description |
| -------------------- | ------------------------------------------------------------------------------------------ |
| `terraform fmt` | Rewrites Terraform configuration files to a canonical format and style. |
| `terraform validate` | Checks that the configuration in a directory is syntactically valid and internally consistent. |
| `terraform plan` | Creates an execution plan, letting you preview the changes before applying them. |
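
A quick example of running these from the root of a Terraform project (the flags shown are standard Terraform CLI options):

```Shell
terraform fmt -check -recursive   # list any files not in the canonical format
terraform validate                # check syntax and internal consistency
terraform plan                    # preview the changes that would be applied
```
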
We also have some testing tools available external to Terraform:

- [tflint](https://github.com/terraform-linters/tflint)
  - Find possible errors
  - Warn about deprecated syntax and unused declarations.
  - Enforce best practices and naming conventions.

Scanning tools

- [checkov](https://www.checkov.io/) - scans cloud infrastructure configurations to find misconfigurations before they're deployed.
- [tfsec](https://aquasecurity.github.io/tfsec/v1.4.2/) - static analysis security scanner for your Terraform code.
- [terrascan](https://github.com/accurics/terrascan) - static code analyser for Infrastructure as Code.
- [terraform-compliance](https://terraform-compliance.com/) - a lightweight, security and compliance-focused test framework against Terraform to enable negative testing capability for your infrastructure-as-code.
- [snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files) - scans your Terraform code for misconfigurations and security issues.

Managed Cloud offering

We mentioned on Day 57, when we started this section, that there were some alternatives to Terraform:

| Cloud Specific | Cloud Agnostic |
| ------------------------------- | -------------- |
| AWS CloudFormation | Terraform |
| Azure Resource Manager | Pulumi |
| Google Cloud Deployment Manager | |

I have used AWS CloudFormation probably the most out of the above list, as it is native to AWS, but I have not used the others other than Terraform. As you can imagine, the cloud-specific versions are very good in that particular cloud, but if you have multiple cloud environments then you are going to struggle to migrate those configurations, or you are going to have multiple management planes for your IaC efforts.

I think an interesting next step for me is to take some time and learn more about [Pulumi](https://www.pulumi.com/)

From a Pulumi comparison on their site:

> "Both Terraform and Pulumi offer a desired state infrastructure as code model where the code represents the desired infrastructure state and the deployment engine compares this desired state with the stack’s current state and determines what resources need to be created, updated or deleted."

The biggest difference I can see is that, unlike the HashiCorp Configuration Language (HCL), Pulumi allows for general-purpose languages like Python, TypeScript, JavaScript, Go and .NET.

For a quick overview, see [Introduction to Pulumi: Modern Infrastructure as Code](https://www.youtube.com/watch?v=QfJTJs24-JM). I like the ease and choices you are prompted with, and I want to get into this a little more.

This wraps up the Infrastructure as Code section. Next we move on to that little bit of overlap with configuration management; in particular, as we get past the big picture of configuration management, we are going to be using Ansible for some of those tasks and demos.

Coming straight off the back of the section covering Infrastructure as Code, there is likely going to be some crossover as we talk about Configuration Management or Application Configuration Management.

Configuration Management is the process of maintaining applications, systems and servers in a desired state. The overlap with Infrastructure as Code is that IaC will make sure your infrastructure is at the desired state, but after that, Terraform especially is not going to look after the desired state of your OS settings or applications; that is where Configuration Management tools come in, making sure that systems and applications perform the way they are expected to as changes occur over time.

Configuration management keeps you from making small or large changes that go undocumented.

The scenario, or why you'd want to use Configuration Management: meet Dean. He's our system administrator, and Dean is a happy camper, pretty much working on all of the systems in his environment.

What happens if their system fails, if there's a fire, or a server goes down? Dean knows exactly what to do; he can fix that fire really easily. The problems become really difficult for Dean, however, if multiple servers start failing, particularly when you have large and expanding environments. This is why Dean really needs to have a configuration management tool. Configuration Management tools can help make Dean look like a rockstar; all he has to do is configure the right code that allows him to push out the instructions on how to set up each of the servers quickly, effectively and at scale.

### Configuration Management tools

There are a variety of configuration management tools available, and each has specific features and strengths.

![](images/Day63_config1.png)

At this stage, we will take a quickfire look at the options in the above picture before making our choice on which one we will use and why.

- **Chef**

  - Chef is an open-source tool developed by OpsCode, written in Ruby and Erlang.
  - Chef is best suited for organisations that have a heterogeneous infrastructure and are looking for mature solutions.
  - Recipes and Cookbooks determine the configuration code for your systems.
  - Pro - A large collection of recipes is available
  - Pro - Integrates well with Git, which provides strong version control
  - Con - Steep learning curve; a considerable amount of time is required
  - Con - The main server doesn't have much control

- **Puppet**

  - Puppet is built in Ruby and uses a DSL for writing manifests.
  - Puppet also works well with heterogeneous infrastructure where the focus is on scalability.
  - Pro - Large community for support.
  - Pro - Well-developed reporting mechanism.
  - Con - Advanced tasks require knowledge of the Ruby language.
  - Con - The main server doesn't have much control.
  - Architecture - Server / Clients
  - Ease of setup - Moderate

- **Ansible**

  - Ansible is an IT automation tool that automates configuration management, cloud provisioning, deployment and orchestration.
  - The core of Ansible playbooks is written in YAML. (We should really do a section on YAML, as we have seen it a few times.)
  - Ansible works well when there are environments that focus on getting things up and running fast.
  - Works on playbooks which provide instructions to your servers.
  - Pro - No agents are needed on remote nodes.
  - Pro - YAML is easy to learn.
  - Con - Performance speed is often less than that of other tools (faster than Dean doing it himself manually).
  - Con - YAML is not as powerful as Ruby, but has less of a learning curve.
  - Architecture - Client Only
  - Ease of setup - Very Easy
  - Language - Procedural - Specify how to do a task

- **SaltStack**

  - SaltStack is a CLI-based tool that automates configuration management and remote execution.
  - SaltStack is Python-based, whilst the instructions are written in YAML or its own DSL.
  - Perfect for environments with scalability and resilience as the priority.
  - Pro - Easy to use when up and running
  - Pro - Good reporting mechanism
  - Con - The setup phase is tough
  - Con - New web UI which is much less developed than the others
  - Architecture - Server / Clients
  - Ease of setup - Moderate
  - Language - Declarative - Specify only what to do

I think it is important to touch on some of the differences between Ansible and Terraform.

|                | Ansible                                                       | Terraform                                                          |
| -------------- | ------------------------------------------------------------- | ------------------------------------------------------------------ |
| Type           | Ansible is a configuration management tool                    | Terraform is an orchestration tool                                  |
| Infrastructure | Ansible provides support for mutable infrastructure           | Terraform provides support for immutable infrastructure             |
| Language       | Ansible follows procedural language                           | Terraform follows a declarative language                            |
| Provisioning   | Ansible provides partial provisioning (VM, Network, Storage)  | Terraform provides extensive provisioning (VM, Network, Storage)    |
| Packaging      | Ansible provides complete support for packaging & templating  | Terraform provides partial support for packaging & templating       |
| Lifecycle Mgmt | Ansible does not have lifecycle management                    | Terraform is heavily dependent on lifecycle and state mgmt          |

## Ansible: Getting Started

We covered a little about what Ansible is in the [big picture session yesterday](day63.md), but we are going to get started with a little more information on top of that here. Firstly, Ansible comes from Red Hat. Secondly, it is agentless, connects via SSH and runs commands. Thirdly, it is cross-platform (Linux & macOS, WSL2) and open-source (there is also a paid-for enterprise option). Ansible pushes configuration vs other models.

### Ansible Installation

As you might imagine, Red Hat and the Ansible team have done a fantastic job of documenting Ansible. This generally starts with the installation steps, which you can find [here](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html). Remember we said that Ansible is an agentless automation tool; the tool is deployed to a system referred to as a "Control Node", and from this control node it manages machines and other devices (possibly network devices) over SSH.

It does state in the above-linked documentation that the Windows OS cannot be used as the control node.

For my control node, and for at least this demo, I am going to use the Linux VM we created way back in the [Linux section](day20.md) as my control node.

This system was running Ubuntu, and the installation simply needs the following commands.

```Shell
sudo apt update
# the remaining steps are a sketch of the standard Ansible install on Ubuntu;
# see the linked installation guide for the exact commands
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
```

Now we should have Ansible installed on our control node; you can check this by running `ansible --version`.

![](images/Day64_config1.png)

Before we then start to look at controlling other nodes in our environment, we can also check the functionality of Ansible by running a command against our local machine: `ansible localhost -m ping` will use an [Ansible Module](https://docs.ansible.com/ansible/2.9/user_guide/modules_intro.html), and this is a quick way to perform a single task across many different systems. I mean, it is not much fun with just the localhost, but imagine you wanted to get something or make sure all your systems were up and you had 1000+ servers and devices.

![](images/Day64_config2.png)

Or an actual real-life use for a module might be something like `ansible webservers -m service -a "name=httpd state=started"`; this will tell us if all of our webservers have the httpd service running. I have glossed over the webservers term used in that command.

### hosts

![](images/Day64_config3.png)

For us to specify our hosts, or the nodes that we want to automate with these tasks, we need to define them. We can do that by navigating to the /etc/ansible directory on your system.

![](images/Day64_config4.png)

The file we want to edit is the hosts file. Using a text editor, we can jump in and define our hosts; the hosts file contains lots of great instructions on how to use and modify it. We want to scroll down to the bottom, where we are going to create a new group called [windows] and add our `10.0.0.1` IP address for that host. Save the file.

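As a sketch (the group name and IP are exactly those given above), the bottom of the hosts file would then contain:

```Text
[windows]
10.0.0.1
```
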


However, remember I said you will need to have SSH available to enable Ansible to connect to your system. As you can see below, when I run `ansible windows -m ping` we get an unreachable error, because things failed to connect via SSH.



I have now also started adding some additional hosts to our inventory (another name for this file, as this is where you are going to define all of your devices; network devices such as switches and routers would also be added here and grouped). In our hosts file, though, I have also added my credentials for accessing the Linux group of systems.

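Those credentials can be supplied as inventory variables. A hedged sketch with placeholder values (the real file uses my own IPs and credentials, which are not reproduced here):

```Text
[linux]
<your-linux-host-ip>

[linux:vars]
ansible_user=<your-user>
ansible_ssh_pass=<your-password>
```
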


Now if we run `ansible linux -m ping`, we get a success, as per below.



We then have the node requirements; these are the target systems you wish to automate.

### Ansible Commands

You saw that we were able to run `ansible linux -m ping` against our Linux machine and get a response; basically, with Ansible, we can run many [ad hoc commands](https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html). But you can also run these against a group of systems and get that information back.

If you find yourself repeating commands, or even worse, having to log into individual systems to run them, then Ansible can help there. For example, the simple command below would give us the output of all the operating system details for all of the systems we add to our linux group.

`ansible linux -a "cat /etc/os-release"`

Other use cases could be to reboot systems, copy files, and manage packages and users. You can also couple ad hoc commands with Ansible modules.

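A few hedged examples of what those could look like (copy and apt are real Ansible modules; the hosts, file paths and package are just illustrations):

```Shell
ansible linux -a "/sbin/reboot" --become                      # reboot every host in the linux group
ansible linux -m copy -a "src=/etc/hosts dest=/tmp/hosts"     # copy a file out to the group
ansible linux -m apt -a "name=htop state=present" --become    # manage a package
```
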
Ad hoc commands use a declarative model, calculating and executing the actions required to reach a specified final state. They achieve a form of idempotence by checking the current state before they begin and doing nothing unless the current state is different from the specified final state.

This is where Ansible playbooks come in. A playbook enables us to take our group of hosts and run plays and tasks against them.

Playbook > Plays > Tasks

For anyone that comes from a sports background, you may have come across the term playbook. A playbook tells the team how you will play, made up of various plays and tasks. If we think of the plays as the set pieces within the sport or game, then the tasks are associated with each play; you can have multiple tasks to make up a play, and in the playbook you may have multiple different plays.

These playbooks are written in YAML (YAML ain’t markup language); you will find that a lot of the sections we have covered so far, especially Containers and Kubernetes, feature YAML-formatted configuration files.

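The simple playbook we ran was along these lines; this is a hedged sketch (the play and task names are assumptions) of a play against localhost with a ping and a debug message:

```Yaml
- name: Simple Play
  hosts: localhost
  connection: local
  tasks:
    - name: Ping me
      ping:
    - name: print os
      debug:
        msg: "{{ ansible_os_family }}"
```
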
You can see that the first task of "Gathering Facts" happened, but we didn't trigger it.

Our second task was to set a ping; this is not an ICMP ping, but a Python script that reports back `pong` on successful connectivity to the remote or local host. [ansible.builtin.ping](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ping_module.html)

Then our third task, or really our second defined task (as the first one will run unless you disable it), was the printing of a message telling us our OS. In this task we are using conditionals: we could run this playbook against all different types of operating systems and this would return the OS name. We are simply messaging this output for ease, but we could add a task to say something like:

```Yaml
tasks:
  # only the when line survived here; the task body is a sketch based on the
  # conditional shutdown example from the Ansible docs
  - name: "shut down Debian flavoured systems"
    command: /sbin/shutdown -t now
    when: ansible_os_family == "Debian"
```

### Vagrant to set up our environment

We are going to use Vagrant to set up our node environment. I am going to keep this at a reasonable 4 nodes, but you can hopefully see that this could easily be 300 or 3000, and this is the power of Ansible and other configuration management tools: being able to configure your servers at scale.

If you are resource-constrained, then you can also run `vagrant up web01 web02` to bring up only the web servers.

### Ansible host configuration

Now that we have our environment ready, we can check Ansible. For this, we will use our Ubuntu desktop as our control (you could equally use any Linux-based machine on your network that can reach the nodes below). Let's also add the new nodes to our group in the Ansible hosts file; you can think of this file as an inventory. An alternative could be another inventory file that is called as part of your ansible command with `-i filename`; this could be useful vs using the default hosts file, as you can have different files for different environments, maybe production, test and staging. Because we are using the default hosts file, we do not need to specify it, as this would be the default used.

I have added the following to the default hosts file.
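
The exact entries are in the repository; as a hedged sketch, the grouping looks something like this (the group names webservers, proxy and nodes are referenced later in this section, while the database group name is an assumption):

```Text
[control]
ansible-control

[webservers]
web01
web02

[proxy]
loadbalancer

[database]
db01

[nodes:children]
webservers
proxy
database
```
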
Before moving on, we want to make sure we can run a command against our nodes. Let's run `ansible nodes -m command -a hostname`; this simple command will test that we have connectivity and report back our hostnames.

Also, note that I have added these nodes and IPs to my Ubuntu control node within the /etc/hosts file to ensure connectivity. We might also need to do SSH configuration for each node from the Ubuntu box.

```Text
192.168.169.140 ansible-control
# ...plus one entry per node (web01, web02, loadbalancer, db01); the remaining
# IPs are in the repository and are not reproduced here
```

![](images/Day65_config1.png)

At this stage, we want to run through setting up SSH keys between your control node and your server nodes. This is what we are going to do next. Another way here could be to add variables into your hosts file to give a username and password; I would advise against this, as it is never going to be a best practice.

To set up SSH and share keys amongst your nodes, follow the steps below; you will be prompted for passwords (`vagrant`) and you will likely need to hit `y` a few times to accept.

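As a hedged sketch of those steps (standard OpenSSH commands, with one `ssh-copy-id` per node):

```Shell
ssh-keygen        # generate a key pair on the control node, accepting the defaults
ssh-copy-id web01 # copy the public key to a node; repeat for web02, loadbalancer, db01
```
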
I am not running all my VMs, only the web servers, so I issued `ssh-copy-id` for web01 and web02 only.



Before running any playbooks, I like to make sure that I have simple connectivity with my groups, so I have run `ansible webservers -m ping` to test connectivity.

![](images/Day65_config3.png)

### Our First "real" Ansible Playbook

Our first Ansible playbook is going to configure our web servers; we have grouped these in our hosts file under the grouping [webservers].

Before we run our playbook, we can confirm that web01 and web02 do not have Apache installed. The top of the screenshot below shows the folder and file layout I have created within my Ansible control node to run this playbook: we have `playbook1.yml`, then in the templates folder we have the `index.html.j2` and `ports.conf.j2` files. You can find these files in the folder listed above in the repository.

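A hedged sketch of playbook1.yml, reconstructed from the breakdown that follows (the variable values and task names are assumptions; the exact file is in the repository):

```Yaml
- hosts: webservers
  become: yes
  vars:
    http_port: 8000
    https_port: 4443
    html_welcome_msg: "Hello 90DaysOfDevOps"
  tasks:
    - name: ensure apache is at the latest version
      apt:
        name: apache2
        state: latest
    - name: write the apache2 ports.conf config file
      template:
        src: templates/ports.conf.j2
        dest: /etc/apache2/ports.conf
      notify:
        - restart apache
    - name: write a basic index.html file
      template:
        src: templates/index.html.j2
        dest: /var/www/html/index.html
      notify:
        - restart apache
    - name: ensure apache is running
      service:
        name: apache2
        state: started
  handlers:
    - name: restart apache
      service:
        name: apache2
        state: restarted
```
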
Breaking down the above playbook:

- `become: yes` means that our user running the playbook will become root on our remote systems. You will be prompted for the root password.
- We then have `vars`, and this defines some environment variables we want throughout our webservers.

Following this, we start our tasks:

- Task 1 is to ensure that Apache is running the latest version
- Task 2 is writing the ports.conf file from our source found in the templates folder
- Task 3 is creating a basic index.html file
- Task 4 is making sure Apache is running

Finally, we have a handlers section. [Handlers: Running operations on change](https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html)

"Sometimes you want a task to run only when a change is made on a machine. For example, you may want to restart a service if a task updates the configuration of that service, but not if the configuration is unchanged. Ansible uses handlers to address this use case. Handlers are tasks that only run when notified. Each handler should have a globally unique name."

At this stage, you might be thinking: but we have deployed 5 VMs (including our Ubuntu Desktop machine, which is acting as our Ansible control). The other systems will come into play during the rest of the section.

### Run our Playbook

We are now ready to run our playbook against our nodes. To run our playbook, we can use `ansible-playbook playbook1.yml`. We have defined the hosts that our playbook will run against within the playbook, and this will walk through the tasks that we have defined.

When the command is complete, we get an output showing our plays and tasks. This may take some time; you can see from the below image that it took a while to install our desired state.



We can then double-check this by jumping into a node and checking that we have the installed software on it.



Just to round this out: we have deployed two standalone web servers with the above playbook.

![](images/Day65_config9.png)

We are going to build on this playbook as we move through the rest of this section. I am interested as well in taking our Ubuntu desktop and seeing if we could bootstrap our applications and configuration using Ansible, so we might also touch on this. You saw that we can use localhost in our commands; we can also run playbooks against our localhost, for example.

Another thing to add here is that we are only really working with Ubuntu VMs but Ansible is agnostic to the target systems. The alternatives that we have previously mentioned to manage your systems could be server by server (not scalable when you get over a large amount of servers, plus a pain even with 3 nodes) we can also use shell scripting which again we covered in the Linux section but these nodes are potentially different so yes it can be done but then someone needs to maintain and manage those scripts. Ansible is free and hits the easy button vs having to have a specialised script.
|
||||
Another thing to add here is that we are only really working with Ubuntu VMs but Ansible is agnostic to the target systems. The alternatives that we have previously mentioned to manage your systems could be server by server (not scalable when you get over a large number of servers, plus a pain even with 3 nodes) we can also use shell scripting which again we covered in the Linux section but these nodes are potentially different so yes it can be done but then someone needs to maintain and manage those scripts. Ansible is free and hits the easy button vs having to have a specialised script.
|
||||
|
||||
## Resources
|
||||
|
||||
|
@ -10,17 +10,17 @@ id: 1048712
|
||||
|
||||
## Ansible Playbooks (Continued)

In our last section, we started by creating our small lab using a Vagrantfile to deploy 4 machines, and we used the Linux machine we created in that section as our Ansible control system.

We also ran through a few scenarios of playbooks, and at the end we had a playbook that made our web01 and web02 individual web servers.

![](images/Day66_config1.png)

### Keeping things tidy

Before we get into further automation and deployment, we should cover the ability to keep our playbook lean and tidy, and how we can separate our tasks and handlers into subfolders.

We are basically going to copy our tasks into their own file within a folder.

```Yaml
- name: ensure apache is at the latest version
  apt:
    name: apache2
    state: latest
# ...the remaining tasks from the playbook are copied across in the same way;
# only this first task was visible here
```

We have just tidied up our playbook and started to separate areas that could make our playbooks easier to read and maintain.

### Roles and Ansible Galaxy

At the moment, we have deployed 4 VMs and have configured 2 of them as our web servers, but we have some more specific functions, namely a database server and a loadbalancer or proxy. For us to do this, and to tidy up our repository, we can use roles within Ansible.

To do this we will use the `ansible-galaxy` command, which is there to manage Ansible roles in shared repositories.

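For our apache2 role, that looks like this:

```Shell
ansible-galaxy init roles/apache2
```
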
The above command (`ansible-galaxy init roles/apache2`) will create the folder structure shown below.

![](images/Day66_config2.png)

Copy and paste is an easy way to move those files, but we also need to make a change to tasks/main.yml so that we point it to apache2_install.yml.

We also need to change our playbook now to refer to our new role. In playbook1.yml and playbook2.yml we determined our tasks and handlers in different ways, as we changed these between the two versions. We need to change our playbook to use this role, as per below:

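A hedged sketch of what that looks like (the vars are carried over from playbook1 and are assumptions here; the roles list is the point):

```Yaml
- hosts: webservers
  become: yes
  vars:
    http_port: 8000
    https_port: 4443
    html_welcome_msg: "Hello 90DaysOfDevOps"
  roles:
    - apache2
```
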
We can now run our playbook again, this time with the new playbook name: `ansible-playbook playbook3.yml`.

![](images/Day66_config3.png)

OK, the deprecation warning: although our playbook ran, we should fix our ways now. To do that, I have changed the include option in tasks/main.yml to now be import_tasks, as per below.

![](images/Day66_config4.png)

We are also going to create a few more roles whilst using `ansible-galaxy`.

![](images/Day66_config5.png)

I am going to leave this one here, and in the next session we will start working on those other nodes we have deployed but have not done anything with yet.

## Using Roles & Deploying a Loadbalancer

In the last session, we covered roles and used the `ansible-galaxy` command to help create our folder structures for some roles that we are going to use. We finished up with a much tidier working repository for our configuration code, as everything is hidden away in our role folders.

However, we have only used the apache2 role and have a working playbook3.yaml to handle our webservers.

At this point, if you have only used `vagrant up web01 web02`, now is the time to run `vagrant up loadbalancer`; this will bring up another Ubuntu system that we will use as our Load Balancer/Proxy.

We have already defined this new machine in our hosts file, but we do not have the SSH key configured until it is available, so we also need to run `ssh-copy-id loadbalancer` when the system is up and ready.

### Common role

At the end of yesterday's session, I created the role of `common`; common will be used across all of our servers, whereas the other roles are specific to use cases. The applications I am going to install as common are somewhat spurious, and I cannot see many reasons for this to be the case, but it shows the objective. In our common role folder structure, navigate to the tasks folder and you will have a main.yml. In this YAML we need to point to our install_tools.yml file, and we do this by adding the line `- import_tasks: install_tools.yml`. This used to be `include`, but that option is going to be deprecated soon enough, so we are using import_tasks.

```Yaml
- name: "Install Common packages"
  apt:
    name: "{{ item }}"
    state: latest
  with_items:
    # the middle of this list was cut; neofetch is confirmed by the ad hoc
    # check later in this session, figlet was visible here
    - neofetch
    - figlet
```

In our playbook, we then add in the common role for each host block.

```Yaml
- hosts: webservers
  become: yes
  roles:
    # a sketch of the roles list: common plus the apache2 role from the last session
    - common
    - apache2
```

### nginx

The next phase is for us to install and configure nginx on our loadbalancer VM. Like the common folder structure, we have the nginx role based on the last session.

First of all, we are going to add a host block to our playbook. This block will include our common role and then our new nginx role.

The playbook can be found here. [playbook4.yml](Days/../Configmgmt/ansible-scenario4/playbook4.yml)

```Yaml
- hosts: proxy
  become: yes
  roles:
    # as described above: the common role, then the new nginx role
    - common
    - nginx
```

For this to mean anything, we have to define the tasks that we wish to run. In the same way as before, we will modify the main.yml in tasks to point to two files this time: one for installation and one for configuration.

There are some other files that I have modified based on the outcome we desire; take a look in the folder [ansible-scenario4](Days/Configmgmt/ansible-scenario4) for all the files changed. You should check the folders tasks, handlers and templates in the nginx folder, and you will find those additional changes and files.

Now that we have our web servers and loadbalancer configured, we should now be able to browse to the IP of our loadbalancer.



If you are following along and you do not have this state, then it could be down to the server IP addresses you have in your environment. The file can be found in `templates\mysite.j2` and looks similar to the below; you would need to update it with your web server IP addresses.

```J2
upstream webservers {
        # one server line per web server; the real IPs are not reproduced here,
        # so placeholders are shown (the port matches the http_port used earlier)
        server <web01-ip>:8000;
        server <web02-ip>:8000;
}

server {
        listen 80;

        location / {
                proxy_pass http://webservers;
        }
}
```

I am pretty confident that what we have installed is all good, but let's use an ad hoc command with Ansible to check these common tools installations.

`ansible loadbalancer -m command -a neofetch`
### Tags

As we left our playbook in the session yesterday, we would need to run every task and play within that playbook, which means we would have to run the webservers and loadbalancer plays and tasks to completion.

However, tags can enable us to separate these out if we want. This could be an efficient move if we have extra-large and long playbooks in our environments.

In our playbook file, in this case, we are using [ansible-scenario5](Configmgmt/ansible-scenario5/playbook5.yml)

```Yaml
- hosts: webservers
  become: yes
  roles:
    - common
    - apache2
  # the play bodies were cut here; this is a sketch based on the earlier playbook,
  # as only the hosts line and the proxy tag were visible
  tags: web

- hosts: proxy
  become: yes
  roles:
    - common
    - nginx
  tags: proxy
```

We can then confirm this by using `ansible-playbook playbook5.yml --list-tags`; the list-tags option is going to outline the tags we have defined in our playbook.



Now, if we wanted to target just the proxy, we could do this by running `ansible-playbook playbook5.yml --tags proxy`.

![](images/Day68_config2.png)

Tags can be added at the task level as well, so we can get really granular on where and what you want to happen. They could be application-focused tags; we could go through tasks, for example, and tag our tasks based on installation, configuration or removal. Another very useful tag you can use is `tags: always`: this will ensure that, no matter what --tags you are using in your command, if something is tagged with the always value then it will always run when you run the ansible-playbook command.

With tags we can also bundle multiple tags together and if we choose to run `ansible-playbook playbook5.yml --tags proxy,web` this will run all of the items with those tags. Obviously in our instance that would mean the same as running the the playbook but if we had multiple other plays then this would make sense.
|
||||
With tags, we can also bundle multiple tags together and if we choose to run `ansible-playbook playbook5.yml --tags proxy,web` this will run all of the items with those tags. Obviously, in our instance, that would mean the same as running the playbook but if we had multiple other plays then this would make sense.
You can also define more than one tag.
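
As a hedged illustration of task-level tags and the always tag (these two tasks are hypothetical, not from the scenario files):

```Yaml
tasks:
  - name: Install apache2 (illustrative install-tagged task)
    apt:
      name: apache2
      state: latest
    tags:
      - install

  - name: Print the node name (runs whichever --tags you pass)
    debug:
      msg: "Running on {{ ansible_facts['nodename'] }}"
    tags:
      - always
```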

@ -63,17 +63,17 @@ There are two main types of variables within Ansible.

### Ansible Facts
Each time we have ran our playbooks, we have had a task that we have not defined called "Gathering facts" we can use these variables or facts to make things happen with our automation tasks.
Each time we have run our playbooks, we have had a task that we have not defined called "Gathering facts". We can use these variables or facts to make things happen with our automation tasks.

![](Images/Day65_config3.png)

If we were to run the following `ansible proxy -m setup` command we should see a lot of output in JSON format. There is going to be a lot of information on your terminal though to really use this so we would like to output this to a file using `ansible proxy -m setup >> facts.json` you can see this file in this repository, [ansible-scenario5](Configmgmt/ansible-scenario5/facts.json)
If we were to run the following `ansible proxy -m setup` command we should see a lot of output in JSON format. There is going to be a lot of information on your terminal, though, so to make use of this we would like to output it to a file using `ansible proxy -m setup >> facts.json`. You can see this file in this repository, [ansible-scenario5](Configmgmt/ansible-scenario5/facts.json)
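
If you only want a subset of the facts rather than the full JSON dump, the setup module accepts a filter argument; for example, this should return just the default IPv4 details:

```
ansible proxy -m setup -a "filter=ansible_default_ipv4"
```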

![](Images/Day65_config4.png)

If you open this file you can see all sorts of information for our command. We can get our IP addresses, architecture, bios version. A lot of useful information if we want to leverage this and use this in our playbooks.
If you open this file you can see all sorts of information for our command. We can get our IP addresses, architecture, and BIOS version - a lot of useful information if we want to leverage this in our playbooks.
An idea would be to potentially use one of these variables within our nginx template mysite.j2 where we hard coded the IP addresses of our webservers. You can do this by creating a for loop in your mysite.j2 and this is going to cycle through the group [webservers] this enables us to have more than our 2 webservers automatically and dynamically created or added to this load balancer configuration.
An idea would be to potentially use one of these variables within our nginx template mysite.j2 where we hard-coded the IP addresses of our webservers. You can do this by creating a for loop in your mysite.j2 that cycles through the group [webservers]; this enables us to have more than our 2 webservers automatically and dynamically created or added to this load balancer configuration.
```
#Dynamic Config for server {{ ansible_facts['nodename'] }}
@ -92,11 +92,11 @@ An idea would be to potentially use one of these variables within our nginx temp
}
```
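
The diff only shows the first and last lines of the template. A minimal sketch of the full for loop might look like the below, assuming name resolution for the webserver names and the http_port of 8000 used later in this section:

```J2
#Dynamic Config for server {{ ansible_facts['nodename'] }}
upstream webservers {
{% for host in groups['webservers'] %}
    server {{ host }}:8000;
{% endfor %}
}

server {
    listen 80;
    location / {
        proxy_pass http://webservers;
    }
}
```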
The outcome of the above will look the same as it does right now but if we added more webservers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured.
The outcome of the above will look the same as it does right now but if we added more web servers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured.
### User created
User created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there.
User-created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there.
```Yaml
- hosts: webservers
@ -118,7 +118,7 @@ User created variables are what we have created ourselves. If you take a look in
tags: proxy
```
We can however keep our playbook clear of variables by moving them to their own file. We are going to do this but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. In the root of that folder we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file. Removing them from the playbook along with vars: as well.
We can however keep our playbook clear of variables by moving them to their own file. We are going to do this, but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. In the root of that folder, we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file, removing them from the playbook along with `vars:` as well.
```Yaml
http_port: 8000
@ -163,7 +163,7 @@ One of those variables was the http_port, we can use this again in our for loop
}
```
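
The resulting layout in the scenario folder would look something like this:

```
ansible-scenario6/
└── group_vars/
    └── all/
        └── common_variables.yml
```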
We can also define an ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on.
We can also define an Ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on.
```J2
<html>
@ -173,7 +173,7 @@ We can also define an ansible fact in our roles/apache2/templates/index.html.j2
</html>
```
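
Again the diff trims the middle of the template. A minimal sketch of the whole file, with a hypothetical welcome-message variable alongside the fact, could be:

```J2
<html>
  <head>
    <title>{{ html_welcome_msg }}</title>
  </head>
  <body>
    <h1>{{ html_welcome_msg }} - served from {{ ansible_facts['nodename'] }}</h1>
  </body>
</html>
```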
The results of running the `ansible-playbook playbook6.yml` command with our variable changes means that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group.
The results of running the `ansible-playbook playbook6.yml` command with our variable changes mean that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group.

![](Images/Day65_config5.png)

@ -181,19 +181,19 @@ We could also add a folder called host_vars and create a web01.yml and have a sp
### Inventory Files
So far we have used the default hosts file in the /etc/ansible folder to determine our hosts. We could however have different files for different environments, for example production and staging. I am not going to create more environments. But we are able to create our own host files.
So far we have used the default hosts file in the /etc/ansible folder to determine our hosts. We could however have different files for different environments, for example, production and staging. I am not going to create more environments. But we can create our own host files.
We can create multiple files for our different inventory of servers and nodes. We would call these using `ansible-playbook -i dev playbook.yml` you can also define variables within your hosts file and then print that out or leverage that variable somewhere else in your playbooks for example in the example and training course I am following along to below they have added the environment variable created in the host file to the loadbalancer web page template to show the environment as part of the web page message.
We can create multiple files for our different inventory of servers and nodes. We would call these using `ansible-playbook -i dev playbook.yml`. You can also define variables within your hosts file and then print them out or leverage them somewhere else in your playbooks. For example, in the example and training course I am following along with below, they have added the environment variable created in the host file to the loadbalancer web page template to show the environment as part of the web page message.
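
As a sketch, a `dev` inventory file could look like the below; the host and group names are taken from this section, but the env variable name is illustrative:

```
[webservers]
web01
web02

[proxy]
loadbalancer

[database]
db01

[all:vars]
env=dev
```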
### Deploying our Database server
We still have one more machine we have not powered up yet and configured. We can do this using `vagrant up db01` from where our Vagrantfile is located. When this is up and accessible we then need to make sure the SSH key is copied over using `ssh-copy-id db01` so that we can access.
We still have one more machine we have not powered up yet and configured. We can do this using `vagrant up db01` from where our Vagrantfile is located. When this is up and accessible we then need to make sure the SSH key is copied over using `ssh-copy-id db01` so that we can access it.
We are going to be working from the [ansible-scenario7](Configmgmt/ansible-scenario7) folder
Let's then use `ansible-galaxy init roles/mysql` to create a new folder structure for a new role called "mysql"
Let's then use `ansible-galaxy init roles/mysql` to create a new folder structure for a new role called "MySQL"
In our playbook we are going to add a new play block for the database configuration. We have our group database defined in our /etc/ansible/hosts file. We then instruct our database group to have the role common and a new role called mysql which we created in the previous step. We are also tagging our database group with database, this means as we discussed earlier we can choose to only run against these tags if we wish.
In our playbook, we are going to add a new play block for the database configuration. We have our group database defined in our /etc/ansible/hosts file. We then instruct our database group to have the role common and a new role called mysql which we created in the previous step. We are also tagging our database group with database; this means, as we discussed earlier, we can choose to only run against these tags if we wish.
```Yaml
- hosts: webservers
@ -220,7 +220,7 @@ In our playbook we are going to add a new play block for the database configurat
tags: database
```
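
The new play block itself is trimmed out of the diff above, but based on the description it would look like:

```Yaml
- hosts: database
  become: true
  roles:
    - common
    - mysql
  tags: database
```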
Within our roles folder structure you will now have the tree automatically created, we need to populate the following:
Within our roles folder structure, you will now have the tree automatically created, we need to populate the following:
Handlers - main.yml

@ -234,7 +234,7 @@ Handlers - main.yml

Tasks - install_mysql.yml, main.yml & setup_mysql.yml
install_mysql.yml - this task is going to be there to install mysql and ensure that the service is running.
install_mysql.yml - this task is going to be there to install MySQL and ensure that the service is running.
```Yaml
- name: "Install Common packages"
@ -306,7 +306,7 @@ db_pass: DevOps90
db_name: 90DaysOfDevOps
```
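
For reference, a minimal sketch of what install_mysql.yml needs to do; the package names are assumptions (python3-pymysql is needed by Ansible's MySQL modules on the managed node):

```Yaml
- name: "Install MySQL packages"
  apt:
    name: "{{ item }}"
    state: present
  with_items:
    - mysql-server
    - python3-pymysql

- name: "Ensure mysql service is running"
  service:
    name: mysql
    state: started
    enabled: true
```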
We also have the my.cnf.j2 file in the templates folder, which looks like below:
We also have the my.cnf.j2 file in the templates folder, which looks like below:
```J2
[mysql]
```

@ -327,9 +327,9 @@ We fixed the above and ran the playbook again and we have a successful change.

We should probably make sure that everything is how we want it to be on our newly configured db01 server. We can do this from our control node using the `ssh db01` command.
To connect to mySQL I used `sudo /usr/bin/mysql -u root -p` and gave the vagrant password for root at the prompt.
To connect to MySQL I used `sudo /usr/bin/mysql -u root -p` and gave the vagrant password for root at the prompt.
When we have connected let's first make sure we have our user created called devops. `select user, host from mysql.user;`
When we have connected, let's first make sure we have our user called devops created: `select user, host from mysql.user;`

![](Images/Day66_config2.png)

@ -337,9 +337,9 @@ Now we can issue the `SHOW DATABASES;` command to see our new database that has

![](Images/Day66_config3.png)

I actually used root to connect but we could also now log in with our devops account in the same way using `sudo /usr/bin/mysql -u devops -p` but the password here is DevOps90.
I used root to connect, but we could also now log in with our devops account in the same way using `sudo /usr/bin/mysql -u devops -p`, but the password here is DevOps90.
One thing I have found that in our `setup_mysql.yml` I had to add the line `login_unix_socket: /var/run/mysqld/mysqld.sock` in order to successfully connect to my db01 mysql instance and now everytime I run this it reports a change when creating the user, any suggestions would be greatly appreciated.
One thing I have found is that in our `setup_mysql.yml` I had to add the line `login_unix_socket: /var/run/mysqld/mysqld.sock` to successfully connect to my db01 MySQL instance, and now every time I run this it reports a change when creating the user; any suggestions would be greatly appreciated.
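
For context, a hedged sketch of the user-creation task with that line added; db_pass comes from the defaults shown earlier, while db_user is an assumed variable:

```Yaml
- name: "Create the database user"
  mysql_user:
    name: "{{ db_user }}" # assumed variable, e.g. devops
    password: "{{ db_pass }}"
    priv: "*.*:ALL"
    host: "%"
    login_unix_socket: /var/run/mysqld/mysqld.sock
    state: present
```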
## Resources
@ -31,7 +31,7 @@ If you are looking for an enterprise solution then you will be looking for the A
Both AWX and the Automation Controller bring the following features above everything else we have covered in this section thus far.
- User Interface
- Role Based Access Control
- Role-Based Access Control
- Workflows
- CI/CD integration
@ -43,7 +43,7 @@ We are going to take a look at deploying AWX within our minikube Kubernetes envi
AWX does not need to be deployed to a Kubernetes cluster; the [GitHub repository](https://github.com/ansible/awx) for AWX from Ansible will give you that detail. However, starting in version 18.0, the AWX Operator is the preferred way to install AWX.
First of all we need a minikube cluster. We can do this if you followed along during the Kubernetes section by creating a new minikube cluster with the `minikube start --cpus=4 --memory=6g --addons=ingress` command.
First of all, we need a minikube cluster. We can do this if you followed along during the Kubernetes section by creating a new minikube cluster with the `minikube start --cpus=4 --memory=6g --addons=ingress` command.

![](Images/Day67_config1.png)

@ -51,7 +51,7 @@ The official [Ansible AWX Operator](https://github.com/ansible/awx-operator) can
I forked the repo above and then ran `git clone https://github.com/MichaelCade/awx-operator.git`. My advice is you do the same, and do not use my repository as I might change things or it might not be there.
In the cloned repository you will find a awx-demo.yml file we need to change `NodePort` for `ClusterIP` as per below:
In the cloned repository you will find an awx-demo.yml file; we need to change `NodePort` to `ClusterIP` as per below:
```Yaml
---
```

@ -71,7 +71,7 @@ In checking we have our new namespace and we have our awx-operator-controller po


Within the cloned repository you will find a file called awx-demo.yml we now want to deploy this into our Kubernetes cluser and our awx namespace. `kubectl create -f awx-demo.yml -n awx`
Within the cloned repository you will find a file called awx-demo.yml we now want to deploy this into our Kubernetes cluster and our awx namespace. `kubectl create -f awx-demo.yml -n awx`
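
For reference, the relevant part of awx-demo.yml after swapping `NodePort` for `ClusterIP` looks something like this; it is a sketch based on the operator's sample at the time, so treat the exact fields as an assumption:

```Yaml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  service_type: ClusterIP
```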

@ -93,19 +93,19 @@ The username by default is admin, to get the password we can run the following c

Obviously this then gives you a UI to manage your playbook and configuration management tasks in a centralised location, it also allows you as a team to work together vs what we have been doing so far here where we have been running from one ansible control station.
This then gives you a UI to manage your playbook and configuration management tasks in a centralised location; it also allows you as a team to work together, vs what we have been doing so far here where we have been running from one Ansible control station.
This is another one of those areas where you could probably go and spend another length of time walking through the capabilities within this tool.
I will call out a great resource from Jeff Geerling, which goes into more detail on using Ansible AWX. [Ansible 101 - Episode 10 - Ansible Tower and AWX](https://www.youtube.com/watch?v=iKmY4jEiy_A&t=752s)
In this video he also goes into great detail on the differences between Automation Controller (Previously Ansible Tower) and Ansible AWX (Free and Open Source).
In this video, he also goes into great detail on the differences between Automation Controller (Previously Ansible Tower) and Ansible AWX (Free and Open Source).
### Ansible Vault
`ansible-vault` allows us to encrypt and decrypt Ansible data files. Throughout this section we have skipped over and we have put some of our sensitive information in plain text.
`ansible-vault` allows us to encrypt and decrypt Ansible data files. Throughout this section, we have skipped over and put some of our sensitive information in plain text.
Built in to the Ansible binary is `ansible-vault` which allows us to mask away this sensitive information.
Built into the Ansible binary is `ansible-vault` which allows us to mask away this sensitive information.
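
The basic workflow is to encrypt the variables file and then supply the vault password when running the playbook; for example, against the scenario files used in this section:

```
ansible-vault encrypt group_vars/all/common_variables.yml
ansible-playbook playbook7.yml --ask-vault-pass
```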

@ -121,7 +121,7 @@ Now, we have already used `ansible-galaxy` to create some of our roles and file
### Ansible Testing
- [Ansible Molecule](https://molecule.readthedocs.io/en/latest/) - Molecule project is designed to aid in the development and testing of Ansible roles
- [Ansible Molecule](https://molecule.readthedocs.io/en/latest/) - The Molecule project is designed to aid in the development and testing of Ansible roles
- [Ansible Lint](https://ansible-lint.readthedocs.io/en/latest/) - CLI tool for linting playbooks, roles and collections
@ -16,7 +16,7 @@ It bridges the gap between development and operations by automating the build, t
We covered a lot of this continuous mantra in the opening section of the challenge. But to reiterate:
Continuous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliably. Automated build and test workflow steps triggered by Continuous Integration ensures that code changes being merged into the repository are reliable.
Continuous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliably. Automated build and test workflow steps triggered by Continuous Integration ensure that code changes being merged into the repository are reliable.
That code / Application is then delivered quickly and seamlessly as part of the Continuous Deployment process.

@ -30,7 +30,7 @@ The ability for developers to make small impactful changes regular means we get

### Ok, so what does this mean?
On [Day 5](day05.md) we covered a lot of the theory behind DevOps and as already mentioned here already that the CI/CD Pipeline is the backbone of the modern DevOps environment.
On [Day 5](day05.md) we covered a lot of the theory behind DevOps and, as already mentioned, the CI/CD pipeline is the backbone of the modern DevOps environment.

![](Images/Day70_CICD1.png)

@ -48,7 +48,7 @@ The steps in the cycle are, developers write the **code** then it gets **built**
CI is a development practice that requires developers to integrate code into a shared repository several times a day.
When the code is written and pushed to a repository like github or gitlab that's where the magic begins.
When the code is written and pushed to a repository like GitHub or GitLab, that's where the magic begins.

![](Images/Day70_CICD2.png)

@ -58,25 +58,25 @@ The code is verified by an automated build which allows teams or the project own
From there the code is analysed and given a series of automated tests; three examples are:
- Unit testing this tests the individual units of the source code
- Validation testing this makes sure that the software satisfies or fits the intended use
- Format testing this checks for syntax and other formatting errors
- Unit testing tests the individual units of the source code
- Validation testing makes sure that the software satisfies or fits the intended use
- Format testing checks for syntax and other formatting errors
These tests are created as a workflow and are then run every time you push to the master branch, so pretty much every major development team has some sort of CI/CD workflow. Remember, on a development team the new code could be coming in from teams all over the world at different times of the day, from developers working on all sorts of different projects, so it's more efficient to build an automated workflow of tests that make sure that everyone is on the same page before the code is accepted. It would take much longer for a human to do this each time.
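
As a hedged sketch of such a push-triggered workflow (GitHub Actions syntax, with a hypothetical job and test command), it could be as simple as:

```Yaml
name: ci
on:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run unit tests
        run: make test # hypothetical entry point for the test suite
```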

![](Images/Day70_CICD3.png)

Once we have our tests complete and they are successful then we can compile and send to our repository. For example I am using Docker Hub but this could be anywhere that then gets leveraged for the CD aspect of the pipeline.
Once we have our tests complete and they are successful then we can compile and send them to our repository. For example, I am using Docker Hub but this could be anywhere that then gets leveraged for the CD aspect of the pipeline.

![](Images/Day70_CICD4.png)

So this process is obviously very much down to the software development process, we are creating our application, adding, fixing bugs etc and then updating our source control and versioning that whilst also testing.
So this process is very much down to the software development process, we are creating our application, adding, fixing bugs etc and then updating our source control and versioning that whilst also testing.
Moving onto the next phase is the CD element which in fact more and more is what we generally see from any off the shelf software, I would argue that we will see a trend that if we get our software from a vendor such as Oracle or Microsoft we will consume that from a Docker Hub type repository and then we would use our CD pipelines to deploy that into our environments.
Moving on to the next phase is the CD element, which more and more is what we generally see from any off-the-shelf software. I would argue that we will see a trend that if we get our software from a vendor such as Oracle or Microsoft, we will consume that from a Docker Hub type repository and then we would use our CD pipelines to deploy that into our environments.
### CD
Now we have our tested version of our code and we are ready to deploy out into the wild and like I say, the Software vendor will run through this stage but I strongly believe this is how we will all deploy the off the shelf software we require in the future.
Now we have our tested version of our code and we are ready to deploy it out into the wild and, as I say, the software vendor will run through this stage, but I strongly believe this is how we will all deploy the off-the-shelf software we require in the future.
It is now time to release our code into an environment. This is going to include Production but also likely other environments as well such as staging.
@ -84,7 +84,7 @@ It is now time to release our code into an environment. This is going to include
Our next step at least on Day 1 of v1 of the software deployment is we need to make sure we are pulling the correct code base to the correct environment. This could be pulling elements from the software repository (DockerHub) but it is more than likely that we are also pulling additional configuration from maybe another code repository, the configuration for the application for example. In the diagram below we are pulling the latest release of the software from DockerHub and then we are releasing this to our environments whilst possibly picking up configuration from a Git repository. Our CD tool is performing this and pushing everything to our environment.
It is most likely that this is not done at the same time. i.e we would go to a staging environment run against this with our own configuration make sure things are correct and this could be a manual step for testing or it could again be automated (lets go with automated) before then allowing this code to be deployed into production.
It is most likely that this is not done at the same time, i.e. we would go to a staging environment, run against this with our own configuration to make sure things are correct, and this could be a manual step for testing or it could again be automated (let's go with automated), before then allowing this code to be deployed into production.

![](Images/Day70_CICD5.png)

@ -92,17 +92,17 @@ Then after this when v2 of the application comes out we rinse and repeat the ste
### Why use CI/CD?
I think we have probably covered the benefits a number of time but it is because it automates things that otherwise would have to be done manually it finds small problems before it sneaks into the main codebase, you can probably imagine that if you push bad code out to your customers then you're going to have a bad time!
I think we have probably covered the benefits several times, but it is because it automates things that otherwise would have to be done manually, and it finds small problems before they sneak into the main codebase. You can probably imagine that if you push bad code out to your customers then you're going to have a bad time!
It also helps to prevent something that we call technical debt, which is the idea that, since the main code repos are constantly being built upon over time, a shortcut fix taken on day one becomes an exponentially more expensive fix years later, because by then that band-aid of a fix would be so deeply intertwined and baked into all the code bases and logic.
### Tooling
Like with other sections we are going to get hands on with some of the tools that achieve the CI/CD pipeline process.
Like with other sections we are going to get hands-on with some of the tools that achieve the CI/CD pipeline process.
I think it is also important to note that not all tools have to do both CI and CD, We will take a look at ArgoCD which you guessed it is great at the CD element of deploying our software to a Kubernetes cluster. But something like Jenkins can work across many different platforms.
I think it is also important to note that not all tools have to do both CI and CD. We will take a look at ArgoCD which, you guessed it, is great at the CD element of deploying our software to a Kubernetes cluster. But something like Jenkins can work across many different platforms.
My plan is to look at the following:
I plan to look at the following:
- Jenkins
- ArgoCD