Spelling & Grammar Day 71-80

Michael Cade 2022-06-26 21:51:39 +01:00
parent 6963d26033
commit bf051deef0
10 changed files with 147 additions and 147 deletions


@ -2,7 +2,7 @@
title: '#90DaysOfDevOps - What is Jenkins? - Day 71'
published: false
description: 90DaysOfDevOps - What is Jenkins?
tags: 'devops, 90daysofdevops, learning'
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048745
@ -10,9 +10,9 @@ id: 1048745
## What is Jenkins?
Jenkins is a continuous integration tool that allows continuous development, test and deployment of newly created code.
Jenkins is a continuous integration tool that allows continuous development, testing and deployment of newly created code.
There are two ways we can achieve this with either nightly builds or continuous development. The first option is that our developers are developing throughout the day on their tasks and come the end of the set day they push their changes to the source code repository. Then during the night we run our unit tests and build of the software. This could be deemed as the old way to integrate all code.
There are two ways we can achieve this: either nightly builds or continuous development. The first option is that our developers are developing throughout the day on their tasks, and at the end of the set day they push their changes to the source code repository. Then during the night we run our unit tests and build the software. This could be deemed the old way to integrate all code.
![](Images/Day71_CICD1.png)
@ -20,28 +20,28 @@ The other option and the preferred way is that our developers are still committi
![](Images/Day71_CICD2.png)
The above methods means that with distributed developers across the world we don't have a set time each day where we have to stop committing our code changes. This is where Jenkins comes in to act as that CI server to control those tests and build processes.
The above methods mean that with distributed developers across the world we don't have a set time each day where we have to stop committing our code changes. This is where Jenkins comes in to act as that CI server to control those tests and build processes.
![](Images/Day71_CICD3.png)
I know we are talking about Jenkins here but I also want to add a few more to maybe look into later on down the line to get an understanding why I am seeing Jenkins as the overall most popular, why is that and what can the others do over Jenkins.
I know we are talking about Jenkins here but I also want to add a few more to maybe look into later on down the line, to get an understanding of why Jenkins seems to be the most popular overall and what the others can offer over Jenkins.
- TravisCI - A hosted, distributed continuous integration service used to build and test software projects hosted on GitHub.
- Bamboo - Can run multiple builds in parallel for faster compilation, built in functionality to connect with repositories and has build tasks for Ant, Maven.
- Bamboo - Can run multiple builds in parallel for faster compilation, has built-in functionality to connect with repositories and has build tasks for Ant and Maven.
- Buildbot - is an open-source framework for automating software build, test and release processes. It is written in Python and supports distributed, parallel execution of jobs across multiple platforms.
- Apache Gump - Specific to Java projects, designed with the aim to build and test those Java projects every night. ensures that all projects are compatible at both API and functionality level.
- Apache Gump - Specific to Java projects, designed to build and test those Java projects every night, ensuring that all projects are compatible at both API and functionality levels.
Because we are now going to focus on Jenkins - Jenkins is again open source like all of the above tools and is an automation server written in Java. It is used to automate the software development process via continuous integration adn facilitates continuous delivery.
Because we are now going to focus on Jenkins - Jenkins is again open source like all of the above tools and is an automation server written in Java. It is used to automate the software development process via continuous integration and facilitates continuous delivery.
### Features of Jenkins
As you can probably expect Jenkins has a lot of features spanning a lot of areas.
**Easy Installation** - Jenkins is a self contained java based program ready to run with packages for Windows, macOS and Linux operating systems.
**Easy Installation** - Jenkins is a self-contained Java-based program ready to run with packages for Windows, macOS and Linux operating systems.
**Easy Configuration** - Easy setup and configured via a web interface which includes error checks and built in help.
**Easy Configuration** - Easy setup and configuration via a web interface which includes error checks and built-in help.
**Plug-ins** - Lots of plugins available in the Update Centre and integrates with many tools in the CI / CD toolchain.
**Plug-ins** - Lots of plugins are available in the Update Centre and integrate with many tools in the CI / CD toolchain.
**Extensible** - In addition to the Plug-Ins available, Jenkins can be extended via its plugin architecture, which provides nearly infinite options for what it can be used for.
@ -69,27 +69,27 @@ Step 1 - Developers commit changes to the source code repository.
Step 2 - Jenkins checks the repository at regular intervals and pulls any new code.
Step 3 - A build server then builds the code into an executable, in this example we are using maven as a well known build server. Another area to cover.
Step 3 - A build server then builds the code into an executable; in this example, we are using Maven as a well-known build server. Another area to cover.
Step 4 - If the build fails then feedback is sent back to the developers.
Step 5 - Jenkins then deploys the build app to the test server, in this example we are using selenium as a well known test server. Another area to cover.
Step 5 - Jenkins then deploys the built app to the test server; in this example, we are using Selenium as a well-known test server. Another area to cover.
Step 6 - If the test fails then feedback is passed to the developers.
Step 7 - If the tests are successful then we can release to production.
Step 7 - If the tests are successful then we can release them to production.
This cycle is continuous, this is what allows applications to be updated in minutes vs hours, days, months, years!
This cycle is continuous, this is what allows applications to be updated in minutes vs hours, days, months, and years!
![](Images/Day71_CICD5.png)
There is a lot more to the architecture of Jenkins if you require it, they have a master-slave capability, which enables a master to distribute the tasks to slave jenkins environment.
There is a lot more to the architecture of Jenkins if you require it, they have a master-slave capability, which enables a master to distribute the tasks to the slave Jenkins environment.
For reference, with Jenkins being open source there are going to be lots of enterprises that require support; CloudBees is the enterprise version of Jenkins that brings support and possibly other functionality for the paying enterprise customer.
An example of this with a customer is Bosch; you can find the Bosch case study [here](https://assets.ctfassets.net/vtn4rfaw6n2j/case-study-boschpdf/40a0b23c61992ed3ee414ae0a55b6777/case-study-bosch.pdf)
I am going to be looking for a step by step example of an application that we can use to walkthrough using Jenkins and then also use this with some other tools.
I am going to be looking for a step-by-step example of an application that we can use to walk through using Jenkins and then also use this with some other tools.
## Resources
@ -102,4 +102,4 @@ I am going to be looking for a step by step example of an application that we ca
- [GitHub Actions](https://www.youtube.com/watch?v=R8_veQiYBjI)
- [GitHub Actions CI/CD](https://www.youtube.com/watch?v=mFFXuXjVgkU)
See you on [Day 72](day72.md)
See you on [Day 72](day72.md)


@ -1,16 +1,16 @@
---
title: '#90DaysOfDevOps - Getting hands on with Jenkins - Day 72'
title: '#90DaysOfDevOps - Getting hands-on with Jenkins - Day 72'
published: false
description: 90DaysOfDevOps - Getting hands on with Jenkins
tags: 'devops, 90daysofdevops, learning'
description: 90DaysOfDevOps - Getting hands-on with Jenkins
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048829
---
## Getting hands on with Jenkins
## Getting hands-on with Jenkins
The plan today is to get some hands on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use.
The plan today is to get some hands-on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use.
### What is a pipeline?
@ -20,11 +20,11 @@ Before we start we need to know what is a pipeline when it comes to CI, and we a
We want to take the processes or steps above and we want to automate them to get an outcome eventually meaning that we have a deployed application that we can then ship to our customers, end users etc.
This automated process enables us to have a version control through to our users and customers. Every change, feature enhancement, bug fix etc goes through this automated process confirming that everything is fine without too much manual intervention to ensure our code is good.
This automated process enables us to have version control through to our users and customers. Every change, feature enhancement, bug fix etc goes through this automated process confirming that everything is fine without too much manual intervention to ensure our code is good.
This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.
A jenkins pipeline, is written into a text file called a Jenkinsfile. Which itself should be committed to a source control repository. This is also known as Pipeline as code, we could also very much liken this to Infrastructure as code which we covered a few weeks back.
A Jenkins pipeline is written into a text file called a Jenkinsfile, which itself should be committed to a source control repository. This is also known as Pipeline as code; we could very much liken this to Infrastructure as code which we covered a few weeks back.
[Jenkins Pipeline Definition](https://www.jenkins.io/doc/book/pipeline/#ji-toolbar)
@ -32,7 +32,7 @@ A jenkins pipeline, is written into a text file called a Jenkinsfile. Which itse
I had some fun deploying Jenkins. You will notice from the [documentation](https://www.jenkins.io/doc/book/installing/) that there are many options for where you can install Jenkins.
Given that I have minikube on hand and we have used this a number of times I wanted to use this for this task also. (also it is free!) Although the steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) had me hitting a wall and not getting things up and running, you can compare the two when I document my steps here.
Given that I have minikube on hand and we have used this several times I wanted to use this for this task also. (also it is free!) Although the steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) had me hitting a wall and not getting things up and running, you can compare the two when I document my steps here.
The first step is to get our minikube cluster up and running, we can simply do this with the `minikube start` command.
@ -42,7 +42,7 @@ I have added a folder with all the YAML configuration and values that can be fou
![](Images/Day72_CICD2.png)
We will be using Helm to deploy jenkins into our cluster, we covered helm in the Kubernetes section. We firstly need to add the jenkinsci helm repository `helm repo add jenkinsci https://charts.jenkins.io` then update our charts `helm repo update`.
We will be using Helm to deploy Jenkins into our cluster, we covered helm in the Kubernetes section. We first need to add the jenkinsci helm repository `helm repo add jenkinsci https://charts.jenkins.io` then update our charts `helm repo update`.
![](Images/Day72_CICD3.png)
@ -50,25 +50,25 @@ The idea behind Jenkins is that it is going to save state for its pipelines, you
![](Images/Day72_CICD4.png)
We also need a service account which we can create using this yaml file and command. `kubectl apply -f jenkins-sa.yml`
We also need a service account which we can create using this YAML file and command. `kubectl apply -f jenkins-sa.yml`
![](Images/Day72_CICD5.png)
At this stage we are good to deploy using the helm chart, we will firstly define our chart using `chart=jenkinsci/jenkins` and then we will deploy using this command where the jenkins-values.yml contain the persistence and service accounts that we previously deployed to our cluster. `helm install jenkins -n jenkins -f jenkins-values.yml $chart`
At this stage we are good to deploy using the helm chart, we will first define our chart using `chart=jenkinsci/jenkins` and then we will deploy using this command where the jenkins-values.yml contains the persistence and service accounts that we previously deployed to our cluster. `helm install jenkins -n jenkins -f jenkins-values.yml $chart`
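Pulled together, the deployment steps above look roughly like this; the namespace creation is an assumption on my part, the other commands are the ones quoted above.

```Shell
# assumption: create the namespace first if it does not already exist
kubectl create namespace jenkins

# add the Jenkins chart repository and refresh the local chart index
helm repo add jenkinsci https://charts.jenkins.io
helm repo update

# apply the service account YAML from the repository folder mentioned above
kubectl apply -f jenkins-sa.yml

# define the chart and install it with the values file covering persistence and the service account
chart=jenkinsci/jenkins
helm install jenkins -n jenkins -f jenkins-values.yml $chart
```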
![](Images/Day72_CICD6.png)
At this stage our pods will be pulling the image but the pod will not have access to the storage so no configuration can be started in terms of getting Jenkins up and running.
At this stage, our pods will be pulling the image but the pod will not have access to the storage so no configuration can be started in terms of getting Jenkins up and running.
This is where the documentation did not help me massively understand what needed to happen. But we can see that we have no permission to start our jenkins install.
This is where the documentation did not help me massively understand what needed to happen. But we can see that we have no permission to start our Jenkins install.
![](Images/Day72_CICD7.png)
In order to fix the above or resolve, we need to make sure we provide access or the right permission in order for our jenkins pods to be able to write to this location that we have suggested. We can do this by using the `minikube ssh` which will put us into the minikube docker container we are running on, and then using `sudo chown -R 1000:1000 /data/jenkins-volume` we can ensure we have permissions set on our data volume.
To fix the above or resolve it, we need to make sure we provide access or the right permission for our Jenkins pods to be able to write to this location that we have suggested. We can do this by using the `minikube ssh` which will put us into the minikube docker container we are running on, and then using `sudo chown -R 1000:1000 /data/jenkins-volume` we can ensure we have permissions set on our data volume.
![](Images/Day72_CICD8.png)
The above process should fix the pods, however if not you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point you should have 2/2 running pods called jenkins-0.
The above process should fix the pods, however, if not you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point, you should have 2/2 running pods called jenkins-0.
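As a consolidated sketch, the permission fix and pod refresh described above look like this:

```Shell
# ssh into the minikube node (the docker container minikube is running as)
minikube ssh

# inside the node: give uid/gid 1000 (the Jenkins user) ownership of the data volume
sudo chown -R 1000:1000 /data/jenkins-volume

# back on the host: force the pod to be recreated if it is still stuck
kubectl delete pod jenkins-0 -n jenkins
```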
![](Images/Day72_CICD9.png)
@ -80,7 +80,7 @@ Now open a new terminal as we are going to use the `port-forward` command to all
![](Images/Day72_CICD11.png)
We should now be able to open a browser and login to `http://localhost:8080` and authenticate with the username: admin and password we gathered in a previous step.
We should now be able to open a browser and log in to `http://localhost:8080` and authenticate with the username: admin and password we gathered in a previous step.
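For reference, a sketch of the two commands involved; the secret and key names here are the chart defaults and may differ between chart versions.

```Shell
# retrieve the generated admin password from the chart-created secret (names may vary by chart version)
kubectl get secret jenkins -n jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 -d && echo

# forward the Jenkins service to localhost:8080 in its own terminal
kubectl port-forward svc/jenkins -n jenkins 8080:8080
```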
![](Images/Day72_CICD12.png)
@ -92,13 +92,13 @@ From here, I would suggest heading to "Manage Jenkins" and you will see "Manage
![](Images/Day72_CICD14.png)
If you want to go even further and automate the deployment of Jenkins using a shell script this great repository was shared with me on twitter [mehyedes/nodejs-k8s](https://github.com/mehyedes/nodejs-k8s/blob/main/docs/automated-setup.md)
If you want to go even further and automate the deployment of Jenkins using a shell script this great repository was shared with me on Twitter [mehyedes/nodejs-k8s](https://github.com/mehyedes/nodejs-k8s/blob/main/docs/automated-setup.md)
### Jenkinsfile
Now we have Jenkins deployed in our Kubernetes cluster, we can now go back and think about this Jenkinsfile.
Every Jenkinsfile will likely start like this, Which is firstly where you would define your steps of your pipeline, in this instance you have Build > Test > Deploy. But we are not really doing anything other than using the `echo` command to call out the specific stages.
Every Jenkinsfile will likely start like this, which is where you would first define the steps of your pipeline; in this instance you have Build > Test > Deploy. But we are not doing anything other than using the `echo` command to call out the specific stages.
```
@ -132,7 +132,7 @@ In our Jenkins dashboard, select "New Item" give the item a name, I am going to
![](Images/Day72_CICD15.png)
Hit Ok and you will then have the tabs (General, Build Triggers, Advanced Project Options and Pipeline) for a simple test we are only interested in Pipeline. Under Pipeline you have the ability to add a script, we can copy and paste the above script into the box.
Hit OK and you will then have the tabs (General, Build Triggers, Advanced Project Options and Pipeline). For a simple test we are only interested in Pipeline. Under Pipeline you can add a script, so we can copy and paste the above script into the box.
As we said above this is not going to do much but it will show us the stages of our Build > Test > Deploy
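For reference, a minimal sketch of such an echo-only pipeline, assuming the standard declarative syntax:

```
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // nothing real happens yet, we just call out the stage
                echo 'Building the application'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing the application'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying the application'
            }
        }
    }
}
```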
@ -146,9 +146,9 @@ We should also open a terminal and run the `kubectl get pods -n jenkins` to see
![](Images/Day72_CICD18.png)
Ok, very simple stuff but we can now see that our Jenkins deployment and installation is working correctly and we can start to see the building blocks of the CI pipeline here.
Ok, very simple stuff but we can now see that our Jenkins deployment and installation are working correctly and we can start to see the building blocks of the CI pipeline here.
In the next section we will be building a Jenkins Pipeline.
In the next section, we will be building a Jenkins Pipeline.
## Resources
@ -161,4 +161,4 @@ In the next section we will be building a Jenkins Pipeline.
- [GitHub Actions](https://www.youtube.com/watch?v=R8_veQiYBjI)
- [GitHub Actions CI/CD](https://www.youtube.com/watch?v=mFFXuXjVgkU)
See you on [Day 73](day73.md)
See you on [Day 73](day73.md)


@ -2,7 +2,7 @@
title: '#90DaysOfDevOps - Building a Jenkins Pipeline - Day 73'
published: false
description: 90DaysOfDevOps - Building a Jenkins Pipeline
tags: 'devops, 90daysofdevops, learning'
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048766
@ -10,7 +10,7 @@ id: 1048766
## Building a Jenkins Pipeline
In the last section we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline, that didn't do much at all other than echo out the stages of a Pipeline.
In the last section, we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline that didn't do much at all other than echo out the stages of a Pipeline.
You might have also seen that there are some example scripts available for us to run in the Jenkins Pipeline creation.
@ -60,7 +60,7 @@ spec:
}
```
You can see below the outcome of what happens when this Pipeline is ran.
You can see below the outcome of what happens when this Pipeline is run.
![](Images/Day73_CICD2.png)
@ -68,13 +68,13 @@ You can see below the outcome of what happens when this Pipeline is ran.
#### Goals
- Create a simple app and store in GitHub public repository [https://github.com/scriptcamp/kubernetes-kaniko.git](https://github.com/scriptcamp/kubernetes-kaniko.git)
- Create a simple app and store it in GitHub public repository [https://github.com/scriptcamp/kubernetes-kaniko.git](https://github.com/scriptcamp/kubernetes-kaniko.git)
- Use Jenkins to build our docker Container image and push to docker hub. (for this we will use a private repository)
- Use Jenkins to build our Docker container image and push it to DockerHub. (for this we will use a private repository)
To achieve this in our Kubernetes cluster running in or using Minikube we need to use something called [Kaniko](https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-a-kubernetes-cluster) It general though if you are using Jenkins in a real Kubernetes cluster or you are running it on a server then you can specify an agent which will give you the ability to perform the docker build commands and upload that to DockerHub.
To achieve this in our Kubernetes cluster running in or using Minikube we need to use something called [Kaniko](https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-a-kubernetes-cluster). In general though, if you are using Jenkins in a real Kubernetes cluster or you are running it on a server then you can specify an agent which will give you the ability to perform the docker build commands and upload that to DockerHub.
With the above in mind we are also going to deploy a secret into Kubernetes with our GitHub credentials.
With the above in mind, we are also going to deploy a secret into Kubernetes with our DockerHub credentials.
```Shell
kubectl create secret docker-registry dockercred \
@ -84,11 +84,11 @@ kubectl create secret docker-registry dockercred \
--docker-email=<dockerhub-email>
```
In fact I want to share another great resource from [DevOpsCube.com](https://devopscube.com/build-docker-image-kubernetes-pod/) running through much of what we will cover here.
I want to share another great resource from [DevOpsCube.com](https://devopscube.com/build-docker-image-kubernetes-pod/) running through much of what we will cover here.
### Adding credentials to Jenkins
However if you were on a Jenkins system unlike ours then you will likely want to define your credentials within Jenkins and then use them multiple times within your Pipelines and configurations. We can refer to these credentials in the Pipelines using the ID we determine on creation. I went ahead and stepped through and created a user entry for DockerHub and GitHub.
However, if you were on a Jenkins system unlike ours then you will likely want to define your credentials within Jenkins and then use them multiple times within your Pipelines and configurations. We can refer to these credentials in the Pipelines using the ID we determine on creation. I went ahead and stepped through and created a user entry for DockerHub and GitHub.
First of all select "Manage Jenkins" and then "Manage Credentials"
@ -102,7 +102,7 @@ Now select Global Credentials (Unrestricted)
![](Images/Day73_CICD5.png)
Then in the top left you have Add Credentials
Then in the top left, you have Add Credentials
![](Images/Day73_CICD6.png)
@ -110,13 +110,13 @@ Fill in your details for your account and then select OK, remember the ID is wha
![](Images/Day73_CICD7.png)
For GitHub you should use a [Personal Access Token](https://vzilla.co.uk/vzilla-blog/creating-updating-your-github-personal-access-token)
For GitHub, you should use a [Personal Access Token](https://vzilla.co.uk/vzilla-blog/creating-updating-your-github-personal-access-token)
Personally I did not find this process very intuitive to create these accounts, so even though we are not using I wanted to share the process as it is not clear from the UI.
I did not find this process very intuitive for creating these accounts, so even though we are not using them I wanted to share the process as it is not clear from the UI.
### Building the pipeline
We have our DockerHub credentials deployed to as a secret into our Kubernetes cluster which we will call upon for our docker deploy to DockerHub stage in our pipeline.
We have our DockerHub credentials deployed as a secret into our Kubernetes cluster which we will call upon for our docker deploy to the DockerHub stage in our pipeline.
The pipeline script is what you can see below; this could in turn become our Jenkinsfile located in our GitHub repository, which you can also see is listed in the Get the project stage of the pipeline.
@ -192,25 +192,25 @@ We are only interested in the Pipeline tab at the end.
![](Images/Day73_CICD11.png)
In the Pipeline definition we are going to copy and paste the pipeline script that we have above into the Script section and hit save.
In the Pipeline definition, we are going to copy and paste the pipeline script that we have above into the Script section and hit save.
![](Images/Day73_CICD12.png)
Next we will select the "Build Now" option on the left side of the page.
Next, we will select the "Build Now" option on the left side of the page.
![](Images/Day73_CICD13.png)
You should now wait a short amount of time, less than a minute really. and you should see under status the stages that we defined above in our script.
You should now wait a short amount of time, less than a minute really, and you should see under status the stages that we defined above in our script.
![](Images/Day73_CICD14.png)
More importantly if we now head on over to our DockerHub and check that we have a new build.
More importantly, we can now head on over to our DockerHub and check that we have a new build.
![](Images/Day73_CICD15.png)
This overall did take a while to figure out but I wanted to stick with it for the purpose of getting hands on and working through a scenario that anyone can run through using minikube and access to github and dockerhub.
Overall this did take a while to figure out, but I wanted to stick with it to get hands-on and work through a scenario that anyone can run through using minikube and access to GitHub and DockerHub.
The DockerHub repository I used for this demo was a private one. But in the next section I want to advance some of these stages and actually have them do something vs just printing out `pwd` and actually run some tests and build stages.
The DockerHub repository I used for this demo was a private one. But in the next section, I want to advance some of these stages and have them do something vs just printing out `pwd` and running some tests and build stages.
## Resources
@ -223,4 +223,4 @@ The DockerHub repository I used for this demo was a private one. But in the next
- [GitHub Actions](https://www.youtube.com/watch?v=R8_veQiYBjI)
- [GitHub Actions CI/CD](https://www.youtube.com/watch?v=mFFXuXjVgkU)
See you on [Day 74](day74.md)
See you on [Day 74](day74.md)


@ -10,16 +10,16 @@ id: 1048744
## Hello World - Jenkinsfile App Pipeline
In the last section we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository.
In the last section, we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository.
In this section we want to take this one step further and we want to achieve the following with our simple application.
In this section, we want to take this one step further and we want to achieve the following with our simple application.
### Objective
- Dockerfile (Hello World)
- Jenkinsfile
- Jenkins Pipeline to trigger when GitHub Repository is updated
- Use GitHub Repository as source.
- Use GitHub Repository as the source.
- Run - Clone/Get Repository, Build, Test, Deploy Stages
- Deploy to DockerHub with incremental version numbers
- Stretch Goal to deploy to our Kubernetes Cluster (This will involve another job and manifest repository using GitHub credentials)
@ -34,9 +34,9 @@ With the above this is what we were using as our source in our Pipeline, now we
![](Images/Day74_CICD2.png)
Now back in our Jenkins dashboard, we are going to create a new pipeline but now instead of pasting our script we are going to use "Pipeline script from SCM" We are then going to use the configuration options below.
Now back in our Jenkins dashboard, we are going to create a new pipeline, but now instead of pasting our script, we are going to use "Pipeline script from SCM". We are then going to use the configuration options below.
For reference we are going to use `https://github.com/MichaelCade/Jenkins-HelloWorld.git` as the repository URL.
For reference, we are going to use `https://github.com/MichaelCade/Jenkins-HelloWorld.git` as the repository URL.
![](Images/Day74_CICD3.png)
@ -48,7 +48,7 @@ This is a big consideration because if you are using costly cloud resources to h
![](Images/Day74_CICD4.png)
One thing I have changed since yesterdays session is I want to now upload my image to a public repository which in this case would be michaelcade1\90DaysOfDevOps, my Jenkinsfile has this change already. And from previous sections I have removed any existing demo container images.
One thing I have changed since yesterday's session is I want to now upload my image to a public repository which in this case would be michaelcade1\90DaysOfDevOps, my Jenkinsfile has this change already. And from the previous sections, I have removed any existing demo container images.
![](Images/Day74_CICD5.png)
@ -56,15 +56,15 @@ Going backwards here, we created our Pipeline and then as previously shown we ad
![](Images/Day74_CICD6.png)
At this stage our Pipeline has never ran and your stage view will look something like this.
At this stage, our Pipeline has never run and your stage view will look something like this.
![](Images/Day74_CICD7.png)
Now lets trigger the "Build Now" button. and our stage view will display our stages.
Now let's trigger the "Build Now" button, and our stage view will display our stages.
![](Images/Day74_CICD8.png)
If we then head over to our DockerHub repository, we should have 2 new Docker images. We should have a Build ID of 1 and a latest because every build that we create based on the "Upload to DockerHub" is we send a version using the Jenkins Build_ID environment variable and we also issue a latest.
If we then head over to our DockerHub repository, we should have 2 new Docker images. We should have a Build ID of 1 and a latest, because for every build that we create based on the "Upload to DockerHub" stage we send a version using the Jenkins Build_ID environment variable and we also issue a latest.
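To illustrate that tagging scheme only (the actual pipeline in the repository may build the image differently, for example with Kaniko as in the previous day, and the image name here is just an example), a plain-docker sketch would look like this:

```Shell
# push the same image twice: once tagged with the Jenkins BUILD_ID, once as latest
docker build -t michaelcade1/90daysofdevops:${BUILD_ID} .
docker tag michaelcade1/90daysofdevops:${BUILD_ID} michaelcade1/90daysofdevops:latest
docker push michaelcade1/90daysofdevops:${BUILD_ID}
docker push michaelcade1/90daysofdevops:latest
```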
![](Images/Day74_CICD9.png)
@ -72,7 +72,7 @@ Let's go and create an update to our index.html file in our GitHub repository as
![](Images/Day74_CICD10.png)
If we head back to Jenkins and select "Build Now" again. We will see our #2 build is successful.
If we head back to Jenkins and select "Build Now" again, we will see that our #2 build is successful.
![](Images/Day74_CICD11.png)
@ -80,7 +80,7 @@ Then a quick look at DockerHub, we can see that we have our tagged version 2 and
![](Images/Day74_CICD12.png)
It is worth noting here that I have added into my Kubernetes cluster a secret that enables my access and authentication to push my docker builds into DockerHub. If you are following along you should repeat this process for your account, and also make a change to the Jenkinsfile that is associated to my repository and account.
It is worth noting here that I have added into my Kubernetes cluster a secret that enables my access and authentication to push my docker builds into DockerHub. If you are following along you should repeat this process for your account, and also make a change to the Jenkinsfile that is associated with my repository and account.
## Resources
@ -93,4 +93,4 @@ It is worth noting here that I have added into my Kubernetes cluster a secret th
- [GitHub Actions](https://www.youtube.com/watch?v=R8_veQiYBjI)
- [GitHub Actions CI/CD](https://www.youtube.com/watch?v=mFFXuXjVgkU)
See you on [Day 75](day75.md)
See you on [Day 75](day75.md)


@ -10,18 +10,18 @@ id: 1049070
## GitHub Actions Overview
In this section I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is where we will focus on in this session.
In this section, I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is what we will focus on in this session.
GitHub Actions is a CI/CD platform that allows us to build, test and deploy amongst other tasks our pipeline. It has the concept of workflows that build and test against a GitHub repository. You could also use GitHub Actions to drive other workflows based on events that happen within your repository.
GitHub Actions is a CI/CD platform that allows us to build, test and deploy amongst other tasks in our pipeline. It has the concept of workflows that build and test against a GitHub repository. You could also use GitHub Actions to drive other workflows based on events that happen within your repository.
### Workflows
Overall, in GitHub Actions our task is called a **Workflow**.
Overall, in GitHub Actions, our task is called a **Workflow**.
- A **workflow** is the configurable automated process.
- Defined as YAML files.
- Contain and run one or more **jobs**
- Will run when triggered by an **event** in your repository or can be ran manually
- Will run when triggered by an **event** in your repository or can be run manually
- You can have multiple workflows per repository
- A **workflow** will contain a **job** and then **steps** to achieve that **job**
- Within our **workflow** we will also have a **runner** on which our **workflow** runs.
@ -30,7 +30,7 @@ For example, you can have one **workflow** to build and test pull requests, anot
### Events
Events are a specific event in a repository that triggers the workflow to run.
An event is a specific activity in a repository that triggers the workflow to run.
### Jobs
@ -38,15 +38,15 @@ A job is a set of steps in the workflow that execute on a runner.
### Steps
Each step within the job can be a shell script that gets executed, or an action. Steps are executed in order and they are dependant on each other.
Each step within the job can be a shell script that gets executed or an action. Steps are executed in order and they are dependent on each other.
### Actions
A repeatable custom application used for frequently repeated tasks.
An action is a repeatable custom application used for frequently repeated tasks.
### Runners
A runner is a server that runs the workflow, each runner runs a single job at a time. GitHub Actions provides the ability to run Ubuntu Linux, Microsoft Windows, and macOS runners. You can also host your own on specific OS or hardware.
A runner is a server that runs the workflow, each runner runs a single job at a time. GitHub Actions provides the ability to run Ubuntu Linux, Microsoft Windows, and macOS runners. You can also host your own on a specific OS or hardware.
Below you can see how this looks, we have our event triggering our workflow > our workflow consists of two jobs > within our jobs we then have steps and then we have actions.
@ -54,9 +54,9 @@ Below you can see how this looks, we have our event triggering our workflow > ou
### YAML
Before we get going with a real use case lets take a quick look at the above image in the form of an example YAML file.
Before we get going with a real use case let's take a quick look at the above image in the form of an example YAML file.
I have added # to comment in where we can find the components of the YAML workflow.
I have added # comments to show where we can find the components of the YAML workflow.
```Yaml
#Workflow
@ -81,7 +81,7 @@ jobs:
### Getting Hands-On with GitHub Actions
I think there are a lot of options when it comes to GitHub Actions, yes it will satisfy your CI/CD needs when it comes to Build, Test, Deploying your code and the continued steps thereafter.
I think there are a lot of options when it comes to GitHub Actions; yes, it will satisfy your CI/CD needs when it comes to building, testing and deploying your code and the continued steps thereafter.
I can see lots of options and other automated tasks that we could use GitHub Actions for.
@ -89,7 +89,7 @@ I can see lots of options and other automated tasks that we could use GitHub Act
One option is making sure your code is clean and tidy within your repository. This will be our first example demo.
I am going to be using some example code linked in one of the resources for this section, we are going to use `github/super-linter` to check against our code.
I am going to be using some example code linked in one of the resources for this section; we are going to use `github/super-linter` to check against our code.
```Yaml
name: Super-Linter
@ -112,29 +112,29 @@ jobs:
```
**github/super-linter**
You can see from the above that for one of our steps we have an action called github/super-linter and this is referring to a step that has already been written by the community. You can find out more about this here [Super-Linter](https://github.com/github/super-linter)
You can see from the above that for one of our steps we have an action called `github/super-linter`, and this is referring to a step that has already been written by the community. You can find out more about this here: [Super-Linter](https://github.com/github/super-linter)
"This repository is for the GitHub Action to run a Super-Linter. It is a simple combination of various linters, written in bash, to help validate your source code."
Also in the code snippet above it mentions GITHUB_TOKEN so I was interested to find out why and what this does and needed for.
Also, in the code snippet above it mentions GITHUB_TOKEN, so I was interested to find out why, what this does and what it is needed for.
"NOTE: If you pass the Environment variable `GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}` in your workflow, then the GitHub Super-Linter will mark the status of each individual linter run in the Checks section of a pull request. Without this you will only see the overall status of the full run. **There is no need to set the GitHub Secret as it is automatically set by GitHub, it only needs to be passed to the action.**"
"NOTE: If you pass the Environment variable `GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}` in your workflow, then the GitHub Super-Linter will mark the status of each linter run in the Checks section of a pull request. Without this, you will only see the overall status of the full run. **There is no need to set the GitHub Secret as it is automatically set by GitHub, it only needs to be passed to the action.**"
The bold text being important to note at this stage. We are using it but we do not need to set any environment variable within our repository.
The bold text is important to note at this stage. We are using it but we do not need to set any environment variable within our repository.
We will use our repository that we used in our Jenkins demo to test against.[Jenkins-HelloWorld](https://github.com/MichaelCade/Jenkins-HelloWorld)
We will use the repository that we used in our Jenkins demo to test against: [Jenkins-HelloWorld](https://github.com/MichaelCade/Jenkins-HelloWorld)
Here is our repository as we left it in the Jenkins sessions.
![](Images/Day75_CICD2.png)
In order for us to take advantage we have to use the Actions tab above to choose from the marketplace which I will cover shortly or we can create our own files using our super-linter code above, in order to create your own you must create a new file in your repository at this exact location. `.github/workflows/workflow_name` obviously making sure the workflow_name is something useful for you recognise, within here we can have many different workflows performing different jobs and tasks against our repository.
For us to take advantage, we have to use the Actions tab above to choose from the marketplace (which I will cover shortly), or we can create our own files using our super-linter code above. To create your own, you must create a new file in your repository at this exact location: `.github/workflows/workflow_name`, obviously making sure the workflow_name is something useful for you to recognise. Within here we can have many different workflows performing different jobs and tasks against our repository.
We are going to create `.github/workflows/super-linter.yml`
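A sketch of what that file can contain, based on the snippet above and the later change to version 4 of the action (the default branch name is an assumption):

```Yaml
name: Super-Linter

# run the workflow whenever anything is pushed to the repository
on: push

jobs:
  super-lint:
    name: Lint code base
    runs-on: ubuntu-latest
    steps:
      # check out the repository so the linter can see the code
      - name: Checkout code
        uses: actions/checkout@v2

      # run the community-maintained Super-Linter action
      - name: Run Super-Linter
        uses: github/super-linter@v4
        env:
          DEFAULT_BRANCH: main
          # automatically provided by GitHub, it only needs to be passed to the action
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```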
![](Images/Day75_CICD3.png)
We can then paste our code and commit the code to our repository, if we then head to the Actions tab we will now see our Super-Linter workflow listed as per below,
We can then paste our code and commit the code to our repository, if we then head to the Actions tab we will now see our Super-Linter workflow listed below,
![](Images/Day75_CICD4.png)
@ -142,11 +142,11 @@ We defined in our code that this workflow would run when we pushed anything to o
![](Images/Day75_CICD5.png)
As you can see from the above we have some errors most likely with my hacking ability vs coding ability.
As you can see from the above we have some errors most likely with my hacking ability vs my coding ability.
Although actually it was not my code at least not yet, in running this and getting an error I found this [issue](https://github.com/github/super-linter/issues/2255)
Although it was not my code at least not yet, in running this and getting an error I found this [issue](https://github.com/github/super-linter/issues/2255)
Take #2 I changed the version of Super-Linter from version 3 to 4 and have ran the task again.
Take #2: I changed the version of Super-Linter from version 3 to 4 and ran the task again.
![](Images/Day75_CICD6.png)
@ -160,7 +160,7 @@ Now if we resolve the issue with my code and push the changes our workflow will
![](Images/Day75_CICD8.png)
If you hit the new workflow button highlighted above, this is going to open the door to a huge plethora of actions. One thing you might have noticed throughout this challenge is that we don't want to reinvent the wheel we want to stand on the shoulders of giants and share our code, automations and skills far and wide to make our lives easier.
If you hit the new workflow button highlighted above, this is going to open the door to a huge plethora of actions. One thing you might have noticed throughout this challenge is that we don't want to reinvent the wheel; we want to stand on the shoulders of giants and share our code, automation and skills far and wide to make our lives easier.
![](Images/Day75_CICD9.png)
@ -183,4 +183,4 @@ Next up we will cover another area of CD, we will be looking into ArgoCD to depl
- [GitHub Actions](https://www.youtube.com/watch?v=R8_veQiYBjI)
- [GitHub Actions CI/CD](https://www.youtube.com/watch?v=mFFXuXjVgkU)
See you on [Day 76](day76.md)
See you on [Day 76](day76.md)


@ -12,7 +12,7 @@ id: 1048809
“Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes”
Version control is the key here, ever made a change to your environment on the fly and have no recollection of that change and because the lights are on and everything is green you continue to keep plodding along? Ever made a change and broke everything or some of everything? You might have known you made the change and you can quickly roll back your change, that bad script or misspelling. Now ever done this a massive scale and maybe it was not you or maybe it was not found straight away and now the business is suffering. Therefore, version control is important. Not only that but “Application definitions, configurations, and environments should be declarative, and version controlled.” On top of this (which comes from ArgoCD), they also mention that “Application deployment and lifecycle management should be automated, auditable, and easy to understand.”
Version control is the key here, ever made a change to your environment on the fly and have no recollection of that change and because the lights are on and everything is green you continue to keep plodding along? Ever made a change and broken everything or some of everything? You might have known you made the change and you can quickly roll back your change, that bad script or misspelling. Now ever done this on a massive scale and maybe it was not you or maybe it was not found straight away and now the business is suffering. Therefore, version control is important. Not only that but “Application definitions, configurations, and environments should be declarative, and version controlled.” On top of this (which comes from ArgoCD), they also mention that “Application deployment and lifecycle management should be automated, auditable, and easy to understand.”
From an Operations background but having played a lot around Infrastructure as Code this is the next step to ensuring all of that good stuff is taken care of along the way with continuous deployment/delivery workflows.
@ -33,7 +33,7 @@ Make sure all the ArgoCD pods are up and running with `kubectl get pods -n argoc
![](Images/Day76_CICD2.png)
Also let's check everything that we deployed in the namespace with `kubectl get all -n argocd`
Also, let's check everything that we deployed in the namespace with `kubectl get all -n argocd`
![](Images/Day76_CICD3.png)
@ -43,7 +43,7 @@ Then open a new web browser and head to `https://localhost:8080`
![](Images/Day76_CICD4.png)
To log in you will need a username of admin and then to grab your created secret as your password use the `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo`
To log in you will need a username of admin, and to grab your created secret to use as your password, run `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo`
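For reference, a rough sketch of the standard install and access steps, following the upstream getting-started documentation:

```Shell
# create the namespace and apply the upstream install manifest
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# forward the ArgoCD API server/UI to localhost:8080 in its own terminal
kubectl port-forward svc/argocd-server -n argocd 8080:443

# grab the generated admin password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo
```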
![](Images/Day76_CICD5.png)
@ -59,15 +59,15 @@ The application I want to deploy is Pac-Man, yes that's right the famous game an
You can find the repository for [Pac-Man](https://github.com/MichaelCade/pacman-tanzu.git) here.
Instead of going through each step using screen shots I thought it would be easier to create a walkthrough video covering the steps taken for this one particular application deployment.
Instead of going through each step using screenshots, I thought it would be easier to create a walkthrough video covering the steps taken for this one particular application deployment.
[ArgoCD Demo - 90DaysOfDevOps](https://www.youtube.com/watch?v=w6J413_j0hA)
Note - During the video there is a service that is never satisfied as the app health being healthy this is because the LoadBalancer type set for the pacman service is in a pending state, in Minikube we do not have a loadbalancer configured. If you would like to test this you could change the YAML for the service to ClusterIP and use port forwarding to play the game.
Note - During the video, there is a service that never shows as healthy in the app health; this is because the LoadBalancer type set for the Pacman service is in a pending state, as in Minikube we do not have a load balancer configured. If you would like to test this you could change the YAML for the service to ClusterIP and use port forwarding to play the game.
This wraps up the CICD Pipelines section. I feel there is a lot of focus on this area in the industry at the moment, and you will also hear terms around GitOps related to the methodologies used within CICD in general.
The next section we move into is around Observability, another concept or area that is not new but it is more and more important as we look at our environments in a different way.
The next section we move into is around Observability, another concept or area that is not new but it is more and more important as we look at our environments differently.
## Resources
@ -80,4 +80,4 @@ The next section we move into is around Observability, another concept or area t
- [GitHub Actions](https://www.youtube.com/watch?v=R8_veQiYBjI)
- [GitHub Actions CI/CD](https://www.youtube.com/watch?v=mFFXuXjVgkU)
See you on [Day 77](day77.md)
See you on [Day 77](day77.md)


@ -10,7 +10,7 @@ id: 1048715
## The Big Picture: Monitoring
In this section we are going to talk about monitoring, what is it why do we need it?
In this section we are going to talk about monitoring, what is it and why do we need it?
### What is Monitoring?
@ -28,10 +28,10 @@ We are responsible for ensuring that all the services, applications and resource
How do we do it? There are three ways:
- Login manually to all of our servers and check all the data pertaining to services processes and resources.
- Log in manually to all of our servers and check all the data about services, processes and resources.
- Write a script that logs in to the servers for us and checks on the data.
Both of these options would require considerable amount of work on our part,
Both of these options would require a considerable amount of work on our part.
The third option is easier, we could use a monitoring solution that is available in the market.
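Going back to the second option for a moment, a minimal sketch of what such a script might look like (the host names and checks are hypothetical):

```Shell
#!/usr/bin/env bash
# log in to each server and pull back some basic resource data
for host in web01 web02 db01; do
  echo "--- $host ---"
  ssh "$host" 'uptime; df -h /; free -m'
done
```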
@ -45,15 +45,15 @@ The tool allows us to monitor our servers and see if they are being sufficiently
![](Images/Day77_Monitoring3.png)
Essentially monitoring allows us to achieve these two goals, check the status of our servers and services and determine the health of our infrastructure it also gives us a 40,000ft view of the complete infrastructure to see if our servers are up and running, if the applications are working properly and the web servers are reachable or not.
Essentially monitoring allows us to achieve these two goals: check the status of our servers and services, and determine the health of our infrastructure. It also gives us a 40,000ft view of the complete infrastructure to see if our servers are up and running, if the applications are working properly and whether the web servers are reachable or not.
It will tell us that our disk has been increasing by 10 percent for the last 10 weeks in a particular server, that it will exhaust entirely within the next four or five days and we'll fail to respond soon it will alert us when your disk or server is in a critical state so that we can take appropriate actions to avoid possible outages.
It will tell us that our disk usage has been increasing by 10 per cent for the last 10 weeks on a particular server, that it will be exhausted entirely within the next four or five days and that we'll soon fail to respond. It will alert us when our disk or server is in a critical state so that we can take appropriate actions to avoid possible outages.
In this case we can free up some disk space and ensure that our servers don't fail and that our users are not affected.
In this case, we can free up some disk space and ensure that our servers don't fail and that our users are not affected.
The difficult question for most monitoring engineers is: what do we monitor? And alternatively, what do we not?
Every system has a number of resources, which of these should we keep a close eye on and which ones can we turn a blind eye to for instance is it necessary to monitor CPU usage the answer is yes obviously nevertheless it is still a decision that has to be made is it necessary to monitor the number of open ports in the system we may or may not have to depending on the situation if it is a general-purpose server we probably won't have to but then again if it is a webserver we probably would have to.
Every system has several resources; which of these should we keep a close eye on and which ones can we turn a blind eye to? For instance, is it necessary to monitor CPU usage? The answer is obviously yes; nevertheless it is still a decision that has to be made. Is it necessary to monitor the number of open ports in the system? We may or may not have to, depending on the situation: if it is a general-purpose server we probably won't have to, but then again if it is a webserver we probably would have to.
### Continuous Monitoring
@ -65,11 +65,11 @@ There are three key areas of focus when it comes to monitoring.
- Application Monitoring
- Network Monitoring
The important thing to note is that there are many tools available we have mentioned two generic systems and tools in this session but there are lots. The real benefit of a monitoring solution comes when you have really spent the time making sure you are answering that question of what should we be monitoring and what shouldn't we?
The important thing to note is that there are many tools available; we have mentioned two generic systems and tools in this session but there are lots. The real benefit of a monitoring solution comes when you have spent the time making sure you are answering the question of what should we be monitoring and what shouldn't we?
We could turn on a monitoring solution in any of our platforms and it will start grabbing information but if that information is simply too much then you are going to struggle to benefit from that solution, you have to spend the time to configure.
We could turn on a monitoring solution in any of our platforms and it will start grabbing information but if that information is simply too much then you are going to struggle to benefit from that solution, you have to spend the time to configure it.
In the next session we will get hands on with a monitoring tool and see what we can start monitoring.
In the next session, we will get hands-on with a monitoring tool and see what we can start monitoring.
## Resources
@ -80,4 +80,4 @@ In the next session we will get hands on with a monitoring tool and see what we
- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg)
- [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8)
See you on [Day 78](day78.md)
See you on [Day 78](day78.md)


@ -10,23 +10,23 @@ id: 1049056
## Hands-On Monitoring Tools
In the last session, I spoke about the big picture of monitoring and I took a look into Nagios, there was two reasons for doing this. The first was this is a piece of software I have heard a lot of over the years so wanted to know a little more about its capabilities.
In the last session, I spoke about the big picture of monitoring and I took a look into Nagios; there were two reasons for doing this. The first was that this is a piece of software I have heard a lot about over the years, so I wanted to know a little more about its capabilities.
Today I am going to be going into Prometheus. I have seen more and more of Prometheus in the Cloud-Native landscape, but it can also be used to look after physical resources outside of Kubernetes and the like as well.
### Prometheus - Monitors nearly everything
First of all Prometheus is Open-Source that can help you monitor containers and microservice based systems as well as physical, virtual and other services. There is a large community behind Prometheus.
First of all, Prometheus is an open-source tool that can help you monitor containers and microservice-based systems as well as physical, virtual and other services. There is a large community behind Prometheus.
Prometheus has a large array of [integrations and exporters](https://prometheus.io/docs/instrumenting/exporters/) The key being to exporting existing metrics as prometheus metrics. On top of this it also supports multiple proagramming languages.
Prometheus has a large array of [integrations and exporters](https://prometheus.io/docs/instrumenting/exporters/). The key is to export existing metrics as Prometheus metrics. On top of this, it also supports multiple programming languages.
Pull approach - If you are talking to thousands of microservices or systems and services a push method is going to be where you generally see the service pushing to the monitoring system. This brings some challenges around flooding the network, high cpu and also a single point of failure. Where Pull gives us a much better experience where Prometheus will pull from the metrics endpoint on every service.
Pull approach - If you are talking to thousands of microservices or systems and services, a push method is where you would generally see the service pushing to the monitoring system. This brings some challenges around flooding the network, high CPU and also a single point of failure. Pull gives us a much better experience, where Prometheus will pull from the metrics endpoint on every service.
Once again we see YAML for configuration for Prometheus.
![](Images/Day78_Monitoring7.png)
Later on you are going to see how this looks when deployed into Kubernetes, in particular we have the **PushGateway** which pulls our metrics from our jobs/exporters.
Later on, you are going to see how this looks when deployed into Kubernetes; in particular, we have the **PushGateway**, which short-lived jobs push their metrics to and which Prometheus then scrapes.
We also have the **AlertManager**, which pushes alerts out to external services such as email, Slack and other tooling.
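As a rough idea of what that integration looks like, here is a minimal Alertmanager configuration sketch; the SMTP details, webhook URL and channel below are placeholders, not values from this walkthrough.

```yaml
# minimal alertmanager.yml sketch - every endpoint here is a placeholder
global:
  smtp_smarthost: "smtp.example.com:587"
  smtp_from: "alertmanager@example.com"

route:
  receiver: team-alerts          # send everything to a single receiver in this simple example

receivers:
  - name: team-alerts
    email_configs:
      - to: "oncall@example.com"
    slack_configs:
      - api_url: "https://hooks.slack.com/services/XXXX/XXXX/XXXX"
        channel: "#alerts"
```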
There are various ways of installing Prometheus; see the [Download Section](https://prometheus.io/) on the Prometheus site. But we are going to focus our efforts on deploying to Kubernetes, which also has some options:
- Create configuration YAML files
- Using an Operator (manager of all Prometheus components)
- Using a Helm chart to deploy the operator
### Deploying to Kubernetes
We will be using our minikube cluster locally again for this quick and simple installation.
![](Images/Day78_Monitoring1.png)
As you can see from the above, we have also run a `helm repo update`; we are now ready to deploy Prometheus into our minikube environment using the `helm install stable prometheus-community/prometheus` command.
![](Images/Day78_Monitoring2.png)
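If you want to follow along, the commands behind the screenshot above look roughly like this; the chart repository URL is the public prometheus-community one.

```bash
# add the prometheus-community chart repository and refresh it
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# deploy Prometheus into the current (default) namespace with the release name "stable"
helm install stable prometheus-community/prometheus
```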
After a couple of minutes, you will see a number of new pods appear. For this demo I have deployed into the default namespace; I would normally push this into its own namespace.
![](Images/Day78_Monitoring3.png)
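To check what the chart created and to reach the Prometheus UI locally, something like the following should work; the service name and port are assumed from the chart's defaults, so verify them with `kubectl get svc` first.

```bash
# list the pods the chart created in the default namespace
kubectl get pods

# confirm the name and port of the Prometheus server service
kubectl get svc

# forward the Prometheus server service to localhost:9090 (service name assumed from chart defaults)
kubectl port-forward svc/stable-prometheus-server 9090:80
```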
Because we have deployed to our Kubernetes cluster, we will automatically be picking up metrics from the cluster.
![](Images/Day78_Monitoring6.png)
Short of learning PromQL and putting it into practice, this is very much like I mentioned previously: gaining metrics is great, and so is monitoring, but you have to know what you are monitoring and why, and what you are not monitoring and why!
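For a flavour of what PromQL looks like once you do start learning it, here are a few example queries; the node metrics assume the chart's node exporter is running in your cluster.

```promql
# is each scrape target up (1) or down (0)?
up

# per-instance CPU busy rate over the last 5 minutes (node exporter metric)
rate(node_cpu_seconds_total{mode!="idle"}[5m])

# how many HTTP requests the Prometheus server itself has handled
prometheus_http_requests_total
```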
I want to come back to Prometheus, but for now I think we need to look at Log Management and Data Visualisation, which will bring us back to Prometheus later on.
## Resources
- [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8)
- [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples)
See you on [Day 79](day79.md)
## The Big Picture: Log Management
Continuing from the infrastructure monitoring challenges and solutions, log management is another piece of the overall observability jigsaw.
### Log Management & Aggregation
Let's talk about two core concepts, the first of which is log aggregation: a way of collecting and tagging application logs from many different services into a single dashboard that can easily be searched.
One of the first systems that has to be built out in an application performance management system is log aggregation. Application performance management is the part of the DevOps lifecycle where things have been built and deployed, and you need to make sure they are continuously working, have enough resources allocated to them, and aren't showing errors to users. In most production deployments there are many related events that emit logs across services. At Google, a single search might hit ten different services before being returned to the user; if you got unexpected search results, that might mean a logic problem in any of those ten services, and log aggregation helps companies like Google diagnose problems in production. They have built a single dashboard where they can map every request to a unique ID, so if you search something your search gets a unique ID, and every time that search passes through a different service, that service connects the ID to what it is currently doing.
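To make that concrete, a pair of hypothetical log lines tagged with the same request ID might look like this; the field names and values are made up for illustration.

```
{"request_id": "a1b2c3", "service": "frontend", "time": "2022-06-26T10:15:01Z", "message": "received search request"}
{"request_id": "a1b2c3", "service": "search-backend", "time": "2022-06-26T10:15:02Z", "message": "querying index shard 4"}
```

Searching the aggregated logs for `a1b2c3` would then show every service the request touched, in order.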
This is the essence of a good log aggregation platform: efficiently collect logs from everywhere that emits them and make them easily searchable when a fault occurs.
### Example App
Our example application is a web app; we have a typical frontend and backend, storing our critical data in a MongoDB database.
If a user told us the page turned all white and printed an error message, we would be hard-pressed to diagnose the problem with our current stack; the user would need to manually send us the error and we would need to match it with the relevant logs in the other three services.
With log aggregation in place, the web application would connect to the frontend, which then connects to the backend, and each service would ship its logs into the aggregation layer.
### The components of ELK
With Elasticsearch, Logstash and Kibana, all of the services send their logs to Logstash, and Logstash takes these logs, which are text emitted by the application. For example, when someone visits a web page in the web application, it might log that this visitor accessed this page at this time; that is an example of a log message, and those logs would be sent to Logstash.
Logstash would then extract things from them. For the log message "user did **thing** at **time**", it would extract the time, the message and the user, and include them all as tags, so the message becomes an object of tags and message that you can search easily; for example, you could find all of the requests made by a specific user. Logstash doesn't store things itself; it stores them in Elasticsearch, which is an efficient database for querying text. Elasticsearch then exposes the results to Kibana, a web server that connects to Elasticsearch and allows administrators (the DevOps person, other people on your team, or the on-call engineer) to view the logs in production whenever there is a major fault. You as the administrator would connect to Kibana, and Kibana would query Elasticsearch for logs matching whatever you wanted.
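A minimal Logstash pipeline sketch for that kind of extraction might look like the following; the log format, field names and hosts are assumed for the example, not taken from a real application.

```
# logstash.conf sketch - parses lines like "2022-06-26T10:15:01Z user=michael action=login"
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} user=%{USER:user} action=%{WORD:action}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]   # the parsed events are stored in Elasticsearch
  }
}
```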
You could say "hey Kibana, I want to find errors" in the search bar; Kibana would ask Elasticsearch to find the messages which contain the string "error", and Elasticsearch would return results that had been populated by Logstash. Logstash would have been sent those results from all of the other services.
### How would we use ELK to diagnose a production problem
A user says "I saw error code 1234567 when I tried to do this". With the ELK setup we'd go to Kibana, enter 1234567 in the search bar and press enter. That would show us the logs that correspond to that code, and one of them might say "internal server error returning 1234567". We'd see that the service that emitted that log was the backend, and what time the log was emitted, so we could go to that point in the backend's logs and look at the messages above and below it. That gives us a better picture of what happened for the user's request, and we could repeat the process with the other services until we found what caused the problem for the user.
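In practice you would type the code into the Kibana search bar, but the equivalent query straight against Elasticsearch looks something like this; the index pattern is just an example.

```bash
# search all log indices for the error code the user reported
curl -s 'http://localhost:9200/logs-*/_search?q=message:1234567&pretty'
```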
### Security and Access to Logs
An important piece of the puzzle is ensuring that logs are only visible to administrators (or the users and groups that genuinely need access). Logs can contain sensitive information like tokens, so it's important that only authenticated users can access them; you wouldn't want to expose Kibana to the internet without some way of authenticating.
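One simple way to do that, if you are not using the security features built into the stack itself, is to put a reverse proxy with authentication in front of Kibana; a rough nginx sketch (the hostname and htpasswd file are placeholders) might look like this.

```nginx
# nginx sketch: require basic auth before proxying to Kibana
server {
    listen 80;
    server_name kibana.example.com;                 # placeholder hostname

    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created with the htpasswd tool
        proxy_pass           http://localhost:5601;
    }
}
```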
### Examples of Log Management Tools
There are many examples of log management platforms, and cloud providers also provide logging services such as AWS CloudWatch Logs, Microsoft Azure Monitor and Google Cloud Logging.
Log Management is a key aspect of the overall observability of your applications and infrastructure environment for diagnosing problems in production. It's relatively simple to install a turnkey solution like ELK or CloudWatch, and it makes diagnosing and triaging problems in production significantly easier.
## Resources
- [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw)
- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s)
See you on [Day 80](day80.md)
## ELK Stack

ELK Stack is the combination of 3 separate tools:
- [Elasticsearch](https://www.elastic.co/what-is/elasticsearch) is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.
- [Logstash](https://www.elastic.co/logstash/) is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favourite "stash."
- [Kibana](https://www.elastic.co/kibana/) is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps.
ELK stack lets us reliably and securely take data from any source, in any format, then search, analyze, and visualize it in real time.
On top of the above-mentioned components, you might also see Beats which are lightweight agents that are installed on edge hosts to collect different types of data for forwarding into the stack.
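As a rough idea of what a Beat looks like, here is a minimal Filebeat sketch that ships a log file to Logstash; the paths and host are examples only.

```yaml
# filebeat.yml sketch - ship nginx access logs to Logstash (paths and host are examples)
filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log

output.logstash:
  hosts: ["localhost:5044"]
```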
- Logs: Server logs that need to be analysed are identified
[Picture taken from Guru99](https://www.guru99.com/elk-stack-tutorial.html)
A good resource explaining this is [The Complete Guide to the ELK Stack](https://logz.io/learn/complete-guide-elk-stack/).
With the addition of Beats, the ELK Stack is also now known as the Elastic Stack.
For the hands-on scenario there are many places you can deploy the Elastic Stack, but we are going to be using Docker Compose to deploy locally on our system.
[Start the Elastic Stack with Docker Compose](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-stack-docker.html#get-started-docker-tls)
You will find the original files and walkthrough that I used here [deviantony/docker-elk](https://github.com/deviantony/docker-elk)
Now we can run `docker-compose up -d`; the first time this is run it will need to pull the required images.
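If you want to follow along with the upstream repository, the steps look roughly like this.

```bash
# clone the example stack referenced above and start it in the background
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
docker-compose up -d

# check that the elasticsearch, logstash and kibana containers are running
docker-compose ps
```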
![](Images/Day80_Monitoring2.png)
If you follow either this repository or the one that I used, you will have either the password "changeme" or, in my repository, the password "90DaysOfDevOps". The username is "elastic".
After a few minutes, we can navigate to `http://localhost:5601/`, which is our Kibana server running as a Docker container.
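Before heading to the browser, you can also confirm that Elasticsearch itself is answering; this assumes the compose file publishes port 9200 and that you are using the default "changeme" password from the upstream repository.

```bash
# quick health check against Elasticsearch (default credentials from the upstream repo)
curl -u elastic:changeme http://localhost:9200
```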
![](Images/Day80_Monitoring3.png)
Under the section titled "Get started by adding integrations" there is a "try sample data" option.
![](Images/Day80_Monitoring5.png)
I am going to select "Sample web logs" but this is really to get a look and feel of what data sets you can get into the ELK stack.
I am going to select "Sample weblogs" but this is really to get a look and feel of what data sets you can get into the ELK stack.
When you have selected "Add Data" it takes a while to populate some of that data and then you have the "View Data" option and a list of the available ways to view that data in the drop down.
When you have selected "Add Data" it takes a while to populate some of that data and then you have the "View Data" option and a list of the available ways to view that data in the dropdown.
![](Images/Day80_Monitoring6.png)
As it states on the dashboard view:
![](Images/Day80_Monitoring7.png)
This is using Kibana to visualise data that has been added into Elasticsearch via Logstash. This is not the only option, but I personally wanted to deploy and look at this.
We are going to cover Grafana at some point, and you are going to see some data visualisation similarities between the two; you have also seen Prometheus.
I was also reading an article from MetricFire comparing Prometheus vs. ELK, which is worth a look.

## Resources
- [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw)
- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s)
See you on [Day 81](day81.md)