Merge branch 'main' into tr-day07

commit cbb5b845a2
Author: ptux
Date: 2022-07-31 07:43:28 +09:00
221 changed files with 18532 additions and 4951 deletions

View File

@ -6,6 +6,6 @@ func main() {
var challenge = "#90DaysOfDevOps"
const daystotal = 90
fmt.Println("Welcome to", challenge)
fmt.Println("Welcome to", challenge, "")
fmt.Println("This is a", daystotal, "challenge")
}

View File

@ -17,7 +17,7 @@ metadata:
spec:
selector:
app: elasticsearch
#Renders The service Headless
clusterIP: None
ports:
- port: 9200

View File

@ -8,6 +8,7 @@ canonical_url: null
id: 1048731
date: '2022-04-17T10:12:40Z'
---
## Introduction - Day 1
Day 1 of our 90-day adventure to build a good foundational understanding of DevOps and of the tools that help with a DevOps mindset.

View File

@ -8,6 +8,7 @@ canonical_url: null
id: 1048699
date: '2022-04-17T21:15:34Z'
---
## Responsibilities of a DevOps Engineer
Hopefully, you are coming into this off the back of going through the resources and posting on [Day1 of #90DaysOfDevOps](day01.md)
@ -59,10 +60,11 @@ This is where we are going to end this day of learning, hopefully, this was usef
I am always open to adding additional resources to these readme files as it is here as a learning tool.
My advice is to watch all of the below and hopefully you also picked something up from the text and explanations above.
- [What is DevOps? - TechWorld with Nana](https://www.youtube.com/watch?v=0yWAtQ6wYNM)
- [What is DevOps? - GitHub YouTube](https://www.youtube.com/watch?v=kBV8gPVZNEE)
- [What is DevOps? - IBM YouTube](https://www.youtube.com/watch?v=UbtB4sMaaNM)
- [What is DevOps? - AWS](https://aws.amazon.com/devops/what-is-devops/)
- [What is DevOps? - Microsoft](https://docs.microsoft.com/en-us/devops/what-is-devops)
If you made it this far then you will know if this is where you want to be or not. See you on [Day 3](day03.md).

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048825
---
## DevOps Lifecycle - Application Focused
As we continue through these next few weeks we are 100% going to come across these titles (Continuous Development, Testing, Deployment, Monitor) over and over again. If you are heading towards the DevOps Engineer role then repeatability will be something you get used to, but constantly enhancing things each time around is what keeps it interesting.
@ -14,6 +15,7 @@ As we continue through these next few weeks we are 100% going to come across the
In this hour we are going to take a look at the high-level view of the application from start to finish and then back around again like a constant loop.
### Development
Let's take a brand new example of an application: to start with we have nothing created, so as a developer you have to discuss the requirements with your client or end user and come up with some sort of plan for your application. We then need to create our brand new application from those requirements.
In regards to tooling at this stage, there is no real requirement here other than choosing your IDE and the programming language you wish to use to write your application.
@ -27,6 +29,7 @@ We previously mentioned that this application can be written in any language. Im
It is also likely that it will not be one developer working on this project, although even if that were the case, best practice would still call for a code repository to store and collaborate on the code. This could be private or public and could be hosted or privately deployed; generally speaking you would hear the likes of **GitHub or GitLab** being used as a code repository. Again we will cover these as part of our section on **Git** later on.
### Testing
At this stage, we have our requirements and we have our application being developed. But we need to make sure we are testing our code in all the different environments that we have available to us or specifically maybe to the programming language chosen.
This phase enables QA to test for bugs, more frequently we see containers being used for simulating the test environment which overall can improve on cost overheads of physical or cloud infrastructure.
@ -46,6 +49,7 @@ Now you might at this stage be saying "but we don't create applications, we buy
I would also suggest just having this above knowledge is very important as you might buy off the shelf software today, but what about tomorrow or down the line... next job maybe?
### Deployment
Ok so we have our application built and tested against the requirements of our end user and we now need to go ahead and deploy this application into production for our end users to consume.
This is the stage where the code is deployed to the production servers. Now, this is where things get extremely interesting and it is where the rest of our 86 days dives deeper into these areas, because different applications may require different hardware or configurations. This is where **Application Configuration Management** and **Infrastructure as Code** could play a key part in your DevOps lifecycle. It might be that your application is **Containerised** but also available to run on a virtual machine. This then also leads us on to platforms like **Kubernetes** which would be orchestrating those containers and making sure you have the desired state available to your end users.
@ -62,7 +66,7 @@ This section is also where we are going to capture that feedback wheel about the
Reliability is a key factor here as well, at the end of the day we want our Application to be available all the time it is required. This then leads to other **observability, security and data management** areas that should be continuously monitored and feedback can always be used to better enhance, update and release the application continuously.
Some input from the community here: [@\_ediri](https://twitter.com/_ediri) mentioned that as part of this continuous process we should also have the FinOps teams involved. Apps and data are running and stored somewhere, so you should be monitoring this continuously to make sure that if things change from a resources point of view, your costs are not causing some major financial pain on your cloud bills.
I think it is also a good time to bring up the "DevOps Engineer" title mentioned above. Albeit there are many DevOps Engineer positions in the wild that people hold, this is not the ideal way of positioning the process of DevOps. What I mean is that, from speaking to others in the community, the title of DevOps Engineer should not be the goal for anyone, because really any position should be adopting the DevOps processes and culture explained here. DevOps should be used in many different positions such as Cloud-Native engineer/architect, virtualisation admin, cloud architect/engineer, and infrastructure admin, to name a few. The reason for using DevOps Engineer above was really to highlight the scope of the process used by any of the above positions and more.

View File

@ -7,64 +7,65 @@ cover_image: null
canonical_url: null
id: 1048830
---
## Plan > Code > Build > Testing > Release > Deploy > Operate > Monitor >
Today we are going to focus on the individual steps from start to finish and the continuous cycle of an Application in a DevOps world.
![DevOps](Images/Day5_DevOps8.png)
### Plan
It all starts with the planning process: this is where the development team gets together and figures out what types of features and bug fixes they're going to roll out in their next sprint. This is an opportunity for you as a DevOps Engineer to get involved, to learn what kinds of things are going to be coming your way that you need to be involved with, and also to influence their decisions or their path, helping them work with the infrastructure that you've built or steering them towards something that's going to work better for them if they're not on that path. One key thing to point out here is that the developers or software engineering team are your customer as a DevOps engineer, so this is your opportunity to work with your customer before they go down a bad path.
### Code
Now once that planning session is done, they're going to start writing the code. You may or may not be involved a whole lot with this, but one of the places you may get involved is that, whenever they're writing code, you can help them better understand the infrastructure, so they know what services are available and how best to talk to those services. Once they're done, they'll merge that code into the repository.
### Build
This is where we'll kick off the first of our automation processes, because we're going to take their code and build it. Depending on what language they're using, that may mean transpiling or compiling it, or it might mean creating a Docker image from that code. Either way, we're going to go through that process using our CI/CD pipeline.
## Testing
Once we've built it, we're going to run some tests on it. Now, the development team usually writes the tests, and you may have some input into what tests get written, but we need to run those tests. Testing is a way for us to try to minimise introducing problems into production. It doesn't guarantee that, but we want to get as close to a guarantee as we can that we are, one, not introducing new bugs and, two, not breaking things that used to work.
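The original text stops at the description, but to make the idea concrete, here is a minimal sketch of the kind of automated test a CI pipeline would run for a Go project (the `add` function and its values are purely illustrative):

```
package calculator

import "testing"

// add is the function under test; in a real project it would live in its
// own source file within the same package.
func add(a, b int) int {
	return a + b
}

// TestAdd is the kind of check a pipeline runs with `go test ./...`
// before the build is allowed to move on to the release stage.
func TestAdd(t *testing.T) {
	if got := add(2, 3); got != 5 {
		t.Errorf("add(2, 3) = %d, want 5", got)
	}
}
```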
## Release
Once those tests pass, we're going to do the release process. Depending again on what type of application you're working on, this may be a non-step; the code may just live in the GitHub repo or the git repository or wherever it lives. But it may be the process of taking your compiled code or the Docker image that you've built and putting it into a registry or a repository where it's accessible by your production servers for the deployment process.
## Deploy
This is the thing that we do next, because deployment is like the end game of this whole thing. Deployment is when we put the code into production, and it's not until we do that that our business realises the value from all the time, effort and hard work that you and the software engineering team have put into this product up to this point.
## Operate
Once it's deployed, we are going to operate it. Operating it may involve something like getting calls from your customers that they're all annoyed that the site or their application is running slow, so you need to figure out why that is and then possibly build auto-scaling to increase the number of servers available during peak periods and decrease the number of servers during off-peak periods. Either way, those are all operational-type metrics. Another operational thing that you do is include a feedback loop from production back to your ops team, letting you know about key events that happened in production, such as a deployment. Back one step on the deployment thing: this may or may not get automated depending on your environment. The goal is always to automate it when possible; there are some environments where you may need to do a few steps before you're ready for that, but ideally you want to deploy automatically as part of your automation process. If you're doing that, it might be a good idea to include in your operational steps some type of notification so that your ops team knows a deployment has happened.
## Monitor
All of the above parts lead to the final step, because you need to have monitoring, especially around operational issues like auto-scaling and troubleshooting. You don't know there's a problem if you don't have monitoring in place to tell you that there's a problem. Some of the things you might build monitoring for are memory utilisation, CPU utilisation, disk space, and API endpoint response time (how quickly that endpoint is responding). A big part of that as well is logs; logs give developers the ability to see what is happening without having to access production systems.
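As a small, hedged illustration of one of those signals (the URL and the 500ms threshold below are placeholders, not from the original), here is a Go sketch that measures how quickly an API endpoint responds:

```
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Placeholder endpoint, swap in the API you actually want to watch.
	url := "https://example.com/health"

	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("endpoint check failed:", err)
		return
	}
	defer resp.Body.Close()

	elapsed := time.Since(start)
	fmt.Printf("%s answered with status %d in %v\n", url, resp.StatusCode, elapsed)

	// In a real setup this number would be shipped to a metrics system
	// and alerted on rather than just printed.
	if elapsed > 500*time.Millisecond {
		fmt.Println("warning: response time above the 500ms threshold")
	}
}
```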
## Rinse & Repeat
Once that's in place you go right back to the beginning to the planning stage and go through the whole thing again
## Continuous
Many tools help us achieve the above continuous process. All of this, with the ultimate goal of being completely automated, whether on cloud infrastructure or any other environment, is often described as Continuous Integration/Continuous Delivery/Continuous Deployment, or "CI/CD" for short. We will spend a whole week on CI/CD later on in the 90 Days with some examples and walkthroughs to grasp the fundamentals.
### Continuous Delivery
Continuous Delivery = Plan > Code > Build > Test
### Continuous Integration
This is effectively the outcome of the Continuous Delivery phases above plus the outcome of the Release phase. This is the case for both failure and success but this is fed back into continuous delivery or moved to Continuous Deployment.
Continuous Integration = Plan > Code > Build > Test > Release
### Continuous Deployment
If you have a successful release from your continuous integration then move to Continuous Deployment which brings in the following phases
@ -74,10 +75,10 @@ You can see these three Continuous notions above as the simple collection of pha
This last bit was a bit of a recap for me on Day 3 but think this makes things clearer for me.
### Resources
- [DevOps for Developers Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU)
- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s)
- [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk)
If you made it this far then you will know if this is where you want to be or not.

View File

@ -7,43 +7,46 @@ cover_image: null
canonical_url: null
id: 1048855
---
## DevOps - The real stories
DevOps, to begin with, was seen to be out of reach for a lot of us as we didn't have companies like Netflix or a Fortune 500 practising it, but I think it is now becoming the norm as businesses of all kinds start adopting a DevOps practice.
You will see from the references below that there are a lot of different industries and verticals using DevOps, with a hugely positive effect on their business objectives.
The overarching benefit here is that DevOps, if done correctly, should help improve the speed and quality of your business's software development.
I wanted to take this day to look at successful companies that have adopted a DevOps practice and share some resources around this. This will be a great opportunity for the community to dive in and help here. Have you adopted a DevOps culture in your business? Has it been successful?
I mentioned Netflix above and will touch on them again as it is a very good model and quite advanced compared to what we generally see today, but I'll also mention some other big brands that are succeeding at this.
## Amazon
In 2010 Amazon moved their physical server footprint to the AWS (Amazon Web Services) cloud. This allowed them to save resources by scaling capacity up and down in very small increments. We also know that AWS went on to generate huge revenue itself whilst still running Amazon's retail branch.
Amazon adopted a continuous deployment process in 2011 (according to the link below) where their developers could deploy code whenever they wanted and to whichever servers they needed to. This enabled Amazon to deploy new software to production servers on average every 11.6 seconds!
## Netflix
Who doesn't use Netflix? It's a high-quality streaming service and, personally speaking, it has a great user experience.
Why is that user experience so great? Well, the ability to deliver a service with no personal recollection of glitches requires speed, flexibility, and attention to quality.
Netflix developers can automatically build pieces of code into deployable web images without relying on IT operations. As the images are updated, they are integrated into Netflix's infrastructure using a custom-built, web-based platform.
Continuous Monitoring is in place so that if the deployment of the images fails, the new images are rolled back and traffic is rerouted back to the previous version.
There is a great talk listed below that goes into more about the DOs and DONTs that Netflix lives and dies by within their teams.
## Etsy
As with many of us and with many companies, there was a real struggle around slow and painful deployments. In the same vein, we might have also experienced working in companies that have lots of silos and teams that are not working well together.
From what I can make out by reading about Amazon and Netflix, Etsy might have adopted letting developers deploy their own code around the end of 2009, which might have been even before the other two. (Interesting!)
An interesting takeaway I read here was that they realised that when developers feel responsible for deployment they also would take responsibility for application performance, uptime and other goals.
A learning culture is a key part of DevOps. Even failure can be a success if lessons are learned. (not sure where this quote came from but it kind of makes sense!)
I have added some other stories where DevOps has changed the game within some of these massively successful companies.
@ -59,7 +62,7 @@ I have added some other stories where DevOps has changed the game within some of
### Recap of our first few days looking at DevOps
- DevOps is a combination of Development and Operations that allows a single team to manage the whole application development lifecycle which consists of **Development**, **Testing**, **Deployment**, **Operations**.
- The main focus and aim of DevOps are to shorten the development lifecycle while delivering features, fixes and functionality frequently in close alignment with business objectives.
@ -67,6 +70,6 @@ I have added some other stories where DevOps has changed the game within some of
If you made it this far then you will know if this is where you want to be or not. See you on [Day 7](day07.md).
Day 7 will be us diving into a programming language. I am not aiming to be a developer but I want to be able to understand what the developers are doing.
Can we achieve that in a week? Probably not but if we spend 7 days or 7 hours learning something we are going to know more than when we started.

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048856
---
## The Big Picture: DevOps & Learning a Programming Language
I think it is fair to say to be successful in the long term as a DevOps engineer you've got to know at least one programming language at a foundational level. I want to take this first session of this section to explore why this is such a critical skill to have, and hopefully, by the end of this week or section, you are going to have a better understanding of the why, how and what to do to progress with your learning journey.
@ -23,7 +24,7 @@ But then you might not have clear cut reasoning like that to choose you might be
Remember I am not looking to become a software developer here I just want to understand a little more about the programming language so that I can read and understand what those tools are doing and then that leads to possibly how we can help improve things.
It is also important to know how you interact with those DevOps tools which could be Kasten K10 or it could be Terraform and HCL. These are what we will call config files and this is how you interact with those DevOps tools to make things happen, commonly these are going to be YAML. (We may use the last day of this section to dive a little into YAML)
## Did I just talk myself out of learning a programming language?
@ -38,13 +39,14 @@ As I have also mentioned some of the most known DevOps tools and platforms are w
What are some of the characteristics of Go that make it great for DevOps?
## Build and Deployment of Go Programs
In a DevOps role, an advantage of using an interpreted language like Python is that you don't need to compile a Python program before running it; especially for smaller automation tasks, you don't want to be slowed down by a build process that requires compilation. Go is a compiled programming language, but **Go compiles directly into machine code** and is also known for fast compilation times.
## Go vs Python for DevOps
Go programs are statically linked, which means that when you compile a Go program everything is included in a single binary executable and no external dependencies need to be installed on the remote machine. This makes the deployment of Go programs easy compared to a Python program that uses external libraries, where you have to make sure that all of those libraries are installed on the remote machine you wish to run on.
Go is a platform-independent language, which means you can produce binary executables for all the major operating systems (Linux, Windows, macOS etc.), and it is very easy to do so. With Python, it is not as easy to create these binary executables for particular operating systems.
Go is a very performant language: it has fast compilation and a fast runtime with lower resource usage (CPU and memory), especially compared to Python. Numerous optimisations have been implemented in the Go language that make it so performant. (Resources below)
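To make the static-binary and cross-platform points concrete, here is a hedged sketch of cross-compiling the Day 8 Hello World for other operating systems, assuming a Linux or macOS shell (on Windows PowerShell you would set `$env:GOOS` and `$env:GOARCH` instead):

```
# build for the machine you are on
go build -o hello main.go

# build for other platforms without needing access to them
GOOS=windows GOARCH=amd64 go build -o hello.exe main.go
GOOS=linux   GOARCH=amd64 go build -o hello-linux main.go
GOOS=darwin  GOARCH=arm64 go build -o hello-mac main.go
```

Each output is a single self-contained binary you can copy to the target machine and run, with no interpreter or libraries to install first.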

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048857
---
## Setting up your DevOps environment for Go & Hello World
Before we get into some of the fundamentals of Go we should get Go installed on our workstation and do what every "learning programming 101" module teaches us which is to create the Hello World app. As this one is going to be walking through the steps to get Go installed on your workstation we are going to attempt to document the process in pictures so people can easily follow along.
@ -15,7 +16,7 @@ First of all, let's head on over to [go.dev/dl](https://go.dev/dl/) and you will
![](Images/Day8_Go1.png)
If we made it this far you probably know which workstation operating system you are running so select the appropriate download and then we can get installing. I am using Windows for this walkthrough, basically, from this next screen, we can leave all the defaults in place for now. **_(I will note that at the time of writing this was the latest version so screenshots might be out of date)_**
![](Images/Day8_Go2.png)
@ -72,11 +73,13 @@ func main() {
fmt.Println("Hello #90DaysOfDevOps")
}
```
Now I appreciate that the above might make no sense at all, but we will cover more about functions, packages and more in later days. For now, let's run our app. Back in the terminal and in our Hello folder we can now check that all is working. Using the command below we can check to see if our generic learning program is working.
```
go run main.go
```
![](Images/Day8_Go11.png)
It doesn't end there though, what if we now want to take our program and run it on other Windows machines? We can do that by building our binary using the following command
@ -84,6 +87,7 @@ It doesn't end there though, what if we now want to take our program and run it
```
go build main.go
```
![](Images/Day8_Go12.png)
If we run that, we would see the same output:

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1099682
---
## Let's explain the Hello World code
### How Go works
@ -16,6 +17,7 @@ On [Day 8](day08.md) we walked through getting Go installed on your workstation
In this section, we are going to take a deeper look into the code and understand a few more things about the Go language.
### What is Compiling?
Before we get into the [6 lines of the Hello World code](Go/hello.go) we need to have a bit of an understanding of compiling.
Programming languages that we commonly use such as Python, Java, Go and C++ are high-level languages. Meaning they are human-readable but when a machine is trying to execute a program it needs to be in a form that a machine can understand. We have to translate our human-readable code to machine code which is called compiling.
@ -25,6 +27,7 @@ Programming languages that we commonly use such as Python, Java, Go and C++ are
From the above you can see what we did on [Day 8](day08.md) here, we created a simple Hello World main.go and we then used the command `go build main.go` to compile our executable.
### What are packages?
A package is a collection of source files in the same directory that are compiled together. We can simplify this further: a package is a bunch of .go files in the same directory. Remember our Hello folder from Day 8? If and when you get into more complex Go programs you might find that you have folder1, folder2 and folder3 containing different .go files that make up your program with multiple packages.
We use packages so we can reuse other people's code, we don't have to write everything from scratch. Maybe we are wanting a calculator as part of our program, you could probably find an existing Go Package that contains the mathematical functions that you could import into your code saving you a lot of time and effort in the long run.
@ -32,6 +35,7 @@ We use packages so we can reuse other people's code, we don't have to write ever
Go encourages you to organise your code in packages so that it is easy to reuse and maintain source code.
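Picking up the calculator example above, here is a tiny illustration (not part of the original walkthrough) of that reuse: the standard library's `math` package already provides the functions, so we only import and call them:

```
package main

import (
	"fmt"
	"math" // someone else has already written and tested these functions
)

func main() {
	fmt.Println(math.Sqrt(81))    // 9
	fmt.Println(math.Max(90, 42)) // 90
}
```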
### Hello #90DaysOfDevOps Line by Line
Now let's take a look at our Hello #90DaysOfDevOps main.go file and walk through the lines.
![](Images/Day9_Go2.png)
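In case the image is hard to read, here is a commented sketch of what that `main.go` most likely contains, based on the Day 8 walkthrough:

```
// Every Go file starts by declaring which package it belongs to;
// `main` tells the compiler this code builds into an executable.
package main

// Import the fmt package from the standard library so we can print text.
import "fmt"

// The main function is the entry point; execution starts here.
func main() {
	// Println writes the string to standard output followed by a newline.
	fmt.Println("Hello #90DaysOfDevOps")
}
```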

View File

@ -7,7 +7,9 @@ cover_image: null
canonical_url: null
id: 1048701
---
### The Go Workspace
On [Day 8](day08.md) we briefly covered the Go workspace, just enough to get Go up and running for the `Hello #90DaysOfDevOps` demo, but we should explain a little more about the Go workspace.
Remember we chose the defaults and we then went through and created our Go folder in the GOPATH that was already defined but in reality, this GOPATH can be changed to be wherever you want it to be.
@ -17,11 +19,13 @@ If you run
```
echo $GOPATH
```
The output should be similar to mine (with a different username, maybe), which is:
```
/home/michael/projects/go
```
Then here, we created 3 directories. **src**, **pkg** and **bin**
![](Images/Day10_Go1.png)
@ -45,6 +49,7 @@ Our Hello #90DaysOfDevOps is not a complex program so here is an example of a mo
This page from [GoChronicles](https://gochronicles.com/project-structure/) goes into some great detail about why and how the layout is like this, and it also goes a little deeper into other folders we have not mentioned.
### Compiling & running code
On [Day 9](day09.md) we also covered a brief introduction to compiling code, but we can go a little deeper here.
To run our code we first must **compile** it. There are three ways to do this within Go.
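As a quick sketch before the detail (the exact commands are assumed here to be the usual Go toolchain trio), the three options look like this when run from the folder that holds our Hello example:

```
go run main.go   # compile to a temporary location and run it in one step
go build         # compile the package in the current directory into an executable here
go install       # compile and place the resulting executable in $GOPATH/bin
```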

View File

@ -15,6 +15,7 @@ On [Day8](day08.md) we set our environment up, on [Day9](day09.md) we walked thr
Today we are going to take a look into Variables, Constants and Data Types whilst writing a new program.
## Variables & Constants in Go
Let's start by planning our application. I think it would be a good idea to work on a program that tells us how many days we have remaining in our #90DaysOfDevOps challenge.
The first thing to consider here is that as we are building our app and we are welcoming our attendees and we are giving the user feedback on the number of days they have completed we might use the term #90DaysOfDevOps many times throughout the program. This is a great use case to make #90DaysOfDevOps a variable within our program.
@ -30,6 +31,7 @@ Remember to make sure that your variable names are descriptive. If you declare a
```
var challenge = "#90DaysOfDevOps"
```
With the above set and used as we will see in the next code snippet you can see from the output below that we have used a variable.
```
@ -39,9 +41,10 @@ import "fmt"
func main() {
var challenge = "#90DaysOfDevOps"
fmt.Println("Welcome to", challenge "")
fmt.Println("Welcome to", challenge, "")
}
```
You can find the above code snippet in [day11_example1.go](Go/day11_example1.go)
You will then see from the below that we built our code with the above example and we got the output shown below.
@ -61,10 +64,11 @@ func main() {
var challenge = "#90DaysOfDevOps"
const daystotal = 90
fmt.Println("Welcome to", challenge)
fmt.Println("Welcome to", challenge, "")
fmt.Println("This is a", daystotal, "challenge")
}
```
You can find the above code snippet in [day11_example2.go](Go/day11_example2.go)
If we then go through that `go build` process again and run you will see below the outcome.
@ -90,6 +94,7 @@ func main() {
fmt.Println("Great work")
}
```
You can find the above code snippet in [day11_example3.go](Go/day11_example3.go)
Let's run through that `go build` process again or you could just use `go run`
@ -109,6 +114,7 @@ func main() {
```
## Data Types
In the above examples, we have not defined the type of our variables; this is because we can give them a value here and Go is smart enough to know what that type is, or at least can infer it based on the value stored. However, if we want a user to input a value, that will require a specific type.
We have used Strings and Integers in our code so far. Integers for the number of days and strings are for the name of the challenge.
@ -148,6 +154,7 @@ Because Go implies variables where a value is given we can print out those value
```
fmt.Printf("challenge is %T, daystotal is %T, dayscomplete is %T\n", conference, daystotal, dayscomplete)
```
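If you would rather not rely on that inference, Go also lets you state the type up front. A small sketch, reusing the variable names from the examples above (the value of `dayscomplete` is just an example):

```
package main

import "fmt"

func main() {
	// Explicitly typed declarations instead of letting Go infer the type
	var challenge string = "#90DaysOfDevOps"
	var daystotal int = 90
	var dayscomplete int = 11

	fmt.Printf("challenge is %T, daystotal is %T, dayscomplete is %T\n",
		challenge, daystotal, dayscomplete)
}
```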
There are many different integer and float types; the links above will cover these in detail.
- **int** = whole numbers

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048864
---
## Getting user input with Pointers and a finished program
Yesterday ([Day 11](day11.md)), we created our first Go program that was self-contained and the parts we wanted to get user input for were created as variables within our code and given values, we now want to ask the user for their input to give the variable the value for the end message.
@ -36,6 +37,7 @@ This is instead of assigning the value of a variable we want to ask the user for
```
fmt.Scan(&TwitterName)
```
Notice that we also use `&` before the variable. This is known as a pointer which we will cover in the next section.
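To see what that `&` actually gives you, here is a small sketch (not from the original) that prints a variable's value next to its memory address:

```
package main

import "fmt"

func main() {
	TwitterName := "#90DaysOfDevOps"

	fmt.Println(TwitterName)  // prints the value stored in the variable
	fmt.Println(&TwitterName) // prints the memory address, e.g. 0xc000010250

	// fmt.Scan needs the address (a pointer) so that it can write the
	// user's input into that location:
	// fmt.Scan(&TwitterName)
}
```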
In our code [day12_example2.go](Go/day12_example2.go) you can see that we are asking the user to input two variables, `TwitterName` and `DaysCompleted`
@ -51,6 +53,7 @@ For us to do that we have created a variable called `remainingDays` and we have
```
remainingDays = remainingDays - DaysCompleted
```
You can see how our finished program looks here: [day12_example3.go](Go/day12_example3.go).
If we now run this program you can see that simple calculation is made based on the user input and the value of the `remainingDays`

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048865
---
## Tweet your progress with our new App
On the final day of looking into this programming language, we have only just touched the surface here of the language but it is at that start that I think we need to get interested and excited and want to dive more into it.
@ -14,6 +15,7 @@ On the final day of looking into this programming language, we have only just to
Over the last few days, we have taken a small idea for an application and we have added functionality to it, in this session I want to take advantage of those packages we mentioned and create the functionality for our app to not only give you the update of your progress on screen but also send a tweet with the details of the challenge and your status.
## Adding the ability to tweet your progress
The first thing we need to do is set up our developer API access with Twitter for this to work.
Head to the [Twitter Developer Platform](https://developer.twitter.com) and sign in with your Twitter handle and details. Once in you should see something like the below without the app that I already have created.
@ -55,6 +57,7 @@ To test this before putting this into our main application, I created a new dire
We now need those keys, tokens and secrets we gathered from the Twitter developer portal. We are going to set these in our environment variables. This will depend on the OS you are running:
Windows
```
set CONSUMER_KEY
set CONSUMER_SECRET
@ -63,12 +66,14 @@ set ACCESS_TOKEN_SECRET
```
Linux / macOS
```
export CONSUMER_KEY
export CONSUMER_SECRET
export ACCESS_TOKEN
export ACCESS_TOKEN_SECRET
```
At this stage, you can take a look at [day13_example2](Go/day13_example2.go) at the code but you will see here that we are using a struct to define our keys, secrets and tokens.
We then have a `func` to parse those credentials and make that connection to the Twitter API
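The exact code lives in the linked file, but roughly speaking the pattern looks like the sketch below; the struct, field and function names here are illustrative rather than copied from the repository:

```
package main

import (
	"fmt"
	"os"
)

// Credentials groups the four values we exported as environment variables.
type Credentials struct {
	ConsumerKey       string
	ConsumerSecret    string
	AccessToken       string
	AccessTokenSecret string
}

// getCredentials reads the environment variables and fails early if any are
// missing, which is a kinder error than a rejected Twitter API call later on.
func getCredentials() (Credentials, error) {
	creds := Credentials{
		ConsumerKey:       os.Getenv("CONSUMER_KEY"),
		ConsumerSecret:    os.Getenv("CONSUMER_SECRET"),
		AccessToken:       os.Getenv("ACCESS_TOKEN"),
		AccessTokenSecret: os.Getenv("ACCESS_TOKEN_SECRET"),
	}
	if creds.ConsumerKey == "" || creds.ConsumerSecret == "" ||
		creds.AccessToken == "" || creds.AccessTokenSecret == "" {
		return creds, fmt.Errorf("one or more Twitter environment variables are not set")
	}
	return creds, nil
}

func main() {
	creds, err := getCredentials()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("credentials loaded, ready to authenticate against the Twitter API")
	_ = creds // the real program would now hand creds to the Twitter client library
}
```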
@ -152,6 +157,7 @@ func main() {
}
```
The above will either give you an error based on what is happening or it will succeed and you will have a tweet sent with the message outlined in the code.
## Pairing the two together - Go-Twitter-Bot + Our App
@ -261,6 +267,7 @@ func main() {
}
```
The outcome of this should be a tweet but if you did not supply your environment variables then you should get an error like the one below.
![](Images/Day13_Go7.png)
@ -280,6 +287,7 @@ I next want to cover the question, "How do you compile for multiple Operating Sy
```
go tool dist list
```
Using our `go build` commands so far is great and it will use the `GOOS` and `GOARCH` environment variables to determine the host machine and what the build should be built for. But we can also create other binaries by using the code below as an example.
```

View File

@ -7,7 +7,9 @@ cover_image: null
canonical_url: null
id: 1049033
---
## The Big Picture: DevOps and Linux
Linux and DevOps share very similar cultures and perspectives; both are focused on customization and scalability. Both of these aspects of Linux are of particular importance for DevOps.
A lot of technologies start on Linux, especially if they are related to software development or managing infrastructure.
@ -19,47 +21,35 @@ From a DevOps perspective or any operations role perspective, you are going to c
I have been using Linux daily for several years, but my go-to desktop machine has always been either macOS or Windows. However, when I moved into the Cloud Native role I am in now, I took the plunge and made my laptop fully Linux based and my daily driver. Whilst I still needed Windows for work-based applications, and a lot of my audio and video gear does not run on Linux, I forced myself to run a Linux desktop full time to get a better grasp of a lot of the things we are going to touch on over the next 7 days.
## Getting Started
I am not suggesting you do the same as me by any stretch as there are easier options which are less destructive but I will say that taking that full-time step forces you to learn faster how to make things work on Linux.
For the majority of these 7 days, I am going to deploy a Virtual Machine in Virtual Box on my Windows machine. I am also going to deploy a desktop version of a Linux distribution, whereas a lot of the Linux servers you will be administering will likely come with no GUI, where everything is shell-based. However, as I said at the start, a lot of the tools that we cover throughout this whole 90 days started on Linux, so I would also strongly encourage you to dive into running that Linux desktop for the learning experience as well.
For the rest of this post, we are going to concentrate on getting a Ubuntu Desktop virtual machine up and running in our Virtual Box environment. Now we could just download [Virtual Box](https://www.virtualbox.org/) and grab the latest [Ubuntu ISO](https://ubuntu.com/download) from the sites linked and go ahead and build out our desktop environment but that wouldn't be very DevOps of us, would it?
Another good reason to use most Linux distributions is that they are free and open-source. We are choosing Ubuntu as it is probably the most widely deployed distribution, leaving aside mobile devices and enterprise Red Hat Enterprise Linux servers. I might be wrong there, but with CentOS and its history I bet Ubuntu is high on the list, and it's super simple.
## Introducing HashiCorp Vagrant
Vagrant is a CLI utility that manages the lifecycle of your virtual machines. We can use vagrant to spin up and down virtual machines across many different platforms including vSphere, Hyper-v, Virtual Box and also Docker. It does have other providers but we will stick with Virtual Box here so we are good to go.
The first thing we need to do is get Vagrant installed on our machine. When you go to the [HashiCorp Vagrant](https://www.vagrantup.com/downloads) downloads page you will see all the operating systems listed for your choice. I am using Windows so I grabbed the binary for my system and went ahead and installed it.
Next up we also need to get [Virtual Box](https://www.virtualbox.org/wiki/Downloads) installed. Again, this can also be installed on many different operating systems and a good reason to choose this and vagrant is that if you are running Windows, macOS, or Linux then we have you covered here.
Both installations are pretty straightforward and both have great communities around them, so feel free to reach out if you have issues and I can try and assist too.
## Our first VAGRANTFILE
The VAGRANTFILE describes the type of machine we want to deploy. It also defines the configuration and provisioning for this machine.
When it comes to saving these and organizing your VAGRANTFILEs I tend to put them in their folders in my workspace. You can see below how this looks on my system. Hopefully following this you will play around with Vagrant and see the ease of spinning up different systems, it is also great for that rabbit hole known as distro hopping for Linux Desktops.
![](Images/Day14_Linux1.png)
Let's take a look at that VAGRANTFILE and see what we are building.
```
Vagrant.configure("2") do |config|
@ -82,10 +72,8 @@ end
This is a very simple VAGRANTFILE overall. We are saying that we want a specific "box", a box being possibly either a public image or private build of the system you are looking for. You can find a long list of "boxes" publicly available here in the [public catalogue of Vagrant boxes](https://app.vagrantup.com/boxes/search)
Next line we're saying that we want to use a specific provider and in this case it's `VirtualBox`. We also define our machine's memory to `8GB` and the number of CPUs to `4`. My experience tells me that you may want to also add the following line if you experience display issues. This will set the video memory to what you want, I would ramp this right up to `128MB` but it depends on your system.
```
v.customize ["modifyvm", :id, "--vram", ""]
@ -94,42 +82,30 @@ v.customize ["modifyvm", :id, "--vram", ""]
I have also placed a copy of this specific vagrant file in the [Linux Folder](Linux/VAGRANTFILE)
## Provisioning our Linux Desktop
We are now ready to get our first machine up and running from our workstation's terminal. In my case I am using PowerShell on my Windows machine. Navigate to your projects folder where you will find your VAGRANTFILE. Once there you can type the command `vagrant up` and if everything is alright you will see something like this.
![](Images/Day14_Linux2.png)
Another thing to add here is that the network will be set to `NAT` on your virtual machine. At this stage we don't need to know about NAT and I plan to have a whole session talking about it in the Networking session. Know that it is the easy button when it comes to getting a machine on your home network, it is also the default networking mode on Virtual Box. You can find out more in the [Virtual Box documentation](https://www.virtualbox.org/manual/ch06.html#network_nat)
Once `vagrant up` is complete we can now use `vagrant ssh` to jump straight into the terminal of our new VM.
![](Images/Day14_Linux3.png)
This is where we will do most of our exploring over the next few days but I also want to dive into some customizations for your developer workstation that I have done and it makes your life much simpler when running this as your daily driver, and of course, are you really in DevOps unless you have a cool nonstandard terminal?
But just to confirm in Virtual Box you should see the login prompt when you select your VM.
![](Images/Day14_Linux4.png)
Oh and if you made it this far and you have been asking "WHAT IS THE USERNAME & PASSWORD?"
- Username = vagrant
- Password = vagrant
Tomorrow we are going to get into some of the commands and what they do. The terminal is going to be the place to make everything happen.
## Resources

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048834
---
## Linux Commands for DevOps (Actually everyone)
I mentioned [yesterday](day14.md) that we are going to be spending a lot of time in the terminal with some commands to get stuff done.
@ -103,13 +104,17 @@ You are also able to use `history | grep "Command` to search for something speci
On servers, to trace back when a command was executed, it can be useful to append the date and time to each command in the history file.
The following system variable controls this behaviour:
```
HISTTIMEFORMAT="%d-%m-%Y %T "
```
You can easily add this to your bash_profile:
```
echo 'export HISTTIMEFORMAT="%d-%m-%Y %T "' >> ~/.bash_profile
```
It is also useful to allow the history file to grow bigger:
```
@ -142,7 +147,7 @@ A full list:
- 0 = None `---`
- 1 = Execute only `--X`
- 2 = Write only `-W-`
- 3 = Write & Execute `-WX`
- 4 = Read Only `R--`
- 5 = Read & Execute `R-X`
- 6 = Read & Write `RW-`
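This is a Linux topic rather than a Go one, but since we have just spent a week in Go, here is a small illustrative sketch (the filename is a placeholder) showing the same octal notation in code, which may help the numbers stick:

```
package main

import (
	"fmt"
	"os"
)

func main() {
	// 0754: owner 7 = 4+2+1 (read, write, execute), group 5 = 4+1 (read, execute),
	// everyone else 4 = read only. The leading 0 marks the number as octal.
	if err := os.Chmod("example.sh", 0754); err != nil {
		fmt.Println("could not change permissions:", err)
		return
	}
	fmt.Println("example.sh is now rwxr-xr--")
}
```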

View File

@ -7,9 +7,10 @@ cover_image: null
canonical_url: null
id: 1048702
---
## Managing your Linux System, Filesystem & Storage
So far we have had a brief overview of Linux and DevOps and then we got our lab environment set up using Vagrant [(Day 14)](day14.md), we then touched on a small portion of commands that will be in your daily toolkit when in the terminal and getting things done [(Day 15)](day15.md).
Here we are going to look into three key areas of looking after your Linux systems: updates, installing software and understanding what the system folders are used for, and we will also take a look at storage.

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048703
---
## Text Editors - nano vs vim
The majority of your Linux systems are going to be servers and these are not going to have a GUI. I also mentioned in the last session that Linux is mostly made up of configuration files, to make changes you are going to need to be able to edit those configuration files to change anything on the system.

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048733
---
## SSH & Web Server
As we have mentioned throughout you are going to most likely be managing lots of remote Linux servers, because of this, you will need to make sure that your connectivity to these remote servers is secure. In this section, we want to cover some of the basics of SSH that everyone should know that will help you with that secure tunnel to your remote systems.
@ -121,6 +122,7 @@ You might also see this referred to as a LAMP stack.
- **P**HP
### Apache2
Apache2 is an open-source HTTP server. We can install apache2 with the following command.
`sudo apt-get install apache2`
@ -132,9 +134,11 @@ Then using the bridged network address from the SSH walkthrough open a browser a
![](Images/Day18_Linux10.png)
### MySQL
MySQL is a database in which we will be storing our data for our simple website. To get MySQL installed we should use the following command `sudo apt-get install mysql-server`
### PHP
PHP is a server-side scripting language, we will use this to interact with a MySQL database. The final installation is to get PHP and dependencies installed using `sudo apt-get install php libapache2-mod-php php-mysql`
Out of the box, Apache uses index.html; the first configuration change we want to make is to have it use index.php instead.

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048774
---
## Automate tasks with bash scripts
The shell that we are going to use today is the bash but we will cover another shell tomorrow when we dive into ZSH.
@ -58,6 +59,7 @@ cd 90DaysOfDevOps
touch Day19
ls
```
You can then save this and exit your text editor, if we run our script with `./90DaysOfDevOps.sh` you should get a permission denied message. You can check the permissions of this file using the `ls -al` command and you can see highlighted we do not have executable rights on this file.
![](Images/Day19_Linux2.png)
@ -73,6 +75,7 @@ Now we can run our script again using `./90DaysOfDevOps.sh` after running the sc
Pretty basic stuff but you can start to see hopefully how this could be used to call on other tools as part of ways to make your life easier and automate things.
### Variables, Conditionals
A lot of this section is a repeat of what we covered when we were learning Golang but I think it's worth us diving in here again.
- ### Variables
@ -138,6 +141,7 @@ else
echo "You have entered the wrong amount of days"
fi
```
You can also see from the above that we are running some comparisons or checking values against each other to move on to the next stage. We have different options here worth noting.
- `eq` - if the two values are equal will return TRUE
@ -181,6 +185,7 @@ I found this amazing repository on GitHub that has what seems to be an endless a
**Scenario**: We have our company called "90DaysOfDevOps" and we have been running a while and now it is time to expand the team from 1 person to lots more over the coming weeks, I am the only one so far that knows the onboarding process so we want to reduce that bottleneck by automating some of these tasks.
**Requirements**:
- A user can be passed in as a command line argument.
- A user is created with the name of the command line argument.
- A password can be passed as a command line argument.

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048734
---
## Dev workstation setup - All the pretty things
Not to be confused with us setting Linux servers up this way but I wanted to also show off the choice and flexibility that we have within the Linux desktop.
@ -28,6 +29,7 @@ We can also see our default bash shell below,
A lot of this comes down to dotfiles something we will cover in this final Linux session of the series.
### dotfiles
First up I want to dig into dotfiles, I have said on a previous day that Linux is made up of configuration files. These dotfiles are configuration files for your Linux system and applications.
I will also add that dotfiles are not just used to customise and make your desktop look pretty, there are also dotfile changes and configurations that will help you with productivity.
@ -45,6 +47,7 @@ We are going to be changing our shell, so we will later be seeing a new `.zshrc`
But now you know if we refer to dotfiles you know they are configuration files. We can use them to add aliases to our command prompt as well as paths to different locations. Some people publish their dotfiles so they are publicly available. You will find mine here on my GitHub [MichaelCade/dotfiles](https://github.com/MichaelCade/dotfiles) here you will find my custom `.zshrc` file, my terminal of choice is terminator which also has some configuration files in the folder and then also some background options.
### ZSH
As I mentioned throughout our interactions so far we have been using a bash shell the default shell with Ubuntu. ZSH is very similar but it does have some benefits over bash.
Zsh has features like interactive Tab completion, automated file searching, regex integration, advanced shorthand for defining command scope, and a rich theme engine.
@ -87,17 +90,17 @@ When you have run the above command you should see some output like the below.
![](Images/Day20_Linux7.png)
Now we can move on to start putting a theme in for our experience, there are well over 100 bundled with Oh My ZSH but my go-to for all of my applications and everything is the Dracula theme.
I also want to add that these two plugins are a must when using Oh My ZSH.
`git clone https://github.com/zsh-users/zsh-autosuggestions.git $ZSH_CUSTOM/plugins/zsh-autosuggestions`
`git clone https://github.com/zsh-users/zsh-syntax-highlighting.git $ZSH_CUSTOM/plugins/zsh-syntax-highlighting`
`nano ~/.zshrc`
edit the plugins to now include `plugins=(git zsh-autosuggestions zsh-syntax-highlighting)`
## Gnome Extensions

View File

@ -7,8 +7,11 @@ cover_image: null
canonical_url: null
id: 1048761
---
## The Big Picture: DevOps and Networking
As with all sections, I am using open and free training materials and a lot of the content can be attributed to others. In the case of the networking section a large majority of the content shown is from [Practical Networking](https://www.practicalnetworking.net/)'s free [Networking Fundamentals series](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi). It is mentioned in the resources as well as a link but it's appropriate to highlight this as from a community point of view, I have leveraged this course to help myself understand more about particular areas of technologies. This repository is a repository for my note taking and enabling the community to hopefully benefit from this and the listed resources.
Welcome to Day 21! We are going to be getting into Networking over the next 7 days, Networking and DevOps are the overarching themes but we will need to get into some of the networking fundamentals as well.
Ultimately, as we have said previously, DevOps is about a culture and process change within your organisation. As we have discussed, this can apply to Virtual Machines, Containers, or Kubernetes, but it can also apply to the network. If we are using those DevOps principles for our infrastructure, that has to include the network; more to the point, from a DevOps point of view you also need to know about the network, as in the different topologies, networking tools and stacks that we have available.
@ -23,7 +26,7 @@ But if you are not a network engineer then we probably need to get foundational
But in regards to those terms, we can think of NetDevOps or Network DevOps as applying the DevOps Principles and Practices to the network, applying version control and automation tools to the network creation, testing, monitoring, and deployments.
If we think of Network DevOps as having to require automation, we mentioned before about DevOps breaking down the silos between teams. If the networking teams do not change to a similar model and process then they become the bottleneck or even the failure overall.
Using the automation principles around provisioning, configuration, testing, version control and deployment is a great start. Overall, automation will enable speed of deployment, stability of the networking infrastructure and consistent improvement, as well as allowing the process to be shared across multiple environments once it has been tested. For example, a Network Policy that has been fully tested in one environment can be used quickly in another location, because it lives in code rather than in a manually authored process, which it might have been before.
A really good viewpoint and outline of this thinking can be found here. [Network DevOps](https://www.thousandeyes.com/learning/techtorials/network-devops)
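As a tiny illustration of that "network as code" idea, here is a minimal sketch of putting an exported device configuration under version control; the file names and paths are made up for the example.

```
# a sketch: track exported device configurations in git (paths are illustrative)
git init network-configs
cd network-configs
cp ~/backups/R1-running-config.txt R1.cfg   # an exported router config
git add R1.cfg
git commit -m "Baseline configuration for R1"
```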
@ -34,6 +37,12 @@ Let's forget the DevOps side of things to begin with here and we now need to loo
### Network Devices
If you prefer this content in video form, check out these videos from Practical Networking:
* [Network Devices - Hosts, IP Addresses, Networks - Networking Fundamentals - Lesson 1a](https://www.youtube.com/watch?v=bj-Yfakjllc&list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi&index=1)
* [Network Devices - Hub, Bridge, Switch, Router - Networking Fundamentals - Lesson 1b](https://www.youtube.com/watch?v=H7-NR3Q3BeI&list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi&index=2)
**Hosts** are any devices which send or receive traffic.
![](Images/Day21_Networking1.png)
@ -48,7 +57,7 @@ A logical group of hosts which require similar connectivity.
![](Images/Day21_Networking3.png)
**Switches** facilitate communication ***within*** a network. A switch forwards data packets between hosts. A switch sends packets directly to hosts.
**Switches** facilitate communication **_within_** a network. A switch forwards data packets between hosts. A switch sends packets directly to hosts.
- Network: A Grouping of hosts which require similar connectivity.
- Hosts on a Network share the same IP address space.
@ -101,6 +110,7 @@ Over the next few days, we are going to get to know a little more about this lis
## Resources
[Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
* [Networking Fundamentals](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi)
* [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
See you on [Day22](day22.md)

View File

@ -7,6 +7,12 @@ cover_image: null
canonical_url: null
id: 1049037
---
The content below comes mostly from Practical Networking's [Networking Fundamentals series](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi). If you prefer this content in video form, check out these two videos:
* [The OSI Model: A Practical Perspective - Layers 1 / 2 / 3](https://www.youtube.com/watch?v=LkolbURrtTs&list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi&index=3)
* [The OSI Model: A Practical Perspective - Layers 4 / 5+](https://www.youtube.com/watch?v=0aGqGKrRE0g&list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi&index=4)
## The OSI Model - The 7 Layers
The overall purpose of networking as an industry is to allow two hosts to share data. Before networking, if I wanted to get data from one host to another, I'd have to plug something into the first host, walk it over to the other host, and plug it in there.
@ -24,11 +30,13 @@ The OSI Model (Open Systems Interconnection Model) is a framework used to descri
![](Images/Day22_Networking1.png)
### Physical
Layer 1 in the OSI model is known as the physical layer: the premise is being able to get data from one host to another through some means, be it a physical cable, or we could also consider Wi-Fi in this layer as well. We might also see some more legacy hardware here, such as hubs and repeaters, used to transport the data from one host to another.
![](Images/Day22_Networking2.png)
### Data Link
Layer 2, the data link layer, enables node-to-node transfer where data is packaged into frames. There is also a level of error correction for errors that might have occurred at the physical layer. This is also where we introduce or first see MAC addresses.
This is where we see the first mention of switches that we covered on our first day of networking on [Day 21](day21.md)
@ -36,6 +44,7 @@ This is where we see the first mention of switches that we covered on our first
![](Images/Day22_Networking3.png)
### Network
You have likely heard the terms layer 3 switch or layer 2 switch. In our OSI model, Layer 3, the network layer, has the goal of end-to-end delivery; this is where we see the IP addresses that were also mentioned in the first-day overview.
Routers and hosts exist at Layer 3; remember, the router is what gives us the ability to route between multiple networks. Anything with an IP could be considered Layer 3.
@ -55,11 +64,13 @@ MAC Addresses - Layer 2 = Hop to Hop Delivery
Now, there is a network protocol that we will get into (but not today) called ARP (Address Resolution Protocol), which links our Layer 3 and Layer 2 addresses.
### Transport
Service to Service delivery, Layer 4 is there to distinguish data streams. In the same way that Layer 3 and Layer 2 both had their addressing schemes, in Layer 4 we have ports.
![](Images/Day22_Networking5.png)
### Session, Presentation, Application
The distinction between Layers 5, 6 and 7 is, or has become, somewhat vague.
It is worth looking at the [TCP IP Model](https://www.geeksforgeeks.org/tcp-ip-model/) to get a more recent understanding.
@ -92,7 +103,7 @@ The Application sending the data is being sent somewhere so the receiving is som
## Resources
* [Networking Fundamentals](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi)
- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
- [Practical Networking](http://www.practicalnetworking.net/)
See you on [Day23](day23.md)

View File

@ -7,6 +7,12 @@ cover_image: null
canonical_url: null
id: 1048704
---
The content below comes mostly from Practical Networking's [Networking Fundamentals series](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi). If you prefer this content in video form, check out this video:
* [Network Protocols - ARP, FTP, SMTP, HTTP, SSL, TLS, HTTPS, DNS, DHCP](https://www.youtube.com/watch?v=E5bSumTAHZE&list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi&index=12)
## Network Protocols
A set of rules and messages that form a standard. An Internet Standard.
@ -39,7 +45,7 @@ HTTP is the foundation of the internet and browsing content. Giving us the abili
- SSL - Secure Sockets Layer | TLS - Transport Layer Security
TLS has taken over from SSL, TLS is a [Cryptographic Protocol]() that provides secure communications over a network. It can and will be found in the mail, Instant Messaging and other applications but most commonly it is used to secure HTTPS.
TLS has taken over from SSL, TLS is a **Cryptographic Protocol** that provides secure communications over a network. It can and will be found in the mail, Instant Messaging and other applications but most commonly it is used to secure HTTPS.
![](Images/Day23_Networking5.png)
@ -80,7 +86,7 @@ Then we have DNS as we just covered to help us convert complicated public IP add
As I said, each host requires these 4 things; if you have 1,000 or 10,000 hosts then it is going to take you a very long time to set each one of these individually. This is where DHCP comes in: it allows you to define a scope for your network, and the protocol will then hand out these settings to all available hosts in your network.
Another example is you head into a coffee shop, grab a coffee and sit down with your laptop or your phone let's call that your host. You connect your host to the coffee shop wifi and you gain access to the internet, messages and mail start pinging through and you can navigate web pages and social media. When you connected to the coffee shop wifi your machine would have picked up a DHCP address either from a dedicated DHCP server or most likely from the router also handling DHCP.
Another example is you head into a coffee shop, grab a coffee and sit down with your laptop or your phone let's call that your host. You connect your host to the coffee shop WiFi and you gain access to the internet, messages and mail start pinging through and you can navigate web pages and social media. When you connected to the coffee shop WiFi your machine would have picked up a DHCP address either from a dedicated DHCP server or most likely from the router also handling DHCP.
![](Images/Day23_Networking8.png)
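If you want to see what DHCP actually handed to your host, on a Linux machine something like the sketch below shows the leased address, the gateway and the DNS servers; the output will obviously differ per machine and network.

```
# a sketch: inspecting what DHCP has given a Linux host
ip addr show          # the leased IP address and subnet
ip route              # the default gateway (the "default via ..." line)
cat /etc/resolv.conf  # the DNS server(s) handed out by DHCP
```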
@ -110,7 +116,8 @@ If a section of a network is compromised, it can be quarantined, making it diffi
## Resources
- [Networking Fundamentals](https://www.youtube.com/playlist?list=PLIFyRwBY_4bRLmKfP1KnZA6rZbRHtxmXi)
- [Subnetting Mastery](https://www.youtube.com/playlist?list=PLIFyRwBY_4bQUE4IB5c4VPRyDoLgOdExE)
- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
- [Practical Networking](http://www.practicalnetworking.net/)
See you on [Day 24](day24.md)

View File

@ -7,11 +7,13 @@ cover_image: null
canonical_url: null
id: 1048805
---
## Network Automation
### Basics of network automation
Primary drivers for Network Automation
- Achieve Agility
- Reduce Cost
- Eliminate Errors

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1049038
---
## Python for Network Automation
Python is the standard language used for automated network operations.

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048762
---
## Building our Lab
We are going to continue our setup of our emulated network using EVE-NG and then hopefully get some devices deployed and start thinking about how we can automate the configuration of these devices. On [Day 25](day25.md) we covered the installation of EVE-NG onto our machine using VMware Workstation.
@ -44,12 +45,12 @@ For our lab, we need Cisco vIOS L2 (switches) and Cisco vIOS (router)
Inside the EVE-NG web interface, we are going to create our new network topology. We will have four switches and one router that will act as our gateway to outside networks.
| Node | IP Address |
| ----------- | ----------- |
| Router | 10.10.88.110|
| Switch1 | 10.10.88.111|
| Switch2 | 10.10.88.112|
| Switch3 | 10.10.88.113|
| Switch4 | 10.10.88.114|
| ------- | ------------ |
| Router | 10.10.88.110 |
| Switch1 | 10.10.88.111 |
| Switch2 | 10.10.88.112 |
| Switch3 | 10.10.88.113 |
| Switch4 | 10.10.88.114 |
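Once the devices have been configured with those addresses (we will get to that below), a quick reachability check from the EVE-NG host might look something like this sketch; adjust the addresses if your lab uses a different range.

```
# a sketch: check that each lab node answers on its management IP
for ip in 10.10.88.110 10.10.88.111 10.10.88.112 10.10.88.113 10.10.88.114; do
  ping -c 1 -W 1 "$ip" > /dev/null && echo "$ip is up" || echo "$ip is not responding"
done
```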
#### Adding our Nodes to EVE-NG
@ -97,7 +98,7 @@ Once we have our lab up and running you will be able to console into each device
I will leave my configuration in the Networking folder of the repository for reference.
| Node | Configuration |
| ----------- | ----------- |
| ------- | --------------------- |
| Router | [R1](Networking/R1) |
| Switch1 | [SW1](Networking/SW1) |
| Switch2 | [SW2](Networking/SW2) |

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048735
---
## Getting Hands-On with Python & Network
In this final section of Networking fundamentals, we are going to cover some automation tasks and tools with our lab environment created on [Day 26](day26.md)
@ -41,12 +42,12 @@ sh ip int br
The final step gives us the DHCP address from our home network. My device network list is as follows:
| Node | IP Address | Home Network IP |
| ----------- | ----------- | ----------- |
| Router | 10.10.88.110| 192.168.169.115 |
| Switch1 | 10.10.88.111| 192.168.169.178 |
| Switch2 | 10.10.88.112| 192.168.169.193 |
| Switch3 | 10.10.88.113| 192.168.169.125 |
| Switch4 | 10.10.88.114| 192.168.169.197 |
| ------- | ------------ | --------------- |
| Router | 10.10.88.110 | 192.168.169.115 |
| Switch1 | 10.10.88.111 | 192.168.169.178 |
| Switch2 | 10.10.88.112 | 192.168.169.193 |
| Switch3 | 10.10.88.113 | 192.168.169.125 |
| Switch4 | 10.10.88.114 | 192.168.169.197 |
### SSH to a network device
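As a rough sketch of what this looks like, assuming SSH has been enabled on the devices and a local user exists (the username below is an assumption), we can connect using the addresses from the table above.

```
# a sketch: SSH to Switch1 using the addresses from the table above
ssh admin@192.168.169.178   # via the home network address handed out by DHCP
ssh admin@10.10.88.111      # or via the lab management address
```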

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048737
---
## The Big Picture: DevOps & The Cloud
When it comes to cloud computing and what is offered, it goes very nicely with the DevOps ethos and processes. We can think of Cloud Computing as bringing the technology and services whilst DevOps as we have mentioned many times before is about the process and process improvement.
@ -46,7 +47,7 @@ Next up we have the public cloud, most people would think of this in a few diffe
Some will also see the public cloud as a much wider offering that includes those hyper scalers but also the thousands of MSPs all over the world as well. For this post, we are going to consider Public Cloud including hyper scalers and MSPs, although later on, we will specifically dive into one or more of the hyper scalers to get that foundational knowledge.
![](Images/Day28_Cloud5.png)
*thousands more companies could land on this, I am merely picking from local, regional, telco and global brands I have worked with and am aware of.*
_thousands more companies could land on this, I am merely picking from local, regional, telco and global brands I have worked with and am aware of._
We mentioned in the SaaS section that Cloud removed the responsibility or the burden of having to administer parts of a system. If SaaS we see a lot of the abstraction layers removed i.e the physical systems, network, storage, operating system, and even application to some degree. When it comes to the cloud there are various levels of abstraction we can remove or keep depending on your requirements.

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048705
---
## Microsoft Azure Fundamentals
Before we get going, the winner of the Twitter poll was Microsoft Azure, hence the title of the page. It was close and also quite interesting to see the results come in over the 24 hours.
@ -37,7 +38,7 @@ The best way to get started and follow along is by clicking the link, which will
I linked the interactive map above, but in the image below we can see the breadth of regions being offered in the Microsoft Azure platform worldwide.
![](Images/Day29_Cloud2.png)
*image taken from [Microsoft Docs - 01/05/2021](https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network)*
_image taken from [Microsoft Docs - 01/05/2021](https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network)_
You will also see several "sovereign" clouds meaning they are not linked or able to speak to the other regions, for example, these would be associated with governments such as the `AzureUSGovernment` also `AzureChinaCloud` and others.
@ -77,13 +78,15 @@ Subscriptions belong to these management groups so you could have many subscript
### Resource Manager and Resource Groups
**Azure Resource Manager**
#### Azure Resource Manager
- JSON based API that is built on resource providers.
- Resources belong to a resource group and share a common life cycle.
- Parallelism
- JSON-Based deployments are declarative, idempotent and understand dependencies between resources to govern creation and order.
**Resource Groups**
#### Resource Groups
- Every Azure Resource Manager resource exists in one and only one resource group!
- Resource groups are created in a region that can contain resources from outside the region.
- Resources can be moved between resource groups

View File

@ -7,12 +7,11 @@ cover_image: null
canonical_url: null
id: 1049039
---
## Microsoft Azure Security Models
Following on from the Microsoft Azure Overview, we are going to start with Azure Security and see where this can help in our day to day. For the most part, I have found the built-in roles to be sufficient, but it is worth knowing that we can create and work with many different areas of authentication and configuration. I have found Microsoft Azure to be quite advanced with its Active Directory background compared to other public clouds.
## Microsoft Azure Security Models
This is one area in which Microsoft Azure seemingly works differently from other public cloud providers; in Azure, there is ALWAYS Azure AD.
### Directory Services

View File

@ -7,13 +7,14 @@ cover_image: null
canonical_url: null
id: 1049040
---
## Microsoft Azure Compute Models
Following on from covering the basics around security models within Microsoft Azure yesterday today we are going to look into the various compute services available to us in Azure.
### Service Availability Options
This section is close to my heart given my role within Data Management. As with on-premises, it is critical to ensure the availability of your services.
This section is close to my heart given my role in Data Management. As with on-premises, it is critical to ensure the availability of your services.
- High Availability (Protection within a region)
- Disaster Recovery (Protection between regions)
@ -32,7 +33,7 @@ Availability Zones - Provide resiliency between data centres within a region.
Most likely the starting point for anyone in the public cloud.
- Provides a VM from a variety of series and sizes with different capabilities (sometimes an overwhelming choice) [Sizes for Virtual machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes)
- There are many different options and focuses for VMs from high performance, low latency to high memory option VMs.
- There are many different options and focuses for VMs, from high performance and low latency to high-memory option VMs.
- We also have a burstable VM type which can be found under the B-Series. This is great for workloads where you have a low CPU requirement for the most part but maybe require a performance spike once a month.
- Virtual Machines are placed on a virtual network that can provide connectivity to any network.
- Windows and Linux guest OS support.
@ -50,21 +51,21 @@ There is a large selection of templates that can export deployed resource defini
### Scaling
Automatic scaling is a large feature of the Public Cloud, being able to spin down resources you are not using or spinning up when you need them.
Automatic scaling is a large feature of the Public Cloud, being able to spin down resources you are not using or spin up when you need them.
In Azure, we have something called Virtual Machine Scale Sets (VMSS) for IaaS. This enables the automatic creation and scale from a gold standard image based on schedules and metrics.
This is ideal for updating Windows so that you can update your images and roll those out with the least impact.
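As a rough idea of what standing up a scale set looks like from the Azure CLI, here is a sketch; the resource group, name and image are all illustrative values rather than anything from the labs.

```
# a sketch: create a small VM Scale Set with the Azure CLI (names are illustrative)
az vmss create \
  --resource-group 90DaysOfDevOps \
  --name vmss-90days \
  --image UbuntuLTS \
  --instance-count 2 \
  --admin-username azureuser \
  --generate-ssh-keys
```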
Other services such as Azure App Services have auto-scaling built-in.
Other services such as Azure App Services have auto-scaling built in.
### Containers
We have not covered containers as a use case and what and how they can and should be needed in our DevOps learning journey but we need to mention that Azure have some specific container focused services to mention.
We have not covered containers as a use case, or what, how and where they can and should be used in our DevOps learning journey, but we do need to mention that Azure has some specific container-focused services.
Azure Kubernetes Service (AKS) - Provides a managed Kubernetes solution, no need to worry about the control plane or management of the underpinning cluster management. More on Kubernetes also later on.
Azure Container Instances - Containers as a service with Per-Second Billing. Run an image and integrate with your virtual network, no need for Container Orchestration.
Azure Container Instances - Containers as a service with Per-Second Billing. Run an image and integrate it with your virtual network, no need for Container Orchestration.
Service Fabric - Has many capabilities but includes orchestration for container instances.
@ -72,13 +73,13 @@ Azure also has the Container Registry which provides a private registry for Dock
We should also mention that a lot of the container services may indeed also leverage containers under the hood but this is abstracted away from your requirement to manage.
These mentioned container focused services we also find similar services in all other public clouds.
For these container-focused services, we also find similar services in all the other public clouds.
### Application Services
- Azure Application Services provides an application hosting solution that provides an easy method to establish services.
- Automatic Deployment and Scaling.
- Supports Windows & Linux based solutions.
- Supports Windows & Linux-based solutions.
- Services run in an App Service Plan which has a type and size.
- Number of different services including web apps, API apps and mobile apps.
- Support for Deployment slots for reliable testing and promotion.
@ -89,7 +90,7 @@ Serverless for me is an exciting next step that I am extremely interested in lea
The goal with serverless is that we only pay for the runtime of the function and do not have to have running virtual machines or PaaS applications running all the time. We simply run our function when we need it and then it goes away.
Azure Functions - Provides serverless code. If we remember back to our first look into the public cloud you will remember the abstraction layer of management, with serverless functions you are only going to be managing the code.
Azure Functions - Provides serverless code. If we remember back to our first look into the public cloud we will remember the abstraction layer of management, with serverless functions you are only going to be managing the code.
Event-Driven with massive scale, I have a plan to build something when I get some hands-on here hopefully later on.
@ -111,4 +112,3 @@ We can also look at Azure Batch which can run large-scale jobs on both Windows a
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)
See you on [Day 32](day32.md)

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048775
---
## Microsoft Azure Storage Models
### Storage Services
@ -21,11 +22,11 @@ We can create our storage group by simply searching for Storage Group in the sea
![](Images/Day32_Cloud1.png)
We can then run through the steps to create our storage account remembering that this name needs to be unique and it also needs to be all lower case, no spaces but can include numbers.
We can then run through the steps to create our storage account remembering that this name needs to be unique and it also needs to be all lower case, with no spaces but can include numbers.
![](Images/Day32_Cloud2.png)
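The same thing can be done from the Azure CLI if you prefer; a sketch is below, and the account name is made up for illustration (remember it has to be globally unique, all lower case and between 3 and 24 characters).

```
# a sketch: create a storage account with the Azure CLI (names/locations are illustrative)
az storage account create \
  --name 90daysofdevopsstore \
  --resource-group 90DaysOfDevOps \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2
```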
We can also choose the level of redundancy we would like against our storage account and anything we store within here. The further down the list the more expensive option but also the spread of your data.
We can also choose the level of redundancy we would like against our storage account and anything we store here. The further down the list the more expensive option but also the spread of your data.
Even the default redundancy option gives us 3 copies of our data.
@ -34,11 +35,8 @@ Even the default redundancy option gives us 3 copies of our data.
Summary of the above link down below:
- **Locally-redundant storage** - replicates your data three times within a single data centre in the primary region.
- **Geo-redundant storage** - copies your data synchronously three times within a single physical location in the primary region using LRS.
- **Zone-redundant storage** - replicates your Azure Storage data synchronously across three Azure availability zones in the primary region.
- **Geo-zone-redundant storage** - combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three Azure availability zones in the primary region and is also replicated to a second geographic region for protection from regional disasters.
![](Images/Day32_Cloud3.png)
@ -58,17 +56,20 @@ There are lots more advanced options available for your storage account but for
Storage access can be achieved in a few different ways.
Authenticated access via:
- A shared key for full control.
- Shared Access Signature for delegated, granular access.
- Azure Active Directory (Where Available)
Public Access:
- Public access can also be granted to enable anonymous access including via HTTP.
- An example of this could be to host basic content and files in a block blob so a browser can view and download this data.
If you are accessing your storage from another Azure service, traffic stays within Azure.
When it comes to storage performance we have two different types:
- **Standard** - Maximum number of IOPS
- **Premium** - Guaranteed number of IOPS
@ -80,10 +81,10 @@ There is also a difference between unmanaged and managed disks to consider when
- Virtual Machine OS disks are typically stored on persistent storage.
- Some stateless workloads do not require persistent storage and reduced latency is a larger benefit.
- There are VMs that support ephemeral OS managed disks that are created on the node-local storage.
- There are VMs that support ephemeral OS disks, which are created on the node-local storage.
- These can also be used with VM Scale Sets.
Managed Disks are durable block storage that can be used with Azure Virtual Machines. You can have Ultra Disk Storage, Premium SSD, Standard SSD, Standard HDD. They also carry some characteristics.
Managed Disks are durable block storage that can be used with Azure Virtual Machines. You can have Ultra Disk Storage, Premium SSD, Standard SSD, or Standard HDD. They also carry some characteristics.
- Snapshot and Image support
- Simple movement between SKUs
@ -92,7 +93,7 @@ Managed Disks are durable block storage that can be used with Azure Virtual Mach
## Archive Storage
- **Cool Tier** - A cool tier of storage is available to block and append BLOBs.
- **Cool Tier** - A cool tier of storage is available to block and append blobs.
- Lower Storage cost
- Higher transaction cost.
- **Archive Tier** - Archive storage is available for block BLOBs.
@ -131,7 +132,7 @@ Back on [Day 28](day28.md), we covered various service options. One of these was
Azure SQL Database provides a relational database as a service based on Microsoft SQL Server.
This is SQL running the latest SQL branch with database compatibility level available where specific functionality version is required.
This is SQL running the latest SQL branch with database compatibility level available where a specific functionality version is required.
There are a few options on how this can be configured, we can provide a single database that provides one database in the instance, while an elastic pool enables multiple databases that share a pool of capacity and collectively scale.
@ -163,7 +164,7 @@ Various consistency models are available based around [CAP theorem](https://en.w
### Caching
Without getting into the weeds about caching systems such as Redis I wanted to include that Microsoft Azure have their service called Azure Cache for Redis.
Without getting into the weeds about caching systems such as Redis I wanted to include that Microsoft Azure has a service called Azure Cache for Redis.
Azure Cache for Redis provides an in-memory data store based on the Redis software.

View File

@ -2,11 +2,12 @@
title: '#90DaysOfDevOps - Microsoft Azure Networking Models + Azure Management - Day 33'
published: false
description: 90DaysOfDevOps - Microsoft Azure Networking Models + Azure Management
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048706
---
## Microsoft Azure Networking Models + Azure Management
As it happens, today marks the anniversary of Microsoft Azure and its 12th Birthday! (1st February 2022) Anyway, we are going to cover the networking models within Microsoft Azure and some of the management options for Azure. So far we have only used the Azure portal but we have mentioned other areas that can be used to drive and create our resources within the platform.
@ -31,7 +32,7 @@ We can liken Azure Virtual Networks to AWS VPCs. However, there are some differe
- In AWS a default VNet is created that is not the case in Microsoft Azure, you have to create your first virtual network to your requirements.
- All Virtual Machines by default in Azure have NAT access to the internet. No NAT Gateways as per AWS.
- In Microsoft Azure, there is no concept of Private or Public subnets.
- Public IPs is a resource that can be assigned to vNICs or Load Balancers.
- Public IPs are a resource that can be assigned to vNICs or Load Balancers.
- The Virtual Network and Subnets have their own ACLs enabling subnet level delegation.
- Subnets span Availability Zones, whereas in AWS you have subnets per Availability Zone.
@ -40,9 +41,9 @@ We also have Virtual Network Peering. This enables virtual networks across tenan
### Access Control
- Azure utilises Network Security Groups, these are stateful.
- Enable rules to be created then assigned to a network security group
- Enable rules to be created and then assigned to a network security group
- Network security groups applied to subnets or VMs.
- When applied to a subnet it is still enforced at the Virtual Machine NIC it is not an "Edge" device.
- When applied to a subnet it is still enforced at the Virtual Machine NIC; it is not an "Edge" device.
![](Images/Day33_Cloud1.png)
@ -52,10 +53,10 @@ We also have Virtual Network Peering. This enables virtual networks across tenan
- Most logic is built by IP Addresses but some tags and labels can also be used.
| Description | Priority | Source Address | Source Port | Destination Address | Destination Port | Action |
| ----------- | ---------| -------------- | ----------- | ------------------- | ---------------- | ------ |
| Inbound 443 | 1005 | * | * | * | 443 | Allow |
| ILB | 1010 | Azure LoadBalancer | * | * | 10000 | Allow |
| Deny All Inbound | 4000 | * | * | * | * | DENY |
| ---------------- | -------- | ------------------ | ----------- | ------------------- | ---------------- | ------ |
| Inbound 443 | 1005 | \* | \* | \* | 443 | Allow |
| ILB | 1010 | Azure LoadBalancer | \* | \* | 10000 | Allow |
| Deny All Inbound | 4000 | \* | \* | \* | \* | DENY |
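If you wanted to create something like that first rule from the command line rather than the portal, a sketch with the Azure CLI might look like this; the resource group and NSG names are made up for illustration.

```
# a sketch: the "Inbound 443" rule from the table above, via the Azure CLI
az network nsg rule create \
  --resource-group 90DaysOfDevOps \
  --nsg-name nsg-90days \
  --name Inbound443 \
  --priority 1005 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443
```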
We also have Application Security Groups (ASGs)
@ -65,8 +66,8 @@ We also have Application Security Groups (ASGs)
The ASGs can then be used in rules that are part of Network Security Groups to control the flow of communication and can still use NSG features like service tags.
| Action| Name | Source | Destination | Port |
| ------| ------------------ | ---------- | ----------- | ------------ |
| Action | Name | Source | Destination | Port |
| ------ | ------------------ | ---------- | ----------- | ------------ |
| Allow | AllowInternettoWeb | Internet | WebServers | 443(HTTPS) |
| Allow | AllowWebToApp | WebServers | AppServers | 443(HTTPS) |
| Allow | AllowAppToDB | AppServers | DbServers | 1443 (MSSQL) |
@ -83,11 +84,11 @@ Also with the App Gateway, you can optionally use the Web Application firewall c
## Azure Management Tools
We have spent most of our theory time walking through the Azure Portal, I would suggest that when it comes to following a DevOps culture and process a lot of these tasks especially around provisioning will be done via an API or a command-line tool. I wanted to touch on some of those other management tools that we have available to us as we need to know this for when we are automating the provisioning of our Azure environments.
We have spent most of our theory time walking through the Azure Portal, I would suggest that when it comes to following a DevOps culture and process a lot of these tasks, especially around provisioning will be done via an API or a command-line tool. I wanted to touch on some of those other management tools that we have available to us as we need to know this for when we are automating the provisioning of our Azure environments.
### Azure Portal
The Microsoft Azure Portal is a web-based console, that provides an alternative to command-line tools. You can manage your subscriptions within the Azure Portal. Build, Manage, Monitor everything from a simple web app to complex cloud deployments. Another thing you will find within the portal are these breadcrumbs, JSON as mentioned before is the underpinning of all Azure Resources, It might be that you start in the Portal to understand the features, services and functionality but then later understand the JSON underneath to incorporate into your automated workflows.
The Microsoft Azure Portal is a web-based console, that provides an alternative to command-line tools. You can manage your subscriptions within the Azure Portal. Build, Manage, and Monitor everything from a simple web app to complex cloud deployments. Another thing you will find within the portal are these breadcrumbs, JSON as mentioned before is the underpinning of all Azure Resources, It might be that you start in the Portal to understand the features, services and functionality but then later understand the JSON underneath to incorporate into your automated workflows.
![](Images/Day33_Cloud2.png)
@ -101,7 +102,7 @@ Before we get into Azure PowerShell it is worth introducing PowerShell first. Po
Azure PowerShell is a set of cmdlets for managing Azure resources directly from the PowerShell command line.
We can see from below that you can connect to your subscription using the PowerShell command `Connect-AzAccount`
We can see below that you can connect to your subscription using the PowerShell command `Connect-AzAccount`
![](Images/Day33_Cloud4.png)
@ -117,7 +118,7 @@ Like many, and as you have all seen my go-to IDE is Visual Studio Code.
Visual Studio Code is a free source-code editor made by Microsoft for Windows, Linux and macOS.
You will see from below that there are lots of integrations and tools built into Visual Studio Code that you can use to interact with Microsoft Azure and the services within.
You will see below that there are lots of integrations and tools built into Visual Studio Code that you can use to interact with Microsoft Azure and the services within.
![](Images/Day33_Cloud6.png)
@ -137,13 +138,13 @@ When you select to use the cloud shell it is spinning up a machine, these machin
![](Images/Day33_Cloud9.png)
- Cloud Shell runs on a temporary host provided on a per-session, per-user basis
- Cloud Shell times out after 20 minutes without interactive activity
- Cloud Shell requires an Azure file share to be mounted
- Cloud Shell uses the same Azure file share for both Bash and PowerShell
- Cloud Shell is assigned one machine per user account
- Cloud Shell persists $HOME using a 5-GB image held in your file share
- Permissions are set as a regular Linux user in Bash
The above was copied from [Cloud Shell Overview](https://docs.microsoft.com/en-us/azure/cloud-shell/overview)
@ -168,7 +169,7 @@ The takeaway here as we already mentioned is about choosing the right tool. Azur
Azure CLI
- Cross-platform command-line interface, installable on Windows, macOS, Linux
- Runs in Windows PowerShell, Cmd, or Bash and other Unix shells.
- Runs in Windows PowerShell, Cmd, Bash and other Unix shells.
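To give a feel for it, a few everyday Azure CLI commands might look like the sketch below.

```
# a sketch: everyday Azure CLI usage
az login                        # opens a browser to authenticate
az account show --output table  # confirm which subscription you are working against
az group list --output table    # list the resource groups in that subscription
```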
Azure PowerShell
@ -177,7 +178,7 @@ Azure PowerShell
If there is a reason you cannot use PowerShell in your environment but you can use Cmd or Bash then the Azure CLI is going to be your choice.
Next up we take all the theory we have been through and create some scenarios and get hands-on in Azure.
Next up we take all the theories we have been through and create some scenarios and get hands-on in Azure.
## Resources

View File

@ -2,11 +2,12 @@
title: '#90DaysOfDevOps - Microsoft Azure Hands-On Scenarios - Day 34'
published: false
description: 90DaysOfDevOps - Microsoft Azure Hands-On Scenarios
tags: "devops, 90daysofdevops, learning"
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048763
---
## Microsoft Azure Hands-On Scenarios
The last 6 days have been focused on Microsoft Azure and the public cloud in general, a lot of this foundation had to contain a lot of theory to understand the building blocks of Azure but also this will nicely translate to the other major cloud providers as well.
@ -20,34 +21,35 @@ There are some here such as Containers and Kubernetes that we have not covered i
In previous posts, we have created most of Modules 1,2 and 3.
### Virtual Networking
Following [Module 04](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_04-Implement_Virtual_Networking.html):
I went through the above and changed a few namings for the purpose of #90DaysOfDevOps. I also instead of using the Cloud Shell went ahead and logged in with my new user created on previous days with the Azure CLI on my Windows machine.
I went through the above and changed a few namings for #90DaysOfDevOps. I also instead of using the Cloud Shell went ahead and logged in with my new user created on previous days with the Azure CLI on my Windows machine.
You can do this using the `az login` which will open a browser and let you authenticate to your account.
I have then created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
(Cloud\01VirtualNetworking)
Please make sure you change the file location in the script to suit your environment.
At this first stage we have no virtual network or virtual machines created in our environment, I only have a cloudshell storage location configured in my resource group.
At this first stage, we have no virtual network or virtual machines created in our environment, I only have a cloud shell storage location configured in my resource group.
I first of all run my [PowerShell script](Cloud/01VirtualNetworking/Module4_90DaysOfDevOps.ps1)
![](Images/Day34_Cloud1.png)
- Task 1: Create and configure a virtual network
![](Images/Day34_Cloud2.png)
- Task 2: Deploy virtual machines into the virtual network
![](Images/Day34_Cloud3.png)
- Task 3: Configure private and public IP addresses of Azure VMs
![](Images/Day34_Cloud4.png)
- Task 4: Configure network security groups
@ -60,15 +62,15 @@ I first of all run my [PowerShell script](Cloud/01VirtualNetworking/Module4_90Da
![](Images/Day34_Cloud8.png)
### Network Traffic Management
Following [Module 06](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_06-Implement_Network_Traffic_Management.html):
Next walkthrough, from the last one we have gone into our resource group and deleted our resources, if you had not set up the user account like me to only have access to that one resource group you could follow the module changing the name to `90Days*` this will delete all resources and resource group. This will be my process for each of the following lab.
Next walkthrough, from the last one we have gone into our resource group and deleted our resources, if you had not set up the user account like me to only have access to that one resource group you could follow the module changing the name to `90Days*` this will delete all resources and resource group. This will be my process for each of the following labs.
For this lab I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
(Cloud\02TrafficManagement)
For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
(Cloud\02TrafficManagement)
- Task 1: Provision the lab environment
- Task 1: Provision the lab environment
I first of all run my [PowerShell script](Cloud/02TrafficManagement/Mod06_90DaysOfDevOps.ps1)
@ -80,13 +82,13 @@ I first of all run my [PowerShell script](Cloud/02TrafficManagement/Mod06_90Days
- Task 3: Test transitivity of virtual network peering
For this my 90DaysOfDevOps group did not have access to the Network Watcher because of permissions, I expect this is because Network Watchers are one of those resources that are not tied to a resource group which is where our RBAC was covered for this user. I added the East US Network Watcher contributer role to the 90DaysOfDevOps group.
For this my 90DaysOfDevOps group did not have access to the Network Watcher because of permissions, I expect this is because Network Watchers are one of those resources that are not tied to a resource group which is where our RBAC was covered for this user. I added the East US Network Watcher contributor role to the 90DaysOfDevOps group.
![](Images/Day34_Cloud11.png)
![](Images/Day34_Cloud12.png)
![](Images/Day34_Cloud13.png)
^ This is expected, since the two spoke virtual networks are not peered with each other (virtual network peering is not transitive).
^ This is expected since the two spoke virtual networks do not peer with each other (virtual network peering is not transitive).
- Task 4: Configure routing in the hub and spoke topology
@ -110,12 +112,13 @@ I then was able to go back into my michael.cade@90DaysOfDevOps.com account and c
![](Images/Day34_Cloud20.png)
### Azure Storage
Following [Module 07](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_07-Manage_Azure_Storage.html):
For this lab I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
(Cloud\03Storage)
For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
(Cloud\03Storage)
- Task 1: Provision the lab environment
- Task 1: Provision the lab environment
I first of all run my [PowerShell script](Cloud/03Storage/Mod07_90DaysOfDeveOps.ps1)
@ -138,23 +141,21 @@ I was a little impatient waiting for this to be allowed but it did work eventual
![](Images/Day34_Cloud26.png)
- Task 5: Create and configure an Azure Files share
On the run command this would not work with michael.cade@90DaysOfDevOps.com so I used my elevated account.
On the run command, this would not work with michael.cade@90DaysOfDevOps.com so I used my elevated account.
![](Images/Day34_Cloud27.png)
![](Images/Day34_Cloud28.png)
![](Images/Day34_Cloud29.png)
- Task 6: Manage network access for Azure Storage
![](Images/Day34_Cloud30.png)
### Serverless (Implement Web Apps)
Following [Module 09a](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_09a-Implement_Web_Apps.html):
- Task 1: Create an Azure web app
@ -182,7 +183,7 @@ This script I am using can be found in (Cloud/05Serverless)
![](Images/Day34_Cloud36.png)
This wraps up the section on Microsoft Azure and the public cloud in general. I will say that I had lots of fun attacking and working through this scenarios.
This wraps up the section on Microsoft Azure and the public cloud in general. I will say that I had lots of fun attacking and working through these scenarios.
## Resources
@ -191,6 +192,6 @@ This wraps up the section on Microsoft Azure and the public cloud in general. I
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)
Next we will be diving into version control systems, specifically around git and then also code repository overviews and we will be choosing GitHub as this is my preferred option.
Next, we will be diving into version control systems, specifically around git and then also code repository overviews and we will be choosing GitHub as this is my preferred option.
See you on [Day 35](day35.md)

View File

@ -7,9 +7,10 @@ cover_image: null
canonical_url: null
id: 1049041
---
## The Big Picture: Git - Version Control
Before we get into git, we need to understand what version control is and why? In this opener for Git, we will take a look at what version control is, the basics of git.
Before we get into git, we need to understand what version control is and why it matters. In this opener for Git, we will take a look at what version control is and the basics of git.
### What is Version Control?
@ -19,11 +20,11 @@ The most obvious and a big benefit of Version Control is the ability to track a
![](Images/Day35_Git1.png)
Version Control before it was cool, would have been something like manually creating a copy of your version before you made changes. It might be that you also comment out old useless code with the just in case mentality.
Version Control before it was cool, would have been something like manually creating a copy of your version before you made changes. It might be that you also comment out old useless code with the just-in-case mentality.
![](Images/Day35_Git2.png)
I have started using version control over not just source code but pretty much anything, talks about projects like this (90DaysOfDevOps) because why would you not want that rollback and log of everything that has gone on.
I have started using version control for not just source code but pretty much anything, including talks and projects like this (90DaysOfDevOps), because why would you not want that rollback and a log of everything that has gone on?
However, a big disclaimer **Version Control is not a Backup!**
@ -35,11 +36,11 @@ The way this is achieved in Version Control is through branching.
![](Images/Day35_Git3.png)
Branching allows for two code streams for the same app as we stated above. But we will still want new features that land in our source code free version to be in our premium and to achieve this we have something called merging.
Branching allows for two code streams for the same app as we stated above. But we will still want new features that land in the free version of our source code to be in our premium version, and to achieve this we have something called merging.
![](Images/Day35_Git4.png)
Now, this same easy but merging can be complicated because you could have a team working on the free edition and you could have another team working on the premium paid for version and what if both change code that affects aspects of the overall code. Maybe a variable gets updated and breaks something. Then you have a conflict that breaks one of the features. Version Control cannot fix the conflicts that are down to you. But version control allows this to be easily managed.
Now, this seems easy, but merging can be complicated, because you could have a team working on the free edition and another team working on the premium paid-for version, and what if both change code that affects aspects of the overall code? Maybe a variable gets updated and breaks something. Then you have a conflict that breaks one of the features. Version Control cannot fix the conflicts; that is down to you. But version control allows this to be easily managed.
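To make that concrete, a minimal sketch of the branch-and-merge flow might look like this; the branch names are just for illustration and your default branch might be called something else.

```
# a sketch: free and premium streams of the same code base (branch names are illustrative)
git checkout -b premium     # create a branch for the paid-for edition
git checkout main           # back to the free edition (or whatever your default branch is called)
# ...commit a new feature on the free edition...
git checkout premium
git merge main              # bring that new feature into the premium stream, resolving any conflicts
```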
The primary reason for version control, if you have not picked it up so far, is the ability to collaborate: the ability to share code amongst developers. And when I say code, as I said before, we are seeing more and more use cases for source control beyond code; maybe it's a joint presentation you are working on with a colleague, or a 90DaysOfDevOps challenge where the community offers corrections and updates throughout the project.
@ -55,7 +56,7 @@ Another thing to mention here is that it's not just developers that can benefit
Git is a tool that tracks changes to source code or any file, or we could also say Git is an open-source distributed version control system.
There are many ways in which git can be used on our systems, most commonly or at least for me I have seen it in at the command line, but we also have graphical user interfaces and tools like Visual Studio Code that have git aware operations we can take advantage of.
There are many ways in which git can be used on our systems, most commonly or at least for me I have seen it at the command line, but we also have graphical user interfaces and tools like Visual Studio Code that have git-aware operations we can take advantage of.
Now we are going to run through a high-level overview before we even get Git installed on our local machine.
@ -63,7 +64,7 @@ Let's take the folder we created earlier.
![](Images/Day35_Git2.png)
To use this folder with version control we first need to initiate this directory using the `git init command. For now, just think that this command puts our directory as a repository in a database somewhere on our computer.
To use this folder with version control we first need to initiate this directory using the `git init` command. For now, just think that this command puts our directory as a repository in a database somewhere on our computer.
![](Images/Day35_Git6.png)
@ -79,11 +80,11 @@ We can now see what has happened within the history of the project. Using the `g
![](Images/Day35_Git9.png)
We can also check the status of our repository by using `git status` this shows we have nothing to commit and we can add a new file called samplecode.ps1. If we then run the same `git status you will see that we file to be committed.
We can also check the status of our repository by using `git status`; at first, this shows we have nothing to commit. If we then create a new file called `samplecode.ps1` and run `git status` again, you will see that we have a file to be committed.
![](Images/Day35_Git10.png)
Add our new file using the `git add samplecode.ps1` command and then we can run `git status` again and see our file is ready to be committed.
![](Images/Day35_Git11.png)
@ -107,7 +108,7 @@ Which then displays what has changed in our case we added a new file.
![](Images/Day35_Git16.png)
We can also and we will go deeper into this later on but we can jump around our commits i.e we can go time travelling! By using our commit number we can use the `git checkout 709a` command to jump back in time without losing our new file.
We will go deeper into this later on but we can jump around our commits i.e we can go time travelling! By using our commit number we can use the `git checkout 709a` command to jump back in time without losing our new file.
![](Images/Day35_Git17.png)
@ -117,19 +118,16 @@ But then equally we will want to move forward as well and we can do this the sam
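Pulling the commands from this walkthrough together, the whole flow looks roughly like the sketch below; the short commit hash is just the example one from above, and yours will differ.

```
# a sketch of the flow above
git init
git add samplecode.ps1
git commit -m "my first commit"
git log --oneline      # note the short hash of the commit you want to revisit
git checkout 709a      # jump back in time to that commit
git checkout -         # and move forward again to where you were
```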
The TLDR;
- Tracking a projects history
- Tracking a project's history
- Managing multiple versions of a project
- Sharing code amongst developers and a wider scope of teams and tools
- Coordinating teamwork
- Oh and there is some time travel!
This might have seemed a jump around but hopefully, you can see without really knowing the commands used the powers and the big picture behind Version Control.
Next up we will be getting git installed and set up on your local machine and diving a little deeper into some other use cases and commands that we can achieve in Git.
## Resources
- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
@ -140,4 +138,3 @@ Next up we will be getting git installed and set up on your local machine and di
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
See you on [Day 36](day36.md)

View File

@ -7,21 +7,22 @@ cover_image: null
canonical_url: null
id: 1048738
---
## Installing & Configuring Git
Git is a open source, cross platform tool for version control. If you are like me, using Ubuntu or most Linux environments you might find that you already have git installed but we are going to run through the install and configuration.
Git is an open source, cross-platform tool for version control. If you are like me, using Ubuntu or most Linux environments you might find that you already have git installed but we are going to run through the install and configuration.
Even if you already have git installed on your system it is also a good idea to make sure we are up to date.
### Installing Git
As already mentioned Git is cross platform, we will be running through Windows and Linux but you can find macOS also listed [here](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
As already mentioned Git is cross-platform, we will be running through Windows and Linux but you can find macOS also listed [here](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
For [Windows](https://git-scm.com/download/win) we can grab our installers from the official site.
You could also use `winget` on your Windows machine, think of this as your Windows Application Package Manager.
Before we install anything lets see what version we have on our Windows Machine. Open a PowerShell window and run `git --version`
Before we install anything let's see what version we have on our Windows Machine. Open a PowerShell window and run `git --version`
![](Images/Day36_Git1.png)
@ -35,11 +36,11 @@ I went ahead and downloaded the latest installer and ran through the wizard and
Meaning that the process shown below is also the same process for the most part as if you were installing from no git.
It is a very simple installation. Once downloaded double click and get started. Read through the GNU license agreement. But remember this is free and open source software.
It is a very simple installation. Once downloaded double click and get started. Read through the GNU license agreement. But remember this is free and open-source software.
![](Images/Day36_Git3.png)
Now we can choose additional components that we would like to also install but also associate with git. On Windows I always make sure I install Git Bash as this allows us to run bash scripts on Windows.
Now we can choose additional components that we would like to also install but also associate with git. On Windows, I always make sure I install Git Bash as this allows us to run bash scripts on Windows.
![](Images/Day36_Git4.png)
@ -47,7 +48,7 @@ We can then choose which SSH Executable we wish to use. IN leave this as the bun
![](Images/Day36_Git5.png)
We then have experimental features that we may wish to enable, for me I don't need them so I don't enable, you can always come back in through the installation and enable these later on.
We then have experimental features that we may wish to enable. For me, I don't need them so I don't enable them; you can always come back through the installation and enable these later on.
![](Images/Day36_Git6.png)
@ -55,11 +56,11 @@ Installation complete, we can now choose to open Git Bash and or the latest rele
![](Images/Day36_Git7.png)
The final check is to take a look in our PowerShell window what version of git we have now.
The final check is to take a look in our PowerShell window at what version of git we have now.
![](Images/Day36_Git8.png)
Super simple stuff and now we are on the latest version. On our Linux machine we seemed to be a little behind so we can also walk through that update process.
Super simple stuff and now we are on the latest version. On our Linux machine, we seemed to be a little behind so we can also walk through that update process.
I simply run the `sudo apt-get install git` command.
@ -73,6 +74,7 @@ sudo apt-get update
sudo apt-get install git -y
git --version
```
### Configuring Git
When we first use git we have to define some settings,
@ -95,7 +97,7 @@ Depending on your Operating System will determine the default text editor. In my
`git config --global core.editor "code --wait"`
now if we want to be able to see all git configuration then we can use the following command.
now if we want to be able to see all git configurations then we can use the following command.
`git config --global -e`
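For completeness, the usual first-time settings plus a quick way to list the result look like this sketch; the name and email are obviously placeholders.

```
# a sketch: first-time git settings (values are placeholders)
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git config --global --list   # show everything that is now configured
```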
@ -113,15 +115,15 @@ I mentioned in the post yesterday that there were other version control types an
Before git was around Client-Server was the defacto method for version control. An example of this would be [Apache Subversion](https://subversion.apache.org/) which is an open source version control system founded in 2000.
In this model of Client-Server version control, the first step the developer downloads the source code, the actual files from the server. This doesnt remove the conflicts but it does remove the complexity of the conflicts and how to resolve them.
In this model of Client-Server version control, the first step is for the developer to download the source code, the actual files, from the server. This doesn't remove conflicts but it does remove the complexity of those conflicts and how to resolve them.
![](Images/Day36_Git12.png)
Now for example lets say we have two developers working on the same files and one wins the race and commits or uploads their file back to the server first with their new changes. When the second developer goes to update they have a conflict.
Now for example let's say we have two developers working on the same files and one wins the race and commits or uploads their file back to the server first with their new changes. When the second developer goes to update they have a conflict.
![](Images/Day36_Git13.png)
So now the Dev needs to pull down the first devs code change next to theirs check and then commit once those conflicts have been settled.
So now the second dev needs to pull down the first dev's code change alongside their own, check it, and then commit once those conflicts have been settled.
![](Images/Day36_Git15.png)
View File
@ -2,26 +2,27 @@
title: '#90DaysOfDevOps - Gitting to know Git - Day 37'
published: false
description: 90DaysOfDevOps - Gitting to know Git
tags: "devops, 90daysofdevops, learning"
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048707
---
## Gitting to know Git
Apoligies for the terrible puns in the title and throughout. I am surely not the first person to turn Git into a dad joke!
Apologies for the terrible puns in the title and throughout. I am surely not the first person to turn Git into a dad joke!
In the last two posts we learnt about version control systems, and some of the fundamental workflows of git as a version control system [Day 35](day35.md) Then we got git installed on our system, updated and configured. We also went a little deeper on the theory between Client-Server version control system and Git which is a distributed version control system [Day 36](day36.md).
In the last two posts we learnt about version control systems, and some of the fundamental workflows of git as a version control system [Day 35](day35.md) Then we got git installed on our system, updated and configured. We also went a little deeper into the theory between the Client-Server version control system and Git which is a distributed version control system [Day 36](day36.md).
Now we are going to run through some of the commands and use cases that we will all commonly see with git.
### Where to git help with git?
There is going to be times where you just cannot remember or just don't know the command you need to get things done with git. You are going to need help.
There are going to be times when you just cannot remember or just don't know the command you need to get things done with git. You are going to need help.
It goes without saying that google or any search engine is likely to be your first port of call when searching help.
Google or any search engine is likely to be your first port of call when searching for help.
Secondly the next place is going to be the official git site and the documentation. [git-scm.com/docs](http://git-scm.com/docs) Here you will find not only a solid reference to all the commands available but also lots of different resources.
Secondly, the next place is going to be the official git site and the documentation. [git-scm.com/docs](http://git-scm.com/docs) Here you will find not only a solid reference to all the commands available but also lots of different resources.
![](Images/Day37_Git1.png)
@ -29,7 +30,7 @@ We can also access this same documentation which is super useful if you are with
![](Images/Day37_Git2.png)
We can also in the shell use `git add -h` which is going to give us a short summary of the options we have available.
We can also in the shell use `git add -h` which is going to give us a summary of the options we have available.
![](Images/Day37_Git3.png)
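Pulling those help options together, a quick sketch using `git add` as the example command:

```
git help add   # open the full manual page for the add command
git add -h     # print a short summary of the options in the shell
git help -g    # list the common guides and concepts that ship with git
```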
@ -37,41 +38,39 @@ We can also in the shell use `git add -h` which is going to give us a short summ
"Git has no access control" - You can empower a leader to maintain source code.
"Git is too heavy" - Git has the ability to provide shallow repositories which basically means a reduced amount of history if you have large projects.
"Git is too heavy" - Git can provide shallow repositories which means a reduced amount of history if you have large projects.
### Real shortcomings
Not ideal for Binary files. Great for source code but not great for executable files or videos for example.
Git is not user friendly, the fact that we have to spend time talking about commands and functions of the tool is probably a key sign of that.
Git is not user-friendly, the fact that we have to spend time talking about commands and functions of the tool is probably a key sign of that.
Overall though, git is hard to learn, but easy to use.
Overall though, git is hard to learn but easy to use.
### The git ecosystem
I want to briefly cover the ecosystem around git but not deep dive into some of these areas but I think its important to note these here at a high level.
I want to briefly cover the ecosystem around git but not deep dive into some of these areas but I think it's important to note these here at a high level.
Almost all modern development tools support Git.
- Developer tools - We have already mentioned Visual Studio Code but you will find git plugins and integrations in Sublime Text and other text editors and IDEs.
- Team tools - We have also mentioned tools like Jenkins from a CI/CD point of view, Slack as a messaging framework and Jira for project management and issue tracking.
- Cloud Providers - All the large cloud providers support git, Microsoft Azure, Amazon AWS, Google Cloud Platform.
- Git-Based services - Then we have the GitHub, GitLab and BitBucket of which we will cover in more detail later on. I have heard these services as the social network for code!
- Cloud Providers - All the large cloud providers support git, Microsoft Azure, Amazon AWS, and Google Cloud Platform.
- Git-Based services - Then we have GitHub, GitLab and BitBucket, which we will cover in more detail later on. I have heard these services described as the social network for code!
### The Git Cheatsheet
We have not covered most of these commands but having looked at some cheatsheets available online I wanted to document some of the git commands and what their purpose are. We don't need to remember these all, and with more hands on practice and using you will pick at least the git basics.
We have not covered most of these commands but, having looked at some cheat sheets available online, I wanted to document some of the git commands and what their purpose is. We don't need to remember them all, and with more hands-on practice and use you will pick up at least the git basics.
I have taken these from [atlassian](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) but writing them down and reading the description is a good way to get to know what the commands are as well as getting hands on in every day tasks.
I have taken these from [atlassian](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) but writing them down and reading the description is a good way to get to know what the commands are as well as getting hands-on in everyday tasks.
### Git Basics
| Command | Example | Description |
| --------------- | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- |
| git init | `git init <directory>` | Create an empty git repository in specified directory. |
| ------------- | --------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| git init | `git init <directory>` | Create an empty git repository in the specified directory. |
| git clone | `git clone <repo>` | Clone repository located at <repo> onto local machine. |
| git config | `git config user.name` | Define the author name to be used for all commits in the current repository. Use the `system`, `global` or `local` flag to set the scope of the config option. |
| git add | `git add <directory>` | Stage all changes in <directory> for the next commit. We can also add a specific <file> or `.` for everything. |
@ -83,23 +82,23 @@ I have taken these from [atlassian](https://www.atlassian.com/git/tutorials/atla
### Git Undoing Changes
| Command | Example | Description |
| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| ---------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
| git revert | `git revert <commit>` | Create a new commit that undoes all of the changes made in <commit> then apply it to the current branch. |
| git reset | `git reset <file>` | Remove <file> from the staging area, but leave the working directory unchanged. This unstages a file without overwriting any changes. |
| git reset | `git reset <file>` | Remove <file> from the staging area, but leave the working directory unchanged. This unstages a file without overwriting any changes. |
| git clean | `git clean -n` | Shows which files would be removed from the working directory. Use `-f` in place of `-n` to execute the clean. |
### Git Rewriting History
| Command | Example | Description |
| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| git commit | `git commit --amend` | Replace the last commit with the staged changes and last commit combined. Use with nothing staged to edit the last commits message. |
| ---------- | -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
| git commit | `git commit --amend` | Replace the last commit with the staged changes and the last commit combined. Use with nothing staged to edit the last commit's message. |
| git rebase | `git rebase <base>` | Rebase the current branch onto <base>. <base> can be a commit ID, branch name, a tag, or a relative reference to HEAD. |
| git reflog | `git reflog` | Show a log of changes to the local repository's HEAD. Add --relative-date flag to show date info or --all to show all refs. |
### Git Branches
| Command | Example | Description |
| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| ------------ | -------------------------- | ------------------------------------------------------------------------------------------------------------- |
| git branch | `git branch` | List all of the branches in your repo. Add a <branch> argument to create a new branch with the name <branch>. |
| git checkout | `git checkout -b <branch>` | Create and check out a new branch named <branch>. Drop the -b flag to checkout an existing branch. |
| git merge | `git merge <branch>` | Merge <branch> into the current branch. |
@ -107,7 +106,7 @@ I have taken these from [atlassian](https://www.atlassian.com/git/tutorials/atla
### Git Remote Repositories
| Command | Example | Description |
| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| -------------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| git remote add | `git remote add <name> <url>` | Create a new connection to a remote repo. After adding a remote, you can use <name> as a shortcut for <url> in other commands. |
| git fetch | `git fetch <remote> <branch>` | Fetches a specific <branch>, from the repo. Leave off <branch> to fetch all remote refs. |
| git pull | `git pull <remote>` | Fetch the specified remote's copy of the current branch and immediately merge it into the local copy. |
@ -116,14 +115,14 @@ I have taken these from [atlassian](https://www.atlassian.com/git/tutorials/atla
### Git Diff
| Command | Example | Description |
| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| git diff HEAD | `git diff HEAD` | Show difference between working directory and last commit. |
| ----------------- | ------------------- | ---------------------------------------------------------------------- |
| git diff HEAD | `git diff HEAD` | Show the difference between the working directory and the last commit. |
| git diff --cached | `git diff --cached` | Show the difference between staged changes and the last commit. |
### Git Config
| Command | Example | Description |
| ----------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| ---------------------------------------------------- | ------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- |
| git config --global user.name <name> | `git config --global user.name <name>` | Define the author name to be used for all commits by the current user. |
| git config --global user.email <email> | `git config --global user.email <email>` | Define author email to be used for all commits by the current user. |
| git config --global alias.<alias-name> <git-command> | `git config --global alias.<alias-name> <git-command>` | Create a shortcut for a git command. |
@ -133,29 +132,29 @@ I have taken these from [atlassian](https://www.atlassian.com/git/tutorials/atla
### Git Rebase
| Command | Example | Description |
| ------------------------------------- | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------- |
| -------------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- |
| git rebase -i <base> | `git rebase -i <base>` | Interactively rebase current branch onto <base>. Launches editor to enter commands for how each commit will be transferred to the new base. |
### Git Pull
| Command | Example | Description |
| ------------------------------------- | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------- |
| git pull --rebase <remote> | `git pull --rebase <remote>` | Fetch the remotes copy of current branch and rebases it into the local copy. Uses git rebase instead of merge to integrate the branches. |
| -------------------------- | ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| git pull --rebase <remote> | `git pull --rebase <remote>` | Fetch the remote's copy of the current branch and rebase it into the local copy. Uses git rebase instead of merge to integrate the branches. |
### Git Reset
| Command | Example | Description |
| ------------------------- | --------------------------| --------------------------------------------------------------------------------------------------------------------------------------------- |
| git reset | `git reset ` | Reset staging area to match most recent commit, but leave the working directory unchanged. |
| ------------------------- | --------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| git reset | `git reset ` | Reset the staging area to match the most recent commit but leave the working directory unchanged. |
| git reset --hard | `git reset --hard` | Reset staging area and working directory to match most recent commit and overwrites all changes in the working directory |
| git reset <commit> | `git reset <commit>` | Move the current branch tip backward to <commit>, reset the staging area to match, but leave the working directory alone |
| git reset <commit> | `git reset <commit>` | Move the current branch tip backwards to <commit>, reset the staging area to match, but leave the working directory alone |
| git reset --hard <commit> | `git reset --hard <commit>` | Same as previous, but resets both the staging area & working directory to match. Deletes uncommitted changes, and all commits after <commit>. |
### Git Push
| Command | Example | Description |
| ------------------------- | --------------------------| --------------------------------------------------------------------------------------------------------------------------------------------- |
| git push <remote> --force | `git push <remote> --force` | Forces the git push even if it results in a non-fast-forward merge. Do not use the --force flag unless youre absolutely sure you know what youre doing. |
| ------------------------- | --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| git push <remote> --force | `git push <remote> --force` | Forces the git push even if it results in a non-fast-forward merge. Do not use the --force flag unless you're sure you know what you're doing. |
| git push <remote> --all | `git push <remote> --all` | Push all of your local branches to the specified remote. |
| git push <remote> --tags | `git push <remote> --tags` | Tags aren't automatically pushed when you push a branch or use the --all flag. The --tags flag sends all of your local tags to the remote repo. |
@ -169,5 +168,4 @@ I have taken these from [atlassian](https://www.atlassian.com/git/tutorials/atla
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)
See you on [Day 38](day38.md)
View File
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1049042
---
## Staging & Changing
We have already covered some of the basics but putting things into a walkthrough makes it better for me to learn and understand how and why we are doing it this way. Before we get into any git-based services such as GitHub, git has its powers that we can take advantage of on our local workstation.
@ -23,11 +24,11 @@ This is where the details of the git repository are stored as well as the inform
### Staging Files
We then start working on our empty folder and maybe we add some source code as a first days work. We create our readme.mdfile and we can see that file in the directory, next we check our `git status` and it knows about the new readme.mdfile but we have not committed the file yet.
We then start working on our empty folder and maybe we add some source code as a first day's work. We create our readme.md file and we can see that file in the directory; next we check `git status` and it knows about the new readme.md file, but we have not committed the file yet.
![](Images/Day38_Git3.png)
We can stage our readme.mdfile with the `git add README.md` command then we can see changes to be committed which we did not have before and a green new file.
We can stage our readme.md file with the `git add README.md` command, then we can see the changes to be committed that we did not have before, and a green new file.
![](Images/Day38_Git4.png)
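Roughly, the flow behind the screenshots above looks like this, assuming a brand new folder and a file called README.md:

```
git init            # create the empty repository in the current folder
git status          # README.md shows up as an untracked file
git add README.md   # stage the file
git status          # README.md is now listed under "changes to be committed"
```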
@ -51,7 +52,7 @@ When nano opens you can then add your short and long description and then save t
### Committing Best Practices
There is a balance here to when to commit, commit often. We do not want to be waiting to be finished the project before committing, each commit should be meaningful and they also should not be coupled with non-relevant tasks with each other. If you have a bug fix and a typo make sure they are two separate commits as a best practice.
There is a balance here on when to commit: commit often. We do not want to wait until the project is finished before committing; each commit should be meaningful, and commits should not bundle unrelated tasks together. If you have a bug fix and a typo, make sure they are two separate commits as a best practice.
Make the commit message mean something.
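As a made-up example of that best practice (the file names and messages below are illustrative), the idea is one commit per logical change:

```
# commit the bug fix on its own
git add main.js
git commit -m "Fix crash when input is empty"

# commit the typo fix separately
git add README.md
git commit -m "Fix typo in readme"
```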
View File
@ -2,14 +2,15 @@
title: '#90DaysOfDevOps - Viewing, unstaging, discarding & restoring - Day 39'
published: false
description: '90DaysOfDevOps - Viewing, unstaging, discarding & restoring'
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048827
---
## Viewing, unstaging, discarding & restoring
Continuing on from where we finished yesterday around some of the commands that we have with git and how to leverage git with your projects. Remember we have not touched GitHub or any other git based services yet this is all to help you keep control of your projects locally at the moment, but they will all become useful when we start to integrate into those tools.
Continuing from where we finished yesterday with some of the commands that we have with git and how to leverage git with your projects. Remember we have not touched GitHub or any other git-based services yet; this is all to help you keep control of your projects locally for now, but it will all become useful when we start to integrate with those tools.
### Viewing the Staged and Unstaged Changes
@ -33,7 +34,7 @@ If we then run `git diff` we compare and see the output below.
### Visual Diff Tools
For me the above is more confusing so I would much rather use a visual tool,
For me, the above is a little confusing, so I would much rather use a visual tool.
To name a few visual diff tools:
@ -60,7 +61,7 @@ Which then opens our VScode editor on the diff page and compares the two, we hav
![](Images/Day39_Git8.png)
I find this method much easier to track changes and this is something similar to what we will see when we look into git based services such as GitHub.
I find this method much easier to track changes and this is something similar to what we will see when we look into git-based services such as GitHub.
We can also use `git difftool --staged` to compare staged files with committed files.
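Putting the diff commands side by side, a quick sketch:

```
git diff               # unstaged changes compared to what is already staged/committed
git diff --staged      # staged changes compared to the last commit
git difftool           # the same unstaged comparison in the configured visual tool
git difftool --staged  # the staged comparison in the visual tool
```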
@ -74,13 +75,13 @@ I am using VScode as my IDE and like most IDEs they have this functionality buil
### Viewing the History
We previously touched on `git log` which will provide us a comprehensive view on all commits we have made in our repository.
We previously touched on `git log` which will provide us with a comprehensive view of all commits we have made in our repository.
![](Images/Day39_Git11.png)
Each commit has its own hexadecimal string, unique to the repository. Here you can see which branch we are working on and then also the author, date and commit message.
Each commit has its own hexadecimal string, unique to the repository. Here you can see which branch we are working on and also the author, date and commit message.
We also have `git log --oneline` and this gives us a much smaller version of the hexadecimal string whcih we can use in other `diff` commands. We also only have the one line description or commit message.
We also have `git log --oneline` and this gives us a much smaller version of the hexadecimal string which we can use in other `diff` commands. We also only have the one-line description or commit message.
![](Images/Day39_Git12.png)
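The log variations used here, as a quick reference:

```
git log                      # full history with hash, author, date and message
git log --oneline            # shortened hash plus the one-line commit message
git log --oneline --reverse  # the same list but starting from the first commit
```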
@ -90,19 +91,19 @@ We can reverse this into a start with the first commit by running `git log --one
### Viewing a Commit
Being able to look at the commit message is great if you have been concious about following best practices and you have added a meaningful commit message, however there is also `git show` command which allows us to inspect and view a commit.
Being able to look at the commit message is great if you have been conscious about following best practices and have added a meaningful commit message; however, there is also the `git show` command which allows us to inspect and view a commit.
We can use `git log --oneline --reverse` to get a list of our commits, and then we can take those and run `git show <commit ID>`.
![](Images/Day39_Git14.png)
The output of that command will look like below with the detail of the commit, author and what actually changed.
The output of that command will look like below with the detail of the commit, author and what changed.
![](Images/Day39_Git15.png)
We can also use `git show HEAD~1` where 1 is how many steps back from the current version we want to get back to.
This is great if you want some detail on your files, but if we want to list all the files in a tree for the whole snapshot directory. We can achieve this by using the `git ls-tree HEAD~1` command, again going back one snapshot from the last commit. We can see below we have two blobs, these indicate files where as tree would indicate a directory. You can also see commits and tags in this information.
This is great if you want some detail on your files, but what if we want to list all the files in a tree for the whole snapshot directory? We can achieve this by using the `git ls-tree HEAD~1` command, again going back one snapshot from the last commit. We can see below we have two blobs; these indicate files, whereas a tree would indicate a directory. You can also see commits and tags in this information.
![](Images/Day39_Git16.png)
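A sketch of inspecting a snapshot; the blob id is a placeholder taken from the `git ls-tree` output:

```
git show HEAD~1      # show the commit one step back from the current HEAD
git ls-tree HEAD~1   # list the blobs (files) and trees (directories) in that snapshot
git show <blob-id>   # print the contents of that specific version of a file
```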
@ -116,7 +117,7 @@ Then the contents of that specific version of the file will be shown.
### Unstaging Files
There will be a time where you have maybe used `git add .` but actually there are files you do not wish to commit to that snapshot just yet. In this example below I have added newfile.txt to my staging area but I am not ready to commit this file so I am going to use the `git restore --staged newfile.txt` to undo the `git add` step.
There will be a time when you have maybe used `git add .` but there are files you do not wish to commit to that snapshot just yet. In this example below I have added newfile.txt to my staging area but I am not ready to commit this file so I am going to use the `git restore --staged newfile.txt` to undo the `git add` step.
![](Images/Day39_Git19.png)
@ -124,11 +125,11 @@ We can also do the same to modified files such as main.js and unstage the commit
![](Images/Day39_Git20.png)
I have actually found this command quite useful during the 90DaysOfDevOps as I sometimes work ahead of the days where I feel I want to make notes for the following day but I don't want to commit and push to the public GitHub repository.
I have found this command quite useful during the 90DaysOfDevOps as I sometimes work ahead of the days where I feel I want to make notes for the following day but I don't want to commit and push to the public GitHub repository.
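A minimal sketch of that unstaging step, assuming the file is called newfile.txt:

```
git add .                         # everything, including newfile.txt, is now staged
git status                        # newfile.txt appears under "changes to be committed"
git restore --staged newfile.txt  # undo the git add for just that file
```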
### Discarding Local Changes
Some times we might make changes but we are not happy with those changes and we want to throw them away. We are going to use the `git restore` command again and we are going to be able to restore files from our snapshots or previous versions. We can run `git restore .` against our directory and we will restore everything from our snapshot but notice that our untracked file is still present. There is no previous file being tracked called newfile.txt.
Sometimes we might make changes but we are not happy with those changes and we want to throw them away. We are going to use the `git restore` command again and we are going to be able to restore files from our snapshots or previous versions. We can run `git restore .` against our directory and we will restore everything from our snapshot but notice that our untracked file is still present. There is no previous file being tracked called newfile.txt.
![](Images/Day39_Git21.png)
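A rough sketch of throwing local changes away; be careful, as these commands are destructive for uncommitted work:

```
git restore .   # discard modifications to tracked files in the working directory
git clean -n    # dry run: show which untracked files would be removed
git clean -fd   # force removal of untracked files and directories
```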
@ -142,13 +143,13 @@ Or if we know the consequences then we might want to run `git clean -fd` to forc
### Restoring a File to an Earlier Version
As we have alluded to throughout a big portion of what Git is able to help with is being able to restore copies of your files from your snapshots (this is not a backup but it is a very fast restore point) My advice is that you also save copies of your code in other locations using a backup solution for this.
As we have alluded to throughout, a big portion of what Git can help with is restoring copies of your files from your snapshots (this is not a backup, but it is a very fast restore point). My advice is that you also save copies of your code in other locations using a backup solution.
As an example let's go and delete our most important file in our directory, notice we are using unix based commands to remove this from the directory, not git commands.
As an example let's go and delete our most important file in our directory, notice we are using Unix-based commands to remove this from the directory, not git commands.
![](Images/Day39_Git24.png)
Now we have no readme.mdin our working directory. We could have used `git rm readme.md` and this would then be reflected in our git database. Let's also delete from here to simiulate it being removed completely.
Now we have no readme.md in our working directory. We could have used `git rm readme.md` and this would then be reflected in our git database. Let's also delete it from here to simulate it being removed completely.
![](Images/Day39_Git25.png)
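One way to bring the file back, assuming the deletion has already made it into a commit (otherwise a plain `git restore readme.md` is enough):

```
# restore readme.md from the snapshot before the deletion
git restore --source=HEAD~1 readme.md

# the older equivalent using checkout
git checkout HEAD~1 -- readme.md
```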
@ -170,9 +171,9 @@ We now have a new untracked file and we can use our commands previously mentione
This seems to be the biggest headache when it comes to Git and when to use rebase vs using merge on your git repositories.
The first thing to know is that both `git rebase` and `git merge` solve the same problem. Both are to integrate changes from one brance into another branch. However they do this in different ways.
The first thing to know is that both `git rebase` and `git merge` solve the same problem. Both are to integrate changes from one branch into another branch. However, they do this in different ways.
Let's start with a new feature in a new dedicated branch. The Main branch continues on with new commits.
Let's start with a new feature in a new dedicated branch. The Main branch continues with new commits.
![](Images/Day39_Git28.png)
@ -180,21 +181,22 @@ The easy option here is to use `git merge feature main` which will merge the mai
![](Images/Day39_Git29.png)
Merging is easy because it is non-destructive. The existing branches are not changed in any way. However this also means that the feature branch will have an irrellevant merge commit every time you need to incorporate upstream changes. If main is very busy or active this will or can pollute the feature branch history.
Merging is easy because it is non-destructive. The existing branches are not changed in any way. However, this also means that the feature branch will have an irrelevant merge commit every time you need to incorporate upstream changes. If main is very busy or active, this can pollute the feature branch history.
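For reference, the merge route looks something like this, assuming branches literally named `feature` and `main`:

```
# bring the latest main into the feature branch, creating a merge commit
git checkout feature
git merge main
```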
As an alternate option we can rebase the feature branch onto the main branch using
As an alternate option, we can rebase the feature branch onto the main branch using
```
git checkout feature
git rebase main
```
This moves the feature branch (the entire feature branch) effectively incorporating all of the new commits in main. But, instead of using a merge commit, rebasing re-writes the project history by creating brand new commits for each commit in the original branch.
This moves the feature branch (the entire feature branch), effectively incorporating all of the new commits in main. But, instead of using a merge commit, rebasing re-writes the project history by creating brand new commits for each commit in the original branch.
![](Images/Day39_Git30.png)
The biggest benefit of rebasing is a much cleaner project history. It also eliminates unnecessary merge commits, and as you compare the last two images, you can follow an arguably much cleaner, linear project history.
Although it's still not a forgone conclusion, because choosing the cleaner history also comes with tradeoffs, If you do not follow the [The Golden rule of rebasing](https://www.atlassian.com/git/tutorials/merging-vs-rebasing#the-golden-rule-of-rebasing) re-writing project history can be potentially catastrophic for your collaboration workflow. And, less importantly, rebasing loses the context provided by a merge commit—you cant see when upstream changes were incorporated into the feature.
Although it's still not a foregone conclusion, choosing the cleaner history also comes with tradeoffs. If you do not follow [the Golden Rule of Rebasing](https://www.atlassian.com/git/tutorials/merging-vs-rebasing#the-golden-rule-of-rebasing), re-writing project history can be potentially catastrophic for your collaboration workflow. And, less importantly, rebasing loses the context provided by a merge commit: you can't see when upstream changes were incorporated into the feature.
## Resources
@ -207,5 +209,4 @@ Although it's still not a forgone conclusion, because choosing the cleaner histo
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)
- [Exploring the Git command line A getting started guide](https://veducate.co.uk/exploring-the-git-command-line/)
View File

@ -7,10 +7,12 @@ cover_image: null
canonical_url: null
id: 1049044
---
## Social Network for code
Exploring GitHub | GitLab | BitBucket
Today I want to cover off some of the git based services that we have likely all heard of and expect we also use on a daily basis.
Today I want to cover some of the git-based services that we have likely all heard of and expect we also use daily.
We will then use some of our prior session knowledge to move copies of our data to each of the main services.
@ -18,7 +20,7 @@ I called this section "Social Network for Code" let me explain why?
### GitHub
Most common at least for me is GitHub, GitHub is a web based hosting service for git. It is most commonly used by software developers to store their code in. Source Code Management with the git version control features as well as a lot of additional features. It allows for teams or open contributors to easily communicate and provides a social aspect to coding. (hence the social networking title) Since 2018 GitHub is part of Microsoft.
The most common, at least for me, is GitHub. GitHub is a web-based hosting service for git and is most commonly used by software developers to store their code. It provides source code management with the git version control features as well as a lot of additional features. It allows teams or open contributors to easily communicate and provides a social aspect to coding (hence the social networking title). Since 2018 GitHub has been part of Microsoft.
GitHub has been around for quite some time and was founded in 2007/2008, with over 40 million users on the platform today.
@ -29,19 +31,19 @@ GitHub Main Features
- Project Management toolset - Issues
- CI / CD Pipeline - GitHub Actions
In terms of pricing GitHub have various different levels of pricing for their users. More can be found on [Pricing](https://github.com/pricing)
In terms of pricing, GitHub has different levels of pricing for its users. More can be found on [Pricing](https://github.com/pricing)
For the purpose of this we will cover the free tier.
For this, we will cover the free tier.
I am going to be using my already created GitHub account during this walkthrough, if you do not have an account then on the opening GitHub page there is a sign up option and some easy steps to get set up.
I am going to be using my already created GitHub account during this walkthrough, if you do not have an account then on the opening GitHub page there is a sign-up option and some easy steps to get set up.
### GitHub opening page
When you first login to your GitHub account you get a page containing a lot of widgets giving you options of where and what you would like to see or do. First up we have the "All Activity" this is going to give you a look into what is happening with your repositories or activity in general associated to your organisation or account.
When you first log in to your GitHub account you get a page containing a lot of widgets giving you options of where and what you would like to see or do. First up we have the "All Activity" this is going to give you a look into what is happening with your repositories or activity in general associated with your organisation or account.
![](Images/Day40_Git1.png)
Next we have our Code Repositories, either our own or repositories that we have interacted with recently. We can also quickly create new repositories or search repositories.
Next, we have our Code Repositories, either our own or repositories that we have interacted with recently. We can also quickly create new repositories or search repositories.
![](Images/Day40_Git2.png)
@ -49,71 +51,71 @@ We then have our recent activity, these for me are issues and pull requests that
![](Images/Day40_Git3.png)
Over on the right side of the page we have some referrals for repositories that we might be interested in, most likely based on your recent activity or own projects.
Over on the right side of the page, we have some referrals for repositories that we might be interested in, most likely based on your recent activity or own projects.
![](Images/Day40_Git4.png)
To be honest I am very rarely on my home page that we just saw and described, although I now see that the feed could be really useful to help interacting with the community a little better on certain projects.
To be honest I am very rarely on my home page that we just saw and described, although I now see that the feed could be really useful to help interact with the community a little better on certain projects.
Next up if we want to head into our GitHub Profile we can navigate to the top right corner and on your image there is a drop down which allows you to navigate through your account. From here to access your Profile select "Your Profile"
Next up if we want to head into our GitHub Profile we can navigate to the top right corner and on your image, there is a drop-down which allows you to navigate through your account. From here to access your Profile select "Your Profile"
![](Images/Day40_Git5.png)
Next, your profile page will appear, by default unless you change your configuration you are not going to see what I have, I have added some functionality that shows my recent blog posts over on [vZilla](https://vzilla.co.uk) and then also my latest videos on my [YouTube](https://m.youtube.com/c/MichaelCade1) Channel.
Next, your profile page will appear. By default, unless you change your configuration, you are not going to see what I have; I have added some functionality that shows my recent blog posts over on [vZilla](https://vzilla.co.uk) and also my latest videos on my [YouTube](https://m.youtube.com/c/MichaelCade1) channel.
Personally you are not going to be spending much time looking at your own profile, but this is a good profile page to share around your network so they can see the cool projects you are working on.
You are not going to be spending much time looking at your profile, but this is a good profile page to share around your network so they can see the cool projects you are working on.
![](Images/Day40_Git6.png)
We can then drill down into the building block of GitHub, the repositories. Here you are going to see your own repositories and if you have private repositories they are also going to be shown in this long list.
We can then drill down into the building block of GitHub, the repositories. Here you are going to see your repositories and if you have private repositories they are also going to be shown in this long list.
![](Images/Day40_Git7.png)
As the repository is so important to GitHub let me choose a pretty busy one of late and run through some of the core functionality that we can use here on top of everything I am already using when it comes to editing our "code" in git on my local system.
First of all from the previous window I have selected the 90DaysOfDevOps repository and we get to see this view. You can see from this view we have a lot of information, we have our main code structure in the middle showing our files and folders that are stored in our repository. We have our readme.mdbeing displayed down at the bottom. Over to the right of the page we have an about section where the repository has a description and purpose. Then we have lot of information underneath this showing how many people have starred the project, forked, and watching.
First of all, from the previous window, I have selected the 90DaysOfDevOps repository and we get to see this view. You can see from this view we have a lot of information: we have our main code structure in the middle showing the files and folders that are stored in our repository, and we have our readme.md displayed down at the bottom. Over to the right of the page, we have an about section where the repository has a description and purpose. Then we have a lot of information underneath this showing how many people have starred the project, forked it, and are watching it.
![](Images/Day40_Git8.png)
If we scroll down a little further you will also see that we have Releases, these are from the golang part of the challenge. We do not have any packages in our project, we have our contributers listed here. (Thank you community for assisting in my spelling and fact checking) We then have languages used again these are from different sections in the challenge.
If we scroll down a little further you will also see that we have Releases; these are from the golang part of the challenge. We do not have any packages in our project, and we have our contributors listed here (thank you, community, for assisting in my spelling and fact-checking). We then have the languages used; again these are from different sections of the challenge.
![](Images/Day40_Git9.png)
At the top of the page you are going to see a list of tabs. These may vary and can be modified to only show the ones you require. You will see here that I am not using all of these and I should remove them to make sure my whole repository is tidy.
First up we had the code tab which we just discussed but these tabs are always available when navigating through a repository which is super useful so we can jump between sections quickly and easily. Next we have the issues tab.
First up we had the code tab which we just discussed but these tabs are always available when navigating through a repository which is super useful so we can jump between sections quickly and easily. Next, we have the issues tab.
Issues let you track your work on GitHub, where development happens. In this specific repository you can see I have some issues focused on adding diagrams or fixing typos, but we also have an issue stating a need for a Chinese version of the repository.
If this was a code repository then this is a great place to raise concerns or issues with the maintainers, but remember be mindful and detailed about what you are reporting, give as much detail as possible.
If this was a code repository then this is a great place to raise concerns or issues with the maintainers, but remember to be mindful and detailed about what you are reporting, and give as much detail as possible.
![](Images/Day40_Git10.png)
The next tab is Pull Requests, Pull requests let you tell others about changes you've pushed to a branch in a repository. This is where someone may have forked your repository, made changes such as bug fixes or feature enhancements or just typos a lot of the case in this repository.
The next tab is Pull Requests. Pull requests let you tell others about changes you've pushed to a branch in a repository. This is where someone may have forked your repository and made changes such as bug fixes or feature enhancements or, in a lot of cases in this repository, just typo fixes.
We will cover forking later on.
![](Images/Day40_Git11.png)
I believe the next tab is quite new? But I thought for a project like #90DaysOfDevOps this could really help guide the content journey but also help the community as they walk through their own learning journey. I have created some discussion groups for each section of the challenge so people can jump in and discuss.
I believe the next tab is quite new, but I thought for a project like #90DaysOfDevOps it could help guide the content journey and also help the community as they walk through their learning journey. I have created some discussion groups for each section of the challenge so people can jump in and discuss.
![](Images/Day40_Git12.png)
The Actions tab is going to enable you to build, test and deploy code and a lot more right from within GitHub. GitHub Actions will be something we cover in the CI/CD section of the challenge, but this is where we can set some configuration to automate steps for us.
On my main GitHub Profile I am using GitHub Actions to fetch the latest blog posts and YouTube videos to keep things up to date on that home screen.
On my main GitHub Profile, I am using GitHub Actions to fetch the latest blog posts and YouTube videos to keep things up to date on that home screen.
![](Images/Day40_Git13.png)
I mentioned above about how GitHub is not just a source code repository but it is also a project management tool, The Project tab enables us to build out project tables kanban type boards so that we can link issues and PRs to better collaborate on the project and have a visibility of those tasks.
I mentioned above how GitHub is not just a source code repository but also a project management tool. The Projects tab enables us to build out project tables and kanban-type boards so that we can link issues and PRs to better collaborate on the project and have visibility of those tasks.
![](Images/Day40_Git14.png)
I know that issues to me seems like a good place to log feature requests and they are but the wiki page allows for a comprehensive roadmap for the project to be outlined with the current status and in general better document your project be it troubleshooting or how-to type content.
I know that issues seem to me like a good place to log feature requests, and they are, but the wiki page allows for a comprehensive roadmap for the project to be outlined with the current status, and in general it lets you better document your project, be it troubleshooting or how-to type content.
![](Images/Day40_Git15.png)
Not so applicable to this project but the Security tab is really there to make sure that contributers know how to deal with certain tasks, we can define a policy here but also code scanning add-ons to make sure your code for example does not contain secret environment variables.
Not so applicable to this project, but the Security tab is there to make sure that contributors know how to deal with certain tasks. We can define a policy here, but also enable code scanning add-ons to make sure your code, for example, does not contain secret environment variables.
![](Images/Day40_Git16.png)
@ -121,7 +123,7 @@ For me the insights tab is great, it provides so much information about the repo
![](Images/Day40_Git17.png)
Finally we have the Settings tab, this is where we can get into the details of how we run our repository, I am currently the only maintainer of the repository but we could share this responsibility here. We can define integrations and other such tasks here.
Finally, we have the Settings tab, this is where we can get into the details of how we run our repository, I am currently the only maintainer of the repository but we could share this responsibility here. We can define integrations and other such tasks here.
![](Images/Day40_Git18.png)
@ -142,7 +144,7 @@ If we click on that repository we are going to get the same look as we have just
Notice below we have 3 options: watch, fork and star.
- Watch - Updates when things happen to the repository.
- Fork - copy of a repository.
- Fork - a copy of a repository.
- Star - "I think your project is cool"
![](Images/Day40_Git21.png)
@ -151,17 +153,17 @@ Given our scenario of wanting a copy of this repository to work on we are going
![](Images/Day40_Git22.png)
Now we have our own copy of the repository that we can freely work on and change as we see fit. This would be the start of the pull request process that we mentioned briefly before but we will cover in more detail tomorrow.
Now we have our copy of the repository that we can freely work on and change as we see fit. This would be the start of the pull request process that we mentioned briefly before but we will cover it in more detail tomorrow.
![](Images/Day40_Git23.png)
Ok I hear you say, but how do I make changes to this repository and code if its on a website, well you can go through and edit on the website but its not going to be the same as using your favourite IDE on your local system with your favourite colour theme. In order for us to get a copy of this repository on our local machine we will perform a clone of the repository. This will allow us to work on things locally and then push our changes back into our forked copy of the repository.
Ok, I hear you say, but how do I make changes to this repository and code if it's on a website? Well, you can go through and edit on the website, but it's not going to be the same as using your favourite IDE on your local system with your favourite colour theme. For us to get a copy of this repository on our local machine we will perform a clone of the repository. This will allow us to work on things locally and then push our changes back into our forked copy of the repository.
We have several options when it comes to getting a copy of this code as you can see below.
There is a local version available of GitHub Desktop which gives you a visual desktop application to track changes and push and pull changes between local and github.
There is a local version available of GitHub Desktop which gives you a visual desktop application to track changes and push and pull changes between local and GitHub.
For the purpose of this little demo I am going to use the HTTPS url we see on there.
For this little demo, I am going to use the HTTPS URL we see on there.
![](Images/Day40_Git24.png)
@ -169,7 +171,7 @@ Now on our local machine, I am going to navigate to a directory I am happy to do
![](Images/Day40_Git25.png)
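A sketch of that clone step; the directory and URL are placeholders for your own working folder and whatever the green Code button gives you on your fork:

```
cd /path/to/your/projects                                 # a directory you are happy to work in
git clone https://github.com/<your-username>/<repo>.git   # pull down the forked repository
cd <repo>                                                 # move into the cloned folder
```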
Now we could take to VScode to really make some changes to this.
Now we can open this up in VScode to make some changes.
![](Images/Day40_Git26.png)
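After editing locally, getting the change back up to the forked copy looks roughly like this; the commit message is an example and your fork's default branch may be main or master:

```
git add readme.md                 # stage the edited file
git commit -m "Update readme.md"  # commit with a meaningful message
git push origin main              # push back to your fork on GitHub
```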
@ -181,7 +183,7 @@ Now if we check back on GitHub and we find our readme.mdin that repository, you
![](Images/Day40_Git28.png)
At this stage this might be complete and we might be happy with our change as we are the only people going to use our new change but maybe it was a bug change and if that is the case then we will want to contribute via a Pull Request to notify the original repository maintainers of our change and see if they accept our changes.
At this stage, this might be complete and we might be happy with our change, as we are the only ones going to use it. But maybe it was a bug fix, and if that is the case then we will want to contribute via a Pull Request to notify the original repository maintainers of our change and see if they accept it.
We can do this by using the contribute button highlighted below. I will cover more on this tomorrow when we look into Open-Source workflows.
@ -189,11 +191,10 @@ We can do this by using the contribute button highlighted below. I will cover mo
I have spent a long time looking through GitHub and I hear some of you cry but what about other options!
Well there are and I am going to find some resources that cover the basics for some of those as well. You are going to come across GitLab and BitBucket amongst others in your travels and whilst they are git based services they have their differences.
Well, there are and I am going to find some resources that cover the basics for some of those as well. You are going to come across GitLab and BitBucket amongst others in your travels and whilst they are git-based services they have their differences.
You will also come across self-hosted options. Most commonly here I have seen GitLab as a self-hosted version vs GitHub Enterprise (I don't believe there is a free self-hosted GitHub).
## Resources
- [Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners](https://www.youtube.com/watch?v=8aV5AxJrHDg)
@ -206,5 +207,4 @@ You will also come across hosted options. Most commonly here I have seen GitLab
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)
See you on [Day 41](day41.md)
View File
@ -2,22 +2,23 @@
title: '#90DaysOfDevOps - The Open Source Workflow - Day 41'
published: false
description: 90DaysOfDevOps - The Open Source Workflow
tags: "devops, 90daysofdevops, learning"
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048806
---
## The Open Source Workflow
Hopefully through the last 7 sections of Git we have a better understanding of what git is and then how a git based service such as GitHub integrates with git to provide a source code repository but also a way in which the wider community can collaborate on code and projects together.
Hopefully, through the last 7 sections of Git, we have a better understanding of what git is and then how a git-based service such as GitHub integrates with git to provide a source code repository but also a way in which the wider community can collaborate on code and projects together.
When we went through the GitHub fundamentals we went through the process of forking a random project and making a change to our local repository. Here we want to go one step further and contribute to an open source project. Remember that contributing doesn't need to be bug fixes, coding features but it could also be documentation. Every little helps and it also allows you to get hands on with some of the git functionality we have covered.
When we went through the GitHub fundamentals we went through the process of forking a random project and making a change to our local repository. Here we want to go one step further and contribute to an open-source project. Remember that contributing doesn't need to be bug fixes or coding features but it could also be documentation. Every little helps and it also allows you to get hands-on with some of the git functionality we have covered.
## Fork a Project
The first thing we have to do is find a project we can contribute to. I have recently been presenting on the [Kanister Project](https://github.com/kanisterio/kanister) and I would like to add my presentations, which are now on YouTube, to the main readme.md file in the project.
First of all we need to fork the project. Let's run through that process. I am going to navigate to the link share above and fork the repository.
First of all, we need to fork the project. Let's run through that process. I am going to navigate to the link shared above and fork the repository.
![](Images/Day41_Git1.png)
@ -25,13 +26,13 @@ We now have our copy of the whole repository.
![](Images/Day41_Git2.png)
For reference on the Readme.mdfile the original Presenations listed are just these two so we need to fix this with our process.
For reference, in the Readme.md file the original presentations listed are just these two, so we need to fix this with our process.
![](Images/Day41_Git3.png)
## Clones to local machine
## Clones to a local machine
Now we have our own fork we can bring that down to our local and we can then start making our edits to the files. Using the code button on our repo we can grab the URL and then use `git clone url` in a directory we wish to place the repository.
Now we have our fork we can bring that down to our local and we can then start making our edits to the files. Using the code button on our repo we can grab the URL and then use `git clone url` in a directory we wish to place the repository.
![](Images/Day41_Git4.png)
@ -47,19 +48,19 @@ The readme.mdfile is written in markdown language and because I am modifying som
## Test your changes
We must as a best practice test our changes, this makes total sense if this was a code change to an application you would want to ensure that the application still functions after code change, well we also must make sure that documentation is formatted and looks correct.
As a best practice we must test our changes. This makes total sense for a code change to an application, where you would want to ensure that the application still functions after the change, and equally we must make sure that our documentation is formatted and looks correct.
In VScode we have the ability to add a lot of plugins one of these is the ability to preview markdown pages.
In VS Code we can add a lot of plugins; one of these is the ability to preview markdown pages.
![](Images/Day41_Git7.png)
## Push changes back to our forked repository
We do not have the authentication to push our changes directly back to the Kanister repository so we have to take this route. Now that I am happy with our changes we can run through some of those now well known git commands.
We do not have the authentication to push our changes directly back to the Kanister repository so we have to take this route. Now that I am happy with our changes we can run through some of those now well-known git commands.
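For anyone following along, a sketch of those commands could look like this (the file name, branch and commit message are assumptions based on this walkthrough, not taken from the screenshot):

```
git status                                      # confirm which files we have changed
git add README.md                               # stage the edited readme
git commit -m "Add Kanister presentation links" # commit with a meaningful message
git push origin <branch>                        # push to our fork; we have no rights to push upstream
```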
![](Images/Day41_Git8.png)
Now we go back into GitHub to check the changes once more and then contribute back to the master project.
Now we go back to GitHub to check the changes once more and then contribute back to the master project.
Looks good.
@ -69,7 +70,7 @@ Now we can go back to the top of our forked repository for Kanister and we can s
![](Images/Day41_Git10.png)
Next we hit that contribute button highlighted above. We see the option to "Open Pull Request"
Next, we hit that contribute button highlighted above. We see the option to "Open Pull Request"
![](Images/Day41_Git11.png)
@ -79,29 +80,29 @@ There is quite a bit going on in this next image, top left you can now see we ar
![](Images/Day41_Git12.png)
We have reviewed the above changes and we are ready to create pull request hitting the green button.
We have reviewed the above changes and we are ready to create a pull request by hitting the green button.
Then, depending on how the maintainer of a project has set out their Pull Request functionality on their repository, you may or may not have a template that gives you pointers on what the maintainer wants to see.
This again where you want to make a meaningful description of what you have done, clear and concise but enough detail. You can see I have made a simple change overview and I have ticked documentation.
This is again where you want to make a meaningful description of what you have done, clear and concise but with enough detail. You can see I have made a simple change overview and I have ticked documentation.
![](Images/Day41_Git13.png)
## Create pull request
## Create a pull request
We are now ready to create our pull request. After hitting the "Create Pull Request" at the top of the page you will get a summary of your pull request.
![](Images/Day41_Git14.png)
Scrolling down you are likely to see some automation taking place, in this instance we require a review and some checks are taking place. We can see that Travis CI is in progress and a build has started and this will check our update, making sure that before anything is merged we are not breaking things with our additions.
Scrolling down you are likely to see some automation taking place, in this instance, we require a review and some checks are taking place. We can see that Travis CI is in progress and a build has started and this will check our update, making sure that before anything is merged we are not breaking things with our additions.
![](Images/Day41_Git15.png)
Another thing to note here is that the red in the screen shot above, can look a little daunting and look as if you have made mistakes! Don't worry you have not broken anything, my biggest tip here is this process is there to help you and the maintainers of the project. If you have made a mistake at least from my experience the maintainer will contact and advise on what to do next.
Another thing to note here is that the red in the screenshot above can look a little daunting and make it seem as if you have made mistakes! Don't worry, you have not broken anything; my biggest tip here is that this process is there to help you and the maintainers of the project. If you have made a mistake, at least from my experience, the maintainer will contact you and advise on what to do next.
This pull request is now public for everyone to see [added Kanister presentation/resource #1237](https://github.com/kanisterio/kanister/pull/1237)
I am going to publish this before the merge and pull request is accepted so maybe we can get a little prize for anyone that is still following along and is able to add a picture in of the successful PR?
I am going to publish this before the pull request is merged and accepted, so maybe we can get a little prize for anyone that is still following along and can add a picture of the successful PR?
1. Fork this repository to your own GitHub account
2. Add your picture and possibly text
@ -109,7 +110,7 @@ I am going to publish this before the merge and pull request is accepted so mayb
4. Create a PR that I will see and approve.
5. I will think of some sort of prize
This then wraps up our look into Git and GitHub, next we are diving into containers which starts with a big picture look into how, why containers and also a look into virtualisation and how we got here.
This then wraps up our look into Git and GitHub. Next we are diving into containers, which starts with a big-picture look into how and why we have containers, and also a look into virtualisation and how we got here.
## Resources
@ -123,5 +124,4 @@ This then wraps up our look into Git and GitHub, next we are diving into contain
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)
See you on [Day 42](day42.md)

View File

@ -2,16 +2,17 @@
title: '#90DaysOfDevOps - The Big Picture: Containers - Day 42'
published: false
description: 90DaysOfDevOps - The Big Picture Containers
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048826
---
## The Big Picture: Containers
We are now starting the next section and this section is going to be focused on containers in particular we are going to be looking into Docker getting into some of the key areas to understand more about Containers.
I will also be trying to get some hands-on here to create the container that we can use during this section but also future sections later on in the challenge.
I will also be trying to get some hands-on here to create the container that we can use during this section but also in future sections later on in the challenge.
As always this first post is going to be focused on the big picture of how we got here and what it all means.
@ -20,7 +21,7 @@ As always this first post is going to be focused on the big picture of how we go
### Why another way to run applications?
The first thing we have to take a look at is why do we need another way to run our software or applications? Well it is just that choice is great, we can run our applications in many different forms, we might see applications deployed on physical hardware with an operating system and a single application deployed there, we might see the virtual machine or cloud-based IaaS instances running our application which then integrate into a database again in a VM or as PaaS offering in the public cloud. Or we might see our applications running in containers.
The first thing we have to take a look at is why we need another way to run our software or applications. Well, it is simply that choice is great: we can run our applications in many different forms. We might see applications deployed on physical hardware with an operating system and a single application deployed there, we might see virtual machines or cloud-based IaaS instances running our application, which then integrate into a database again in a VM or as a PaaS offering in the public cloud. Or we might see our applications running in containers.
None of the above options is wrong or right, but they each have their reasons to exist and I also strongly believe that none of these is going away. I have seen a lot of content that pitches Containers vs Virtual Machines, and there really should not be an argument, as that is more like an apples vs pears argument where they are both fruit (ways to run our applications) but they are not the same.
@ -38,7 +39,7 @@ As you can probably tell as I have said before, I am not going to advocate that
![](Images/Day42_Containers4.png)
We have had container technology for a long time, so why now over the last say 10 years has this become popular, I would say even more popular in the last 5. We have had containers for decades. It comes down to the challenge containers or should I say images as well, to how we distribute our software, because if we just have container technology, then we still will have many of the same problems we've had with software management.
We have had container technology for a long time, so why has this become popular over the last say 10 years, and I would say even more popular in the last 5? We have had containers for decades. It comes down to the challenge of how we distribute our software, containers and images alike, because if we just have container technology, then we will still have many of the same problems we've had with software management.
If we think about Docker as a tool, the reason that it took off, is because of the ecosystem of images that are easy to find and use. Simple to get on your systems and get up and running. A major part of this is consistency across the entire space, of all these different challenges that we face with software. It doesn't matter if it's MongoDB or nodeJS, the process to get either of those up and running will be the same. The process to stop either of those is the same. All of these issues will still exist, but the nice thing is, when we bring good container and image technology together, we now have a single set of tools to help us tackle all of these different problems. Some of those issues are listed below:
@ -65,9 +66,9 @@ If we think about Docker as a tool, the reason that it took off, is because of t
We can split the above into 3 areas of software complexity that containers and images help with.
| Distribution | Installation | Operation |
| ------------ | ------------ | ----------------- |
| ------------ | ------------- | ------------------ |
| Find | Install | Start |
| Download | Configuration| Security |
| Download | Configuration | Security |
| License | Uninstall | Ports |
| Package | Dependencies | Resource Conflicts |
| Trust | Platform | Auto-Restart |
@ -98,7 +99,7 @@ a container that you can move.
### The advantages of these containers
- Containers help package all the dependencies within the container and
isolate it.
isolate it.
- It is easy to manage the containers
@ -109,7 +110,7 @@ isolate it.
- Containers are easily scalable.
Using containers you can scale independent containers and use a load balancer
or a service which help split the traffic and you can scale the applications horizontally. Containers offer a lot of flexibility and ease how you manage your applications
or a service which helps split the traffic and you can scale the applications horizontally. Containers offer a lot of flexibility and ease in how you manage your applications.
### What is a container?

View File

@ -2,18 +2,19 @@
title: '#90DaysOfDevOps - What is Docker & Getting installed - Day 43'
published: false
description: 90DaysOfDevOps - What is Docker & Getting installed
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048739
---
## What is Docker & Getting installed
In the previous post I mentioned Docker at least once and that is because Docker is really innovative in the making containers popular even though they have been around for such a long time.
In the previous post, I mentioned Docker at least once and that is because Docker is innovative in making containers popular even though they have been around for such a long time.
We are going to be using and explaining docker here but we should also mention the [Open Container Initiative (OCI)](https://www.opencontainers.org/) which is an industry standards organization that encourages innovation while avoiding the danger of vendor lock-in. Thanks to the OCI, we have a choice when choosing a container toolchain, including Docker, [CRI-O](https://cri-o.io/), [Podman](http://podman.io/), [LXC](https://linuxcontainers.org/), and others.
Docker is a software framework for building, running, and managing containers. The term "docker" may refer to either the tools (the commands and a daemon) or to the Dockerfile file format.
Docker is a software framework for building, running, and managing containers. The term "docker" may refer to either the tools (the commands and a daemon) or the Dockerfile file format.
We are going to be using Docker Personal here which is free (for education and learning). This includes all the essentials that we need to cover to get a good foundation of knowledge of containers and tooling.
@ -21,45 +22,46 @@ It is probably worth breaking down some of the "docker" tools that we will be us
### Docker Engine
Docker Engine is an open source containerization technology for building and containerizing your applications. Docker Engine acts as a client-server application with:
Docker Engine is an open-source containerization technology for building and containerizing your applications. Docker Engine acts as a client-server application with:
- A server with a long-running daemon process dockerd.
- APIs which specify interfaces that programs can use to talk to and instruct the Docker daemon.
- APIs specify interfaces that programs can use to talk to and instruct the Docker daemon.
- A command line interface (CLI) client docker.
The above was taken from the official Docker documentation and the specific [Docker Engine Overview](https://docs.docker.com/engine/)
### Docker Desktop
We have a docker desktop for both Windows and macOS systems. An easy to install, lightweight docker development environment. A native OS application that leverages virtualisation capabilities on the host operating system.
We have a docker desktop for both Windows and macOS systems. An easy-to-install, lightweight docker development environment. A native OS application that leverages virtualisation capabilities on the host operating system.
It's the best solution if you want to build, debug, test, package, and ship Dockerized applications on Windows or macOS.
On Windows we are able to also take advantage of WSL2 and Microsoft Hyper-V. We will cover some of the WSL2 benefits as we go through.
On Windows, we can also take advantage of WSL2 and Microsoft Hyper-V. We will cover some of the WSL2 benefits as we go through.
Because of the integration with hypervisor capabilities on the host operating system, Docker provides the ability to run your containers with Linux operating systems.
### Docker Compose
Docker Compose is a tool that allows you to run more complex apps over multiple containers, with the benefit of being able to use a single file and command to spin up your application.
### Docker Hub
A centralised resource for working with Docker and its components. Most commonly known as a registry to host docker images. But there is a lot of additional services here which can be used in part with automation or integrated into GitHub as well as security scanning.
A centralised resource for working with Docker and its components. Most commonly known as a registry to host docker images. But there are a lot of additional services here which can be used in part with automation or integrated into GitHub as well as security scanning.
### Dockerfile
A dockerfile is a text file that contains commands you would normally execute manually in order to build a docker image. Docker can build images automatically by reading the instructions we have in our dockerfile.
A dockerfile is a text file that contains commands you would normally execute manually to build a docker image. Docker can build images automatically by reading the instructions we have in our dockerfile.
## Installing Docker Desktop
The [docker documenation](https://docs.docker.com/engine/install/) is amazing and if you are only just diving in then you should take a look and have a read through. We will be using Docker Desktop on Windows with WSL2. I had already ran through the installation on my machine we are using here.
The [Docker documentation](https://docs.docker.com/engine/install/) is amazing and if you are only just diving in then you should take a look and have a read-through. We will be using Docker Desktop on Windows with WSL2. I had already run through the installation on the machine we are using here.
![](Images/Day43_Containers1.png)
Take note before you go ahead and install at the system requirements, [Install Docker Desktop on Windows](https://docs.docker.com/desktop/windows/install/) if you are using macOS including the M1 based CPU architecture you can also take a look at [Install Docker Desktop on macOS](https://docs.docker.com/desktop/mac/install/)
Before you go ahead and install, take note of the system requirements: [Install Docker Desktop on Windows](https://docs.docker.com/desktop/windows/install/). If you are using macOS, including the M1-based CPU architecture, you can also take a look at [Install Docker Desktop on macOS](https://docs.docker.com/desktop/mac/install/)
I will run through the Docker Desktop installation for Windows on another Windows Machine and log the process down below.
## Resources
- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)

View File

@ -7,27 +7,28 @@ cover_image: null
canonical_url: null
id: 1048708
---
## Docker Images & Hands-On with Docker Desktop
We now have Docker Desktop installed on our system. (If you are running Linux then you still have options but no GUI but docker obviously does work on Linux.)[Install Docker Engine on Ubuntu](https://docs.docker.com/engine/install/ubuntu/) (Other distributions also available.)
We now have Docker Desktop installed on our system. (If you are running Linux then you still have options, just without the GUI; Docker of course works on Linux.) [Install Docker Engine on Ubuntu](https://docs.docker.com/engine/install/ubuntu/) (Other distributions are also available.)
In this post we are going to get started with deploying some images into our environment. A recap on what a Docker Image is - A Docker image is a file used to execute code in a Docker container. Docker images act as a set of instructions to build a Docker container, like a template. Docker images also act as the starting point when using Docker.
In this post, we are going to get started with deploying some images into our environment. A recap on what a Docker Image is - A Docker image is a file used to execute code in a Docker container. Docker images act as a set of instructions to build a Docker container, like a template. Docker images also act as the starting point when using Docker.
Now is a good time to go and create your account on [DockerHub](https://hub.docker.com/)
![](Images/Day44_Containers1.png)
DockerHub is a centralised resource for working with Docker and its components. Most commonly known as a registry to host docker images. But there is a lot of additional services here which can be used in part with automation or integrated into GitHub as well as security scanning.
DockerHub is a centralised resource for working with Docker and its components. Most commonly known as a registry to host docker images. But there are a lot of additional services here which can be used in part with automation or integrated into GitHub as well as security scanning.
If you scroll down once logged in you are going to see a list of container images, You might see database images for mySQL, hello-world etc. Think of these as great baseline images or you might in fact just need a database image and you are best to use the official one which means you don't need to create your own.
If you scroll down once logged in you are going to see a list of container images. You might see database images for MySQL, hello-world etc. Think of these as great baseline images; you might just need a database image, and you are best to use the official one, which means you don't need to create your own.
![](Images/Day44_Containers2.png)
We can drill deeper into the view of available images and search across categories, operating systems and architectures. The one thing I highlight below is the Official Image, this should give you peace of mind of the origin of this container image.
We can drill deeper into the view of available images and search across categories, operating systems and architectures. The one thing I highlight below is the Official Image, this should give you peace of mind about the origin of this container image.
![](Images/Day44_Containers3.png)
We can also search for a specific image, for example wordpress might be a good base image that we want we can do that in the top and find all container images related to wordpress. Below notice that we also have verified publisher.
We can also search for a specific image; for example, WordPress might be a good base image that we want, so we can search at the top and find all container images related to WordPress. Below, notice that we also have verified publishers.
- Official Image - Docker Official images are a curated set of Docker open source and "drop-in" solution repositories.
@ -37,7 +38,7 @@ We can also search for a specific image, for example wordpress might be a good b
### Exploring Docker Desktop
We have Docker Desktop installed on our system and if open this I expect unless you had this already installed you will see something similar to the image below. As you can see we have no containers running and our docker engine is running.
We have Docker Desktop installed on our system and if you open this I expect unless you had this already installed you will see something similar to the image below. As you can see we have no containers running and our docker engine is running.
![](Images/Day44_Containers5.png)
@ -45,7 +46,7 @@ Because this was not a fresh install for me, I do have some images already downl
![](Images/Day44_Containers6.png)
Under remote repositories this is where you will find any container images you have stored in your docker hub. You can see from the below I do not have any images.
Under remote repositories, this is where you will find any container images you have stored in your docker hub. You can see from the below I do not have any images.
![](Images/Day44_Containers7.png)
@ -53,11 +54,11 @@ We can also clarify this on our dockerhub site and confirm that we have no repos
![](Images/Day44_Containers8.png)
Next we have the Volumes tab, If you have containers that require persistence then this is where we can add these volumes on your local file system or a shared file system.
Next, we have the Volumes tab. If you have containers that require persistence then this is where we can add these volumes to your local file system or a shared file system.
![](Images/Day44_Containers9.png)
At the time of writing there is also a Dev Environments tab, this is going to help you collaborate with your team instead of moving between different git branches. We won't be covering this.
At the time of writing, there is also a Dev Environments tab, this is going to help you collaborate with your team instead of moving between different git branches. We won't be covering this.
![](Images/Day44_Containers10.png)
@ -69,35 +70,35 @@ If we go and check our docker desktop window again, we are going to see that we
![](Images/Day44_Containers12.png)
You might have noticed that I am using WSL2 and in order for you to be able to use that you will need to make sure this is enabled in the settings.
You might have noticed that I am using WSL2 and for you to be able to use that you will need to make sure this is enabled in the settings.
![](Images/Day44_Containers13.png)
If we now go and check our Images tab again, you should now see an in use image called docker/getting-started.
If we now go and check our Images tab again, you should now see an in-use image called docker/getting-started.
![](Images/Day44_Containers14.png)
Back to the Containers/Apps tab, click on your running container. You are going to see the logs by default and along the top you have some options to choose from, in our case I am pretty confident that this is going to be a web page running in this container so we are going to choose the open in browser.
Back on the Containers/Apps tab, click on your running container. You are going to see the logs by default and along the top you have some options to choose from; in our case I am pretty confident that this is going to be a web page running in this container, so we are going to choose the "Open in Browser" option.
![](Images/Day44_Containers15.png)
When we hit that button above sure enough a web page should open hitting your localhost and display something similar to below.
This container also has some more detail on what are containers and images.
This container also has some more detail on what containers and images are.
![](Images/Day44_Containers16.png)
We have now ran our first container. Nothing too scary just yet. What about if we wanted to pull one of the container images down from DockerHub? Maybe there is a `hello world` docker container we could use.
We have now run our first container. Nothing too scary just yet. What about if we wanted to pull one of the container images down from DockerHub? Maybe there is a `hello world` docker container we could use.
I went ahead and stopped the getting started container not that it's taking up any mass amount of resources but for tidyness as we walk through some more steps.
I went ahead and stopped the getting-started container, not that it was taking up any massive amount of resources, but for tidiness as we walk through some more steps.
Back in our terminal lets go ahead and run `docker run hello-world` and see what happens.
Back in our terminal let's go ahead and run `docker run hello-world` and see what happens.
You can see we did not have the image locally so we pulled that down and then we got a message that is written into the container image with some information on what it did to get up and running and some links to reference points.
![](Images/Day44_Containers17.png)
However, if we go and look in Docker Desktop now we have no running containers but we do have an exited container that used the hello-world message, meaning it came up, it delivered the message and then it terminated.
However, if we go and look in Docker Desktop now we have no running containers, but we do have an exited container from the hello-world image, meaning it came up, delivered its message and then terminated.
![](Images/Day44_Containers18.png)
@ -105,7 +106,7 @@ And for the last time, let's just go and check the images tab and see that we ha
![](Images/Day44_Containers19.png)
In the message from the hello-world container it set down a challenge of running something a little more ambitious.
The message from the hello-world container set down the challenge of running something a little more ambitious.
Challenge Accepted!
@ -113,11 +114,11 @@ Challenge Accepted!
In running `docker run -it ubuntu bash` in our terminal we are going to run a containerised version of Ubuntu, well, not a full copy of the operating system. You can find out more about this particular image on [DockerHub](https://hub.docker.com/_/ubuntu)
You can see below when we run the command we now have an interactive prompt (`-it`) and we have bash shell into our container.
You can see below when we run the command we now have an interactive prompt (`-it`) and we have a bash shell into our container.
![](Images/Day44_Containers21.png)
We have a bash shell but we don't have much more which is why this container image is less than 30mb.
We have a bash shell but we don't have much more which is why this container image is less than 30MB.
![](Images/Day44_Containers22.png)
@ -125,11 +126,11 @@ But we can still use this image and we can still install software using our apt
![](Images/Day44_Containers23.png)
Or maybe we want to install some software into our container, I have chosen a really bad example here as pinta is an image editor and its over 200mb but hopefully you get where I am going with this. This would increase the size of our container considerably but still we are going to be in the mb and not into the gb.
Or maybe we want to install some software into our container. I have chosen a really bad example here, as pinta is an image editor and it's over 200MB, but hopefully you get where I am going with this. This would increase the size of our container considerably but we would still be in the MBs and not the GBs.
![](Images/Day44_Containers24.png)
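To make that apt workflow concrete, here is a minimal sketch of what installing something inside the running Ubuntu container looks like (figlet is just a tiny example package, much lighter than pinta):

```
# inside the container's bash shell
apt-get update
apt-get install -y figlet
figlet "90DaysOfDevOps"   # prints some ASCII art to prove the install worked
exit                      # leaving the shell stops this interactive container
```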
I wanted that to hopefully give you an overview of Docker Desktop and the not so scary world of containers when you break it down with simple use cases, we do need to cover some networking, security and other options we have vs just downloading container images and using them like this. By the end of the section we want to have made something and uploaded to our DockerHub repository and be able to deploy it.
I hope that gave you an overview of Docker Desktop and the not-so-scary world of containers when you break it down with simple use cases. We still need to cover some networking, security and other options we have versus just downloading container images and using them like this. By the end of the section, we want to have made something, uploaded it to our DockerHub repository and be able to deploy it.
## Resources

View File

@ -2,34 +2,34 @@
title: '#90DaysOfDevOps - The anatomy of a Docker Image - Day 45'
published: false
description: 90DaysOfDevOps - The anatomy of a Docker Image
tags: 'devops, 90daysofdevops, learning'
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048777
---
## The anatomy of a Docker Image
In the last session we covered some basics of how we can use Docker Desktop combined with DockerHub to deploy and run some verified images. A recap on what an image is, you won't forget things if I keep mentioning.
In the last session, we covered some basics of how we can use Docker Desktop combined with DockerHub to deploy and run some verified images. A recap on what an image is; you won't forget things if I keep mentioning them.
A Docker image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform. It provides a convenient way to package up applications and preconfigured server environments, which you can use for your own private use or share publicly with other Docker users. Docker images are also the starting point for anyone using Docker for the first time.
A Docker image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform. It provides a convenient way to package up applications and preconfigured server environments, which you can use for your private use or share publicly with other Docker users. Docker images are also the starting point for anyone using Docker for the first time.
What happens if we want to create our own Docker image? For us to do this we would create a Dockerfile. You saw how we could take that Ubuntu container image and we could add our software and we would have our container image with the software that we wanted and everything is good, however if that container is shut down or thrown away then all those software updates and installations go away there is no repeatable version of what we had done. So that is great for showing off the capabilities but it doesn't help with the transport of images across multiple environments with the same set of software installed each time the container is ran.
What happens if we want to create our own Docker image? For us to do this we would create a Dockerfile. You saw how we could take that Ubuntu container image, add our software and end up with a container image with the software that we wanted, and everything is good; however, if that container is shut down or thrown away then all those software updates and installations go away and there is no repeatable version of what we had done. So that is great for showing off the capabilities but it doesn't help with the transport of images across multiple environments with the same set of software installed each time the container is run.
### What is a Dockerfile
A dockerfile is a text file that contains commands you would normally execute manually in order to build a docker image. Docker can build images automatically by reading the instructions we have in our dockerfile.
A dockerfile is a text file that contains commands you would normally execute manually to build a docker image. Docker can build images automatically by reading the instructions we have in our dockerfile.
Each of the files that make up a docker image is known as a layer. these layers form a series of images, built on top of each other in stages. Each layer is dependant on the layer immediatly below it. The order of your layers is key to the effciency of the lifecycle management of your docker images.
Each of the files that make up a docker image is known as a layer. These layers form a series of images, built on top of each other in stages. Each layer is dependent on the layer immediately below it. The order of your layers is key to the efficiency of the lifecycle management of your docker images.
We should organise our layers that change most often as high in the stack as possible, this is because when you make changes to a layer in your image, Docker not only rebuilds that particular layer but all layers built from it. Therefore a change to a layer at the top involves the least amount of work to rebuild the entire image.
Each time docker launches a container from an image (like we ran yesterday) it adds a writeable layer, known as the container layer. This stores all changes to the container throughout its runtime. This layer is the only difference between a live operational container and the source image itself. Any number of like for like containers can share access to the same underlying image while maintaining their own individual state.
Each time docker launches a container from an image (as we did yesterday) it adds a writeable layer, known as the container layer. This stores all changes to the container throughout its runtime. This layer is the only difference between a live operational container and the source image itself. Any number of like-for-like containers can share access to the same underlying image while maintaining their own individual state.
Back to the example we used yesterday with the Ubuntu image. We could run that same command multiple times and on the first container we could go and install pinta and on the second we could install figlet two different applications, different purpose, different size etc. Each container that we deployed share the same image but not the same state and then that state is then gone when we remove the container.
Back to the example we used yesterday with the Ubuntu image. We could run that same command multiple times; on the first container we could go and install pinta and on the second we could install figlet: two different applications, with different purposes and different sizes. Each container that we deployed shares the same image but not the same state, and that state is then gone when we remove the container.
![](Images/Day45_Containers1.png)
Following the example above with the Ubuntu image, but also many other ready built container images available on DockerHub and other third party repositories. These images are generally known as the parent image. It is the foundations upon which all other layers are build and provides the basic building blocks for our container environments.
Following the example above with the Ubuntu image, but also many other ready-built container images available on DockerHub and other third-party repositories. These images are generally known as the parent image. It is the foundation upon which all other layers are built and provides the basic building blocks for our container environments.
Together with a set of individual layer files, a Docker image also includes an additional file known as a manifest. This is essentially a description of the image in JSON format and comprises information such as image tags, a digital signature, and details on how to configure the container for different types of host platforms.
@ -37,15 +37,15 @@ Together with a set of individual layer files, a Docker image also includes an a
### How to create a docker image
There are two ways we can create a docker image. We can do it a little on the fly with the process that we started yesterday, we pick our base image we spin up that container, we install all of the software and depenancies that we wish to have on our container.
There are two ways we can create a docker image. We can do it a little on the fly with the process that we started yesterday: we pick our base image, spin up that container, and install all of the software and dependencies that we wish to have in our container.
Then we can use `docker commit <container name>` and we have a local copy of this image under `docker images` and in our Docker Desktop Images tab.
Super simple, I would not recommend this method unless you want to understand the process, it is going to be very difficult to manage lifecycle management this way and a lot of manual configuration/reconfiguration. But it is the quickest and most simple ways to build a docker image. Great for testing, troubleshooting, validating dependencies etc.
Super simple, but I would not recommend this method unless you want to understand the process; lifecycle management becomes very difficult this way, with a lot of manual configuration and reconfiguration. It is, however, the quickest and simplest way to build a docker image, and great for testing, troubleshooting, validating dependencies etc.
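As a rough sketch of that manual flow (the container and image names here are made up purely for illustration):

```
# run a base container and make manual changes inside it
docker run -it --name my-ubuntu ubuntu bash
#   ...inside the container: apt-get update && apt-get install -y figlet, then exit...

# snapshot the (now stopped) container into a new local image
docker commit my-ubuntu my-ubuntu-figlet:v1

docker images   # the new image now shows up locally and in the Docker Desktop Images tab
```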
The way we intend to build our image is through a dockerfile. Which gives us a clean, compact and repeatable way to create our images. Much easier lifecycle management and easy integration into Continous integration and Continous delivery procesess. But as you might gathered it is a little more difficult than the first mentioned process.
The way we intend to build our image is through a dockerfile, which gives us a clean, compact and repeatable way to create our images, much easier lifecycle management and easy integration into Continuous Integration and Continuous Delivery processes. But as you might gather it is a little more difficult than the first mentioned process.
Using the dockerfile method is much more in tune with real-world, enterprise grade container deployments.
Using the dockerfile method is much more in tune with real-world, enterprise-grade container deployments.
Using a dockerfile is a three-step process whereby you create the dockerfile and add the commands you need to assemble the image.
@ -63,10 +63,9 @@ The following table shows some of the dockerfile statements we will be using or
| EXPOSE | To define the port through which your container application can be accessed. |
| LABEL | To add metadata to the image. |
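Pulling a few of those statements together, a minimal dockerfile might look like the sketch below. This is illustrative only, using standard dockerfile instructions, and is not the exact file from the repository's Containers folder:

```
# Use an official parent image as the base layer
FROM ubuntu:18.04
LABEL maintainer="someone@example.com"
# Layers that change rarely belong early; frequently changing steps belong later
RUN apt-get update && apt-get install -y nginx
# The port through which the container application will be accessed
EXPOSE 80
# The command run when a container starts from this image
CMD ["nginx", "-g", "daemon off;"]
```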
Now that we have the detail on how to build our first dockerfile, we can create a working directory and create our dockerfile. I have created a working directory within this repository where you can see the files and folders I walk through. [Containers](Containers)
In this directory I am going to create a .dockerignore file similar to the .gitignore we used in the last section. This file will list any files that would otherwise be created during the Docker build process, which you want to exclude from the final build.
In this directory, I am going to create a .dockerignore file similar to the .gitignore we used in the last section. This file will list any files that would otherwise be created during the Docker build process, which you want to exclude from the final build.
Remember everything about containers is about being compact, as fast as possible with no bloat.
@ -93,21 +92,20 @@ Whilst in Docker Desktop there is also the ability to leverage the UI to do some
![](Images/Day45_Containers5.png)
We can inspect our image, in doing so you see very much the dockerfile and the lines of code that we wanted to run within our container.
We can inspect our image; in doing so you essentially see the dockerfile and the lines of code that we wanted to run within our container.
![](Images/Day45_Containers6.png)
We have a pull option, now this would fail for us because this image is not hosted anywhere so we would get that as an error. However we do have a Push to hub which would enable us to push our image to DockerHub.
We have a pull option, now this would fail for us because this image is not hosted anywhere so we would get that as an error. However, we do have a Push to hub which would enable us to push our image to DockerHub.
If you are using the same `docker build` we ran earlier then this would not work either, you would need the build command to be `docker build -t {{username}}/{{imagename}}:{{version}}`
![](Images/Day45_Containers7.png)
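For example, assuming a DockerHub username of `yourname` and an image called `90daysofdevops` (both placeholders), the tag-and-push flow might look like:

```
docker build -t yourname/90daysofdevops:0.1 .   # rebuild with a tag under your DockerHub namespace
docker login                                    # authenticate with your DockerHub credentials
docker push yourname/90daysofdevops:0.1         # push the tagged image to your repository
```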
Then if we go and take a look in our DockerHub repository you can see that we just pushed a new image. Now in Docker Desktop we would be able to use that pull tab.
Then if we go and take a look in our DockerHub repository you can see that we just pushed a new image. Now in Docker Desktop, we would be able to use that pull tab.
![](Images/Day45_Containers8.png)
## Resources
- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)

View File

@ -2,16 +2,17 @@
title: '#90DaysOfDevOps - Docker Compose - Day 46'
published: false
description: 90DaysOfDevOps - Docker Compose
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048740
---
## Docker Compose
The ability to run one container could be great if you have a self contained image that has everything you need for your single use case, where things get interesting is when you are looking to build multiple applications between different container images. For example if I had a website front end but had a requirement for a backend database I could put everything in one container but better and more efficient would be to have its own container for the database.
The ability to run one container could be great if you have a self-contained image that has everything you need for your single use case; where things get interesting is when you are looking to build multiple applications between different container images. For example, if I had a website front end but required a backend database, I could put everything in one container, but it would be better and more efficient to have a separate container for the database.
This is where Docker compose comes in which is a tool that allows you to run more complex apps over multiple containers. With the benefit of being able to use a single file and command to spin up your application. The example I am going to walkthrough in this post is from the [Docker QuickStart sample apps (Quickstart: Compose and WordPress)](https://docs.docker.com/samples/wordpress/).
This is where Docker Compose comes in; it is a tool that allows you to run more complex apps over multiple containers, with the benefit of being able to use a single file and command to spin up your application. The example I am going to walk through in this post is from the [Docker QuickStart sample apps (Quickstart: Compose and WordPress)](https://docs.docker.com/samples/wordpress/).
In this first example we are going to:
@ -22,7 +23,8 @@ In this first example we are going to:
- Shutdown and Clean up
### Install Docker Compose
As mentioned Docker Compose is a tool, If you are on macOS or Windows then compose is included in your Docker Desktop installation. However you might be wanting to run your containers on a Windows server host or Linux server and in which case you can install using these instructions [Install Docker Compose](https://docs.docker.com/compose/install/)
As mentioned, Docker Compose is a tool. If you are on macOS or Windows then Compose is included in your Docker Desktop installation. However, you might want to run your containers on a Windows Server host or Linux server, in which case you can install it using these instructions: [Install Docker Compose](https://docs.docker.com/compose/install/)
To confirm we have `docker-compose` installed on our system we can open a terminal and simply type the above command.
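Something like the following would confirm it (the output shown is only an example; your version will differ):

```
docker-compose --version
# e.g. docker-compose version 1.29.2, build 5becea4c
```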
@ -30,9 +32,9 @@ To confirm we have `docker-compose` installed on our system we can open a termin
### Docker-Compose.yml (YAML)
The next thing to talk about is the docker-compose.yml which you can find in the container folder of the repository. But more importantly we need to discuss YAML in general a little.
The next thing to talk about is the docker-compose.yml which you can find in the container folder of the repository. But more importantly, we need to discuss YAML, in general, a little.
YAML could almost have its own session as you are going to find it in so many different places. But for the most part
YAML could almost have its own session as you are going to find it in so many different places. But for the most part
"YAML is a human-friendly data serialization language for all programming languages."
@ -44,13 +46,13 @@ The YAML acronym was shorthand for Yet Another Markup Language. But the maintain
Anyway, back to the docker-compose.yml file. This is a configuration file of what we want to do when it comes to multiple containers being deployed on our single system.
Straight from the tuturial linked above you can see the contents of the file looks like this:
Straight from the tutorial linked above you can see the contents of the file looks like this:
```
version: "3.9"
services:
db:
db:
image: mysql:5.7
volumes:
- db_data:/var/lib/mysql
@ -80,25 +82,25 @@ volumes:
wordpress_data: {}
```
We declare a version and then a large part of this docker-compose.yml file is made up of our services, we have a db service and a wordpress service. You can see each of those have an image defined wiht a version tag associated. We are now also introducing state into our configuration unlike our first walkthroughs, but now we are going to create volumes so we can store our databases there.
We declare a version and then a large part of this docker-compose.yml file is made up of our services; we have a db service and a WordPress service. You can see each of those has an image defined with a version tag associated. We are now also introducing state into our configuration, unlike our first walkthroughs, as we are going to create volumes so we can store our databases there.
We then have some environmental variables such as passwords and usernames. Obviously these files can get very complicated but the YAML configuration file simplifies what these look like overall.
We then have some environmental variables such as passwords and usernames. These files can get very complicated but the YAML configuration file simplifies what these look like overall.
### Build the project
Next up we can head back into our terminal and we can use some commands with our docker-compose tool. Navigate to your directory, where your docker-compose.yml file is located.
From the terminal we can simply run `docker-compose up -d` this will start the process of pulling those images and standing up your multi container application.
From the terminal, we can simply run `docker-compose up -d` this will start the process of pulling those images and standing up your multi-container application.
The `-d` in this command means detached mode, which means the containers will run in the background rather than being attached to your terminal.
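A quick comparison of the two modes, plus how to see the logs of a detached stack (these are standard docker-compose subcommands):

```
docker-compose up        # foreground: logs stream to the terminal, Ctrl+C stops the stack
docker-compose up -d     # detached: the command returns and the containers keep running
docker-compose logs -f   # follow the logs of a detached stack when you do want to watch them
```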
![](Images/Day46_Containers2.png)
If we now run the `docker ps` command, you can see we have 2 containers running, one being wordpress and the other being mySQL.
If we now run the `docker ps` command, you can see we have 2 containers running, one being WordPress and the other being MySQL.
![](Images/Day46_Containers3.png)
Next we can validate that we have WordPress up and running by opening a browser and going to `http://localhost:8000` and you should see the wordpress set up page.
Next, we can validate that we have WordPress up and running by opening a browser and going to `http://localhost:8000` and you should see the WordPress set-up page.
![](Images/Day46_Containers4.png)
@ -110,11 +112,11 @@ If we then open a new tab and navigate to that same address we did before `http:
![](Images/Day46_Containers6.png)
Before we make any changes, open Docker Desktop and navigate to the volumes tab and here you will see two volumes associated to our containers, one for wordpress and one for db.
Before we make any changes, open Docker Desktop and navigate to the volumes tab and here you will see two volumes associated with our containers, one for WordPress and one for DB.
![](Images/Day46_Containers7.png)
My Current theme for wordpress is "Twenty Twenty-Two" and I want to change this to "Twenty Twenty" Back in the dashboard we can make those changes.
My current WordPress theme is "Twenty Twenty-Two" and I want to change this to "Twenty Twenty". Back in the dashboard we can make those changes.
![](Images/Day46_Containers8.png)
@ -136,23 +138,23 @@ If we then want to bring things back up then we can issue the `docker up -d` com
![](Images/Day46_Containers12.png)
We then navigate in our browser to that same address of `http://localhost:8000` and notice that our new post and our theme change is all still in place.
We then navigate in our browser to that same address of `http://localhost:8000` and notice that our new post and our theme change are all still in place.
![](Images/Day46_Containers13.png)
If we want to get rid of the containers and those volumes then issueing the `docker-compose down --volumes` will also destroy the volumes.
If we want to get rid of the containers and those volumes then issuing the `docker-compose down --volumes` will also destroy the volumes.
![](Images/Day46_Containers14.png)
Now when we use `docker-compose up -d` again we will be starting again, however the images will still be local on our system so you won't need to re pull them from the DockerHub repository.
Now when we use `docker-compose up -d` again we will be starting from scratch, however, the images will still be local on our system so you won't need to re-pull them from the DockerHub repository.
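To summarise the teardown options we have touched on, here is a rough cheat sheet of the standard docker-compose subcommands used above:

```
docker-compose stop             # stop the containers but keep containers, networks and volumes
docker-compose down             # remove containers and networks; named volumes survive
docker-compose down --volumes   # remove containers, networks AND volumes; the WordPress data is gone
```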
I know that when I started diving into docker-compose and its capabilities I was then confused as to where this sits alongside or with Container Orchestration tools such as Kubernetes, well everything we have done here in this short demo is focused on one host we have wordpress and db running on the local desktop machine. We don't have multiple virtual machines or multiple physical machines, we also don't have the ability to easily scale up and down the requirements of our application.
I know that when I started diving into docker-compose and its capabilities I was confused as to where this sits alongside or against Container Orchestration tools such as Kubernetes. Well, everything we have done here in this short demo is focused on one host: we have WordPress and the DB running on the local desktop machine. We don't have multiple virtual machines or multiple physical machines, and we also can't easily scale the requirements of our application up and down.
Our next section is going to cover Kubernetes but we have a few more days of Containers in general first.
This is also a great resource for samples of docker compose applications with multiple integrations. [Awesome-Compose](https://github.com/docker/awesome-compose)
This is also a great resource for samples of docker-compose applications with multiple integrations. [Awesome-Compose](https://github.com/docker/awesome-compose)
In the above repository there is a great example which will deploy an Elasticsearch, Logstash, and Kibana (ELK) in single-node.
In the above repository, there is a great example which will deploy Elasticsearch, Logstash, and Kibana (ELK) in a single-node configuration.
I have uploaded the files to the [Containers folder](/Days/Containers/elasticsearch-logstash-kibana/). When you have this folder locally, navigate there and you can simply use `docker-compose up -d`.
@ -162,7 +164,7 @@ We can then check we have those running containers with `docker ps`
![](Images/Day46_Containers16.png)
Now we can open a browser for each of containers:
Now we can open a browser for each of the containers:
![](Images/Day46_Containers17.png)
@ -174,7 +176,7 @@ To remove everything we can use the `docker-compose down` command.
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
- [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s)
- [WSL 2 with Docker getting started](https://www.youtube.com/watch?v=5RQbdMn04Oc)
- [Blog on gettng started building a docker image](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/)
- [Blog on getting started building a docker image](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/)
- [Docker documentation for building an image](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
- [YAML Tutorial: Everything You Need to Get Started in Minute](https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started)

View File

@ -7,25 +7,26 @@ cover_image: null
canonical_url: null
id: 1049078
---
## Docker Networking & Security
During this container session so far we have made things happen but we have not really looked at how things have worked behind the scenes either from a networking point of view but also we have not touched on security, that is the plan for this session.
During this container session so far we have made things happen, but we have not looked at how things work behind the scenes, either from a networking point of view or from a security point of view; that is the plan for this session.
### Docker Networking Basics
Open a terminal and type the command `docker network`; this is the main command for configuring and managing container networks.
From the below you can see this is how we can use the command, and all of the sub commands available. We can create new networks, list existing, inspect and remove networks.
From the below, you can see this is how we can use the command, and all of the sub-commands available. We can create new networks, list existing ones, and inspect and remove networks.
![](Images/Day47_Containers1.png)
Lets take a look at the existing networks we have since our installation, so the out of box Docker networking looks like using the `docker network list` command.
Let's take a look at the existing networks we have since our installation, to see what out-of-the-box Docker networking looks like, using the `docker network list` command.
Each network gets a unique ID and NAME. Each network is also associated with a single driver. Notice that the "bridge" network and the "host" network have the same name as their respective drivers.
![](Images/Day47_Containers2.png)
Next we can take a deeper look into our networks with the `docker network inspect` command.
Next, we can take a deeper look into our networks with the `docker network inspect` command.
Running `docker network inspect bridge` gives me all the configuration details of that specific network. This includes the name, ID, drivers, connected containers and, as you can see, quite a lot more.
@ -33,7 +34,7 @@ With me running `docker network inspect bridge` I can get all the configuration
### Docker: Bridge Networking
As you have seen above a standard installation of Docker Desktop gives us a pre-built network called `bridge` If you look back up to the `docker network list` command, you will see that the networked called bridge is associated with the `bridge` driver. Just because they have the same name doesn't they are the same thing. Connected but not the same thing.
As you have seen above, a standard installation of Docker Desktop gives us a pre-built network called `bridge`. If you look back up to the `docker network list` command, you will see that the network called bridge is associated with the `bridge` driver. Just because they have the same name doesn't mean they are the same thing. Connected, but not the same thing.
The output above also shows that the bridge network is scoped locally. This means that the network only exists on this Docker host. This is true of all networks using the bridge driver - the bridge driver provides single-host networking.
@ -43,7 +44,7 @@ All networks created with the bridge driver are based on a Linux bridge (a.k.a.
By default the bridge network is assigned to new containers, meaning unless you specify a network all containers will be connected to the bridge network.
Lets create a new container with the command `docker run -dt ubuntu sleep infinity`
Let's create a new container with the command `docker run -dt ubuntu sleep infinity`
The sleep command above is just going to keep the container running in the background so we can mess around with it.
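As a hedged aside, if you wanted that container on a user-defined network instead of the default bridge, it would look something like this (the network and container names are just examples):

```Shell
# Create a user-defined bridge network and attach a container to it
docker network create app-net
docker run -dt --name app1 --network app-net ubuntu sleep infinity

# Confirm which network the container is attached to
docker inspect app1 --format '{{json .NetworkSettings.Networks}}'
```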
@ -59,24 +60,23 @@ From here our image doesn't have anything to ping so we need to run the followin
![](Images/Day47_Containers6.png)
To clear this up we can run `docker stop 3a99af449ca2` again use `docker ps` to find your container ID but this will remove our container.
To clear this up we can run `docker stop 3a99af449ca2` (again, use `docker ps` to find your container ID); this will stop our container.
### Configure NAT for external connectivity
In this step we'll start a new NGINX container and map port 8080 on the Docker host to port 80 inside of the container. This means that traffic that hits the Docker host on port 8080 will be passed on to port 80 inside the container.
In this step, we'll start a new NGINX container and map port 8080 on the Docker host to port 80 inside of the container. This means that traffic that hits the Docker host on port 8080 will be passed on to port 80 inside the container.
Start a new container based off the official NGINX image by running `docker run --name web1 -d -p 8080:80 nginx`
Start a new container based on the official NGINX image by running `docker run --name web1 -d -p 8080:80 nginx`
![](Images/Day47_Containers7.png)
Review the container status and port mappings by running `docker ps`
![](Images/Day47_Containers8.png)
The top line shows the new web1 container running NGINX. Take note of the command the container is running as well as the port mapping - `0.0.0.0:8080->80/tcp` maps port 8080 on all host interfaces to port 80 inside the web1 container. This port mapping is what effectively makes the containers web service accessible from external sources (via the Docker hosts IP address on port 8080).
The top line shows the new web1 container running NGINX. Take note of the command the container is running as well as the port mapping - `0.0.0.0:8080->80/tcp` maps port 8080 on all host interfaces to port 80 inside the web1 container. This port mapping is what effectively makes the container's web service accessible from external sources (via the Docker host's IP address on port 8080).
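Before we go and find the host IP, a quick way to sanity-check that mapping from the Docker host itself (assuming `curl` is installed) is:

```Shell
# Traffic hitting port 8080 on the host should be answered by NGINX inside the container
curl -I http://localhost:8080
```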
Now we need our IP address for our actual host, we can do this by going into our WSL terminal and using the `ip addr` command.
Now we need the IP address of our actual host; we can get this by going into our WSL terminal and using the `ip addr` command.
![](Images/Day47_Containers9.png)
@ -84,19 +84,19 @@ Then we can take this IP and open a browser and head to `http://172.25.218.154:8
![](Images/Day47_Containers10.png)
I have taken these instructions from this site from way back in 2017 DockerCon but they are still relevant today. However the rest of the walkthrough goes into Docker Swarm and I am not going to be looking into that here. [Docker Networking - DockerCon 2017](https://github.com/docker/labs/tree/master/dockercon-us-2017/docker-networking)
I have taken these instructions from this site from way back in 2017 DockerCon but they are still relevant today. However, the rest of the walkthrough goes into Docker Swarm and I am not going to be looking into that here. [Docker Networking - DockerCon 2017](https://github.com/docker/labs/tree/master/dockercon-us-2017/docker-networking)
### Securing your containers
Containers provide a secure environment for your workloads vs a full server configuration. They offer the ability to break up your applications into much smaller, loosly coupled components each isolated from one another which helps resude the attack surface overall.
Containers provide a secure environment for your workloads vs a full server configuration. They offer the ability to break up your applications into much smaller, loosely coupled components each isolated from one another which helps reduce the attack surface overall.
But they are not immune from hackers that are looking to exploit systems. We still need to understand the security pitfalls of the technology and maintain best practices.
### Move away from root permission
All of the containers we have deployed have been using the root permission to the process within your containers. Which means they have full administrative access to your container and host environments. Now for the purposes of walking through we knew these systems were not going to be up and running for long. But you saw how easy it was to get up and running.
All of the containers we have deployed have been running their processes as root within the container. This means they have full administrative access to your container and host environments. For the purposes of this walkthrough we knew these systems were not going to be up and running for long, but you saw how easy it was to get up and running.
We can add a few steps to our process to enable non root users to be our preferred best practice. When creating our dockerfile we can create user accounts. You can find this example also in the containers folder in the repository.
We can add a few steps to our process so that running as a non-root user becomes our preferred best practice. When creating our Dockerfile we can create user accounts. You can also find this example in the containers folder in the repository.
```
# Use the official Ubuntu 18.04 as base
@ -112,9 +112,9 @@ However, this method doesnt address the underlying security flaw of the image
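Another hedged option, rather than (or as well as) baking a user into the image, is to drop root at run time with the `--user` flag; the UID/GID values and container name here are just examples:

```Shell
# Run a container as a non-root user without changing the image
docker run -dt --user 1000:1000 --name nonroot-demo ubuntu sleep infinity

# Verify the process inside the container is not running as root
docker exec nonroot-demo id
```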
### Private Registry
Another area we have used heavily is public registries in DockerHub, with a private registry of container images set up by your organisation means that you can host where you wish or there are managed services for this as well, but all in all this gives you complete control of the images available for you and your team.
Another area we have used heavily is public registries such as DockerHub; with a private registry of container images set up by your organisation, you can host it where you wish, or there are managed services for this as well, but all in all, this gives you complete control of the images available to you and your team.
DockerHub is great to give you a baseline, but its only going to be providing you with a basic service where you have to put a lot of trust into the image publisher.
DockerHub is great to give you a baseline, but it's only going to be providing you with a basic service where you have to put a lot of trust into the image publisher.
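As a minimal sketch of what self-hosting can look like, the official `registry:2` image will run a basic private registry locally (the names and port are examples):

```Shell
# Run a private registry on port 5000
docker run -d -p 5000:5000 --name registry registry:2

# Tag an existing image against the private registry and push it there
docker tag nginx localhost:5000/my-nginx
docker push localhost:5000/my-nginx
```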
### Lean & Clean
@ -132,7 +132,7 @@ Checking `docker image` is a great command to see the size of your images.
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
- [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s)
- [WSL 2 with Docker getting started](https://www.youtube.com/watch?v=5RQbdMn04Oc)
- [Blog on gettng started building a docker image](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/)
- [Blog on getting started building a docker image](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/)
- [Docker documentation for building an image](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
- [YAML Tutorial: Everything You Need to Get Started in Minutes](https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started)

View File

@ -7,9 +7,10 @@ cover_image: null
canonical_url: null
id: 1048807
---
## Alternatives to Docker
I did say at the very beginning of this section that we were going to be using Docker, simply because resource wise there is so much and the community is very big, but also this was really where the indents to making containers popular really came from. I would encourage you to go and watch some of the history around Docker and how it came to be, I found it very useful.
I did say at the very beginning of this section that we were going to be using Docker, simply because resource wise there is so much and the community is very big, but also this was really where the push that made containers popular came from. I would encourage you to go and watch some of the history around Docker and how it came to be, I found it very useful.
But as I have alluded to, there are other alternatives to Docker. If we think about what Docker is and what we have covered, it is a platform for developing, testing, deploying, and managing applications.
@ -17,29 +18,29 @@ I want to highlight a few alternatives to Docker that you might or will in the f
### Podman
What is Podman? Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode.
What is Podman? Podman is a daemon-less container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode.
I am going to be looking at this from a Windows point of view, but know that, like Docker, on Linux there is no requirement for virtualisation as it will use the underlying OS, which is something we cannot do in the Windows world.
Podman can be ran under WSL2 although not as sleak as the experience with Docker Desktop. There is also a Windows remote client where you can connect to a Linux VM where your containers will run.
Podman can be run under WSL2 although not as sleek as the experience with Docker Desktop. There is also a Windows remote client where you can connect to a Linux VM where your containers will run.
My Ubuntu on WSL2 is the 20.04 release. Following the next steps will enable you to install Podman on your WSL instance.
```
```Shell
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /" |
sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
```
Add the GPG Key
```
```Shell
curl -L "https://download.opensuse.org/repositories/devel:/kubic:\
/libcontainers:/stable/xUbuntu_20.04/Release.key" | sudo apt-key add -
```
Run a system update and upgrade with the `sudo apt-get update && sudo apt-get upgrade` command. Finally we can install podman using `sudo apt install podman`
Run a system update and upgrade with the `sudo apt-get update && sudo apt-get upgrade` command. Finally, we can install podman using `sudo apt install podman`
We can now use a lot of the same commands we have been using for docker, note though that we do not have that nice docker desktop UI. You can see below I used `podman images` and I have nothing after install then I used `podman pull ubuntu` to pull down the ubuntu container image.
We can now use a lot of the same commands we have been using for docker; note that we do not have that nice docker desktop UI. You can see below I used `podman images` and I have nothing after installation, then I used `podman pull ubuntu` to pull down the ubuntu container image.
![](Images/Day48_Containers1.png)
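A hedged sketch of that flow, plus running a container, looks almost identical to the Docker equivalents:

```Shell
# Pull an image and confirm it is in the local store
podman pull ubuntu
podman images

# Run a container in the background and list it
podman run -dt ubuntu sleep infinity
podman ps
```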
@ -51,29 +52,29 @@ To then get into that container we can run `podman attach dazzling_darwin` your
![](Images/Day48_Containers3.png)
If you are moving from docker to podman it is also common to change your config file to have `alias docker=podman` that way any command you run with docker will in fact use podman.
If you are moving from docker to podman it is also common to change your config file to have `alias docker=podman` that way any command you run with docker will use podman.
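A small sketch of making that alias permanent, assuming a bash shell (adjust the file for your shell of choice):

```Shell
# Add the alias to your shell profile so docker commands are routed to podman
echo "alias docker=podman" >> ~/.bashrc
source ~/.bashrc

# This now actually runs "podman ps"
docker ps
```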
### LXC
LXC is a containerisation engine that enables users again to create multiple isolateed Linux container environments. Unlike Docker LXC acts as a hypervisor for create multiple Linux machines with separeate system files, networking features. Was around before Docker and then made a short comeback due to Docker shortcomings.
LXC is a containerisation engine that enables users to create multiple isolated Linux container environments. Unlike Docker, LXC acts as a hypervisor for creating multiple Linux machines with separate system files and networking features. It was around before Docker and then made a short comeback due to Docker's shortcomings.
LXC is as lightweight though as docker, and easily deployed.
LXC is just as lightweight as Docker, though, and easily deployed.
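If you want a quick feel for it, a hedged sketch using the classic LXC tooling might look like this (the container name, distribution and release are just examples):

```Shell
# Create an Ubuntu container from the download template
sudo lxc-create --name web01 --template download -- --dist ubuntu --release focal --arch amd64

# Start it and get a shell inside it
sudo lxc-start --name web01
sudo lxc-attach --name web01
```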
### Containerd
A standalone container runtime, Containerd brings simplicity and robustness as well as, of course, portability. Containerd was formerly a tool that ran as part of the Docker container services until Docker decided to graduate its components into standalone projects.
A project in the Cloud Native Computing Foundation, placing it in the same class with popular container tools like Kubernetes, Prometheus, and CoreDNS.
A project in the Cloud Native Computing Foundation, placing it in the same class as popular container tools like Kubernetes, Prometheus, and CoreDNS.
### Other Docker tooling
We could also mention toolings and options around Rancher, VirtualBox but we can cover them off in more detail another time.
We could also mention tooling and options around Rancher and VirtualBox, but we can cover them in more detail another time.
[**Gradle**](https://gradle.org/)
- Build scans allow teams to collaboratively debug their scripts and track the history of all builds.
- Execution options give teams the ability to continuously build so that whenever changes are inputted, the task is automatically executed.
- The custom repository layout gives teams the ability to treat any file directory structure as an artifact repository.
- The custom repository layout gives teams the ability to treat any file directory structure as an artefact repository.
[**Packer**](https://packer.io/)
@ -83,7 +84,7 @@ We could also mention toolings and options around Rancher, VirtualBox but we can
[**Logspout**](https://github.com/gliderlabs/logspout)
- Logging tool - The tools customisability allows teams to ship the same logs to multiple destinations.
- Logging tool - The tool's customizability allows teams to ship the same logs to multiple destinations.
- Teams can easily manage their files because the tool only requires access to the Docker socket.
- Completely open-sourced and easy to deploy.
@ -91,7 +92,7 @@ We could also mention toolings and options around Rancher, VirtualBox but we can
- Customize your pipeline using Logstash's pluggable framework.
- Easily parse and transform your data for analysis and to deliver business value.
- Logstashs variety of outputs let you route your data where you want.
- Logstash's variety of outputs lets you route your data where you want.
[**Portainer**](https://www.portainer.io/)
@ -99,20 +100,16 @@ We could also mention toolings and options around Rancher, VirtualBox but we can
- Create teams and assign roles and permissions to team members.
- Know what is running in each environment using the tool's dashboard.
## Resources
- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
- [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s)
- [WSL 2 with Docker getting started](https://www.youtube.com/watch?v=5RQbdMn04Oc)
- [Blog on gettng started building a docker image](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/)
- [Blog on getting started building a docker image](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/)
- [Docker documentation for building an image](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
- [YAML Tutorial: Everything You Need to Get Started in Minutes](https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started)
- [Podman | Daemonless Docker | Getting Started with Podman](https://www.youtube.com/watch?v=Za2BqzeZjBk)
- [LXC - Guide to building a LXC Lab](https://www.youtube.com/watch?v=cqOtksmsxfg)
- [LXC - Guide to building an LXC Lab](https://www.youtube.com/watch?v=cqOtksmsxfg)
See you on [Day 49](day49.md)

View File

@ -7,31 +7,32 @@ cover_image: null
canonical_url: null
id: 1049049
---
## The Big Picture: Kubernetes
In the last section we covered Containers, Containers fall short when it comes to scale and orchestration alone. The best we can do is use docker-compose to bring up multiple containers together. When it comes to Kubernetes which is a Container Orchestrator, this gives us the ability to scale up and down in an automated way or based on the load of your applications and services.
In the last section we covered Containers. Containers fall short when it comes to scale and orchestration alone; the best we can do is use docker-compose to bring up multiple containers together. Kubernetes, which is a Container Orchestrator, gives us the ability to scale up and down in an automated way or based on the load of your applications and services.
As a platform Kubernetes offers the ability to orchestrate containers according to your requirements and desired state. We are going to cover Kubernetes in this section as it is growing rapidly as the next wave of infrastructure. I would also suggest that from a DevOps perspective Kubernetes is just one platform that you will need to have a basic understanding of, you will also need to understand bare metal, virtualisation and most likely cloud based services as well. Kubernetes is just another option to run our applications.
As a platform Kubernetes offers the ability to orchestrate containers according to your requirements and desired state. We are going to cover Kubernetes in this section as it is growing rapidly as the next wave of infrastructure. I would also suggest that from a DevOps perspective Kubernetes is just one platform that you will need to have a basic understanding of, you will also need to understand bare metal, virtualisation and most likely cloud-based services as well. Kubernetes is just another option to run our applications.
### What is Container Orchestration?
I have mentioned Kubernetes and I have mentioned Container Orchestration, Kubernetes is the technology where as the container orchestration is the concept or the process behind the technology. Kubernetes is not the only Container Orchestration platform we also have Docker Swarm, HashiCorp Nomad and others. But Kubernetes is going from strength to strength so I want to cover Kubernetes but wanted to say that it is not the only one out there.
I have mentioned Kubernetes and I have mentioned Container Orchestration; Kubernetes is the technology, whereas container orchestration is the concept or the process behind the technology. Kubernetes is not the only Container Orchestration platform, we also have Docker Swarm, HashiCorp Nomad and others. But Kubernetes is going from strength to strength, so I want to cover Kubernetes, while noting that it is not the only one out there.
### What is Kubernetes?
The first thing you should read if you are new to Kubernetes is the official documentation, My experience of really deep diving into Kubernetes a little over a year ago was that this is going to be a steep learning curve. Coming from a virtualisation and storage background I was thinking how daunting this felt.
The first thing you should read if you are new to Kubernetes is the official documentation. My experience of really deep diving into Kubernetes a little over a year ago was that it was going to be a steep learning curve, and coming from a virtualisation and storage background I was thinking about how daunting this felt.
But actually the community, free learning resources and documentation is actually amazing. [Kubernetes.io](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/)
But the community, free learning resources and documentation are amazing. [Kubernetes.io](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/)
*Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.*
_Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available._
Important things to note from the above qoute, Kubernetes is Open-Source with a rich history that goes back to Google who donated the project to the Cloud Native computing foundation (CNCF) and it has now been progressed by the open-source community as well as large enterprise vendors contributing to make Kubernetes what it is today.
Important things to note from the above quote, Kubernetes is Open-Source with a rich history that goes back to Google who donated the project to the Cloud Native Computing Foundation (CNCF) and it has now been progressed by the open-source community as well as large enterprise vendors contributing to making Kubernetes what it is today.
I mentioned above that containers are great and in the previous section we spoke about how containers and container images have changed and accelerated the adoption of cloud-native systems. But containers alone are not going to give you the production ready experience you need from your application. Kubernetes gives us the following:
I mentioned above that containers are great and in the previous section, we spoke about how containers and container images have changed and accelerated the adoption of cloud-native systems. But containers alone are not going to give you the production-ready experience you need from your application. Kubernetes gives us the following:
- **Service discovery and load balancing** Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
- **Service discovery and load balancing** Kubernetes can expose a container using the DNS name or using their IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment is stable.
- **Storage orchestration** Kubernetes allows you to automatically mount a storage system of your choice, such as local storages, public cloud providers, and more.
- **Storage orchestration** Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
- **Automated rollouts and rollbacks** You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
@ -43,96 +44,96 @@ I mentioned above that containers are great and in the previous section we spoke
Kubernetes provides you with a framework to run distributed systems resiliently.
Container Orchestration manages the deployment, placement, and lifecycle of containers.
Container Orchestration manages the deployment, placement, and lifecycle of containers.
It also has many other responsibilities:
It also has many other responsibilities:
- Cluster management federates hosts into one target.
- Cluster management federates hosts into one target.
- Schedule management distributes containers across nodes through the scheduler.
- Service discovery knows where containers are located and distributes client requests across them.
- Service discovery knows where containers are located and distributes client requests across them.
- Replication ensures that the right number of nodes and containers are available for the requested workload.
- Replication ensures that the right number of nodes and containers are available for the requested workload.
- Health management detects and replaces unhealthy containers and nodes.
- Health management detects and replaces unhealthy containers and nodes.
### Main Kubernetes Components
Kubernetes is a container orchestrator to provision, manage, and scale apps. You can use it to manage the lifecycle of containerized apps in a cluster of nodes, which is a collection of worker machines such as VMs or physical machines.
Kubernetes is a container orchestrator to provision, manage, and scale apps. You can use it to manage the lifecycle of containerized apps in a cluster of nodes, which is a collection of worker machines such as VMs or physical machines.
Your apps might need many other resources to run, such as volumes, networks, and secrets that can help you connect to databases, talk to firewalled back ends, and secure keys. With Kubernetes, you can add those resources into your app. Infrastructure resources that your apps need are managed declaratively.
Your apps might need many other resources to run, such as volumes, networks, and secrets that can help you connect to databases, talk to firewalled back ends, and secure keys. With Kubernetes, you can add those resources to your app. Infrastructure resources that your apps need are managed declaratively.
The key paradigm of Kubernetes is its declarative model. You provide the state that you want and Kubernetes makes it happen. If you need five instances, you don't start five separate instances on your own. Instead, you tell Kubernetes that you need five instances, and Kubernetes automatically reconciles the state. If something goes wrong with one of your instances and it fails, Kubernetes still knows the state that you want and creates instances on an available node.
The key paradigm of Kubernetes is its declarative model. You provide the state that you want and Kubernetes makes it happen. If you need five instances, you don't start five separate instances on your own. Instead, you tell Kubernetes that you need five instances, and Kubernetes automatically reconciles the state. If something goes wrong with one of your instances and it fails, Kubernetes still knows the state that you want and creates instances on an available node.
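As a minimal sketch of that declarative model (an illustrative manifest, not something taken from this walkthrough), you describe the five instances and let Kubernetes reconcile towards them:

```Shell
# Declare the desired state: a Deployment with five nginx replicas
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
EOF

# Kubernetes keeps reconciling towards five Pods, even if one of them fails
kubectl get deployment web
```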
### Node
**Control Plane**
#### Control Plane
Every Kubernetes cluster requires a Control Plane node; the control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events.
![](Images/Day49_Kubernetes1.png)
**Worker Node**
A worker machine that runs Kubernetes workloads. It can be a physical (bare metal) machine or a virtual machine (VM). Each node can host one or more pods. Kubernetes nodes are managed by a control plane
#### Worker Node
A worker machine that runs Kubernetes workloads. It can be a physical (bare metal) machine or a virtual machine (VM). Each node can host one or more pods. Kubernetes nodes are managed by a control plane
![](Images/Day49_Kubernetes2.png)
There are other node types but I won't be covering them here.
**kubelet**
#### kubelet
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.
The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn't manage containers which were not created by Kubernetes.
![](Images/Day49_Kubernetes3.png)
**kube-proxy**
#### kube-proxy
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.
kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.
![](Images/Day49_Kubernetes4.png)
**Container runtime**
#### Container runtime
The container runtime is the software that is responsible for running containers.
The container runtime is the software that is responsible for running containers.
Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).
![](Images/Day49_Kubernetes5.png)
### Cluster
A cluster is a group of nodes, where a node can be a physical machine or virtual machines. Each of the nodes will have the container runtime (Docker) and will also be running a kubelet service, which is an agent that takes in the commands from the Master controller (more on that later) and a Proxy, that is used to proxy connections to the Pods from another component (Services, that we will see later).
A cluster is a group of nodes, where a node can be a physical machine or a virtual machine. Each of the nodes will have the container runtime (Docker) and will also be running a kubelet service, which is an agent that takes in the commands from the Master controller (more on that later) and a Proxy, that is used to proxy connections to the Pods from another component (Services, that we will see later).
On our control plane which can be made highly available will contain some unique roles compared to the worker nodes, the most important will be the kube API server, this is where any communication will take place in order to get information or push information to our Kubernetes cluster.
Our control plane, which can be made highly available, contains some unique roles compared to the worker nodes; the most important is the kube API server, which is where any communication takes place to get information from, or push information to, our Kubernetes cluster.
**Kube API-Server**
#### Kube API-Server
The Kubernetes API server validates and configures data for the api objects which include pods, services, replicationcontrollers, and others. The API Server services REST operations and provides the frontend to the cluster's shared state through which all other components interact.
The Kubernetes API server validates and configures data for the API objects which include pods, services, replication controllers, and others. The API Server services REST operations and provides the frontend to the cluster's shared state through which all other components interact.
**Scheduler**
#### Scheduler
The Kubernetes scheduler is a control plane process which assigns Pods to Nodes. The scheduler determines which Nodes are valid placements for each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node.
**Controller Manager**
#### Controller Manager
The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In applications of robotics and automation, a control loop is a non-terminating loop that regulates the state of the system. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.
**etcd**
#### etcd
Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.
![](Images/Day49_Kubernetes6.png)
**kubectl**
#### kubectl
In order to manage this from a CLI point of view we have kubectl, kubectl interacts with the API server.
To manage this from a CLI point of view we have kubectl; kubectl interacts with the API server.
The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
@ -140,25 +141,25 @@ The Kubernetes command-line tool, kubectl, allows you to run commands against Ku
### Pods
A Pod is a group of containers that form a logical application. For e.g. If you have a web application that is running a NodeJS container and also a MySQL container, then both these containers will be located in a single Pod. A Pod can also share common data volumes and they also share the same networking namespace. Remember that Pods are ephemeral and they could be brought up and down by the Master Controller. Kubernetes uses a simple but effective means to identify the Pods via the concepts of Labels (name values).
A Pod is a group of containers that form a logical application. E.g. If you have a web application that is running a NodeJS container and also a MySQL container, then both these containers will be located in a single Pod. A Pod can also share common data volumes and they also share the same networking namespace. Remember that Pods are ephemeral and they could be brought up and down by the Master Controller. Kubernetes uses a simple but effective means to identify the Pods via the concepts of Labels (name values).
- Pods handle Volumes, Secrets, and configuration for containers.
- Pods handle Volumes, Secrets, and configuration for containers.
- Pods are ephemeral. They are intended to be restarted automatically when they die.
- Pods are ephemeral. They are intended to be restarted automatically when they die.
- Pods are replicated when the app is scaled horizontally by the ReplicationSet. Each Pod will run the same container code.
- Pods are replicated when the app is scaled horizontally by the ReplicationSet. Each Pod will run the same container code.
- Pods live on Worker Nodes.
- Pods live on Worker Nodes.
![](Images/Day49_Kubernetes8.png)
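A quick hedged sketch of creating a single Pod and seeing those labels in action (the names are examples):

```Shell
# Run a single Pod and give it a label we can select on later
kubectl run webapp --image=nginx --labels="app=webapp"

# Pods are identified and selected via their labels
kubectl get pods --show-labels
kubectl get pods -l app=webapp
```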
### Deployments
- You can just decide to run Pods but when they die they die.
- You can just decide to run Pods but when they die they die.
- A Deployment will enable your pod to run continuously.
- A Deployment will enable your pod to run continuously.
- Deployments allow you to update a running app without downtime.
- Deployments allow you to update a running app without downtime.
- Deployments also specify a strategy to restart Pods when they die
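A hedged sketch of those update and restart behaviours, assuming a Deployment called `web` with a container named `nginx` (such as the one sketched earlier) and illustrative image tags:

```Shell
# Roll out a new image version with no downtime
kubectl set image deployment/web nginx=nginx:1.21
kubectl rollout status deployment/web

# If the new version misbehaves, roll back to the previous revision
kubectl rollout undo deployment/web

# The Deployment keeps the desired number of Pods running, restarting any that die
kubectl get pods -l app=web
```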
@ -166,47 +167,45 @@ A Pod is a group of containers that form a logical application. For e.g. If you
### ReplicaSets
- The Deployment can also create the ReplicaSet
- The Deployment can also create the ReplicaSet
- A ReplicaSet ensures your app has the desired number of Pods
- A ReplicaSet ensures your app has the desired number of Pods
- ReplicaSets will create and scale Pods based on the Deployment
- Deployments, ReplicaSets, Pods are not exclusive but can be
- ReplicaSets will create and scale Pods based on the Deployment
- Deployments, ReplicaSets, and Pods are not exclusive but can be
### StatefulSets
- Does your App require you to keep information about its state?
- Does your App require you to keep information about its state?
- A database needs state
- A database needs state
- A StatefulSets Pods are not interchangeable.
- Each Pod has a unique, persistent identifier that the controller maintains over any rescheduling.
- A StatefulSets Pods are not interchangeable.
- Each pod has a unique, persistent identifier that the controller maintains over any rescheduling.
![](Images/Day49_Kubernetes10.png)
### DaemonSets
- DaemonSets are for continuous process
- DaemonSets are for continuous process
- They run one Pod per Node.
- They run one Pod per Node.
- Each new node added to the cluster gets a pod started
- Each new node added to the cluster gets a pod started
- Useful for background tasks such as monitoring and log collection
- Useful for background tasks such as monitoring and log collection
- Each Pod has a unique, persistent identifier that the controller maintains over any rescheduling.
- Each pod has a unique, persistent identifier that the controller maintains over any rescheduling.
![](Images/Day49_Kubernetes11.png)
### Services
- A single endpoint to access Pods
- A single endpoint to access Pods
- a unified way to route traffic to a cluster and eventually to a list of Pods.
- a unified way to route traffic to a cluster and eventually to a list of Pods.
- By using a Service, Pods can be brought up and down without affecting anything.
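A hedged one-liner sketch of that single endpoint, again assuming the `web` Deployment from the earlier sketch exists:

```Shell
# Put a single, stable Service endpoint in front of the Deployment's Pods
kubectl expose deployment web --port=80 --target-port=80 --type=ClusterIP
kubectl get service web
```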
@ -222,7 +221,7 @@ This is just a quick overview and notes around the fundamental building blocks o
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistant Storage
- Persistent Storage
- Stateful Apps
## Resources

View File

@ -7,13 +7,14 @@ cover_image: null
canonical_url: null
id: 1049046
---
## Choosing your Kubernetes platform
I wanted to use this session to breakdown some of the platforms or maybe distributions is a better term to use here, one thing that has been a challenge in the Kubernetes world is removing complexity.
I wanted to use this session to break down some of the platforms, or maybe distributions is a better term to use here; one thing that has been a challenge in the Kubernetes world is removing complexity.
Kubernetes the hard way walks through how to build out from nothing to a full blown functional Kubernetes cluster, obviously this is to the extreme but more and more at least the people I am speaking to are wanting to remove that complexity and run a managed Kubernetes cluster. The issue there is that it costs more money but the benefits could be if you use a managed service do you really need to know the underpinning node architecture and what is happening from a Control Plane node point of view when generally you do not have access to this.
Kubernetes the hard way walks through how to build a full-blown functional Kubernetes cluster from nothing. This is the extreme end, but more and more, at least among the people I am speaking to, the desire is to remove that complexity and run a managed Kubernetes cluster. The issue there is that it costs more money, but the benefit could be that with a managed service you do not need to know the underpinning node architecture or what is happening from a Control Plane node point of view, when generally you do not have access to this.
Then we have the local development distributions that enable us to use our own systems and run a local version of Kubernetes so developers can have the full working environment to run their apps in the platform they are intended for.
Then we have the local development distributions that enable us to use our systems and run a local version of Kubernetes so developers can have the full working environment to run their apps in the platform they are intended for.
The general basis of all of these concepts is that they are all a flavour of Kubernetes which means we should be able to freely migrate and move our workloads where we need them to suit our requirements.
@ -21,30 +22,31 @@ A lot of our choice will also depend on what investments have been made. I menti
### Bare-Metal Clusters
An option for many could be running your Linux OS straight onto a number of physical servers to create our cluster, it could also be Windows but I have not heard much about the adoption rate around Windows, Containers and Kubernetes. Obviously if you are a business and you have made a CAPEX decision to buy your physical servers then this might be the way in which you go when building out your Kubernetes cluster, the management and admin side here means you are going to have to build yourself and manage everything from the ground up.
An option for many could be running your Linux OS straight onto several physical servers to create our cluster; it could also be Windows but I have not heard much about the adoption rate around Windows, Containers and Kubernetes. If you are a business and you have made a CAPEX decision to buy your physical servers then this might be how you go when building out your Kubernetes cluster; the management and admin side here means you are going to have to build it yourself and manage everything from the ground up.
### Virtualisation
Regardless of test and learning environments or enterprise ready Kubernetes clusters virtualisation is a great way to go, typically the ability to spin up virtual machines to act as your nodes and then cluster those together. You have the underpinning architecture, effciency and speed of virtualisation as well as leveraging that existing spend. VMware for example offers a great solution for both Virtual Machines and Kubernetes in various different flavours.
Regardless of test and learning environments or enterprise-ready Kubernetes clusters virtualisation is a great way to go, typically the ability to spin up virtual machines to act as your nodes and then cluster those together. You have the underpinning architecture, efficiency and speed of virtualisation as well as leveraging that existing spend. VMware for example offers a great solution for both Virtual Machines and Kubernetes in various flavours.
My first ever Kubernetes cluster was build based on Virtualisation using Microsoft Hyper-V on an old server that I had which was capable of running a few VMs as my nodes.
My first ever Kubernetes cluster was built based on Virtualisation using Microsoft Hyper-V on an old server that I had which was capable of running a few VMs as my nodes.
### Local Desktop options
There are a number of options when it comes to running a local Kubernetes cluster on your desktop or laptop. This as previously said gives developers the ability to see what their app will look like without having to have multiple costly or complex clusters. Personally this has been one that I have used a lot and in particular I have been using minikube. It has some great functionality and add ons which changes the way you get something up and running.
There are several options when it comes to running a local Kubernetes cluster on your desktop or laptop. This, as previously said, gives developers the ability to see what their app will look like without having to have multiple costly or complex clusters. Personally, this has been one that I have used a lot and in particular, I have been using minikube. It has some great functionality and add-ons which change the way you get something up and running.
### Kubernetes Managed Services
I have mentioned virtualisation, and this can be achieved with hypervisors locally, but we know from previous sections we could also leverage VMs in the public cloud to act as our nodes. What I am talking about here with Kubernetes managed services are the offerings we see from the large hyperscalers but also from MSPs, removing layers of management and control away from the end user; this could be removing the control plane from the end user, which is what happens with Amazon EKS, Microsoft AKS and Google Kubernetes Engine (GKE).
### Overwhelming choice
I mean choice is great but there is a point where things become overwhelming and this is really not an depth look into all options within each catagory listed above. On top of the above we also have OpenShift which is from Red Hat and this option can really be ran across the options above in all the major cloud providers and probably today gives the best overall useability to the admins regardless where clusters are deployed.
I mean, the choice is great, but there is a point where things become overwhelming, and this is not an in-depth look into all options within each category listed above. On top of the above, we also have OpenShift from Red Hat; this option can be run across the options above in all the major cloud providers and probably today gives the best overall usability to the admins regardless of where clusters are deployed.
So where do you start from your learning perspective, as I said I started with the virtualisation route but that was because I had access to a physical server which I could use for the purpose, I appreciate and in fact since then I no longer have this option.
So where do you start from your learning perspective? As I said, I started with the virtualisation route, but that was because I had access to a physical server which I could use for the purpose; in fact, since then I no longer have this option.
My actual advice now would be to use Minikube as a first option or Kind (Kubernetes in Docker) but Minikube gives us some additional benefits which almost abstracts the complexity out as we can just use add ons and get things built out really quickly and we can then blow it away when we are finished, we can run multiple clusters, we can run it almost anywhere, cross platform and hardware agnostic.
My actual advice now would be to use Minikube as a first option or Kind (Kubernetes in Docker), but Minikube gives us some additional benefits which almost abstract the complexity out, as we can just use add-ons and get things built out quickly and then blow it away when we are finished; we can run multiple clusters, and we can run it almost anywhere, cross-platform and hardware agnostic.
I have been through a bit of a journey with my learning around Kubernetes so I am going to leave the platform choice and specifics here to list out the options that i have tried to give me a better understanding around Kubernetes the platform and where it can run. What I might do with the below blog posts is take another look at these update them and bring them more into here vs them being links to blog posts.
I have been through a bit of a journey with my learning around Kubernetes, so I am going to leave the platform choice and specifics here and list out the options that I have tried to give me a better understanding of Kubernetes the platform and where it can run. What I might do with the below blog posts is take another look at these, update them and bring them more into here vs them being links to blog posts.
- [Kubernetes playground How to choose your platform](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1)
- [Kubernetes playground Setting up your cluster](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2)
@ -64,7 +66,7 @@ I have been through a bit of a journey with my learning around Kubernetes so I a
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistant Storage
- Persistent Storage
- Stateful Apps
## Resources

View File

@ -2,30 +2,31 @@
title: '#90DaysOfDevOps - Deploying your first Kubernetes Cluster - Day 51'
published: false
description: 90DaysOfDevOps - Deploying your first Kubernetes Cluster
tags: "devops, 90daysofdevops, learning"
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048778
---
## Deploying your first Kubernetes Cluster
In this post we are going get a Kubernetes cluster up and running on our local machine using minikube, this will give us a baseline Kubernetes cluster for the rest of the Kubernetes section, although we will look at deploying a Kubernetes cluster also in VirtualBox later on. The reason for choosing this method vs spinning a managed Kubernetes cluster up in the public cloud is that this is going to cost money even with the free tier, I shared some blogs though if you would like to spin up that environment in the previous section [Day 50](day50.md).
In this post we are going to get a Kubernetes cluster up and running on our local machine using minikube; this will give us a baseline Kubernetes cluster for the rest of the Kubernetes section, although we will also look at deploying a Kubernetes cluster in VirtualBox later on. The reason for choosing this method vs spinning up a managed Kubernetes cluster in the public cloud is that it is going to cost money even with the free tier; I shared some blogs in the previous section [Day 50](day50.md) if you would like to spin up that environment.
### What is Minikube?
*“minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. We proudly focus on helping application developers and new Kubernetes users.”*
> “minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. We proudly focus on helping application developers and new Kubernetes users.”
You might not fit into the above but I have found minikube is a great little tool if you just want to test something out in a Kubernetes fashion, you can easily deploy and app and they have some amazing add ons which I will also cover.
You might not fit into the above but I have found minikube is a great little tool if you just want to test something out in a Kubernetes fashion, you can easily deploy an app and they have some amazing add-ons which I will also cover.
To begin with regardless of your workstation OS, you can run minikube. First, head over to the [project page here](https://minikube.sigs.k8s.io/docs/start/). The first option you have is choosing your installation method. I did not use this method, but you might choose to vs my way (my way is coming up).
To begin with, regardless of your workstation OS, you can run minikube. First, head over to the [project page here](https://minikube.sigs.k8s.io/docs/start/). The first option you have is choosing your installation method. I did not use this method, but you might choose to vs my way (my way is coming up).
mentioned below it states that you need to have a “Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware” this is where MiniKube will run and the easy option and unless stated in the repository I am using Docker. You can install Docker on your system using the following [link](https://docs.docker.com/get-docker/).
As mentioned below, it states that you need to have a “Container or virtual machine manager, such as Docker, HyperKit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware”; this is where minikube will run, and the easy option, unless stated otherwise in the repository, is Docker, which is what I am using. You can install Docker on your system using the following [link](https://docs.docker.com/get-docker/).
![](Images/Day51_Kubernetes1.png)
### My way of installing minikube and other prereqs…
I have been using arkade for some time now to get all those Kubernetes tools and CLIs, you can see the installation steps on this [github repository](https://github.com/alexellis/arkade) for getting started with Arkade. I have also mentioned this in other blog posts where I needed something installing. The simplicity of just hitting arkade get and then seeing if your tool or cli is available is handy. In the Linux section we spoke about package manager and the process for getting our software, you can think about Arkade as that marketplace for all your apps and clis for Kubernetes. A very handy little tool to have on your systems, written in Golang and cross platform.
I have been using arkade for some time now to get all those Kubernetes tools and CLIs; you can see the installation steps on this [github repository](https://github.com/alexellis/arkade) for getting started with Arkade. I have also mentioned this in other blog posts where I needed something installed. The simplicity of just hitting arkade get and then seeing if your tool or CLI is available is handy. In the Linux section, we spoke about package managers and the process for getting our software; you can think of Arkade as that marketplace for all your apps and CLIs for Kubernetes. A very handy little tool to have on your systems, written in Golang and cross-platform.
![](Images/Day51_Kubernetes2.png)
@ -37,21 +38,21 @@ We will also need kubectl as part of our tooling so you can also get this via ar
### Getting a Kubernetes cluster up and running
For this particular section I want to cover the options available to us when it comes to getting a Kubernetes cluster up and running on your local machine. We could simply run the following command and it would spin up a cluster for you to use.
For this particular section, I want to cover the options available to us when it comes to getting a Kubernetes cluster up and running on your local machine. We could simply run the following command and it would spin up a cluster for you to use.
minikube is used on the command line, and simply put, once you have everything installed you can run `minikube start` to deploy your first Kubernetes cluster. You will see below that the Docker driver is the default as to where we will be running our nested virtualisation node. I mentioned at the start of the post the other options available; those other options help when you want to expand what this local Kubernetes cluster needs to look like.
A single Minikube cluster is going to consist of a single docker container in this instance which will have the control plane node and worker node in one instance. Where as typically you would separate those nodes out. Something we will cover in the next section where we look at still home lab type Kubernetes environments but a little closer to production architecture.
A single Minikube cluster is going to consist of a single docker container in this instance which will have the control plane node and worker node in one instance. Whereas typically you would separate those nodes. Something we will cover in the next section where we look at still home lab type Kubernetes environments but a little closer to production architecture.
![](Images/Day51_Kubernetes4.png)
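A minimal sketch of that first cluster (the driver flag is optional; Docker is picked by default when it is available):

```Shell
# Spin up a single-node cluster using the Docker driver
minikube start --driver=docker

# Check the cluster components are up
minikube status
```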
I have mentioned this a few times now, I really like minikube because of the addons available, the ability to deploy a cluster with a simple command including all the required addons from the start really helps me deploy the same required setup everytime.
I have mentioned this a few times now, I like minikube because of the add-ons available, the ability to deploy a cluster with a simple command including all the required addons from the start helps me deploy the same required setup every time.
Below you can see a list of those addons, I generally use the `csi-hostpath-driver` and the `volumesnapshots` addons but you can see the long list below. Sure these addons can generally be deployed using Helm again something we will cover later on in the Kubernetes section but this makes things much simpler.
Below you can see a list of those add-ons; I generally use the `csi-hostpath-driver` and the `volumesnapshots` add-ons, but you can see the long list below. Sure, these add-ons can generally be deployed using Helm, again something we will cover later on in the Kubernetes section, but this makes things much simpler.
![](Images/Day51_Kubernetes5.png)
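A hedged sketch of working with those add-ons from the CLI:

```Shell
# See every add-on minikube knows about and its current state
minikube addons list

# Enable the two add-ons mentioned above
minikube addons enable volumesnapshots
minikube addons enable csi-hostpath-driver
```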
I am also defining in our project some additional configuration, apiserver is set to 6433 instead of a random API port, I define the container runtime also to containerd however docker is default and CRI-O is also available. I am also setting a specific Kubernetes version.
I am also defining some additional configuration in our project: the apiserver is set to 6433 instead of a random API port, I set the container runtime to containerd (docker is the default and CRI-O is also available), and I am also setting a specific Kubernetes version.
![](Images/Day51_Kubernetes6.png)
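Pulling that configuration together, the start command looks something like the sketch below; the port comes from the text above and the Kubernetes version is a placeholder, so treat both as examples rather than the project's exact values:

```Shell
# Start minikube with add-ons, API server port, container runtime and Kubernetes version pre-set
minikube start --addons=volumesnapshots,csi-hostpath-driver \
  --apiserver-port=6433 \
  --container-runtime=containerd \
  --kubernetes-version=v1.21.2
```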
@ -59,7 +60,7 @@ Now we are ready to deploy our first Kubernetes cluster using minikube. I mentio
![](Images/Day51_Kubernetes7.png)
Or you can download kubectl cross-platform from the following links:
- [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux)
- [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos)
Once you have kubectl installed, we can then interact with our cluster.
### What is kubectl?
We now have our minikube Kubernetes cluster up and running. I have asked you to install both minikube and kubectl; I have explained at least what minikube does, but I have not explained what kubectl is and what it does.
kubectl is a CLI tool that allows you to interact with Kubernetes clusters; we are using it here to interact with our minikube cluster, but we would also use kubectl to interact with our enterprise clusters across the public cloud.
We use kubectl to deploy applications and to inspect and manage cluster resources. A much better [Overview of kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) can be found in the Kubernetes official documentation.
kubectl interacts with the API server found on the Control Plane node, which we briefly covered in an earlier post.
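As a quick sanity check that kubectl can reach the cluster we just started, a couple of read-only commands are enough; these are standard kubectl commands and the output will depend on your own cluster and profile.

```shell
# Which cluster/context is kubectl currently pointing at?
kubectl config current-context

# Ask the API server for cluster and node information
kubectl cluster-info
kubectl get nodes
```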
### kubectl cheat sheet
Along with the official documentation, I have also found myself with this page open all the time when looking for kubectl commands. [Unofficial Kubernetes](https://unofficial-kubernetes.readthedocs.io/en/latest/)
| Listing Resources        |                                        |
| ------------------------ | -------------------------------------- |
| kubectl get nodes        | List all nodes in the cluster          |
| kubectl get namespaces   | List all namespaces in the cluster     |
| kubectl get pods         | List all pods in the default namespace |
| kubectl get pods -n name | List all pods in the "name" namespace  |
| Creating Resources            |                                            |
| ----------------------------- | ------------------------------------------ |
| kubectl create namespace name | Create a namespace called "name"           |
| kubectl create -f [filename]  | Create a resource from a JSON or YAML file |
| Editing Resources            |                |
| ---------------------------- | -------------- |
| kubectl edit svc/servicename | Edit a service |
| More detail on Resources |                                                        |
| ------------------------ | ------------------------------------------------------ |
| kubectl describe nodes   | Display the state of any number of resources in detail |
| Delete Resources   |                                                    |
| ------------------ | -------------------------------------------------- |
| kubectl delete pod | Remove resources, either from stdin or from a file |
You will find yourself wanting to know the short names for some of the kubectl commands; for example, `-n` is the short name for `--namespace`, which makes a command easier to type and also keeps your scripts much tidier. A short example follows the table below.
| Short name | Full name                  |
| ---------- | -------------------------- |
| csr | certificatesigningrequests |
| cs | componentstatuses |
| cm | configmaps |
| sa | serviceaccounts |
| svc | services |
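As a small illustration of how the short names and flags combine (the namespace name here is just an example):

```shell
# These two commands are equivalent
kubectl get services --namespace nginx
kubectl get svc -n nginx
```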
The final thing to add here is that I created another project around minikube to help me quickly spin up demo environments to display data services and protect those workloads with Kasten K10. [Project Pace](https://github.com/MichaelCade/project_pace) can be found there and I would love your feedback or interaction; it also includes some automated ways of deploying your minikube clusters and creating different data services applications.
Next up, we will get into deploying multiple nodes into virtual machines using VirtualBox, but we are going to hit the easy button there as we did in the Linux section where we used Vagrant to quickly spin up the machines and deploy our software how we want it.
I added this list to the post yesterday; these are walkthrough blogs I have done around different Kubernetes clusters being deployed.
### What we will cover in the series on Kubernetes
We have started covering some of these mentioned below but we are going to get more hands-on tomorrow with our second cluster deployment, then we can start deploying applications into our clusters.
- Kubernetes Architecture
- Kubectl Commands
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistent Storage
- Stateful Apps
## Resources
If you have FREE resources that you have used then please feel free to add them here via a PR to the repository and I will be happy to include them.
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)

cover_image: null
canonical_url: null
id: 1049050
---
## Setting up a multinode Kubernetes Cluster
I wanted this title to be "Setting up a multinode Kubernetes cluster with Vagrant" but thought it might be a little too long!
In the session yesterday we used a cool project to deploy our first Kubernetes cluster and get a little hands-on with the most important CLI tool you will come across when using Kubernetes (kubectl).
Here we are going to use VirtualBox as our base, but as mentioned the last time we spoke about Vagrant back in the Linux section, we can use any supported hypervisor or virtualisation tool. It was [Day 14](day14.md) when we went through and deployed an Ubuntu machine for the Linux section.
### A quick recap on Vagrant
Vagrant is a CLI utility that manages the lifecycle of your virtual machines. We can use Vagrant to spin up and down virtual machines across many different platforms including vSphere, Hyper-V, VirtualBox and also Docker. It does have other providers, but we will stick with VirtualBox here so we are good to go.
I am going to be using this [blog and repository](https://devopscube.com/kubernetes-cluster-vagrant/) as a baseline to walk through the configuration. I would however advise that if this is your first time deploying a Kubernetes cluster then maybe also look into how you would do this manually, so at least you know what that looks like. Although I will say that this Day 0 operations effort is being made more efficient with every release of Kubernetes. I liken this very much to the days of VMware and ESX, and how you would need at least a day to deploy 3 ESX servers; now we can have that up and running in an hour. We are heading in that direction when it comes to Kubernetes.
### Kubernetes Lab environment
I have uploaded in the [Kubernetes folder](Kubernetes) the vagrantfile that we will be using to build out our environment. Grab this and navigate to this directory in your terminal. I am again using Windows so I will be using PowerShell to perform my workstation commands with Vagrant. If you do not have Vagrant then you can use arkade; we covered this yesterday when installing minikube and other tools. A simple command `arkade get vagrant` should see you download and install the latest version of Vagrant.
When you are in your directory you can simply run `vagrant up`, and if all is configured correctly you should see the following kick off in your terminal.
![](Images/Day52_Kubernetes1.png)
In the terminal, you are going to see several steps taking place, but in the meantime let's take a look at what we are building here.
![](Images/Day52_Kubernetes2.png)
From the above, you can see that we are going to build out 3 virtual machines; we will have a control plane node and then two worker nodes. If you head back to [Day 49](day49.md) you will see some more description of these areas we see in the image.
Also in the image, we indicate that our kubectl access will come from outside of the cluster and hit the kube-apiserver, when in fact as part of the vagrant provisioning we are deploying kubectl on each of these nodes so that we can access the cluster from within each of our nodes.
The process of building out this lab could take anything from 5 minutes to 30 minutes depending on your setup.
I am going to cover the scripts shortly as well, but you will notice if you look into the vagrant file that we are calling on 3 scripts as part of the deployment, and this is really where the cluster is created. We have seen how easy it is to use Vagrant to deploy our virtual machines and OS installations using vagrant boxes, but having the ability to run a shell script as part of the deployment process is where it gets quite interesting for automating these lab build-outs.
Once complete, we can then SSH to one of our nodes; `vagrant ssh master` from the terminal should get you access, and the default username and password is `vagrant/vagrant`.
You can also use `vagrant ssh node01` and `vagrant ssh node02` to gain access to the worker nodes should you wish.
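For reference, the handful of Vagrant commands used throughout this lab looks like the below; this is all standard Vagrant CLI, nothing specific to this project.

```shell
vagrant up              # build and provision all VMs defined in the vagrantfile
vagrant status          # check the state of the VMs
vagrant ssh master      # SSH to the control plane node (vagrant/vagrant)
vagrant ssh node01      # SSH to a worker node
vagrant halt            # power the VMs off
vagrant destroy -f      # tear the whole lab down
```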
Now that we are in one of the above nodes in our new cluster, we can issue `kubectl get nodes` to confirm everything is up.
![](Images/Day52_Kubernetes4.png)
At this point, we have a running 3 node cluster, with 1 control plane node and 2 worker nodes.
### Vagrantfile and Shell Script walkthrough
If we take a look at our vagrantfile, you will see that we are defining several worker nodes, networking IP addresses for the bridged network within VirtualBox, and then some naming. Another thing you will notice is that we are also calling upon some scripts that we want to run on specific hosts.
```ruby
NUM_WORKER_NODES=2
# ... (remaining variables elided; see the full vagrantfile in the Kubernetes folder)

Vagrant.configure("2") do |config|
  # ... master and worker node definitions, networking and the
  # provisioning script calls are elided in this excerpt ...
end
```

Let's break down those scripts that are being run. We have three scripts listed in the above vagrantfile to run on specific nodes.
`master.vm.provision "shell", path: "scripts/common.sh"`
This script above is going to focus on getting the nodes ready; it is going to be run on all 3 of our nodes and it will remove any existing Docker components and reinstall Docker and containerd as well as kubeadm, kubelet and kubectl. This script will also update existing software packages on the system.
`master.vm.provision "shell", path: "scripts/master.sh"`
This script sets up our control plane node, and a third script, run on the worker nodes, is simply going to take the config created by the master and join our nodes to the cluster.
### Access to the Kubernetes cluster
Now we have two clusters deployed: the minikube cluster that we deployed in the previous section, and the new 3 node cluster we just deployed to VirtualBox.
That config file, which you will also have access to on the machine you ran vagrant from, contains what we need to gain access to our cluster from our workstation.
Before we show that, let me touch on the context.
![](Images/Day52_Kubernetes5.png)
Context is important; the ability to access your Kubernetes cluster from your desktop or laptop is required. There are lots of different options out there and people use different operating systems as their daily drivers.
By default, the Kubernetes CLI client (kubectl) uses `C:\Users\username\.kube\config` (or `$HOME/.kube/config` on Linux and macOS) to store the Kubernetes cluster details such as endpoint and credentials. If you have deployed a cluster you will be able to see this file in that location. But if you have been using the master node to run all of your kubectl commands so far, via SSH or other methods, then this post will hopefully help you get to grips with connecting from your workstation.
We then need to grab the kubeconfig file from the cluster (we can also get this from the machine we ran vagrant from, as mentioned above).
![](Images/Day52_Kubernetes6.png)
We then want to take a copy of that config file and move it to our `$HOME/.kube/config` location.
![](Images/Day52_Kubernetes7.png)
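A minimal sketch of that copy step from the machine you ran vagrant on, assuming the provisioning script left a kubeconfig in the vagrant user's home directory on the master node (adjust the paths to wherever your config actually lives):

```shell
# Pull the kubeconfig out of the master VM and place it where kubectl expects it
mkdir -p $HOME/.kube
vagrant ssh master -c "cat ~/.kube/config" > $HOME/.kube/config
```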
Now from your local workstation, you will be able to run `kubectl cluster-info` and `kubectl get nodes` to validate that you have access to your cluster.
![](Images/Day52_Kubernetes8.png)
I have added this list of walkthrough blogs I have done around different Kubernetes clusters being deployed.
### What we will cover in the series on Kubernetes
We have started covering some of these mentioned below but we are going to get more hands-on tomorrow with our second cluster deployment, then we can start deploying applications into our clusters.
- Kubernetes Architecture
- Kubectl Commands
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistent Storage
- Stateful Apps
## Resources
If you have FREE resources that you have used then please feel free to add them here via a PR to the repository and I will be happy to include them.
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)

title: '#90DaysOfDevOps - Rancher Overview - Hands On - Day 53'
published: false
description: 90DaysOfDevOps - Rancher Overview - Hands On
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048742
---
## Rancher Overview - Hands On
In this section we are going to take a look at Rancher. So far everything we have done has been in the CLI using kubectl, but we have a few good UIs and multi-cluster management tools to give our operations teams good visibility into our cluster management.
According to their [site](https://rancher.com/):
> Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure while providing DevOps teams with integrated tools for running containerized workloads.
Rancher enables us to deploy production-grade Kubernetes clusters from pretty much any location and then provides centralised authentication, access control and observability. I mentioned in a previous section that there is almost an overwhelming choice when it comes to Kubernetes and where you should or could run it; looking at Rancher, it really doesn't matter where they are.
### Deploy Rancher
The first thing we need to do is deploy Rancher on our local workstation. There are a few ways and locations you can choose to proceed with this step; for me, I want to use my local workstation and run Rancher as a Docker container. By running the command below we will pull down a container image and then have access to the Rancher UI.
Other Rancher deployment methods are available in the [Rancher Quick-Start-Guide](https://rancher.com/docs/rancher/v2.6/en/quick-start-guide/deployment/).
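For reference, the single-node Docker install from that quick-start guide looks something like the below; treat it as a sketch and check the guide for the currently recommended flags and image tag.

```shell
# Run Rancher as a single Docker container, exposing the UI on ports 80/443
sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
```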
Follow the instructions below to get the password required; because I am using Windows, I ran these commands in PowerShell.
![](Images/Day53_Kubernetes3.png)
We can then take the above password and log in; the next page is where we can define a new password.
![](Images/Day53_Kubernetes4.png)
Once we have done the above we will then be logged in and we can see our opening screen. As part of the Rancher deployment, we will also see a local K3s cluster provisioned.
![](Images/Day53_Kubernetes5.png)
### A quick tour of Rancher
The first thing for us to look at is our locally deployed K3s cluster. You can see below that we get a good visual of what is happening inside our cluster. This is the default deployment and we have not yet deployed anything to this cluster. You can see it is made up of 1 node and has 5 deployments. Then you can also see that there are some stats on pods, cores and memory.
![](Images/Day53_Kubernetes6.png)
On the left-hand menu, we also have an Apps & Marketplace tab. This allows us to choose applications we would like to run on our clusters; as mentioned previously, Rancher gives us the capability of running or managing several different clusters. With the marketplace, we can deploy our applications very easily.
![](Images/Day53_Kubernetes7.png)
Another thing to mention is that if you need to get access to any cluster being managed by Rancher, in the top right you can open a kubectl shell to the selected cluster.
![](Images/Day53_Kubernetes8.png)
### Create a new cluster
Over the past two sessions, we have created a minikube cluster locally and we have used Vagrant with VirtualBox to create a 3 node Kubernetes cluster; with Rancher we can also create clusters. In the [Rancher Folder](Kubernetes/Rancher) you will find additional vagrant files that will build out the same 3 nodes but without the steps for creating our Kubernetes cluster (we want Rancher to do this for us).
We do however want Docker installed and the OS to be updated, so you will still see the `common.sh` script being run on each of our nodes. This will also install kubeadm, kubectl etc., but it will not run the kubeadm commands to create and join our nodes into a cluster.
We can navigate to our vagrant folder location and simply run `vagrant up`, and this will begin the process of creating our 3 VMs in VirtualBox.
![](Images/Day53_Kubernetes9.png)
We will be choosing "custom" as we are not using one of the integrated platforms.
![](Images/Day53_Kubernetes11.png)
The next page is going to give you the registration code that needs to be run on each of your nodes, with the appropriate services to be enabled: etcd, control plane and worker. For our master node we want etcd and control plane, so the command can be seen below.
![](Images/Day53_Kubernetes12.png)
The command being run on the node starts with `sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kube...` (truncated here; Rancher generates the full registration command, including your cluster token, in the UI).
![](Images/Day53_Kubernetes14.png)
Over the last 3 sessions, we have used a few different ways to get up and running with a Kubernetes cluster. Over the remaining days we are going to look at the application side of the platform, arguably the most important. We will look into services and being able to provision and use our services in Kubernetes.
I have been told since that the requirements around bootstrapping Rancher nodes require those VMs to have 4GB of RAM or they will crash-loop; I have since updated the vagrantfile as our worker nodes had 2GB.
### What we will cover in the series on Kubernetes
We have started covering some of these mentioned below but we are going to get more hands-on tomorrow with our second cluster deployment, then we can start deploying applications into our clusters.
- Kubernetes Architecture
- Kubectl Commands
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistent Storage
- Stateful Apps
## Resources
If you have FREE resources that you have used then please feel free to add them here via a PR to the repository and I will be happy to include them.
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)

title: '#90DaysOfDevOps - Kubernetes Application Deployment - Day 54'
published: false
description: 90DaysOfDevOps - Kubernetes Application Deployment
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048764
---
## Kubernetes Application Deployment
Now we finally get to deploying some applications into our clusters; some would say this is the reason Kubernetes exists, for application delivery.
The idea here is that we can take our container images and now deploy these as pods into our Kubernetes cluster to take advantage of Kubernetes as a container orchestrator.
There are several ways in which we can deploy our applications into our Kubernetes cluster.
We will be using our minikube cluster for these application deployments. We will be walking through some of the previously mentioned components or building blocks of Kubernetes.
All through this section and the Containers section, we have discussed images and the benefits of Kubernetes and how we can handle scale quite easily on this platform.
In this first step, we are simply going to create a stateless application within our minikube cluster. We will be using the de facto standard stateless application in our first demonstration, `nginx`; we will configure a Deployment, which will provide us with our pods, and then we will also create a service which will allow us to navigate to the simple web server hosted by the nginx pod. All of this will be contained in a namespace.
![](Images/Day54_Kubernetes1.png)
### Creating the YAML
In the first demo, we want to define everything we do with YAML. We could have a whole section on YAML, but I am going to skim over this and leave some resources at the end that will cover YAML in more detail.
We could create the following as one YAML file, or we could break this down for each aspect of our application, i.e. separate files for the namespace, deployment and service creation; but in the file below we separate these by using `---` within one file. You can find this file in the [Kubernetes folder](Kubernetes) (file name: `nginx-stateless-demo.yaml`).
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
# ... the Deployment definition is elided in this excerpt; see the full file ...
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service # names taken from the walkthrough below
  namespace: nginx
spec:
  ports:
    - port: 80
      targetPort: 80
```
### Checking our cluster
Before we deploy anything, we should just make sure that we have no existing namespace called `nginx`. We can do this by running the `kubectl get namespace` command, and as you can see below we do not have a namespace called `nginx`.
Now we are ready to deploy our application to our minikube cluster; this same process will work on any other Kubernetes cluster.
We need to navigate to our YAML file location and then we can run `kubectl create -f nginx-stateless-demo.yaml`, after which you will see that 3 objects have been created: a namespace, a deployment and a service.
![](Images/Day54_Kubernetes3.png)
We can also check our service is created by running `kubectl get service -n nginx`.
![](Images/Day54_Kubernetes6.png)
Finally, we can then go and check our deployment; the deployment is where and how we keep our desired configuration.
![](Images/Day54_Kubernetes7.png)
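Gathering the checks from this walkthrough in one place, these are the commands I would run after the create step; they are all standard kubectl and the namespace name comes from the YAML above.

```shell
kubectl get namespace               # confirm the nginx namespace now exists
kubectl get pods -n nginx           # the nginx pod(s)
kubectl get service -n nginx        # the nginx-service
kubectl get deployment -n nginx     # the deployment holding our desired state
```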
The above takes a few commands that are worth knowing, but you can also use `kubectl get all -n nginx` to see everything we deployed in one command.
![](Images/Day54_Kubernetes8.png)
You will notice in the above that we also have a replicaset; in our deployment, we define how many replicas of our image we would like to deploy. This was set to 1 initially, but if we wanted to quickly scale our application then we can do this in several ways.
We can edit our file using `kubectl edit deployment nginx-deployment -n nginx`, which will open a text editor within your terminal and enable you to modify your deployment.
![](Images/Day54_Kubernetes9.png)
Upon saving the above in your text editor within the terminal, if there were no issues and the correct formatting was used, then you should see additional pods deployed in your namespace.
![](Images/Day54_Kubernetes10.png)
We can also make a change to the number of replicas using kubectl and the `kubectl scale` command.
![](Images/Day54_Kubernetes11.png)
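A sketch of what that looks like for this deployment; the replica counts are just examples, and the same syntax is used later in this series to scale the Pacman deployment.

```shell
# Scale the nginx deployment out to 4 replicas, then back down to 1
kubectl scale deployment nginx-deployment --replicas=4 -n nginx
kubectl scale deployment nginx-deployment --replicas=1 -n nginx
```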
We can equally use either method to scale our application back down to 1 again if we wish. I used the edit option, but you can also use the scale command above.
![](Images/Day54_Kubernetes12.png)
Hopefully, here you can see the use case: not only are things super fast to spin up and down, but we also have the ability to quickly scale our applications up and down. If this was a web server, we could scale up during busy times and down when the load is quiet.
### Exposing our app
But how do we access our web server?
If you look above at our service you will see there is no External IP available, so we cannot just open a web browser and expect this to be there magically. For access, we have a few options.
**ClusterIP** - The IP you do see is a ClusterIP; this is on an internal network within the cluster. Only things within the cluster can reach this IP.
**LoadBalancer** - Creates an external load balancer in the current cloud. We are using minikube, but if you have built your own Kubernetes cluster, i.e. what we did in VirtualBox, you would need to deploy a load balancer such as MetalLB into your cluster to provide this functionality.
**Port-Forward** - We also have the ability to port forward, which allows you to access and interact with internal Kubernetes cluster processes from your localhost. This option is really only for testing and fault finding.
We now have a few options to choose from. Minikube has some limitations, or differences I should say, compared to a full-blown Kubernetes cluster.
We could simply run the following command to port forward our access using our local workstation.
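A sketch of what that port forward could look like for this demo; the local port is an arbitrary choice and the service name comes from the YAML above.

```shell
# Forward local port 8080 to port 80 of the nginx service, then browse to http://localhost:8080
kubectl port-forward svc/nginx-service 8080:80 -n nginx
```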
Note that when you run the above command, this terminal is now unusable as it is acting as the port-forward session, so open a new terminal to carry on working.
We are now going to run through specifically with Minikube how we can expose our application. We can also use minikube to create a URL to connect to a service [More details](https://minikube.sigs.k8s.io/docs/commands/service/)
First of all, we will delete our service using `kubectl delete service nginx-service -n nginx`.
Next, we are going to create a new service using `kubectl expose deployment nginx-deployment --name nginx-service --namespace nginx --port=80 --type=NodePort`; notice here that we are using expose and changing the type to NodePort.
![](Images/Day54_Kubernetes15.png)
Finally, in a new terminal run `minikube --profile='mc-demo' service nginx-service --url -n nginx` to create a tunnel for our service.
![](Images/Day54_Kubernetes16.png)
Open a browser, or Ctrl+click the link in your terminal.
Helm is another way in which we can deploy our applications. Known as "The package manager for Kubernetes", you can find out more [here](https://helm.sh/).
Helm is a package manager for Kubernetes; it could be considered the Kubernetes equivalent of yum or apt. Helm deploys charts, which you can think of as a packaged application: a blueprint for your pre-configured application resources which can be deployed as one easy-to-use chart. You can then deploy another version of the chart with a different set of configurations.
They have a site where you can browse all the Helm charts available and of course, you can create your own. The documentation is also clear and concise, and not as daunting as when I first started hearing the term helm amongst all of the other new words in this space.
It is super simple to get Helm up and running. You can find the binaries and download links for pretty much all distributions, including your Raspberry Pi arm64 devices.
Or you can use an installer script; the benefit here is that the latest version of Helm will be downloaded and installed.
```shell
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```

Finally, there is also the option to use a package manager for the application manager (Helm) itself.
Helm so far seems to be the go-to way to get different test applications downloaded and installed in your cluster.
A good resource to link here would be [ArtifactHUB](https://artifacthub.io/), which is a resource to find, install and publish Kubernetes packages. I will also give a shout-out to [KubeApps](https://kubeapps.com/), which is a UI to display helm charts.
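As a quick illustration of the workflow, the commands below add a public repository and install a chart from it; the chart and repository are common examples you will find on ArtifactHUB, not something this series depends on.

```shell
# Add a chart repository, then install a chart from it into its own namespace
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx --namespace helm-demo --create-namespace

# See what is deployed, then remove it again
helm list -n helm-demo
helm uninstall my-nginx -n helm-demo
```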
### What we will cover in the series on Kubernetes
We have started covering some of these mentioned below but we are going to get more hands-on tomorrow with our second cluster deployment, then we can start deploying applications into our clusters.
- Kubernetes Architecture
- Kubectl Commands
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistent Storage
- Stateful Apps
## Resources
If you have FREE resources that you have used then please feel free to add them here via a PR to the repository and I will be happy to include them.
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)

title: '#90DaysOfDevOps - State and Ingress in Kubernetes - Day 55'
published: false
description: 90DaysOfDevOps - State and Ingress in Kubernetes
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048779
---
## State and Ingress in Kubernetes
In this closing section of Kubernetes, we are going to take a look at State and ingress.
Everything we have said so far is about stateless; stateless is really where our applications do not care which network they are using and do not need any permanent storage. Whereas with stateful apps, databases for example, for such an application to function correctly you'll need to ensure that pods can reach each other through a unique identity that does not change (hostnames, IPs, etc.). Examples of stateful applications include MySQL clusters, Redis, Kafka, MongoDB and others: basically, any application that stores data.
### Stateful Application
StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that Kubernetes maintains regardless of where they are scheduled. The state information and other resilient data for any given StatefulSet Pod are maintained in persistent disk storage associated with the StatefulSet.
### Deployment vs StatefulSet
- Replicating stateful applications is more difficult.
- Replicating our pods in a deployment (Stateless Application) is identical and interchangeable.
- Create pods in random order with random hashes
- One Service that load balances to any Pod.
When it comes to StatefulSets or stateful applications, the above is more difficult:
- Can't be randomly addressed.
- Replica Pods are not identical
Something you will see in our demonstration shortly is that each pod has its own identity. With a stateless application, you will see random names, for example `app-7469bbb6d7-9mhxd`, whereas a stateful application would be more aligned to `mongo-0`, and then when scaled it will create a new pod called `mongo-1`.
These pods are created from the same specification, but they are not interchangeable. Each StatefulSet pod has a persistent identifier across any rescheduling. This is necessary because when we require stateful workloads, such as a database where we require writing and reading, we cannot have two pods writing at the same time with no awareness as this would give us data inconsistency. We need to ensure that only one of our pods is writing to the database at any given time; however, we can have multiple pods reading that data.
Each pod in a StatefulSet would have access to its own persistent volume and a replica copy of the database to read from, which is continuously updated from the master. It's also interesting to note that each pod will also store its pod state in this persistent volume; if `mongo-0` dies, then when a new one is provisioned it will take over the pod state stored in storage.
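To make that concrete, here is a minimal sketch of what a MongoDB StatefulSet could look like; the image tag, storage size and headless service name are illustrative assumptions, not the manifest used in the demo below.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo            # headless service that provides the stable DNS names
  replicas: 2                   # pods will be named mongo-0 and mongo-1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4      # illustrative tag
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: data
              mountPath: /data/db
  volumeClaimTemplates:         # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```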
TLDR; StatefulSets vs Deployments
- Predictable pod name = `mongo-0`
- Fixed individual DNS name
- Pod Identity - Retain State, Retain Role
- Replicating stateful apps is complex
How to persist data in Kubernetes?
We mentioned above that when we have a stateful application, we have to store the state somewhere, and this is where the need for a volume comes in; out of the box, Kubernetes does not provide persistence.
We require a storage layer that does not depend on the pod lifecycle. This storage should be available and accessible from all of our Kubernetes nodes. The storage should also be outside of the Kubernetes cluster to be able to survive even if the Kubernetes cluster crashes.
Another way to think of PVs and PVCs is that:
PVs are created by the Kubernetes Admin
PVCs are created by the user or application developer
We also have two other types of volumes that we will not get into detail on but are worth mentioning:
### ConfigMaps | Secrets
- Configuration file for your pod.
- Certificate file for your pod.
### StorageClass
- Created via a YAML file
- Provisions Persistent Volumes Dynamically when a PVC claims it
- Each storage backend has its own provisioner
- Storage backend is defined in YAML (via provisioner attribute)
- Abstracts underlying storage provider
- Define parameters for that storage (a short sketch follows this list)
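Pulling the StorageClass, PV and PVC ideas together, a minimal sketch of a claim against the storageclass used later in this walkthrough; the claim name, namespace and size here are illustrative.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data                      # illustrative name
  namespace: pacman
spec:
  storageClassName: csi-hostpath-sc    # the minikube add-on storageclass used below
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```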
### Walkthrough time
In the session yesterday we walked through creating a stateless application; here we want to do the same, but we want to use our minikube cluster to deploy a stateful workload.
A recap on the minikube command we are using to have the capability and add-ons to use persistence: `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p mc-demo --kubernetes-version=1.21.2`
This command uses the csi-hostpath-driver, which is what gives us our storageclass, something I will show later.
The build-out of the application looks like the below:
![](Images/Day55_Kubernetes1.png)
You can find the YAML configuration file for this application here: [pacman-stateful-demo.yaml](Kubernetes)
### StorageClass Configuration
There is one more step that we should run before we start deploying our application, and that is to make sure that our storageclass (csi-hostpath-sc) is the default one. We can first check this by running the `kubectl get storageclass` command, but out of the box the minikube cluster will be showing the standard storageclass as default, so we have to change that with the following commands.
This first command will make our csi-hostpath-sc storageclass the default.
`kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'`
This command will remove the default annotation from the standard StorageClass.
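The command itself is not shown in this excerpt, but it is simply the mirror image of the patch above; a sketch, assuming the default class is called `standard` as it is on minikube:

```shell
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
```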
![](Images/Day55_Kubernetes2.png)
We start with no pacman namespace in our cluster: `kubectl get namespace`
![](Images/Day55_Kubernetes3.png)
We will then deploy our YAML file with `kubectl create -f pacman-stateful-demo.yaml`; you can see from this command that we are creating several objects within our Kubernetes cluster.
![](Images/Day55_Kubernetes4.png)
We now have our newly created namespace.
![](Images/Day55_Kubernetes5.png)
You can then see from the next image and the command `kubectl get all -n pacman` that we have several things happening inside our namespace. We have our pods running our NodeJS web front end, and we have mongo running our backend database. There are services for both pacman and mongo to access those pods. We have a deployment for pacman and a statefulset for mongo.
![](Images/Day55_Kubernetes6.png)
We also have our persistent volume and persistent volume claim; running `kubectl get pv` will give us our non-namespaced persistent volumes, and running `kubectl get pvc -n pacman` will give us our namespaced persistent volume claims.
![](Images/Day55_Kubernetes7.png)
### Playing the game | I mean accessing our mission critical application
### Playing the game | I mean accessing our mission-critical application
Because we are using Minikube as mentioned in the stateless application we have a few hurdles to get over when it comes to accessing our application, If however we had access to ingress or a load balancer within our cluster the service is set up to automatically get an IP from that to gain access externally. (you can see this above in the image of all components in the pacman namespace).
Because we are using Minikube, as mentioned in the stateless application, we have a few hurdles to get over when it comes to accessing our application. If, however, we had access to an ingress or a load balancer within our cluster, the service is set up to automatically get an IP from that to gain access externally (you can see this above in the image of all components in the Pacman namespace).
For this demo we are going to use the port forward method to access our application. By opening a new terminal and running the following `kubectl port-forward svc/pacman 9090:80 -n pacman` command, opening a browser we will now have access to our application. If you are running this in AWS or specific locations then this will also report on the cloud and zone as well as the host which equals your pod within Kubernetes, again you can look back and see this pod name in our screenshots above.
For this demo, we are going to use the port forward method to access our application. By opening a new terminal and running the following `kubectl port-forward svc/pacman 9090:80 -n pacman` command, opening a browser we will now have access to our application. If you are running this in AWS or specific locations then this will also report on the cloud and zone as well as the host which equals your pod within Kubernetes, again you can look back and see this pod name in our screenshots above.
![](Images/Day55_Kubernetes8.png)
@ -164,32 +166,33 @@ Now if I go back to my game I can create a new game and see my high scores. The
![](Images/Day55_Kubernetes11.png)
With the deployment we can scale this up using the commands that we covered in the previous session but in particular here, especially if you want to host a huge pacman party then you can scale this up using `kubectl scale deployment pacman --replicas=10 -n pacman`
With the deployment, we can scale this up using the commands that we covered in the previous session but in particular here, especially if you want to host a huge Pacman party then you can scale this up using `kubectl scale deployment pacman --replicas=10 -n pacman`
![](Images/Day55_Kubernetes12.png)
### Ingress explained
Before we wrap things up with Kubernetes I also wanted to touch on a huge aspect of Kubernetes and that is ingress.
### What is ingress?
So far with our examples we have used port-forward or we have used specific commands within minikube to gain access to our applications but this in production is not going to work. We are going to want a better way of accessing our applications at scale with multiple users.
So far with our examples, we have used port-forward or we have used specific commands within minikube to gain access to our applications, but this is not going to work in production. We are going to want a better way of accessing our applications at scale with multiple users.
We also spoke about NodePort being an option but again this should be only for test purposes.
Ingress gives us a better way of exposing our applications, this allows us to define routing rules within our Kubernetes cluster.
For ingress we would create a forward request to the internal service of our application.
For ingress, we would define a rule that forwards requests to the internal service of our application.
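As a rough sketch of what such a rule looks like (the hostname below is made up, and this assumes an ingress controller such as NGINX is already running in the cluster), an ingress that forwards traffic to the internal Pacman service might look something like this:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pacman-ingress
  namespace: pacman
spec:
  rules:
    - host: pacman.example.local   # made-up hostname for illustration
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pacman       # forward to the internal pacman service
                port:
                  number: 80
```

The ingress controller watches for these objects and routes matching traffic to the service, so no port-forwarding is needed.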
### When do you need ingress?
If you are using a cloud provider, a managed Kubernetes offering they most likely will have their own ingress option for your cluster or they provide you with their own load balancer option. You don't have to implement this yourself, one of the benefits of managed Kubernetes.
If you are running your own cluster then you will need to configure an entrypoint.
If you are using a cloud provider's managed Kubernetes offering, they will most likely have their own ingress option for your cluster, or they will provide you with their own load balancer option. You don't have to implement this yourself, which is one of the benefits of managed Kubernetes.
If you are running your own cluster then you will need to configure an entrypoint.
### Configure Ingress on Minikube
On my particular running cluster called mc-demo I can run the following command to get ingress enabled on my cluster.
On my particular running cluster called mc-demo, I can run the following command to get ingress enabled on my cluster.
`minikube --profile='mc-demo' addons enable ingress`
@ -199,19 +202,19 @@ If we check our namespaces now you will see that we have a new ingress-nginx nam
![](Images/Day55_Kubernetes14.png)
Now we must create our ingress YAML configuration to hit our Pacman service I have added this file to the repository [pacman-ingress.yaml](Days/Kubernetes/pacman-ingress.yaml)
Now we must create our ingress YAML configuration to hit our Pacman service. I have added this file to the repository: [pacman-ingress.yaml](Kubernetes/pacman-ingress.yaml)
We can then create this in our ingress namespace with `kubectl create -f pacman-ingress.yaml`
We can then create this in our ingress namespace with `kubectl create -f pacman-ingress.yaml`
![](Images/Day55_Kubernetes15.png)
Then if we run `kubectl get ingress -n pacman`
Then if we run `kubectl get ingress -n pacman`
![](Images/Day55_Kubernetes16.png)
I am then told because we are using minikube running on WSL2 in Windows we have to create the minikube tunnel using `minikube tunnel --profile=mc-demo`
But I am still not able to gain access to 192.168.49.2 and play my pacman game.
But I am still not able to gain access to 192.168.49.2 and play my Pacman game.
If anyone has or can get this working on Windows and WSL I would appreciate the feedback. I will raise an issue on the repository for this and come back to it once I have time and a fix.
@ -219,7 +222,7 @@ UPDATE: I feel like this blog helps identify maybe the cause of this not working
## Resources
If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.
If you have FREE resources that you have used then please feel free to add them here via a PR to the repository and I will be happy to include them.
- [Kubernetes StatefulSet simply explained](https://www.youtube.com/watch?v=pPQKAR1pA9U)
- [Kubernetes Volumes explained](https://www.youtube.com/watch?v=0swOh5C3OVM)
@ -229,7 +232,7 @@ If you have FREE resources that you have used then please feel free to add them
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)
This wraps up our Kubernetes section, there is so much additional content we could cover on Kubernetes and 7 days gives us a foundational knowledge but there are people running through [100DaysOfKubernetes](https://100daysofkubernetes.io/overview.html) where you can get really into the weeds.
This wraps up our Kubernetes section. There is so much additional content we could cover on Kubernetes, and 7 days gives us foundational knowledge, but there are people running through [100DaysOfKubernetes](https://100daysofkubernetes.io/overview.html) where you can get really into the weeds.
Next up we are going to be taking a look at Infrastructure as Code and the important role it plays from a DevOps perspective.
@ -2,11 +2,12 @@
title: '#90DaysOfDevOps - The Big Picture: IaC - Day 56'
published: false
description: 90DaysOfDevOps - The Big Picture IaC
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048709
---
## The Big Picture: IaC
Humans make mistakes! Automation is the way to go!
@ -19,19 +20,19 @@ How long would it take you to replace everything?
Infrastructure as code provides a solution that lets us do this, whilst also being able to test it. We should not confuse this with backup and recovery, but in terms of your infrastructure, environments and platforms, we should be able to spin them up and treat them as cattle vs pets.
The TLDR; is that we can use code to rebuild our whole entire environment.
The TLDR; is that we can use code to rebuild our entire environment.
If we also remember, from the start we said that DevOps in general is a way to break down barriers to deliver systems into production safely and rapidly.
Infrastructure as code helps us deliver the systems. We have spoken a lot about processes and tools; IaC brings us more tools to be familiar with to enable this part of the process.
We are going to concentrate on Infrastructure as code in this section. You might also hear this mentioned as Infrastructure from code or configuration as code. I think the most well known term is likely Infrastructure as code.
We are going to concentrate on Infrastructure as code in this section. You might also hear this mentioned as Infrastructure from code or configuration as code. I think the most well-known term is likely Infrastructure as code.
### Pets vs Cattle
If we take a look at pre DevOps, if we had the requirement to build a new Application, we would need to prepare our servers manually for the most part.
If we take a look at pre-DevOps, if we had the requirement to build a new Application, we would need to prepare our servers manually for the most part.
- Deploy VMs | Physical Servers and install operating system
- Deploy VMs | Physical Servers and install the operating system
- Configure networking
- Create routing tables
- Install software and updates
@ -53,25 +54,26 @@ Add the complexity of multiple test and dev environments.
This is where Infrastructure as Code comes in. The above was very much a time when we would look after those servers as if they were pets; people even gave their servers pet names, or at least named them something, because they were going to be around for a while and would hopefully be part of the "family" for a while.
With Infrastructure as Code we have the ability to automate all these tasks end to end. Infrastructure as code is a concept and there are tools that carry out this automated provisioning of infrastructure, at this point if something bad happens to a server you throw it away and you spin up a new one. This process is automated and the server is exactly as defined in code. At this point we don't care what they are called they are there in the field serving their purpose until they are no longer in the field and we have another to replace it either because of a failure or because we updated part or all of our application.
With Infrastructure as Code, we can automate all these tasks end to end. Infrastructure as code is a concept, and there are tools that carry out this automated provisioning of infrastructure. At this point, if something bad happens to a server you throw it away and spin up a new one. This process is automated and the server is exactly as defined in the code. We don't care what they are called; they are there in the field serving their purpose until they are no longer needed and we have another to replace them, either because of a failure or because we updated part or all of our application.
This can be used in almost all platforms, virtualisation, cloud based workloads and also cloud-native infrastructure such as Kubernetes and containers.
This can be used in almost all platforms, virtualisation, cloud-based workloads and also cloud-native infrastructures such as Kubernetes and containers.
### Infrastructure Provisioning
Not all IaC cover all of the below, You will find that the tool we are going to be using during this section only really covers the the first 2 areas of below; Terraform is that tool we will be covering and this allows us to start from nothing and define in code what our infrastructure should look like and then deploy that, it will also enable us to manage that infrastructure and also initially deploy an application but at that point it is going to lose track of the application which is where the next section comes in and something like Ansible as a configuration management tool might work better on that front.
Without jumping ahead tools like chef, puppet and ansible are best suited to deal with the initial application setup and then to manage those applications and their configuration.
Not all IaC tools cover all of the below. You will find that the tool we are going to be using during this section only really covers the first 2 areas below; Terraform is that tool. It allows us to start from nothing, define in code what our infrastructure should look like and then deploy it. It will also enable us to manage that infrastructure and initially deploy an application, but at that point it is going to lose track of the application, which is where the next section comes in and where something like Ansible as a configuration management tool might work better.
Without jumping ahead, tools like Chef, Puppet and Ansible are best suited to deal with the initial application setup and then manage those applications and their configuration.
Initial installation & configuration of software
- Spinning up new servers
- Network configuration
- Creating load balancers
- Configuration on infrastructure level
- Configuration on the infrastructure level
### Configuration of provisioned infrastructure
- Installing application on servers
- Installing applications on servers
- Prepare the servers to deploy your application.
### Deployment of Application
@ -81,40 +83,45 @@ Initial installation & configuration of software
- Software updates
- Reconfiguration
### Difference of IaC tools
### Difference between IaC tools
Declarative vs procedural
Procedural
- Step by step instruction
- Step-by-step instruction
- Create a server > Add a server > Make this change
Declartive
- declare end result
Declarative
- declare the end result
- 2 Servers
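To make the declarative idea concrete, here is a minimal Terraform-style sketch (the AMI ID and instance type are placeholders, and it assumes an AWS provider is configured): you declare that two servers should exist and the tool works out the steps to get there.

```
# Declarative: describe the end state ("2 servers exist"), not the steps to build them.
# (Assumes the AWS provider is configured, as shown later in this section.)
resource "aws_instance" "web" {
  count         = 2                       # desired end state: two identical servers
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
}
```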
Mutable (pets) vs Immutable (cattle)
Mutable
- Change instead of replace
- Change instead of replace
- Generally long-lived
Immutable
- Replace instead of change
- Possibly short lived
- Possibly short-lived
This is really why we have lots of different options for Infrastructure as Code because there is no one tool to rule them all.
We are going to be mostly using terraform and getting hands on as this is the best way to start seeing the benefits of Infrastructure as Code when it is in action. Getting hands on is also the best way to pick up the skills as you are going to be writing code.
We are going to be mostly using Terraform and getting hands-on, as this is the best way to start seeing the benefits of Infrastructure as Code when it is in action. Getting hands-on is also the best way to pick up the skills, as you are going to be writing code.
Next up we will start looking into Terraform with a 101 before we get some hands on get using.
Next up we will start looking into Terraform with a 101 before we get hands-on and start using it.
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
@ -7,19 +7,20 @@ cover_image: null
canonical_url: null
id: 1048710
---
## An intro to Terraform
"Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently"
The above quote is from HashiCorp, HashiCorp is the company behind Terraform.
"Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files."
"Terraform is an open-source infrastructure as a code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files."
HashiCorp have a great resource in [HashiCorp Learn](https://learn.hashicorp.com/terraform?utm_source=terraform_io&utm_content=terraform_io_hero) which covers all of their products and gives some great walkthrough demos when you are trying to achieve something with Infrastructure as Code.
HashiCorp has a great resource in [HashiCorp Learn](https://learn.hashicorp.com/terraform?utm_source=terraform_io&utm_content=terraform_io_hero) which covers all of their products and gives some great walkthrough demos when you are trying to achieve something with Infrastructure as Code.
All cloud providers and on prem platforms generally give us access to management consoles which enables us to create our resources via a UI, generally these platforms also provide a CLI or API access to also create the same resources but with an API we have the ability to provision fast.
All cloud providers and on-prem platforms generally give us access to management consoles which enable us to create our resources via a UI. These platforms also generally provide CLI or API access to create the same resources, but with an API we can provision fast.
Infrastructure as Code allows us to hook into those APIs to deploy our resources in a desired state.
Infrastructure as Code allows us to hook into those APIs to deploy our resources in the desired state.
Other tools are listed below, though the list is not exclusive or exhaustive. If you have other tools then please share via a PR.
@ -33,63 +34,59 @@ This is another reason why we are using Terraform, we want to be agnostic to the
## Terraform Overview
Terraform is a provisioning focused tool, Terraform is a CLI that gives the capabilities of being able to provision complex infrastructure environments. With Terraform we can define complex infrastructure requirements that exist locally or remote (cloud) Terraform not only enables us to build things initially but also to maintain and update those resources for their lifetime.
Terraform is a provisioning-focused tool; it is a CLI that gives us the capability to provision complex infrastructure environments. With Terraform we can define complex infrastructure requirements that exist locally or remotely (cloud). Terraform not only enables us to build things initially but also to maintain and update those resources for their lifetime.
We are going to cover the high level here but for more details and loads of resources you can head to [terraform.io](https://www.terraform.io/)
We are going to cover the high level here but for more details and loads of resources, you can head to [terraform.io](https://www.terraform.io/)
### Write
Terraform allows us to create declaritive configuration files that will build our environments. The files are written using the HashiCorp Configuration Language (HCL) which allows for concise descriptions of resources using blocks, arguments, and expressions. We will of course be looking into these in detail in deploying VMs, Containers and within Kubernetes.
Terraform allows us to create declarative configuration files that will build our environments. The files are written using the HashiCorp Configuration Language (HCL) which allows for concise descriptions of resources using blocks, arguments, and expressions. We will of course be looking into these in detail in deploying VMs, Containers and within Kubernetes.
### Plan
The ability to check that the above configuration files are going to deploy what we want to see using specific functions of the terraform cli to be able to test that plan before deploying anything or changing anything. Remember Terraform is a continued tool for your infrastructure if you would like to change aspect of your infrastructure you should do that via terraform so that it is captured all in code.
The ability to check that the above configuration files are going to deploy what we want to see, using specific functions of the Terraform CLI to test that plan before deploying or changing anything. Remember, Terraform is an ongoing tool for your infrastructure: if you would like to change aspects of your infrastructure you should do that via Terraform so that it is all captured in code.
### Apply
Obviously once you are happy you can go ahead and apply this configuration to the many providers that are available within Terraform. You can see the large amount of providers available [here](https://registry.terraform.io/browse/providers)
Once you are happy you can go ahead and apply this configuration to the many providers that are available within Terraform. You can see a large number of providers available [here](https://registry.terraform.io/browse/providers)
Another thing to mention is that there are also modules available, and this is similar to container images in that these modules have been created and shared in public so you do not have to create it again and again just re use the best practice of deploying a specific infrastructure resource the same way everywhere. You can find the modules available [here](https://registry.terraform.io/browse/modules)
The Terraform workflow looks like this: (*taken from the terraform site*)
Another thing to mention is that there are also modules available, and this is similar to container images in that these modules have been created and shared in public so you do not have to create them again and again; you can just reuse the best practice of deploying a specific infrastructure resource the same way everywhere. You can find the modules available [here](https://registry.terraform.io/browse/modules)
The Terraform workflow looks like this: (_taken from the terraform site_)
![](Images/Day57_IAC3.png)
### Terraform vs Vagrant
During this challenge we have used Vagrant which happens to be another Hashicorp open source tool which concentrates on the development environments.
During this challenge, we have used Vagrant which happens to be another Hashicorp open source tool which concentrates on the development environments.
- Vagrant is a tool focused for managing development environments
- Vagrant is a tool focused on managing development environments
- Terraform is a tool for building infrastructure.
A great comparison of the two tools can be found here on the official [Hashicorp site](https://www.vagrantup.com/intro/vs/terraform)
## Terraform Installation
There is really not much to the installation of Terraform.
There is not much to the installation of Terraform.
Terraform is cross platform and you can see below on my Linux machine we have several options to download and install the CLI
Terraform is cross-platform and you can see below on my Linux machine we have several options to download and install the CLI
![](Images/Day57_IAC2.png)
Using `arkade` to install Terraform: arkade is a handy little tool for getting your required tools, apps and CLIs onto your system. A simple `arkade get terraform` will install the Terraform CLI, or update it if a newer version is available.
![](Images/Day57_IAC1.png)
We are going to get into more around HCL and then also start using Terraform to create some infrastructure resources in various different platforms.
We are going to get into more around HCL and then also start using Terraform to create some infrastructure resources in various platforms.
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
@ -2,31 +2,31 @@
title: '#90DaysOfDevOps - HashiCorp Configuration Language (HCL) - Day 58'
published: false
description: 90DaysOfDevOps - HashiCorp Configuration Language (HCL)
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048741
---
## HashiCorp Configuration Language (HCL)
Before we start making stuff with Terraform we have to dive a little into HashiCorp Configuration Language (HCL). So far during our challenge we have looked at a few different scripting and programming languages and here is another one. We touched on the [Go programming language](Days/day07.md) then [bash scripts](Days/day19.md) we even touched on a little python when it came to [network automation](Days/day27.md)
Before we start making stuff with Terraform we have to dive a little into HashiCorp Configuration Language (HCL). So far during our challenge, we have looked at a few different scripting and programming languages and here is another one. We touched on the [Go programming language](day07.md) then [bash scripts](day19.md) we even touched on a little python when it came to [network automation](day27.md)
Now we must cover HashiCorp Configuration Language (HCL) if this is the first time you are seeing the language it might look a little daunting but its quite simple and very powerful.
Now we must cover HashiCorp Configuration Language (HCL) if this is the first time you are seeing the language it might look a little daunting but it's quite simple and very powerful.
As we move through this section, we are going to be using examples that we can run locally on our system regardless of what OS you are using, we will be using virtualbox, albeit not the infrastructure platform you would usually be using with Terraform. However running this locally, it is free and will allow us to achieve what we are looking for in this post. We could also extend this posts concepts to docker or Kubernetes as well.
As we move through this section, we are going to be using examples that we can run locally on our system regardless of what OS you are using. We will be using VirtualBox, albeit not the infrastructure platform you would usually be using with Terraform. However, running this locally is free and will allow us to achieve what we are looking for in this post. We could also extend this post's concepts to Docker or Kubernetes as well.
In general though, you would or should be using Terraform to deploy your infrastructure in the public cloud (AWS, Google, Microsoft Azure) but then also in your virtualisation environments such as (VMware, Microsoft Hyper-V, Nutanix AHV). In the public cloud Terraform allows for us to do a lot more than just Virtual Machine automated deployment, we can create all the required infrastructure such as PaaS workloads and all of the networking required assets such as VPCs and Security Groups.
In general, though, you would or should be using Terraform to deploy your infrastructure in the public cloud (AWS, Google, Microsoft Azure) but then also in your virtualisation environments such as (VMware, Microsoft Hyper-V, Nutanix AHV). In the public cloud Terraform allows for us to do a lot more than just Virtual Machine automated deployment, we can create all the required infrastructure such as PaaS workloads and all of the networking required assets such as VPCs and Security Groups.
There are two important aspects to Terraform, we have the code which we are going to get into in this post and then we also have the state. Both of these together could be called the Terraform core. We then have the environment we wish to speak to and deploy into, which is executed using Terraform providers, briefly mentioned in the last session, but we have an AWS provider, we have an Azure providers etc. There are hundreds.
There are two important aspects to Terraform, we have the code which we are going to get into in this post and then we also have the state. Both of these together could be called the Terraform core. We then have the environment we wish to speak to and deploy into, which is executed using Terraform providers, briefly mentioned in the last session, but we have an AWS provider, we have Azure providers etc. There are hundreds.
### Basic Terraform Usage
Let's take a look at a Terraform `.tf` file to see how they are made up. The first example we will walk through will in fact be code to deploy resources to AWS, this would then also require the AWS CLI to be installed on your system and configured for your account.
Let's take a look at a Terraform `.tf` file to see how they are made up. The first example we will walk through will be code to deploy resources to AWS, this would then also require the AWS CLI to be installed on your system and configured for your account.
### Providers
At the top of our `.tf` file structure, generally called `main.tf` at least until we make things more complex. Here we will define the providers that we have mentioned before. Our source of the aws provider as you can see is `hashicorp/aws` this means the provider is maintained or has been published by hashicorp themselves. By default you will reference providers that are available from the [Terraform Registry](https://registry.terraform.io/), you also have the ability to write your own providers, and use these locally, or self-publish to the Terraform Registry.
At the top of our `.tf` file structure, generally called `main.tf` (at least until we make things more complex), we will define the providers that we have mentioned before. The source of the AWS provider, as you can see, is `hashicorp/aws`; this means the provider is maintained or has been published by HashiCorp themselves. By default you will reference providers that are available from the [Terraform Registry](https://registry.terraform.io/), but you can also write your own providers and use these locally, or self-publish them to the Terraform Registry.
```
terraform {
@ -38,6 +38,7 @@ terraform {
}
}
```
We might also add in a region here to determine which AWS region we would like to provision to; we can do this by adding the following:
```
@ -46,7 +47,7 @@ provider "aws" {
}
```
### Resources
### Terraform Resources
- Another important component of a terraform config file which describes one or more infrastructure objects like EC2, Load Balancer, VPC, etc.
@ -54,7 +55,6 @@ provider "aws" {
- The resource type and name together serve as an identifier for a given resource.
```
resource "aws_instance" "90daysofdevops" {
ami = data.aws_ami.instance_id.id
@ -117,12 +117,12 @@ resource "aws_instance" "90daysofdevops" {
tags = {
Name = "Created by Terraform"
tags = {
Name = "ExampleAppServerInstance"
}
}
```
The above code will go and deploy a very simple web server as an EC2 instance in AWS. The great thing about this, and any other configuration like it, is that we can repeat it and we will get the same output every single time. Other than the chance that I have messed up the code, there is no human interaction with the above.
We can take a look at a super simple example, one that you will likely never use, but let's humour it anyway. Like with all good scripting and programming languages, we should start with a hello-world scenario.
@ -140,13 +140,14 @@ output "hello_world" {
value = "Hello, 90DaysOfDevOps from Terraform"
}
```
You will find this file in the IAC folder under hello-world, but out of the box this is not going to simply work there are some commans we need to run in order to use our terraform code.
You will find this file in the IAC folder under hello-world, but out of the box, this is not going to simply work; there are some commands we need to run to use our Terraform code.
In your terminal navigate to your folder where the main.tf has been created, this could be from this repository or you could create a new one using the code above.
When in that folder we are going to run `terraform init`
We need to perform this on any directory where we have or before we run any terraform code. Initialising a configuration directory downloads and installs the providers defined in the configuration, in this case we have no providers but in the example above this would download the aws provider for this configuration.
We need to perform this in any directory where we have Terraform code, before we run that code. Initialising a configuration directory downloads and installs the providers defined in the configuration; in this case, we have no providers, but in the example above this would download the AWS provider for this configuration.
![](Images/Day58_IAC1.png)
@ -154,26 +155,26 @@ The next command will be `terraform plan`
The `terraform plan` command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.
You can simply see below that with our hello-world example we are going to see an output if this was an AWS ec2 instance we would see all the steps that we will be creating.
You can see below that with our hello-world example we are simply going to see an output; if this was an AWS EC2 instance, we would see all the steps that would be taken to create it.
![](Images/Day58_IAC2.png)
At this point we have initialised our repository and we have our providers downloaded where required, we have run a test walkthrough to make sure this is what we want to see so now we can run and deploy our code.
At this point, we have initialised our repository and we have our providers downloaded where required, we have run a test walkthrough to make sure this is what we want to see so now we can run and deploy our code.
`terraform apply` allows us to do this there is a built in safety measure to this command and this will again give you a plan view on what is going to happen which warrants a response from you to say yes to continue.
`terraform apply` allows us to do this. There is a built-in safety measure to this command: it will again give you a plan view of what is going to happen, which warrants a response from you to say yes to continue.
![](Images/Day58_IAC3.png)
When we type in yes to the enter a value, and our code is deployed. Obviously not that exciting but you can see we have the output that we defined in our code.
When we type yes at the enter a value prompt, our code is deployed. Not that exciting, but you can see we have the output that we defined in our code.
![](Images/Day58_IAC4.png)
Now we have not deployed anything, we have not added, changed or destroyed anything but if we did then we would see that indicated also in the above. If however we had deployed something and we wanted to get rid of everything we deployed we can use the `terraform destroy` command. Again this has that safety where you have to type yes although you can use `--auto-approve` on the end of your `apply` and `destroy` commands to bypass that manual intervention. But I would advise only using this shortcut when in learning and testing as everything will dissappear sometimes faster than it was built.
Now we have not deployed anything, we have not added, changed or destroyed anything but if we did then we would see that indicated also in the above. If however we had deployed something and we wanted to get rid of everything we deployed we can use the `terraform destroy` command. Again this has that safety where you have to type yes although you can use `--auto-approve` on the end of your `apply` and `destroy` commands to bypass that manual intervention. But I would advise only using this shortcut when learning and testing as everything will disappear sometimes faster than it was built.
From this there are really 4 commands we have covered from the Terraform CLI.
From this, there are 4 commands we have covered from the Terraform CLI.
- `terraform init` = get your project folder ready with providers
- `terraform plan` = show what is going to be created, changed during the next command based on our code.
- `terraform plan` = show what is going to be created and changed during the next command, based on our code.
- `terraform apply` = will go and deploy the resources defined in our code.
- `terraform destroy` = will destroy the resources we have created in our project
@ -188,18 +189,18 @@ Another thing to note when running `terraform init` take a look at the tree on t
We also need to be aware of the state file that is created inside our directory; for this hello-world example our state file is simple. This is a JSON file which is the representation of the world according to Terraform. The state will happily show off your sensitive data, so be careful, and as a best practice add your `.tfstate` files to your `.gitignore` file before uploading to GitHub.
By default the state file as you can see lives inside the same directory as your project code, but it can also be stored remotely as an option. In a production environment this is likely going to be a shared location such as an S3 bucket.
By default, the state file as you can see lives inside the same directory as your project code, but it can also be stored remotely as an option. In a production environment, this is likely going to be a shared location such as an S3 bucket.
Another option could be Terraform Cloud, this is a paid for managed service. (Free up to 5 users)
Another option could be Terraform Cloud, which is a paid-for managed service (free for up to 5 users).
The pros for storing state in a remote location is that we get:
The pros for storing state in a remote location are that we get:
- Sensitive data encrypted
- Collaboration
- Automation
- However it could bring increase complexity
- However, it could increase complexity
```
```JSON
{
"version": 4,
"terraform_version": "1.1.6",
@ -215,13 +216,13 @@ The pros for storing state in a remote location is that we get:
}
```
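If you did want to move the state to a shared location such as an S3 bucket, a remote backend block is the usual way to do it. This is just a sketch: the bucket, key and region below are placeholders, and the bucket would need to exist already.

```
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # placeholder, must already exist
    key    = "hello-world/terraform.tfstate"
    region = "eu-west-1"
    # encrypt = true                       # optional, encrypts the state at rest
  }
}
```

After adding a backend block you would run `terraform init` again so that Terraform can migrate the existing local state to the new backend.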
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
@ -7,17 +7,18 @@ cover_image: null
canonical_url: null
id: 1049051
---
## Create a VM with Terraform & Variables
In this session we are going to be creating a VM or two VMs using terraform inside VirtualBox. This is not the normal, VirtualBox is a workstation virtualisation option and really this would not be a use case for Terraform but I am currently 36,000ft in the air and as much as I have deployed public cloud resources this high in the clouds it is much faster to do this locally on my laptop.
In this session, we are going to be creating a VM or two using Terraform inside VirtualBox. This is not the norm; VirtualBox is a workstation virtualisation option and would not really be a use case for Terraform, but I am currently 36,000ft in the air and, as much as I have deployed public cloud resources this high in the clouds, it is much faster to do this locally on my laptop.
Purely demo purpose but the concept is the same we are going to have our desired state configuration code and then we are going to run that against the virtualbox provider. In the past we have used vagrant here and I covered off the differences between vagrant and terraform at the beginning of the section.
Purely for demo purposes but the concept is the same we are going to have our desired state configuration code and then we are going to run that against the VirtualBox provider. In the past, we have used vagrant here and I covered the differences between vagrant and terraform at the beginning of the section.
### Create virtual machine in VirtualBox
### Create a virtual machine in VirtualBox
The first thing we are going to do is create a new folder called virtualbox, we can then create a virtualbox.tf file and this is going to be where we define our resources. The code below which can be found in the VirtualBox folder as virtualbox.tf this is going to create 2 VMs in Virtualbox.
The first thing we are going to do is create a new folder called virtualbox, then create a `virtualbox.tf` file; this is going to be where we define our resources. The code below, which can be found in the VirtualBox folder as `virtualbox.tf`, is going to create 2 VMs in VirtualBox.
You can find more about the community virtualbox provider [here](https://registry.terraform.io/providers/terra-farm/virtualbox/latest/docs/resources/vm)
You can find more about the community VirtualBox provider [here](https://registry.terraform.io/providers/terra-farm/virtualbox/latest/docs/resources/vm)
```
terraform {
@ -54,26 +55,25 @@ output "IPAddr_2" {
```
Now that we have our code defined we can now perform the `terraform init` on our folder to download the provider for virtualbox.
Now that we have our code defined we can now perform the `terraform init` on our folder to download the provider for Virtualbox.
![](Images/Day59_IAC1.png)
Obviously you will also need to have virtualbox installed on your system as well. We can then next run `terraform plan` to see what our code will create for us. Followed by `terraform apply` the below image shows your completed process.
You will also need to have VirtualBox installed on your system. We can then run `terraform plan` to see what our code will create for us, followed by `terraform apply`; the below image shows the completed process.
![](Images/Day59_IAC2.png)
In Virtualbox you will now see your 2 virtual machines.
In Virtualbox, you will now see your 2 virtual machines.
![](Images/Day59_IAC3.png)
### Change configuration
Lets add another node to our deployment. We can simply change the count line to show our newly desired number of nodes. When we run our `terraform apply` it will look something like below.
Let's add another node to our deployment. We can simply change the count line to show our new desired number of nodes. When we run our `terraform apply` it will look something like the below.
![](Images/Day59_IAC4.png)
Once complete in virtualbox you can see we now have 3 nodes up and running.
Once complete in VirtualBox you can see we now have 3 nodes up and running.
![](Images/Day59_IAC5.png)
@ -95,7 +95,7 @@ But there are many other variables that we can use here as well, there are also
- My preference is to use a terraform.tfvars file in our project folder.
- There is an *auto.tfvars file option
- There is an \*auto.tfvars file option
- or we can define when we run the `terraform plan` or `terraform apply` with the `-var` or `-var-file`.
@ -113,11 +113,12 @@ variable "some resource" {
```
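As a quick illustration of the tfvars approach (the variable name and values here are made up rather than taken from the project), a declaration plus an override might look like this:

```
# variables.tf - declare the input with a type, description and default
variable "node_count" {
  type        = number
  description = "How many VirtualBox VMs to create"
  default     = 2
}

# terraform.tfvars - override the default without touching the code, for example:
# node_count = 3
```

The same value could also be passed on the command line with `terraform apply -var="node_count=3"`.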
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
@ -7,9 +7,10 @@ cover_image: null
canonical_url: null
id: 1049052
---
## Docker Containers, Provisioners & Modules
On [Day 59](day59.md) we provisioned a virtual machine using Terraform to our local FREE virtualbox environment. In this section we are going to be deploy a Docker container with some configuration to our local Docker environment.
On [Day 59](day59.md) we provisioned a virtual machine using Terraform to our local FREE VirtualBox environment. In this section, we are going to deploy a Docker container with some configuration to our local Docker environment.
### Docker Demo
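The demo configuration itself is essentially a Docker provider plus an NGINX image and container. A minimal sketch of that idea, assuming the `kreuzwerker/docker` provider, looks something like the below; the names and version pin are illustrative rather than a quote of the repository file.

```
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "~> 3.0"
    }
  }
}

provider "docker" {}

# Pull the NGINX image and run it as a container exposed on port 8000
resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  name  = "nginxtest"
  image = docker_image.nginx.image_id
  ports {
    internal = 80
    external = 8000
  }
}
```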
@ -50,15 +51,15 @@ We then run our `terraform apply` followed by `docker ps` and you can see we hav
![](Images/Day60_IAC2.png)
If we then open a browser we can navigate to http://localhost:8000/ and you will see we have access to our NGINX container.
If we then open a browser we can navigate to `http://localhost:8000/` and you will see we have access to our NGINX container.
![](Images/Day60_IAC3.png)
You can find out more information on the [Docker Provider](https://registry.terraform.io/providers/kreuzwerker/docker/latest/docs/resources/container)
The above is a very simple demo of what can be done with Terraform plus Docker and how we can now manage this under the Terraform state. We covered docker compose in the containers section and there is a little crossover in a way between this, infrastructure as code as well as then Kubernetes.
The above is a very simple demo of what can be done with Terraform plus Docker and how we can now manage this under the Terraform state. We covered docker-compose in the containers section, and there is a little crossover between this, infrastructure as code, and Kubernetes.
For the purpose of showing this and how Terraform can handle a little more complexity, we are going to take the docker compose file for wordpress and mysql that we created with docker compose and we will put this to Terraform. You can find the [docker-wordpress.tf](/Days/IaC/Docker-Wordpress/docker-wordpress.tf)
To show this, and how Terraform can handle a little more complexity, we are going to take the docker-compose file for WordPress and MySQL that we created with docker-compose and port it to Terraform. You can find it at [docker-wordpress.tf](/Days/IaC/Docker-Wordpress/docker-wordpress.tf).
```
terraform {
@ -120,7 +121,7 @@ resource "docker_container" "wordpress" {
}
```
We again put this is in a new folder and then run our `terraform init` command to pull down our provisioners required.
We again put this in a new folder and then run our `terraform init` command to pull down our provisioners required.
![](Images/Day60_IAC4.png)
@ -128,16 +129,15 @@ We then run our `terraform apply` command and then take a look at our docker ps
![](Images/Day60_IAC5.png)
We can then also navigate to our WordPress front end. Much like when we went through this process with docker-compose in the containers section we can now run through the setup and our wordpress posts would be living in our MySQL database.
We can then also navigate to our WordPress front end. Much like when we went through this process with docker-compose in the containers section we can now run through the setup and our WordPress posts would be living in our MySQL database.
![](Images/Day60_IAC6.png)
Obviously now we have covered containers and Kubernetes in some detail, we probably know that this is ok for testing but if you were really going to be running a website you would not do this with containers alone and you would look at using Kubernetes to achieve this, Next up we are going to take a look using Terraform with Kubernetes.
Now that we have covered containers and Kubernetes in some detail, we probably know that this is OK for testing, but if you were really going to be running a website you would not do this with containers alone; you would look at using Kubernetes to achieve this. Next up we are going to take a look at using Terraform with Kubernetes.
### Provisioners
Provisioners are there so that if something cannot be declartive we have a way in which to parse this to our deployment.
Provisioners are there so that if something cannot be declarative we have a way in which to parse this to our deployment.
If you have no other alternative, and adding this complexity to your code is the only way to go, then you can do this by running something similar to the following block of code.
@ -152,7 +152,7 @@ resource "docker_container" "db" {
```
The remote-exec provisioner invokes a script on a remote resource after it is created. This could be used for something OS specific or it could be used to wrap in a configuration management tool. Although notice that we have some of these covered in their own provisioners.
The remote-exec provisioner invokes a script on a remote resource after it is created. This could be used for something OS-specific, or it could be used to wrap in a configuration management tool. Although notice that some of these are covered by their own dedicated provisioners.
[More details on provisioners](https://www.terraform.io/language/resources/provisioners/syntax)
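As a rough illustration (the instance, user and key below are made up and not part of the original demo), a remote-exec provisioner hangs off a resource and runs commands over SSH once that resource exists:

```
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"
  key_name      = "my-keypair"            # placeholder key pair name

  # Runs only after the instance is created; useful when something cannot be declarative
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]

    connection {
      type        = "ssh"
      user        = "ubuntu"
      private_key = file("~/.ssh/my-keypair.pem") # placeholder path
      host        = self.public_ip
    }
  }
}
```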
@ -168,20 +168,21 @@ The remote-exec provisioner invokes a script on a remote resource after it is cr
Modules are containers for multiple resources that are used together. A module consists of a collection of .tf files in the same directory.
Modules are a good way to separate your infrastructure resources as well as being able to pull in third party modules that have already been created so you do not have to re invent the wheel.
Modules are a good way to separate your infrastructure resources as well as be able to pull in third-party modules that have already been created so you do not have to reinvent the wheel.
For example if we wanted to use the same project to build out some VMs, VPCs, Security Groups and then also a Kubernetes cluster we would likely want to split our resources out into modules to better define our resources and where they are grouped.
For example, if we wanted to use the same project to build out some VMs, VPCs, Security Groups and then also a Kubernetes cluster we would likely want to split our resources out into modules to better define our resources and where they are grouped.
Another benefit to modules is that you can take these modules and use them on other projects or share publicly to help the community.
Another benefit to modules is that you can take these modules and use them on other projects or share them publicly to help the community.
We are breaking down our infrastructure into components, components are known here as modules.
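As a sketch of what calling a module looks like (the module name, path and variables below are hypothetical), the root configuration passes inputs in and reads outputs back, while the module's own `.tf` files hold the resource definitions:

```
# Call a local module and pass it some inputs
module "webserver" {
  source        = "./modules/webserver" # could also be a Terraform Registry path
  instance_type = "t3.micro"
  server_count  = 2
}

# Surface one of the module's outputs from the root configuration
output "webserver_ips" {
  value = module.webserver.ip_addresses # assumes the module declares this output
}
```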
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
@ -2,30 +2,31 @@
title: '#90DaysOfDevOps - Kubernetes & Multiple Environments - Day 61'
published: false
description: 90DaysOfDevOps - Kubernetes & Multiple Environments
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048743
---
## Kubernetes & Multiple Environments
So far during this section on Infrastructure as code we have looked at deploying virtual machines albeit to virtualbox but the premise is the same really as we define in code what we want our virtual machine to look like and then we deploy. The same for Docker containers and in this session we are going to take a look at how Terraform can be used to interact with resources supported by Kubernetes.
So far during this section on Infrastructure as code, we have looked at deploying virtual machines albeit to VirtualBox but the premise is the same really as we define in code what we want our virtual machine to look like and then we deploy. The same for Docker containers and in this session, we are going to take a look at how Terraform can be used to interact with resources supported by Kubernetes.
I have been using Terraform to deploy my Kubernetes clusters for demo purposes across the 3 main cloud providers and you can find the repository [tf_k8deploy](https://github.com/MichaelCade/tf_k8deploy)
However you can also use Terraform to interact with objects within the Kubernetes cluster, this could be using the [Kubernetes provider](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs) or it could be using the [Helm provider](https://registry.terraform.io/providers/hashicorp/helm/latest) to manage your chart deployments.
Now we could use `kubectl` as we have showed in previous sections. But there are some benefits to using Terraform in your Kubernetes environment.
Now we could use `kubectl` as we have shown in previous sections. But there are some benefits to using Terraform in your Kubernetes environment.
- Unified workflow - if you have used terraform to deploy your clusters, you could use the same workflow and tool to deploy within your Kubernetes clusters
- Unified workflow - if you have used Terraform to deploy your clusters, you could use the same workflow and tool to deploy within your Kubernetes clusters
- Lifecycle management - Terraform is not just a provisioning tool, its going to enable change, updates and deletions.
- Lifecycle management - Terraform is not just a provisioning tool, it's going to enable change, updates and deletions.
### Simple Kubernetes Demo
Much like the demo we created in the last session we can now deploy nginx into our Kubernetes cluster, I will be using minikube here again for demo purposes. We create our Kubernetes.tf file and you can find this in the [folder](/Days/IaC/Kubernetes/kubernetes.tf)
Much like the demo we created in the last session, we can now deploy nginx into our Kubernetes cluster; I will be using minikube here again for demo purposes. We create our Kubernetes.tf file and you can find this in the [folder](/Days/IaC/Kubernetes/Kubernetes.tf)
In that file we are going to define our Kubernetes provider, we are going to point to our kubeconfig file, create a namespace called nginx, then we will create a deployment which contains 2 replicas and finally a service.
In that file we are going to define our Kubernetes provider, we are going to point to our kubeconfig file, create a namespace called nginx, and then we will create a deployment which contains 2 replicas and finally a service.
```
terraform {
@ -109,53 +110,57 @@ We can now take a look at the deployed resources within our cluster.
![](Images/Day61_IAC4.png)
Now because we are using minikube and you will have seen in the previous section this has its own limitations when we try and play with the docker networking for ingress. But if we simply issue the `kubectl port-forward -n nginx svc/nginx 30201:80` command and open a browser to http://localhost:30201/ we should see our NGINX page.
Now, because we are using minikube, as you will have seen in the previous section, this has its limitations when we try to play with the Docker networking for ingress. But if we simply issue the `kubectl port-forward -n nginx svc/nginx 30201:80` command and open a browser to `http://localhost:30201/` we should see our NGINX page.
![](Images/Day61_IAC5.png)
If you want to try out more detailed demos with Terraform and Kubernetes then the [HashiCorp Learn site](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider) is fantastic to run through.
### Multiple Environments
If we wanted to take any of the demos we have ran through but wanted to now have specific production, staging and development environments looking exactly the same and leveraging this code there are two approaches to achieve this with Terraform
If we wanted to take any of the demos we have run through but now wanted specific production, staging and development environments looking the same and leveraging this code, there are two approaches to achieve this with Terraform (a short command sketch comparing the two follows after the pros and cons below):
- `terraform workspaces` - multiple named sections within a single backend
- file structure - Directory layout provides separation, modules provide reuse.
Each of the above do have their pros and cons though.
Each of the above does have its pros and cons though.
### terraform workspaces
Pros
- Easy to get started
- Convenient terraform.workspace expression
- Minimises code duplication
Cons
- Prone to human error (we were trying to eliminate this by using TF)
- State stored within the same backend
- Codebase doesnt unambiguously show deployment configurations.
- Codebase doesn't unambiguously show deployment configurations.
### File Structure
Pros
- Isolation of backends
- improved security
- decreased potential for human error
- Codebase fully represents deployed state
Cons
- Multiple terraform apply required to provision environments
- More code duplication, but can be minimised with modules.
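As a rough illustration of the difference between the two approaches, the same configuration could be driven either way from the CLI; the environment names, directory layout and `.tfvars` files below are only assumptions:

```Shell
# Approach 1: terraform workspaces - one codebase and backend, multiple named states
terraform workspace new staging
terraform workspace new production
terraform workspace select staging
terraform apply -var-file="staging.tfvars"

# Approach 2: file structure - one directory (and therefore one backend) per environment
terraform -chdir=environments/staging apply
terraform -chdir=environments/production apply
```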
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with them and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)

View File

@ -7,29 +7,30 @@ cover_image: null
canonical_url: null
id: 1049053
---
## Testing, Tools & Alternatives
As we close out this section on Infrastructure as Code we must mention about testing our code, the various different tools available and then some of the alternatives to Terraform to achieve this. As I said at the start of the section my focus was on Terraform because it is firstly free and open source, secondly it is cross platform and agnostic to environments. But there are also alternatives out there that should be considered but the overall goal is to make people aware that this is the way to deploy your infrastructure.
As we close out this section on Infrastructure as Code we must mention testing our code, the various tools available and then some of the alternatives to Terraform to achieve this. As I said at the start of the section my focus was on Terraform because it is firstly free and open source, secondly, it is cross-platform and agnostic to environments. But there are also alternatives out there that should be considered but the overall goal is to make people aware that this is the way to deploy your infrastructure.
### Code Rot
The first area I want to cover in this session is code rot, unlike application code, infrastructure as code might get used and then not for a very long time. Lets take the example that we are going to be using Terraform to deploy our VM environment in AWS, perfect and it works first time and we have our environment, but this environment doesnt change too often so the code gets left the state possibly or hopefully stored in a central location but the code does not change.
The first area I want to cover in this session is code rot; unlike application code, infrastructure as code might get used and then not touched for a very long time. Let's take the example that we are going to be using Terraform to deploy our VM environment in AWS: it works the first time and we have our environment, but this environment doesn't change too often, so the code gets left alone, the state possibly (or hopefully) stored in a central location, but the code itself does not change.
What if something changes in the infrastructure but it is done out of band, or other things change in our environment?
- Out of band changes
- Unpinned versions
- Deprecated dependancies
- Deprecated dependencies
- Unapplied changes
### Testing
Another huge area, which follows on from code rot, is the ability to test your IaC and make sure all areas are working the way they should.
First up there are some built in testing commands we can take a look at:
First up there are some built-in testing commands we can take a look at (a combined example follows after the tool lists below):
| Command | Description |
| --------------------- | ------------------------------------------------------------------------------------------ |
| -------------------- | ------------------------------------------------------------------------------------------ |
| `terraform fmt` | Rewrite Terraform configuration files to a canonical format and style. |
| `terraform validate` | Validates the configuration files in a directory, referring only to the configuration |
| `terraform plan` | Creates an execution plan, which lets you preview the changes that Terraform plans to make |
@ -40,15 +41,15 @@ We also have some testing tools available external to Terraform:
- [tflint](https://github.com/terraform-linters/tflint)
- Find possible errors
- Warn about deprecated syntax, unused declarations.
- Enforce best practices, naming conventions.
- Warn about deprecated syntax and unused declarations.
- Enforce best practices, and naming conventions.
Scanning tools
- [checkov](https://www.checkov.io/) - scans cloud infrastructure configurations to find misconfigurations before they're deployed.
- [tfsec](https://aquasecurity.github.io/tfsec/v1.4.2/) - static analysis security scanner for your Terraform code.
- [terrascan](https://github.com/accurics/terrascan) - static code analyzer for Infrastructure as Code.
- [terraform-compliance](https://terraform-compliance.com/) - a lightweight, security and compliance focused test framework against terraform to enable negative testing capability for your infrastructure-as-code.
- [terrascan](https://github.com/accurics/terrascan) - static code analyser for Infrastructure as Code.
- [terraform-compliance](https://terraform-compliance.com/) - a lightweight, security and compliance-focused test framework against terraform to enable the negative testing capability for your infrastructure-as-code.
- [snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files) - scans your Terraform code for misconfigurations and security issues
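Pulling the built-in commands and a couple of the scanners above into a single pre-deployment check might look something like the sketch below; it assumes tflint and tfsec are already installed and that you run it from the configuration directory:

```Shell
# Built-in checks: canonical formatting, then static validation of the configuration
terraform fmt -check -recursive
terraform validate

# Preview the changes Terraform would make without applying them
terraform plan

# External tools run against the same directory
tflint     # possible errors, deprecated syntax, unused declarations
tfsec .    # static security analysis of the Terraform code
```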
Managed Cloud offering
@ -77,26 +78,27 @@ We mentioned on Day 57 when we started this section that there were some alterna
| Azure Resource Manager | Pulumi |
| Google Cloud Deployment Manager | |
I have used AWS CloudFormation probably the most out of the above list and native to AWS but I have not used the others other than Terraform. As you can imagine the cloud specific versions are very good in that particular cloud but if you have multiple cloud environments then you are going to struggle to migrate those configurations or you are going to have multiple management planes for your IaC efforts.
I have used AWS CloudFormation probably the most out of the above list and native to AWS but I have not used the others other than Terraform. As you can imagine the cloud-specific versions are very good in that particular cloud but if you have multiple cloud environments then you are going to struggle to migrate those configurations or you are going to have multiple management planes for your IaC efforts.
I think an interesting next step for me is to take some time and learn more about [Pulumi](https://www.pulumi.com/)
From a Pulumi comparison on their site
*"Both Terraform and Pulumi offer a desired state infrastructure as code model where the code represents the desired infrastructure state and the deployment engine compares this desired state with the stacks current state and determines what resources need to be created, updated or deleted."*
> "Both Terraform and Pulumi offer the desired state infrastructure as code model where the code represents the desired infrastructure state and the deployment engine compares this desired state with the stacks current state and determines what resources need to be created, updated or deleted."
The biggest difference I can see is that unlike the HashiCorp Configuration Language (HCL) Pulumi allows for general purpose languages like Python, TypeScript, JavaScript, Go and .NET.
The biggest difference I can see is that unlike the HashiCorp Configuration Language (HCL) Pulumi allows for general-purpose languages like Python, TypeScript, JavaScript, Go and .NET.
A quick overview can be found in [Introduction to Pulumi: Modern Infrastructure as Code](https://www.youtube.com/watch?v=QfJTJs24-JM). I like the ease and choices you are prompted with and want to get into this a little more.
This wraps up the Infrastructure as code section and next we move on to that little bit of overlap with configuration management and in particular as we get past the big picture of configuration management we are going to be using Ansible for some of those tasks and demos.
This wraps up the Infrastructure as Code section; next we move on to that little bit of overlap with configuration management, and in particular, as we get past the big picture of configuration management, we are going to be using Ansible for some of those tasks and demos.
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with them and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)

View File

@ -2,26 +2,26 @@
title: '#90DaysOfDevOps - The Big Picture: Configuration Management - Day 63'
published: false
description: 90DaysOfDevOps - The Big Picture Configuration Management
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048711
---
## The Big Picture: Configuration Management
Coming straight off the back of the section covering Infrastructure as Code, there is likely going to be some crossover as we talk about Configuration Management or Application Configuration Management.
Configuration Management is the process of maintaining applications, systems and servers in a desired state. The overlap with Infrastructure as code is that IaC is going to make sure your infrastructure is at the desired state but after that especially terraform is not going to look after the desired state of your OS settings or Application and that is where Configuration Management tools come in. Making sure that system and applications perform the way it is expected as changes occur over Deane.
Configuration Management is the process of maintaining applications, systems and servers in the desired state. The overlap with Infrastructure as Code is that IaC will make sure your infrastructure is at the desired state, but after that, Terraform especially is not going to look after the desired state of your OS settings or applications, and that is where Configuration Management tools come in: making sure that systems and applications perform the way they are expected to as changes occur over time.
Configuration management keeps you from making small or large changes that go undocumented.
### Scenario: Why would you want to use Configuration Management
The scenario, or why you'd want to use Configuration Management: meet Dean. He's our system administrator and Dean is a happy camper, quite content
working on all of the systems in his environement.
What happens if their system fails, if there's a fire, a server goes down well? Dean knows exactly what to do he can fix that fire really easily the problems become really difficult for Dean however if multiple servers start failing particularly when you have large and expanding environments, this is why Dean really needs to have a configuration management tool. Configuration Management tools can help make Dean look like a rockstar, all he has to do is configure the right codes that allows him to push out the instructions on how to set up each of the servers quickly effectively and at scale.
working on all of the systems in his environment.
What happens if a system fails, if there's a fire, or a server goes down? Dean knows exactly what to do; he can fix that fire easily. The problems become difficult for Dean, however, if multiple servers start failing, particularly when you have large and expanding environments. This is why Dean needs a configuration management tool. Configuration Management tools can help make Dean look like a rockstar; all he has to do is write the right code that allows him to push out the instructions on how to set up each of the servers quickly, effectively and at scale.
### Configuration Management tools
@ -29,14 +29,15 @@ There are a variety of configuration management tools available, and each has sp
![](Images/Day63_config1.png)
At this stage we will take a quick fire look at the options in the above picture before making our choice on which one we will use and why.
At this stage, we will take a quickfire look at the options in the above picture before making our choice on which one we will use and why.
- **Chef**
- Chef ensures configuration is applied consistently in every environment, at any scale with infrastructure automation.
- Chef is an open-source tool developed by OpsCode written in Ruby and Erlang.
- Chef is best suited for organisations that have a hetrogenous infrastructure and are looking for mature solutions.
- Chef is best suited for organisations that have a heterogeneous infrastructure and are looking for mature solutions.
- Recipes and Cookbooks determine the configuration code for your systems.
- Pro - A large collection of recipes are available
- Pro - A large collection of recipes is available
- Pro - Integrates well with Git which provides a strong version control
- Con - Steep learning curve, a considerable amount of time required.
- Con - The main server doesn't have much control.
@ -47,39 +48,39 @@ At this stage we will take a quick fire look at the options in the above picture
- **Puppet**
- Puppet is a configuration management tool that supports automatic deployment.
- Puppet is built in Ruby and uses DSL for writing manifests.
- Puppet also works well with hetrogenous infrastructure where the focus is on scalability.
- Puppet also works well with heterogeneous infrastructure where the focus is on scalability.
- Pro - Large community for support.
- Pro - Well developed reporting mechanism.
- Con - Advance tasks require knowledge of Ruby language.
- Pro - Well-developed reporting mechanism.
- Con - Advance tasks require knowledge of the Ruby language.
- Con - The main server doesn't have much control.
- Architecture - Server / Clients
- Ease of setup - Moderate
- Language - Declartive - Specify only what to do
- Language - Declarative - Specify only what to do
- **Ansible**
- Ansible is an IT automation tool that automates configuration management, cloud provisioning, deployment and orchestration.
- The core of Ansible playbooks are written in YAML. (Should really do a section on YAML as we have seen this a few times)
- The core of Ansible playbooks is written in YAML. (Should do a section on YAML as we have seen this a few times)
- Ansible works well when there are environments that focus on getting things up and running fast.
- Works on playbooks which provide instructions to your servers.
- Pro - No agents needed on remote nodes.
- Pro - No agents are needed on remote nodes.
- Pro - YAML is easy to learn.
- Con - Performance speed is often less than other tools (Faster than Dean doing it himself manually)
- Con - YAML not as powerful as Ruby but less of a learning curve.
- Con - YAML is not as powerful as Ruby but has less of a learning curve.
- Architecture - Client Only
- Ease of setup - Very Easy
- Language - Procedural - Specify how to do a task
- **SaltStack**
- SaltStack is a CLI based tool that automates configuration management and remote execution.
- SaltStack is Python based whilst the instructions are written in YAML or its own DSL.
- SaltStack is a CLI-based tool that automates configuration management and remote execution.
- SaltStack is Python based whilst the instructions are written in YAML or its DSL.
- Perfect for environments with scalability and resilience as the priority.
- Pro - Easy to use when up and running
- Pro - Good reporting mechanism
- Con - Setup phase is tough
- Con - New web ui which is much less developed than the others.
- Con - The setup phase is tough
- Con - New web UI which is much less developed than the others.
- Architecture - Server / Clients
- Ease of setup - Moderate
- Language - Declartive - Specify only what to do
- Language - Declarative - Specify only what to do
### Ansible vs Terraform
@ -87,16 +88,14 @@ The tool that we will be using for this section is going to be Ansible. (Easy to
I think it is important to touch on some of the differences between Ansible and Terraform before we look into the tooling a little further.
| |Ansible |Terraform |
| ------------- | ------------------------------------------------------------- | ----------------------------------------------------------------- |
|Type |Ansible is a configuration management tool |Terraform is a an orchestration tool |
|Infrastructure |Ansible provides support for mutable infrastructure |Terraform provides support for immutable infrastructure |
|Language |Ansible follows procedural language |Terraform follows a declartive language |
|Provisioning |Ansible provides partial provisioning (VM, Network, Storage) |Terraform provides extensive provisioning (VM, Network, Storage) |
|Packaging |Ansible provides complete support for packaging & templating |Terraform provides partial support for packaging & templating |
|Lifecycle Mgmt |Ansible does not have lifecycle management |Terraform is heavily dependant on lifecycle and state mgmt |
| | Ansible | Terraform |
| -------------- | ------------------------------------------------------------ | ---------------------------------------------------------------- |
| Type | Ansible is a configuration management tool | Terraform is an orchestration tool |
| Infrastructure | Ansible provides support for mutable infrastructure | Terraform provides support for immutable infrastructure |
| Language | Ansible follows procedural language | Terraform follows a declarative language |
| Provisioning | Ansible provides partial provisioning (VM, Network, Storage) | Terraform provides extensive provisioning (VM, Network, Storage) |
| Packaging | Ansible provides complete support for packaging & templating | Terraform provides partial support for packaging & templating |
| Lifecycle Mgmt | Ansible does not have lifecycle management | Terraform is heavily dependent on lifecycle and state mgmt |
## Resources
@ -104,5 +103,4 @@ I think it is important to touch on some of the differences between Ansible and
- [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ)
- [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s)
See you on [Day 64](day64.md)

View File

@ -2,39 +2,42 @@
title: '#90DaysOfDevOps - Ansible: Getting Started - Day 64'
published: false
description: '90DaysOfDevOps - Ansible: Getting Started'
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048765
---
## Ansible: Getting Started
We covered a little what Ansible is in the [big picture session yesterday](day63.md) But we are going to get started with a little more information on top of that here. Firstly Ansible comes from RedHat. Secondly it is agentles, connects via SSH and runs commands. Thirdly it is cross platform (Linux & macOS, WSL2) and open-source (there is also a paid for enterprise option) Ansible pushes configuration vs other models.
We covered a little about what Ansible is in the [big picture session yesterday](day63.md), but we are going to get started with a little more information on top of that here. Firstly, Ansible comes from RedHat. Secondly, it is agentless and connects via SSH to run commands. Thirdly, it is cross-platform (Linux & macOS, WSL2) and open-source (there is also a paid-for enterprise option). Ansible pushes configuration out, versus other models that pull it.
### Ansible Installation
As you might imagine, RedHat and the Ansible team have done a fantastic job around documenting Ansible. This generally starts with the installation steps which you can find [here](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) Remember we said that Ansible is an agentless automation tool, the tool is deployed to a system referred to as a "Control Node" from this control node is manages machines and other devices (possibly network) over SSH.
It does state in the above linked documentation that the Windows OS cannot be used as the control node.
As you might imagine, RedHat and the Ansible team have done a fantastic job of documenting Ansible. This generally starts with the installation steps which you can find [here](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html). Remember we said that Ansible is an agentless automation tool; the tool is deployed to a system referred to as a "Control Node" and from this control node it manages machines and other devices (possibly network devices) over SSH.
For my control node and for at least this demo I am going to use the Linux VM we created way back in the [Linux section](day20.md) as my control node.
It does state in the above-linked documentation that the Windows OS cannot be used as the control node.
This system was running Ubuntu and the installation steps simply needs the following commands.
For my control node, at least for this demo, I am going to use the Linux VM we created way back in the [Linux section](day20.md).
```
This system was running Ubuntu and the installation steps simply need the following commands.
```Shell
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install ansible
```
Now we should have Ansible installed on our control node; you can check this by running `ansible --version` and you should see something similar to the below.
![](Images/Day64_config1.png)
Before we then start to look at controlling other nodes in our environment, we can also check functionality of ansible by running a command against our local machine `ansible localhost -m ping` will use an [Ansible Module](https://docs.ansible.com/ansible/2.9/user_guide/modules_intro.html) and this is a quick way to perform a single task across many different systems. I mean it is not much fun with just the local host but imagine you wanted to get something or make sure all your systems were up and you had 1000+ servers and devices.
Before we then start to look at controlling other nodes in our environment, we can also check the functionality of Ansible by running a command against our local machine. `ansible localhost -m ping` will use an [Ansible Module](https://docs.ansible.com/ansible/2.9/user_guide/modules_intro.html) and this is a quick way to perform a single task across many different systems. It is not much fun with just the localhost, but imagine you wanted to get something or make sure all your systems were up and you had 1000+ servers and devices.
![](Images/Day64_config2.png)
Or an actual real life use for a module might be something like `ansible webservers --m service -a "name=httpd state=started"` this will tell us if all of our webservers have the httpd service running. I have glossed over the webservers term used in that command.
Or an actual real-life use for a module might be something like `ansible webservers -m service -a "name=httpd state=started"`; this will tell us if all of our webservers have the httpd service running. I have glossed over the webservers term used in that command.
### hosts
@ -42,23 +45,23 @@ The way I used localhost above to run a simple ping module against the system, I
![](Images/Day64_config3.png)
In order for us to specify our hosts or the nodes that we want to automate with these tasks we need to define them. We can define them by navigating to the /etc/ansible directory on your system.
For us to specify our hosts or the nodes that we want to automate with these tasks, we need to define them. We can define them by navigating to the /etc/ansible directory on your system.
![](Images/Day64_config4.png)
The file we want to edit is the hosts file, using a text editor we can jump in and define our hosts. The hosts file contains lots of great instructions on how to use and modify the file. We want to scroll down to the bottom and we are going to create a new group called [windows] and we are going to add our `10.0.0.1` IP address for that host. Save the file.
The file we want to edit is the hosts file; using a text editor we can jump in and define our hosts. The hosts file contains lots of great instructions on how to use and modify the file. We want to scroll down to the bottom and we are going to create a new group called [windows] and we are going to add our `10.0.0.1` IP address for that host. Save the file.
![](Images/Day64_config5.png)
However remember I said you will need to have SSH available to enable ansible to connect to your system. As you can see below when I run `ansible windows -m ping` we get an unreachable because things failed to connect via SSH.
However, remember I said you will need to have SSH available to enable Ansible to connect to your system. As you can see below when I run `ansible windows -m ping` we get an unreachable because things failed to connect via SSH.
![](Images/Day64_config6.png)
I have now also started adding some additional hosts to our inventory, another name for this file as this is where you are going to define all of your devices, could be network devices, switches and routers for example also would be added here and grouped. In our hosts file though I have also added in my credentials for accessing the linux group of systems.
I have now also started adding some additional hosts to our inventory (another name for this file), as this is where you are going to define all of your devices; network devices such as switches and routers, for example, would also be added here and grouped. In our hosts file, though, I have also added my credentials for accessing the Linux group of systems.
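As a rough sketch, the kind of additions described above could be appended to the default inventory as shown below; the group names, IP addresses and credential variables are purely illustrative, and plain-text credentials in an inventory are not a best practice (an SSH key setup is covered later):

```Shell
# Append example groups to the default inventory (illustrative values only)
sudo tee -a /etc/ansible/hosts <<'EOF'
[windows]
10.0.0.1

[linux]
192.168.169.131 ansible_user=vagrant ansible_password=vagrant
192.168.169.132 ansible_user=vagrant ansible_password=vagrant
EOF
```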
![](Images/Day64_config7.png)
Now if we run `ansible linux -m ping` we get a success as per below.
Now if we run `ansible linux -m ping` we get a success as per below.
![](Images/Day64_config8.png)
@ -66,12 +69,12 @@ We then have the node requirements, these are the target systems you wish to aut
### Ansible Commands
You saw that we were able to run `ansible linux -m ping` against our Linux machine and get a response, basically with Ansible we have the ability to run many adhoc commands. But obviously you can run this against a group of systems and get that information back. [ad hoc commands](https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html)
You saw that we were able to run `ansible linux -m ping` against our Linux machine and get a response; basically, with Ansible we can run many ad hoc commands. But you can run this against a group of systems and get that information back. [ad hoc commands](https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html)
If you find yourself repeating commands or even worse you are having to log into individual systems to run these commands then Ansible can help there. For example the simple command below would give us the output of all the operating system details for all of the systems we add to our linux group.
If you find yourself repeating commands or even worse you are having to log into individual systems to run these commands then Ansible can help there. For example, the simple command below would give us the output of all the operating system details for all of the systems we add to our Linux group.
`ansible linux -a "cat /etc/os-release"`
Other use cases could be to reboot systems, copy files, manage packers and users. You can also couple ad hoc commands with Ansible modules.
Other use cases could be to reboot systems, copy files, and manage packages and users; a few example commands follow below. You can also couple ad hoc commands with Ansible modules.
Ad hoc commands use a declarative model, calculating and executing the actions required to reach a specified final state. They achieve a form of idempotence by checking the current state before they begin and doing nothing unless the current state is different from the specified final state.
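A few hedged examples of that kind of ad hoc usage against the `linux` group; the package and file names here are only placeholders:

```Shell
# Ensure a package is present on every host in the group (apt module, privilege escalation with --become)
ansible linux -m ansible.builtin.apt -a "name=htop state=present" --become

# Copy a file out to every host in the group
ansible linux -m ansible.builtin.copy -a "src=./motd dest=/etc/motd" --become

# Reboot every host in the group
ansible linux -m ansible.builtin.reboot --become
```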
@ -81,8 +84,4 @@ Ad hoc commands use a declarative model, calculating and executing the actions r
- [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ)
- [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s)
See you on [Day 65](day65.md)

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1049054
---
### Ansible Playbooks
In this section we will take a look at what I see as the main reason for Ansible: it is great to take a single command and hit many different servers to perform simple commands, such as rebooting a long list of servers and saving the hassle of having to connect to each one individually.
@ -19,13 +20,13 @@ This is where ansible playbooks come in. A playbook enables us to take our group
Playbook > Plays > Tasks
For anyone that comes from a sports background you may have come across the term playbook, a playbook then tells the team how you will play made up of various plays and tasks, if we think of the plays as the set pieces within the sport or game, and the tasks are associated to each play, you can have multiple tasks to make up a play and in the playbook you may have multiple different plays.
For anyone that comes from a sports background you may have come across the term playbook; a playbook tells the team how you will play, made up of various plays and tasks. If we think of the plays as the set pieces within the sport or game, then the tasks are associated with each play; you can have multiple tasks to make up a play, and in the playbook you may have multiple different plays.
These playbooks are written in YAML (YAML Ain't Markup Language); you will find a lot of the sections we have covered so far, especially Containers and Kubernetes, feature YAML-formatted configuration files.
Let's take a look at a simple playbook called playbook.yml.
```
```Yaml
- name: Simple Play
hosts: localhost
connection: local
@ -45,22 +46,22 @@ You can see the first task of "gathering steps" happened, but we didn't trigger
Our second task was to set a ping, this is not an ICMP ping but a python script to report back `pong` on successful connectivity to remote or localhost. [ansible.builtin.ping](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ping_module.html)
Then our third or really our second defined task as the first one will run unless you disable was the printing of a message telling us our OS. In this task we are using conditionals, we could run this playbook against all different types of operating systems and this would return the OS name. We are simply messaging this output for ease but we could add a task to say something like:
Then our third task, or really our second defined task (as the first one will run unless you disable it), was the printing of a message telling us our OS. In this task we are using conditionals: we could run this playbook against all different types of operating systems and it would return the OS name. We are simply messaging this output for ease, but we could add a task to say something like:
```
```Yaml
tasks:
- name: "shut down Debian flavoured systems"
command: /sbin/shutdown -t now
when: ansible_os_family == "Debian"
```
### Vagrant to setup our environment
### Vagrant to set up our environment
We are going to use Vagrant to set up our node environment, I am going to keep this at a reasonable 4 nodes but you can hopefully see that this could easily be 300 or 3000 and this is the power of Ansible and other configuration management tools to be able to configure your servers.
You can find this file located here ([Vagrantfile](/Days/Configmgmt/Vagrantfile))
```
```Vagrant
Vagrant.configure("2") do |config|
servers=[
{
@ -117,11 +118,11 @@ If you are resource contrained then you can also run `vagrant up web01 web02` to
### Ansible host configuration
Now that we have our environment ready, we can check ansible and for this we will use our Ubuntu desktop (You could use this but you can equally use any Linux based machine on your network accessible to the network below) as our control, lets also add the new nodes to our group in the ansible hosts file, you can think of this file as an inventory, an alternative to this could be another inventory file that is called on as part of your ansible command with `-i filename` this could be useful vs using the host file as you can have different files for different environments, maybe production, test and staging. Because we are using the default hosts file we do not need to specify as this would be the default used.
Now that we have our environment ready, we can check Ansible and for this, we will use our Ubuntu desktop (you can equally use any Linux-based machine on your network that can reach the nodes below) as our control. Let's also add the new nodes to our group in the ansible hosts file; you can think of this file as an inventory. An alternative to this could be another inventory file that is called as part of your ansible command with `-i filename`; this could be useful vs using the hosts file as you can have different files for different environments, maybe production, test and staging. Because we are using the default hosts file we do not need to specify it, as this would be the default used.
I have added the following to the default hosts file.
```
```Text
[control]
ansible-control
@ -136,22 +137,24 @@ web02
db01
```
![](Images/Day65_config2.png)
Before moving on we want to make sure we can run a command against our nodes; let's run `ansible nodes -m command -a hostname`. This simple command will test that we have connectivity and report back our hostnames.
Also note that I have added these nodes and IPs to my Ubuntu control node within the /etc/hosts file to ensure connectivity. We might also need to do SSH configuration for each node from the Ubuntu box.
Also, note that I have added these nodes and IPs to my Ubuntu control node within the /etc/hosts file to ensure connectivity. We might also need to do an SSH configuration for each node from the Ubuntu box.
```
```Text
192.168.169.140 ansible-control
192.168.169.130 db01
192.168.169.131 web01
192.168.169.132 web02
192.168.169.133 loadbalancer
```
![](Images/Day65_config3.png)
At this stage we want to run through setting up SSH keys between your control and your server nodes. This is what we are going to do next, another way here could be to add variables into your hosts file to give username and password. I would advise against this as this is never going to be a best practice.
At this stage, we want to run through setting up SSH keys between your control and your server nodes. This is what we are going to do next; another way here could be to add variables into your hosts file to give a username and password. I would advise against this as it is never going to be a best practice.
To set up SSH and share it amongst your nodes, follow the steps below; you will be prompted for passwords (`vagrant`) and you will likely need to hit `y` a few times to accept.
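The exact steps are in the repository, but as a sketch they amount to generating a key pair on the control node and copying the public key to each managed node; the hostnames below assume the entries added to the hosts file and the default `vagrant` user:

```Shell
# On the Ansible control node: generate a key pair (accept the defaults)
ssh-keygen -t ed25519

# Copy the public key to each node, entering the vagrant password when prompted
ssh-copy-id vagrant@web01
ssh-copy-id vagrant@web02
ssh-copy-id vagrant@loadbalancer
ssh-copy-id vagrant@db01
```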
@ -169,13 +172,13 @@ I am not running all my VMs and only running the webservers so I issued `ssh-cop
![](Images/Day65_config7.png)
Before running any playbooks I like to make sure that I have simple connectivity with my groups so I have ran `ansible webservers -m ping` to test connectivity.
Before running any playbooks I like to make sure that I have simple connectivity with my groups so I have run `ansible webservers -m ping` to test connectivity.
![](Images/Day65_config4.png)
### Our First "real" Ansible Playbook
Our first Ansible playbook is going to configure our webservers, we have grouped these in our hosts file under the grouping [webservers].
Our first Ansible playbook is going to configure our web servers; we have grouped these in our hosts file under the grouping [webservers].
Before we run our playbook we can confirm that our web01 and web02 do not have apache installed. The top of the screenshot below is showing you the folder and file layout I have created within my ansible control to run this playbook, we have the `playbook1.yml`, then in the templates folder we have the `index.html.j2` and `ports.conf.j2` files. You can find these files in the folder listed above in the repository.
@ -185,8 +188,7 @@ Then we SSH into web01 to check if we have apache installed?
You can see from the above that we have not got apache installed on our web01 so we can fix this by running the below playbook.
```
```Yaml
- hosts: webservers
become: yes
vars:
@ -224,34 +226,35 @@ You can see from the above that we have not got apache installed on our web01 so
name: apache2
state: restarted
```
Breaking down the above playbook:
- `- hosts: webservers` this is saying that our group to run this playbook on is a group called webservers
- `become: yes` means that our user running the playbook will become root on our remote systems. You will be prompted for the root password.
- We then have `vars` and this defines some environment variables we want throughout our webservers.
Following this we start our tasks,
Following this, we start our tasks,
- Task 1 is to ensure that apache is running the latest version
- Task 2 is writing the ports.conf file from our source found in the templates folder.
- Task 3 is creating a basic index.html file
- Task 4 is making sure apache is running
Finally we have a handlers section, [Handlers: Running operations on change](https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html)
Finally, we have a handlers section, [Handlers: Running operations on change](https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html)
"Sometimes you want a task to run only when a change is made on a machine. For example, you may want to restart a service if a task updates the configuration of that service, but not if the configuration is unchanged. Ansible uses handlers to address this use case. Handlers are tasks that only run when notified. Each handler should have a globally unique name."
At this stage you might be thinking but we have deployed 5 VMs (including our Ubuntu Desktop machine which is acting as our Ansible Control) The other systems will come into play during the rest of the section.
At this stage, you might be wondering why we have deployed 5 VMs (including our Ubuntu Desktop machine which is acting as our Ansible control). The other systems will come into play during the rest of the section.
### Run our Playbook
We are now ready to run our playbook against our nodes. To run our playbook we can use the `ansible-playbook playbook1.yml` We have defined our hosts that our playbook will run against within the playbook and this will walkthrough our tasks that we have defined.
We are now ready to run our playbook against our nodes. To run our playbook we can use `ansible-playbook playbook1.yml`. We have defined the hosts that our playbook will run against within the playbook, and this will walk through the tasks that we have defined.
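Before running it for real, a couple of optional switches can catch problems early; this is only a sketch, and `-K` simply prompts for the become password if passwordless sudo is not configured on the nodes:

```Shell
# Check the playbook syntax without touching any hosts
ansible-playbook playbook1.yml --syntax-check

# Dry run: report what would change without actually changing anything
ansible-playbook playbook1.yml --check -K

# Run it for real
ansible-playbook playbook1.yml -K
```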
When the command is complete we get an output showing our plays and tasks. This may take some time; you can see from the image below that it took a while to install everything and reach our desired state.
![](Images/Day65_config9.png)
We can then double check this by jumping into a node and checking we have the installed software on our node.
We can then double-check this by jumping into a node and checking we have the installed software on our node.
![](Images/Day65_config10.png)
@ -259,9 +262,9 @@ Just to round this out as we have deployed two standalone webservers with the ab
![](Images/Day65_config11.png)
We are going to build on this playbook as we move through the rest of this section. I am interested as well in taking our Ubuntu desktop and seeing if we could actually bootstrap our applications and configuration using Ansible so we might also touch this. You saw that we can use local host in our commands we can also run playbooks against our local host for example.
We are going to build on this playbook as we move through the rest of this section. I am also interested in taking our Ubuntu desktop and seeing if we could bootstrap our applications and configuration using Ansible, so we might touch on this too. You saw that we can use localhost in our commands; we can also run playbooks against our localhost, for example.
Another thing to add here is that we are only really working with Ubuntu VMs but Ansible is agnostic to the target systems. The alternatives that we have previously mentioned to manage your systems could be server by server (not scalable when you get over a large amount of servers, plus a pain even with 3 nodes) we can also use shell scripting which again we covered in the Linux section but these nodes are potentially different so yes it can be done but then someone needs to maintain and manage those scripts. Ansible is free and hits the easy button vs having to have a specialised script.
Another thing to add here is that we are only really working with Ubuntu VMs, but Ansible is agnostic to the target systems. The alternatives that we have previously mentioned to manage your systems could be going server by server (not scalable when you get over a large number of servers, plus a pain even with 3 nodes); we could also use shell scripting, which again we covered in the Linux section, but these nodes are potentially different, so yes it can be done, but then someone needs to maintain and manage those scripts. Ansible is free and hits the easy button vs having to have a specialised script.
## Resources

View File

@ -2,26 +2,27 @@
title: '#90DaysOfDevOps - Ansible Playbooks Continued... - Day 66'
published: false
description: 90DaysOfDevOps - Ansible Playbooks Continued...
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048712
---
## Ansible Playbooks Continued...
In our last section we started with creating our small lab using a Vagrantfile to deploy 4 machines and we used our Linux machine we created in that section as our ansible control system.
## Ansible Playbooks (Continued)
We also ran through a few scenarios of playbooks and at the end we had a playbook that made our web01 and web02 individual webservers.
In our last section, we started with creating our small lab using a Vagrantfile to deploy 4 machines and we used the Linux machine we created in that section as our ansible control system.
We also ran through a few scenarios of playbooks and at the end we had a playbook that made our web01 and web02 individual web servers.
![](Images/Day66_config1.png)
### Keeping things tidy
Before we get into further automation and deployment we should cover the ability to keep our playbook lean and tidy and how we can separate our taks and handlers into subfolders.
Before we get into further automation and deployment we should cover the ability to keep our playbook lean and tidy and how we can separate our tasks and handlers into subfolders.
we are basically going to copy our tasks into their own file within a folder.
We are going to copy our tasks into their own file within a folder.
```
```Yaml
- name: ensure apache is at the latest version
apt: name=apache2 state=latest
@ -46,7 +47,7 @@ we are basically going to copy our tasks into their own file within a folder.
and the same for the handlers.
```
```Yaml
- name: restart apache
service:
name: apache2
@ -67,7 +68,7 @@ We have just tidied up our playbook and started to separate areas that could mak
### Roles and Ansible Galaxy
At the moment we have deployed 4 VMs and we have configured 2 of these VMs as our webservers but we have some more specific functions namely, a database server and a loadbalancer or proxy. In order for us to do this and tidy up our repository we can use roles within Ansible.
At the moment we have deployed 4 VMs and we have configured 2 of these VMs as our webservers but we have some more specific functions namely, a database server and a loadbalancer or proxy. For us to do this and tidy up our repository, we can use roles within Ansible.
To do this we will use the `ansible-galaxy` command which is there to manage ansible roles in shared repositories.
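For reference, creating and inspecting a role skeleton looks roughly like this (the folder names are what `ansible-galaxy init` generates by default):

```Shell
# Create the standard role skeleton under roles/apache2
ansible-galaxy init roles/apache2

# Inspect what was generated: defaults, files, handlers, meta, tasks, templates, tests, vars
tree roles/apache2
```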
@ -81,11 +82,11 @@ The above command `ansible-galaxy init roles/apache2` will create the folder str
![](Images/Day66_config6.png)
Copy and paste is easy to move those files but we also need to make a change to the tasks/main.yml so that we point this to the apache2_install.yml.
Copy and paste is an easy way to move those files, but we also need to make a change to the tasks/main.yml so that we point this to the apache2_install.yml.
We also need to change our playbook now to refer to our new role. In the playbook1.yml and playbook2.yml we determine our tasks and handlers in different ways as we changed these between the two versions. We need to change our playbook to use this role as per below:
```
```Yaml
- hosts: webservers
become: yes
vars:
@ -102,7 +103,7 @@ We can now run our playbook again this time with the new playbook name `ansible-
![](Images/Day66_config8.png)
Ok, the depreciation although our playbook ran we should fix our ways now, in order to do that I have changed the include option in the tasks/main.yml to now be import_tasks as per below.
OK, we got a deprecation warning; although our playbook ran, we should fix our ways now. To do that I have changed the include option in the tasks/main.yml to now be import_tasks, as per below.
![](Images/Day66_config9.png)
@ -115,7 +116,7 @@ We are also going to create a few more roles whilst using `ansible-galaxy` we ar
![](Images/Day66_config10.png)
I am going to leave this one here and in the next session we will start working on those other nodes we have deployed but have not done anything with yet.
I am going to leave this one here and in the next session, we will start working on those other nodes we have deployed but have not done anything with yet.
## Resources

View File

@ -1,26 +1,28 @@
---
title: '#90DaysOfDevOps - Using Roles & Deploying a Loadbalancer - Day 67'
published: false
description: 90DaysOfDevOps - Using Roles & Deploying a Loadbalancer
tags: "devops, 90daysofdevops, learning"
description: '90DaysOfDevOps - Using Roles & Deploying a Loadbalancer'
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048713
---
## Using Roles & Deploying a Loadbalancer
In the last session we covered roles and used the `ansible-galaxy` command to help create our folder structures for some roles that we are going to use. We finished up with a much tidier working repository for our configuration code as everything is hidden away in our role folders.
In the last session, we covered roles and used the `ansible-galaxy` command to help create our folder structures for some roles that we are going to use. We finished up with a much tidier working repository for our configuration code as everything is hidden away in our role folders.
However we have only used the apache2 role and have a working playbook3.yaml to handle our webservers.
However, we have only used the apache2 role and have a working playbook3.yaml to handle our webservers.
At this point if you have only used `vagrant up web01 web02` now is the time to run `vagrant up loadbalancer` this will bring up another Ubuntu system that we will use as our Load Balancer/Proxy.
We have already defined this new machine in our hosts file, but we do not have the ssh key configured until it is available, so we need to also run `ssh-copy-id loadbalancer` when the system is up and ready.
We have already defined this new machine in our hosts file, but we do not have the ssh key configured until it is available, so we need to also run `ssh-copy-id loadbalancer` when the system is up and ready.
### Common role
I created at the end of yesterdays session the role of `common`, common will be used across all of our servers where as the other roles are specific to use cases, now the applications I am going to install as common as spurious and I cannot see many reasons for this to be the case but it shows the objective. In our common role folder structure, navigate to tasks folder and you will have a main.yml. In this yaml we need to point this to our install_tools.yml file and we do this by adding a line `- import_tasks: install_tools.yml` this used to be `include` but this is going to be depreciated soon enough so we are using import_tasks.
```
At the end of yesterday's session I created the role of `common`; common will be used across all of our servers, whereas the other roles are specific to use cases. Now, the applications I am going to install in common are somewhat spurious and I cannot see many real reasons for them, but it shows the objective. In our common role folder structure, navigate to the tasks folder and you will have a main.yml. In this YAML, we need to point to our install_tools.yml file and we do this by adding the line `- import_tasks: install_tools.yml`; this used to be `include` but that is going to be deprecated soon enough, so we are using import_tasks.
```Yaml
- name: "Install Common packages"
apt: name={{ item }} state=latest
with_items:
@ -29,9 +31,9 @@ I created at the end of yesterdays session the role of `common`, common will be
- figlet
```
In our playbook we then add in the common role for each host block.
In our playbook, we then add in the common role for each host block.
```
```Yaml
- hosts: webservers
become: yes
vars:
@ -45,13 +47,13 @@ In our playbook we then add in the common role for each host block.
### nginx
The next phase is for us to install and configure nginx on our loadbalancer vm. Like the common folder structure, we have the nginx based on the last session.
The next phase is for us to install and configure nginx on our loadbalancer VM. Like the common folder structure, we have the nginx based on the last session.
First of all we are going to add a host block to our playbook. This block will include our common role and then our new nginx role.
First of all, we are going to add a host block to our playbook. This block will include our common role and then our new nginx role.
The playbook can be found here. [playbook4.yml](Days/../Configmgmt/ansible-scenario4/playbook4.yml)
```
```Yaml
- hosts: webservers
become: yes
vars:
@ -69,7 +71,7 @@ The playbook can be found here. [playbook4.yml](Days/../Configmgmt/ansible-scena
- nginx
```
In order for this to mean anything, we have to define our tasks that we wish to run, in the same way we will modify the main.yml in tasks to point to two files this time, one for installation and one for configuration.
For this to mean anything, we have to define the tasks that we wish to run. In the same way as before, we will modify the main.yml in tasks to point to two files this time, one for installation and one for configuration.
There are some other files that I have modified based on the outcome we desire, take a look in the folder [ansible-scenario4](Days/Configmgmt/ansible-scenario4) for all the files changed. You should check the folders tasks, handlers and templates in the nginx folder and you will find those additional changes and files.
@ -85,9 +87,9 @@ Now that we have our webservers and loadbalancer configured we should now be abl
![](Images/Day67_config2.png)
If you are following along and you do not have this state then it could be down to the server IP addresses you have in your environment. The file can be found in `templates\mysite.j2` and looks similar to the below: You would need to update with your webserver IP addresses.
If you are following along and you do not have this state then it could be down to the server IP addresses you have in your environment. The file can be found in `templates\mysite.j2` and looks similar to the below: You would need to update with your web server IP addresses.
```
```J2
upstream webservers {
server 192.168.169.131:8000;
server 192.168.169.132:8000;
@ -101,7 +103,8 @@ If you are following along and you do not have this state then it could be down
}
}
```
I am pretty confident that what we have installed is all good but let's use an adhoc command using ansible to check these common tools installation.
I am pretty confident that what we have installed is all good, but let's use an ad-hoc command with Ansible to check the installation of these common tools.
`ansible loadbalancer -m command -a neofetch`
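A couple of follow-up checks could confirm both the common packages and the proxying itself; the loadbalancer IP below is taken from the example hosts entries earlier and may differ in your environment:

```Shell
# Confirm the common packages landed on the loadbalancer
ansible loadbalancer -m command -a "figlet proxy"

# Hit the loadbalancer from the control node and check it proxies through to the webservers
curl http://192.168.169.133
```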

View File

@ -2,22 +2,23 @@
title: '#90DaysOfDevOps - Tags, Variables, Inventory & Database Server config - Day 68'
published: false
description: '90DaysOfDevOps - Tags, Variables, Inventory & Database Server config'
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048780
---
## Tags, Variables, Inventory & Database Server config
### Tags
As we left our playbook in the session yesterday we would need to run every tasks and play within that playbook. Which means we would have to run the webservers and loadbalancer plays and tasks to completion.
As we left our playbook in the session yesterday we would need to run every task and play within that playbook. This means we would have to run the webservers and loadbalancer plays and tasks to completion.
However tags can enable us to seperate these out if we want. This could be an effcient move if we have extra large and long playbooks in our environments.
However, tags can enable us to separate these if we want. This could be an efficient move if we have extra large and long playbooks in our environments.
In our playbook file, in this case we are using [ansible-scenario5](Configmgmt/ansible-scenario5/playbook5.yml)
In our playbook file, in this case, we are using [ansible-scenario5](Configmgmt/ansible-scenario5/playbook5.yml)
```
```Yaml
- hosts: webservers
become: yes
vars:
@ -36,7 +37,8 @@ In our playbook file, in this case we are using [ansible-scenario5](Configmgmt/a
- nginx
tags: proxy
```
We can then confirm this by using the `ansible-playbook playbook5.yml --list-tags` and the list tags is going to outline the tags we have defined in our playbook.
We can then confirm this by using `ansible-playbook playbook5.yml --list-tags`; the list-tags option is going to outline the tags we have defined in our playbook.
![](Images/Day68_config1.png)
@ -44,11 +46,11 @@ Now if we wanted to target just the proxy we could do this by running `ansible-p
![](Images/Day68_config2.png)
tags can be added at the task level as well so we can get really granular on where and what you want to happen. It could be application focused tags, we could go through tasks for example and tag our tasks based on installation, configuration or removal. Another very useful tag you can use is
Tags can be added at the task level as well, so we can get granular on where and what we want to happen. It could be application-focused tags; we could go through tasks, for example, and tag our tasks based on installation, configuration or removal. Another very useful tag you can use is
`tag: always` this will ensure no matter what --tags you are using in your command if something is tagged with the always value then it will always be ran when you run the ansible-playbook command.
`tags: always` - this will ensure that no matter what --tags you are using in your command, if something is tagged with the always value then it will always run when you run the ansible-playbook command.
With tags we can also bundle multiple tags together and if we choose to run `ansible-playbook playbook5.yml --tags proxy,web` this will run all of the items with those tags. Obviously in our instance that would mean the same as running the the playbook but if we had multiple other plays then this would make sense.
With tags, we can also bundle multiple tags together and if we choose to run `ansible-playbook playbook5.yml --tags proxy,web` this will run all of the items with those tags. Obviously, in our instance, that would mean the same as running the playbook but if we had multiple other plays then this would make sense.
You can also define more than one tag.
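As a sketch (the task names and packages here are purely illustrative), task-level tags, multiple tags and the `always` tag could be combined like this:

```Yaml
- name: "Install nginx"
  apt: name=nginx state=latest
  tags:
    - install
    - proxy

- name: "Deploy the nginx site configuration"
  template:
    src: mysite.j2
    dest: /etc/nginx/sites-available/default
  tags:
    - configure
    - proxy

- name: "Gather service facts regardless of the tags selected"
  service_facts:
  tags:
    - always
```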
@ -61,17 +63,17 @@ There are two main types of variables within Ansible.
### Ansible Facts
Each time we have ran our playbooks, we have had a task that we have not defined called "Gathering facts" we can use these variables or facts to make things happen with our automation tasks.
Each time we have run our playbooks, we have had a task that we have not defined called "Gathering facts". We can use these variables or facts to make things happen with our automation tasks.
![](Images/Day68_config3.png)
If we were to run the following `ansible proxy -m setup` command we should see a lot of output in JSON format. There is going to be a lot of information on your terminal though to really use this so we would like to output this to a file using `ansible proxy -m setup >> facts.json` you can see this file in this repository, [ansible-scenario5](Configmgmt/ansible-scenario5/facts.json)
If we were to run the `ansible proxy -m setup` command we should see a lot of output in JSON format. There is going to be too much information to read comfortably on your terminal, so we would like to output it to a file using `ansible proxy -m setup >> facts.json`. You can see this file in this repository, [ansible-scenario5](Configmgmt/ansible-scenario5/facts.json)
![](Images/Day68_config4.png)
If you open this file you can see all sorts of information for our command. We can get our IP addresses, architecture, bios version. A lot of useful information if we want to leverage this and use this in our playbooks.
If you open this file you can see all sorts of information gathered by our command. We can get our IP addresses, architecture, and BIOS version. That is a lot of useful information if we want to leverage it in our playbooks.
An idea would be to potentially use one of these variables within our nginx template mysite.j2 where we hard coded the IP addresses of our webservers. You can do this by creating a for loop in your mysite.j2 and this is going to cycle through the group [webservers] this enables us to have more than our 2 webservers automatically and dynamically created or added to this load balancer configuration.
An idea would be to use one of these variables within our nginx template mysite.j2, where we hard-coded the IP addresses of our webservers. You can do this by creating a for loop in your mysite.j2 that cycles through the group [webservers]; this enables us to have more than our 2 webservers automatically and dynamically added to this load balancer configuration.
```J2
#Dynamic Config for server {{ ansible_facts['nodename'] }}
@ -89,13 +91,14 @@ An idea would be to potentially use one of these variables within our nginx temp
}
}
```
The outcome of the above will look the same as it does right now but if we added more webservers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured.
The outcome of the above will look the same as it does right now but if we added more web servers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured.
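Pulling those pieces together, a complete version of that dynamic template might look something like the sketch below (the port is the same 8000 we hard-coded before, and the host names come from the [webservers] inventory group, so they rely on the name resolution mentioned above):

```J2
#Dynamic Config for server {{ ansible_facts['nodename'] }}
upstream webservers {
{% for host in groups['webservers'] %}
        server {{ host }}:8000;
{% endfor %}
}

server {
    listen 80;

    location / {
        proxy_pass http://webservers;
    }
}
```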
### User created
User created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there.
User-created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there.
```
```Yaml
- hosts: webservers
become: yes
vars:
@ -115,9 +118,9 @@ User created variables are what we have created ourselves. If you take a look in
tags: proxy
```
We can however keep our playbook clear of variables by moving them to their own file. We are going to do this but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. In the root of that folder we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file. Removing them from the playbook along with vars: as well.
We can however keep our playbook clear of variables by moving them to a separate file. We are going to do this but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. In the root of that folder, we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file, removing them from the playbook along with vars: as well.
```
```Yaml
http_port: 8000
https_port: 4443
html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!"
@ -125,7 +128,7 @@ html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!"
Because we are associating this as a global variable we could also add in our NTP and DNS servers here as well. The variables are set from the folder structure that we have created. You can see below how clean our Playbook now looks.
```
```Yaml
- hosts: webservers
become: yes
roles:
@ -143,7 +146,7 @@ Because we are associating this as a global variable we could also add in our NT
One of those variables was the http_port, we can use this again in our for loop within the mysite.j2 as per below:
```
```J2
#Dynamic Config for server {{ ansible_facts['nodename'] }}
upstream webservers {
{% for host in groups['webservers'] %}
@ -160,16 +163,17 @@ One of those variables was the http_port, we can use this again in our for loop
}
```
We can also define an ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on.
We can also define an Ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on.
```
```J2
<html>
<h1>{{ html_welcome_msg }}! I'm webserver {{ ansible_facts['nodename'] }} </h1>
</html>
```
The results of running the `ansible-playbook playbook6.yml` command with our variable changes means that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group.
The results of running the `ansible-playbook playbook6.yml` command with our variable changes mean that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group.
![](Images/Day68_config5.png)
@ -177,21 +181,21 @@ We could also add a folder called host_vars and create a web01.yml and have a sp
### Inventory Files
So far we have used the default hosts file in the /etc/ansible folder to determine our hosts. We could however have different files for different environments, for example production and staging. I am not going to create more environments. But we are able to create our own host files.
So far we have used the default hosts file in the /etc/ansible folder to determine our hosts. We could however have different files for different environments, for example, production and staging. I am not going to create more environments, but we can create our own host files.
We can create multiple files for our different inventory of servers and nodes. We would call these using `ansible-playbook -i dev playbook.yml` you can also define variables within your hosts file and then print that out or leverage that variable somewhere else in your playbooks for example in the example and training course I am following along to below they have added the environment variable created in the host file to the loadbalancer web page template to show the environment as part of the web page message.
We can create multiple files for our different inventories of servers and nodes. We would call these using `ansible-playbook -i dev playbook.yml`. You can also define variables within your hosts file and then print them out or leverage them somewhere else in your playbooks. For example, in the training course I am following along with, they added an environment variable created in the host file to the loadbalancer web page template to show the environment as part of the web page message.
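As a hypothetical example, a YAML-format `dev` inventory carrying an environment variable for the webservers group might look like this (the host names and the `environment_name` variable are made up for illustration):

```Yaml
all:
  children:
    webservers:
      hosts:
        web01:
        web02:
      vars:
        environment_name: development
    proxy:
      hosts:
        loadbalancer:
    database:
      hosts:
        db01:
```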
### Deploying our Database server
We still have one more machine we have not powered up yet and configured. We can do this using `vagrant up db01` from where our Vagrantfile is located. When this is up and accessible we then need to make sure the SSH key is copied over using `ssh-copy-id db01` so that we can access.
We still have one more machine we have not powered up yet and configured. We can do this using `vagrant up db01` from where our Vagrantfile is located. When this is up and accessible we then need to make sure the SSH key is copied over using `ssh-copy-id db01` so that we can access it.
We are going to be working from the [ansible-scenario7](Configmgmt/ansible-scenario7) folder
Let's then use `ansible-galaxy init roles/mysql` to create a new folder structure for a new role called "mysql"
Let's then use `ansible-galaxy init roles/mysql` to create a new folder structure for a new role called "mysql".
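If it helps to visualise, the skeleton that `ansible-galaxy init` typically generates looks like this before we populate it (the exact layout can vary slightly between Ansible versions):

```Shell
ansible-galaxy init roles/mysql
# roles/mysql/
# ├── README.md
# ├── defaults/main.yml
# ├── files/
# ├── handlers/main.yml
# ├── meta/main.yml
# ├── tasks/main.yml
# ├── templates/
# ├── tests/
# └── vars/main.yml
```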
In our playbook we are going to add a new play block for the database configuration. We have our group database defined in our /etc/ansible/hosts file. We then instruct our database group to have the role common and a new role called mysql which we created in the previous step. We are also tagging our database group with database, this means as we discussed earlier we can choose to only run against these tags if we wish.
In our playbook, we are going to add a new play block for the database configuration. We have our group database defined in our /etc/ansible/hosts file. We then instruct our database group to have the role common and a new role called mysql, which we created in the previous step. We are also tagging our database group with database; this means, as we discussed earlier, we can choose to only run against these tags if we wish.
```
```Yaml
- hosts: webservers
become: yes
roles:
@ -216,11 +220,11 @@ In our playbook we are going to add a new play block for the database configurat
tags: database
```
Within our roles folder structure you will now have the tree automatically created, we need to populate the following:
Within our roles folder structure, you will now have the tree automatically created, we need to populate the following:
Handlers - main.yml
```
```Yaml
# handlers file for roles/mysql
- name: restart mysql
service:
@ -230,9 +234,9 @@ Handlers - main.yml
Tasks - install_mysql.yml, main.yml & setup_mysql.yml
install_mysql.yml - this task is going to be there to install mysql and ensure that the service is running.
install_mysql.yml - this task is going to be there to install MySQL and ensure that the service is running.
```
```Yaml
- name: "Install Common packages"
apt: name={{ item }} state=latest
with_items:
@ -256,7 +260,7 @@ install_mysql.yml - this task is going to be there to install mysql and ensure t
main.yml is a pointer file that will suggest that we import_tasks from these files.
```
```Yaml
# tasks file for roles/mysql
- import_tasks: install_mysql.yml
- import_tasks: setup_mysql.yml
@ -264,7 +268,7 @@ main.yml is a pointer file that will suggest that we import_tasks from these fil
setup_mysql.yml - This task will create our database and database user.
```
```Yaml
- name: Create my.cnf configuration file
template: src=templates/my.cnf.j2 dest=/etc/mysql/conf.d/mysql.cnf
notify: restart mysql
@ -290,7 +294,7 @@ setup_mysql.yml - This task will create our database and database user.
You can see from the above we are using some variables to determine some of our configuration such as passwords, usernames and databases, this is all stored in our group_vars/all/common_variables.yml file.
```
```Yaml
http_port: 8000
https_port: 4443
html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!"
@ -301,9 +305,10 @@ db_user: devops
db_pass: DevOps90
db_name: 90DaysOfDevOps
```
We also have the my.cnf.j2 file in the templates folder, which looks like below:
```
We also have the my.cnf.j2 file in the templates folder, which looks like the below:
```J2
[mysql]
bind-address = 0.0.0.0
```
@ -322,9 +327,9 @@ We fixed the above and ran the playbook again and we have a successful change.
We should probably make sure that everything is how we want it to be on our newly configured db01 server. We can do this from our control node using the `ssh db01` command.
To connect to mySQL I used `sudo /usr/bin/mysql -u root -p` and gave the vagrant password for root at the prompt.
To connect to MySQL I used `sudo /usr/bin/mysql -u root -p` and gave the vagrant password for root at the prompt.
When we have connected let's first make sure we have our user created called devops. `select user, host from mysql.user;`
When we have connected, let's first make sure we have our user called devops created. `select user, host from mysql.user;`
![](Images/Day68_config8.png)
@ -332,9 +337,9 @@ Now we can issue the `SHOW DATABASES;` command to see our new database that has
![](Images/Day68_config9.png)
I actually used root to connect but we could also now log in with our devops account in the same way using `sudo /usr/bin/mysql -u devops -p` but the password here is DevOps90.
I used root to connect, but we could also now log in with our devops account in the same way using `sudo /usr/bin/mysql -u devops -p`; the password here is DevOps90.
One thing I have found that in our `setup_mysql.yml` I had to add the line `login_unix_socket: /var/run/mysqld/mysqld.sock` in order to successfully connect to my db01 mysql instance and now everytime I run this it reports a change when creating the user, any suggestions would be greatly appreciated.
One thing I have found is that in our `setup_mysql.yml` I had to add the line `login_unix_socket: /var/run/mysqld/mysqld.sock` to successfully connect to my db01 MySQL instance and now every time I run this it reports a change when creating the user, any suggestions would be greatly appreciated.
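One hedged suggestion for that idempotency niggle: the `mysql_user` module supports `update_password: on_create`, which only sets the password when the user is first created, so combined with `login_unix_socket` the task should stop reporting a change on every run. A sketch, reusing the variables from common_variables.yml:

```Yaml
- name: Create database user (only reports changed on first creation)
  mysql_user:
    name: "{{ db_user }}"
    password: "{{ db_pass }}"
    priv: "{{ db_name }}.*:ALL"
    host: "%"
    state: present
    update_password: on_create
    login_unix_socket: /var/run/mysqld/mysqld.sock
```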
## Resources

@ -2,11 +2,12 @@
title: '#90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault - Day 69'
published: false
description: '90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault'
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048714
---
## All other things Ansible - Automation Controller (Tower), AWX, Vault
Rounding out the section on Configuration Management I wanted to have a look into the other areas that you might come across when dealing with Ansible.
@ -30,7 +31,7 @@ If you are looking for an enterprise solution then you will be looking for the A
Both AWX and the Automation Controller bring the following features above everything else we have covered in this section thus far.
- User Interface
- Role Based Access Control
- Role-Based Access Control
- Workflows
- CI/CD integration
@ -42,7 +43,7 @@ We are going to take a look at deploying AWX within our minikube Kubernetes envi
AWX does not need to be deployed to a Kubernetes cluster; the [GitHub repository](https://github.com/ansible/awx) for AWX from Ansible will give you that detail. However, starting in version 18.0, the AWX Operator is the preferred way to install AWX.
First of all we need a minikube cluster. We can do this if you followed along during the Kubernetes section by creating a new minikube cluster with the `minikube start --cpus=4 --memory=6g --addons=ingress` command.
First of all, we need a minikube cluster. We can do this if you followed along during the Kubernetes section by creating a new minikube cluster with the `minikube start --cpus=4 --memory=6g --addons=ingress` command.
![](Images/Day69_config2.png)
@ -50,9 +51,9 @@ The official [Ansible AWX Operator](https://github.com/ansible/awx-operator) can
I forked the repo above and then ran `git clone https://github.com/MichaelCade/awx-operator.git`. My advice is that you do the same and do not use my repository, as I might change things or it might not be there.
In the cloned repository you will find a awx-demo.yml file we need to change `NodePort` for `ClusterIP` as per below:
In the cloned repository you will find an awx-demo.yml file; we need to change `NodePort` to `ClusterIP` as per below:
```
```Yaml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
@ -70,7 +71,7 @@ In checking we have our new namespace and we have our awx-operator-controller po
![](Images/Day69_config4.png)
Within the cloned repository you will find a file called awx-demo.yml we now want to deploy this into our Kubernetes cluser and our awx namespace. `kubectl create -f awx-demo.yml -n awx`
Within the cloned repository you will find a file called awx-demo.yml we now want to deploy this into our Kubernetes cluster and our awx namespace. `kubectl create -f awx-demo.yml -n awx`
![](Images/Day69_config5.png)
@ -92,19 +93,19 @@ The username by default is admin, to get the password we can run the following c
![](Images/Day69_config9.png)
Obviously this then gives you a UI to manage your playbook and configuration management tasks in a centralised location, it also allows you as a team to work together vs what we have been doing so far here where we have been running from one ansible control station.
This then gives you a UI to manage your playbooks and configuration management tasks in a centralised location; it also allows you to work together as a team, versus what we have been doing so far where we have been running everything from one Ansible control station.
This is another one of those areas where you could easily spend a good amount of time walking through the capabilities of this tool.
I will call out a great resource from Jeff Geerling, which goes into more detail on using Ansible AWX. [Ansible 101 - Episode 10 - Ansible Tower and AWX](https://www.youtube.com/watch?v=iKmY4jEiy_A&t=752s)
In this video he also goes into great detail on the differences between Automation Controller (Previously Ansible Tower) and Ansible AWX (Free and Open Source).
In this video, he also goes into great detail on the differences between Automation Controller (Previously Ansible Tower) and Ansible AWX (Free and Open Source).
### Ansible Vault
`ansible-vault` allows us to encrypt and decrypt Ansible data files. Throughout this section we have skipped over and we have put some of our sensitive information in plain text.
`ansible-vault` allows us to encrypt and decrypt Ansible data files. Throughout this section, we have skipped over and put some of our sensitive information in plain text.
Built in to the Ansible binary is `ansible-vault` which allows us to mask away this sensitive information.
Built into the Ansible binary is `ansible-vault` which allows us to mask away this sensitive information.
![](Images/Day69_config10.png)
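For example, we could encrypt the variables file that currently holds our database credentials in plain text and then supply the vault password at run time; a minimal sketch using the scenario 7 playbook:

```Shell
# Encrypt the group variables file in place
ansible-vault encrypt group_vars/all/common_variables.yml

# View or edit it later without permanently decrypting it
ansible-vault edit group_vars/all/common_variables.yml

# Prompt for the vault password when the playbook runs
ansible-playbook playbook7.yml --ask-vault-pass
```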
@ -120,7 +121,7 @@ Now, we have already used `ansible-galaxy` to create some of our roles and file
### Ansible Testing
- [Ansible Molecule](https://molecule.readthedocs.io/en/latest/) - Molecule project is designed to aid in the development and testing of Ansible roles
- [Ansible Molecule](https://molecule.readthedocs.io/en/latest/) - The molecule project is designed to aid in the development and testing of Ansible roles
- [Ansible Lint](https://ansible-lint.readthedocs.io/en/latest/) - CLI tool for linting playbooks, roles and collections

@ -10,13 +10,13 @@ id: 1048836
## The Big Picture: CI/CD Pipelines
A CI/CD (Continous Integration/Continous Deployment) Pipeline implementation is the backbone of the modern DevOps environment.
A CI/CD (Continuous Integration/Continuous Deployment) Pipeline implementation is the backbone of the modern DevOps environment.
It bridges the gap between development and operations by automating the build, test and deployment of applications.
We covered a lot of this Continous mantra in the opening section of the challenge. But to reiterate:
We covered a lot of this continuous mantra in the opening section of the challenge. But to reiterate:
Continous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliabily. Automated build and test workflow steps triggered by Contininous Integration ensures that code changes being merged into the repository are reliable.
Continuous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliably. Automated build and test workflow steps triggered by Continuous Integration ensure that code changes being merged into the repository are reliable.
That code / Application is then delivered quickly and seamlessly as part of the Continuous Deployment process.
@ -24,13 +24,13 @@ That code / Application is then delivered quickly and seamlessly as part of the
- Ship software quickly and efficiently
- Facilitates an effective process for getting applications to market as fast as possible
- A continous flow of bug fixes and new features without waiting months or years for version releases.
- A continuous flow of bug fixes and new features without waiting months or years for version releases.
The ability for developers to make small, impactful changes regularly means we get faster fixes and new features sooner.
### Ok, so what does this mean?
On [Day 5](day5.md) we covered a lot of the theory behind DevOps and as already mentioned here already that the CI/CD Pipeline is the backbone of the modern DevOps environment.
On [Day 5](day05.md) we covered a lot of the theory behind DevOps and as already mentioned here that the CI/CD Pipeline is the backbone of the modern DevOps environment.
![DevOps](Images/Day5_DevOps8.png)
@ -48,7 +48,7 @@ The steps in the cycle are, developers write the **code** then it gets **built**
CI is a development practice that requires developers to integrate code into a shared repository several times a day.
When the code is written and pushed to a repository like github or gitlab that's where the magic begins.
When the code is written and pushed to a repository like GitHub or GitLab, that's where the magic begins.
![](Images/Day70_CICD1.png)
@ -58,25 +58,25 @@ The code is verified by an automated build which allows teams or the project own
From there the code is analysed and put through a series of automated tests; three examples are:
- Unit testing this tests the individual units of the source code
- Validation testing this makes sure that the software satisfies or fits the intended use
- Format testing this checks for syntax and other formatting errors
- Unit testing tests the individual units of the source code
- Validation testing makes sure that the software satisfies or fits the intended use
- Format testing checks for syntax and other formatting errors
These tests are created as a workflow and then are run every time you push to the master branch, so pretty much every major development team has some sort of CI/CD workflow. Remember, on a development team the new code could be coming in from teams all over the world, at different times of the day, from developers working on all sorts of different projects, so it is more efficient to build an automated workflow of tests that makes sure that everyone is on the same page before the code is accepted. It would take much longer for a human to do this each time.
![](Images/Day70_CICD3.png)
Once we have our tests complete and they are successful then we can compile and send to our repository. For example I am using Docker Hub but this could be anywhere that then gets leveraged for the CD aspect of the pipeline.
Once we have our tests complete and they are successful then we can compile and send them to our repository. For example, I am using Docker Hub but this could be anywhere that then gets leveraged for the CD aspect of the pipeline.
![](Images/Day70_CICD4.png)
So this process is obviously very much down to the software development process, we are creating our application, adding, fixing bugs etc and then updating our source control and versioning that whilst also testing.
So this process is very much down to the software development process, we are creating our application, adding, fixing bugs etc and then updating our source control and versioning that whilst also testing.
Moving onto the next phase is the CD element which in fact more and more is what we generally see from any off the shelf software, I would argue that we will see a trend that if we get our software from a vendor such as Oracle or Microsoft we will consume that from a Docker Hub type repository and then we would use our CD pipelines to deploy that into our environments.
Moving on to the next phase is the CD element, which more and more is what we generally see from off-the-shelf software. I would argue that we will see a trend where, if we get our software from a vendor such as Oracle or Microsoft, we will consume it from a Docker Hub type repository and then use our CD pipelines to deploy it into our environments.
### CD
Now we have our tested version of our code and we are ready to deploy out into the wild and like I say, the Software vendor will run through this stage but I strongly believe this is how we will all deploy the off the shelf software we require in the future.
Now we have a tested version of our code and we are ready to deploy it out into the wild. As I say, the software vendor will run through this stage, but I strongly believe this is how we will all deploy the off-the-shelf software we require in the future.
It is now time to release our code into an environment. This is going to include Production but also likely other environments as well such as staging.
@ -84,7 +84,7 @@ It is now time to release our code into an environment. This is going to include
Our next step, at least on day 1 of v1 of the software deployment, is to make sure we are pulling the correct code base to the correct environment. This could be pulling elements from the software repository (DockerHub), but it is more than likely that we are also pulling additional configuration from another code repository, for example the configuration for the application. In the diagram below we are pulling the latest release of the software from DockerHub and then we are releasing this to our environments, whilst possibly picking up configuration from a Git repository. Our CD tool performs this and pushes everything to our environment.
It is most likely that this is not done at the same time. i.e we would go to a staging environment run against this with our own configuration make sure things are correct and this could be a manual step for testing or it could again be automated (lets go with automated) before then allowing this code to be deployed into production.
It is most likely that this is not done at the same time, i.e. we would go to a staging environment, run against this with our configuration and make sure things are correct. This could be a manual step for testing, or it could again be automated (let's go with automated), before then allowing this code to be deployed into production.
![](Images/Day70_CICD6.png)
@ -92,17 +92,17 @@ Then after this when v2 of the application comes out we rinse and repeat the ste
### Why use CI/CD?
I think we have probably covered the benefits a number of time but it is because it automates things that otherwise would have to be done manually it finds small problems before it sneaks into the main codebase, you can probably imagine that if you push bad code out to your customers then you're going to have a bad time!
I think we have probably covered the benefits several times, but it is because it automates things that would otherwise have to be done manually and finds small problems before they sneak into the main codebase. You can probably imagine that if you push bad code out to your customers then you're going to have a bad time!
It also helps to prevent something that we call technical debt, which is the idea that since the main code repos are constantly being built upon over time, a shortcut fix taken on day one becomes an exponentially more expensive fix years later, because by then that band-aid of a fix is deeply intertwined and baked into all the code bases and logic.
### Tooling
Like with other sections we are going to get hands on with some of the tools that achieve the CI/CD pipeline process.
Like with other sections we are going to get hands-on with some of the tools that achieve the CI/CD pipeline process.
I think it is also important to note that not all tools have to do both CI and CD, We will take a look at ArgoCD which you guessed it is great at the CD element of deploying our software to a Kubernetes cluster. But something like Jenkins can work across many different platforms.
I think it is also important to note that not all tools have to do both CI and CD. We will take a look at ArgoCD which, you guessed it, is great at the CD element of deploying our software to a Kubernetes cluster. But something like Jenkins can work across many different platforms.
My plan is to look at the following:
I plan to look at the following:
- Jenkins
- ArgoCD

@ -2,48 +2,46 @@
title: '#90DaysOfDevOps - What is Jenkins? - Day 71'
published: false
description: 90DaysOfDevOps - What is Jenkins?
tags: "devops, 90daysofdevops, learning"
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048745
---
## What is Jenkins?
Jenkins is a continous integration tool that allows continous development, test and deployment of newly created code.
Jenkins is a continuous integration tool that allows continuous development, testing and deployment of newly created code.
There are two ways we can achieve this with either nightly builds or continous development. The first option is that our developers are developing throughout the day on their tasks and come the end of the set day they push their changes to the source code repository. Then during the night we run our unit tests and build of the software. This could be deemed as the old way to integrate all code.
There are two ways we can achieve this: either nightly builds or continuous development. The first option is that our developers are developing throughout the day on their tasks and, come the end of the set day, they push their changes to the source code repository. Then during the night we run our unit tests and build the software. This could be deemed the old way to integrate all code.
![](Images/Day71_CICD1.png)
The other option and the preferred way is that our developers are still committing their changes to source code, then when that code commit has been made there is a build process kicked off continously.
The other option and the preferred way is that our developers are still committing their changes to source code, then when that code commit has been made there is a build process kicked off continuously.
![](Images/Day71_CICD2.png)
The above methods means that with distributed developers across the world we don't have a set time each day where we have to stop committing our code changes. This is where Jenkins comes in to act as that CI server to control those tests and build processes.
The above methods mean that with distributed developers across the world we don't have a set time each day where we have to stop committing our code changes. This is where Jenkins comes in to act as that CI server to control those tests and build processes.
![](Images/Day71_CICD3.png)
I know we are talking about Jenkins here but I also want to add a few more to maybe look into later on down the line to get an understanding why I am seeing Jenkins as the overall most popular, why is that and what can the others do over Jenkins.
- TravisCI - A hosted, distributed continous integration service used to build and test software projects hosted on GitHub.
- Bamboo - Can run multiple builds in parallel for faster compilation, built in functionality to connect with repositories and has build tasks for Ant, Maven.
I know we are talking about Jenkins here, but I also want to list a few more to maybe look into later on down the line, to get an understanding of why Jenkins seems to be the most popular overall and what the others can offer over Jenkins.
- TravisCI - A hosted, distributed continuous integration service used to build and test software projects hosted on GitHub.
- Bamboo - Can run multiple builds in parallel for faster compilation, built-in functionality to connect with repositories and has build tasks for Ant, and Maven.
- Buildbot - is an open-source framework for automating software build, test and release processes. It is written in Python and supports distributed, parallel execution of jobs across multiple platforms.
- Apache Gump - Specific to Java projects, designed to build and test those Java projects every night. ensures that all projects are compatible at both API and functionality levels.
- Apache Gump - Specific to Java projects, designed to build and test those Java projects every night, ensuring that all projects are compatible at both API and functionality levels.
Because we are now going to focus on Jenkins - Jenkins is again open source like all of the above tools and is an automation server written in Java. It is used to automate the software development process via continous integration adn faciliates continous delivery.
We are now going to focus on Jenkins. Like all of the above tools, Jenkins is open source; it is an automation server written in Java, used to automate the software development process via continuous integration and to facilitate continuous delivery.
### Features of Jenkins
As you can probably expect Jenkins has a lot of features spanning a lot of areas.
**Easy Installation** - Jenkins is a self contained java based program ready to run with packages for Windows, macOS and Linux operating systems.
**Easy Installation** - Jenkins is a self-contained java based program ready to run with packages for Windows, macOS and Linux operating systems.
**Easy Configuration** - Easy setup and configured via a web interface which includes error checks and built in help.
**Easy Configuration** - Easy setup and configuration via a web interface which includes error checks and built-in help.
**Plug-ins** - Lots of plugins available in the Update Centre and integrates with many tools in the CI / CD toolchain.
**Plug-ins** - Lots of plugins are available in the Update Centre and integrate with many tools in the CI / CD toolchain.
**Extensible** - In addition to the Plug-Ins available, Jenkins can be extended by that plugin architecture which provides nearly infinite options for what it can be used for.
@ -71,27 +69,27 @@ Step 1 - Developers commit changes to the source code repository.
Step 2 - Jenkins checks the repository at regular intervals and pulls any new code.
Step 3 - A build server then builds the code into an executable, in this example we are using maven as a well known build server. Another area to cover.
Step 3 - A build server then builds the code into an executable, in this example, we are using maven as a well-known build server. Another area to cover.
Step 4 - If the build fails then feedback is sent back to the developers.
Step 5 - Jenkins then deploys the build app to the test server, in this example we are using selenium as a well known test server. Another area to cover.
Step 5 - Jenkins then deploys the build app to the test server, in this example, we are using selenium as a well-known test server. Another area to cover.
Step 6 - If the test fails then feedback is passed to the developers.
Step 7 - If the tests are successful then we can release to production.
Step 7 - If the tests are successful then we can release them to production.
This cycle is continous, this is what allows applications to be updated in minutes vs hours, days, months, years!
This cycle is continuous, this is what allows applications to be updated in minutes vs hours, days, months, and years!
![](Images/Day71_CICD5.png)
There is a lot more to the architecture of Jenkins if you require it, they have a master-slave capability, which enables a master to distribute the tasks to slave jenkins environment.
There is a lot more to the architecture of Jenkins if you require it, they have a master-slave capability, which enables a master to distribute the tasks to the slave Jenkins environment.
For reference, with Jenkins being open source there are going to be lots of enterprises that require support. CloudBees is the enterprise version of Jenkins that brings support and possibly other functionality for the paying enterprise customer.
An example of this in a customer is Bosch, you can find the Bosch case study [here](https://assets.ctfassets.net/vtn4rfaw6n2j/case-study-boschpdf/40a0b23c61992ed3ee414ae0a55b6777/case-study-bosch.pdf)
I am going to be looking for a step by step example of an application that we can use to walkthrough using Jenkins and then also use this with some other tools.
I am going to be looking for a step-by-step example of an application that we can use to walk through using Jenkins and then also use this with some other tools.
## Resources

@ -1,15 +1,16 @@
---
title: '#90DaysOfDevOps - Getting hands on with Jenkins - Day 72'
title: '#90DaysOfDevOps - Getting hands-on with Jenkins - Day 72'
published: false
description: 90DaysOfDevOps - Getting hands on with Jenkins
tags: "devops, 90daysofdevops, learning"
description: 90DaysOfDevOps - Getting hands-on with Jenkins
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048829
---
## Getting hands on with Jenkins
The plan today is to get some hands on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use.
## Getting hands-on with Jenkins
The plan today is to get some hands-on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use.
### What is a pipeline?
@ -19,11 +20,11 @@ Before we start we need to know what is a pipeline when it comes to CI, and we a
We want to take the processes or steps above and we want to automate them to get an outcome eventually meaning that we have a deployed application that we can then ship to our customers, end users etc.
This automated process enables us to have a version control through to our users and customers. Every change, feature enhancement, bug fix etc goes through this automated process confirming that everything is fine without too much manual intervention to ensure our code is good.
This automated process enables us to have version control through to our users and customers. Every change, feature enhancement, bug fix etc goes through this automated process confirming that everything is fine without too much manual intervention to ensure our code is good.
This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.
A jenkins pipeline, is written into a text file called a Jenkinsfile. Which itself should be committed to a source control repository. This is also known as Pipeline as code, we could also very much liken this to Infrastructure as code which we covered a few weeks back.
A Jenkins pipeline is written into a text file called a Jenkinsfile. Which itself should be committed to a source control repository. This is also known as Pipeline as code, we could also very much liken this to Infrastructure as code which we covered a few weeks back.
[Jenkins Pipeline Definition](https://www.jenkins.io/doc/book/pipeline/#ji-toolbar)
@ -31,17 +32,17 @@ A jenkins pipeline, is written into a text file called a Jenkinsfile. Which itse
I had some fun deploying Jenkins, You will notice from the [documentation](https://www.jenkins.io/doc/book/installing/) that there are many options on where you can install Jenkins.
Given that I have minikube on hand and we have used this a number of times I wanted to use this for this task also. (also it is free!) Although the steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) had me hitting a wall and not getting things up and running, you can compare the two when I document my steps here.
Given that I have minikube on hand and we have used this several times, I wanted to use it for this task also (it is also free!). The steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) documentation had me hitting a wall and not getting things up and running, though, so you can compare the two when I document my steps here.
The first step is to get our minikube cluster up and running, we can simply do this with the `minikube start` command.
![](Images/Day72_CICD1.png)
I have added a folder with all the YAML configuration and values that can be found [here](days/CICD/Jenkins) Now that we have our cluster we can run the following to create our jenkins namespace. `kubectl create -f jenkins-namespace.yml`
I have added a folder with all the YAML configuration and values that can be found [here](CICD/Jenkins) Now that we have our cluster we can run the following to create our jenkins namespace. `kubectl create -f jenkins-namespace.yml`
![](Images/Day72_CICD2.png)
We will be using Helm to deploy jenkins into our cluster, we covered helm in the Kubernetes section. We firstly need to add the jenkinsci helm repository `helm repo add jenkinsci https://charts.jenkins.io` then update our charts `helm repo update`.
We will be using Helm to deploy Jenkins into our cluster, we covered helm in the Kubernetes section. We first need to add the jenkinsci helm repository `helm repo add jenkinsci https://charts.jenkins.io` then update our charts `helm repo update`.
![](Images/Day72_CICD3.png)
@ -49,25 +50,25 @@ The idea behind Jenkins is that it is going to save state for its pipelines, you
![](Images/Day72_CICD4.png)
We also need a service account which we can create using this yaml file and command. `kubectl apply -f jenkins-sa.yml`
We also need a service account which we can create using this YAML file and command. `kubectl apply -f jenkins-sa.yml`
![](Images/Day72_CICD5.png)
At this stage we are good to deploy using the helm chart, we will firstly define our chart using `chart=jenkinsci/jenkins` and then we will deploy using this command where the jenkins-values.yml contain the persistence and service accounts that we previously deployed to our cluster. `helm install jenkins -n jenkins -f jenkins-values.yml $chart`
At this stage we are good to deploy using the helm chart; we will first define our chart using `chart=jenkinsci/jenkins` and then we will deploy using this command, where jenkins-values.yml contains the persistence and service account settings that we previously deployed to our cluster. `helm install jenkins -n jenkins -f jenkins-values.yml $chart`
![](Images/Day72_CICD6.png)
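Putting those steps together, the rough deployment sequence looks like this (file names are the ones from the repository folder linked above; the persistent volume definition is applied in the same way):

```Shell
# Namespace and service account from the repository folder
kubectl create -f jenkins-namespace.yml
kubectl apply -f jenkins-sa.yml

# Add the Jenkins chart repository and deploy using our values file
helm repo add jenkinsci https://charts.jenkins.io
helm repo update
chart=jenkinsci/jenkins
helm install jenkins -n jenkins -f jenkins-values.yml $chart
```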
At this stage our pods will be pulling the image but the pod will not have access to the storage so no configuration can be started in terms of getting Jenkins up and running.
At this stage, our pods will be pulling the image but the pod will not have access to the storage so no configuration can be started in terms of getting Jenkins up and running.
This is where the documentation did not help me massively understand what needed to happen. But we can see that we have no permission to start our jenkins install.
This is where the documentation did not help me massively understand what needed to happen. But we can see that we have no permission to start our Jenkins install.
![](Images/Day72_CICD7.png)
In order to fix the above or resolve, we need to make sure we provide access or the right permission in order for our jenkins pods to be able to write to this location that we have suggested. We can do this by using the `minikube ssh` which will put us into the minikube docker container we are running on, and then using `sudo chown -R 1000:1000 /data/jenkins-volume` we can ensure we have permissions set on our data volume.
To fix the above or resolve it, we need to make sure we provide access or the right permission for our Jenkins pods to be able to write to this location that we have suggested. We can do this by using the `minikube ssh` which will put us into the minikube docker container we are running on, and then using `sudo chown -R 1000:1000 /data/jenkins-volume` we can ensure we have permissions set on our data volume.
![](Images/Day72_CICD8.png)
The above process should fix the pods, however if not you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point you should have 2/2 running pods called jenkins-0.
The above process should fix the pods, however, if not you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point, you should have 2/2 running pods called jenkins-0.
![](Images/Day72_CICD9.png)
@ -79,7 +80,7 @@ Now open a new terminal as we are going to use the `port-forward` command to all
![](Images/Day72_CICD11.png)
We should now be able to open a browser and login to http://localhost:8080 and authenticate with the username: admin and password we gathered in a previous step.
We should now be able to open a browser and log in to `http://localhost:8080` and authenticate with the username: admin and password we gathered in a previous step.
![](Images/Day72_CICD12.png)
@ -91,13 +92,13 @@ From here, I would suggest heading to "Manage Jenkins" and you will see "Manage
![](Images/Day72_CICD14.png)
If you want to go even further and automate the deployment of Jenkins using a shell script this great repository was shared with me on twitter [mehyedes/nodejs-k8s](https://github.com/mehyedes/nodejs-k8s/blob/main/docs/automated-setup.md)
If you want to go even further and automate the deployment of Jenkins using a shell script this great repository was shared with me on Twitter [mehyedes/nodejs-k8s](https://github.com/mehyedes/nodejs-k8s/blob/main/docs/automated-setup.md)
### Jenkinsfile
Now we have Jenkins deployed in our Kubernetes cluster, we can now go back and think about this Jenkinsfile.
Every Jenkinsfile will likely start like this, Which is firstly where you would define your steps of your pipeline, in this instance you have Build > Test > Deploy. But we are not really doing anything other than using the `echo` command to call out the specific stages.
Every Jenkinsfile will likely start like this, Which is firstly where you would define the steps of your pipeline, in this instance you have Build > Test > Deploy. But we are not doing anything other than using the `echo` command to call out the specific stages.
```
@ -126,11 +127,12 @@ pipeline {
}
```
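For reference, a minimal echo-only declarative Jenkinsfile along those lines might look like the sketch below; the stage names and messages are illustrative:

```Groovy
pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Building the application..'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing the application..'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying the application..'
            }
        }
    }
}
```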
In our Jenkins dashboard, select "New Item" and give the item a name; I am going to use "echo1" and I am going to suggest that this is a Pipeline.
![](Images/Day72_CICD15.png)
Hit Ok and you will then have the tabs (General, Build Triggers, Advanced Project Options and Pipeline) for a simple test we are only interested in Pipeline. Under Pipeline you have the ability to add a script, we can copy and paste the above script into the box.
Hit OK and you will then have the tabs (General, Build Triggers, Advanced Project Options and Pipeline); for a simple test we are only interested in Pipeline. Under Pipeline you can add a script, so we can copy and paste the above script into the box.
As we said above this is not going to do much but it will show us the stages of our Build > Test > Deploy
@ -144,9 +146,9 @@ We should also open a terminal and run the `kubectl get pods -n jenkins` to see
![](Images/Day72_CICD18.png)
Ok, very simple stuff but we can now see that our Jenkins deployment and installation is working correctly and we can start to see the building blocks of the CI pipeline here.
Ok, very simple stuff but we can now see that our Jenkins deployment and installation are working correctly and we can start to see the building blocks of the CI pipeline here.
In the next section we will be building a Jenkins Pipeline.
In the next section, we will be building a Jenkins Pipeline.
## Resources

@ -2,22 +2,23 @@
title: '#90DaysOfDevOps - Building a Jenkins Pipeline - Day 73'
published: false
description: 90DaysOfDevOps - Building a Jenkins Pipeline
tags: "devops, 90daysofdevops, learning"
tags: 'DevOps, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048766
---
## Building a Jenkins Pipeline
In the last section we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline, that didn't do much at all other than echo out the stages of a Pipeline.
In the last section, we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline, that didn't do much at all other than echo out the stages of a Pipeline.
You might have also seen that there are some example scripts available for us to run in the Jenkins Pipeline creation.
![](Images/Day73_CICD1.png)
The first demo script is "Declartive (Kubernetes)" and you can see the stages below.
The first demo script is "Declarative (Kubernetes)" and you can see the stages below.
```
```Groovy
// Uses Declarative syntax to run commands inside a container.
pipeline {
agent {
@ -58,23 +59,24 @@ spec:
}
}
```
You can see below the outcome of what happens when this Pipeline is ran.
You can see below the outcome of what happens when this Pipeline is run.
![](Images/Day73_CICD2.png)
### Job creation
**Goals**
#### Goals
- Create a simple app and store in GitHub public repository (https://github.com/scriptcamp/kubernetes-kaniko.git)
- Create a simple app and store it in a public GitHub repository [https://github.com/scriptcamp/kubernetes-kaniko.git](https://github.com/scriptcamp/kubernetes-kaniko.git)
- Use Jenkins to build our docker Container image and push to docker hub. (for this we will use a private repository)
- Use Jenkins to build our docker container image and push it to Docker Hub (for this we will use a private repository).
To achieve this in our Kubernetes cluster running in or using Minikube we need to use something called [Kaniko](https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-a-kubernetes-cluster) It general though if you are using Jenkins in a real Kubernetes cluster or you are running it on a server then you can specify an agent which will give you the ability to perform the docker build commands and upload that to DockerHub.
To achieve this in our Kubernetes cluster running in or using Minikube, we need to use something called [Kaniko](https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-a-kubernetes-cluster). In general though, if you are using Jenkins in a real Kubernetes cluster, or you are running it on a server, then you can specify an agent which will give you the ability to perform the docker build commands and upload the image to DockerHub.
With the above in mind we are also going to deploy a secret into Kubernetes with our GitHub credentials.
With the above in mind, we are also going to deploy a secret into Kubernetes with our DockerHub credentials.
```
```Shell
kubectl create secret docker-registry dockercred \
--docker-server=https://index.docker.io/v1/ \
--docker-username=<dockerhub-username> \
@ -82,11 +84,11 @@ kubectl create secret docker-registry dockercred \
--docker-email=<dockerhub-email>
```
In fact I want to share another great resource from [DevOpsCube.com](https://devopscube.com/build-docker-image-kubernetes-pod/) running through much of what we will cover here.
I want to share another great resource from [DevOpsCube.com](https://devopscube.com/build-docker-image-kubernetes-pod/) running through much of what we will cover here.
### Adding credentials to Jenkins
However if you were on a Jenkins system unlike ours then you will likely want to define your credentials within Jenkins and then use them multiple times within your Pipelines and configurations. We can refer to these credentials in the Pipelines using the ID we determine on creation. I went ahead and stepped through and created a user entry for DockerHub and GitHub.
However, if you were on a Jenkins system unlike ours then you will likely want to define your credentials within Jenkins and then use them multiple times within your Pipelines and configurations. We can refer to these credentials in the Pipelines using the ID we determine on creation. I went ahead and stepped through and created a user entry for DockerHub and GitHub.
First of all select "Manage Jenkins" and then "Manage Credentials"
@ -100,7 +102,7 @@ Now select Global Credentials (Unrestricted)
![](Images/Day73_CICD5.png)
Then in the top left you have Add Credentials
Then in the top left, you have Add Credentials
![](Images/Day73_CICD6.png)
@ -108,17 +110,17 @@ Fill in your details for your account and then select OK, remember the ID is wha
![](Images/Day73_CICD7.png)
For GitHub you should use a [Personal Access Token](https://vzilla.co.uk/vzilla-blog/creating-updating-your-github-personal-access-token)
For GitHub, you should use a [Personal Access Token](https://vzilla.co.uk/vzilla-blog/creating-updating-your-github-personal-access-token)
Personally I did not find this process very intuitive to create these accounts, so even though we are not using I wanted to share the process as it is not clear from the UI.
I did not find this process very intuitive to create these accounts, so even though we are not using them here, I wanted to share the process as it is not clear from the UI.
### Building the pipeline
We have our DockerHub credentials deployed to as a secret into our Kubernetes cluster which we will call upon for our docker deploy to DockerHub stage in our pipeline.
We have our DockerHub credentials deployed as a secret into our Kubernetes cluster which we will call upon for our docker deploy to the DockerHub stage in our pipeline.
The pipeline script is what you can see below, this could in turn become our Jenkinsfile located in our GitHub repository which you can also see is listed in the Get the project stage of the pipeline.
```
```Yaml
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
@ -190,25 +192,25 @@ We are only interested in the Pipeline tab at the end.
![](Images/Day73_CICD11.png)
In the Pipeline definition we are going to copy and paste the pipeline script that we have above into the Script section and hit save.
In the Pipeline definition, we are going to copy and paste the pipeline script that we have above into the Script section and hit save.
![](Images/Day73_CICD12.png)
Next we will select the "Build Now" option on the left side of the page.
Next, we will select the "Build Now" option on the left side of the page.
![](Images/Day73_CICD13.png)
You should now wait a short amount of time, less than a minute really. and you should see under status the stages that we defined above in our script.
You should now wait a short amount of time, less than a minute, and you should see under Status the stages that we defined above in our script.
![](Images/Day73_CICD14.png)
More importantly if we now head on over to our DockerHub and check that we have a new build.
More importantly, we can now head over to DockerHub and check that we have a new build.
![](Images/Day73_CICD15.png)
This overall did take a while to figure out but I wanted to stick with it for the purpose of getting hands on and working through a scenario that anyone can run through using minikube and access to github and dockerhub.
Overall, this did take a while to figure out, but I wanted to stick with it to get hands-on and work through a scenario that anyone can run through using minikube and access to GitHub and DockerHub.
The DockerHub repository I used for this demo was a private one. But in the next section I want to advance some of these stages and actually have them do something vs just printing out `pwd` and actually run some tests and build stages.
The DockerHub repository I used for this demo was a private one. But in the next section, I want to advance some of these stages and have them actually do something, vs just printing out `pwd`, and actually run some tests and build stages.
## Resources

View File

@ -2,23 +2,24 @@
title: '#90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline - Day 74'
published: false
description: 90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048744
---
## Hello World - Jenkinsfile App Pipeline
In the last section we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository.
In the last section, we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository.
In this section we want to take this one step further and we want to achieve the following with our simple application.
In this section, we want to take this one step further and we want to achieve the following with our simple application.
### Objective
- Dockerfile (Hello World)
- Jenkinsfile
- Jenkins Pipeline to trigger when GitHub Repository is updated
- Use GitHub Repository as source.
- Use GitHub Repository as the source.
- Run - Clone/Get Repository, Build, Test, Deploy Stages
- Deploy to DockerHub with incremental version numbers
- Stretch Goal to deploy to our Kubernetes Cluster (This will involve another job and manifest repository using GitHub credentials)
@ -33,9 +34,9 @@ With the above this is what we were using as our source in our Pipeline, now we
![](Images/Day74_CICD2.png)
Now back in our Jenkins dashboard, we are going to create a new pipeline but now instead of pasting our script we are going to use "Pipeline script from SCM" We are then going to use the configuration options below.
Now back in our Jenkins dashboard, we are going to create a new pipeline, but now instead of pasting our script, we are going to use "Pipeline script from SCM". We are then going to use the configuration options below.
For reference we are going to use https://github.com/MichaelCade/Jenkins-HelloWorld.git as the repository URL.
For reference, we are going to use `https://github.com/MichaelCade/Jenkins-HelloWorld.git` as the repository URL.
![](Images/Day74_CICD3.png)
@ -47,7 +48,7 @@ This is a big consideration because if you are using costly cloud resources to h
![](Images/Day74_CICD4.png)
One thing I have changed since yesterdays session is I want to now upload my image to a public repository which in this case would be michaelcade1\90DaysOfDevOps, my Jenkinsfile has this change already. And from previous sections I have removed any existing demo container images.
One thing I have changed since yesterday's session is I want to now upload my image to a public repository which in this case would be michaelcade1\90DaysOfDevOps, my Jenkinsfile has this change already. And from the previous sections, I have removed any existing demo container images.
![](Images/Day74_CICD5.png)
@ -55,15 +56,15 @@ Going backwards here, we created our Pipeline and then as previously shown we ad
![](Images/Day74_CICD6.png)
At this stage our Pipeline has never ran and your stage view will look something like this.
At this stage, our Pipeline has never run and your stage view will look something like this.
![](Images/Day74_CICD7.png)
Now lets trigger the "Build Now" button. and our stage view will display our stages.
Now let's hit the "Build Now" button, and our stage view will display our stages.
![](Images/Day74_CICD8.png)
If we then head over to our DockerHub repository, we should have 2 new Docker images. We should have a Build ID of 1 and a latest because every build that we create based on the "Upload to DockerHub" is we send a version using the Jenkins Build_ID environment variable and we also issue a latest.
If we then head over to our DockerHub repository, we should have 2 new Docker images. We should have a Build ID of 1 and a latest, because for every build that we create, the "Upload to DockerHub" stage sends a version using the Jenkins Build_ID environment variable and we also issue a latest tag.
![](Images/Day74_CICD9.png)
@ -71,7 +72,7 @@ Let's go and create an update to our index.html file in our GitHub repository as
![](Images/Day74_CICD10.png)
If we head back to Jenkins and select "Build Now" again. We will see our #2 build is successful.
If we head back to Jenkins and select "Build Now" again, we will see that our #2 build is successful.
![](Images/Day74_CICD11.png)
@ -79,7 +80,7 @@ Then a quick look at DockerHub, we can see that we have our tagged version 2 and
![](Images/Day74_CICD12.png)
It is worth noting here that I have added into my Kubernetes cluster a secret that enables my access and authentication to push my docker builds into DockerHub. If you are following along you should repeat this process for your account, and also make a change to the Jenkinsfile that is associated to my repository and account.
It is worth noting here that I have added into my Kubernetes cluster a secret that enables my access and authentication to push my docker builds into DockerHub. If you are following along you should repeat this process for your account, and also make a change to the Jenkinsfile that is associated with my repository and account.
## Resources

View File

@ -7,20 +7,21 @@ cover_image: null
canonical_url: null
id: 1049070
---
## GitHub Actions Overview
In this section I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is where we will focus on in this session.
In this section, I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is what we will focus on in this session.
GitHub Actions is a CI/CD platform that allows us to build, test and deploy amongst other tasks our pipeline. It has the concept of workflows that build and test against a GitHub repository. You could also use GitHub Actions to drive other workflows based on events that happen within your repository.
GitHub Actions is a CI/CD platform that allows us to build, test and deploy amongst other tasks in our pipeline. It has the concept of workflows that build and test against a GitHub repository. You could also use GitHub Actions to drive other workflows based on events that happen within your repository.
### Workflows
Overall, in GitHub Actions our task is called a **Workflow**.
Overall, in GitHub Actions, our task is called a **Workflow**.
- A **workflow** is the configurable automated process.
- Defined as YAML files.
- Contain and run one or more **jobs**
- Will run when triggered by an **event** in your repository or can be ran manually
- Will run when triggered by an **event** in your repository or can be run manually
- You can have multiple workflows per repository
- A **workflow** will contain a **job** and then **steps** to achieve that **job**
- Within our **workflow** we will also have a **runner** on which our **workflow** runs.
@ -29,7 +30,7 @@ For example, you can have one **workflow** to build and test pull requests, anot
### Events
Events are a specific event in a repository that triggers the workflow to run.
An event is a specific activity in a repository that triggers the workflow to run.
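As a rough sketch of the kinds of triggers you can define (the branch name and cron schedule below are purely illustrative, not taken from this walkthrough), an `on:` section of a workflow file might look like this:

```Yaml
#Event - run on pushes and pull requests to main, nightly on a schedule,
#and allow the workflow to be triggered manually from the Actions tab
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  schedule:
    - cron: '0 2 * * *'
  workflow_dispatch:
```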
### Jobs
@ -37,15 +38,15 @@ A job is a set of steps in the workflow that execute on a runner.
### Steps
Each step within the job can be a shell script that gets executed, or an action. Steps are executed in order and they are dependant on each other.
Each step within the job can be a shell script that gets executed or an action. Steps are executed in order and they are dependent on each other.
### Actions
A repeatable custom application used for frequently repeated tasks.
An action is a repeatable custom application used for frequently repeated tasks.
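To make the difference concrete, here is a minimal sketch (illustrative only, not part of the demo repositories) of a job where one step uses a community action, `actions/checkout`, and another step is just a shell command:

```Yaml
name: steps-vs-actions
on: push
jobs:
  example:
    runs-on: ubuntu-latest
    steps:
      #Action - a reusable step published by the community
      - uses: actions/checkout@v3
      #Step - a plain shell command executed on the runner
      - run: echo "Hello #90DaysOfDevOps"
```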
### Runners
A runner is a server that runs the workflow, each runner runs a single job at a time. GitHub Actions provides the ability to run Ubuntu Linux, Microsoft Windows, and macOS runners. You can also host your own on specific OS or hardware.
A runner is a server that runs the workflow, each runner runs a single job at a time. GitHub Actions provides the ability to run Ubuntu Linux, Microsoft Windows, and macOS runners. You can also host your own on a specific OS or hardware.
Below you can see how this looks, we have our event triggering our workflow > our workflow consists of two jobs > within our jobs we then have steps and then we have actions.
@ -53,11 +54,11 @@ Below you can see how this looks, we have our event triggering our workflow > ou
### YAML
Before we get going with a real use case lets take a quick look at the above image in the form of an example YAML file.
Before we get going with a real use case let's take a quick look at the above image in the form of an example YAML file.
I have added # to comment in where we can find the components of the YAML workflow.
I have added # comments to show where we can find the components of the YAML workflow.
```
```Yaml
#Workflow
name: 90DaysOfDevOps
#Event
@ -80,7 +81,7 @@ jobs:
### Getting Hands-On with GitHub Actions
I think there are a lot of options when it comes to GitHub Actions, yes it will satisfy your CI/CD needs when it comes to Build, Test, Deploying your code and the continued steps thereafter.
I think there are a lot of options when it comes to GitHub Actions; yes, it will satisfy your CI/CD needs when it comes to building, testing and deploying your code and the continued steps thereafter.
I can see lots of options and other automated tasks that we could use GitHub Actions for.
@ -88,9 +89,9 @@ I can see lots of options and other automated tasks that we could use GitHub Act
One option is making sure your code is clean and tidy within your repository. This will be our first example demo.
I am going to be using some example code linked in one of the resources for this section, we are going to use `github/super-linter` to check against our code.
I am going to be using some example code linked in one of the resources for this section; we are going to use `github/super-linter` to check against our code.
```
```Yaml
name: Super-Linter
on: push
@ -111,29 +112,29 @@ jobs:
```
**github/super-linter**
You can see from the above that for one of our steps we have an action called github/super-linter and this is referring to a step that has already been written by the community. You can find out more about this here [Super-Linter](https://github.com/github/super-linter)
You can see from the above that for one of our steps we have an action called `github/super-linter`, and this is referring to a step that has already been written by the community. You can find out more about this here: [Super-Linter](https://github.com/github/super-linter)
"This repository is for the GitHub Action to run a Super-Linter. It is a simple combination of various linters, written in bash, to help validate your source code."
Also in the code snippet above it mentions GITHUB_TOKEN so I was interested to find out why and what this does and needed for.
Also in the code snippet above it mentions GITHUB_TOKEN so I was interested to find out why and what this does and is needed for.
"NOTE: If you pass the Environment variable `GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}` in your workflow, then the GitHub Super-Linter will mark the status of each individual linter run in the Checks section of a pull request. Without this you will only see the overall status of the full run. **There is no need to set the GitHub Secret as it is automatically set by GitHub, it only needs to be passed to the action.**"
"NOTE: If you pass the Environment variable `GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}` in your workflow, then the GitHub Super-Linter will mark the status of each linter run in the Checks section of a pull request. Without this, you will only see the overall status of the full run. **There is no need to set the GitHub Secret as it is automatically set by GitHub, it only needs to be passed to the action.**"
The bold text being important to note at this stage. We are using it but we do not need to set any environment variable within our repository.
The bold text is important to note at this stage. We are using it but we do not need to set any environment variable within our repository.
We will use our repository that we used in our Jenkins demo to test against.[Jenkins-HelloWorld](https://github.com/MichaelCade/Jenkins-HelloWorld)
We will use the repository that we used in our Jenkins demo to test against: [Jenkins-HelloWorld](https://github.com/MichaelCade/Jenkins-HelloWorld)
Here is our repository as we left it in the Jenkins sessions.
![](Images/Day75_CICD2.png)
In order for us to take advantage we have to use the Actions tab above to choose from the marketplace which I will cover shortly or we can create our own files using our super-linter code above, in order to create your own you must create a new file in your repository at this exact location. `.github/workflows/workflow_name` obviously making sure the workflow_name is something useful for you recognise, within here we can have many different workflows performing different jobs and tasks against our repository.
For us to take advantage of this, we either use the Actions tab above to choose a workflow from the marketplace (which I will cover shortly), or we create our own file using the super-linter code above. To create your own, you must create a new file in your repository at this exact location: `.github/workflows/workflow_name`, obviously making sure the workflow_name is something useful for you to recognise. Within here we can have many different workflows performing different jobs and tasks against our repository.
We are going to create `.github/workflows/super-linter.yml`
![](Images/Day75_CICD3.png)
We can then paste our code and commit the code to our repository, if we then head to the Actions tab we will now see our Super-Linter workflow listed as per below,
We can then paste our code and commit it to our repository; if we then head to the Actions tab we will now see our Super-Linter workflow listed as shown below.
![](Images/Day75_CICD4.png)
@ -141,11 +142,11 @@ We defined in our code that this workflow would run when we pushed anything to o
![](Images/Day75_CICD5.png)
As you can see from the above we have some errors most likely with my hacking ability vs coding ability.
As you can see from the above we have some errors most likely with my hacking ability vs my coding ability.
Although actually it was not my code at least not yet, in running this and getting an error I found this [issue](https://github.com/github/super-linter/issues/2255)
Although it was not actually my code (at least not yet), in running this and getting an error I found this [issue](https://github.com/github/super-linter/issues/2255)
Take #2 I changed the version of Super-Linter from version 3 to 4 and have ran the task again.
Take #2 I changed the version of Super-Linter from version 3 to 4 and have run the task again.
![](Images/Day75_CICD6.png)
@ -159,7 +160,7 @@ Now if we resolve the issue with my code and push the changes our workflow will
![](Images/Day75_CICD8.png)
If you hit the new workflow button highlighted above, this is going to open the door to a huge plethora of actions. One thing you might have noticed throughout this challenge is that we don't want to reinvent the wheel we want to stand on the shoulders of giants and share our code, automations and skills far and wide to make our lives easier.
If you hit the new workflow button highlighted above, this is going to open the door to a huge plethora of actions. One thing you might have noticed throughout this challenge is that we don't want to reinvent the wheel we want to stand on the shoulders of giants and share our code, automation and skills far and wide to make our lives easier.
![](Images/Day75_CICD9.png)

View File

@ -7,11 +7,12 @@ cover_image: null
canonical_url: null
id: 1048809
---
## ArgoCD Overview
“Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes”
Version control is the key here, ever made a change to your environment on the fly and have no recollection of that change and because the lights are on and everything is green you continue to keep plodding along? Ever made a change and broke everything or some of everything? You might have known you made the change and you can quickly roll back your change, that bad script or misspelling. Now ever done this a massive scale and maybe it was not you or maybe it was not found straight away and now the business is suffering. Therefore, version control is important. Not only that but “Application definitions, configurations, and environments should be declarative, and version controlled.” On top of this (which comes from ArgoCD), they also mention that “Application deployment and lifecycle management should be automated, auditable, and easy to understand.”
Version control is the key here. Ever made a change to your environment on the fly, with no recollection of that change, and because the lights are on and everything is green you keep plodding along? Ever made a change and broken everything, or some of everything? You might have known you made the change and you can quickly roll back your change, that bad script or misspelling. Now, ever done this on a massive scale, where maybe it was not you or maybe it was not found straight away, and now the business is suffering? Therefore, version control is important. Not only that but “Application definitions, configurations, and environments should be declarative, and version controlled.” On top of this (which comes from ArgoCD), they also mention that “Application deployment and lifecycle management should be automated, auditable, and easy to understand.”
From an Operations background but having played a lot around Infrastructure as Code this is the next step to ensuring all of that good stuff is taken care of along the way with continuous deployment/delivery workflows.
@ -21,7 +22,7 @@ From an Operations background but having played a lot around Infrastructure as C
We are going to be using our trusty minikube Kubernetes cluster locally again for this deployment.
```
```Shell
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
@ -32,17 +33,17 @@ Make sure all the ArgoCD pods are up and running with `kubectl get pods -n argoc
![](Images/Day76_CICD2.png)
Also let's check everything that we deployed in the namespace with `kubectl get all -n argocd`
Also, let's check everything that we deployed in the namespace with `kubectl get all -n argocd`
![](Images/Day76_CICD3.png)
When the above is looking good, we then should consider accessing this via the port forward. Using the `kubectl port-forward svc/argocd-server -n argocd 8080:443` command. Do this in a new terminal.
Then open a new web browser and head to https://localhost:8080
Then open a new web browser and head to `https://localhost:8080`
![](Images/Day76_CICD4.png)
To log in you will need a username of admin and then to grab your created secret as your password use the `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo`
To log in you will need the username admin, and to grab your created secret to use as your password run `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo`
![](Images/Day76_CICD5.png)
@ -58,15 +59,15 @@ The application I want to deploy is Pac-Man, yes that's right the famous game an
You can find the repository for [Pac-Man](https://github.com/MichaelCade/pacman-tanzu.git) here.
Instead of going through each step using screen shots I thought it would be easier to create a walkthrough video covering the steps taken for this one particular application deployment.
Instead of going through each step using screenshots, I thought it would be easier to create a walkthrough video covering the steps taken for this one particular application deployment.
[ArgoCD Demo - 90DaysOfDevOps](https://www.youtube.com/watch?v=w6J413_j0hA)
Note - During the video there is a service that is never satisfied as the app health being healthy this is because the LoadBalancer type set for the pacman service is in a pending state, in Minikube we do not have a loadbalancer configured. If you would like to test this you could change the YAML for the service to ClusterIP and use port forwarding to play the game.
Note - During the video, there is a service that never reports the app health as healthy; this is because the LoadBalancer type set for the Pacman service stays in a pending state, as in Minikube we do not have a load balancer configured. If you would like to test this you could change the YAML for the service to ClusterIP and use port forwarding to play the game.
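For reference, the same application could also be defined declaratively as an ArgoCD `Application` manifest instead of clicking through the UI. The sketch below is an assumption on my part, using the Pac-Man repository mentioned above; the path, target namespace and sync policy are illustrative and not taken from the video:

```Yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: pacman
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/MichaelCade/pacman-tanzu.git
    targetRevision: HEAD
    path: .                 # assumption - wherever the manifests live in the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: pacman       # assumption - namespace to deploy the app into
  syncPolicy:
    automated:              # let ArgoCD keep the cluster in sync with Git
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```

Applying something like this with `kubectl apply -n argocd -f <file>` would register the app, which ArgoCD then keeps in sync from Git.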
This wraps up the CICD Pipelines section. I feel there is a lot of focus on this area in the industry at the moment, and you will also hear terms such as GitOps related to the methodologies used within CICD in general.
The next section we move into is around Observability, another concept or area that is not new but it is more and more important as we look at our environments in a different way.
The next section we move into is around Observability, another concept or area that is not new but it is more and more important as we look at our environments differently.
## Resources

View File

@ -2,14 +2,15 @@
title: '#90DaysOfDevOps - The Big Picture: Monitoring - Day 77'
published: false
description: 90DaysOfDevOps - The Big Picture Monitoring
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048715
---
## The Big Picture: Monitoring
In this section we are going to talk about monitoring, what is it why do we need it?
In this section we are going to talk about monitoring, what is it and why do we need it?
### What is Monitoring?
@ -27,10 +28,10 @@ We are responsible for ensuring that all the services, applications and resource
How do we do it? There are three ways:
- Login manually to all of our servers and check all the data pertaining to services processes and resources.
- Log in manually to all of our servers and check all the data about service processes and resources.
- Write a script that logs in to the servers for us and checks on the data.
Both of these options would require considerable amount of work on our part,
Both of these options would require a considerable amount of work on our part.
The third option is easier: we could use a monitoring solution that is available in the market.
@ -44,19 +45,19 @@ The tool allows us to monitor our servers and see if they are being sufficiently
![](Images/Day77_Monitoring3.png)
Essentially monitoring allows us to achieve these two goals, check the status of our servers and services and determine the health of our infrastructure it also gives us a 40,000ft view of the complete infrastructure to see if our servers are up and running, if the applications are working properly and the web servers are reachable or not.
Essentially, monitoring allows us to achieve these two goals: check the status of our servers and services and determine the health of our infrastructure. It also gives us a 40,000ft view of the complete infrastructure to see if our servers are up and running, if the applications are working properly and whether the web servers are reachable or not.
It will tell us that our disk has been increasing by 10 percent for the last 10 weeks in a particular server, that it will exhaust entirely within the next four or five days and we'll fail to respond soon it will alert us when your disk or server is in a critical state so that we can take appropriate actions to avoid possible outages.
It will tell us that our disk usage has been increasing by 10 per cent over the last 10 weeks on a particular server, that it will be exhausted entirely within the next four or five days and that we'll soon fail to respond. It will alert us when our disk or server is in a critical state so that we can take appropriate actions to avoid possible outages.
In this case we can free up some disk space and ensure that our servers don't fail and that our users are not affected.
In this case, we can free up some disk space and ensure that our servers don't fail and that our users are not affected.
The difficult question for most monitoring engineers is: what do we monitor? And alternatively, what do we not?
Every system has a number of resources, which of these should we keep a close eye on and which ones can we turn a blind eye to for instance is it necessary to monitor CPU usage the answer is yes obviously nevertheless it is still a decision that has to be made is it necessary to monitor the number of open ports in the system we may or may not have to depending on the situation if it is a general-purpose server we probably won't have to but then again if it is a webserver we probably would have to.
Every system has several resources; which of these should we keep a close eye on and which ones can we turn a blind eye to? For instance, is it necessary to monitor CPU usage? The answer is obviously yes, nevertheless it is still a decision that has to be made. Is it necessary to monitor the number of open ports in the system? We may or may not have to, depending on the situation: if it is a general-purpose server we probably won't have to, but then again if it is a webserver we probably would have to.
### Continous Monitoring
### Continuous Monitoring
Monitoring is not a new item and even continous monitoring has been an ideal that many enterprises have adopted for many years.
Monitoring is not a new item and even continuous monitoring has been an ideal that many enterprises have adopted for many years.
There are three key areas of focus when it comes to monitoring.
@ -64,11 +65,11 @@ There are three key areas of focus when it comes to monitoring.
- Application Monitoring
- Network Monitoring
The important thing to note is that there are many tools available we have mentioned two generic systems and tools in this session but there are lots. The real benefit of a monitoring solution comes when you have really spent the time making sure you are answering that question of what should we be monitoring and what shouldn't we?
The important thing to note is that there are many tools available; we have mentioned two generic systems and tools in this session, but there are lots. The real benefit of a monitoring solution comes when you have spent the time making sure you are answering the question of what we should be monitoring and what we shouldn't.
We could turn on a monitoring solution in any of our platforms and it will start grabbing information but if that information is simply too much then you are going to struggle to benefit from that solution, you have to spend the time to configure.
We could turn on a monitoring solution in any of our platforms and it will start grabbing information but if that information is simply too much then you are going to struggle to benefit from that solution, you have to spend the time to configure it.
In the next session we will get hands on with a monitoring tool and see what we can start monitoring.
In the next session, we will get hands-on with a monitoring tool and see what we can start monitoring.
## Resources

View File

@ -7,25 +7,26 @@ cover_image: null
canonical_url: null
id: 1049056
---
## Hands-On Monitoring Tools
In the last session, I spoke about the big picture of monitoring and I took a look into Nagios, there was two reasons for doing this. The first was this is a peice of software I have heard a lot of over the years so wanted to know a little more about its capabilities.
In the last session, I spoke about the big picture of monitoring and I took a look into Nagios; there were two reasons for doing this. The first was that this is a piece of software I have heard a lot about over the years, so I wanted to know a little more about its capabilities.
Today I am going to be going into Prometheus, I have seen more and more of Prometheus in the Cloud-Native landscape but it can also be used to look after those physical resources as well outside of Kubernetes and the like.
### Prometheus - Monitors nearly everything
First of all Prometheus is Open-Source that can help you monitor containers and microservice based systems as well as physical, virtual and other services. There is a large community behind Prometheus.
First of all, Prometheus is an open-source tool that can help you monitor containers and microservice-based systems as well as physical, virtual and other services. There is a large community behind Prometheus.
Prometheus has a large array of [integrations and exporters](https://prometheus.io/docs/instrumenting/exporters/) The key being to exporting existing metrics as prometheus metrics. On top of this it also supports multiple proagramming languages.
Prometheus has a large array of [integrations and exporters](https://prometheus.io/docs/instrumenting/exporters/). The key is to export existing metrics as Prometheus metrics. On top of this, it also supports multiple programming languages.
Pull approach - If you are talking to thousands of microservices or systems and services a push method is going to be where you generally see the service pushing to the monitoring system. This brings some challenges around flooding the network, high cpu and also a single point of failure. Where Pull gives us a much better experience where Prometheus will pull from the metrics endpoint on every service.
Pull approach - If you are talking to thousands of microservices or systems and services, a push method is where you would generally see the service pushing to the monitoring system. This brings some challenges around flooding the network, high CPU and also a single point of failure. The pull approach gives us a much better experience, where Prometheus pulls from the metrics endpoint on every service.
Once again we see YAML for configuration for Prometheus.
![](Images/Day78_Monitoring7.png)
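To give a feel for what that configuration contains, here is a minimal sketch of a `prometheus.yml`; the job names and target addresses are illustrative, not the values from the screenshot above:

```Yaml
global:
  scrape_interval: 15s              # how often Prometheus pulls metrics
scrape_configs:
  - job_name: prometheus            # Prometheus scraping its own /metrics endpoint
    static_configs:
      - targets: ['localhost:9090']
  - job_name: node                  # illustrative - a node_exporter on another host
    static_configs:
      - targets: ['192.168.0.10:9100']
```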
Later on you are going to see how this looks when deployed into Kubernetes, in particular we have the **PushGateway** which pulls our metrics from our jobs/exporters.
Later on, you are going to see how this looks when deployed into Kubernetes; in particular, we have the **PushGateway**, which allows short-lived jobs to push their metrics so that Prometheus can scrape them.
We have the **AlertManager** which pushes alerts and this is where we can integrate into external services such as email, slack and other tooling.
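As a sketch of what that integration can look like (the webhook URL and channel below are placeholders, not real values), a minimal `alertmanager.yml` routing everything to Slack might contain:

```Yaml
route:
  receiver: slack-notifications     # send all alerts to this receiver by default
receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ   # placeholder webhook
        channel: '#alerts'
        send_resolved: true         # also notify when an alert clears
```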
@ -42,7 +43,7 @@ Various ways of installing Prometheus, [Download Section](https://prometheus.io/
But we are going to focus our efforts on deploying to Kubernetes, which also has some options.
- Create configuration YAML files
- Using an Operator (manager of all prometheus components)
- Using an Operator (manager of all Prometheus components)
- Using helm chart to deploy operator
### Deploying to Kubernetes
@ -53,11 +54,11 @@ We will be using our minikube cluster locally again for this quick and simple in
![](Images/Day78_Monitoring1.png)
As you can see from the above we have also ran a helm repo update, we are now ready to deploy Prometheus into our minikube environment using the `helm install stable prometheus-community/prometheus` command.
As you can see from the above we have also run a helm repo update, we are now ready to deploy Prometheus into our minikube environment using the `helm install stable prometheus-community/prometheus` command.
![](Images/Day78_Monitoring2.png)
After a couple of minutes you will see a number of new pods appear, for this demo I have deployed into the default namespace, I would normally push this to its own namespace.
After a couple of minutes, you will see several new pods appear; for this demo, I have deployed into the default namespace, but I would normally push this to its own namespace.
![](Images/Day78_Monitoring3.png)
@ -67,11 +68,12 @@ Once all the pods are running we can also take a look at all the deployed aspect
Now for us to access the Prometheus Server UI we can use the following command to port forward.
```
```Shell
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
```
When we first open our browser to http://localhost:9090 we see the following very blank screen.
When we first open our browser to `http://localhost:9090` we see the following very blank screen.
![](Images/Day78_Monitoring5.png)
@ -79,9 +81,9 @@ Because we have deployed to our Kubernetes cluster we will automatically be pick
![](Images/Day78_Monitoring6.png)
Short on learning PromQL and putting that into practice this is very much like I mentioned previously in that gaining metrics is great, so is monitoring but you have to know what you are monitoring and why and what you are not monitoring and why!
We are short on learning PromQL and putting that into practice here, but this is very much like I mentioned previously: gaining metrics is great, and so is monitoring, but you have to know what you are monitoring and why, and what you are not monitoring and why!
I want to come back to Prometheus but for now I think we need to think about Log Management and Data Visualisation to bring us back to Prometheus later on.
I want to come back to Prometheus but for now, I think we need to think about Log Management and Data Visualisation to bring us back to Prometheus later on.
## Resources

View File

@ -7,21 +7,22 @@ cover_image: null
canonical_url: null
id: 1049057
---
## The Big Picture: Log Management
A continuation to the infrastructure monitoring challenges and solutions, log management is another puzzle peice to the overall observability jigsaw.
A continuation of the infrastructure monitoring challenges and solutions, log management is another puzzle piece to the overall observability jigsaw.
### Log Management & Aggregation
Let's talk about two core concepts, the first of which is log aggregation: a way of collecting and tagging application logs from many different services into a single dashboard that can easily be searched.
One of the first systems that have to be built out in an application performance management system is log aggregation. Application performance management is the part of the devops lifecycle where things have been built and deployed and you need to make sure that they're continuously working so they have enough resources allocated to them and errors aren't being shown to users. In most production deployments there are many related events that emit logs across services at google a single search might hit ten different services before being returned to the user if you got unexpected search results that might mean a logic problem in any of the ten services and log aggregation helps companies like google diagnose problems in production, they've built a single dashboard where they can map every request to unique id so if you search something your search will get a unique id and then every time that search is passing through a different service that service will connect that id to what they're currently doing.
One of the first systems that have to be built out in an application performance management system is log aggregation. Application performance management is the part of the DevOps lifecycle where things have been built and deployed, and you need to make sure that they're continuously working, that they have enough resources allocated to them and that errors aren't being shown to users. In most production deployments there are many related events that emit logs across services; at Google, a single search might hit ten different services before being returned to the user. If you got unexpected search results, that might mean a logic problem in any of those ten services, and log aggregation helps companies like Google diagnose problems in production. They've built a single dashboard where they can map every request to a unique id, so if you search something your search will get a unique id, and then every time that search passes through a different service, that service will connect that id to what they're currently doing.
This is the essence of a good log aggregation platform efficiently collect logs from everywhere that emits them and make them easily searchable in the case of a fault again.
This is the essence of a good log aggregation platform: efficiently collect logs from everywhere that emits them and make them easily searchable in the case of a fault.
### Example App
Our example application is a web app, we have a typical front end and backend storing our critical data to a MongoDB database.
Our example application is a web app, we have a typical front end and backend storing our critical data in a MongoDB database.
If a user told us the page turned all white and printed an error message, we would be hard-pressed to diagnose the problem with our current stack; the user would need to manually send us the error and we'd need to match it with the relevant logs in the other three services.
@ -33,19 +34,19 @@ The web application would connect to the frontend which then connects to the bac
### The components of ELK
Elasticsearch, logstash and Kibana is that all of services send logs to logstash, logstash takes these logs which are text emitted by the application. For example the web application when you visit a web page, the web page might log this visitor access to this page at this time and that's an example of a log message those logs would be sent to logstash.
With Elasticsearch, Logstash and Kibana, all of the services send logs to Logstash, which takes these logs (text emitted by the application). For example, in the web application, when you visit a web page the page might log this visitor's access to this page at this time, and that's an example of a log message; those logs would be sent to Logstash.
Logstash would then extract things from them so for that log message user did **thing**, at **time**. It would extract the time and extract the message and extract the user and include those all as tags so the message would be an object of tags and message so that you could search them easily you could find all of the requests made by a specific user but logstash doesn't store things itself it stores things in elasticsearch which is a efficient database for querying text and elasticsearch exposes the results as Kibana and Kibana is a web server that connects to elasticsearch and allows administrators as the devops person or other people on your team, the on-call engineer to view the logs in production whenever there's a major fault. You as the administrator would connect to Kibana, Kibana would query elasticsearch for logs matching whatever you wanted.
Logstash would then extract things from them, so for the log message "user did **thing** at **time**" it would extract the time, extract the message and extract the user, and include those all as tags; the message would become an object of tags and message so that you could search them easily, for example to find all of the requests made by a specific user. Logstash doesn't store things itself; it stores things in Elasticsearch, which is an efficient database for querying text. Elasticsearch exposes the results to Kibana, and Kibana is a web server that connects to Elasticsearch and allows administrators (the DevOps person, or other people on your team such as the on-call engineer) to view the logs in production whenever there's a major fault. You as the administrator would connect to Kibana, and Kibana would query Elasticsearch for logs matching whatever you wanted.
You could say "hey Kibana, in the search bar I want to find errors", and Kibana would ask Elasticsearch to find the messages which contain the string "error"; Elasticsearch would then return results that had been populated by Logstash, which in turn would have been sent those log messages from all of the other services.
### How would we use ELK to diagnose a production problem?
A user says i saw error code one two three four five six seven when i tried to do this with elk setup we'd have to go to kibana enter one two three four five six seven in the search bar press enter and then that would show us the logs that corresponded to that and one of the logs might say internal server error returning one two three four five six seven and we'd see that the service that emitted that log was the backend and we'd see what time that log was emitted at so we could go to the time in that log and we could look at the messages above and below it in the backend and then we could see a better picture of what happened for the user's request and we'd be able to repeat this process going to other services until we found what actually caused the problem for the user.
A user says "I saw error code 1234567 when I tried to do this". With the ELK setup, we'd go to Kibana, enter 1234567 in the search bar and press enter. That would show us the logs that correspond to that code, and one of the logs might say "internal server error returning 1234567". We'd see that the service that emitted that log was the backend, and we'd see what time that log was emitted, so we could go to the time in that log and look at the messages above and below it in the backend. We could then see a better picture of what happened for the user's request, and we'd be able to repeat this process, going to other services, until we found what caused the problem for the user.
### Security and Access to Logs
An important peice of the puzzle is ensuring that logs are only visible to administrators (or the users and groups that absolutely need to have access), logs can contain sensitive information like tokens it's important that only authenticated users can access them you wouldn't want to expose Kibana to the internet without some way of authenticating.
An important piece of the puzzle is ensuring that logs are only visible to administrators (or the users and groups that need to have access). Logs can contain sensitive information like tokens, so it's important that only authenticated users can access them; you wouldn't want to expose Kibana to the internet without some way of authenticating.
### Examples of Log Management Tools
@ -61,8 +62,7 @@ Examples of log management platforms there's
Cloud providers also provide logging such as AWS CloudWatch Logs, Microsoft Azure Monitor and Google Cloud Logging.
Log Management is a key aspect of the overall observability of your applications and instracture environment for diagnosing problems in production it's relatively simple to install a turnkey solution like ELK or CloudWatch and it makes diagnosing and triaging problems in production significantly easier.
Log Management is a key aspect of the overall observability of your applications and infrastructure environment for diagnosing problems in production. It's relatively simple to install a turnkey solution like ELK or CloudWatch, and it makes diagnosing and triaging problems in production significantly easier.
## Resources

View File

@ -7,26 +7,24 @@ cover_image: null
canonical_url: null
id: 1048746
---
## ELK Stack
In this session, we are going to get a little more hands-on with some of the options we have mentioned.
### ELK Stack
ELK Stack is the combination of 3 separate tools:
- [Elasticsearch](https://www.elastic.co/what-is/elasticsearch) is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured.
- [Logstash](https://www.elastic.co/logstash/) is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash."
- [Logstash](https://www.elastic.co/logstash/) is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favourite "stash."
- [Kibana](https://www.elastic.co/kibana/) is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps.
ELK stack lets us reliably and securely take data from any source, in any format, then search, analyze, and visualize it in real time.
On top of the above mentioned components you might also see Beats which are lightweight agents that are installed on edge hosts to collect different types of data for forwarding into the stack.
On top of the above-mentioned components, you might also see Beats which are lightweight agents that are installed on edge hosts to collect different types of data for forwarding into the stack.
- Logs: Server logs that need to be analyzed are identified
- Logs: Server logs that need to be analysed are identified
- Logstash: Collect logs and events data. It even parses and transforms data
@ -38,25 +36,25 @@ On top of the above mentioned components you might also see Beats which are ligh
[Picture taken from Guru99](https://www.guru99.com/elk-stack-tutorial.html)
A good resource explaining this [The Complete Guide to the ELK Stack](https://logz.io/learn/complete-guide-elk-stack/)
A good resource explaining this is [The Complete Guide to the ELK Stack](https://logz.io/learn/complete-guide-elk-stack/)
With the addition of beats the ELK Stack is also now known as Elastic Stack.
With the addition of beats, the ELK Stack is also now known as Elastic Stack.
For the hands-on scenario there are many places you can deploy the Elastic Stack but we are going to be using docker compose to deploy locally on our system.
For the hands-on scenario, there are many places you can deploy the Elastic Stack but we are going to be using docker-compose to deploy locally on our system.
[Start the Elastic Stack with Docker Compose](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-stack-docker.html#get-started-docker-tls)
![](Images/Day80_Monitoring1.png)
You will find the original files and walkthrough that I used here [ deviantony/docker-elk](https://github.com/deviantony/docker-elk)
You will find the original files and walkthrough that I used here [deviantony/docker-elk](https://github.com/deviantony/docker-elk)
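That repository wires up the full stack with security enabled; purely as a sketch of the idea (the image versions and settings below are illustrative and not the ones from that repo), a stripped-down compose file pairing Elasticsearch with Kibana could look like this:

```Yaml
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node      # single node, no cluster bootstrap checks
      - xpack.security.enabled=false    # demo only - no authentication
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```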
Now we can run `docker-compose up -d`, the first time this has been ran will require the pulling of images.
Now we can run `docker-compose up -d`; the first time this is run it will require the pulling of the images.
![](Images/Day80_Monitoring2.png)
If you follow either this repository or the one that I used you will have either have the password of "changeme" or in my repository the password of "90DaysOfDevOps". The username is "elastic"
If you follow either this repository or the one that I used you will have either the password of "changeme" or in my repository the password of "90DaysOfDevOps". The username is "elastic"
After a few minutes we can navigate to http://localhost:5601/ which is our Kibana server / Docker container.
After a few minutes, we can navigate to `http://localhost:5601/` which is our Kibana server / Docker container.
![](Images/Day80_Monitoring3.png)
@ -68,9 +66,9 @@ Under the section titled "Get started by adding integrations" there is a "try sa
![](Images/Day80_Monitoring5.png)
I am going to select "Sample web logs" but this is really to get a look and feel of what data sets you can get into the ELK stack.
I am going to select "Sample web logs" but this is really to get a look and feel of what data sets you can get into the ELK stack.
When you have selected "Add Data" it takes a while to populate some of that data and then you have the "View Data" option and a list of the available ways to view that data in the drop down.
When you have selected "Add Data" it takes a while to populate some of that data and then you have the "View Data" option and a list of the available ways to view that data in the dropdown.
![](Images/Day80_Monitoring6.png)
@ -78,11 +76,11 @@ As it states on the dashboard view:
**Sample Logs Data**
*This dashboard contains sample data for you to play with. You can view it, search it, and interact with the visualizations. For more information about Kibana, check our docs.*
> This dashboard contains sample data for you to play with. You can view it, search it, and interact with the visualizations. For more information about Kibana, check our docs.
![](Images/Day80_Monitoring7.png)
This is using Kibana to visualise data that has been added into ElasticSearch via Logstash. This is not the only option but I personally wanted to deploy and look at this.
This is using Kibana to visualise data that has been added into ElasticSearch via Logstash. This is not the only option but I wanted to deploy and look at this.
We are going to cover Grafana at some point and you are going to see some data visualisation similarities between the two, you have also seen Prometheus.

View File

@ -2,40 +2,41 @@
title: '#90DaysOfDevOps - Fluentd & FluentBit - Day 81'
published: false
description: 90DaysOfDevOps - Fluentd & FluentBit
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048716
---
## Fluentd & FluentBit
Another data collector that I wanted to explore as part of this observability section was [Fluentd](https://docs.fluentd.org/). An Open-Source unified logging layer.
Fluentd has four key features that makes it suitable to build clean, reliable logging pipelines:
Fluentd has four key features that make it suitable to build clean, reliable logging pipelines:
Unified Logging with JSON: Fluentd tries to structure data as JSON as much as possible. This allows Fluentd to unify all facets of processing log data: collecting, filtering, buffering, and outputting logs across multiple sources and destinations. The downstream data processing is much easier with JSON, since it has enough structure to be accessible without forcing rigid schemas.
Unified Logging with JSON: Fluentd tries to structure data as JSON as much as possible. This allows Fluentd to unify all facets of processing log data: collecting, filtering, buffering, and outputting logs across multiple sources and destinations. The downstream data processing is much easier with JSON since it has enough structure to be accessible without forcing rigid schemas.
Pluggable Architecture: Fluentd has a flexible plugin system that allows the community to extend its functionality. Over 300 community-contributed plugins connect dozens of data sources to dozens of data outputs, manipulating the data as needed. By using plugins, you can make better use of your logs right away.
Minimum Resources Required: A data collector should be lightweight so that it runs comfortably on a busy machine. Fluentd is written in a combination of C and Ruby, and requires minimal system resources. The vanilla instance runs on 30-40MB of memory and can process 13,000 events/second/core.
Minimum Resources Required: A data collector should be lightweight so that it runs comfortably on a busy machine. Fluentd is written in a combination of C and Ruby and requires minimal system resources. The vanilla instance runs on 30-40MB of memory and can process 13,000 events/second/core.
Built-in Reliability: Data loss should never happen. Fluentd supports memory- and file-based buffering to prevent inter-node data loss. Fluentd also supports robust failover and can be set up for high availability.
[Installing Fluentd](https://docs.fluentd.org/quickstart#step-1-installing-fluentd)
### How apps log data?
### How do apps log data?
- Write to files. `.log` files (difficult to analyse without a tool and at scale)
- Log directly to a database (each application must be configured with the correct format)
- Third party applications (NodeJS, NGINX, PostgreSQL)
- Third-party applications (NodeJS, NGINX, PostgreSQL)
This is why we want a unified logging layer.
FluentD allows for the 3 logging data types shown above and gives us the ability to collect, process and send those to a destination, this could be sending them logs to Elastic, MongoDB, Kafka databases for example.
FluentD allows for the 3 logging data types shown above and gives us the ability to collect, process and send those to a destination, this could be sending them logs to Elastic, MongoDB, or Kafka databases for example.
Any data, from any data source, can be sent to FluentD, and that data can be sent on to any destination. FluentD is not tied to any particular source or destination.
In my research of Fluentd I kept stumbling across Fluent bit as another option and it looks like if you were looking to deploy a logging tool into your Kubernetes environment then fluent bit would give you that capability, even though fluentd can also be deployed to containers as well as servers.
In my research of Fluentd, I kept stumbling across Fluent bit as another option and it looks like if you were looking to deploy a logging tool into your Kubernetes environment then fluent bit would give you that capability, even though fluentd can also be deployed to containers as well as servers.
[Fluentd & Fluent Bit](https://docs.fluentbit.io/manual/about/fluentd-and-fluent-bit)
@ -43,7 +44,7 @@ Fluentd and Fluentbit will use the input plugins to transform that data to Fluen
We can also use tags and matches between configurations.
I cannot see a good reason for using fluentd and it sems that Fluent Bit is the best way to get started. Although they can be used together in some architectures.
I cannot see a good reason for using fluentd and it seems that Fluent Bit is the best way to get started. Although they can be used together in some architectures.
### Fluent Bit in Kubernetes
@ -51,16 +52,15 @@ Fluent Bit in Kubernetes is deployed as a DaemonSet, which means it will run on
Kubernetes annotations can be used within the configuration YAML of our applications.
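For example, the Fluent Bit Kubernetes filter understands pod annotations such as `fluentbit.io/parser` (suggest a parser for that pod's logs) and `fluentbit.io/exclude` (drop the pod's logs entirely). The pod spec below is a sketch of my own, not one from this walkthrough:

```Yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  annotations:
    fluentbit.io/parser: nginx        # parse this pod's logs with the nginx parser
    fluentbit.io/exclude: "false"     # set to "true" to drop this pod's logs
spec:
  containers:
    - name: web
      image: nginx:1.21
```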
First of all, we can deploy from the fluent helm repository. `helm repo add fluent https://fluent.github.io/helm-charts` and then install using the `helm install fluent-bit fluent/fluent-bit` command.
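For reference, those commands in one runnable block (I have added a `helm repo update`, which I would normally run before installing):

```
# Add the Fluent helm repository, refresh the local index and install Fluent Bit with default values
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit
```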
![](Images/Day81_Monitoring1.png)
In my cluster, I am also running Prometheus in my default namespace (for test purposes). We need to make sure our fluent-bit pod is up and running; we can do this using `kubectl get all | grep fluent`, which will show us the running pod, service and daemonset that we mentioned earlier.
![](Images/Day81_Monitoring2.png)
So that Fluent Bit knows where to get logs from, we have a configuration file; in this Kubernetes deployment of Fluent Bit, we have a ConfigMap which resembles the configuration file.
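If you want to look at that ConfigMap yourself, something like the following should show it; the ConfigMap name `fluent-bit` is an assumption based on the release name used above.

```
# List the ConfigMaps created by the chart and dump the Fluent Bit configuration
kubectl get configmaps | grep fluent
kubectl describe configmap fluent-bit
```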
![](Images/Day81_Monitoring3.png)
@ -116,7 +116,7 @@ fluent-bit.conf:
Read_From_Tail On
[FILTER]
Name kubernetes
Match kube.*
Merge_Log On
Keep_Log Off
@ -141,11 +141,11 @@ fluent-bit.conf:
Events: <none>
```
We can now port-forward our pod to our localhost to ensure that we have connectivity. First, get the name of your pod with `kubectl get pods | grep fluent` and then use `kubectl port-forward fluent-bit-8kvl4 2020:2020`. Then open a web browser to `http://localhost:2020/`.
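As a quick connectivity check you can also hit Fluent Bit's built-in HTTP endpoints from the terminal once the port forward is in place; the pod name below is from my cluster and will differ in yours.

```
# Forward the Fluent Bit HTTP server (port 2020) and query it
kubectl get pods | grep fluent
kubectl port-forward fluent-bit-8kvl4 2020:2020 &
curl -s http://localhost:2020/                 # build and version information
curl -s http://localhost:2020/api/v1/metrics   # input/output metrics as JSON
```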
![](Images/Day81_Monitoring4.png)
I also found this great medium article covering more about [Fluent Bit](https://medium.com/kubernetes-tutorials/exporting-kubernetes-logs-to-elasticsearch-using-fluent-bit-758e8de606af)
## Resources
@ -161,8 +161,6 @@ I also found this really great medium article covering more about [Fluent Bit](h
- [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/)
- [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw)
- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s)
- [Fluent Bit explained | Fluent Bit vs Fluentd](https://www.youtube.com/watch?v=B2IS-XS-cc0)
See you on [Day 82](day82.md)
View File
@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1049059
---
### EFK Stack
In the previous section, we spoke about the ELK Stack, which uses Logstash as the log collector in the stack; in the EFK Stack we are swapping that out for FluentD or FluentBit.
@ -21,11 +22,11 @@ We will be deploying the following into our Kubernetes cluster.
The EFK stack is a collection of 3 pieces of software bundled together:
- Elasticsearch: NoSQL database used to store data and provide an interface for searching and querying logs.
- Fluentd: Fluentd is an open-source data collector for a unified logging layer. Fluentd allows you to unify data collection and consumption for better use and understanding of data.
- Kibana: Interface for managing and visualising logs and statistics. Responsible for reading information from Elasticsearch.
### Deploying EFK on Minikube
@ -37,7 +38,7 @@ I have created [efk-stack.yaml](Days/Monitoring/../../Monitoring/EFK%20Stack/efk
![](Images/Day82_Monitoring3.png)
Depending on your system, and whether you have run this already and have the images pulled, you should now watch the pods reach a ready state before we move on. You can check the progress with the following command: `kubectl get pods -n kube-logging -w`. This can take a few minutes.
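As a recap, the deployment boils down to two commands; the manifest name comes from the walkthrough, so adjust the path to wherever you saved it, and I am assuming the manifest itself creates the `kube-logging` namespace.

```
# Apply the EFK manifest and watch the pods until Elasticsearch, Fluentd and Kibana are Ready
kubectl create -f efk-stack.yaml
kubectl get pods -n kube-logging -w
```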
![](Images/Day82_Monitoring4.png)
@ -45,12 +46,13 @@ The above command lets us keep an eye on things but I like to clarify that thing
![](Images/Day82_Monitoring5.png)
Once we have all our pods up and running, at this stage we should see:

- 3 pods associated with ElasticSearch
- 1 pod associated with Fluentd
- 1 pod associated with Kibana

We can also use `kubectl get all -n kube-logging` to show everything in our namespace; Fluentd, as explained previously, is deployed as a daemonset, Kibana as a deployment and Elasticsearch as a statefulset.
![](Images/Day82_Monitoring6.png)
@ -58,15 +60,15 @@ Now all of our pods are up and running we can now issue in a new terminal the po
![](Images/Day82_Monitoring7.png)
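The port forward itself will look something like the sketch below; the Service name `kibana` and port `5601` are assumptions based on a typical EFK manifest, so check the Service list for the actual names in yours.

```
# Expose Kibana on localhost:5601 (Service name assumed; verify it first)
kubectl get svc -n kube-logging
kubectl port-forward svc/kibana 5601:5601 -n kube-logging
```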
We can now open up a browser and navigate to `http://localhost:5601`. You will be greeted with either the screen you see below, or you might see a sample data screen, or you can continue and configure things yourself. Either way, by all means look at that test data; it is what we covered when we looked at the ELK stack in a previous session.
![](Images/Day82_Monitoring8.png)
Next, we need to hit the "discover" tab on the left menu and add "\*" to our index pattern. Continue to the next step by hitting "Next step".
![](Images/Day82_Monitoring9.png)
In Step 2 of 2, we are going to use the @timestamp option from the dropdown as this will filter our data by time. When you hit create pattern it might take a few seconds to complete.
![](Images/Day82_Monitoring10.png)
@ -74,9 +76,9 @@ If we now head back to our "discover" tab after a few seconds you should start t
![](Images/Day82_Monitoring11.png)
Now that we have the EFK stack up and running and we are gathering logs from our Kubernetes cluster via Fluentd, we can also take a look at other sources we can choose from. If you navigate to the home screen by hitting the Kibana logo in the top left, you will be greeted with the same page we saw when we first logged in.
We can add APM, Log data, metric data and security events from other plugins or sources.
![](Images/Day82_Monitoring12.png)
@ -84,7 +86,7 @@ If we select "Add log data" then we can see below that we have a lot of choices
![](Images/Day82_Monitoring13.png)
Under the metrics data, you will find that you can add sources for Prometheus and lots of other services.
### APM (Application Performance Monitoring)
@ -92,7 +94,6 @@ There is also the option to gather APM (Application Performance Monitoring) whic
I am not going to get into APM here but you can find out more on the [Elastic site](https://www.elastic.co/observability/application-performance-monitoring)
## Resources
- [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848)
@ -109,4 +110,3 @@ I am not going to get into APM here but you can find out more on the [Elastic si
- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s)
See you on [Day 83](day83.md)
View File
@ -2,18 +2,19 @@
title: '#90DaysOfDevOps - Data Visualisation - Grafana - Day 83'
published: false
description: 90DaysOfDevOps - Data Visualisation - Grafana
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048767
---
## Data Visualisation - Grafana
We saw a lot of Kibana over this section around Observability, but we should also take some time to cover Grafana. They are not the same, and they are not completely competing against each other.
Kibana's core feature is data querying and analysis. Using various methods, users can search the data indexed in Elasticsearch for specific events or strings within their data for root cause analysis and diagnostics. Based on these queries, users can use Kibana's visualisation features, which allow users to visualise data in a variety of different ways, using charts, tables, geographical maps and other types of visualisations.
Grafana started as a fork of Kibana, with the aim of supplying support for metrics, aka monitoring, which at that time Kibana did not provide.
Grafana is a free and Open-Source data visualisation tool. We commonly see Prometheus and Grafana together out in the field but we might also see Grafana alongside Elasticsearch and Graphite.
@ -29,19 +30,19 @@ There are no doubt others but Grafana is a tool that I have seen spanning the vi
### Prometheus Operator + Grafana Deployment
We have covered Prometheus already in this section but as we see these paired so often I wanted to spin up an environment that would allow us to at least see what metrics we could have displayed in a visualisation. We know that monitoring our environments is important but going through those metrics alone in Prometheus or any metric tool is going to be cumbersome and it is not going to scale. This is where Grafana comes in and provides us with that interactive visualisation of those metrics collected and stored in the Prometheus database.
With that visualisation, we can create custom charts, graphs and alerts for our environment. In this walkthrough, we will be using our minikube cluster.
We are going to start by cloning this down to our local system. Using `git clone https://github.com/prometheus-operator/kube-prometheus.git` and `cd kube-prometheus`
![](Images/Day83_Monitoring1.png)
The first job is to create our namespace within our minikube cluster with `kubectl create -f manifests/setup`. If you have not been following along in previous sections, we can use `minikube start` to bring up a new cluster here.
![](Images/Day83_Monitoring2.png)
Next, we are going to deploy everything we need for our demo using the `kubectl create -f manifests/` command, as you can see this is going to deploy a lot of different resources within our cluster.
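Pulling those steps together, the whole deployment looks like this; these are the commands used in the walkthrough, with `minikube start` only needed if you do not already have a cluster running.

```
# Clone kube-prometheus and deploy the monitoring stack into minikube
git clone https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus
minikube start                      # only if you need a fresh cluster
kubectl create -f manifests/setup   # namespace and CRDs
kubectl create -f manifests/        # Prometheus, Alertmanager, Grafana, exporters and friends
```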
![](Images/Day83_Monitoring3.png)
@ -53,11 +54,11 @@ When everything is running we can check all pods are in a running and healthy st
![](Images/Day83_Monitoring5.png)
With the deployment, we deployed several services that we are going to be using later on in the demo; you can check these by using the `kubectl get svc -n monitoring` command.
![](Images/Day83_Monitoring6.png)
And finally, let's check on all resources deployed in our new monitoring namespace using the `kubectl get all -n monitoring` command.
![](Images/Day83_Monitoring7.png)
@ -69,23 +70,25 @@ Open a browser and navigate to http://localhost:3000 you will be prompted for a
![](Images/Day83_Monitoring9.png)
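For reference, the port forward used to reach Grafana follows the usual kube-prometheus pattern; the Service name `grafana` is an assumption you can confirm against the `kubectl get svc -n monitoring` output from earlier.

```
# Expose Grafana on localhost:3000 from the monitoring namespace
kubectl --namespace monitoring port-forward svc/grafana 3000:3000
```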
The default username and password to access Grafana are:
```
Username: admin
Password: admin
```
However, you will be asked to provide a new password at first login. The initial screen or home page will give you some areas to explore as well as some useful resources to get up to speed with Grafana and its capabilities. Notice the "Add your first data source" and "create your first dashboard" widgets; we will be using them later.
![](Images/Day83_Monitoring10.png)
You will find that there is already a Prometheus data source added to our Grafana data sources. However, because we are using minikube, we also need to port forward Prometheus so that it is available on our localhost; opening a new terminal, we can run the following command: `kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090`. On the home page of Grafana, we can now enter the widget "Add your first data source" and from here we are going to select Prometheus.
![](Images/Day83_Monitoring11.png)
For our new data source, we can use the address `http://localhost:9090` and we will also need to change the Access dropdown to Browser, as highlighted below.
![](Images/Day83_Monitoring12.png)
At the bottom of the page, we can now hit save and test. This should give us the outcome you see below if the port forward for Prometheus is working.
![](Images/Day83_Monitoring13.png)
@ -101,11 +104,11 @@ If you then select the Metrics browser you will have a long list of metrics bein
![](Images/Day83_Monitoring16.png)
For the demo I am going to find a metric that gives us some output around our system resources, `cluster:node_cpu:ratio{}` gives us some detail on the nodes in our cluster and proves that this integration is working.
![](Images/Day83_Monitoring17.png)
Once you are happy with this as your visualisation then you can hit the apply button in the top right and you will then add this graph to your dashboard. You can go ahead and add additional graphs and other charts to give you the visuals that you need.
![](Images/Day83_Monitoring18.png)
@ -113,24 +116,24 @@ We can however take advantage of thousands of previously created dashboards that
![](Images/Day83_Monitoring19.png)
If we search for Kubernetes, we will see a long list of pre-built dashboards that we can choose from.
![](Images/Day83_Monitoring20.png)
We have chosen the Kubernetes API Server dashboard and changed the data source to suit our newly added Prometheus-1 data source and we get to see some of the metrics displayed below.
![](Images/Day83_Monitoring21.png)
### Alerting
You could also leverage the alertmanager that we deployed to then send alerts out to slack or other integrations, to do this you would need to port forward the alertmanager service using the below details.
`kubectl --namespace monitoring port-forward svc/alertmanager-main 9093`
`http://localhost:9093`
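As a hedged sketch of what that Slack integration could look like: in kube-prometheus the Alertmanager configuration is usually held in the `alertmanager-main` secret under the key `alertmanager.yaml`, so one approach is to render a config with a Slack receiver and replace that secret. The webhook URL, channel and secret handling below are assumptions for illustration, not steps from this walkthrough.

```
# Write a minimal Alertmanager config with a Slack receiver (placeholder webhook and channel),
# then replace the alertmanager-main secret that kube-prometheus reads its configuration from.
cat <<'EOF' > alertmanager.yaml
route:
  receiver: slack-notifications
receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ
        channel: '#alerts'
        send_resolved: true
EOF
kubectl -n monitoring create secret generic alertmanager-main \
  --from-file=alertmanager.yaml --dry-run=client -o yaml | kubectl apply -f -
```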
That wraps up our section on all things observability. I have personally found that this section has highlighted how broad this topic is, but equally how important it is for our roles; be it metrics, logging or tracing, you are going to need a good idea of what is happening in our environments moving forward, especially when they can change so dramatically with all the automation that we have already covered in the other sections.
Next up we are going to be taking a look into data management and how DevOps principles also need to be considered when it comes to Data Management.
## Resources
View File
@ -2,64 +2,64 @@
title: '#90DaysOfDevOps - The Big Picture: Data Management - Day 84'
published: false
description: 90DaysOfDevOps - The Big Picture Data Management
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048747
---
## The Big Picture: Data Management
![](Images/Day84_Data1.png)
Data Management is by no means a new wall to climb, although we do know that data is more important than it maybe was a few years ago. Valuable and ever-changing, it can also be a massive nightmare when we are talking about automation and continuously integrating, testing and deploying frequent software releases. Enter persistent data and the underlying data services, often the main culprit when things go wrong.
But before I get into Cloud-Native Data Management, we need to go up a level. We have touched on many different platforms throughout this challenge, be it Physical, Virtual, Cloud or Cloud-Native, obviously including Kubernetes, and none of these platforms removes the requirement for data management.
Whatever our business, it is more than likely you will find a database lurking in the environment somewhere, be it for the most mission-critical system in the business or at least some cog in the chain storing persistent data on some level of the system.
### DevOps and Data
Much like the very start of this series where we spoke about the DevOps principles, for a better process when it comes to data you have to include the right people. This might be the DBAs but equally, that is going to include people that care about the backup of those data services as well.
Secondly, we also need to identify the different data types, domains, and boundaries that we have associated with our data. This way it is not just dealt with in a silo approach amongst Database administrators, storage engineers or Backup focused engineers. This way the whole team can determine the best route of action when it comes to developing and hosting applications for the wider business and focus on the data architecture vs it being an afterthought.
Now, this can span many different areas of the data lifecycle, we could be talking about data ingest, where and how will data be ingested into our service or application? How will the service, application or users access this data? But then it also requires us to understand how we will secure the data and then how will we protect that data.
### Data Management 101
Data management according to the [Data Management Body of Knowledge](https://www.dama.org/cpages/body-of-knowledge) is “the development, execution and supervision of plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets.”
- Data is the most important aspect of your business - Data is only one part of your overall business. I have seen the term "Data is the lifeblood of our business" and it is most likely true. This then got me thinking: blood is pretty important to the body, but alone it is nothing; we still need the rest of the body to make the blood something other than a liquid.
- Data quality is more important than ever - We are having to treat data as a business asset, meaning that we have to give it the considerations it needs and requires to work with our automation and DevOps principles.
- Accessing data in a timely fashion - Nobody has the patience to not have access to the right data at the right time to make effective decisions. Data must be available in a streamlined and timely manner regardless of presentation.
- Data Management has to be an enabler to DevOps - I mentioned streamlining previously, we have to include the data management requirements into our cycle and ensure not just availability of that data but also include other important policy-based protection of those data points along with fully tested recovery models with that as well.
### DataOps
Both DataOps and DevOps apply the best practices of technology development and operations to improve quality, increase speed, reduce security threats, delight customers and provide meaningful and challenging work for skilled professionals. DevOps and DataOps share goals to accelerate product delivery by automating as many process steps as possible. For DataOps, the objective is a resilient data pipeline and trusted insights from data analytics.
Some of the most common higher-level areas that focus on DataOps are going to be Machine Learning, Big Data and Data Analytics including Artificial Intelligence.
### Data Management is the management of information
My focus throughout this section is not going to be getting into Machine Learning or Artificial Intelligence, but to focus on protecting the data from a data protection point of view; the title of this subsection is "Data management is the management of information" and we can relate that information = data.
Three key areas that we should consider along this journey with data are:
- Accuracy - Making sure that production data is accurate; equally, we need to ensure that our data in the form of backups is working and tested against recovery, so that if a failure or other issue comes up we are able to get back up and running as fast as possible.
- Consistent - If our data services span multiple locations then for production we need to make sure we have consistency across all data locations so that we are getting accurate data. This also spans into data protection: when it comes to protecting these data services, we need to ensure consistency at different levels to make sure we are taking a good clean copy of that data for our backups, replicas etc.
- Secure - Access control, but equally just keeping data safe in general, is a topical theme at the moment across the globe. Making sure the right people have access to your data is paramount; again this leads to data protection, where we must make sure that only the required personnel have access to backups and the ability to restore from those, as well as to clone and provide other versions of the business data.
Better Data = Better Decisions
### Data Management Days
During the next 6 sessions we are going to be taking a closer look at Databases, Backup & Recovery, Disaster Recovery, and Application Mobility all with an element of demo and hands-on throughout.
## Resources
@ -70,7 +70,3 @@ During the next 6 sessions we are going to be taking a closer look at Databases,
- [Veeam Portability & Cloud Mobility](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s)
See you on [Day 85](day85.md)
View File
@ -1,15 +1,16 @@
---
title: "#90DaysOfDevOps - Data Services - Day 85"
title: '#90DaysOfDevOps - Data Services - Day 85'
published: false
description: "90DaysOfDevOps - Data Services"
tags: "devops, 90daysofdevops, learning"
description: 90DaysOfDevOps - Data Services
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048781
---
## Data Services
Databases are going to be the most common data service that we come across in our environments. I wanted to take this session to explore some of those different types of Databases and some of the use cases they each have. Some we have used and seen throughout the challenge.
From an application development point of view choosing the right data service or database is going to be a huge decision when it comes to the performance and scalability of your application.
@ -21,13 +22,14 @@ A key-value database is a type of nonrelational database that uses a simple key-
An example of a Key-Value database is Redis.
_Redis is an in-memory data structure store, used as a distributed, in-memory key-value database, cache and message broker, with optional durability. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indices._
![](Images/Day85_Data1.png)
As you can see from the description of Redis, this means that our database is fast but we are limited on space as a trade-off. Also, there are no queries or joins, which means data modelling options are very limited; the short `redis-cli` sketch after the list below gives a feel for the model.
Best for:
- Caching
- Pub/Sub
- Leaderboards
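A minimal sketch with `redis-cli`, assuming a local Redis on the default port; the keys and values are made up, but it shows why the key-value model suits caching and leaderboards.

```
# Simple key-value operations, a TTL (the caching use case) and a sorted set (the leaderboard use case)
redis-cli SET user:42:name "Michael"
redis-cli GET user:42:name
redis-cli EXPIRE user:42:name 60
redis-cli ZADD leaderboard 100 "player1" 250 "player2"
redis-cli ZRANGE leaderboard 0 -1 WITHSCORES
```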
@ -39,13 +41,14 @@ Generally used as a cache above another persistent data layer.
A wide-column database is a NoSQL database that organises data storage into flexible columns that can be spread across multiple servers or database nodes, using multi-dimensional mapping to reference data by column, row, and timestamp.
_Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure._
![](Images/Day85_Data2.png)
There is no schema, which means it can handle unstructured data; this can be seen as a benefit for some workloads.
Best for:
- Time-Series
- Historical Records
- High-Write, Low-Read
@ -54,7 +57,7 @@ Best for:
A document database (also known as a document-oriented database or a document store) is a database that stores information in documents.
_MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License._
![](Images/Day85_Data3.png)
@ -68,11 +71,11 @@ Best for:
### Relational
If you are new to databases but know of them, my guess is that you have come across a relational database.
A relational database is a digital database based on the relational model of data, as proposed by E. F. Codd in 1970. A system used to maintain relational databases is a relational database management system. Many relational database systems have the option of using SQL for querying and maintaining the database.
_MySQL is an open-source relational database management system. Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language._
MySQL is one example of a relational database; there are lots of other options.
@ -81,6 +84,7 @@ MySQL is one example of a relational database there are lots of other options.
Whilst researching relational databases, the term or abbreviation **ACID** has been mentioned a lot. ACID (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction.
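Here is a small sketch of that bank-transfer example as a single transaction, assuming a MySQL client and a made-up `accounts` table; either both updates commit or neither does.

```
# Run the transfer as one ACID transaction (database, table and column names are illustrative;
# add your usual authentication flags to the mysql client call)
mysql bank <<'SQL'
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
SQL
```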
Best for:
- Most Applications (It has been around for years, doesn't mean it is the best)
It is not ideal for unstructured data, and when it comes to the ability to scale, some of the other NoSQL options mentioned scale better for certain workloads.
@ -89,7 +93,7 @@ It is not ideal for unstructured data or the ability to scale is where some of t
A graph database stores nodes and relationships instead of tables, or documents. Data is stored just like you might sketch ideas on a whiteboard. Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it.
_Neo4j is a graph database management system developed by Neo4j, Inc. Described by its developers as an ACID-compliant transactional database with native graph storage and processing_
Best for:
@ -99,11 +103,11 @@ Best for:
### Search Engine
In the last section, we used a Search Engine database in the way of Elasticsearch.
A search-engine database is a type of non-relational database that is dedicated to the search of data content. Search-engine databases use indexes to categorise similar characteristics among data and facilitate search capability.
_Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents._
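A quick sketch against a local Elasticsearch node, assuming it is reachable on `http://localhost:9200` without authentication: index a document, then run a full-text search over it.

```
# Index one JSON document into a "logs" index, then search it by a word in the message field
curl -X POST "http://localhost:9200/logs/_doc" -H 'Content-Type: application/json' \
  -d '{"level":"error","message":"disk full on node-1","timestamp":"2022-04-17T10:00:00Z"}'
curl -X GET "http://localhost:9200/logs/_search" -H 'Content-Type: application/json' \
  -d '{"query":{"match":{"message":"disk"}}}'
```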
Best for:
@ -113,13 +117,13 @@ Best for:
### Multi-model
A multi-model database is a database management system designed to support multiple data models against a single, integrated backend. In contrast, most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated. Document, graph, relational, and key-value models are examples of data models that may be supported by a multi-model database.
_Fauna is a flexible, developer-friendly, transactional database delivered as a secure and scalable cloud API with native GraphQL._
Best for:
- You are not stuck having to choose a single data model
- ACID Compliant
- Fast
- No provisioning overhead
@ -145,5 +149,4 @@ There are a ton of resources I have linked below, you could honestly spend 90 ye
- [FaunaDB Basics - The Database of your Dreams](https://www.youtube.com/watch?v=2CipVwISumA)
- [Fauna Crash Course - Covering the Basics](https://www.youtube.com/watch?v=ihaB7CqJju0)
See you on [Day 86](day86.md)
View File
@ -7,31 +7,32 @@ cover_image: null
canonical_url: null
id: 1049058
---
## Backup all the platforms
During this whole challenge, we have discussed many different platforms and environments. One thing all of those have in common is the fact they all need some level of data protection!
Data Protection has been around for many, many years, but the wealth of data that we have today, and the value that this data brings, means we have to make sure we are not only resilient to infrastructure failure by having multiple nodes and high availability across applications, but that we also have a copy of that important data in a safe and secure location if a failure scenario were to occur.
It seems we hear a lot these days about cybercrime and ransomware, and don't get me wrong, this is a massive threat and I stand by the fact that you will be attacked by ransomware. It is not a matter of if, it is a matter of when. So even more reason to make sure you have your data secure for when that time arises. However, the most common cause of data loss is not ransomware or cybercrime, it is simply accidental deletion!
We have all done it, deleted something we shouldn't have and had that instant regret.
With all of the technology and automation we have discussed during the challenge, the requirement to protect any stateful data or even complex stateless configuration is still there, regardless of the platform.
![](Images/Day86_Data1.png)
But we should be able to perform that protection of the data with automation in mind and be able to integrate it into our workflows.
If we look at what backup is:
_In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup"._
If we break this down to the simplest form, a backup is a copy and paste of data to a new location. Simply put I could take a backup right now by copying a file from my C: drive to my D: drive and I would then have a copy in case something happened to the C: drive or something was edited wrongly within the files. I could revert to the copy I have on the D: drive. Now if my computer dies where both the C & D drives live then I am not protected so I have to consider a solution or a copy of data outside of my system maybe onto a NAS drive in my house? But then what happens if something happens to my house, maybe I need to consider storing it on another system in another location, maybe the cloud is an option. Maybe I could store a copy of my important files in several locations to mitigate the risk of failure?
### 3-2-1 Backup Methodology
Now seems a good time to talk about the 3-2-1 rule or backup methodology. I did a [lightning talk](https://www.youtube.com/watch?v=5wRt1bJfKBw) covering this topic.
We have already mentioned before some of the extreme ends of why we need to protect our data but a few more are listed below:
@ -45,13 +46,13 @@ We then want to make sure we also send a copy of our data external or offsite th
### Backup Responsibility
We have most likely heard all of the myths when it comes to not having to backup, things like "Everything is stateless" I mean if everything is stateless then what is the business? no databases? word documents? There is a level of responsibility on every individual within the business to ensure they are protected but it is going to come down most likely to the operations teams to provide the backup process for the mission-critical applications and data.
Another good one is that "High availability is my backup, we have built in multiple nodes into our cluster there is no way this is going down!" apart from when you make a mistake to the database and this is replicated over all the nodes in the cluster, or there is fire, flood or blood scenario that means the cluster is no longer available and with it the important data. It's not about being stubborn it is about being aware of the data and the services, absolutely everyone should factor in high availability and fault tollerance into their architecture but that does not substitute the need for backup!
Another good one is that "High availability is my backup, we have built in multiple nodes into our cluster there is no way this is going down!" apart from when you make a mistake to the database and this is replicated over all the nodes in the cluster, or there is fire, flood or blood scenario that means the cluster is no longer available and with it the important data. It's not about being stubborn it is about being aware of the data and the services, absolutely everyone should factor in high availability and fault tolerance into their architecture but that does not substitute the need for backup!
Replication can also seem to give us the offsite copy of the data and maybe that cluster mentioned above does live across multiple locations, however, the first accidental mistake would still be replicated there. But again a Backup requirement should stand alongside application replication or system replication within the environment.
Now with all this said you can go to the extreme on the other end as well and send copies of data to too many locations which is going to not only cost but also increase the risk of being attacked as your surface area is now massively expanded.
Anyway, who looks after backup? It will be different within each business but someone should be taking it upon themselves to understand the backup requirements. But also understand the recovery plan!
@ -59,19 +60,19 @@ Anyway, who looks after backup? It will be different within each business but so
Backup is a prime example: nobody cares about backup until you need to restore something. Alongside the requirement to back our data up, we also need to consider how we restore!
With our text document example, we are talking about very small files so the ability to copy back and forth is easy and fast. But if we are talking about 100GB plus files then this is going to take time. Also, we have to consider the level at which we need to recover if we take a virtual machine for example.
We have the whole Virtual Machine, we have the Operating System, Application installation and then if this is a database server we will have some database files as well. If we have made a mistake and inserted the wrong line of code into our database I probably don't need to restore the whole virtual machine, I want to be granular on what I recover back.
### Backup Scenario
I want to now start building on a scenario to protect some data, specifically, I want to protect some files on my local machine (in this case Windows but the tool I am going to use is not only free and open-source but also cross-platform) I would like to make sure they are protected to a NAS device I have locally in my home but also into an Object Storage bucket in the cloud.
I want to back up this important data, it just so happens to be the repository for the 90DaysOfDevOps, which yes is also being sent to GitHub which is probably where you are reading this now but what if my machine was to die and GitHub was down? How would anyone be able to read the content but also how would I potentially be able to restore that data to another service?
![](Images/Day86_Data5.png)
There are lots of tools that can help us achieve this but I am going to be using a tool called [Kopia](https://kopia.io/) an Open-Source backup tool which will enable us to encrypt, dedupe and compress our backups whilst being able to send them to many locations.
You will find the releases to download [here](https://github.com/kopia/kopia/releases); at the time of writing I will be using v0.10.6.
@ -81,25 +82,25 @@ There is a Kopia CLI and GUI, we will be using the GUI but know that you can hav
I will be using `KopiaUI-Setup-0.10.6.exe`
It is a really quick next-next installation, and when you open the application you are greeted with the choice of selecting the storage type that you wish to use as your backup repository.
![](Images/Day86_Data6.png)
### Setting up a Repository
Firstly we would like to set up a repository using our local NAS device and we are going to do this using SMB, but we could also use NFS I believe.
![](Images/Day86_Data7.png)
On the next screen, we are going to define a password, this password is used to encrypt the repository contents.
![](Images/Day86_Data8.png)
Now that we have the repository configured we can trigger an ad-hoc snapshot to start writing data to it.
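If you prefer the CLI over the KopiaUI flow described here, the same steps look roughly like the sketch below; the repository path and source folder are illustrative, and the CLI will prompt for the repository password.

```
# Create a filesystem repository (for example on a mounted NAS share), snapshot a folder, list snapshots
kopia repository create filesystem --path /mnt/nas/kopia-repo
kopia snapshot create ~/demo/90DaysOfDevOps
kopia snapshot list
```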
![](Images/Day86_Data9.png)
First up we need to enter a path to what we want to snapshot and in our case we want to take a copy of our `90DaysOfDevOps` folder. We will get back to the scheduling aspect shortly.
![](Images/Day86_Data10.png)
@ -111,11 +112,11 @@ Maybe there are files or file types that we wish to exclude.
![](Images/Day86_Data12.png)
If we wanted to define a schedule we could do this on this next screen, when you first create this snapshot this is the opening page to define.
![](Images/Day86_Data13.png)
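For reference, the same sort of schedule and retention can also be set from the CLI with Kopia's policy commands (a sketch; the interval and retention values here are just examples, not the UI defaults):

```Shell
# Example only - snapshot the folder every 4 hours and keep the last 7 daily snapshots
kopia policy set "C:\Users\micha\demo\90DaysOfDevOps" --snapshot-interval 4h --keep-daily 7
```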
And you will see a number of other settings that can be handled here.
And you will see several other settings that can be handled here.
![](Images/Day86_Data14.png)
@ -125,9 +126,9 @@ Select snapshot now and the data will be written to your repository.
### Offsite backup to S3
With Kopia we can through the UI it seems only have one repository configured at a time. But through the UI we can be creative and basically have multiple repository configuration files to choose from to achieve our goal of having a copy local and offsite in Object Storage.
With Kopia it seems we can only have one repository configured at a time through the UI. But we can be creative and have multiple repository configuration files to choose from to achieve our goal of having a copy local and offsite in Object Storage.
The Object Storage I am choosing to send my data to is going to Google Cloud Storage. I firstly logged into my Google Cloud Platform account and created myself a storage bucket. I already had the Google Cloud SDK installed on my system but running the `gcloud auth application-default login` authenticated me with my account.
The Object Storage I am choosing to send my data to is going to be Google Cloud Storage. I first logged into my Google Cloud Platform account and created a storage bucket. I already had the Google Cloud SDK installed on my system, and running `gcloud auth application-default login` authenticated me with my account.
![](Images/Day86_Data16.png)
@ -135,7 +136,7 @@ I then used the CLI of Kopia to show me the current status of my repository afte
![](Images/Day86_Data17.png)
We are now ready to replace for the purpose of the demo the configuration for the repository, what we would probably do if we wanted a long term solution to hit both of these repositories is we would create an `smb.config` file and a `object.config` file and be able to run both of these commands to send our copies of data to each location. To add our repository we ran `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository create gcs --bucket 90daysofdevops`
For the demo, we are now ready to replace the repository configuration. If we wanted a long-term solution that hits both of these repositories, we would probably create an `smb.config` file and an `object.config` file and run both sets of commands to send copies of our data to each location. To add our repository we ran `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository create gcs --bucket 90daysofdevops`
The above command is taking into account that the Google Cloud Storage bucket we created is called `90daysofdevops`
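As a sketch of that longer-term approach (assuming we keep one config file per repository, named as suggested above), the two snapshot commands could look like this:

```Shell
# Sketch only - one config file per repository so both destinations can be used side by side
"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\smb.config snapshot create "C:\Users\micha\demo\90DaysOfDevOps"
"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\object.config snapshot create "C:\Users\micha\demo\90DaysOfDevOps"
```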
@ -145,15 +146,15 @@ Now that we have created our new repository we can then run the `"C:\Program Fil
![](Images/Day86_Data19.png)
Next thing we need to do is create a snapshot and send that to our newly created repository. Using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config kopia snapshot create "C:\Users\micha\demo\90DaysOfDevOps"` command we can kick off this process. You can see in the below browser that our Google Cloud Storage bucket now has kopia files based on our backup in place.
The next thing we need to do is create a snapshot and send that to our newly created repository. Using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config kopia snapshot create "C:\Users\micha\demo\90DaysOfDevOps"` command we can kick off this process. You can see in the below browser that our Google Cloud Storage bucket now has kopia files based on our backup in place.
![](Images/Day86_Data20.png)
With the above process we are able to settle our requirement of sending our important data to 2 different locations, 1 of which is offsite in Google Cloud Storage and of course we still have our production copy of our data on a different media type.
With the above process we can settle our requirement of sending our important data to two different locations, one of which is offsite in Google Cloud Storage, and of course we still have our production copy of our data on a different media type.
### Restore
Restore is another consideration and is very important, Kopia gives us the capability to not only restore to the existing location but also to a new location.
Restore is another important consideration; Kopia gives us the capability to restore not only to the existing location but also to a new location.
If we run the command `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config snapshot list` this will list the snapshots that we have currently in our configured repository (GCS)
@ -165,9 +166,9 @@ We can then mount those snapshots directly from GCS using the `"C:\Program Files
We could also restore the snapshot contents using `kopia snapshot restore kdbd9dff738996cfe7bcf99b45314e193`
Obviously the commands above are very long and this is because I was using the KopiaUI version of the kopia.exe as explained at the top of the walkthrough you can download the kopia.exe and put into a path so you can just use the `kopia` command.
The commands above are very long because I was using the KopiaUI version of kopia.exe. As explained at the top of the walkthrough, you can download kopia.exe and put it into your path so you can just use the `kopia` command.
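For example, in PowerShell you could add the KopiaUI folder to the path for the current session (assuming the default install location) and then use the short command:

```Shell
# PowerShell, current session only - assumes the default KopiaUI install path
$env:Path += ";C:\Program Files\KopiaUI\resources\server"
kopia snapshot list
```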
In the next session we will be focusing in on protecting workloads within Kubernetes.
In the next session, we will be focusing on protecting workloads within Kubernetes.
## Resources

View File

@ -1,15 +1,16 @@
---
title: "#90DaysOfDevOps - Hands-On Backup & Recovery - Day 87"
title: '#90DaysOfDevOps - Hands-On Backup & Recovery - Day 87'
published: false
description: "90DaysOfDevOps - Hands-On Backup & Recovery"
tags: "devops, 90daysofdevops, learning"
description: 90DaysOfDevOps - Hands-On Backup & Recovery
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048717
---
## Hands-On Backup & Recovery
In the last session we touched on [Kopia](https://kopia.io/) an Open-Source backup tool that we used to get some important data off to a local NAS and off to some cloud based object storage.
In the last session, we touched on [Kopia](https://kopia.io/), an Open-Source backup tool that we used to get some important data off to a local NAS and off to some cloud-based object storage.
In this section, I want to get into the world of Kubernetes backup. It is a platform we covered in [The Big Picture: Kubernetes](Days/day49.md) earlier in the challenge.
@ -17,18 +18,18 @@ We will again be using our minikube cluster but this time we are going to take a
### Kubernetes cluster setup
To set up our minikube cluster we will be issuing the `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p 90daysofdevops --kubernetes-version=1.21.2` you will notice that we are using the `volumesnapshots` and `csi-hostpath-driver` as we will take full use of these for when we are taking our backups.
To set up our minikube cluster we will be issuing the `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p 90daysofdevops --kubernetes-version=1.21.2` command. You will notice that we are using the `volumesnapshots` and `csi-hostpath-driver` addons, as we will make full use of these when we are taking our backups.
At this point I know we have not deployed Kasten K10 yet but we want to issue the following command when your cluster is up, but we want to annotate the volumesnapshotclass so that Kasten K10 can use this.
At this point we have not deployed Kasten K10 yet, but when your cluster is up we want to issue the following command to annotate the volumesnapshotclass so that Kasten K10 can use it.
```
```Shell
kubectl annotate volumesnapshotclass csi-hostpath-snapclass \
k10.kasten.io/is-snapshot-class=true
```
We are also going to change the default storageclass from the standard storageclass to the csi-hostpath storageclass using the following.
```
```Shell
kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
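# Optional check (assumption: run after the two patches above) - csi-hostpath-sc should now show "(default)"
kubectl get storageclass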
@ -42,7 +43,7 @@ Add the Kasten Helm repository
`helm repo add kasten https://charts.kasten.io/`
We could use `arkade kasten install k10` here as well but for the purpose of the demo we will run through the following steps. [More Details](https://blog.kasten.io/kasten-k10-goes-to-the-arkade)
We could use `arkade kasten install k10` here as well but for the demo, we will run through the following steps. [More Details](https://blog.kasten.io/kasten-k10-goes-to-the-arkade)
Create the namespace and deploy K10, note that this will take around 5 mins
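For reference, the commands used here look like the below (a sketch based on the standard Kasten Helm chart; check the Kasten documentation for current options):

```Shell
# Create the namespace and install the chart from the repository added above
kubectl create namespace kasten-io
helm install k10 kasten/k10 --namespace=kasten-io
```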
@ -60,13 +61,13 @@ Port forward to access the K10 dashboard, open a new terminal to run the below c
`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`
The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
The Kasten dashboard will be available at `http://127.0.0.1:8080/k10/#/`
![](Images/Day87_Data4.png)
To authenticate with the dashboard we now need the token which we can get with the following commands.
```
```Shell
TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1)
TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode)
@ -90,7 +91,7 @@ Use the stateful application that we used in the Kubernetes section.
![](Images/Day55_Kubernetes1.png)
You can find the YAML configuration file for this application here[pacman-stateful-demo.yaml](Days/Kubernetes/pacman-stateful-demo.yaml)
You can find the YAML configuration file for this application here: [pacman-stateful-demo.yaml](Kubernetes/pacman-stateful-demo.yaml)
![](Images/Day87_Data8.png)
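A minimal way to deploy it from the CLI could look like the below (a sketch; it assumes the manifest is applied into a dedicated `pacman` namespace, which is the namespace used later in the restore steps):

```Shell
# Assumption: the manifests either define or expect a dedicated pacman namespace
kubectl create namespace pacman
kubectl apply -f pacman-stateful-demo.yaml -n pacman
kubectl get pods -n pacman -w
```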
@ -110,9 +111,9 @@ Take the time to clock up some high scores in the backend MongoDB database.
### Protect our High Scores
Now we have some mission critical data in our database and we do not want to lose it. We can use Kasten K10 to protect this whole application.
Now we have some mission-critical data in our database and we do not want to lose it. We can use Kasten K10 to protect this whole application.
If we head back into the Kasten K10 dashboard tab you will see that our number of application has now increased from 1 to 2 with the addition of our pacman application to our Kubernetes cluster.
If we head back into the Kasten K10 dashboard tab you will see that our number of applications has now increased from 1 to 2 with the addition of our Pacman application to our Kubernetes cluster.
![](Images/Day87_Data12.png)
@ -120,37 +121,37 @@ If you click on the Applications card you will see the automatically discovered
![](Images/Day87_Data13.png)
With Kasten K10 we have the ability to leverage storage based snapshots as well export our copies out to object storage options.
With Kasten K10 we can leverage storage-based snapshots as well as export our copies out to object storage options.
For the purpose of the demo, we will create a manual storage snapshot in our cluster and then we can add some rogue data to our high scores to simulate an accidental mistake being made or is it?
For the demo, we will create a manual storage snapshot in our cluster and then we can add some rogue data to our high scores to simulate an accidental mistake being made or is it?
Firstly we can use the manual snapshot option below.
![](Images/Day87_Data14.png)
For the demo I am going to leave everything as the default
For the demo, I am going to leave everything as the default
![](Images/Day87_Data15.png)
Back on the dashboard you get a status report on the job as it is running and then when complete it should look as successful as this one.
Back on the dashboard, you get a status report on the job as it is running and then when complete it should look as successful as this one.
![](Images/Day87_Data16.png)
### Failure Scenario
We can now make that fatal change to our mission critical data by simply adding in a prescriptive bad change to our application.
We can now make that fatal change to our mission-critical data by simply adding in a prescriptive bad change to our application.
As you can see below we have two inputs that we probably dont want in our production mission critical database.
As you can see below we have two inputs that we probably don't want in our production mission-critical database.
![](Images/Day87_Data17.png)
### Restore the data
Obviously this is a simple demo and in a way not realistic although have you seen how easy it is to drop databases?
This is a simple demo and in a way not realistic, although have you seen how easy it is to drop databases?
Now we want to get that high score list looking a little cleaner and back to how we had it before the mistakes were made.
Back in the Applications card and on the pacman tab we now have 1 restore point we can use to restore from.
Back in the Applications card and on the Pacman tab, we now have 1 restore point we can use to restore from.
![](Images/Day87_Data18.png)
@ -162,7 +163,7 @@ Select that restore and a side window will appear, we will keep the default sett
![](Images/Day87_Data20.png)
Confirm that you really want to make this happen.
Confirm that you want to make this happen.
![](Images/Day87_Data21.png)
@ -170,13 +171,13 @@ You can then go back to the dashboard and see the progress of the restore. You s
![](Images/Day87_Data22.png)
But more importantly how is our High-Score list looking in our mission critical application. You will have to start the port forward again to pacman as we previously covered.
But more importantly, how is our High-Score list looking in our mission-critical application? You will have to start the port forward to Pacman again as we previously covered.
![](Images/Day87_Data23.png)
A super simple demo and only really touching the surface of what Kasten K10 can really achieve when it comes to backup. I will be creating some more in depth video content on some of these areas in the future. We will also be using Kasten K10 to highlight some of the other prominent areas around Data Management when it comes to Disaster Recovery and the mobility of your data.
A super simple demo and only really touching the surface of what Kasten K10 can achieve when it comes to backup. I will be creating some more in-depth video content on some of these areas in the future. We will also be using Kasten K10 to highlight some of the other prominent areas around Data Management when it comes to Disaster Recovery and the mobility of your data.
Next we will take a look at Application consistency.
Next, we will take a look at Application consistency.
## Resources

View File

@ -2,18 +2,19 @@
title: '#90DaysOfDevOps - Application Focused Backup - Day 88'
published: false
description: 90DaysOfDevOps - Application Focused Backups
tags: "devops, 90daysofdevops, learning"
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048749
---
## Application Focused Backups
We have already spent some time talking about data services or data intensive applications such as databases on [Day 85](day85.md). For these data services we have to consider how we manage consistency, especially when it comes application consistency.
## Application-Focused Backups
In this post we are going to dive into that requirement around protecting the application data in a consistent manner.
We have already spent some time talking about data services or data-intensive applications such as databases on [Day 85](day85.md). For these data services, we have to consider how we manage consistency, especially when it comes to application consistency.
In order to do this our tool of choice will be [Kanister](https://kanister.io/)
In this post, we are going to dive into that requirement around consistently protecting the application data.
To do this our tool of choice will be [Kanister](https://kanister.io/)
![](Images/Day88_Data1.png)
@ -29,7 +30,7 @@ Kanister uses Kubernetes custom resources, the main custom resources that are in
### Execution Walkthrough
Before we get hands on we should take a look at the workflow that Kanister takes in protecting application data. Firstly our controller is deployed using helm into our Kubernetes cluster, Kanister lives within its own namespace. We take our Blueprint of which there are many community supported blueprints available, we will cover this in more detail shortly. We then have our database workload.
Before we get hands-on we should take a look at the workflow that Kanister uses to protect application data. Firstly, our controller is deployed into our Kubernetes cluster using helm; Kanister lives within its own namespace. We take our Blueprint, of which there are many community-supported blueprints available (we will cover this in more detail shortly). We then have our database workload.
![](Images/Day88_Data2.png)
@ -41,7 +42,7 @@ The ActionSet allows us to run the actions defined in the blueprint against the
![](Images/Day88_Data4.png)
The ActionSet in turns uses the Kanister functions (KubeExec, KubeTask, Resource Lifecycle) and pushes our backup to our target repository (Profile).
The ActionSet in turn uses the Kanister functions (KubeExec, KubeTask, Resource Lifecycle) and pushes our backup to our target repository (Profile).
![](Images/Day88_Data5.png)
@ -53,32 +54,33 @@ If that action is completed/failed the respective status is updated in the Actio
Once again we will be using the minikube cluster to achieve this application backup. If you have it still running from the previous session then we can continue to use this.
At the time of writing we are up to image version `0.75.0` with the following helm command we will install kanister into our Kubernetes cluster.
At the time of writing, we are up to image version `0.75.0`. With the following helm command, we will install Kanister into our Kubernetes cluster.
`helm install kanister --namespace kanister kanister/kanister-operator --set image.tag=0.75.0 --create-namespace`
![](Images/Day88_Data7.png)
We can use `kubectl get pods -n kanister` to ensure the pod is up and runnnig and then we can also check our custom resource definitions are now available (If you have only installed Kanister then you will see the highlighted 3)
We can use `kubectl get pods -n kanister` to ensure the pod is up and running and then we can also check our custom resource definitions are now available (If you have only installed Kanister then you will see the highlighted 3)
![](Images/Day88_Data8.png)
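A quick way to check for those custom resource definitions from the CLI (assuming the helm install above has finished):

```Shell
# The Kanister CRDs (actionsets, blueprints, profiles) should be listed
kubectl get crds | grep kanister.io
```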
### Deploy a Database
Deploying mysql via helm:
Deploying MySQL via helm:
```
```Shell
APP_NAME=my-production-app
kubectl create ns ${APP_NAME}
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install mysql-store bitnami/mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace=${APP_NAME}
kubectl get pods -n ${APP_NAME} -w
```
![](Images/Day88_Data9.png)
Populate the mysql database with initial data, run the following:
Populate the MySQL database with initial data by running the following:
```
```Shell
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace ${APP_NAME} mysql-store -o jsonpath="{.data.mysql-root-password}" | base64 --decode)
MYSQL_HOST=mysql-store.${APP_NAME}.svc.cluster.local
MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t"
@ -86,21 +88,23 @@ echo MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
```
### Create a MySQL CLIENT
We will run another container image to act as our client:
```
```Shell
APP_NAME=my-production-app
kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash
```
```
Note: if you already have an existing mysql client pod running, delete with the command
```Shell
# Note: if you already have an existing MySQL client pod running, delete it with the command below
kubectl delete pod -n ${APP_NAME} mysql-client
```
### Add Data to MySQL
```
```Shell
echo "create database myImportantData;" | mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD}
MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t"
echo "drop table Accounts" | ${MYSQL_EXEC}
@ -116,18 +120,18 @@ echo "insert into Accounts values('rastapopoulos', 377);" | ${MYSQL_EXEC}
echo "select * from Accounts;" | ${MYSQL_EXEC}
exit
```
You should be able to see some data as per below.
![](Images/Day88_Data10.png)
### Create Kanister Profile
Kanister provides a CLI, `kanctl` and another utility `kando` that is used to interact with your object storage provider from blueprint and both of these utilities.
Kanister provides a CLI, `kanctl`, and another utility, `kando`, which is used to interact with your object storage provider from a blueprint; both of these utilities can be downloaded at the link below.
[CLI Download](https://docs.kanister.io/tooling.html#tooling)
I have gone and I have created an AWS S3 Bucket that we will use as our profile target and restore location. I am going to be using environment variables so that I am able to still show you the commands I am running with `kanctl` to create our kanister profile.
I have gone and created an AWS S3 Bucket that we will use as our profile target and restore location. I am going to be using environment variables so that I can still show you the commands I am running with `kanctl` to create our kanister profile.
`kanctl create profile s3compliant --access-key $ACCESS_KEY --secret-key $SECRET_KEY --bucket $BUCKET --region eu-west-2 --namespace my-production-app`
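The environment variables referenced in that command would be set beforehand with something like the below (the values are hypothetical placeholders, not real credentials):

```Shell
# Hypothetical placeholders - replace with your own AWS credentials and bucket name
export ACCESS_KEY=<your-aws-access-key-id>
export SECRET_KEY=<your-aws-secret-access-key>
export BUCKET=<your-s3-bucket-name>
```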
@ -135,12 +139,11 @@ I have gone and I have created an AWS S3 Bucket that we will use as our profile
### Blueprint time
Don't worry you don't need to create your own one from scratch unless your data service is not listed here in the [Kanister Examples](https://github.com/kanisterio/kanister/tree/master/examples) but by all means community contributions are how this project gains awareness.
Don't worry, you don't need to create your own from scratch unless your data service is not listed in the [Kanister Examples](https://github.com/kanisterio/kanister/tree/master/examples), but by all means, community contributions are how this project gains awareness.
The blueprint we will be using is the one below.
```
```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
@ -240,7 +243,7 @@ You can see from the command above we are defining the blueprint we added to the
Check the status of the ActionSet by taking the ActionSet name and using this command `kubectl --namespace kanister describe actionset backup-qpnqv`
Finally we can go and confirm that we now have data in our AWS S3 bucket.
Finally, we can go and confirm that we now have data in our AWS S3 bucket.
![](Images/Day88_Data14.png)
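If you have the AWS CLI configured with the same credentials, a quick way to confirm this from the terminal (an assumption; the S3 console works just as well) is:

```Shell
# List the objects Kanister has written to the bucket defined in $BUCKET earlier
aws s3 ls s3://${BUCKET} --recursive
```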
@ -250,12 +253,12 @@ We need to cause some damage before we can restore anything, we can do this by d
Connect to our MySQL pod.
```
```Shell
APP_NAME=my-production-app
kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash
```
You can see that our importantdata db is there with `echo "SHOW DATABASES;" | ${MYSQL_EXEC}`
You can see that our `myImportantData` database is there with `echo "SHOW DATABASES;" | ${MYSQL_EXEC}`
Then to drop we ran `echo "DROP DATABASE myImportantData;" | ${MYSQL_EXEC}`
@ -263,21 +266,22 @@ And confirmed that this was gone with a few attempts to show our database.
![](Images/Day88_Data15.png)
We can now use Kanister to get our important data back in business using the `kubectl get actionset -n kanister` to find out our ActionSet name that we took earlier. Then we will create a restore ActionSet to restore our data using `kanctl create actionset -n kanister --action restore --from "backup-qpnqv"`
We can now use Kanister to get our important data back in business. Use `kubectl get actionset -n kanister` to find the name of the backup ActionSet that we took earlier, then create a restore ActionSet to restore our data using `kanctl create actionset -n kanister --action restore --from "backup-qpnqv"`
![](Images/Day88_Data16.png)
We can confirm our data is back by using the below command to connect to our database.
```
```Shell
APP_NAME=my-production-app
kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash
```
Now that we are inside the MySQL client, we can issue `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` and see that the database is back. We can also issue `echo "select * from Accounts;" | ${MYSQL_EXEC}` to check the contents of the database and confirm our important data is restored.
![](Images/Day88_Data17.png)
In the next post we take a look at Disaster Recovery within Kubernetes.
In the next post, we take a look at Disaster Recovery within Kubernetes.
## Resources

View File

@ -7,6 +7,7 @@ cover_image: null
canonical_url: null
id: 1048718
---
## Disaster Recovery
We have mentioned already how different failure scenarios will warrant different recovery requirements. When it comes to Fire, Flood and Blood scenarios we can consider these mostly disaster situations where we might need our workloads up and running in a completely different location as fast as possible or at least with near-zero recovery time objectives (RTO).
@ -15,11 +16,11 @@ This can only be achieved at scale when you automate the replication of the comp
This allows for fast failovers across cloud regions, cloud providers or between on-premises and cloud infrastructure.
Keeping with the theme so far, we are going to concentrate on how this can be achieved using Kasten K10 using our minikube cluster that we deployed and configured a few sessions ago.
Keeping with the theme so far, we are going to concentrate on how this can be achieved using Kasten K10 using the minikube cluster that we deployed and configured a few sessions ago.
We will then create another minikube cluster with Kasten K10 also installed to act as our standby cluster which in theory could be any location.
Kasten K10 also has built in functionality to ensure if something was to happen to the Kubernetes cluster it is running on that the catalog data is replicated and available in a new one [K10 Disaster Recovery](https://docs.kasten.io/latest/operating/dr.html).
Kasten K10 also has built-in functionality to ensure if something was to happen to the Kubernetes cluster it is running on that the catalogue data is replicated and available in a new one [K10 Disaster Recovery](https://docs.kasten.io/latest/operating/dr.html).
### Add object storage to K10
@ -33,13 +34,13 @@ Port forward to access the K10 dashboard, open a new terminal to run the below c
`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`
The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
The Kasten dashboard will be available at `http://127.0.0.1:8080/k10/#/`
![](Images/Day87_Data4.png)
To authenticate with the dashboard, we now need the token which we can get with the following commands.
```
```Shell
TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1)
TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode)
@ -61,11 +62,11 @@ Now that we are back in the Kasten K10 dashboard we can add our location profile
![](Images/Day89_Data2.png)
You can see from the image below that we have choice when it comes to where this location profile is, we are going to select Amazon S3, and we are going to add our sensitive access credentials, region and bucket name.
You can see from the image below that we have a choice when it comes to where this location profile is; we are going to select Amazon S3 and add our sensitive access credentials, region and bucket name.
![](Images/Day89_Data3.png)
If we scroll down on the New Profile creation window you will see, we also have the ability to enable immutable backups which leverages the S3 Object Lock API. For this demo we won't be using that.
If we scroll down on the New Profile creation window, you will see that we can also enable immutable backups, which leverage the S3 Object Lock API. For this demo, we won't be using that.
![](Images/Day89_Data4.png)
@ -73,9 +74,9 @@ Hit "Save Profile" and you can now see our newly created or added location profi
![](Images/Day89_Data5.png)
### Create a policy to protect Pac-Man app to object storage
### Create a policy to protect the Pac-Man app to object storage
In the previous session we created only an ad-hoc snapshot of our Pac-Man application, therefore we need to create a backup policy that will send our application backups to our newly created object storage location.
In the previous session, we created only an ad-hoc snapshot of our Pac-Man application, therefore we need to create a backup policy that will send our application backups to our newly created object storage location.
If you head back to the dashboard and select the Policy card you will see a screen as per below. Select "Create New Policy".
@ -97,7 +98,7 @@ Under Advanced settings we are not going to be using any of these but based on o
![](Images/Day89_Data10.png)
Finally select "Create Policy" and you will now see the policy in our Policy window.
Finally, select "Create Policy" and you will now see the policy in our Policy window.
![](Images/Day89_Data11.png)
@ -105,7 +106,7 @@ At the bottom of the created policy, you will have "Show import details" we need
![](Images/Day89_Data12.png)
Before we move on, we just need to select "run once" to get a backup sent our object storage bucket.
Before we move on, we just need to select "run once" to get a backup sent to our object storage bucket.
![](Images/Day89_Data13.png)
@ -113,10 +114,9 @@ Below, the screenshot is just to show the successful backup and export of our da
![](Images/Day89_Data14.png)
### Create a new MiniKube cluster & deploy K10
We then need to deploy a second Kubernetes cluster and where this could be any supported version of Kubernetes including OpenShift, for the purpose of education we will use the very free version of MiniKube with a different name.
We then need to deploy a second Kubernetes cluster; this could be any supported version of Kubernetes, including OpenShift, but for education we will use the very free MiniKube with a different profile name.
Using `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p standby --kubernetes-version=1.21.2` we can create our new cluster.
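Once the standby cluster is up, you can confirm both profiles exist and switch your kubectl context over to it (minikube creates a context named after the profile):

```Shell
# Confirm both minikube profiles and point kubectl at the new standby cluster
minikube profile list
kubectl config use-context standby
```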
@ -128,7 +128,7 @@ We then can deploy Kasten K10 in this cluster using:
This will take a while but in the meantime, we can use `kubectl get pods -n kasten-io -w` to watch the progress of our pods getting to the running status.
It is worth noting that because we are using MiniKube our application will just run when we run our import policy, our storageclass is the same on this standby cluster. However, something we will cover in the final session is about mobility and transformation.
It is worth noting that because we are using MiniKube our application will just run when we run our import policy, our storageclass is the same on this standby cluster. However, something we will cover in the final session is mobility and transformation.
When the pods are up and running, we can follow the same steps we went through in the other cluster.
@ -136,13 +136,13 @@ Port forward to access the K10 dashboard, open a new terminal to run the below c
`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`
The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
The Kasten dashboard will be available at `http://127.0.0.1:8080/k10/#/`
![](Images/Day87_Data4.png)
To authenticate with the dashboard, we now need the token which we can get with the following commands.
```
```Shell
TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1)
TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode)
@ -162,7 +162,7 @@ Then we get access to the Kasten K10 dashboard.
### Import Pac-Man into the new MiniKube cluster
At this point we are now able to create an import policy in that standby cluster and connect to the object storage backups and determine what and how we want this to look.
At this point, we are now able to create an import policy in that standby cluster and connect to the object storage backups and determine what and how we want this to look.
First, we add in our Location Profile that we walked through earlier on the other cluster, showing off dark mode here to show the difference between our production system and our DR standby location.
@ -172,11 +172,11 @@ Now we go back to the dashboard and into the policies tab to create a new policy
![](Images/Day89_Data17.png)
Create the import policy as per the below image. When complete, we can create policy. There are options here to restore after import and some people might want this option, this will go and restore into our standby cluster on completion. We also have the ability to change the configuration of the application as it is restored and this is what I have documented in [Day 90](day90.md).
Create the import policy as per the below image. When complete, we can create the policy. There is an option here to restore after import, which some people might want; this will restore into our standby cluster on completion. We can also change the configuration of the application as it is restored, and this is what I have documented in [Day 90](day90.md).
![](Images/Day89_Data18.png)
I selected to import on demand, but you can obviously set a schedule on when you want this import to happen. Because of this I am going to run once.
I selected to import on demand, but you can set a schedule for when you want this import to happen. Because of this, I am going to run it once.
![](Images/Day89_Data19.png)
@ -184,7 +184,7 @@ You can see below the successful import policy job.
![](Images/Day89_Data20.png)
If we now head back to the dashboard and into the Applications card, we can then select the drop down where you see below "Removed" you will see our application here. Select Restore
If we now head back to the dashboard and into the Applications card, we can then select the drop-down where you see "Removed" below, and you will see our application there. Select Restore.
![](Images/Day89_Data21.png)
@ -204,7 +204,7 @@ We can see below that we are in the standby cluster and if we check on our pods,
![](Images/Day89_Data25.png)
We can then port forward (in real life/production environments, you would not need this step to access the application, you would be using ingress)
We can then port forward (in real-life/production environments, you would not need this step to access the application, you would be using ingress)
![](Images/Day89_Data26.png)

View File

@ -7,27 +7,28 @@ cover_image: null
canonical_url: null
id: 1048748
---
## Data & Application Mobility
Day 90 of the #90DaysOfDevOps Challenge! In this final session I am going to cover mobility of our data and applications. I am specifically going to focus on Kubernetes but the requirement across platforms and between platforms is something that is an ever-growing requirement and is seen in the field.
Day 90 of the #90DaysOfDevOps Challenge! In this final session, I am going to cover the mobility of our data and applications. I am specifically going to focus on Kubernetes but the requirement across platforms and between platforms is something that is an ever-growing requirement and is seen in the field.
The use case is "I want to move my workload, application and data from one location to another", and there are many different reasons for this: it could be cost, risk, or to provide the business with a better service.
In this session we are going to take our workload and we are going to look at moving a Kubernetes workload from one cluster to another, but in doing so we are going to change how our application is on the target location.
In this session, we are going to take our workload and we are going to look at moving a Kubernetes workload from one cluster to another, but in doing so we are going to change how our application is on the target location.
It in fact uses a lot of the characteristics that we went through with [Disaster Recovery](day89.md)
It uses a lot of the characteristics that we went through with [Disaster Recovery](day89.md)
### **The Requirement**
Our current Kubernetes cluster cannot handle demand and our costs are rocketing through the roof. It is a business decision that we wish to move our production Kubernetes cluster to our Disaster Recovery location, located on a different public cloud, which will provide the ability to expand but at a cheaper rate. We could also take advantage of some of the native cloud services available in the target cloud.
Our current mission critical application (Pac-Man) has a database (MongoDB) and is running on slow storage, we would like to move to a newer faster storage tier.
Our current mission-critical application (Pac-Man) has a database (MongoDB) and is running on slow storage, we would like to move to a newer faster storage tier.
The current Pac-Man (NodeJS) front-end is not scaling very well, and we would like to increase the number of available pods in the new location.
### Getting to IT
We have our brief and in fact we have our imports already hitting the Disaster Recovery Kubernetes cluster.
We have our brief and, in fact, we have our imports already hitting the Disaster Recovery Kubernetes cluster.
The first job we need to do is remove the restore operation we carried out on Day 89 for the Disaster Recovery testing.
@ -35,15 +36,15 @@ We can do this using `kubectl delete ns pacman` on the "standby" minikube cluste
![](Images/Day90_Data1.png)
To get started head into the Kasten K10 Dashboard, select the Applications card. From the dropdown choose "Removed"
To get started, head into the Kasten K10 Dashboard and select the Applications card. From the dropdown choose "Removed".
![](Images/Day90_Data2.png)
We then get a list of the available restore points. We will select the one that is available as this contains our mission critical data. (In this example we only have a single restore point.)
We then get a list of the available restore points. We will select the one that is available as this contains our mission-critical data. (In this example we only have a single restore point.)
![](Images/Day90_Data3.png)
When we worked on the Disaster Recovery process, we left everything as default. However these additional restore options are there if you have a Disaster Recovery process that requires the transformation of your application. In this instance we have the requirement to change our storage and number of replicas.
When we worked on the Disaster Recovery process, we left everything as default. However, these additional restore options are there if you have a Disaster Recovery process that requires the transformation of your application. In this instance, we have the requirement to change our storage and number of replicas.
![](Images/Day90_Data4.png)
@ -51,7 +52,7 @@ Select the "Apply transforms to restored resources" option.
![](Images/Day90_Data5.png)
It just so happens that the two built in examples for the transformation that we want to perform are what we need for our requirements.
It just so happens that the two built-in examples for the transformation that we want to perform are what we need for our requirements.
![](Images/Day90_Data6.png)
@ -71,7 +72,7 @@ If you are following along you should see both of our transforms as per below.
![](Images/Day90_Data10.png)
You can now see from the below image that we are going to restore all of the artifacts listed below, if we wanted to we could also be granular about what we wanted to restore. Hit the "Restore" button
You can now see from the below image that we are going to restore all of the artefacts listed below, if we wanted to we could also be granular about what we wanted to restore. Hit the "Restore" button
![](Images/Day90_Data11.png)
@ -79,15 +80,15 @@ Again, we will be asked to confirm the actions.
![](Images/Day90_Data12.png)
The final thing to show is now if we head back into the terminal and we take a look at our cluster, you can see we have 5 pods now for the pacman pods and our storageclass is now set to standard vs the csi-hostpath-sc
The final thing to show: if we now head back into the terminal and take a look at our cluster, you can see we now have 5 Pacman pods and our storageclass is set to standard vs the csi-hostpath-sc.
![](Images/Day90_Data13.png)
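The commands behind that check would be something like the below (assuming, as in the earlier sessions, that the application was restored into the `pacman` namespace):

```Shell
# Assumption: the transformed application landed in the pacman namespace
kubectl get pods -n pacman
kubectl get pvc -n pacman    # the STORAGECLASS column should now read "standard"
```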
There are many different options that can be achieved through transformation. This can span not only migration but also Disaster Recovery, test and development type scenarios and more.
Many different options can be achieved through transformation. This can span not only migration but also Disaster Recovery, test and development type scenarios and more.
### API and Automation
I have not spoken about the ability to leverage the API and to automate some of these tasks, but these options are present and throughout the UI there are breadcrumbs that provide the command sets to take advantage of the APIs for automation tasks.
I have not spoken about the ability to leverage the API and automate some of these tasks, but these options are present, and throughout the UI there are breadcrumbs that provide the command sets to take advantage of the APIs for automation tasks.
The important thing to note about Kasten K10 is that it is deployed inside the Kubernetes cluster and can then be called through the Kubernetes API.
@ -107,13 +108,13 @@ As I wrap up this challenge, I want to continue to ask for feedback to make sure
I also appreciate there are a lot of topics that I was not able to cover or not able to dive deeper into around the topics of DevOps.
This means that we can always take another attempt that this challenge next year and find another 90 day's worth of content and walkthroughs to work through.
This means that we can always make another attempt at this challenge next year and find another 90 days' worth of content and walkthroughs to work through.
### What is next?
Firstly, a break from writing for a little while, I started this challenge on the 1st January 2022 and I have finished on the 31st March 2022 19:50 BST! It has been a slog. But as I say and have said for a long time, if this content helps one person, then it is always worth learning in public!
Firstly, a break from writing for a little while, I started this challenge on the 1st of January 2022 and I finished on the 31st of March 2022 at 19:50 BST! It has been a slog. But as I say and have said for a long time, if this content helps one person, then it is always worth learning in public!
I have some ideas on where to take this next and hopefully it has a life outside of a GitHub repository and we can look at creating an eBook and possibly even a physical book.
I have some ideas on where to take this next and hopefully, it has a life outside of a GitHub repository and we can look at creating an eBook and possibly even a physical book.
I also know that we need to revisit each post and make sure everything is grammatically correct before making anything like that happen. If anyone knows how to take markdown to print or to an eBook, feedback would be greatly appreciated.
@ -121,5 +122,6 @@ As always keep the issues and PRs coming.
Thanks!
@MichaelCade1
- [GitHub](https://github.com/MichaelCade)
- [Twitter](https://twitter.com/MichaelCade1)

View File

@ -4,7 +4,7 @@
<img src="logo.png?raw=true" alt="90DaysOfDevOps Logo" width="50%" height="50%" />
</p>
English Version | [中文版本](zh_cn/README.md) | [繁體中文版本](zh_tw/README.md)| [日本語版](ja/README.md)
English Version | [中文版本](zh_cn/README.md) | [繁體中文版本](zh_tw/README.md)| [日本語版](ja/README.md) | [Wersja Polska](pl/README.md)
This repository is used to document my journey on getting a better foundational knowledge of "DevOps". I will be starting this journey on the 1st January 2022 but the idea is that we take 90 days which just so happens to be January 1st to March 31st.
@ -91,7 +91,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich
### Kubernetes
- [✔️] ☸ 49 > [The Big Picture: Kubernetes](Days/day49.md)
- [✔️] ☸ 50 > [Choosing your Kubernetes platform ](Days/day50.md)
- [✔️] ☸ 50 > [Choosing your Kubernetes platform](Days/day50.md)
- [✔️] ☸ 51 > [Deploying your first Kubernetes Cluster](Days/day51.md)
- [✔️] ☸ 52 > [Setting up a multinode Kubernetes Cluster](Days/day52.md)
- [✔️] ☸ 53 > [Rancher Overview - Hands On](Days/day53.md)
@ -101,7 +101,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich
### Learn Infrastructure as Code
- [✔️] 🤖 56 > [The Big Picture: IaC](Days/day56.md)
- [✔️] 🤖 57 > [An intro to Terraform ](Days/day57.md)
- [✔️] 🤖 57 > [An intro to Terraform](Days/day57.md)
- [✔️] 🤖 58 > [HashiCorp Configuration Language (HCL)](Days/day58.md)
- [✔️] 🤖 59 > [Create a VM with Terraform & Variables](Days/day59.md)
- [✔️] 🤖 60 > [Docker Containers, Provisioners & Modules](Days/day60.md)
@ -161,7 +161,6 @@ This work is licensed under a
[![Star History Chart](https://api.star-history.com/svg?repos=MichaelCade/90DaysOfDevOps&type=Timeline)](https://star-history.com/#MichaelCade/90DaysOfDevOps&Timeline)
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg

File diff suppressed because it is too large Load Diff

View File

@ -61,7 +61,7 @@ DevOpsエンジニアとして、あなたはアプリケーションをプロ
- [What is DevOps? - TechWorld with Nana](https://www.youtube.com/watch?v=0yWAtQ6wYNM)
- [What is DevOps? - GitHub YouTube](https://www.youtube.com/watch?v=kBV8gPVZNEE)
- [What is DevOps? - IBM YouTube](https://www.youtube.com/watch?v=UbtB4sMaaNM)
- [What is DevOps? - AWS ](https://aws.amazon.com/devops/what-is-devops/)
- [What is DevOps? - AWS](https://aws.amazon.com/devops/what-is-devops/)
- [What is DevOps? - Microsoft](https://docs.microsoft.com/en-us/devops/what-is-devops)
ここまで来れば、ここが自分の望むところかどうかが分かるはずです。それでは、[3日目](day03.md)でお会いしましょう。

View File

@ -77,7 +77,7 @@ CIリリースが成功した場合 = 継続的デプロイメント = デプロ
### リソース:
- [DevOps for Developers Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU)
- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps? ](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s)
- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s)
- [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk)
ここまで来れば、ここが自分の居場所かどうかが分かるはずです。

View File

@ -1,55 +1,53 @@
---
title: '#90DaysOfDevOps - DevOps - The real stories - Day 6'
title: '#90DaysOfDevOps - DevOps - 本当の話 - 6日目'
published: false
description: 90DaysOfDevOps - DevOps - The real stories
description: 90DaysOfDevOps - DevOps - 本当の話
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048855
---
## DevOps - The real stories
## DevOps - 本当の話
DevOps to begin with was seen to be out of reach for a lot of us as we didn't have an environment or requirement anything like a Netflix or fortune 500 but think now that is beginning to sway into the normal when adopting a DevOps practice within any type of business.
DevOpsは当初、NetflixやFortune 500のような環境や要件がないため、多くの人にとって手の届かないものと思われていましたが、今ではあらゆる種類のビジネスでDevOpsの実践を採用する際に、それが普通になってきているように思います。
You will see from the second link below in references there are a lot of different industries and verticals using DevOps and having a hugely positive effect on their business objectives.
文末のリソースの2番目のリンクから、さまざまな業界や業種がDevOpsを使用しており、ビジネス目標に大きなプラスの効果をもたらしていることがおわかりいただけるでしょう。
Obviously the overarching benefit here is DevOps if done correctly should help your Business improve the speed and quality of software development.
もちろん、ここでの包括的な利点は、DevOpsが正しく実行されれば、ソフトウェア開発のスピードと品質を向上させるのに役立つということです。
I wanted to take this Day to look at succesful companies that have adopted a DevOps practice and share some resources around this, This will be another great one for the community to also dive in and help here. Have you adopted a DevOps culture in your business? Has it been successful?
この日は、DevOpsの実践を採用した成功した企業を見て、そのリソースを共有したいと思います。あなたのビジネスでDevOps文化を採用しましたかそれは成功しましたか
I mentioned Netflix above and will touch on them again as it is a very good model and advanced to what we generally see today even still but will also mention some other big name brands that are succeeding it seems.
Netflixは非常に良いモデルであり、現在でも一般的に見られるものより進んでいるので、もう一度触れますが、成功している他の有名ブランドについても触れます。
## Amazon
In 2010 Amazon moved their physical server footprint to Amazon Web Services (AWS) cloud this allowed them to save resources by scaling capacity up and down in very small increments. We also know that this AWS cloud would go on and make a huge amount of revenue itself whilst still running the Amazon retail branch of the company.
2010年、Amazonは物理サーバーをAmazon Web ServicesAWSクラウドに移行し、非常に小さな単位で容量を増減させることでリソースを節約できるようになりました。また、このAWSクラウドは、Amazonの小売部門を運営しながら、莫大な収益を上げるようになったことも知っている。
Amazon adopted in 2011 (According to the resource below) a continued deployment process where their developers could deploy code whenever they want and to whatever servers they needed. This enabled Amazon to achieve deploying new software to production servers on average every 11.6 seconds!
Amazonは2011年に、開発者が好きな時に好きなサーバにコードをデプロイできる継続的なデプロイメントプロセスを採用した下記のリソースによる。)これにより、アマゾンは平均11.6秒ごとに新しいソフトウェアを本番サーバにデプロイすることを実現しました。
## Netflix
Who doesn't use Netflix? obviously a huge quality streaming service with by all accounts at least personally a great user experience.
Netflixを利用しない人はいないでしょう。明らかに巨大で高品質なストリーミングサービスであり、少なくとも個人的には素晴らしいユーザーエクスペリエンスを持っています。
Why is that user experience so great? Well the ability to deliver a service with no recollected memory for me at least of glitches requires speed, flexibility, and attention to quality.
なぜ、そのような素晴らしいユーザーエクスペリエンスが必要なのでしょうか?少なくとも私にとっては、不具合の記憶がないサービスを提供するためには、スピード、柔軟性、そして品質へのこだわりが必要です。
NetFlix developers can automatically build pieces of code into deployable web images without relying on IT operations. As the images are updated, they are integrated into Netflixs infrastructure using a custom-built, web-based platform.
NetFlixの開発者は、ITオペレーションに依存することなく、コードの断片を自動的にデプロイ可能なWebイメージに構築することができます。イメージは更新されると、カスタムビルドされたウェブベースのプラットフォームを使って、Netflixのインフラに統合されます。
Continous Monitoring is in place so that if the deployment of the images fails, the new images are rolled back and traffic rerouted to the previous version.
継続的なモニタリングにより、イメージのデプロイに失敗した場合、新しいイメージはロールバックされ、トラフィックは以前のバージョンにリルートされるようになっています。
There is a great talk listed below that goes into more about the DOs and DONTs that Netflix live and die by within their teams.
Netflix がチーム内で実践している「やるべきこと」と「やってはいけないこと」については、次のような素晴らしい講演があります。
## Etsy
As with many of us and many companies there was a real struggle around slow and painful deployments. In the same vein we might have also experienced working in companies that have lots of siloes and teams that are not really working well together.
私たちの多くが、また多くの企業がそうであるように、遅くて辛いデプロイメントに本当に苦労していました。同じように、私たちも、サイロ化したチームや、連携がうまくいっていない会社で働くことを経験してきたかもしれません。
From what I can make out at least from reading about Amazon and Netflix, Etsy might have adopted the letting developers deploy their own code around the end of 2009 which might have been before the other two mentioned. (interesting!)
AmazonとNetflixの記事を読んだ限りでは、Etsyは2009年末頃に開発者が自分自身のコードをデプロイできるようにしたようですが、これは他の2社より早かったかもしれません。(面白い!)。
An interesting take away I read here was that they realised that when developers feel responsibility for deployment they also would take responsibility for application performance, uptime and other goals.
興味深いのは、開発者がデプロイメントに責任を感じると、アプリケーションのパフォーマンスやアップタイム、その他の目標にも責任を持つようになる、ということに気づいたということです。
学習文化はDevOpsの重要な部分であり、教訓が得られれば、失敗さえも成功になり得ます。(この引用が実際にどこから来たのかは分かりませんが、なんとなく納得です!)。
この他にも、DevOpsが大成功を収めた企業の中で、ゲームを変えたという話をいくつか追加しています。
A learning culture is a key part to DevOps, even failure can be a success if lessons are learned. (not sure where this quote actually came from but it kind of makes sense!)
I have added some other stories where DevOps has changed the game within some of these massively successful companies.
## Resources
## リソース
- [How Netflix Thinks of DevOps](https://www.youtube.com/watch?v=UTKIT6STSVM)
- [16 Popular DevOps Use Cases & Real Life Applications [2021]](https://www.upgrad.com/blog/devops-use-cases-applications/)
@ -59,16 +57,16 @@ I have added some other stories where DevOps has changed the game within some of
- [Interplanetary DevOps at NASA JPL](https://www.usenix.org/conference/lisa16/technical-sessions/presentation/isla)
- [Target CIO explains how DevOps took root inside the retail giant](https://enterprisersproject.com/article/2017/1/target-cio-explains-how-devops-took-root-inside-retail-giant)
### Recap of our first few days looking at DevOps
### DevOpsに注目した最初の数日間を振り返る
- DevOps is a combo of Development and Operations that allows a single team to manage the whole application development lifecycle that consists of **Development**, **Testing**, **Deployment**, **Operations**.
- DevOpsは、開発と運用を組み合わせたもので、**開発**、*テスト**、*デプロイメント**、*運用**からなるアプリケーション開発ライフサイクル全体を1つのチームが管理できるようにするものです。
- The main focus and aim of DevOps is to shorten the development lifecycle while delivering features, fixes and functionality frequently in close alignment with business objectives.
- DevOpsの主な焦点と目的は、開発ライフサイクルを短縮する一方で、ビジネス目標に密接に連携した特徴、修正、および機能を頻繁に提供することです。
- DevOps is a software development approach through which software can be delivered and developed reliably and quickly. You may also see this referenced as **Continuous Development, Testing, Deployment, Monitoring**
- DevOpsは、ソフトウェアを確実かつ迅速に提供・開発するためのソフトウェア開発手法である。また、「Continuous Development, Testing, Deployment, Monitoring**(継続的開発、テスト、デプロイメント、モニタリング)」と表記されることもあります。
If you made it this far then you will know if this is where you want to be or not. See you on [Day 7](day07.md).
ここまでくれば、自分がやりたいことがここにあるのか、そうでないのかがわかるはずです。では、[7日目](day07.md)でお会いしましょう。
Day 7 will be us diving into a programming language, I am not aiming to be a developer but I want to be able to understand what the developers are doing.
7日目はプログラミング言語に飛び込みます。私は開発者になることを目指しているわけではありませんが、開発者が何をしているのかを理解できるようになりたいです。
Can we achieve that in a week? Probably not but if we spend 7 days or 7 hours learning something we are going to know more than when we started.
1週間でそれを達成できるでしょうかしかし、7日間または7時間かけて何かを学べば、始めたときよりも多くのことを知ることができるようになります。

View File

@ -1,67 +1,67 @@
---
title: '#90DaysOfDevOps - Setting up your DevOps environment for Go & Hello World - Day 8'
title: '#90DaysOfDevOps - GoとHello WorldのためのDevOps環境のセットアップ - 8日目'
published: false
description: 90DaysOfDevOps - Setting up your DevOps environment for Go & Hello World
description: 90DaysOfDevOps - GoとHello WorldのためのDevOps環境のセットアップ
tags: 'devops, 90daysofdevops, learning'
cover_image: null
canonical_url: null
id: 1048857
---
## Setting up your DevOps environment for Go & Hello World
## GoとHello WorldのためのDevOps環境のセットアップ
Before we get into some of the fundamentals of Go we should get Go installed on our workstation and do what every "learning programming 101" module teaches us which is to create the Hello World app. As this one is going to be walking through the steps to get Go installed on your workstation we are going to attempt to document the process in pictures so people can easily follow along.
Goの基本に入る前に、ワークステーションにGoをインストールし、「プログラミングを学ぶ101」モジュールで教えられるように、Hello Worldアプリを作成しましょう。今回は、ワークステーションにGoをインストールする手順を説明するため、その手順を写真で記録し、簡単に追えるようにします。
First of all, let's head on over to [go.dev/dl](https://go.dev/dl/), where you will be greeted with the available download options.
![](Images/Day8_Go1.png)
If you made it this far, you probably know which workstation operating system you are running, so select the appropriate download and then we can get installing. I am using Windows for this walkthrough; basically, from this next screen, we can leave all the defaults in place for now. ***(I will note that at the time of writing this was the latest version, so screenshots might be out of date)***
![](Images/Day8_Go2.png)
Also note that if you do have an older version of Go installed, you will have to remove it before installing; Windows has this built into the installer and will remove and install as one step.
Once finished, open a command prompt/terminal so we can check that we have Go installed. If you do not get the output that we see below, then Go is not installed and you will need to retrace your steps.
`go version`
![](Images/Day8_Go3.png)
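For reference, a successful check in the terminal looks something like the transcript below; the exact version number and platform are only illustrative (I am assuming a recent Windows build here), so yours will differ.

```bash
$ go version
go version go1.18.1 windows/amd64
```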
Next up, we want to check our environment for Go. This is always good to do to make sure your working directories are configured correctly; as you can see below, we need to make sure you have the following directory on your system.
![](Images/Day8_Go4.png)
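If you prefer the terminal to the screenshot above, you can also ask Go directly where it expects that working directory to be. A minimal sketch, assuming the default Windows setup where the path sits under your user profile (your username and drive will differ):

```bash
$ go env GOPATH
C:\Users\<username>\go
```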
Did you check? Are you following along? You will probably get something like the below if you try and navigate there.
![](Images/Day8_Go5.png)
Ok, let's create that directory. For ease, I am going to use the mkdir command in my PowerShell terminal. We also need to create 3 folders within the Go folder, as you will see below; there is a sketch of those commands just after the screenshot.
![](Images/Day8_Go6.png)
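For anyone who would rather copy and paste than read the screenshot, here is a rough sketch of those PowerShell commands. It assumes the working directory lives under your user profile and uses the three conventional workspace folders (src, pkg and bin); adjust the path if yours differs.

```
# create the Go working directory and its three workspace folders (PowerShell)
mkdir "$HOME\go"
mkdir "$HOME\go\src"
mkdir "$HOME\go\pkg"
mkdir "$HOME\go\bin"
```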
Now we have Go installed and we have our Go working directory ready for action. We now need an integrated development environment (IDE). There are many available that you can use, but the most common, and the one I use, is Visual Studio Code, or Code. You can learn more about IDEs [here](https://www.youtube.com/watch?v=vUn5akOlFXQ).
If you have not already downloaded and installed VSCode on your workstation, you can do so by heading [here](https://code.visualstudio.com/download). As you can see below, you have different OS options.
![](Images/Day8_Go7.png)
Much the same as with the Go installation, we are going to download and install and keep the defaults. Once complete, you can open VSCode, select Open File and navigate to the Go directory that we created above.
![](Images/Day8_Go8.png)
You may get a popup about trust; read it if you want and then hit Yes, trust the authors. (I am not responsible later on, though, if you start opening things you don't trust!)
Now you should see the three folders we created earlier as well, and what we want to do now is right-click the src folder and create a new folder called `Hello`.
![](Images/Day8_Go9.png)
Pretty easy stuff up to this point, I would say? Now we are going to create our first Go program with no real understanding of anything we put in during this next phase.
Next, create a file called `main.go` in your `Hello` folder. As soon as you hit enter on main.go, you will be asked if you want to install the Go extension and also packages; you can also check that empty pkg folder that we made a few steps back and notice that we should have some new packages in there now.
![](Images/Day8_Go10.png)
Now let's get this Hello World app going; copy the following code into your new main.go file and save it.
```go
package main

import "fmt"

// main prints a greeting so we can confirm the toolchain works
func main() {
	fmt.Println("Hello #90DaysOfDevOps")
}
```
Now, I appreciate that the above might make no sense at all, but we will cover more about functions, packages and more in later days. For now, let's run our app. Back in the terminal, in our Hello folder, we can check that all is working. Using the command below, we can see if our generic learning program runs.
```
go run main.go
```
![](Images/Day8_Go11.png)
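If everything is in place, the terminal should simply echo the message from our program back at us, along the lines of:

```bash
$ go run main.go
Hello #90DaysOfDevOps
```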
It doesn't end there, though; what if we now want to take our program and run it on other Windows machines? We can do that by building our binary using the following command:
```
go build main.go
```
![](Images/Day8_Go12.png)
If we run that, we would see the same output:
```bash
$ ./main.exe
Hello #90DaysOfDevOps
```
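As a side note, and not something the walkthrough above relies on, Go can also build binaries for other operating systems from the same workstation by setting the GOOS and GOARCH environment variables before the build. A rough sketch from PowerShell might look like the following; the linux/amd64 target is just an example.

```
# cross-compile the same program for 64-bit Linux (PowerShell syntax)
$env:GOOS = "linux"
$env:GOARCH = "amd64"
go build main.go
```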
## Resources
- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
@ -104,6 +104,6 @@ Hello #90DaysOfDevOps
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)
See you on [Day 9](day09.md).
![](Images/Day8_Go13.png)

View File

@ -112,9 +112,9 @@ Next up we will start looking into Terraform with a 101 before we get some hands
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with them and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)

View File

@ -87,9 +87,9 @@ We are going to get into more around HCL and then also start using Terraform to
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with them and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)

View File

@ -219,9 +219,9 @@ The pros for storing state in a remote location is that we get:
## Resources
I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with them and I will be happy to review and add them to the list.
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)

Some files were not shown because too many files have changed in this diff.