Whitespace Removal

Remaining pages in EN

parent b51140167c
commit 6693ddc861
@ -17,7 +17,7 @@ metadata:

spec:
  selector:
    app: elasticsearch
  # Renders the service headless
  clusterIP: None
  ports:
    - port: 9200

@ -253,4 +253,4 @@ spec:

      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
---
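For context, the hunk above appears to come from a headless Service for Elasticsearch. A complete manifest of that shape might look like the following sketch; only the selector, `clusterIP: None` and port 9200 appear in the diff, so the metadata name and labels are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch        # assumed name, not shown in the diff
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  # clusterIP: None renders the Service headless: cluster DNS returns
  # the individual Pod IPs instead of a single virtual IP
  clusterIP: None
  ports:
    - port: 9200
```

Headless Services like this are commonly used for StatefulSets such as Elasticsearch, where clients need to address each Pod individually rather than load-balance across them.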
@ -1,63 +1,64 @@

---
title: "#90DaysOfDevOps - Introduction - Day 1"
published: true
description: 90DaysOfDevOps - Introduction
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048731
date: "2022-04-17T10:12:40Z"
---
## Introduction - Day 1

Day 1 of our 90-day adventure to build a good foundational understanding of DevOps and the tools that help with a DevOps mindset.

This learning journey started for me a few years back, but my focus then was around virtualisation platforms and cloud-based technologies; I was looking mostly into Infrastructure as Code and application configuration management with Terraform and Chef.

Fast forward to March 2021, when I was given an amazing opportunity to concentrate my efforts around the Cloud Native strategy at Kasten by Veeam, which meant a massive focus on Kubernetes, DevOps and the community surrounding these technologies. I started my learning journey and quickly realised there was a very wide world beyond just the fundamentals of Kubernetes and containerisation. It was then, as I started speaking to the community and learning more and more about the DevOps culture, tooling and processes, that I began documenting some of the areas I wanted to learn in public.

[So you want to learn DevOps?](https://blog.kasten.io/devops-learning-curve)

## Let the journey begin

If you read the above blog, you will see it is a high-level contents list for my learning journey, and I will say at this point that I am nowhere near an expert in any of these sections. What I wanted to do was share some resources, both FREE and paid for, with an option for both as we all have different circumstances.

Over the next 90 days, I want to document these resources and cover those foundational areas. I would love for the community to also get involved. Share your journey and resources so we can learn in public and help each other.

You will see from the opening readme in the project repository that I have split things into sections: 12 weeks plus 6 days. For the first 6 days, we will explore the fundamentals of DevOps in general before diving into some of the specific areas. By no means is this list exhaustive and, again, I would love for the community to assist in making this a useful resource.

Another resource I will share at this point, and one I think everyone should have a good look at (maybe even creating a mind map for your own interests and position), is the following:

[DevOps Roadmap](https://roadmap.sh/devops)

I found this a great resource when I was creating my initial list and blog post on this topic. It also goes into a lot more detail on areas outside of the 12 topics I have listed here in this repository.

## First Steps - What is DevOps?

There are too many blog articles and YouTube videos to list here, but as we start the 90-day challenge, focusing on spending around an hour a day learning something new about DevOps, I thought it would be good to begin with a high-level view of what DevOps is.

Firstly, DevOps is not a tool. You cannot buy it; it is not a software SKU or an open source GitHub repository you can download. It is not a programming language, and it is not some dark art magic either.

DevOps is a way to do smarter things in software development. Hold up... if you are not a software developer, should you turn away right now and not dive into this project? No. Not at all. Stay... because DevOps brings together a combination of software development and operations. I mentioned earlier that I was more on the VM side, which would generally fall under the operations side of the house, but within the community there are people with all kinds of backgrounds for whom DevOps is 100% going to be a benefit. Developers, Operations and QA engineers can all equally learn these best practices by having a better understanding of DevOps.

DevOps is a set of practices that help to reach the goal of this movement: reducing the time between the ideation phase of a product and its release in production to the end user, whether that is an internal team or a customer.

Another area we will dive into in this first week is **The Agile Methodology**. DevOps and Agile are widely adopted together to achieve continuous delivery of your **Application**.

The high-level takeaway is that a DevOps mindset or culture is about shrinking the long, drawn-out software release process from potentially years down to dropping smaller releases more frequently. The other key fundamental point to understand here is the responsibility of a DevOps engineer to break down silos between the teams I previously mentioned: Developers, Operations and QA.

From a DevOps perspective, **Development, Testing and Deployment** all land with the DevOps team.

The final point I will make is that to make this as effective and efficient as possible we must leverage **Automation**.

## Resources

I am always open to adding additional resources to these readme files, as they are here as a learning tool.

My advice is to watch all of the below, and hopefully you have also picked something up from the text and explanations above.

- [DevOps in 5 Minutes](https://www.youtube.com/watch?v=Xrgk023l4lI)
- [What is DevOps? Easy Way](https://www.youtube.com/watch?v=_Gpe1Zn-1fE&t=43s)
- [DevOps roadmap 2022 | Success Roadmap 2022](https://www.youtube.com/watch?v=7l_n97Mt0ko)

If you made it this far, then you will know if this is where you want to be or not. See you on [Day 2](day02.md).
@ -1,68 +1,70 @@

---
title: "#90DaysOfDevOps - Responsibilities of a DevOps Engineer - Day 2"
published: false
description: 90DaysOfDevOps - Responsibilities of a DevOps Engineer
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048699
date: "2022-04-17T21:15:34Z"
---
## Responsibilities of a DevOps Engineer

Hopefully, you are coming into this off the back of going through the resources and posting on [Day 1 of #90DaysOfDevOps](day01.md).

It was briefly touched on in the first post, but now we must get deeper into this concept and understand that there are two main parts to creating an application. We have the **Development** part, where software developers program the application and test it. Then we have the **Operations** part, where the application is deployed and maintained on a server.

## DevOps is the link between the two

To get to grips with DevOps, or the tasks which a DevOps engineer would be carrying out, we need to understand the tools, the processes and an overview of how they come together.

Everything starts with the application! You will see throughout that it is all about the application when it comes to DevOps.

Developers will create an application. This can be done with many different technology stacks, and let's leave that to the imagination for now as we get into this later. It can also involve many different programming languages, build tools, code repositories etc.

As a DevOps engineer you won't be programming the application, but having a good understanding of how a developer works and the systems, tools and processes they use is key to success.

At a very high level, you are going to need to know how the application is configured to talk to all of its required services or data services, and then also sprinkle in a requirement of how this can or should be tested.

The application will need to be deployed somewhere. Let's keep it simple here and make this a server; it doesn't matter where, but a server. This is then expected to be accessed by the customer or end user, depending on the application that has been created.

This server needs to run somewhere: on-premises, in a public cloud, or serverless (OK, I have gone too far; we won't be covering serverless, but it's an option and more and more enterprises are heading this way). Someone needs to create and configure these servers and get them ready for the application to run. Now, this element might land with you as a DevOps engineer to deploy and configure these servers.

These servers run an operating system, and generally speaking this is going to be Linux, but we have a whole section or week where we cover some of the foundational knowledge you should gain here.

It is also likely that we need to communicate with other services in our network or environment, so we need a level of knowledge around networking and its configuration; this might to some degree also land at the feet of the DevOps engineer. Again, we will cover this in more detail in a dedicated section talking about all things DNS, DHCP, load balancing etc.

## Jack of all trades, Master of none

I will say at this point, though, that you don't need to be a network or infrastructure specialist; you need a foundational knowledge of how to get things up and running and talking to each other, much the same as having a foundational knowledge of a programming language without needing to be a developer. However, you might be coming into this as a specialist in an area, and that is a great footing to adapt to other areas.

You will also most likely not take over the day-to-day management of these servers or the application.

We have been talking about servers, but the likelihood is that your application will be developed to run as containers, which still run on a server for the most part. You will need an understanding of not only virtualisation and cloud Infrastructure as a Service (IaaS) but also containerisation; the focus in these 90 days will be catered more towards containers.

## High-Level Overview

On one side we have our developers creating new features and functionality (as well as bug fixes) for the application.

On the other side, we have some sort of environment, infrastructure or servers which are configured and managed to run this application and communicate with all its required services.

The big question is: how do we get those features and bug fixes into our products and make them available to those end users?

How do we release the new application version? This is one of the main tasks for a DevOps engineer, and the important thing here is not just to figure out how to do this once; we need to do this continuously and in an automated, efficient way which also needs to include testing!

This is where we are going to end this day of learning. Hopefully this was useful. Over the next few days, we are going to dive a little deeper into some more areas of DevOps, and then we will get into the sections that dive deeper into the tooling and processes and the benefits of these.

## Resources

I am always open to adding additional resources to these readme files, as they are here as a learning tool.

My advice is to watch all of the below, and hopefully you also picked something up from the text and explanations above.

- [What is DevOps? - TechWorld with Nana](https://www.youtube.com/watch?v=0yWAtQ6wYNM)
- [What is DevOps? - GitHub YouTube](https://www.youtube.com/watch?v=kBV8gPVZNEE)
- [What is DevOps? - IBM YouTube](https://www.youtube.com/watch?v=UbtB4sMaaNM)
- [What is DevOps? - AWS](https://aws.amazon.com/devops/what-is-devops/)
- [What is DevOps? - Microsoft](https://docs.microsoft.com/en-us/devops/what-is-devops)

If you made it this far then you will know if this is where you want to be or not. See you on [Day 3](day03.md).
@ -1,78 +1,82 @@

---
title: "#90DaysOfDevOps - Application Focused - Day 3"
published: false
description: 90DaysOfDevOps - Application Focused
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048825
---
## DevOps Lifecycle - Application Focused

As we continue through these next few weeks, we are 100% going to come across these titles (Continuous Development, Testing, Deployment, Monitoring) over and over again. If you are heading towards the DevOps engineer role, then repeatability will be something you get used to, but constantly enhancing each time is another thing that keeps it interesting.

In this hour we are going to take a look at the high-level view of the application from start to finish, and then back around again like a constant loop.

### Development

Let's take a brand new example of an application. To start with we have nothing created; maybe as a developer you have to discuss with your client or end user and come up with some sort of plan or set of requirements for your application. We then need to create our brand new application from those requirements.

In regards to tooling at this stage, there is no real requirement other than choosing your IDE and the programming language you wish to use to write your application.

As a DevOps engineer, remember you are probably not the one creating this plan or coding the application for the end user; that will be a skilled developer.

But it also would not hurt for you to be able to read some of the code, so that you can make the best infrastructure decisions moving forward for your application.

We previously mentioned that this application can be written in any language. Importantly, it should be maintained using a version control system; this is something we will cover in detail later on, and in particular we will dive into **Git**.

It is also likely that it will not be just one developer working on this project, although that could be the case. Even so, best practices would require a code repository to store and collaborate on the code; this could be private or public, and hosted or privately deployed. Generally speaking, you would hear the likes of **GitHub or GitLab** being used as a code repository. Again, we will cover these as part of our section on **Git** later on.
### Testing

At this stage, we have our requirements and we have our application being developed. But we need to make sure we are testing our code in all the different environments that are available to us, or specifically, maybe, for the programming language chosen.

This phase enables QA to test for bugs; more frequently we see containers being used to simulate the test environment, which overall can improve on the cost overheads of physical or cloud infrastructure.
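As a sketch of that idea, a disposable test environment can be described in a small Compose file. The service layout, image and connection string below are purely illustrative, not taken from the original post:

```yaml
# docker-compose.test.yml - illustrative throwaway test environment
services:
  app:
    build: .                    # the application under test
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app_test
  db:
    image: postgres:15          # a real dependency, simulated in a container
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: app_test
```

Bringing this up gives QA (or a pipeline) a clean environment per test run that is torn down again afterwards, with no physical or cloud infrastructure to maintain.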
This phase is also likely to be automated as part of the next area, which is Continuous Integration.

The ability to automate this testing versus having 10s, 100s or even 1000s of QA engineers do it manually speaks for itself; those engineers can focus on something else within the stack, ensuring you move faster and develop more functionality, rather than testing bugs and software, which tends to be the hold-up on most traditional software releases that use a waterfall methodology.
### Integration

Quite importantly, Integration sits in the middle of the DevOps lifecycle. It is the practice in which developers are required to commit changes to the source code more frequently. This could be on a daily or weekly basis.

With every commit, your application can go through the automated testing phases, and this allows for early detection of issues or bugs before the next phase.
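That commit-triggered testing is typically wired up in a CI system. As a sketch (assuming a GitHub-hosted repository and a Node.js application, neither of which the post specifies), a minimal GitHub Actions workflow might look like:

```yaml
# .github/workflows/ci.yml - illustrative continuous integration sketch
name: CI
on:
  push:          # run the pipeline on every commit pushed
  pull_request:  # and on every proposed change
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # fetch the committed code
      - run: npm ci                 # install exact dependency versions
      - run: npm test               # fail the build early if tests fail
```

The important property is the trigger: every commit runs the same automated test suite, so a bug surfaces minutes after it is introduced rather than at release time.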
Now you might at this stage be saying "but we don't create applications, we buy them off the shelf from a software vendor". Don't worry, many companies do this and will continue to do this, and it will be the software vendor concentrating on the above three phases, but you might still want to adopt the final phase, as this will enable faster and more efficient deployments of your off-the-shelf software.

I would also suggest that just having the above knowledge is very important, as you might buy off-the-shelf software today, but what about tomorrow or down the line... next job maybe?
### Deployment
Ok, so we have our application built and tested against the requirements of our end user, and we now need to go ahead and deploy this application into production for our end users to consume.

This is the stage where the code is deployed to the production servers. Now this is where things get extremely interesting, and it is where the rest of our 86 days dives deeper into these areas, because different applications may require different hardware or configurations. This is where **Application Configuration Management** and **Infrastructure as Code** could play a key part in your DevOps lifecycle. It might be that your application is **Containerised** but also available to run on a virtual machine. This then also leads us on to platforms like **Kubernetes**, which would be orchestrating those containers and making sure you have the desired state available to your end users.

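As a sketch of what declaring that desired state can look like in Kubernetes, here is a minimal manifest; the application name, image and replica count are placeholder assumptions for illustration only.

```yaml
# Hypothetical Deployment - the name and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3 # desired state: three copies of the app running at all times
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0.0
          ports:
            - containerPort: 8080
```

Kubernetes then continuously reconciles the cluster towards this declared state, restarting or rescheduling containers as needed.
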
Of these bold topics, we will go into more detail over the next few weeks to get a better foundational knowledge of what they are and when to use them.
### Monitoring
Things are moving fast here: we have our application that we are continuously updating with new features and functionality, and we have our testing making sure no gremlins are found. We have the application running in our environment, continually keeping the required configuration and performance.

But now we need to be sure that our end users are getting the experience they require. Here we need to make sure that our application performance is continuously monitored; this phase will allow your developers to make better decisions about enhancements to the application in future releases to better serve the end users.

This section is also where we are going to capture that feedback wheel about the features that have been implemented and how the end users would like to make these better for them.

Reliability is a key factor here as well; at the end of the day, we want our application to be available whenever it is required. This then leads to other **observability, security and data management** areas that should be continuously monitored, and feedback can always be used to better enhance, update and release the application continuously.

Some input from the community here, specifically [@_ediri](https://twitter.com/_ediri), mentioned that as part of this continuous process we should also have the FinOps teams involved. Apps and data are running and stored somewhere, and you should be monitoring this continuously to make sure that, if things change from a resources point of view, your costs are not causing some major financial pain on your cloud bills.

I think it is also a good time to bring up the "DevOps Engineer" mentioned above. Albeit there are many DevOps Engineer positions in the wild that people hold, this is not the ideal way of positioning the process of DevOps. What I mean is, from speaking to others in the community, the title of DevOps Engineer should not be the goal for anyone, because really any position should be adopting the DevOps processes and culture explained here. DevOps should be used in many different positions, such as cloud-native engineer/architect, virtualisation admin, cloud architect/engineer, and infrastructure admin. This is to name a few, but the reason for using DevOps Engineer above was really to highlight the scope of the process used by any of the above positions and more.
## Resources
I am always open to adding additional resources to these readme files, as this is here as a learning tool.

My advice is to watch all of the below, and hopefully you also picked something up from the text and explanations above.

- [Continuous Development](https://www.youtube.com/watch?v=UnjwVYAN7Ns) I will also add that this is focused on manufacturing, but the lean culture can be closely followed with DevOps.
- [Continuous Testing - IBM YouTube](https://www.youtube.com/watch?v=RYQbmjLgubM)
- [Continuous Testing - IBM YouTube](https://www.youtube.com/watch?v=RYQbmjLgubM)
- [Continuous Integration - IBM YouTube](https://www.youtube.com/watch?v=1er2cjUq1UI)
- [Continuous Monitoring](https://www.youtube.com/watch?v=Zu53QQuYqJ0)
- [FinOps Foundation - What is FinOps](https://www.finops.org/introduction/what-is-finops/)
- [**NOT FREE** The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win](https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/1942788290/)

If you made it this far then you will know if this is where you want to be or not. See you on [Day 4](day04.md).

---
title: "#90DaysOfDevOps - DevOps & Agile - Day 4"
published: false
description: 90DaysOfDevOps - DevOps & Agile
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048700
---
The combination of Agile and DevOps brings the following benefits:

- Flexible management and powerful technology.

- Agile practices help DevOps teams to communicate their priorities more efficiently.

- The automation cost that you have to pay for your DevOps practices is justified by your agile requirement of deploying quickly and frequently.

- It leads to strengthening: the team adopting agile practices will improve collaboration, increase the team's motivation and decrease employee turnover rates.

- As a result, you get better product quality.

Agile allows coming back to previous product development stages to fix errors and prevent the accumulation of technical debt. To adopt agile and DevOps simultaneously, just follow 7 steps:
### Resources
- [DevOps for Developers – Day in the Life: DevOps Engineer in 2021](https://www.youtube.com/watch?v=2JymM0YoqGA)
- [3 Things I wish I knew as a DevOps Engineer](https://www.youtube.com/watch?v=udRNM7YRdY4)
- [How to become a DevOps Engineer feat. Shawn Powers](https://www.youtube.com/watch?v=kDQMjAQNvY4)

If you made it this far then you will know if this is where you want to be or not. See you on [Day 5](day05.md).

---
title: "#90DaysOfDevOps - Plan > Code > Build > Testing > Release > Deploy > Operate > Monitor > - Day 5"
published: false
description: 90DaysOfDevOps - Plan > Code > Build > Testing > Release > Deploy > Operate > Monitor >
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048830
---
## Plan > Code > Build > Testing > Release > Deploy > Operate > Monitor >
Today we are going to focus on the individual steps from start to finish and the continuous cycle of an Application in a DevOps world.

![DevOps](Images/Day5_Devops8.png)

### Plan
### Plan
It all starts with the planning process. This is where the development team gets together and figures out what types of features and bug fixes they're going to roll out in their next sprint. This is an opportunity for you as a DevOps engineer to get involved, learn what kinds of things are going to be coming your way that you need to be involved with, and influence their decisions or their path: help them work with the infrastructure that you've built, or steer them towards something that's going to work better for them in case they're not on that path. One key thing to point out here is that the developers, or the software engineering team, are your customer as a DevOps engineer, so this is your opportunity to work with your customer before they go down a bad path.

### Code
### Code
Once that planning session is done, they're going to start writing the code. You may or may not be involved a whole lot with this; one of the places you may get involved is that, whenever they're writing code, you can help them better understand the infrastructure, so they know what services are available and how best to talk to those services. Once they're done, they'll merge that code into the repository.

### Build
### Build
This is where we'll kick off the first of our automation processes, because we're going to take their code and build it. Depending on what language they're using, this may mean transpiling or compiling it, or it might mean creating a Docker image from that code; either way, we're going to go through that process using our CI/CD pipeline.

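As a sketch of what this build automation can look like, here is a hypothetical CI job that builds a Docker image from the committed code; the GitHub Actions layout is used as one example, and the image name and registry are placeholders, not from this text.

```yaml
# Hypothetical build job in a CI/CD pipeline (GitHub Actions syntax)
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # fetch the merged code
      - name: Build container image
        # tag the image with the commit SHA so every build is traceable
        run: docker build -t my-registry/my-app:${{ github.sha }} .
```

The resulting image is the artifact that the later release and deploy phases hand onwards.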
## Testing

Once we've built it, we're going to run some tests on it. The development team usually writes the tests, and you may have some input into what tests get written, but we need to run those tests. Testing is a way for us to try to minimise introducing problems into production. It doesn't guarantee that, but we want to get as close to a guarantee as we can that we are, one, not introducing new bugs and, two, not breaking things that used to work.
## Release

Once those tests pass, we're going to do the release process and, depending again on what type of application you're working on, this may be a non-step. The code may just live in the GitHub repository, or wherever your git repository lives, but it may be the process of taking your compiled code, or the Docker image that you've built, and putting it into a registry or a repository where it's accessible by your production servers for the deployment process.
## Deploy

Deployment is the thing that we do next, because deployment is like the end game of this whole thing: deployments are when we put the code into production, and it's not until we do that that our business realises the value from all the time, effort and hard work that you and the software engineering team have put into this product up to this point.
## Operate

Once it's deployed, we are going to operate it. Operating it may involve something like this: you start getting calls from your customers that they're all annoyed that the site or their application is running slow, so you need to figure out why that is, and then possibly build auto-scaling to increase the number of servers available during peak periods and decrease the number of servers during off-peak periods; either way, those are all operational-type metrics. Another operational thing that you do is include a feedback loop from production back to your ops team, letting you know about key events that happened in production, such as a deployment. Back one step on the deployment thing: this may or may not get automated depending on your environment. The goal is to always automate it when possible; there are some environments where you possibly need to do a few steps before you're ready to do that, but ideally you want to deploy automatically as part of your automation process. If you're doing that, it might be a good idea to include in your operational steps some type of notification so that your ops team knows that a deployment has happened.

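As one concrete example of the auto-scaling described above, Kubernetes offers a HorizontalPodAutoscaler. The sketch below assumes a hypothetical Deployment named `my-app`, and the replica bounds and CPU threshold are illustrative, not prescribed values.

```yaml
# Hypothetical HorizontalPodAutoscaler - scales a Deployment named my-app
# between 2 and 10 replicas based on average CPU utilisation
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2 # floor for off-peak periods
  maxReplicas: 10 # ceiling for peak periods
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # add replicas above 70% average CPU
```

This hands the peak/off-peak scaling decision to the platform rather than to a person on call.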
## Monitor

All of the above parts lead to the final step, because you need to have monitoring, especially around operational issues, auto-scaling and troubleshooting: you don't know there's a problem if you don't have monitoring in place to tell you that there's a problem. Some of the things you might build monitoring for are memory utilisation, CPU utilisation, disk space, and API endpoint response time, i.e. how quickly that endpoint is responding. A big part of this as well is logs: logs give developers the ability to see what is happening without having to access production systems.

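To make this concrete, monitoring tools usually let you declare alerts on exactly these metrics. Below is a sketch of alerting rules using Prometheus as one example tool; the metric names, thresholds and labels are illustrative assumptions rather than a recommended configuration.

```yaml
# Hypothetical Prometheus alerting rules covering memory and response time
groups:
  - name: app-alerts
    rules:
      - alert: HighMemoryUsage
        # fires when less than 10% of memory is available for 5 minutes
        expr: (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) < 0.10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Less than 10% of memory available"
      - alert: SlowApiEndpoint
        # fires when the 90th percentile response time stays above 500ms
        expr: http_request_duration_seconds{quantile="0.9"} > 0.5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "90th percentile API response time above 500ms"
```

Similar rules can cover CPU utilisation and disk space, closing the feedback loop back to the ops team.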
## Rinse & Repeat

Once that's in place, you go right back to the beginning, to the planning stage, and go through the whole thing again.
## Continuous

Many tools help us achieve the above continuous process. All of this, with the ultimate goal of being completely automated in cloud infrastructure or any environment, is often described as Continuous Integration/Continuous Delivery/Continuous Deployment, or "CI/CD" for short. We will spend a whole week on CI/CD later on in the 90 Days, with some examples and walkthroughs to grasp the fundamentals.
### Continuous Delivery

Continuous Delivery = Plan > Code > Build > Test
### Continuous Integration

This is effectively the outcome of the Continuous Delivery phases above plus the outcome of the Release phase. This is the case for both failure and success, but on failure this is fed back into Continuous Delivery, while on success we move to Continuous Deployment.

Continuous Integration = Plan > Code > Build > Test > Release
### Continuous Deployment

If you have a successful release from your Continuous Integration, then move to Continuous Deployment, which brings in the following phases:

CI Release is a Success = Continuous Deployment = Deploy > Operate > Monitor

You can see these three Continuous notions above as a simple collection of phases of the DevOps lifecycle.

This last bit was a bit of a recap for me on Day 3, but I think this makes things clearer for me.
### Resources

- [DevOps for Developers – Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU)
- [Techworld with Nana - DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s)
- [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk)

If you made it this far then you will know if this is where you want to be or not.

See you on [Day 6](day06.md).

---
title: "#90DaysOfDevOps - DevOps - The real stories - Day 6"
published: false
description: 90DaysOfDevOps - DevOps - The real stories
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048855
---
## DevOps - The real stories
DevOps, to begin with, was seen to be out of reach for a lot of us, as we didn't have an environment or requirements anything like a Netflix or a Fortune 500, but I think that is now beginning to sway into the norm when adopting a DevOps practice within any type of business.

You will see from the second link below in the references that there are a lot of different industries and verticals using DevOps and having a hugely positive effect on their business objectives.

The overarching benefit here is that DevOps, if done correctly, should help your business improve the speed and quality of software development.

I wanted to take this day to look at successful companies that have adopted a DevOps practice and share some resources around this. This will be another great one for the community to also dive in and help with. Have you adopted a DevOps culture in your business? Has it been successful?

I mentioned Netflix above and will touch on them again, as theirs is a very good model, advanced compared to what we generally see today even still, but I will also mention some other big-name brands that seem to be succeeding.
## Amazon
In 2010 Amazon moved their physical server footprint to the Amazon Web Services (AWS) cloud, which allowed them to save resources by scaling capacity up and down in very small increments. We also know that this AWS cloud would go on to make a huge amount of revenue itself whilst still running the Amazon retail branch of the company.

Amazon adopted in 2011 (according to the resource below) a continuous deployment process where their developers could deploy code whenever they wanted and to whatever servers they needed. This enabled Amazon to achieve deploying new software to production servers on average every 11.6 seconds!
## Netflix
Who doesn't use Netflix? A huge, quality streaming service with, by all accounts (at least personally), a great user experience.

Why is that user experience so great? Well, the ability to deliver a service with no recollected memory (for me at least) of glitches requires speed, flexibility, and attention to quality.

Netflix developers can automatically build pieces of code into deployable web images without relying on IT operations. As the images are updated, they are integrated into Netflix's infrastructure using a custom-built, web-based platform.
Continuous monitoring is in place so that, if the deployment of the images fails, the new images are rolled back and traffic is rerouted to the previous version.

There is a great talk listed below that goes into more detail about the DOs and DON'Ts that Netflix lives and dies by within their teams.
## Etsy
As with many of us and many companies, there was a real struggle around slow and painful deployments. In the same vein, we might have also experienced working in companies that have lots of silos and teams that are not working well together.

From what I can make out, at least from reading about Amazon and Netflix, Etsy might have adopted letting developers deploy their own code around the end of 2009, which might have been before the other two mentioned. (Interesting!)

An interesting takeaway I read here was that they realised that when developers feel responsible for deployment, they also take responsibility for application performance, uptime and other goals.

A learning culture is a key part of DevOps; even failure can be a success if lessons are learned. (Not sure where this quote came from, but it kind of makes sense!)

I have added some other stories where DevOps has changed the game within some of these massively successful companies.
## Resources

- [How Netflix Thinks of DevOps](https://www.youtube.com/watch?v=UTKIT6STSVM)
- [16 Popular DevOps Use Cases & Real Life Applications [2021]](https://www.upgrad.com/blog/devops-use-cases-applications/)
### Recap of our first few days looking at DevOps

- DevOps is a combination of Development and Operations that allows a single team to manage the whole application development lifecycle: **Development**, **Testing**, **Deployment**, **Operations**.

- The main focus and aim of DevOps is to shorten the development lifecycle while delivering features, fixes and functionality frequently, in close alignment with business objectives.

- DevOps is a software development approach through which software can be developed and delivered reliably and quickly. You may also see this referenced as **Continuous Development, Testing, Deployment, Monitoring**.

If you made it this far then you will know if this is where you want to be or not. See you on [Day 7](day07.md).

Day 7 will see us diving into a programming language. I am not aiming to be a developer, but I want to be able to understand what the developers are doing.

Can we achieve that in a week? Probably not, but if we spend 7 days or 7 hours learning something, we are going to know more than when we started.
---
title: "#90DaysOfDevOps - The Big Picture: Learning a Programming Language - Day 7"
published: false
description: 90DaysOfDevOps - The Big Picture DevOps & Learning a Programming Language
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048856
---
## The Big Picture: DevOps & Learning a Programming Language

I think it is fair to say that to be successful in the long term as a DevOps engineer, you have to know at least one programming language at a foundational level. I want to use this first session of the section to explore why this is such a critical skill to have, and hopefully, by the end of this week, you will have a better understanding of the why, how and what of progressing with your learning journey.

If I were to ask on social media whether you need programming skills for DevOps-related roles, the answer would most likely be a hard yes. Let me know if you think otherwise. The bigger question, and one where you won't get such a clear answer, is which programming language? The most common answer I have seen has been Python or, increasingly often, Golang (Go).

My takeaway from that is that to be successful in DevOps you need a good grasp of programming. But we have to understand why we need it in order to choose the right path.
## Understand why you need to learn a programming language

The reason Python and Go are recommended so often for DevOps engineers is that a lot of DevOps tooling is written in one of the two, which makes sense if you are going to be building DevOps tools. This is important because it will largely determine what you should learn and what will likely be most beneficial. If you are going to be building DevOps tools, or you are joining a team that does, then it makes sense to learn that same language. If you are going to be heavily involved in Kubernetes or containers, then it is more than likely you would want to choose Go as your programming language. For me, the company I work for (Kasten by Veeam) is in the Cloud-Native ecosystem, focused on data management for Kubernetes, and everything is written in Go.

But you might not have clear-cut reasoning like that; you might be a student or transitioning careers with no real decision made for you. In this situation, I think you should choose the language that seems to resonate and fit with the applications you are looking to work with.

Remember, I am not looking to become a software developer here. I just want to understand a little more about the programming language so that I can read and understand what those tools are doing, which then possibly leads to how we can help improve things.
I would also say it is important to know how you interact with those DevOps tools, which could be Kasten K10 or Terraform and HCL. These are what we call config files, and they are how you interact with those DevOps tools to make things happen; commonly these are going to be YAML. (We may use the last day of this section to dive a little into YAML.)
## Did I just talk myself out of learning a programming language?

Most of the time, depending on the role, you will be helping engineering teams implement DevOps into their workflow: a lot of testing around the application and making sure the workflow that is built aligns with the DevOps principles we mentioned over the first few days. In reality, though, a lot of the time this will be troubleshooting an application performance issue or something along those lines. This comes back to my original point and reasoning: the programming language I need to know is the one the code is written in. If their application is written in Node.js, it won't help much if you have a Go or Python badge.
## Why Go

Why is Golang the next programming language for DevOps? Go has become a very popular programming language in recent years. According to the StackOverflow Survey for 2021, Go came in fourth for the most wanted programming, scripting and markup languages, with Python on top, but hear me out. [StackOverflow 2021 Developer Survey – Most Wanted Link](https://insights.stackoverflow.com/survey/2021#section-most-loved-dreaded-and-wanted-programming-scripting-and-markup-languages)

As I have also mentioned, some of the best-known DevOps tools and platforms are written in Go, such as Kubernetes, Docker, Grafana and Prometheus.

What are some of the characteristics of Go that make it great for DevOps?
## Build and Deployment of Go Programs

An advantage of using an interpreted language like Python in a DevOps role is that you don't need to compile a Python program before running it; especially for smaller automation tasks, you don't want to be slowed down by a build process. Go, even though it is a compiled programming language, **compiles directly into machine code** and is also known for fast compilation times.

## Go vs Python for DevOps
Go programs are statically linked; when you compile a Go program, everything is included in a single binary executable, and no external dependencies need to be installed on the remote machine. This makes the deployment of Go programs easy compared to a Python program that uses external libraries, where you have to make sure all of those libraries are installed on the remote machine you wish to run on.

Go is a platform-independent language, which means you can very easily produce binary executables for all the major operating systems: Linux, Windows, macOS etc. With Python, it is not as easy to create binary executables for particular operating systems.

Go is a very performant language: it has fast compilation and a fast runtime with lower resource usage (CPU and memory), especially compared to Python. Numerous optimisations have been implemented in the Go language to make it so performant. (Resources below)
Unlike Python, which often requires third-party libraries to implement a particular program, Go includes a standard library with the majority of the functionality you would need for DevOps built directly in. This includes file processing, HTTP web services, JSON processing, native support for concurrency and parallelism, as well as built-in testing.
This is by no means throwing Python under the bus; I am just giving my reasons for choosing Go, and they are not really the Go vs Python points above. It is generally because it makes sense for me: the company I work for develops software in Go, so that is why.

I will say, or at least I am told, as I am not many pages into this chapter right now, that once you learn your first programming language it becomes easier to take on other languages. You are probably never going to have a job in any company where you don't have to deal with managing, architecting, orchestrating or debugging JavaScript and Node.js applications.
## Resources

- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s)
- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I)
- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals)
- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s)
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)
Now, for the next 6 days of this topic, I intend to work through some of the resources listed above and document my notes each day. You will notice that they are generally around 3 hours as a full course. I wanted to share my complete list so that, if time permits, you can move ahead and work through each one; I will be sticking to my learning hour each day.

See you on [Day 8](day08.md).
---
title: "#90DaysOfDevOps - Setting up your DevOps environment for Go & Hello World - Day 8"
published: false
description: 90DaysOfDevOps - Setting up your DevOps environment for Go & Hello World
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048857
---
## Setting up your DevOps environment for Go & Hello World

Before we get into some of the fundamentals of Go, we should get Go installed on our workstation and do what every "learning programming 101" module teaches us: create the Hello World app. As this post walks through the steps to get Go installed on your workstation, we will attempt to document the process in pictures so people can easily follow along.

First of all, let's head over to [go.dev/dl](https://go.dev/dl/), where you will be greeted with the available download options.


If we made it this far, you probably know which workstation operating system you are running, so select the appropriate download and then we can get installing. I am using Windows for this walkthrough; from the next screen, we can leave all the defaults in place for now. **_(I will note that at the time of writing this was the latest version, so screenshots might be out of date)_**



Also note that if you have an older version of Go installed, you will have to remove it before installing; Windows has this built into the installer and will remove the old version and install the new one as one step.
Once finished, you should open a command prompt/terminal to check that we have Go installed. If you do not get output like that shown below, then Go is not installed and you will need to retrace your steps.

`go version`


Next up, we want to check our environment for Go. It is always good to check that your working directories are configured correctly; as you can see below, we need to make sure you have the following directory on your system.


Did you check? Are you following along? You will probably get something like the below if you try to navigate there.


Ok, let's create that directory. For ease, I am going to use the mkdir command in my PowerShell terminal. We also need to create 3 folders within the Go folder, as you will see below.


Now we have Go installed and our Go working directory ready for action. We next need an integrated development environment (IDE). There are many available, but the most common, and the one I use, is Visual Studio Code. You can learn more about IDEs [here](https://www.youtube.com/watch?v=vUn5akOlFXQ).

If you have not already downloaded and installed VSCode on your workstation, you can do so by heading [here](https://code.visualstudio.com/download). As you can see below, you have different OS options.


Much the same as with the Go installation, we are going to download and install, keeping the defaults. Once complete, you can open VSCode, select Open File, and navigate to the Go directory that we created above.



|
||||
|
||||
Pretty easy stuff I would say up till this point? Now we are going to create our first Go Program with no understanding of anything we put in this next phase.
|
||||
Pretty easy stuff I would say up till this point? Now we are going to create our first Go Program with no understanding of anything we put in this next phase.
|
||||
|
||||
Next, create a file called `main.go` in your `Hello` folder. As soon as you hit enter on main.go, you will be asked if you want to install the Go extension and packages. You can also check that empty pkg folder we made a few steps back and notice that we should now have some new packages in there.


Now let's get this Hello World app going. Copy the following code into your new main.go file and save it.

```
package main

import "fmt"

func main() {
    fmt.Println("Hello #90DaysOfDevOps")
}
```

Now I appreciate that the above might make no sense at all, but we will cover more about functions, packages and more in later days. For now, let's run our app. Back in the terminal, in our Hello folder, we can check that all is working. Using the command below, we can check to see if our generic learning program is working.

```
go run main.go
```


|
||||
|
||||
It doesn't end there though, what if we now want to take our program and run it on other Windows machines? We can do that by building our binary using the following command
|
||||
It doesn't end there though, what if we now want to take our program and run it on other Windows machines? We can do that by building our binary using the following command
|
||||
|
||||
```
|
||||
go build main.go
|
||||
```
|
||||
```
|
||||
|
||||

|
||||
|
||||
If we run that, we would see the same output:

```
Hello #90DaysOfDevOps
```

## Resources
- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s)
- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I)
- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals)
- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s)
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)
See you on [Day 9](day09.md).
---
title: "#90DaysOfDevOps - Let's explain the Hello World code - Day 9"
published: false
description: 90DaysOfDevOps - Let's explain the Hello World code
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1099682
---
## Let's explain the Hello World code

### How Go works

On [Day 8](day08.md) we walked through getting Go installed on your workstation and created our first Go application.

In this section, we are going to take a deeper look into the code and understand a few more things about the Go language.
### What is Compiling?

Before we get into the [6 lines of the Hello World code](Go/hello.go) we need to have a bit of an understanding of compiling.

Programming languages that we commonly use, such as Python, Java, Go and C++, are high-level languages, meaning they are human-readable. When a machine executes a program, the code needs to be in a form the machine can understand, so we have to translate our human-readable code into machine code; this is called compiling.


From the above you can see what we did on [Day 8](day08.md): we created a simple Hello World main.go and then used the command `go build main.go` to compile our executable.

### What are packages?

A package is a collection of source files in the same directory that are compiled together. To simplify further: a package is a bunch of .go files in the same directory. Remember our Hello folder from Day 8? If and when you get into more complex Go programs, you might find that you have folder1, folder2 and folder3 containing different .go files that make up your program with multiple packages.

We use packages so we can reuse other people's code; we don't have to write everything from scratch. Maybe we want a calculator as part of our program: you could probably find an existing Go package containing the mathematical functions that you could import into your code, saving you a lot of time and effort in the long run.

Go encourages you to organise your code in packages so that it is easy to reuse and maintain source code.
### Hello #90DaysOfDevOps Line by Line

Now let's take a look at our Hello #90DaysOfDevOps main.go file and walk through the lines.



In the first line, you have `package main`, which means that this file belongs to a package called main. All .go files need to belong to a package, and they should have `package something` as the opening line.

A package can be named whatever you wish, but here we have to call it `main`: an executable Go program must contain a `main` package, as this is where the program starts. This is a rule of the language; only library packages are free to use other names.



Whenever we want to compile and execute our code we have to tell the machine where execution needs to start. We do this by writing a function called main. The machine will look for a function called main to find the entry point of the program.

A function is a block of code that performs a specific task and can be used across the program.

You can declare a function with any name using `func`, but in this case we need to name it `main` as this is where the code starts.


Next, we are going to look at line 3 of our code: the import. This means we want to bring functionality from another package into our main program; in our case, the `fmt` package from the standard library.



The `Println()` that we have here is a way to write standard output to the terminal wherever the executable has been executed. Feel free to change the message in between the ().


### TLDR

- **Line 1** = This file will be in the package called `main`, and it needs to be called `main` because it includes the entry point of the program.
- **Line 3** = For us to use `Println()` we have to import the fmt package so we can use it on line 6.
- **Line 5** = The actual starting point, it's the `main` function.
- **Line 6** = This will let us print "Hello #90DaysOfDevOps" on our system.
## Resources

- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s)
- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I)
- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals)
- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s)
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)
See you on [Day 10](day10.md).
---
title: "#90DaysOfDevOps - The Go Workspace - Day 10"
published: false
description: 90DaysOfDevOps - The Go Workspace
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048701
---

### The Go Workspace

On [Day 8](day08.md) we briefly covered the Go workspace to get Go up and running for the `Hello #90DaysOfDevOps` demo, but we should explain a little more about the Go workspace.

Remember we chose the defaults and then created our Go folder in the GOPATH that was already defined, but in reality this GOPATH can be changed to be wherever you want it to be.

If you run

```
echo $GOPATH
```

The output should be similar to mine (with a different username maybe), which is:

```
/home/michael/projects/go
```

Then here, we created 3 directories: **src**, **pkg** and **bin**.


**src** is where all of your Go programs and projects go.



**pkg** is where the archived files of packages that are or were installed in your programs live. This helps speed up compiling, based on whether the packages being used have been modified.



**bin** is where all of your compiled binaries are stored.


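Putting the three directories together, the workspace described above looks something like this (the project and binary names are purely illustrative):

```
$GOPATH
├── src
│   └── hello
│       └── main.go
├── pkg
│   └── linux_amd64
└── bin
    └── hello
```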
Our Hello #90DaysOfDevOps is not a complex program, so here is an example of a more complex program structure.

This page also goes into some great detail about why and how the layout is like this, and it goes a little deeper on other folders we have not mentioned: [GoChronicles](https://gochronicles.com/project-structure/)

### Compiling & running code

On [Day 9](day09.md) we also covered a brief introduction to compiling code, but we can go a little deeper here.

To run our code we first must **compile** it. There are three ways to do this within Go.

- go build
- go install
- go run

Before we get to the above compile stage, we need to take a look at what we get with the Go installation.

When we installed Go on Day 8 we installed something known as the Go tools, which consist of several programs that let us build and process our Go source files. One of those tools is `go`.

It is worth noting that you can install additional tools that are not in the standard Go installation.

If you open your command prompt and type `go` you should see something like the image below, followed by "Additional Help Topics"; for now we don't need to worry about those.



You might also remember that we have already used at least two of these tools so far on Day 8.



The ones we want to learn more about are build, install and run.



- `go run` - This command compiles and runs the main package comprised of the .go files specified on the command line. The binary is built in a temporary folder.
- `go build` - This compiles packages and dependencies. If it is the `main` package, it places the executable in the current directory; if not, it places the compiled package in the `pkg` folder. `go build` also enables you to build an executable file for any Go-supported OS platform.
- `go install` - The same as go build, but it places the executable in the `bin` folder.

We have run through go build and go run, but feel free to run through them again here if you wish; `go install`, as stated above, puts the executable in our bin folder.


Hopefully, if you are following along you are watching one of the playlists or videos below. I am taking bits of all of these and translating them into my notes so that I can understand the foundational knowledge of the Go language. The resources below will likely give you a much better understanding of many of the areas overall, but I am trying to document the 7 days or 7 hours worth of the journey with interesting things that I have found.

## Resources

- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s)
- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I)
- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals)
- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s)
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)
See you on [Day 11](day11.md).
---
title: "#90DaysOfDevOps - Variables & Constants in Go - Day 11"
published: false
description: 90DaysOfDevOps - Variables & Constants in Go
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048862
---

Before we get into the topics for today, I want to give a massive shout out to [Techworld with Nana](https://www.youtube.com/watch?v=yyUHQIec83I) for this fantastic, concise journey through the fundamentals of Go.

On [Day 8](day08.md) we set our environment up, on [Day 9](day09.md) we walked through the Hello #90DaysOfDevOps code and on [Day 10](day10.md) we looked at our Go workspace and went a little deeper into compiling and running the code.

Today we are going to take a look into Variables, Constants and Data Types whilst writing a new program.

## Variables & Constants in Go

Let's start by planning our application. I think it would be a good idea to work on a program that tells us how many days we have remaining in our #90DaysOfDevOps challenge.

The first thing to consider is that as we build our app, welcome our attendees and give the user feedback on the number of days they have completed, we might use the term #90DaysOfDevOps many times throughout the program. This is a great use case for making #90DaysOfDevOps a variable within our program.

- Variables are used to store values.
- A variable is like a little box holding our saved information or values.
- We can then use this variable across the program, with the added benefit that if this challenge or variable changes, we only have to change it in one place. This means we could translate this to other challenges we have in the community by just changing that one variable value.

To declare this in our Go program, we define a value by using a **keyword** for variables. This will live within our `func main` block of code that you will see later. You can find more about [Keywords](https://go.dev/ref/spec#Keywords) here.

Remember to make sure that your variable names are descriptive. If you declare a variable you must use it or you will get an error; this is to avoid possible dead code, code that is never used. The same applies to unused packages.

```
var challenge = "#90DaysOfDevOps"
```

With the above set and used as we will see in the next code snippet, you can see from the output below that we have used a variable.

```
package main

import "fmt"

func main() {
    var challenge = "#90DaysOfDevOps"
    fmt.Println("Welcome to", challenge, "")
}
```

You can find the above code snippet in [day11_example1.go](Go/day11_example1.go)

You will then see from the below that we built our code with the above example and we got the output shown below.



We also know that our challenge is 90 days, at least for this challenge, but the next one might be 100, so we want to define a variable to help us here as well. However, for our program, we want to define this as a constant. Constants are like variables, except that their value cannot be changed within code (we can still create a new app later on down the line with this code and change this constant, but this 90 will not change whilst we are running our application).

Adding the `const` to our code and adding another line of code to print this:

```
package main

import "fmt"

func main() {
    var challenge = "#90DaysOfDevOps"
    const daystotal = 90

    fmt.Println("Welcome to", challenge, "")
    fmt.Println("This is a", daystotal, "challenge")
}
```

You can find the above code snippet in [day11_example2.go](Go/day11_example2.go)

If we then go through that `go build` process again and run the program, you will see the outcome below.



Finally, and this won't be the end of our program, we will come back to this on [Day 12](day12.md) to add more functionality. We now want to add another variable for the number of days we have completed the challenge.

Below I added the `dayscomplete` variable with the number of days completed.

```
package main

import "fmt"

func main() {
    var challenge = "#90DaysOfDevOps"
    const daystotal = 90
    var dayscomplete = 11

    fmt.Println("Welcome to", challenge, "")
    fmt.Println("This is a", daystotal, "challenge and you have completed", dayscomplete, "days")
    fmt.Println("Great work")
}
```

You can find the above code snippet in [day11_example3.go](Go/day11_example3.go)

Let's run through that `go build` process again, or you could just use `go run`.



Here are some other examples that I have used to make the code easier to read and edit. Up till now we have been using `Println`, but we can simplify this by using `Printf` with `%v`, which means we define our variables in order at the end of the line of code. We also use `\n` for a line break.

I am using `%v` as this uses a default format, but there are other options that can be found in the [fmt package documentation](https://pkg.go.dev/fmt). You can find the code example in [day11_example4.go](Go/day11_example4.go)

Variables may also be defined in a simpler format in your code. Instead of defining `var` and the `type`, you can code this as follows to get the same functionality with a cleaner, simpler look. This will only work for variables though, and not constants.

```
func main() {
    challenge := "#90DaysOfDevOps"
    const daystotal = 90
```

## Data Types

In the above examples, we have not defined the types of our variables. This is because we can give them a value here and Go is smart enough to know what the type is, or at least it can infer it based on the value you have stored. However, if we want a user to input a value, that will require a specific type.

We have used Strings and Integers in our code so far: Integers for the number of days and Strings for the name of the challenge.

It is also important to note that each data type can do different things and behaves differently. For example, integers can be multiplied where strings cannot.

There are four categories:

- **Basic type**: Numbers, strings, and booleans come under this category.
- **Aggregate type**: Arrays and structs come under this category.
- **Reference type**: Pointers, slices, maps, functions, and channels come under this category.
- **Interface type**

Go has three basic data types:

- **bool**: true or false values
- **Numeric**: integer and floating-point values
- **String**: a sequence of characters

I found this resource super detailed on data types: [Golang by example](https://golangbyexample.com/all-data-types-in-golang-with-examples/)

I would also suggest [Techworld with Nana](https://www.youtube.com/watch?v=yyUHQIec83I&t=2023s) at this point, which covers a lot about the data types in Go in detail.

If we need to define a type in our variable we can do this like so:

```
var TwitterHandle string
var DaysCompleted uint
```

Because Go infers the type of variables where a value is given, we can print out those types with the following:

```
fmt.Printf("challenge is %T, daystotal is %T, dayscomplete is %T\n", challenge, daystotal, dayscomplete)
```

There are many different integer and float types; the links above cover these in detail.

- **int** = whole numbers
- **uint** = positive whole numbers
## Resources

- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s)
- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I)
- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals)
- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s)
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)

Next up, we are going to start adding some user input functionality to our program so that we are asked how many days have been completed.

See you on [Day 12](day12.md).

---
title: "#90DaysOfDevOps - Getting user input with Pointers and a finished program - Day 12"
published: false
description: 90DaysOfDevOps - Getting user input with Pointers and a finished program
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048864
---

## Getting user input with Pointers and a finished program

Yesterday ([Day 11](day11.md)) we created our first self-contained Go program. The parts we wanted user input for were created as variables within our code and given values; now we want to ask the user for input to give the variable the value for the end message.

## Getting user input

Before we do that, let's take a look at our application again and walk through the variables we want as a test before getting that user input.

Yesterday we finished up with our code looking like this: [day11_example4.go](Go/day11_example4.go). We have manually defined our `challenge, daystotal, dayscomplete` variables and constants in code.

Let's now add a new variable called `TwitterName`. You can find this new code at [day12_example1.go](Go/day12_example1.go), and if we run this code, this is our output.



We are on day 12, and if this were hardcoded we would need to change that `dayscomplete` every day and compile our code each day, which doesn't sound so great.

To get user input, we want to get the value of maybe a name and the number of days completed. For us to do this we can use another function from within the `fmt` package.

Recap on the `fmt` package: it provides different functions for formatted input and output (I/O):

- Print Messages
- Collect User Input
- Write into a file
Instead of assigning the value of a variable in code, we want to ask the user for their input.
```
fmt.Scan(&TwitterName)
```
Notice that we also use `&` before the variable. This is known as a pointer, which we will cover in the next section.
In our code, [day12_example2.go](Go/day12_example2.go), you can see that we are asking the user to input two variables, `TwitterName` and `DaysCompleted`.
Let's now run our program and you will see we have input for both of the above.

![](Images/Day12_Go2.png)

OK, that's great, we got some user input and we printed a message, but what about getting our program to tell us how many days we have left in our challenge?
To do that, we have created a variable called `remainingDays` and hardcoded its value as `90`. When we get our user input of `DaysCompleted`, we then need to change this value so that it prints out the correct number of remaining days. We can do this with this simple variable change.
```
remainingDays = remainingDays - DaysCompleted
```
You can see how our finished program looks here: [day12_example3.go](Go/day12_example3.go).
If we now run this program, you can see that a simple calculation is made based on the user input and the value of `remainingDays`.
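
Putting the pieces together, here is a sketch of the finished flow described above (variable names follow the text; the prompt wording is my own):

```
package main

import "fmt"

func main() {
	remainingDays := 90
	challenge := "#90DaysOfDevOps"

	var TwitterName string
	var DaysCompleted int

	fmt.Println("Enter Your Twitter Handle:")
	fmt.Scan(&TwitterName)

	fmt.Println("How many days have you completed?:")
	fmt.Scan(&DaysCompleted)

	// Update remainingDays based on the user's input
	remainingDays = remainingDays - DaysCompleted

	fmt.Printf("Thank you %v for taking part, you have %v days remaining of the %v challenge\n",
		TwitterName, remainingDays, challenge)
}
```
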

## What is a pointer? (Special Variables)

A pointer is a (special) variable that points to the memory address of another variable.
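
As a tiny standalone sketch of the idea (not the day12 example itself):

```
package main

import "fmt"

func main() {
	challenge := "#90DaysOfDevOps"

	fmt.Println(challenge)  // prints the value: #90DaysOfDevOps
	fmt.Println(&challenge) // prints the memory address of the variable, e.g. 0xc000014070
}
```
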
A great explanation of this can be found here at [geeksforgeeks](https://www.geeksforgeeks.org/pointers-in-golang/)

Let's simplify our code now and print one of our variables with and without the `&` in front; with the `&`, we get the variable's memory address. I have added this code example here: [day12_example4.go](Go/day12_example4.go)
Below is the output of running this code.

![](Images/Day12_Go3.png)

## Resources

- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s)
- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I)
- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals)
- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s)
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)

See you on [Day 13](day13.md).

---
title: "#90DaysOfDevOps - Tweet your progress with our new App - Day 13"
published: false
description: 90DaysOfDevOps - Tweet your progress with our new App
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048865
---
## Tweet your progress with our new App

On this final day of looking into this programming language, we have only just scratched the surface, but it is at this start that I think we need to get interested and excited and want to dive into it more.
Over the last few days, we have taken a small idea for an application and added functionality to it. In this session I want to take advantage of those packages we mentioned and create the functionality for our app to not only give you an update of your progress on screen but also send a tweet with the details of the challenge and your status.
## Adding the ability to tweet your progress

The first thing we need to do is set up our developer API access with Twitter for this to work.

Head to the [Twitter Developer Platform](https://developer.twitter.com) and sign in with your Twitter handle and details. Once in, you should see something like the below, minus the app that I have already created.



From here you may also want to request elevated access; this might take some time, but it was very fast for me.

Next, we should select Projects & Apps and create our App. Limits depend on the account access you have: with Essential you only have one app and one project, and with Elevated you can have three apps.


Give your application a name

![](Images/Day13_Go2.png)

You will then be given your API tokens; you must save these somewhere secure (I have since deleted this app). We will need these later with our Go application.


Now we have our app created. (I did have to change my app name, as the one in the screenshot above was already taken; these names need to be unique.)

![](Images/Day13_Go4.png)

The keys that we gathered before are known as our consumer keys, and we will also need our access token and secret. We can gather this information using the "Keys & Tokens" tab.

![](Images/Day13_Go5.png)

Ok, we are done in the Twitter developer portal for now. Make sure you keep your keys safe because we will need them later.

## Go Twitter Bot

Remember the code we are starting with in our application: [day13_example1](Go/day13_example1.go). But first, we need to check we have the correct code to make something tweet.
We now need to think about the code to get our output or message to Twitter in the form of a tweet. We are going to be using [go-twitter](https://github.com/dghubble/go-twitter), a Go client library for the Twitter API.
To test this before putting it into our main application, I created a new directory in our `src` folder called `go-twitter-bot` and issued `go mod init github.com/michaelcade/go-Twitter-bot` in that folder, which created a `go.mod` file. We can then start writing our new `main.go` and test this out.
We now need those keys, tokens and secrets we gathered from the Twitter developer portal. We are going to set these in our environment variables. This will depend on the OS you are running:
Windows
```
set CONSUMER_KEY
set CONSUMER_SECRET
set ACCESS_TOKEN
set ACCESS_TOKEN_SECRET
```
Linux / macOS
```
export CONSUMER_KEY
export CONSUMER_SECRET
export ACCESS_TOKEN
export ACCESS_TOKEN_SECRET
```
At this stage, you can take a look at the code in [day13_example2](Go/day13_example2.go); you will see that we are using a struct to define our keys, secrets and tokens.

We then have a `func` to parse those credentials and make the connection to the Twitter API.

Then, based on success, we will send a tweet.

```
package main

// (imports, credential struct and helper functions are elided in this view -
// see day13_example2.go for the full listing)

func main() {
}
```
The above will either give you an error based on what is happening, or it will succeed and you will have a tweet sent with the message outlined in the code.

## Pairing the two together - Go-Twitter-Bot + Our App

Now we need to merge these two in our `main.go`. I am sure someone out there is screaming that there is a better way of doing this, and please do comment, as you can have more than one `.go` file in a project and that might make more sense, but this works.

You can see the merged codebase at [day13_example3](Go/day13_example3.go), but I will also show it below.

```
package main

// (imports, credential handling and the tweet logic are elided in this view -
// see day13_example3.go for the full listing)

func main() {
}
```

The outcome of this should be a tweet, but if you did not supply your environment variables then you should get an error like the one below.


Once you have fixed that, or if you choose not to authenticate with Twitter, then you can use the code we finished with yesterday. The terminal output on success will look similar to this:



The resulting tweet should look something like this:


## How to compile for multiple OSs

I next want to cover the question, "How do you compile for multiple Operating Systems?" The great thing about Go is that it can easily compile for many different Operating Systems. You can get a full list by running the following command:

```
go tool dist list
```

Using our `go build` commands so far is great, and it will use the `GOOS` and `GOARCH` environment variables to determine the host machine and what the build should be built for. But we can also create binaries for other platforms, using the code below as an example.

```
GOARCH=amd64 GOOS=darwin go build -o ${BINARY_NAME}_0.1_darwin main.go
# (repeat with a GOOS/GOARCH pair and output name for each target platform)
```

This is what I have used to create the releases you can now see on the repository.

## Resources

- [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021)
- [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s)
- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s)
- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I)
- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals)
- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s)
- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N)
- [A great repo full of all things DevOps & exercises](https://github.com/bregman-arie/devops-exercises)
- [GoByExample - Example based learning](https://gobyexample.com/)
- [go.dev/tour/list](https://go.dev/tour/list)
- [go.dev/learn](https://go.dev/learn/)

This wraps up the programming language section after 7 days! There is so much more that could be covered, and I hope you have been able to continue through the content above and understand some of the other aspects of the Go programming language.
Next, we take our focus into Linux and some of the fundamentals that we should all know there.
See you on [Day 14](day14.md).

---
title: "#90DaysOfDevOps - The Big Picture: DevOps and Linux - Day 14"
published: false
description: 90DaysOfDevOps - The Big Picture DevOps and Linux
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049033
---
## The Big Picture: DevOps and Linux
Linux and DevOps share very similar cultures and perspectives; both are focused on customization and scalability. Both of these aspects of Linux are of particular importance for DevOps.
A lot of technologies start on Linux, especially if they are related to software development or managing infrastructure.

Likewise, lots of open source projects, especially DevOps tools, were designed to run on Linux from the start.

From a DevOps perspective, or any operations role perspective, you are mostly going to come across Linux. There is a place for WinOps, but the majority of the time you are going to be administering and deploying Linux servers.
I have been using Linux daily for several years, but my go-to desktop machine has always been either macOS or Windows. However, when I moved into the Cloud Native role I am in now, I took the plunge and made my laptop fully Linux based and my daily driver. Whilst I still needed Windows for work-based applications, and a lot of my audio and video gear does not run on Linux, I forced myself to run a Linux desktop full time to get a better grasp of a lot of the things we are going to touch on over the next 7 days.
## Getting Started

I am not suggesting you do the same as me by any stretch, as there are easier and less destructive options, but I will say that taking that full-time step forces you to learn faster how to make things work on Linux.

For the majority of these 7 days, I am going to deploy a virtual machine in VirtualBox on my Windows machine. I am also going to deploy a desktop version of a Linux distribution, whereas a lot of the Linux servers you will be administering will likely come with no GUI and be entirely shell-based. However, as I said at the start, because a lot of the tools covered throughout these 90 days started on Linux, I would strongly encourage you to dive into running that Linux desktop for the learning experience as well.

For the rest of this post, we are going to concentrate on getting an Ubuntu Desktop virtual machine up and running in our VirtualBox environment. Now, we could just download [Virtual Box](https://www.virtualbox.org/) and grab the latest [Ubuntu ISO](https://ubuntu.com/download) from the sites linked and go ahead and build out our desktop environment, but that wouldn't be very DevOps of us, would it?
Another good reason to use most Linux distributions is that they are free and open source. We are choosing Ubuntu as it is probably the most widely used distribution deployed, not counting mobile devices and enterprise Red Hat Enterprise servers. I might be wrong there, but with CentOS and the history there I bet Ubuntu is high on the list, and it's super simple.

## Introducing HashiCorp Vagrant

Vagrant is a CLI utility that manages the lifecycle of your virtual machines. We can use Vagrant to spin virtual machines up and down across many different platforms, including vSphere, Hyper-V, VirtualBox and also Docker. It has other providers, but we will stick with VirtualBox here, so we are good to go.

The first thing we need to do is get Vagrant installed on our machine. When you go to the downloads page, you will see all the operating systems listed for your choice: [HashiCorp Vagrant](https://www.vagrantup.com/downloads). I am using Windows, so I grabbed the binary for my system and went ahead and installed it.

Next up, we also need to get [Virtual Box](https://www.virtualbox.org/wiki/Downloads) installed. Again, this can be installed on many different operating systems, and a good reason to choose this and Vagrant is that whether you are running Windows, macOS, or Linux, we have you covered.

Both installations are pretty straightforward, and both have great communities around them, so feel free to reach out if you have issues and I can try and assist too.

## Our first VAGRANTFILE
The VAGRANTFILE describes the type of machine we want to deploy. It also defines the configuration and provisioning for this machine.
When it comes to saving and organizing your VAGRANTFILEs, I tend to put them in their own folders in my workspace. You can see below how this looks on my system. Hopefully, following this, you will play around with Vagrant and see the ease of spinning up different systems; it is also great for that rabbit hole known as distro hopping for Linux desktops.


Let's take a look at that VAGRANTFILE and see what we are building.
```
Vagrant.configure("2") do |config|

  # (box, provider and resource settings are elided in this view -
  # a copy of the full file is linked further down)

end
```

This is a very simple VAGRANTFILE overall. We are saying that we want a specific "box", a box being either a public image or a private build of the system you are looking for. You can find a long list of publicly available boxes in the [public catalogue of Vagrant boxes](https://app.vagrantup.com/boxes/search).

On the next line, we are saying that we want to use a specific provider, in this case `VirtualBox`. We also set our machine's memory to `8GB` and the number of CPUs to `4`. My experience tells me that you may also want to add the following line if you experience display issues. This sets the video memory to what you want; I would ramp this right up to `128MB`, but it depends on your system.

```
v.customize ["modifyvm", :id, "--vram", ""]
```
I have also placed a copy of this specific Vagrantfile in the [Linux Folder](Linux/VAGRANTFILE).
## Provisioning our Linux Desktop
We are now ready to get our first machine up and running in our workstation's terminal. In my case, I am using PowerShell on my Windows machine. Navigate to your projects folder, where you will find your VAGRANTFILE. Once there, you can type the command `vagrant up` and, if everything is all right, you will see something like this.


Another thing to add here is that the network will be set to `NAT` on your virtual machine. At this stage, we don't need to know about NAT, and I plan to have a whole session talking about it in the Networking section. Know that it is the easy button when it comes to getting a machine on your home network; it is also the default networking mode in VirtualBox. You can find out more in the [Virtual Box documentation](https://www.virtualbox.org/manual/ch06.html#network_nat)
Once `vagrant up` is complete we can now use `vagrant ssh` to jump straight into the terminal of our new VM.

This is where we will do most of our exploring over the next few days, but I also want to dive into some customizations for your developer workstation that make your life much simpler when running this as your daily driver. And of course, are you really in DevOps unless you have a cool nonstandard terminal?

But just to confirm in Virtual Box you should see the login prompt when you select your VM.




Oh, and if you made it this far and you have been asking "WHAT IS THE USERNAME & PASSWORD?"


- Username = vagrant
- Password = vagrant


Tomorrow we are going to get into some of the commands and what they do. The terminal is going to be the place to make everything happen.


## Resources


- [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70)
- [Linux for hackers (don't worry you don't need to be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE)


There are going to be lots of resources I find as we go through and, much like the Go resources, I am generally going to be keeping them to FREE content so we can all partake and learn here.


As I mentioned, next up we will take a look at the commands we might be using on a daily basis whilst in our Linux environments.

See you on [Day15](day15.md)

---
title: "#90DaysOfDevOps - Linux Commands for DevOps (Actually everyone) - Day 15"
published: false
description: 90DaysOfDevOps - Linux Commands for DevOps (Actually everyone)
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048834
---

## Linux Commands for DevOps (Actually everyone)

I mentioned [yesterday](day14.md) that we are going to be spending a lot of time in the terminal with some commands to get stuff done.


I also mentioned that with our Vagrant-provisioned VM we can use `vagrant ssh` to gain access to our box. You will need to be in the same directory as we provisioned it from.


For SSH you won't need the username and password; you will only need those if you decide to log in to the Virtual Box console.


This is where we want to be, as per below:




## Commands


I cannot cover all the commands here; there are pages and pages of documentation that cover these. But if you are ever in your terminal and you just need to understand the options for a specific command, we have the `man` pages, short for manual. We can use this to go through each of the commands we touch on during this post to find out more options for each one. We can run `man man`, which will give you the help for the manual pages. To escape the man pages you should press `q` for quit.




`sudo` - If you are familiar with Windows and the right-click `run as administrator`, we can think of `sudo` as very much the same. When you run a command with `sudo` you will be running it as `root`; it will prompt you for the password before running the command.




For one-off jobs like installing applications or services you might need that `sudo` command, but what if you have several tasks to deal with and you want to live as `sudo` for a while? This is where you can use `sudo su`. The same as `sudo`, once entered you will be prompted for your `root` password. In a test VM like ours this is fine, but I would find it very hard for us to be rolling around as `root` for prolonged periods; bad things can happen. To get out of this elevated position you simply type `exit`.




I find myself using `clear` all the time. The `clear` command does exactly what it says: it is going to clear the screen of all previous commands, putting your prompt to the top and giving you a nice clean workspace. The Windows equivalent is `cls` in the command prompt.




With `cd` we can change the directory, so for us to move into our new directory we can use `cd` followed by the directory name.




I am sure we have all done it: navigated to the depths of our file system to a directory and not known where we are. `pwd` gives us the printout of the working directory; as much as it looks like password, it stands for print working directory.




We know how to create folders and directories, but how do we create files? We can use the `touch` command to create an empty file.




`ls` - I can put my house on this: you will use this command so many times. This is going to list all the files and folders in the current directory. Let's see if we can see that file we just created.



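Putting `touch` and `ls` together, here is a minimal sketch of creating and listing a file; the directory and file names are just examples:

```shell
# work in a scratch directory so we don't clutter anything
mkdir -p scratch && cd scratch

# create an empty file, then list the directory contents
touch Day15
ls

# the long listing shows permissions, owner, size and timestamp
ls -l Day15
```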

How can we find files on our Linux system? `locate` is going to allow us to search our file system. If we use `locate Day15` it will report back the location of the file. The bonus round is that if you know the file does exist but you get a blank result, run `sudo updatedb`, which will index all the files in the file system, then run your `locate` again. If you do not have `locate` available to you, you can install it using the command `sudo apt install mlocate`.




What about moving files from one location to another? `mv` is going to allow you to move your files. For example, `mv Day15 90DaysOfDevOps` will move your file to the 90DaysOfDevOps folder.




We have moved our file, but what if we want to rename it now to something else? We can do that using the `mv` command again... WOT!!!? Yep, we can simply use `mv Day15 day15` to change it to lower case, or we could use `mv day15 AnotherDay` to change it altogether; now use `ls` to check the file.




Enough is enough, let's now get rid of (delete) our file, and maybe even our directory if we have one created. Simply, `rm AnotherDay` will remove our file. We will also use `rm -R` quite a bit, which will recursively work through a folder or location. We might also use `rm -R -f` to force the removal of all of those files. Spoiler: if you run `rm -R -f /` and add sudo to it, you can say goodbye to your system....!




We have looked at moving files around, but what if I just want to copy files from one folder to another? Simply put, it's very similar to the `mv` command, but we use `cp`, so we can now say `cp Day15 Desktop`.



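The move, rename, copy and remove commands above can be chained into one small session; all the folder and file names here are illustrative:

```shell
mkdir -p 90DaysOfDevOps                # a folder to move things into
touch Day15                            # create a file
mv Day15 90DaysOfDevOps                # move it into the folder
mv 90DaysOfDevOps/Day15 AnotherDay     # moving to a new name renames it
cp AnotherDay Day15-copy               # copy rather than move
rm AnotherDay Day15-copy               # remove both files
rm -R 90DaysOfDevOps                   # recursively remove the folder
```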

We have created folders and files, but we haven't put any contents into them. We can add contents in a few ways, but an easy way is `echo`. We can also use `echo` to print out a lot of things in our terminal; I use echo a lot to print out system variables, to know whether they are set or not. We can use `echo "Hello #90DaysOfDevOps" > Day15` and this will add that text to our file. We can also append to our file using `echo "Commands are fun!" >> Day15`.




Another one of those commands you will use a lot! `cat` is short for concatenate. We can use `cat Day15` to see the contents inside the file. Great for quickly reading those configuration files.



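The redirection behaviour of `>` (overwrite) versus `>>` (append) is worth seeing side by side; a minimal sketch:

```shell
echo "Hello #90DaysOfDevOps" > Day15    # > creates or overwrites the file
echo "Commands are fun!" >> Day15       # >> appends a second line
cat Day15                               # prints both lines

echo "Starting over" > Day15            # > again replaces everything
cat Day15                               # now prints only the one line
```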

If you have a long complex configuration file and you want or need to find something in it quickly, you can search the file with `grep`.



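A sketch of searching a file with `grep`; the file name and its contents are made up for illustration:

```shell
printf 'name=vagrant\nshell=bash\neditor=vim\n' > settings.conf

grep "shell" settings.conf      # print only the matching line
grep -i "SHELL" settings.conf   # -i ignores case
grep -n "editor" settings.conf  # -n shows the line number too
```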

If you are like me and you use that `clear` command a lot, then you might miss some of the commands previously run. We can use `history` to find out all those commands we have run prior. `history -c` will remove the history.


When you run `history` and you would like to pick a specific command you can use `!3` to choose the 3rd command in the list.


You are also able to use `history | grep "Command"` to search for something specific.


On servers, to trace back when a command was executed, it can be useful to append the date and time to each command in the history file.


The following system variable controls this behaviour:


```
HISTTIMEFORMAT="%d-%m-%Y %T "
```


You can easily add this to your `.bash_profile`:


```
echo 'export HISTTIMEFORMAT="%d-%m-%Y %T "' >> ~/.bash_profile
```


It is also useful to allow the history file to grow bigger:


```
echo 'export HISTFILESIZE=10000000' >> ~/.bash_profile
```




Need to change your password? `passwd` is going to allow us to change our password. When you add your password like this it is hidden, so it will not be shown in `history`; however, if your command includes `-p PASSWORD` then this will be visible in your `history`.




We might also want to add new users to our system; we can do this with `useradd`.




Creating a group again requires `sudo`, and we can use `sudo groupadd DevOps`. Then if we want to add our new user to that group we can do this by running `sudo usermod -a -G DevOps`, where `-a` is add and `-G` is the group name.




How do we add users to the `sudo` group? This would be a very rare occasion, but to do this it would be `usermod -a -G sudo NewUser`.


### Permissions


Read, write and execute are the permissions we have on all of our files and folders on our Linux system.


A full list:


- 0 = None `---`
- 1 = Execute only `--X`
- 2 = Write only `-W-`
- 3 = Write & Execute `-WX`
- 4 = Read Only `R--`
- 5 = Read & Execute `R-X`
- 6 = Read & Write `RW-`
- 7 = Read, Write & Execute `RWX`

You will also see `777` or `775` and these represent the same numbers as the list above but each one represents **User - Group - Everyone**

Let's take a look at our file with `ls -al Day15`; you can see the 3 groups mentioned above. User and group have read & write, but everyone only has read.




We can change this using `chmod`. You might find yourself doing this a lot if you are creating binaries on your systems and you need to give the ability to execute those binaries. `chmod 750 Day15`, now run `ls -al Day15`. If you want to run this for a whole folder then you can use `-R` to do it recursively.



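A quick way to confirm what `chmod` did, beyond reading the `ls -al` output, is `stat` (the `-c` format option used here is GNU coreutils, present on most Linux systems; the file name is illustrative):

```shell
touch Day15
chmod 750 Day15          # user: rwx, group: r-x, everyone: none

# print the octal and symbolic permissions; -R on chmod would apply
# the same mode recursively to a whole folder
stat -c '%a %A' Day15
```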

What about changing the owner of the file? We can use `chown` for this operation. If we wanted to change the ownership of our `Day15` file from user `vagrant` to `NewUser` we could run `sudo chown NewUser Day15`; again, `-R` can be used.




A command that you will come across is `awk`, which comes in really useful when you have output that you only need specific data from. For example, running `who` gives us lines of information, but maybe we only need the names. We can run `who | awk '{print $1}'` to get just that first column.



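If no one else is logged in, `who` may print very little, so here is the same column-picking trick run against a fixed sample of `who`-style output (the user names are made up):

```shell
# feed two sample "who" lines into awk and keep only the first column
printf 'vagrant  tty1   2022-04-17 10:00\nroot     pts/0  2022-04-17 10:05\n' |
  awk '{print $1}'
# prints:
# vagrant
# root
```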

If you are looking to read streams of data from standard input and then generate and execute command lines (meaning it can take the output of one command and pass it as an argument to another command), then `xargs` is a useful tool for this use case. If, for example, I want a list of all the Linux user accounts on the system, I can run `cut -d: -f1 < /etc/passwd` and get the long list we see below.




If I want to compact that list I can do so by using `xargs` in a command like this: `cut -d: -f1 < /etc/passwd | sort | xargs`




I didn't mention the `cut` command either; this allows us to remove sections from each line of a file. It can be used to cut parts of a line by byte position, character or field. The `cut -d " " -f 2 list.txt` command allows us to remove that first letter we have and just display our numbers. There are so many combinations that can be used with this command; I am sure I have spent too much time trying to use it when I could have extracted the data quicker manually.



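The `cut`, `sort` and `xargs` pipeline works on any colon-separated file, so here is a self-contained sketch against a tiny passwd-style sample (the entries are made up):

```shell
# three sample /etc/passwd-style lines
printf 'root:x:0:0\nvagrant:x:1000:1000\ndaemon:x:1:1\n' > users.txt

cut -d: -f1 < users.txt                  # first field, one name per line
cut -d: -f1 < users.txt | sort | xargs   # sorted and compacted onto one line
# prints: daemon root vagrant
```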

Also note that if you type a command and you are no longer happy with it and you want to start again, just hit Control + C; this will cancel that line and start you fresh.


## Resources


- [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70)
- [Linux for hackers (don't worry you don't need to be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE)

See you on [Day16](day16.md)

This is a pretty heavy list already, but I can safely say that I have used all of these commands in my day-to-day, be it administering Linux servers or on my Linux desktop. It is very easy to navigate the UI when you are in Windows or macOS, but on Linux servers that UI is not there; everything is done through the terminal.


---
title: "#90DaysOfDevOps - Managing your Linux System, Filesystem & Storage - Day 16"
published: false
description: "90DaysOfDevOps - Managing your Linux System, Filesystem & Storage"
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048702
---

## Managing your Linux System, Filesystem & Storage

So far we have had a brief overview of Linux and DevOps, and we got our lab environment set up using Vagrant [(Day 14)](day14.md). We then touched on a small portion of the commands that will be in your daily toolkit when in the terminal getting things done [(Day 15)](day15.md).


Here we are going to look into three key areas of looking after your Linux systems: updates, installing software, and understanding what the system folders are used for. We will also take a look at storage.

## Managing Ubuntu & Software

The first thing we are going to look at is how we update our operating system. Most of you will be familiar with this process on Windows and macOS; this looks slightly different on a Linux desktop or server.


We are going to be looking at the apt package manager; this is what we are going to use on our Ubuntu VM for updates and software installation.


Generally, at least on dev workstations, I run this command before any software installation to make sure that I have the latest available updates from the central repositories.

`sudo apt-get update`




Now we have an updated Ubuntu VM with the latest OS updates installed. We now want to get some software installed here.

Let's choose `figlet` which is a program that generates text banners.

If we type `figlet` in our terminal you are going to see that we do not have it installed on our system.




You will see from the above, though, that it does give us some `apt` install options that we could try. This is because there is a program called figlet in the default repositories. Let's try `sudo apt install figlet`.




We can now use our `figlet` app as you can see below.




If we want to remove that or any of our software installations we can also do that via the `apt` package manager.

`sudo apt remove figlet`




There are third-party repositories that we can also add to our system; the ones we have access to out of the box are the Ubuntu default repositories.


If, for example, we wanted to install Vagrant on our Ubuntu VM, we would not be able to right now; you can see this below in the first command issued. We then add the key to trust the HashiCorp repository, then add the repository to our system.




Once we have the HashiCorp repository added we can go ahead and run `sudo apt install vagrant` and get Vagrant installed on our system.




There are so many options when it comes to software installation: there are different package managers, and built into Ubuntu we could also use snaps for our software installations.


Hopefully, this gives you a feel about how to manage your OS and software installations on Linux.


## File System Explained


Linux is made up of configuration files; if you want to change anything, then you change these configuration files.


On Windows you have the C: drive, and that is what we consider the root. On Linux we have `/`, and this is where we are going to find the important folders on our Linux system.




- `/bin` - Short for binary, the bin folder is where the binaries that your system needs live; executables and tools will mostly be found here.




- `/boot` - All the files your system needs to boot up. How to boot up, and what drive to boot from.




- `/dev` - You can find device information here; this is where you will find pointers to your disk drives. `sda` will be your main OS disk.




- `/etc` - Likely the most important folder on your Linux system; this is where the majority of your configuration files are.




- `/home` - This is where you will find your user folders and files. We have our vagrant user folder here. This is where you will find the `Documents` and `Desktop` folders that we worked in for the commands section.




- `/lib` - We mentioned that `/bin` is where our binaries and executables live, and `/lib` is where you will find the shared libraries for those.




- `/media` - This is where we will find removable devices.




- `/mnt` - This is a temporary mount point. We will cover more here in the next storage section.




- `/opt` - Optional software packages. You will notice here that we have some vagrant and virtual box software stored here.







- `/root` - The home folder for the root user. To gain access you will need to sudo into this folder.






- `/tmp` - Temporary files.

![](Images)

- `/usr` - If we as a standard user have installed software packages, they would generally be installed in the `/usr/bin` location.

![](Images)

- `/var` - Our applications get installed in a `bin` folder, but we also need somewhere to store all of the log files; that is `/var`.

![](Images)
## Storage

When we come to a Linux system, we might want to know the available disks and how much free space we have on those disks. The next few commands will help us identify, use and manage storage.

- `lsblk` List block devices. `sda` is our physical disk and `sda1, sda2, sda3` are our partitions on that disk.

![](Images)

- `df` gives us a little more detail about those partitions: total, used and available. You can pass other flags here; I generally use `df -h` to give a human-readable output of the data.


If you were adding a new disk to your system (the same is true in Windows, where you would format the disk in Disk Management), in the Linux terminal you can do this with `sudo mkfs -t ext4 /dev/sdb`, with `sdb` relating to our newly added disk.

We would then need to mount our newly formatted disk so that it is usable. We would do this in our `/mnt` folder previously mentioned: create a directory there with `sudo mkdir NewDisk` and then use `sudo mount /dev/sdb NewDisk` to mount the disk to that location.

It is also possible that you will need to unmount storage from your system safely rather than just pulling it from the configuration. We can do this with `sudo umount /dev/sdb`

If you did not want to unmount that disk and you were going to be using it for a database or some other persistent use case, then you want it to be there when you reboot your system. For this to happen we need to add the disk to our `/etc/fstab` configuration file; if you don't, it won't be usable when the machine reboots and you would manually have to go through the above process. The data will still be there on the disk, but it won't automount unless you add the configuration to this file.

Once you have edited the `fstab` configuration file you can check your workings with `sudo mount -a`; if there are no errors, your changes will be persistent across restarts.
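As a sketch, an `/etc/fstab` entry for the hypothetical `/dev/sdb` disk from the example above, mounted at `/mnt/NewDisk`, might look like this (the device name, mount point and filesystem type are all assumptions carried over from the example):

```
# <device>  <mount point>  <type>  <options>  <dump>  <pass>
/dev/sdb    /mnt/NewDisk   ext4    defaults   0       2
```

In practice it is more robust to reference the filesystem by the UUID reported by `blkid`, since device letters can change between boots.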

We will cover how you would edit a file using a text editor in a future session.

## Resources

- [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70)
- [Linux for hackers (don't worry you don't need to be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE)
---
title: "#90DaysOfDevOps - Text Editors - nano vs vim - Day 17"
published: false
description: 90DaysOfDevOps - Text Editors - nano vs vim
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048703
---

## Text Editors - nano vs vim

The majority of your Linux systems are going to be servers, and these are not going to have a GUI. I also mentioned in the last session that Linux is mostly made up of configuration files; to change anything on the system you are going to need to be able to edit those configuration files.

There are lots of options out there, but I think we should cover probably the two most common terminal text editors. I have used both of these editors, and for me `nano` is the easy button when it comes to quick changes, while `vim` has such a broad set of capabilities.

### nano

- Not available on every system.
- Great for getting started.

If you run `nano 90DaysOfDevOps.txt` we will create a new file with nothing in it; from here we can add our text, and we have our instructions below for what we want to do with that file.

![](Images)

We can now use `control x + enter` and then run `ls`; you can now see our new text file.

![](Images)

We can now run `cat` against that file to read it. We can then use that same `nano 90DaysOfDevOps.txt` command to add additional text or modify the file.

For me, nano is super easy when it comes to getting small changes done on configuration files.
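The same create-and-read flow can also be done without an editor, which is handy in scripts (the filename and text here are just illustrative):

```shell
# Create a file non-interactively, then read it back,
# mirroring the nano + cat flow above
printf 'Hello 90DaysOfDevOps\n' > 90DaysOfDevOps.txt
cat 90DaysOfDevOps.txt
```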
### vim

Possibly the most common text editor around? A sibling of the UNIX text editor vi from 1976, vim gives us a lot of functionality.

- Pretty much supported on every single Linux distribution.
- Incredibly powerful! You can likely find a full 7-hour course just covering vim.

We can jump into vim with the `vim` command, or if we want to edit our new txt file we could run `vim 90DaysOfDevOps.txt`, but the first thing you will notice is the lack of help menus at the bottom.

The first question might be "How do I exit vim?" That is going to be `escape`, and if we have not made any changes then `:q` will do it.

![](Images)

You start in `normal` mode; the other modes are `command, normal, visual, insert`. If we want to add text we need to switch from `normal` to `insert` by pressing `i`. If you have added some text and would like to save the changes, hit escape and then `:wq`

![](Images)

![](Images)

You can confirm this with the `cat` command to check you have saved those changes.

There is some cool fast functionality in vim that allows you to do menial tasks very quickly if you know the shortcuts, which is a lecture in itself. Let's say we have added a list of repeated words and we now need to change them: maybe it's a configuration file in which we repeat a network name, the name has changed, and we quickly want to update it. I am using the word day for this example.

![](Images)

Now we want to replace that word with 90DaysOfDevOps. We can do this by hitting `escape` and then typing a substitute command such as `:%s/day/90DaysOfDevOps/g`


The outcome when you hit enter is that the word day is then replaced with 90DaysOfDevOps.


Copy and paste was a big eye-opener for me. Copy is not "copied", it is yanked: we can copy using `yy` on our keyboard in normal mode. `p` pastes on the same line, `P` pastes on a new line.

You can also delete lines by typing the number of lines you wish to delete followed by `dd`

There is also likely a time you will need to search a file. We can use `grep` as mentioned in a previous session, but we can also search within vim: `/word` will find the first match, and to navigate to the next match you use the `n` key, and so on.

For vim this is not even scratching the surface; the biggest advice I can give is to get hands-on and use vim wherever possible.

A common interview question is "What is your favourite text editor in Linux?" and I would make sure you have at least this knowledge of both so you can answer. It is fine to say nano because it's simple; at least you show competence in understanding what a text editor is. But get hands-on with them to be more proficient.

Another pointer: to navigate around in vim we can use `h, j, k, l` as well as our arrow keys.

## Resources

- [Vim in 100 Seconds](https://www.youtube.com/watch?v=-txKSRn0qeA)
- [Vim tutorial](https://www.youtube.com/watch?v=IiwGbcd8S7I)
Days/day18.md

---
title: "#90DaysOfDevOps - SSH & Web Server - Day 18"
published: false
description: 90DaysOfDevOps - SSH & Web Server
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048733
---

## SSH & Web Server

As we have mentioned throughout, you are most likely going to be managing lots of remote Linux servers; because of this, you will need to make sure that your connectivity to these remote servers is secure. In this section, we want to cover some of the basics of SSH that everyone should know and that will help you with that secure tunnel to your remote systems.

- Setting up a connection with SSH
- Transferring files
- Create your private key

### SSH introduction

- Secure Shell
- Networking protocol
- Allows secure communications
- Can secure any network service
- Typically used for remote command-line access

In our environment, if you have been following along, we have been using SSH already, but this was all configured and automated through our vagrant configuration so we only had to run `vagrant ssh` to gain access to our remote virtual machine.

If our remote machine was not on the same system as our workstation and was in a remote location, maybe a cloud-based system or running in a data centre that we could only access over the internet, we would need a secure way of accessing the system to manage it.

SSH provides a secure tunnel between client and server so that nothing can be intercepted by bad actors.


The server has a server-side SSH service always running and listening on a specific TCP port (22).

If we use our client to connect with the correct credentials or SSH key then we gain access to that server.

### Adding a bridged network adapter to our system

For us to use this with our current VirtualBox VM, we need to add a bridged network adapter to our machine.

Power down your virtual machine, right-click on your machine within VirtualBox and select Settings. In the new window select Network.

![](Images)

Now power your machine back on and you will have an IP address on your local machine. You can confirm this with the `ip addr` command.

### Confirming SSH server is running

We know SSH is already configured on our machine as we have been using it with vagrant, but we can confirm by running
`sudo systemctl status ssh`

![](Images)

If your system does not have the SSH server then you can install it by issuing the command `sudo apt install openssh-server`

You then want to make sure that SSH is allowed if the firewall is running. We can do this with `sudo ufw allow ssh`; this is not required in our configuration as we automated it with our vagrant provisioning.

### Remote Access - SSH Password

Now that we have our SSH server listening on port 22 for any incoming connection requests and we have added the bridged networking, we could use PuTTY or another SSH client on our local machine to connect to our system over SSH.

![](Images)

Then hit Open. If this is the first time you have connected to this system via this IP address you will get this warning. We know that this is our system, so you can choose Yes.

![](Images)

We are then prompted for our username (vagrant) and password (default password: vagrant). Below you will see we are now using our SSH client (PuTTY) to connect to our machine using username and password.

![](Images)

At this stage, we are connected to our VM from our remote client and we can issue commands on our system.
### Remote Access - SSH Key

The above is an easy way to gain access to your systems; however, it still relies on username and password. If a malicious actor were to gain access to this information plus the public address or IP of your system, it could be easily compromised. This is where SSH keys are preferred.

SSH keys mean that we provide a key pair so that both the client and server know that this is a trusted device.

Creating a key is easy. On our local machine (Windows) we can issue the following command; in fact, if you have an SSH client installed on any system, this same command should work.

`ssh-keygen -t ed25519`

I am not going to get into what `ed25519` is and means here, but you can have a search if you want to learn more about [cryptography](https://en.wikipedia.org/wiki/EdDSA#Ed25519)

![](Images)

At this point, we have our created SSH key stored in `C:\Users\micha/.ssh/`
But to link this with our Linux VM we need to copy the key. We can do this using `ssh-copy-id vagrant@192.168.169.135`

I used PowerShell to create my keys on my Windows client, but there is no `ssh-copy-id` available there. There are ways you can do this on Windows and a small search online will find you an alternative, but I will just use Git Bash on my Windows machine to make the copy.

![](Images)

We can now go back to PowerShell to test that our connection works with our SSH keys and no password is required.

`ssh vagrant@192.168.169.135`

![](Images)

We could secure this further if needed by using a passphrase. We could also go one step further and allow no passwords at all, meaning only key pairs over SSH would be accepted. You can make this happen in the following configuration file.

`sudo nano /etc/ssh/sshd_config`
There is a line in here with `PasswordAuthentication yes`; it will be commented out with `#`. You should uncomment it and change the yes to no. You will then need to reload the SSH service with `sudo systemctl reload sshd`
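After that edit, the relevant line in `/etc/ssh/sshd_config` would read:

```
PasswordAuthentication no
```

Make sure your key-based login works before reloading the service with this setting, otherwise you can lock yourself out of the machine.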
## Setting up a Web Server

This is not specifically related to what we have just done with SSH above, but I wanted to include it as it is another task that you might find a little daunting when it really should not be.

We have our Linux playground VM and at this stage we want to add an Apache web server to it so that we can host a simple website serving my home network. Note that this web page will not be accessible from the internet; that can be done, but it will not be covered here.

You might also see this referred to as a LAMP stack.

- **L**inux Operating System
- **A**pache Web Server
- **M**ySQL database
- **P**HP
### Apache2

Apache2 is an open-source HTTP server. We can install apache2 with the following command.

`sudo apt-get install apache2`

Then, using the bridged network address from the SSH walkthrough, open a browser and navigate to that address.

![](Images)

### MySQL
MySQL is a database in which we will be storing our data for our simple website. To get MySQL installed we should use the following command `sudo apt-get install mysql-server`

### PHP

PHP is a server-side scripting language; we will use it to interact with the MySQL database. The final installation is to get PHP and its dependencies installed using `sudo apt-get install php libapache2-mod-php php-mysql`

The first configuration change we want to make: out of the box Apache uses index.html, and we want it to use index.php instead.

We are going to use `sudo nano /etc/apache2/mods-enabled/dir.conf` and move index.php to the first item in the list.


Restart the apache2 service `sudo systemctl restart apache2`

Now let's confirm that our system is configured correctly for PHP. Create the following file using this command, this will open a blank file in nano.

`sudo nano /var/www/html/90Days.php`

then copy the following and use control + x to exit and save your file.

```
<?php
phpinfo();
?>
```

Now navigate to your Linux VM IP again with `90Days.php` added to the end of the URL: `http://192.168.169.135/90Days.php`. You should see something similar to the below if PHP is configured correctly.


I then walked through this tutorial to get WordPress up on our LAMP stack.
`sudo rm latest.tar.gz`

At this point you are at Step 4 in the linked article; you will need to follow the steps to make sure all the correct permissions are in place for the WordPress directory.

Because this is internal only, you do not need to "generate security keys" in this step. Move to Step 5, which is changing the Apache configuration to WordPress.

Then, providing everything is configured correctly, you will be able to access WordPress via your internal network address and run through the WordPress installation.

## Resources

- [Client SSH GUI - Remmina](https://remmina.org/)
- [The Beginner's guide to SSH](https://www.youtube.com/watch?v=2QXkrLVsRmk)
Days/day19.md

---
title: "#90DaysOfDevOps - Automate tasks with bash scripts - Day 19"
published: false
description: 90DaysOfDevOps - Automate tasks with bash scripts
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048774
---

## Automate tasks with bash scripts

The shell that we are going to use today is bash, but we will cover another shell tomorrow when we dive into ZSH.

BASH - **B**ourne **A**gain **Sh**ell

We could almost dedicate a whole 7-day section to shell scripting, much like the programming languages. Bash gives us the capability of working alongside other automation tools to get things done.

I still speak to a lot of people who have set up complex shell scripts to make something happen, and they rely on those scripts for some of the most important things in the business. I am not saying we need to understand shell/bash scripting for that purpose; this is not the way. But we should learn shell/bash scripting to work alongside our automation tools and for ad hoc tasks.

An example of this that we have used in this section could be the VAGRANTFILE we used to create our VM. We could wrap this in a simple bash script that deletes and renews the VM every Monday morning so that we have a fresh copy of our Linux VM every week; we could also add all the software stack that we need on said Linux machine, all through this one bash script.

Another thing I am hearing is that hands-on scripting questions are becoming more and more common in all lines of interviews.

### Getting started

As with a lot of things we are covering in this whole 90 days, the only real way to learn is through doing. Hands-on experience is going to help soak all of this into your muscle memory.

First of all, we are going to need a text editor. On [Day 17](day17.md) we covered probably the two most common text editors and a little on how to use them.
|
||||
First of all, we are going to need a text editor. On [Day 17](day17.md) we covered probably the two most common text editors and a little on how to use them.
|
||||
|
||||
Let's get straight into it and create our first shell script.
|
||||
Let's get straight into it and create our first shell script.
|
||||
|
||||
`touch 90DaysOfDevOps.sh`
|
||||
|
||||
Followed by `nano 90DaysOfDevOps.sh` this will open our new blank shell script in nano. Again you can choose your text editor of choice here.
|
||||
Followed by `nano 90DaysOfDevOps.sh` this will open our new blank shell script in nano. Again you can choose your text editor of choice here.
|
||||
|
||||
The first line of all bash scripts will need to look something like this `#!/usr/bin/bash` this is the path to your bash binary.
|
||||
The first line of all bash scripts will need to look something like this `#!/usr/bin/bash` this is the path to your bash binary.
|
||||
|
||||
You should however check this in the terminal by running `which bash` if you are not using Ubuntu then you might also try `whereis bash` from the terminal.
|
||||
You should however check this in the terminal by running `which bash` if you are not using Ubuntu then you might also try `whereis bash` from the terminal.
|
||||
|
||||
However, you may see other paths listed in already created shell scripts which could include:
|
||||
However, you may see other paths listed in already created shell scripts which could include:
|
||||
|
||||
- `#!/bin/bash`
|
||||
- `#!/usr/bin/env bash`
|
||||
|
||||
In the next line in our script, I like to add a comment and add the purpose of the script or at least some information about me. You can do this by using the `#` This allows us to comment on particular lines in our code and provide descriptions of what the upcoming commands will be doing. I find the more notes the better for the user experience especially if you are sharing this.
|
||||
In the next line in our script, I like to add a comment and add the purpose of the script or at least some information about me. You can do this by using the `#` This allows us to comment on particular lines in our code and provide descriptions of what the upcoming commands will be doing. I find the more notes the better for the user experience especially if you are sharing this.
|
||||
|
||||
I sometimes use figlet, a program we installed earlier in the Linux section to create some asci art to kick things off in our scripts.
|
||||
I sometimes use figlet, a program we installed earlier in the Linux section to create some asci art to kick things off in our scripts.
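
Putting those opening pieces together, the top of a script might look like this minimal sketch (the author and purpose lines are placeholders, and the `echo` fallback is my addition for machines without figlet installed):

```shell
#!/usr/bin/env bash
# Script: 90DaysOfDevOps.sh
# Author: your name here (placeholder)
# Purpose: demonstrate a shebang followed by descriptive comments

# Print a banner; fall back to plain echo if figlet is not installed
figlet "90DaysOfDevOps" 2>/dev/null || echo "90DaysOfDevOps"
```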

All of the commands we went through earlier in this Linux section ([Day15](day15.md)) could be used here as simple commands to test our script.

Let's add a simple block of code to our script.

```
mkdir 90DaysOfDevOps
cd 90DaysOfDevOps
touch Day19
ls
```

You can then save this and exit your text editor. If we run our script with `./90DaysOfDevOps.sh` you should get a permission denied message. You can check the permissions of this file using the `ls -al` command, and you will see that we do not have execute rights on this file.

We can change this using `chmod +x 90DaysOfDevOps.sh`, and then you will see the `x`, meaning we can now execute our script.

Now we can run our script again using `./90DaysOfDevOps.sh`; the script has now created a new directory, changed into that directory and then created a new file.

Pretty basic stuff, but you can hopefully start to see how this could be used to call on other tools as part of making your life easier and automating things.

### Variables, Conditionals

A lot of this section is a repeat of what we covered when we were learning Golang, but I think it's worth diving in here again.

- ### Variables

Variables enable us to define a particular repeated term once and use it throughout a potentially complex script.

To add a variable, you simply add it like this on a clean line in your script:

`challenge="90DaysOfDevOps"`
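
As a quick illustrative sketch (the printout text here is my own, not necessarily what the original script used), defining the variable once lets us reuse it anywhere below:

```shell
#!/usr/bin/env bash
challenge="90DaysOfDevOps"                    # define the value once

echo "Welcome to the $challenge challenge"    # reuse it wherever needed
echo "You are taking part in $challenge"      # change the value above and both lines update
```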

This way, when and where we use `$challenge` in our code, if we change the variable it will be reflected throughout.

If we now run our `sh` script you will see the printout that was added to our script.

We can also ask for user input to set our variables, using the following:

```
echo "Enter your name"
read name
```

This would then define the input as the variable `$name`, which we could then use later on.
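
A small sketch of how the captured variable might be used afterwards (the greeting text is hypothetical):

```shell
#!/usr/bin/env bash
echo "Enter your name"
read name                       # whatever is typed is stored in $name

echo "Hello $name, welcome to the challenge"
```

You can also test this non-interactively by piping input in, e.g. `echo "Michael" | ./script.sh`.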

- ### Conditionals

Maybe we want to find out who we have on our challenge and how many days they have completed. We can define this using `if`, `if-else` and `else-if` conditionals, which is what we have defined below in our script.

```
#!/bin/bash
...
else
    echo "You have entered the wrong amount of days"
fi
```

You can also see from the above that we are running some comparisons, or checking values against each other, to move on to the next stage. We have different options here worth noting.

- `eq` - if the two values are equal will return TRUE
- `ne` - if the two values are not equal will return TRUE
- `gt` - if the first value is greater than the second value will return TRUE
- `ge` - if the first value is greater than or equal to the second value will return TRUE
- `lt` - if the first value is less than the second value will return TRUE
- `le` - if the first value is less than or equal to the second value will return TRUE
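
These operators are used inside `[ ]` test brackets with a leading dash. A hedged sketch (not the exact script from this day) of how they drive an `if`/`elif`/`else` chain:

```shell
#!/usr/bin/env bash
days=45    # hypothetical progress value

if [ "$days" -eq 90 ]; then
    echo "You have completed the challenge"
elif [ "$days" -lt 90 ]; then
    echo "Keep going, you have completed $days days"
else
    echo "You have entered the wrong amount of days"
fi
```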

We might also use bash scripting to determine information about files and folders; these are known as file conditions.

- `-d file` True if the file is a directory
- `-e file` True if the file exists
- `-f file` True if the provided string is a file
- `-g file` True if the group id is set on a file
- `-r file` True if the file is readable
- `-s file` True if the file has a non-zero size

```
FILE="90DaysOfDevOps.txt"
if [ -f "$FILE" ]
then
  echo "$FILE is a file"
else
  echo "$FILE is not a file"
fi
```

Providing we still have that file in our directory, we should get the first echo command back. But if we remove that file then we should get the second echo command.

You can hopefully see how this can be used to save you time when searching through a system for specific items.
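
For example, a hedged sketch that walks a list of paths and reports what each one is (the paths here are arbitrary examples):

```shell
#!/usr/bin/env bash
# Classify each path as a directory, a file, or missing
for item in /etc/passwd /tmp /no/such/path; do
    if [ -d "$item" ]; then
        echo "$item is a directory"
    elif [ -f "$item" ]; then
        echo "$item is a file"
    else
        echo "$item does not exist"
    fi
done
```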

I found this amazing repository on GitHub that has what seems to be an endless number of scripts: [DevOps Bash Tools](https://github.com/HariSekhon/DevOps-Bash-tools/blob/master/README.md)

### Example

**Scenario**: We have our company called "90DaysOfDevOps" and we have been running for a while; now it is time to expand the team from 1 person to lots more over the coming weeks. I am the only one so far that knows the onboarding process, so we want to reduce that bottleneck by automating some of these tasks.

**Requirements**:

- A user can be passed in as a command line argument.
- A user is created with the name of the command line argument.
- A password can be passed as a command line argument.
- The password is set for the user.
- A message of successful account creation is displayed.

Let's start by creating our shell script with `touch create_user.sh`

Before we move on, let's also make this executable using `chmod +x create_user.sh`

Then we can use `nano create_user.sh` to start editing our script for the scenario we have been set.

We can take a look at the first requirement, "A user can be passed in as a command line argument". For that we can use the following:

```
#! /usr/bin/bash
...
echo "$1"
```

Go ahead and run this using `./create_user.sh Michael`; replace Michael with your name when you run the script.
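
Bash exposes more than just `$1`; a short sketch of the related special parameters (run it with any arguments you like):

```shell
#!/usr/bin/env bash
echo "Script name: $0"        # the path used to invoke the script
echo "First argument: $1"     # Michael in the example above
echo "Argument count: $#"     # how many arguments were supplied
```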

Our next requirement, "A user is created with the name of the command line argument", can be covered by adding `useradd` to the script:

```
sudo useradd -m "$1"
```

Warning: if you do not provide a user account name then the script will error, as we have not filled the variable `$1`.

We can then check that this account has been created with the `awk -F: '{ print $1}' /etc/passwd` command.

Our next requirement is "A password can be passed as a command line argument." First of all, we are never going to do this in production; it is more for us to work through a list of requirements in the lab.

```
#! /usr/bin/bash
...
sudo chpasswd <<< "$1":"$2"
```

If we then run this script with the two parameters: `./create_user.sh 90DaysOfDevOps password`

You can see from the below image that we executed our script, it created our user and password, and then we manually switched to that user and confirmed with the `whoami` command.

The final requirement is "A message of successful account creation is displayed." We already have this in the top line of our code, and we can see in the above screenshot that `90DaysOfDevOps user account being created` is shown. This was left over from our testing with the `$1` parameter.
Now, this script can be used to quickly onboard and set up new users on our Linux systems. But maybe, instead of a few of the longer-serving people having to work through this and then get other people their new usernames or passwords, we could add some of the user input we covered earlier to capture our variables.

```
#! /usr/bin/bash

echo "What is your intended username?"
...
sudo useradd -m $username
sudo chpasswd <<< $username:$password
```

With the steps being more interactive, the script now prompts for each value.

If you do want to delete the user you have created for lab purposes then you can ...

[Example Script](Linux/create-user.sh)

Once again, I am not saying this is something that you will create in your day to day, but it was something I thought would highlight the flexibility of what you could use shell scripting for.

Think about any repeatable tasks that you do every day, week or month and how you could better automate them; the first option is likely going to be a bash script, before moving into more complex territory.

I have created a very simple bash file that helps me spin up a Kubernetes cluster using minikube on my local machine, along with data services and Kasten K10, to help demonstrate the requirements and needs around data management: [Project Pace](https://github.com/MichaelCade/project_pace/blob/main/singlecluster_demo.sh). But I did not feel it appropriate to raise here, as we have not covered Kubernetes yet.

## Resources

- [Bash in 100 seconds](https://www.youtube.com/watch?v=I4EWvMFj37g)
- [Bash script with practical examples - Full Course](https://www.youtube.com/watch?v=TPRSJbtfK4M)

Days/day20.md

---
title: "#90DaysOfDevOps - Dev workstation setup - All the pretty things - Day 20"
published: false
description: 90DaysOfDevOps - Dev workstation setup - All the pretty things
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048734
---
## Dev workstation setup - All the pretty things

Not to be confused with setting Linux servers up this way, but I wanted to also show off the choice and flexibility that we have within the Linux desktop.

I have been using a Linux desktop for almost a year now and I have it configured just the way I want from a look and feel perspective. Using our Ubuntu VM on VirtualBox, we can run through some of the customisations I have made to my daily driver.

I have put together a YouTube video walking through the rest, as some people might be able to better follow along:

[](https://youtu.be/jeEslAtHfKc)

Out of the box, our system will look something like the below:

We can also see our default bash shell below.

A lot of this comes down to dotfiles, something we will cover in this final Linux session of the series.

### dotfiles

First up I want to dig into dotfiles. I have said on a previous day that Linux is made up of configuration files; these dotfiles are configuration files for your Linux system and applications.

I will also add that dotfiles are not just used to customise and make your desktop look pretty; there are also dotfile changes and configurations that will help you with productivity.

As I mentioned, many software programs store their configurations in these dotfiles, and the dotfiles assist in managing functionality.

Each dotfile starts with a `.`; you can probably guess where the naming came from.

So far we have been using bash as our shell, which means you will have a `.bashrc` and `.bash_profile` in your home folder. You can see below a few dotfiles we have on our system.

We are going to be changing our shell, so we will later see a new `.zshrc` configuration dotfile.

So now you know that if we refer to dotfiles, they are configuration files. We can use them to add aliases to our command prompt as well as paths to different locations. Some people publish their dotfiles so they are publicly available. You will find mine on my GitHub: [MichaelCade/dotfiles](https://github.com/MichaelCade/dotfiles). There you will find my custom `.zshrc` file; my terminal of choice is terminator, which also has some configuration files in that folder, and then also some background options.
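
As a small example of the productivity side, here is what an alias added to a shell dotfile such as `~/.bashrc` or `~/.zshrc` might look like (the alias names are just suggestions):

```shell
# Example lines you might add to ~/.bashrc or ~/.zshrc
alias gs='git status'         # two keystrokes instead of ten
alias ll='ls -al'             # long listing including dotfiles
```

After editing the file, open a new terminal (or `source` the file) and the aliases are available.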

### ZSH

As I mentioned, throughout our interactions so far we have been using bash, the default shell with Ubuntu. ZSH is very similar, but it does have some benefits over bash.

Zsh has features like interactive tab completion, automated file searching, regex integration, advanced shorthand for defining command scope, and a rich theme engine.

We can use our `apt` package manager to get zsh installed on our system. Let's go ahead and run `sudo apt install zsh` from our bash terminal. I am going to do this from within the VM console rather than being connected over SSH.

When the installation command is complete you can run `zsh` inside your terminal; this will then start a shell configuration script.

I selected `1` to the above question, and now we have some more options.

You can see from this menu that we can make some out-of-the-box edits to configure ZSH to our needs.

If you exit the wizard with a `0` and then use `ls -al | grep .zshrc`, you should see that we have a new configuration file.

Now we want to make zsh our default shell every time we open our terminal. We can do this by running the following command to change our shell: `chsh -s $(which zsh)`. We then need to log out and back in again for the change to take effect.
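
To verify the change after logging back in, you can also query the login shell recorded for your user (a read-only check, safe to run at any point):

```shell
# Print the login shell stored in /etc/passwd for the current user
getent passwd "$(id -un)" | cut -d: -f7
```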

When you log back in and open a terminal it should look something like this. We can also confirm our shell has now been changed by running `which $SHELL`.

I generally perform this step on each Ubuntu desktop I spin up and find, in general without going any further, that the zsh shell is a little faster than bash.

### OhMyZSH

Next up we want to make things look a little better and also add some functionality to help us move around within the terminal.

OhMyZSH is a free and open-source framework for managing your zsh configuration. There are lots of plugins, themes and other things that just make interacting with the zsh shell a lot nicer.

You can find out more about [ohmyzsh](https://ohmyz.sh/)

When you have run the install command from that page you should see some output like the below.

Now we can move on to start putting a theme in for our experience. There are well over 100 bundled with Oh My ZSH, but my go-to for all of my applications and everything is the Dracula theme.

I also want to add that these two plugins are a must when using Oh My ZSH.

`git clone https://github.com/zsh-users/zsh-autosuggestions.git $ZSH_CUSTOM/plugins/zsh-autosuggestions`

`git clone https://github.com/zsh-users/zsh-syntax-highlighting.git $ZSH_CUSTOM/plugins/zsh-syntax-highlighting`

`nano ~/.zshrc`

Edit the plugins line to now include `plugins=(git zsh-autosuggestions zsh-syntax-highlighting)`
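
After saving, the relevant part of `~/.zshrc` would look something like this excerpt (the theme name assumes you have separately installed the Dracula zsh theme; adjust it to whichever theme you use):

```
# ~/.zshrc (excerpt)
ZSH_THEME="dracula"

plugins=(git zsh-autosuggestions zsh-syntax-highlighting)

source $ZSH/oh-my-zsh.sh
```

Restart the terminal, or run `source ~/.zshrc`, for the changes to take effect.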
|
||||
|
||||
## Gnome Extensions
|
||||
|
||||
I also use Gnome extensions, and in particular the list below
|
||||
I also use Gnome extensions, and in particular the list below
|
||||
|
||||
[Gnome extensions](https://extensions.gnome.org)
|
||||
|
||||
- Caffeine
|
||||
- Caffeine
|
||||
- CPU Power Manager
|
||||
- Dash to Dock
|
||||
- Desktop Icons
|
||||
- User Themes
|
||||
- Dash to Dock
|
||||
- Desktop Icons
|
||||
- User Themes
|
||||
|
||||
## Software Installation
|
||||
|
||||
A short list of the programs I install on the machine using `apt`
|
||||
A short list of the programs I install on the machine using `apt`
|
||||
|
||||
- VSCode
|
||||
- azure-cli
|
||||
- VSCode
|
||||
- azure-cli
|
||||
- containerd.io
|
||||
- docker
|
||||
- docker-ce
|
||||
- google-cloud-sdk
|
||||
- insomnia
|
||||
- docker-ce
|
||||
- google-cloud-sdk
|
||||
- insomnia
|
||||
- packer
|
||||
- terminator
|
||||
- terraform
|
||||
- terraform
|
||||
- vagrant
|
||||
|
||||
### Dracula theme
|
||||
|
||||
This site is the only theme I am using at the moment. Looks clear, and clean and everything looks great. [Dracula Theme](https://draculatheme.com/) It also has you covered when you have lots of other programs you use on your machine.
|
||||
This site is the only theme I am using at the moment. Looks clear, and clean and everything looks great. [Dracula Theme](https://draculatheme.com/) It also has you covered when you have lots of other programs you use on your machine.
|
||||
|
||||
From the link above we can search for zsh on the site and you will find at least two options.
|
||||
From the link above we can search for zsh on the site and you will find at least two options.
|
||||
|
||||
Follow the instructions listed to install either manually or using git. Then you will need to finally edit your `.zshrc` configuration file as per below.
|
||||
Follow the instructions listed to install either manually or using git. Then you will need to finally edit your `.zshrc` configuration file as per below.
|
||||
|
||||

|
||||
|
||||
You are next going to want the [Gnome Terminal Dracula theme](https://draculatheme.com/gnome-terminal) with all instructions available here as well.
|
||||
You are next going to want the [Gnome Terminal Dracula theme](https://draculatheme.com/gnome-terminal) with all instructions available here as well.
|
||||
|
||||
It would take a long time for me to document every step so I created a video walkthrough of the process. (**Click on the image below**)
|
||||
|
||||
[](https://youtu.be/jeEslAtHfKc)
|
||||
|
||||
If you made it this far, then we have now finished our Linux section of the #90DaysOfDevOps. Once again I am open to feedback and additions to resources here.
|
||||
If you made it this far, then we have now finished our Linux section of the #90DaysOfDevOps. Once again I am open to feedback and additions to resources here.
|
||||
|
||||
I also thought on this it was easier to show you a lot of the steps through video vs writing them down here, what do you think about this? I do have a goal to work back through these days and where possible create video walkthroughs to add in and better maybe explain and show some of the things we have covered. What do you think?
|
||||
I also thought on this it was easier to show you a lot of the steps through video vs writing them down here, what do you think about this? I do have a goal to work back through these days and where possible create video walkthroughs to add in and better maybe explain and show some of the things we have covered. What do you think?
|
||||
|
||||
## Resources

- [Bash in 100 seconds](https://www.youtube.com/watch?v=I4EWvMFj37g)
- [Bash script with practical examples - Full Course](https://www.youtube.com/watch?v=TPRSJbtfK4M)
- [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70)
- [Linux for hackers (don't worry you don't need to be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE)

Tomorrow we start our 7 days of diving into Networking, looking to give ourselves foundational knowledge and understanding of Networking around DevOps.

See you on [Day21](day21.md)

---
title: "#90DaysOfDevOps - The Big Picture: DevOps and Networking - Day 21"
published: false
description: 90DaysOfDevOps - The Big Picture DevOps and Networking
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048761
---

## The Big Picture: DevOps and Networking

Welcome to Day 21! We are going to be getting into Networking over the next 7 days. Networking and DevOps are the overarching themes, but we will need to get into some of the networking fundamentals as well.

Ultimately, as we have said previously, DevOps is about a culture and process change within your organisation. As we have discussed, this can apply to Virtual Machines, Containers, or Kubernetes, but it can also apply to the network. If we are using DevOps principles for our infrastructure, that has to include the network. More to the point, from a DevOps point of view you also need to know about the network: the different topologies, networking tools and stacks that we have available.

I would argue that we should have our networking devices configured using infrastructure as code and have everything automated like we would our virtual machines, but to do that we have to have a good understanding of what we are automating.

### What is NetDevOps | Network DevOps?

You may also hear the terms Network DevOps or NetDevOps. Maybe you are already a Network engineer with a great grasp of the network components within the infrastructure; you understand the elements used around networking such as DHCP, DNS, NAT etc. You will also have a good understanding of the hardware or software-defined networking options, switches, routers etc.

But if you are not a network engineer then we probably need to get foundational knowledge across the board in some of those areas so that we can understand the end goal of Network DevOps.

In regards to those terms, we can think of NetDevOps or Network DevOps as applying DevOps principles and practices to the network: applying version control and automation tools to network creation, testing, monitoring and deployments.

If we think of Network DevOps as requiring automation, we mentioned before that DevOps breaks down the silos between teams. If the networking teams do not change to a similar model and process, then they become the bottleneck or even the point of failure overall.

Using the automation principles around provisioning, configuration, testing, version control and deployment is a great start. Automation enables speed of deployment, stability of the networking infrastructure and consistent improvement, as well as allowing the process to be shared across multiple environments once tested. For example, a Network Policy that has been fully tested in one environment can be used quickly in another location, because it lives in code rather than in a manually authored process as it might have been before.

A really good viewpoint and outline of this thinking can be found here: [Network DevOps](https://www.thousandeyes.com/learning/techtorials/network-devops)

## Networking The Basics

Let's forget the DevOps side of things to begin with; we now need to look very briefly into some of the networking fundamentals.

### Network Devices

**Hosts** are any devices which send or receive traffic.

![](Images/Day21_Networking1.png)

An **IP Address** is the identity of each host.

![](Images/Day21_Networking2.png)

A **Network** is what transports traffic between hosts. If we did not have networks there would be a lot of manual movement of data!

A network is a logical group of hosts which require similar connectivity.

![](Images/Day21_Networking3.png)

**Switches** facilitate communication **_within_** a network. A switch forwards data packets between hosts, sending packets directly to the destination host.

- Network: a grouping of hosts which require similar connectivity.
- Hosts on a network share the same IP address space.

![](Images/Day21_Networking4.png)

A **Router** facilitates communication between networks. Where a switch looks after communication within a network, a router allows us to join these networks together, or at least give them access to each other if permitted.

A router can provide a traffic control point (security, filtering, redirecting). More and more switches also provide some of these functions now.

Routers learn which networks they are attached to. These are known as routes; a routing table is all the networks a router knows about.

A router has an IP address in each network it is attached to. This IP is also going to be each host's way out of its local network, also known as a gateway.

Routers also create the hierarchy in networks I mentioned earlier.

![](Images/Day21_Networking5.png)

## Switches vs Routers

**Routing** is the process of moving data between networks.

- A router is a device whose primary purpose is routing.

**Switching** is the process of moving data within networks.

- A switch is a device whose primary purpose is switching.

This is very much a foundational overview of devices. We know there are many different network devices, such as:

- Access Points
- Firewalls
- Load Balancers
- Layer 3 Switches
- IDS / IPS
- Proxies
- Virtual Switches
- Virtual Routers

All of these devices are going to perform routing and/or switching.

Over the next few days, we are going to get to know a little more about this list:

- OSI Model
- Network Protocols
- DNS (Domain Name System)
- NAT
- DHCP
- Subnets

## Resources

[Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)

---
title: "#90DaysOfDevOps - The OSI Model - The 7 Layers - Day 22"
published: false
description: 90DaysOfDevOps - The OSI Model - The 7 Layers
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049037
---

## The OSI Model - The 7 Layers

The overall purpose of networking as an industry is to allow two hosts to share data. Before networking, if I wanted to get data from one host to another, I'd have to plug something into the first host, walk it over to the other host, and plug it in there.

Networking allows us to automate this by allowing the hosts to share data automatically.

This is no different from any language. English has a set of rules that two English speakers must follow. Spanish has its own set of rules, French has its own set of rules, and networking also has its own set of rules.

The rules for networking are divided into seven different layers, and those layers are known as the OSI model.

### Introduction to the OSI Model

The OSI Model (Open Systems Interconnection Model) is a framework used to describe the functions of a networking system. The OSI model characterises computing functions into a universal set of rules and requirements to support interoperability between different products and software. In the OSI reference model, the communications between computing systems are split into seven different abstraction layers: **Physical, Data Link, Network, Transport, Session, Presentation, and Application**.

![](Images/Day22_Networking1.png)

### Physical

Layer 1 in the OSI model is known as the Physical layer: the premise of being able to get data from one host to another through a medium, be it a physical cable or Wi-Fi, which we can also consider at this layer. We might also see some more legacy hardware here, such as hubs and repeaters, used to transport the data from one host to another.

![](Images/Day22_Networking2.png)

### Data Link

Layer 2, the Data Link layer, enables node-to-node transfer where data is packaged into frames. There is also a level of error correction for errors that might have occurred at the physical layer. This is also where we first see MAC addresses.

This is where we see the first mention of switches, which we covered on our first day of networking on [Day 21](day21.md).

![](Images/Day22_Networking3.png)

### Network

You have likely heard the terms layer 3 switch or layer 2 switch. In the OSI model, Layer 3, the Network layer, has the goal of end-to-end delivery; this is where we see the IP addresses also mentioned in the first-day overview.

Routers and hosts exist at layer 3. Remember, the router provides the ability to route between multiple networks. Anything with an IP address could be considered Layer 3.

![](Images/Day22_Networking4.png)

So why do we need addressing schemes on both Layer 2 and Layer 3 (MAC addresses vs IP addresses)?

If we think about getting data from one host to another, each host has an IP address, but there are several switches and routers in between. Each of those devices has a layer 2 MAC address.

The layer 2 MAC address will go from host to switch/router only; it is focused on hops, whereas the layer 3 IP address will stay with that packet of data until it reaches its end host. (End to End)

IP Addresses - Layer 3 = End to End Delivery

MAC Addresses - Layer 2 = Hop to Hop Delivery

There is a network protocol, which we will get into later but not today, called ARP (Address Resolution Protocol) which links our Layer 3 and Layer 2 addresses.

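To make the hop-to-hop vs end-to-end distinction concrete, here is a small illustrative Python sketch (not a real network stack, and all the addresses are made up): the IP addresses in the packet never change, while the frame's MAC addresses are rewritten at every hop.

```python
# Toy illustration: IP addresses survive end to end, MAC addresses change per hop.
hops = [
    ("aa:aa", "bb:bb"),  # source host -> first router (hypothetical MACs)
    ("bb:bb", "cc:cc"),  # first router -> second router
    ("cc:cc", "dd:dd"),  # second router -> destination host
]

packet = {"src_ip": "192.168.1.10", "dst_ip": "10.0.0.5"}  # set once, at layer 3

frames = []
for src_mac, dst_mac in hops:
    # Each hop re-frames the same packet with new layer 2 addresses.
    frames.append({"src_mac": src_mac, "dst_mac": dst_mac, "payload": packet})

# The layer 3 addressing is identical in every frame's payload.
assert all(f["payload"]["dst_ip"] == "10.0.0.5" for f in frames)
```

Three frames are built, one per hop, but every one of them carries the same unchanged IP packet.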
### Transport

Service-to-service delivery: Layer 4 is there to distinguish data streams. In the same way that Layer 3 and Layer 2 each had their own addressing schemes, in Layer 4 we have ports.

![](Images/Day22_Networking5.png)

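A quick way to see ports as Layer 4 addressing is to ask the operating system for one. This minimal local sketch uses Python's standard `socket` module; binding to port 0 lets the OS pick any free port.

```python
import socket

# Create a TCP socket and bind to loopback, letting the OS choose a free port.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))

ip, port = s.getsockname()  # the (IP, port) pair identifies this service endpoint
print(ip, port)  # layer 3 identity plus layer 4 identity

s.close()
```

Two different services on the same host share one IP address but listen on different ports; that is how Layer 4 distinguishes their data streams.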
### Session, Presentation, Application

The distinction between Layers 5, 6 and 7 is, or has become, somewhat vague.

It is worth looking at the [TCP IP Model](https://www.geeksforgeeks.org/tcp-ip-model/) to get a more recent understanding.

Let's now try and explain what happens when hosts communicate with each other using this networking stack. A host has an application that generates data meant to be sent to another host.

The source host goes through what's known as the encapsulation process. That data will first be sent to layer 4.

Layer 4 adds a header to that data which facilitates the goal of layer 4, which is service-to-service delivery. This will be a port, using either TCP or UDP, and the header will include the source port and destination port.

This may also be known as a segment (data plus port header).

This segment is passed down the OSI stack to layer 3, the network layer, and the network layer adds another header to this data. This header facilitates the goal of layer 3, which is end-to-end delivery, meaning this header contains a source IP address and a destination IP address. The header plus data may also be referred to as a packet.

Layer 3 then takes that packet and hands it off to layer 2. Layer 2 once again adds another header to the data to accomplish layer 2's goal of hop-to-hop delivery, meaning this header includes a source and destination MAC address. This is known as a frame: the layer 2 header plus data.

That frame then gets converted into ones and zeros and sent over the Layer 1 physical cable or Wi-Fi.

![](Images/Day22_Networking6.png)

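The walkthrough above can be sketched in a few lines of Python. This is purely illustrative (real headers carry many more fields, and the addresses here are invented); it just shows each layer wrapping the previous one's output: data → segment → packet → frame.

```python
import struct

data = b"hello"  # application data

# Layer 4: prepend source and destination ports (segment).
segment = struct.pack("!HH", 49152, 443) + data

# Layer 3: prepend source and destination IPv4 addresses (packet).
packet = bytes([192, 168, 1, 10]) + bytes([10, 0, 0, 5]) + segment

# Layer 2: prepend destination and source MAC addresses (frame).
frame = bytes.fromhex("aabbccddeeff") + bytes.fromhex("112233445566") + packet

# Each layer only added a header; the original data is still at the end.
assert frame.endswith(data)
print(len(data), len(segment), len(packet), len(frame))  # 5 9 17 29
```

De-encapsulation on the receiving host is simply the reverse: each layer strips its own header and passes the remainder up the stack.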
I did mention above the naming for each layer's header plus data, but I decided to draw this out as well.

![](Images/Day22_Networking7.png)

The application sending the data is sending it somewhere, so receiving is somewhat the reverse: the data goes back up the stack on the receiving host.

![](Images/Day22_Networking8.png)

## Resources

- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
- [Practical Networking](http://www.practicalnetworking.net/)

---
title: "#90DaysOfDevOps - Network Protocols - Day 23"
published: false
description: 90DaysOfDevOps - Network Protocols
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048704
---

## Network Protocols

A network protocol is a set of rules and messages that form a standard: an Internet Standard.

- ARP - Address Resolution Protocol

If you want to get really into the weeds on ARP, you can read the Internet Standard here: [RFC 826](https://datatracker.ietf.org/doc/html/rfc826)

ARP connects IP addresses to fixed physical machine addresses, also known as MAC addresses, across a layer 2 network.

![](Images/Day23_Networking1.png)

- FTP - File Transfer Protocol

Allows for the transfer of files from source to destination. Generally this process is authenticated, but there is the ability, if configured, to use anonymous access. You will now more frequently see FTPS, which provides SSL/TLS connectivity to FTP servers from the client for better security. This protocol is found in the Application layer of the OSI model.

![](Images/Day23_Networking2.png)

- SMTP - Simple Mail Transfer Protocol

Used for email transmission; mail servers use SMTP to send and receive mail messages. You will find that even with Microsoft 365 the SMTP protocol is still used for the same purpose.

![](Images/Day23_Networking3.png)

- HTTP - Hyper Text Transfer Protocol

HTTP is the foundation of the internet and browsing content, giving us the ability to easily access our favourite websites. HTTP is still heavily used, but HTTPS is more widely used, or should be, on most of your favourite sites.

![](Images/Day23_Networking4.png)

- SSL - Secure Sockets Layer | TLS - Transport Layer Security

TLS has taken over from SSL. TLS is a **Cryptographic Protocol** that provides secure communications over a network. It can be found in mail, instant messaging and other applications, but most commonly it is used to secure HTTPS.

![](Images/Day23_Networking5.png)

- HTTPS - HTTP secured with SSL/TLS

An extension of HTTP used for secure communications over a network, HTTPS is encrypted with TLS as mentioned above. The focus here is to bring authentication, privacy and integrity whilst data is exchanged between hosts.

![](Images/Day23_Networking6.png)

- DNS - Domain Name System

DNS is used to map human-friendly domain names to IP addresses. For example, we all know [google.com](https://google.com), but if you were to open a browser and put in [8.8.8.8](https://8.8.8.8) you would get Google much as we know it. However, good luck trying to remember the IP addresses for all of the websites you use, some of which we even use Google to find.

This is where DNS comes in: it ensures that hosts, services and other resources are reachable.

On any host that requires internet connectivity, DNS is needed to resolve those domain names. DNS is an area you could spend days and years learning. I would also say from experience that DNS is commonly the cause of networking errors. Not sure if a network engineer would agree there, though.

![](Images/Day23_Networking7.png)

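The mapping DNS performs can be pictured as a lookup table. A real resolver is a distributed, hierarchical system, but the core idea reduces to something like this hypothetical sketch (the records are hard-coded stand-ins, not real DNS data):

```python
# Hypothetical, hard-coded records standing in for the real distributed DNS.
records = {
    "google.com": "8.8.8.8",        # illustrative pairing from the text above
    "example.com": "93.184.216.34",
}

def resolve(name: str) -> str:
    """Return the IP address for a domain name, as a resolver would."""
    try:
        return records[name]
    except KeyError:
        # DNS's equivalent answer for an unknown name is NXDOMAIN.
        raise LookupError(f"NXDOMAIN: {name}")

print(resolve("google.com"))
```

Your machine asks its configured DNS server this exact question for every domain name it needs to reach.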
||||
- DHCP - Dynamic Host Configuration Protocol
|
||||
- DHCP - Dynamic Host Configuration Protocol
|
||||
|
||||
We have discussed a lot about protocols that are required to make our hosts work, be it accessing the internet or transferring files between each other.
|
||||
We have discussed a lot about protocols that are required to make our hosts work, be it accessing the internet or transferring files between each other.
|
||||
|
||||
There are 4 things that we need on every host for it to be able to achieve both of those tasks:

- IP Address
- Subnet Mask
- Default Gateway
- DNS

We have covered the IP address being a unique address for your host on the network it resides on; we can think of this as our house number.

The subnet mask we will cover shortly, but you can think of this as the postcode or zip code.

A default gateway is generally the IP of the router on our network providing us with that Layer 3 connectivity. You could think of this as the single road that allows us out of our street.

Then we have DNS, as we just covered, to help us convert complicated public IP addresses into more suitable and memorable domain names. Maybe we can think of this as the giant sorting office that makes sure we get the right post.

As I said, each host requires these 4 things; if you have 1,000 or 10,000 hosts then it is going to take you a very long time to configure each one individually. This is where DHCP comes in: it allows you to define a scope for your network, and the protocol then distributes these settings to all available hosts on your network.

Another example: you head into a coffee shop, grab a coffee and sit down with your laptop or your phone; let's call that your host. You connect your host to the coffee shop WiFi and you gain access to the internet, messages and mail start pinging through, and you can navigate web pages and social media. When you connected to the coffee shop WiFi, your machine would have picked up a DHCP address, either from a dedicated DHCP server or, most likely, from the router also handling DHCP.

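
To see why a scope beats configuring hosts by hand, here is a toy Python sketch of what a DHCP scope hands out. The network range, gateway and DNS values are invented for the example; a real DHCP server also tracks leases, renewals and broadcast traffic:

```python
import ipaddress

class DhcpScope:
    """Toy DHCP scope: hands out the four settings every host needs."""

    def __init__(self, network, gateway, dns):
        self.network = ipaddress.ip_network(network)
        self.gateway = gateway
        self.dns = dns
        # hosts() already excludes the network and broadcast addresses;
        # we also skip the gateway's own address.
        self.pool = [ip for ip in self.network.hosts() if str(ip) != gateway]

    def offer(self):
        """Return the next free address plus mask, gateway and DNS."""
        ip = self.pool.pop(0)
        return {
            "ip": str(ip),
            "subnet_mask": str(self.network.netmask),
            "gateway": self.gateway,
            "dns": self.dns,
        }

scope = DhcpScope("192.168.1.0/24", gateway="192.168.1.1", dns="8.8.8.8")
lease = scope.offer()
print(lease)  # first free host address with mask, gateway and DNS
```

Define the scope once and every host that asks gets a consistent set of the four settings, which is the whole point of the protocol.
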


### Subnetting

A subnet is a logical subdivision of an IP network.

Subnets break large networks into smaller, more manageable networks that run more efficiently.

Each subnet is a logical subdivision of the bigger network. Connected devices within a subnet share common IP address identifiers, enabling them to communicate with each other.

Routers manage communication between subnets.

The size of a subnet depends on the connectivity requirements and the network technology used.

An organisation is responsible for determining the number and size of the subnets within the limits of address space available, and the details remain local to that organisation. Subnets can also be segmented into even smaller subnets for things like point-to-point links, or subnetworks supporting a few devices.

Among other advantages, segmenting large networks into subnets enables IP address reallocation and relieves network congestion, streamlining network communication and improving efficiency.

Subnets can also improve network security. If a section of a network is compromised, it can be quarantined, making it difficult for bad actors to move around the larger network.
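
The splitting described above can be tried out with Python's standard `ipaddress` module; this small sketch (the address range is an arbitrary example) divides a /24 into four /26 subnets:

```python
import ipaddress

# A /24 network: 256 addresses in one broadcast domain.
network = ipaddress.ip_network("192.168.0.0/24")

# Split it into four smaller /26 subnets of 64 addresses each.
subnets = list(network.subnets(new_prefix=26))
for subnet in subnets:
    # Two of the 64 addresses are reserved (network and broadcast).
    print(subnet, "mask:", subnet.netmask, "usable hosts:", subnet.num_addresses - 2)

# Two hosts can talk directly only if they fall inside the same subnet.
host_a = ipaddress.ip_address("192.168.0.10")
host_b = ipaddress.ip_address("192.168.0.200")
print(host_a in subnets[0], host_b in subnets[0])  # True False
```

Host A and host B sit in different /26 subnets even though they were in the same /24, so a router now sits between them, which is exactly the quarantine property mentioned above.
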

![](Images/Day23_Networking6.png)

## Resources

- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)
- [Practical Networking](http://www.practicalnetworking.net/)

Days/day24.md

---
title: "#90DaysOfDevOps - Network Automation - Day 24"
published: false
description: 90DaysOfDevOps - Network Automation
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048805
---

## Network Automation

### Basics of network automation

Primary drivers for Network Automation:

- Achieve Agility
- Reduce Cost
- Eliminate Errors
- Ensure Compliance
- Centralised Management

The automation adoption process is specific to each business. There is no one-size-fits-all when it comes to deploying automation; the ability to identify and embrace the approach that works best for your organisation is critical to maintaining or creating a more agile environment. The focus should always be on business value and the end-user experience. (We said something similar right at the start regarding the whole of DevOps, the culture change and the automated processes that it brings.)

Have a framework or design structure that you're trying to achieve: know what your end goal is and then work step by step towards achieving that goal, measuring the automation success at various stages based on the business outcomes.

Build concepts modelled around existing applications. There's no need to design the concepts around automation in a bubble, because they need to be applied to your application, your service and your infrastructure, so begin to build the concepts and model them around your existing infrastructure and your existing applications.

### Approach to Networking Automation

We should identify the tasks and perform a discovery on network change requests so that you have the most common issues and problems to automate a solution to.

- Make a list of all the change requests and workflows that are currently being addressed manually.
- Determine the most common, time-consuming and error-prone activities.
- Prioritise the requests by taking a business-driven approach.
- This is the framework for building an automation process: what must be automated and what must not.

We should then divide tasks and analyse how different network functions work and interact with each other.

- The infrastructure/Network team receives change tickets at multiple layers to deploy applications.
- Based on Network services, divide them into different areas and understand how they interact with each other.
  - Application Optimisation
  - ADC (Application Delivery Controller)
  - Firewall
  - DDI (DNS, DHCP, IPAM etc)
  - Routing
  - Others
- Identify various dependencies to address business and cultural differences and bring in cross-team collaboration.

Reusable policies: define and simplify reusable service tasks, processes and input/outputs.

- Define offerings for various services, processes and input/outputs.
- Simplifying the deployment process will reduce the time to market for both new and existing workloads.
- Once you have a standard process, it can be sequenced and aligned to individual requests for a multi-threaded approach and delivery.

Combine the policies with business-specific activities. How does implementing this policy help the business? Does it save time? Save money? Provide a better business outcome?

- Ensure that service tasks are interoperable.
- Associate the incremental service tasks so that they align to create business services.
- Allow for the flexibility to associate and re-associate service tasks on demand.
- Deploy Self-Service capabilities and pave the way for improved operational efficiency.
- Allow for the multiple technology skillsets to continue to contribute with oversight and compliance.

**Iterate** on the policies and processes, adding and improving while maintaining availability and service.

- Start small by automating existing tasks.
- Get familiar with the automation process, so that you can identify other areas that can benefit from automation.
- Iterate your automation initiatives, adding agility incrementally while maintaining the required availability.
- Taking an incremental approach paves the way for success!

Orchestrate the network service!

- Automation of the deployment process is required to deliver applications rapidly.
- Creating an agile service environment requires different elements to be managed across technology skillsets.
- Prepare for end-to-end orchestration that provides control over automation and the order of deployments.

## Network Automation Tools

The good news here is that, for the most part, the tools we use for Network automation are generally the same as those we use for other areas of automation, whether we have already covered them or will cover them in future sessions.

Operating System - As I have throughout this challenge, I am focusing on doing most of my learning with a Linux OS. Those reasons were given in the Linux section, but almost all of the tooling that we will touch, albeit cross-OS platform today, started out as Linux-based applications or tools.

Integrated Development Environment (IDE) - Not much to say here other than that I would suggest Visual Studio Code as your IDE throughout, based on the extensive plugins that are available for so many different languages.

Configuration Management - We have not got to the Configuration management section yet, but it is very clear that Ansible is a favourite in this area for managing and automating configurations. Ansible is written in Python but you do not need to know Python.

- Agentless
- Only requires SSH
- Large Support Community
- Lots of Network Modules
- Push only model
- Configured with YAML
- Open Source!

[Link to Ansible Network Modules](https://docs.ansible.com/ansible/2.9/modules/list_of_network_modules.html)
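
Since Ansible is configured with YAML, a minimal network playbook gives a flavour of the push model described above. This is a hedged sketch: the inventory group `access_switches` and the VLAN values are invented for the example, and `ios_config` is one of the Cisco IOS modules from the list linked above:

```yaml
---
- name: Configure a VLAN on our access switches
  hosts: access_switches        # an inventory group we have defined
  gather_facts: false
  connection: network_cli       # SSH to the device, no agent required

  tasks:
    - name: Ensure VLAN 20 exists with a name
      ios_config:
        parents: vlan 20
        lines:
          - name Staff
```

Run with `ansible-playbook` against an inventory that defines that group, and Ansible pushes the change over SSH to every matching device, which is the agentless, push-only model from the bullet points above.
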

We will also touch on **Ansible Tower** in the configuration management section; see this as the GUI front end for Ansible.

CI/CD - Again, we will cover more about the concepts and tooling around this later, but it's important to at least mention it here, as this spans not only networking but all provisioning of services and platforms.

In particular, Jenkins provides, or seems to be, a popular tool for Network Automation.

- Monitors the git repository for changes and then initiates them.

Version Control - Again, something we will dive deeper into later on.

- Git provides version control of your code on your local device - Cross-Platform
- GitHub, GitLab, BitBucket etc are online websites where you define your repositories and upload your code.

Language | Scripting - Something we have not covered here is Python as a language. I decided to dive into Go instead as the programming language based on my circumstances; I would say that it was a close call between Golang and Python, and Python seems to be the winner for Network Automation.

- Nornir is something to mention here, an automation framework written in Python. This seems to take the role of Ansible but specifically around Network Automation. [Nornir documentation](https://nornir.readthedocs.io/en/latest/)

Analyse APIs - Postman is a great tool for analysing RESTful APIs. It helps to build, test and modify APIs.

- POST >>> To create resource objects.
- GET >>> To retrieve a resource.
- PUT >>> To create or replace a resource.
- PATCH >>> To create or update a resource object.
- DELETE >>> To delete a resource.
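
These verbs map directly onto HTTP requests. As a quick Python illustration using only the standard library (the URL is a placeholder and nothing is actually sent), the verb is simply the method attached to each request:

```python
import json
import urllib.request

# Build (but do not send) requests against a placeholder API endpoint.
base = "https://example.com/api/vlans"

create = urllib.request.Request(
    base,
    data=json.dumps({"id": 20, "name": "Staff"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
fetch = urllib.request.Request(f"{base}/20", method="GET")
replace = urllib.request.Request(f"{base}/20", method="PUT")
update = urllib.request.Request(f"{base}/20", method="PATCH")
delete = urllib.request.Request(f"{base}/20", method="DELETE")

for req in (create, fetch, replace, update, delete):
    print(req.get_method(), req.full_url)
```

Postman does the same thing interactively: pick a verb, a URL and a body, then fire the request and inspect the response.
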

[Network Test Automation](https://pubhub.devnetcloud.com/media/genie-feature-browser/docs/#/)

Over the next 3 days, I am planning to get more hands-on with some of the things we have covered and put some work in around Python and Network automation.

We have nowhere near covered all of the networking topics, but I wanted to make this broad enough to follow along with while still learning from the resources I am adding below.

## Resources

- [3 Necessary Skills for Network Automation](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s)
- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)

Days/day25.md

---
title: "#90DaysOfDevOps - Python for Network Automation - Day 25"
published: false
description: 90DaysOfDevOps - Python for Network Automation
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049038
---

## Python for Network Automation

Python is the standard language used for automated network operations.

Whilst it is not only for network automation, it seems to be everywhere when you are looking for resources, and as previously mentioned, if it's not Python then it's generally Ansible, which is itself written in Python.

I think I have mentioned this already, but during the "Learn a programming language" section I chose Golang over Python because my company is developing in Go, so that was a good reason for me to learn it; if that was not the case then Python would have taken that time.

- Readability and ease of use - Python just seems to make sense. There is no requirement for `{}` in the code to start and end blocks. Couple this with a strong IDE like VS Code and you have a pretty easy start when wanting to run some Python code.

PyCharm might be another IDE worth mentioning here.

- Libraries - The extensibility of Python is the real gold mine here. I mentioned before that this is not just for Network Automation; in fact, there are plenty of libraries for all sorts of devices and configurations. You can see the vast amount here: [PyPi](https://pypi.python.org/pypi)

When you want to download the library to your workstation, then you use a tool called pip.

- Powerful & Efficient - Remember during the Go days I went through the "Hello World" scenario and we went through I think 6 lines of code? In Python it is:

```
print('hello world')
```

Put all of the above points together and it should be easy to see why Python is generally mentioned as the de-facto tool when working on automation.

I think it's important to note that several years back there were scripts that might have interacted with your network devices, maybe to automate the backup of configuration or to gather logs and other insights into your devices. The automation we are talking about here is a little different, and that's because the overall networking landscape has also changed to suit this way of thinking and enabled more automation.

- Software-Defined Networking - SDN controllers take the responsibility of delivering the control plane configuration to all devices on the network, meaning just a single point of contact for any network changes. There is no longer a need to telnet or SSH into every device, or to rely on humans to do so, which carries a repeatable chance of failure or misconfiguration.

- High-Level Orchestration - Go up a level from those SDN controllers and this allows for orchestration of service levels; then there is the integration of this orchestration layer into your platforms of choice: VMware, Kubernetes, Public Clouds etc.

- Policy-based management - What do you want to have? What is the desired state? You describe this and the system has all the details on how to figure out how to reach the desired state.
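
The desired-state idea can be sketched in a few lines of Python. This toy reconciler (the VLAN data is invented for the example) compares the state you declare with the state observed on a device and works out what to change, which is the core loop behind policy-based systems:

```python
def reconcile(desired, actual):
    """Return the VLAN IDs to add and to remove so actual matches desired."""
    to_add = sorted(set(desired) - set(actual))
    to_remove = sorted(set(actual) - set(desired))
    return to_add, to_remove

# What we declare we want vs what the switch currently reports.
desired_vlans = [10, 20, 30]
actual_vlans = [10, 40]

add, remove = reconcile(desired_vlans, actual_vlans)
print("add:", add, "remove:", remove)  # add: [20, 30] remove: [40]
```

You only ever edit the desired list; the system keeps re-running the comparison and pushing the difference, rather than a human replaying manual steps on every device.
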

## Setting up the lab environment

Not everyone has access to physical routers, switches and other networking devices.

I wanted to make it possible for us to look at some of the tooling mentioned above, but also to get hands-on and learn how to automate the configuration of our networks.

When it comes to options, there are a few that we can choose from:

- [GNS3 VM](https://www.gns3.com/software/download-vm)
- [Eve-ng](https://www.eve-ng.net/)
- [Unimus](https://unimus.net/) Not a lab environment but an interesting concept.

We will build our lab out using [Eve-ng](https://www.eve-ng.net/). As mentioned before, you can use a physical device, but to be honest a virtual environment means that we have a sandbox in which to test many different scenarios. Plus, being able to play with different devices and topologies might be of interest.

We are going to do everything on EVE-NG with the community edition.

### Getting started

The community edition comes in ISO and OVF formats for [download](https://www.eve-ng.net/index.php/download/)

We will be using the OVF download, but with the ISO there is the option to build out on a bare metal server without the need for a hypervisor.



For our walkthrough, we will be using VMware Workstation as I have a license via my vExpert, but you can equally use VMware Player or any of the other options mentioned in the [documentation](https://www.eve-ng.net/index.php/documentation/installation/system-requirement/). Unfortunately, we cannot use our previously used VirtualBox!

This is also where I had an issue with GNS3 and VirtualBox, even though it is supported.

[Download VMware Workstation Player - FREE](https://www.vmware.com/uk/products/workstation-player.html)

[VMware Workstation PRO](https://www.vmware.com/uk/products/workstation-pro.html) Also note that there is a free evaluation period!

### Installation on VMware Workstation PRO

Now we have our hypervisor software downloaded and installed, and we have the EVE-NG OVF downloaded. If you are using VMware Player, please let me know if this process is the same.

We are now ready to get things configured.

Open VMware Workstation and then select `file` and `open`



When you download the EVE-NG OVF image it is going to be within a compressed file. Extract the contents out into their own folder so it looks like this.



Navigate to the location where you downloaded the EVE-NG OVF image and begin the import.

Give it a recognisable name and store the virtual machine somewhere on your system.



When the import is complete, increase the number of processors to 4 and the memory allocated to 8GB. (This should already be the case after import with the latest version; if not, then edit the VM settings.)

Also, make sure the Virtualise Intel VT-x/EPT or AMD-V/RVI checkbox is enabled. This option instructs VMware Workstation to pass the virtualisation flags through to the guest OS (nested virtualisation). This was the issue I was having with GNS3 and VirtualBox, even though my CPU allows this.


|
||||
|
||||
### Power on & Access

Sidenote & Rabbit hole: Remember I mentioned that this would not work with VirtualBox? Well, I had the same issue with VMware Workstation and EVE-NG, but it was not the fault of the virtualisation platform!

I have WSL2 running on my Windows machine and this seems to remove the capability of running anything nested inside your environment. I am confused as to why the Ubuntu VM does run, as WSL2 seems to take away the Intel VT-d virtualisation aspect of the CPU.

To resolve this we can run the following command on our Windows machine and reboot the system. Note that whilst this is off you will not be able to use WSL2.

`bcdedit /set hypervisorlaunchtype off`

When you want to go back and use WSL2 then you will need to run this command and reboot.

`bcdedit /set hypervisorlaunchtype auto`

Both of these commands should be run as administrator!

Ok, back to the show. You should now have a powered-on machine in VMware Workstation and you should have a prompt looking similar to this.
On the prompt above you can use:

username = root
password = eve

You will then be asked to provide the root password again; this will be used to SSH into the host later on.

We can then change the hostname.
Next, we define a DNS Domain Name. I have used the one below but I am not sure if this will need to be changed later on.

We then configure networking. I am selecting static so that the IP address given will be persistent after reboots.

The final step: provide a static IP address from a network that is reachable from your workstation.

There are some additional steps here where you will have to provide a subnet mask for your network, a default gateway and DNS.

Once finished it will reboot, and when it is back up you can take your static IP address and put this into your browser.

The default username for the GUI is `admin` and the password is `eve`, while the default username for SSH is `root` and the password is `eve`, but this will have changed if you changed it during the setup.

I chose HTML5 for the console vs native as this will open a new tab in your browser when you are navigating through different consoles.

Next up we are going to:
- Install the EVE-NG client pack
- Load some network images into EVE-NG
- Build a Network Topology
- Adding Nodes
- Connecting Nodes
- Start building Python Scripts
- Look at telnetlib, Netmiko, Paramiko and Pexpect

## Resources

- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg)
- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8)
---
title: "#90DaysOfDevOps - Building our Lab - Day 26"
published: false
description: 90DaysOfDevOps - Building our Lab
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048762
---

## Building our Lab
We are going to continue the setup of our emulated network using EVE-NG and then hopefully get some devices deployed and start thinking about how we can automate the configuration of these devices. On [Day 25](day25.md) we covered the installation of EVE-NG onto our machine using VMware Workstation.

### Installing EVE-NG Client

There is also a client pack that allows us to choose which application is used when we SSH to the devices. It will also set up Wireshark for packet captures between links. You can grab the client pack for your OS (Windows, macOS, Linux).

[EVE-NG Client Download](https://www.eve-ng.net/index.php/download/)

Quick Tip: If you are using Linux as your client then there is this [client pack](https://github.com/SmartFinn/eve-ng-integration).

The install is a straightforward next-next wizard, and I would suggest leaving the defaults.

### Obtaining network images

This step has been a challenge. I have followed some videos that I will link at the end; they point to resources and downloads for our router and switch images whilst telling us how and where to upload them.

It is important to note that I am using everything for education purposes. I would suggest downloading official images from network vendors.

[Blog & Links to YouTube videos](https://loopedback.com/2019/11/15/setting-up-eve-ng-for-ccna-ccnp-ccie-level-studies-includes-multiple-vendor-node-support-an-absolutely-amazing-study-tool-to-check-out-asap/)

[How To Add Cisco VIRL vIOS image to Eve-ng](https://networkhunt.com/how-to-add-cisco-virl-vios-image-to-eve-ng/)

Overall the steps here are a little complicated and could be much easier, but the above blogs and videos walk through the process of adding the images to your EVE-NG box.

I used FileZilla to transfer the qcow2 images to the VM over SFTP.

For our lab, we need Cisco vIOS L2 (switches) and Cisco vIOS (router).
### Create a Lab

Inside the EVE-NG web interface, we are going to create our new network topology. We will have four switches and one router that will act as our gateway to outside networks.

| Node    | IP Address   |
| ------- | ------------ |
| Router  | 10.10.88.110 |
| Switch1 | 10.10.88.111 |
| Switch2 | 10.10.88.112 |
| Switch3 | 10.10.88.113 |
| Switch4 | 10.10.88.114 |
#### Adding our Nodes to EVE-NG

When you first log in to EVE-NG you will see a screen like the below; we want to start by creating our first lab.

Give your lab a name; the other fields are optional.

You will then be greeted with a blank canvas to start creating your network. Right-click on your canvas and choose add node.

From here you will have a long list of node options. If you have followed along above you will have the two in blue shown below, and the others are going to be grey and unselectable.

We want to add the following to our lab:

- 1 x Cisco vIOS Router
- 4 x Cisco vIOS Switch

Run through the simple wizard to add them to your lab and it should look something like this.
#### Connecting our nodes

We now need to add our connectivity between our routers and switches. We can do this quite easily by hovering over the device, grabbing the connection icon as per below and then connecting that to the device we wish to connect to.

When you have finished connecting your environment you may also want to add some way to define physical boundaries or locations using boxes or circles, which can also be found in the right-click menu. You can also add text, which is useful when we want to define our naming or IP addresses in our labs.

I went ahead and made my lab look like the below.

You will also notice that the lab above is all powered off; we can start our lab by selecting everything, right-clicking and selecting start selected.

Once we have our lab up and running you will be able to console into each device, and you will notice at this stage they are pretty dumb with no configuration. We can add some configuration to each node by copying or creating your own in each terminal.

I will leave my configuration in the Networking folder of the repository for reference.
| Node    | Configuration         |
| ------- | --------------------- |
| Router  | [R1](Networking/R1)   |
| Switch1 | [SW1](Networking/SW1) |
| Switch2 | [SW2](Networking/SW2) |
| Switch3 | [SW3](Networking/SW3) |
| Switch4 | [SW4](Networking/SW4) |

## Resources

- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg)
- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8)
- [Practical Networking](http://www.practicalnetworking.net/)
- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126)

As I am not a Network Engineer, most of the examples I am using here have come from this extensive book, which is not free, but I am using some of the scenarios to help understand Network Automation.

- [Hands-On Enterprise Automation with Python (Book)](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512)
---
title: "#90DaysOfDevOps - Getting Hands-On with Python & Network - Day 27"
published: false
description: 90DaysOfDevOps - Getting Hands-On with Python & Network
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048735
---

## Getting Hands-On with Python & Network
In this final section of Networking fundamentals, we are going to cover some automation tasks and tools with the lab environment created on [Day 26](day26.md).

We will be using an SSH tunnel to connect to our devices from our client instead of telnet. The SSH tunnel created between client and device is encrypted. We also covered SSH in the Linux section on [Day 18](day18.md).

## Access our virtual emulated environment

For us to interact with our switches we either need a workstation inside the EVE-NG network, or you can deploy a Linux box there with Python installed to perform your automation ([Resource for setting up Linux inside EVE-NG](https://www.youtube.com/watch?v=3Qstk3zngrY)), or you can do something like me and define a cloud for access from your workstation.

To do this, we have right-clicked on our canvas, selected network and then selected "Management(Cloud0)"; this will bridge out to our home network.

However, we do not have anything inside this network so we need to add connections from the new network to each of our devices. (My networking knowledge needs more attention and I feel that you could just do this next step to the top router and then have connectivity to the rest of the network through this one cable?)

I have then logged on to each of our devices and run through the following commands for the interfaces applicable to where the cloud comes in.
```
enable
config t
int gi0/0
IP add DHCP
no sh
exit
exit
sh ip int br
```

The final step gives us the DHCP address from our home network. My device network list is as follows:
| Node    | IP Address   | Home Network IP |
| ------- | ------------ | --------------- |
| Router  | 10.10.88.110 | 192.168.169.115 |
| Switch1 | 10.10.88.111 | 192.168.169.178 |
| Switch2 | 10.10.88.112 | 192.168.169.193 |
| Switch3 | 10.10.88.113 | 192.168.169.125 |
| Switch4 | 10.10.88.114 | 192.168.169.197 |
### SSH to a network device

With the above in place, we can now connect to our devices on our home network using our workstation. I am using PuTTY but also have access to other terminals, such as Git Bash, that give me the ability to SSH to our devices.

Below you can see we have an SSH connection to our router device (R1).
### Using Python to gather information from our devices

The first example of how we can leverage Python is to gather information from all of our devices; in particular, I want to be able to connect to each one and run a simple command to provide me with the interface configuration and settings. I have stored this script here: [netmiko_con_multi.py](Networking/netmiko_con_multi.py)

Now when I run this I can see each port configuration over all of my devices.

This could be handy if you have a lot of different devices; create this one script so that you can centrally control and quickly understand all of the configurations in one place.
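As a rough idea of what such a script can look like, here is a hypothetical sketch (not the exact contents of `netmiko_con_multi.py`); the IPs and the `root`/`eve` credentials are the lab values used above:

```python
# Hypothetical sketch of a multi-device "show" collector built on Netmiko.
# The host list mirrors the lab's home-network IPs; the credentials are the
# lab defaults, not something you would hard-code in production.

def build_devices(hosts, username="root", password="eve"):
    """Build a Netmiko connection dictionary for each host IP."""
    return [
        {
            "device_type": "cisco_ios",
            "host": host,
            "username": username,
            "password": password,
        }
        for host in hosts
    ]

def collect_output(devices, command="show ip interface brief"):
    """SSH to each device in turn and return {host: command output}."""
    from netmiko import ConnectHandler  # deferred so build_devices works without netmiko

    results = {}
    for device in devices:
        with ConnectHandler(**device) as conn:
            results[device["host"]] = conn.send_command(command)
    return results
```

Calling `collect_output(build_devices([...]))` with the lab IPs and printing each entry gives you every device's interface listing in one place.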
### Using Python to configure our devices

The above is useful, but what about using Python to configure our devices? In our scenario, we have a trunked port between `SW1` and `SW2`; imagine if this had to be done across many of the same switches, we would want to automate that and not have to manually connect to each switch to make the configuration change.

We can use [netmiko_sendchange.py](Networking/netmiko_sendchange.py) to achieve this. This will connect over SSH and perform that change on our `SW1`, which will also change `SW2`.
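A change script of that shape might look like the following sketch; the interface name and trunk commands here are illustrative assumptions, not the exact contents of `netmiko_sendchange.py`:

```python
# Hypothetical sketch of pushing one set of configuration commands to many
# switches with Netmiko's send_config_set. The interface and trunk commands
# below are illustrative examples only.

TRUNK_COMMANDS = [
    "interface GigabitEthernet0/1",
    "switchport trunk encapsulation dot1q",
    "switchport mode trunk",
]

def push_config(hosts, commands, username="root", password="eve"):
    """Apply the same config commands to every host; return each device's output."""
    from netmiko import ConnectHandler  # deferred import; lab credentials assumed

    outputs = {}
    for host in hosts:
        device = {
            "device_type": "cisco_ios",
            "host": host,
            "username": username,
            "password": password,
        }
        with ConnectHandler(**device) as conn:
            # send_config_set enters config mode, applies the list, then exits
            outputs[host] = conn.send_config_set(commands)
    return outputs
```

The point of the pattern is that one list of commands is applied identically everywhere, instead of consoling into each switch by hand.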
### Backing up your device configurations

Another use case would be to capture our network configurations and make sure we have those backed up, but again we don't want to be connecting to every device we have on our network, so we can also automate this using [backup.py](Networking/backup.py). You will also need to populate [backup.txt](Networking/backup.txt) with the IP addresses you want to back up.

Run your script and you should see something like the below.

That could just be me writing a simple print script in Python, so I should show you the backup files as well.
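The shape of such a backup script might be the following sketch, under the assumption that `backup.txt` holds one IP per line; the file layout and credentials are assumptions, not the exact contents of `backup.py`:

```python
# Hypothetical sketch of a config backup script: read target IPs from
# backup.txt (one per line), pull each running config over SSH with Netmiko,
# and write it to its own file. Paths and credentials are assumptions.
from pathlib import Path

def parse_targets(text):
    """Return the non-empty, stripped lines of backup.txt as a list of IPs."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def backup_configs(ip_file="backup.txt", out_dir="backups",
                   username="root", password="eve"):
    """Save each device's running config to <out_dir>/<ip>.txt."""
    from netmiko import ConnectHandler  # deferred import

    Path(out_dir).mkdir(exist_ok=True)
    for host in parse_targets(Path(ip_file).read_text()):
        device = {"device_type": "cisco_ios", "host": host,
                  "username": username, "password": password}
        with ConnectHandler(**device) as conn:
            config = conn.send_command("show running-config")
        Path(out_dir, f"{host}.txt").write_text(config)
```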
### Paramiko

Paramiko is a widely used Python module for SSH. You can find out more at the official GitHub link [here](https://github.com/paramiko/paramiko).

We can install this module using the `pip install paramiko` command.

We can verify the installation by entering the Python shell and importing the paramiko module.
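If you would rather script that check than open the shell, the standard library can probe for a module without importing it; a small sketch:

```python
# Check whether a module is installed without importing it, using only the
# standard library; handy in scripts that want to fail with a clear message.
import importlib.util

def module_available(name):
    """Return True if `name` can be imported in this environment."""
    return importlib.util.find_spec(name) is not None

if not module_available("paramiko"):
    print("paramiko is not installed - run: pip install paramiko")
```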
### Netmiko

The netmiko module targets network devices specifically, whereas paramiko is a broader tool for handling SSH connections overall.

Netmiko, which we have used above alongside paramiko, can be installed using `pip install netmiko`.

Netmiko supports many network vendors and devices; you can find a list of supported devices on the [GitHub Page](https://github.com/ktbyers/netmiko#supports).
### Other modules

It is also worth mentioning a few other modules that we have not had the chance to look at, but they give a lot more functionality when it comes to network automation.

`netaddr` is used for working with and manipulating IP addresses; again, the installation is simple with `pip install netaddr`.

You might find yourself wanting to store a lot of your switch configuration in an Excel spreadsheet; the `xlrd` module will allow your scripts to read the Excel workbook and convert rows and columns into a matrix. `pip install xlrd` will get the module installed.
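To give a flavour of that kind of IP manipulation without adding a dependency, the standard library's `ipaddress` module covers similar ground to netaddr; a small sketch using the lab's addressing:

```python
# A taste of the IP-address manipulation that modules like netaddr provide,
# shown here with the standard library's ipaddress module. The 10.10.88.0/24
# network mirrors the lab addressing used above.
import ipaddress

lab_net = ipaddress.ip_network("10.10.88.0/24")
router = ipaddress.ip_address("10.10.88.110")

print(router in lab_net)  # True - membership test against the lab subnet

# Carve the /24 into four /26 subnets, e.g. one per switch group.
subnets = list(lab_net.subnets(new_prefix=26))
print(str(subnets[0]))  # 10.10.88.0/26
```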
Some more use cases where network automation can be used that I have not had the chance to look into can be found [here](https://github.com/ktbyers/pynet/tree/master/presentations/dfwcug/examples).

I think this wraps up our Networking section of #90DaysOfDevOps. Networking is one area that I have not touched for a while, and there is so much more to cover, but I am hoping that between my notes and the resources shared throughout, it is helpful for some.
## Resources

- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg)
- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8)
- [Practical Networking](http://www.practicalnetworking.net/)
- [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126)

As I am not a Network Engineer, most of the examples I am using here have come from this extensive book, which is not free, but I am using some of the scenarios to help understand Network Automation.

- [Hands-On Enterprise Automation with Python (Book)](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512)

See you on [Day 28](day28.md) where we will start looking into cloud computing and get a good grasp and foundational knowledge of the topic and what is available.
---
title: "#90DaysOfDevOps - The Big Picture: DevOps & The Cloud - Day 28"
published: false
description: 90DaysOfDevOps - The Big Picture DevOps & The Cloud
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048737
---

## The Big Picture: DevOps & The Cloud
When it comes to cloud computing and what is offered, it goes very nicely with the DevOps ethos and processes. We can think of cloud computing as bringing the technology and services, whilst DevOps, as we have mentioned many times before, is about the process and process improvement.

But to start with, that cloud learning journey is a steep one, and making sure you know and understand all the elements, or the best service to choose for the right price point, is confusing.

Does the public cloud require a DevOps mindset? My answer here is no, but to really take advantage of cloud computing, and possibly avoid those large cloud bills that so many people have been hit with, it is important to think of Cloud Computing and DevOps together.

If we look at what we mean by the Public Cloud at a 40,000ft view, it is about handing off some responsibility to a managed service to enable you and your team to focus on more important aspects, which should be the application and the end-users. After all, the Public Cloud is just someone else's computer.

In this first section, I want to get into and describe a little more of what a Public Cloud is and some of the building blocks that get referred to as the Public Cloud overall.
### SaaS

The first area to cover is Software as a Service. This model removes almost all of the management overhead of a service that you may have once run on-premises. Let's think about Microsoft Exchange for our email; this used to be a physical box that lived in your data centre or maybe in the cupboard under the stairs. You would need to feed and water that server. By that I mean you would need to keep it updated, and you would be responsible for buying the server hardware, most likely installing the operating system, installing the applications required and then keeping it all patched; if anything went wrong you would have to troubleshoot and get things back up and running.

Oh, and you would also have to make sure you were backing up your data, although for the most part this doesn't change with SaaS either.

What SaaS, and in particular Microsoft 365 (since I mentioned Exchange), does is remove that administration overhead and provide a service that delivers your Exchange functionality by way of mail, along with many other productivity (Office 365) and storage (OneDrive) options that overall give a great experience to the end-user.

Other SaaS applications are widely adopted, such as Salesforce, SAP, Oracle, Google, and Apple. All remove the burden of having to manage more of the stack.

I am sure there is a story with DevOps and SaaS-based applications, but I am struggling to find out what it may be. I know Azure DevOps has some great integrations with Microsoft 365 that I might have a look into and report back on.



### Public Cloud

Next up we have the public cloud. Most people would think of this in a few different ways; some would see it as the hyperscalers only, such as Microsoft Azure, Google Cloud Platform and AWS.



Some will also see the public cloud as a much wider offering that includes those hyperscalers but also the thousands of MSPs all over the world. For this post, we are going to consider the Public Cloud as including hyperscalers and MSPs, although later on we will specifically dive into one or more of the hyperscalers to get that foundational knowledge.



_Thousands more companies could land on this; I am merely picking from local, regional, telco and global brands I have worked with and am aware of._

We mentioned in the SaaS section that the cloud removes the responsibility or burden of having to administer parts of a system. With SaaS we see a lot of the abstraction layers removed, i.e. the physical systems, network, storage, operating system, and even the application to some degree. When it comes to the cloud there are various levels of abstraction we can remove or keep depending on your requirements.

We have already mentioned SaaS, but there are at least two more to mention regarding the public cloud.

Infrastructure as a Service - You can think of this layer as a virtual machine, but whereas on-premises you have to look after the physical layer, in the cloud this is not the case: the physical layer is the cloud provider's responsibility, and you will manage and administer the operating system, the data and the applications you wish to run.

Platform as a Service - This continues to remove the responsibility of layers; this is really about you taking control of the data and the application but not having to worry about the underpinning hardware or operating system.

There are many other aaS offerings out there, but these are the two fundamentals. You might see offerings such as StaaS (Storage as a Service), which provides your storage layer without you having to worry about the hardware underneath. Or you might have heard of CaaS (Containers as a Service), which we will get onto later on. Another aaS we will look to cover over the next 7 days is FaaS (Functions as a Service), where maybe you do not need a system running all the time and you just want a function to be executed as and when needed.

There are many ways in which the public cloud can take on the layers of control that you wish to hand over and pay for.

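To make the shared-responsibility split described above concrete, here is a small illustrative sketch in Python. The layer lists simply follow the paragraphs in this section (they are a simplification, not an official responsibility matrix):

```python
# Illustrative responsibility split for the service models described above.
# These layer lists are a simplification drawn from this section, not an
# official matrix from any cloud provider.
RESPONSIBILITY = {
    # layers the CLOUD PROVIDER manages under each model
    "On-premises": [],
    "IaaS": ["physical hardware", "network", "storage"],
    "PaaS": ["physical hardware", "network", "storage", "operating system"],
    "SaaS": ["physical hardware", "network", "storage", "operating system", "application"],
}


def managed_by(model: str, layer: str) -> str:
    """Return who looks after a given layer under a given service model."""
    return "provider" if layer in RESPONSIBILITY[model] else "you"


# Under IaaS you still administer the OS; under SaaS the provider runs the
# application, but (as noted above) backing up your data is still on you.
print(managed_by("IaaS", "operating system"))  # you
print(managed_by("SaaS", "application"))       # provider
print(managed_by("SaaS", "data"))              # you
```

Reading the table top to bottom shows the same progression as the text: each model hands one more layer up to the provider, and each handed-over layer is something you pay for rather than administer.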


### Private Cloud

Having your own data centre is not a thing of the past. I would say there has been a resurgence among a lot of companies that have found the OPEX model difficult to manage, as well as the skill sets required in just using the public cloud.

The important thing to note here is that the private cloud is likely going to be your responsibility, and it is going to be on your premises.

We have some interesting things happening in this space, not only with VMware, which dominated the virtualisation era and on-premises infrastructure environments, but also the hyperscalers offering an on-premises version of their public clouds.

### Hybrid Cloud

To follow on from the Public and Private cloud mentions, we can also span across both of these environments to provide flexibility between the two: maybe take advantage of services available in the public cloud while also taking advantage of the features and functionality of being on-premises, or it might be that regulation dictates you have to store data locally.



Putting this all together we have a lot of choices for where we store and run our workloads.



Before we get into a specific hyperscaler, I have asked the power of Twitter where we should go.



[Link to Twitter Poll](https://twitter.com/MichaelCade1/status/1486814904510259208?s=20&t=x2n6QhyOXSUs7Pq0itdIIQ)

Whichever one gets the highest percentage, we will take a deeper dive into its offerings. The important thing to mention, though, is that the services from all of these are quite similar, which is why I say to start with one: I have found that knowing the foundations of one, and how to create virtual machines, set up networking etc., has let me go to the others and quickly ramp up in those areas.

Either way, I am going to share some great **FREE** resources that cover all three of the hyperscalers.

I am also going to build out a scenario, as I have done in the other sections, where we can build something as we move through the days.

## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)

Days/day29.md

---
title: "#90DaysOfDevOps - Microsoft Azure Fundamentals - Day 29"
published: false
description: 90DaysOfDevOps - Microsoft Azure Fundamentals
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048705
---
## Microsoft Azure Fundamentals

Before we get going, the winner of the Twitter poll was Microsoft Azure, hence the title of the page. It was close and also quite interesting to see the results come in over the 24 hours.



I would say that covering this topic is going to give me a better understanding of, and an update on, the services available in Microsoft Azure; I lean towards Amazon AWS in my day-to-day. I have, however, left resources I had lined up for all three of the major cloud providers.

I do appreciate that there are more providers and the poll only included these 3; in particular, there were some comments about Oracle Cloud. I would love to hear more about other cloud providers being used out in the wild.

### The Basics

- Provides public cloud services
- Geographically distributed (60+ Regions worldwide)
- Accessed via the internet and/or private connections
- Multi-tenant model
- Consumption-based billing - (Pay as you go | Pay as you grow)
- A large number of service types and offerings for different requirements.

- [Microsoft Azure Global Infrastructure](https://infrastructuremap.microsoft.com/explore)

As much as we spoke about SaaS and Hybrid Cloud, we are not planning on covering those topics here.

The best way to get started and follow along is by clicking the link, which will enable you to spin up a [Microsoft Azure Free Account](https://azure.microsoft.com/en-gb/free/)

### Regions

I linked the interactive map above, but in the image below we can see the breadth of regions offered on the Microsoft Azure platform worldwide.



_image taken from [Microsoft Docs - 01/05/2021](https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network)_

You will also see several "sovereign" clouds, meaning they are not linked or able to speak to the other regions; for example, these are associated with governments, such as `AzureUSGovernment`, `AzureChinaCloud` and others.

When we are deploying our services within Microsoft Azure, we will choose a region for almost everything. However, it is important to note that not every service is available in every region. At the time of my writing this, you can see from [Products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=all) that in West Central US we cannot use Azure Databricks.

I also mentioned "almost everything" above; there are certain services that are not tied to a region, such as Azure Bot Services, Bing Speech, Azure Virtual Desktop, Static Web Apps, and some more.

Behind the scenes, a region may be made up of more than one data centre. These are referred to as Availability Zones.

In the image below, again taken from the official Microsoft documentation, you can see what a region is and how it is made up of Availability Zones. However, not all regions have multiple Availability Zones.



The Microsoft documentation is very good, and you can read up more on [Regions and Availability Zones](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview).

### Subscriptions

Remember we mentioned that Microsoft Azure is a consumption-model cloud; you will find that all major cloud providers follow this model.

If you are an enterprise, then you might want, or have, an Enterprise Agreement set up with Microsoft to enable your company to consume Azure services.

If you are like me and you are using Microsoft Azure for education, then we have a few other options.

We have the [Microsoft Azure Free Account](https://azure.microsoft.com/en-gb/free/), which generally gives you several free cloud credits to spend in Azure over a period of time.

There is also the ability to use a Visual Studio subscription, which gives you some free credits each month alongside your annual subscription to Visual Studio; this was commonly known as MSDN years ago. [Visual Studio](https://azure.microsoft.com/en-us/pricing/member-offers/credit-for-visual-studio-subscribers/)

Then finally there is the option to hand over a credit card and have a pay-as-you-go model. [Pay-as-you-go](https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go/)

A subscription can be seen as a boundary between, potentially, different cost centres or completely different environments. A subscription is where the resources are created.

### Management Groups

Management groups give us the ability to segregate control across our Azure Active Directory (AD) or our tenant environment. Management groups allow us to control policies, Role Based Access Control (RBAC), and budgets.

Subscriptions belong to these management groups, so you could have many subscriptions in your Azure AD tenant; these subscriptions can then also control policies, RBAC, and budgets.


### Resource Manager and Resource Groups

**Azure Resource Manager**

- JSON based API that is built on resource providers.
- Resources belong to a resource group and share a common life cycle.
- Parallelism
- JSON-based deployments are declarative, idempotent and understand dependencies between resources to govern creation and order.

**Resource Groups**

- Every Azure Resource Manager resource exists in one and only one resource group!
- Resource groups are created in a region but can contain resources from outside that region.
- Resources can be moved between resource groups.
- Resource groups are not walled off from other resource groups; there can be communication between resource groups.
- Resource Groups can also control policies, RBAC, and budgets.

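As a sketch of what those declarative, idempotent JSON deployments look like, here is a minimal Azure Resource Manager template that deploys a single storage account into whichever resource group it is deployed to. The account name is a placeholder (storage account names must be globally unique):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "storage90daysofdevops",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Because the deployment is declarative and idempotent, submitting the same template twice leaves the resource group in the same state rather than creating a duplicate resource.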
### Hands-On

Let's go and get connected and make sure we have a **Subscription** available to us. We can check our simple out-of-the-box **Management Group**, and we can then go and create a new dedicated **Resource Group** in our preferred **Region**.

When we first log in to our [Azure portal](https://portal.azure.com/#home), you will see at the top the ability to search for resources, services and docs.



We are going to first look at our subscription; you will see here that I am using a Visual Studio Professional subscription, which gives me some free credit each month.



If we go into that, you will get a wider view and a look into what is happening or what can be done with the subscription. We can see billing information, with control functions on the left where you can define IAM access control, and further down there are more resources available.



There might be a scenario where you have multiple subscriptions and you want to manage them all under one umbrella; this is where management groups can be used to segregate responsibility groups. In mine below, you can see there is just my tenant root group with my subscription.

You will also see in the previous image that the parent management group is the same ID used on the tenant root group.



Next up we have resource groups; this is where we combine our resources and can easily manage them in one place. I have a few created for various other projects.



For what we are going to be doing over the next few days, we want to create our own resource group. This is easily done in this console by hitting the create option shown in the previous image.



A validation step takes place, and then you have the chance to review your creation and then create it. You will also see at the bottom "Download a template for automation"; this allows us to grab the JSON format so that we can perform this simple task in an automated fashion later on if we wanted to, which we will also cover later.



Hit create, and in our list of resource groups we now have our "90DaysOfDevOps" group ready for what we do in the next session.



## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)

Days/day30.md

---
title: "#90DaysOfDevOps - Microsoft Azure Security Models - Day 30"
published: false
description: 90DaysOfDevOps - Microsoft Azure Security Models
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049039
---
## Microsoft Azure Security Models

Following on from the Microsoft Azure overview, we are going to start with Azure security and see where it can help in our day-to-day. For the most part, I have found the built-in roles to be sufficient, but it is worth knowing that we can create and work with many different areas of authentication and configuration. I have found Microsoft Azure to be quite advanced with its Active Directory background compared to other public clouds.

This is one area in which Microsoft Azure seemingly works differently from other public cloud providers: in Azure there is ALWAYS Azure AD.

### Directory Services

- Azure Active Directory hosts the security principals used by Microsoft Azure and other Microsoft cloud services.
- Authentication is accomplished through protocols such as SAML, WS-Federation, OpenID Connect and OAuth2.
- Queries are accomplished through a REST API called the Microsoft Graph API.
- Tenants have a tenant.onmicrosoft.com default name but can also have custom domain names.
- Subscriptions are associated with an Azure Active Directory tenant.

If we think about AWS as a comparison, the equivalent offering would be AWS IAM (Identity & Access Management), although the two are still very different.

Azure AD Connect provides the ability to replicate accounts from AD to Azure AD. This can also include groups and sometimes objects. Replication can be granular and filtered, and multiple forests and domains are supported.

It is possible to create cloud accounts in Microsoft Azure Active Directory (AD), but most organisations already have accounts for their users in their own on-premises Active Directory.

Azure AD Connect also allows you to see not only Windows AD servers but also other Azure ADs, Google and others. This also provides the ability to collaborate with external people and organisations; this is called Azure B2B.

Authentication options between Active Directory Domain Services and Microsoft Azure Active Directory are possible via identity sync with a password hash.

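To make one of the protocols mentioned under Directory Services more concrete, here is a small Python sketch that builds (but does not send) the form body a client would POST to the Azure AD v2.0 token endpoint in the OAuth2 client-credentials flow, in order to obtain a token for the Microsoft Graph API. The tenant and client values are placeholders:

```python
from urllib.parse import urlencode

# Placeholder identifiers -- substitute your own tenant and app registration.
TENANT_ID = "contoso.onmicrosoft.com"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"

# Azure AD v2.0 token endpoint for this tenant.
token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

# Form body for the OAuth2 client-credentials grant; the .default scope asks
# for all Graph API permissions granted to the app registration.
form_body = urlencode({
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "client_secret": "<app-secret>",  # never hard-code a real secret
    "scope": "https://graph.microsoft.com/.default",
})

print(token_url)
print(form_body)
```

A real client would POST that body with a `Content-Type: application/x-www-form-urlencoded` header and receive a JSON response containing an `access_token` to present to the Graph API; this sketch stops short of the network call.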


The passing of the password hash is optional; if it is not used, then pass-through authentication is required.

The documentation linked below goes into detail about pass-through authentication.

[User sign-in with Azure Active Directory Pass-through Authentication](https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-pta)



### Federation

It's fair to say that if you are using Microsoft 365, Microsoft Dynamics and on-premises Active Directory, it is quite easy to understand and integrate into Azure AD for federation. However, you might be using other services outside of the Microsoft ecosystem.

Azure AD can act as a federation broker to these other non-Microsoft apps and other directory services.

This will be seen in the Azure Portal as Enterprise Applications, of which there are a large number of options.


If you scroll down on the Enterprise Applications page, you are going to see a long list of featured applications.


This option also allows for "bring your own" integration, an application you are developing or a non-gallery application.

I have not looked into this before, but I can see that this is quite the feature set when compared to the other cloud providers and their capabilities.

### Role-Based Access Control

We have already covered the scopes on [Day 29](day29.md); we can set our role-based access control at any of these levels:

- Subscriptions
- Management Group
- Resource Group
- Resources

Roles can be split into three types, and there are many built-in roles in Microsoft Azure. Those three are:

- Owner
- Contributor
- Reader

Owner and Contributor are very similar in their boundaries of scope; however, the Owner can also change permissions.

Other roles are specific to certain types of Azure resources, and custom roles can be created as well.

We should focus on assigning permissions to groups rather than to individual users.

Permissions are inherited down the scope hierarchy: a role assigned at a management group applies to its subscriptions, resource groups and resources.
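To make that inheritance model concrete, here is a minimal Python sketch of how a role assigned at a higher scope grants access at every child scope. The scope paths, principals and assignments are hypothetical illustrations, not the real Azure SDK or any real tenant data:

```python
# Minimal sketch of Azure RBAC scope inheritance (hypothetical data, not the Azure SDK).
# A scope is written as a path; an assignment at a scope applies to all child scopes.

assignments = {
    # scope path                        -> {principal: role}
    "/mg/90days":                         {"ops-group": "Reader"},
    "/mg/90days/sub-1":                   {"dev-group": "Contributor"},
    "/mg/90days/sub-1/rg-90daysofdevops": {"michael": "Owner"},
}

def effective_roles(principal: str, scope: str) -> set:
    """Collect roles assigned to `principal` at `scope` or at any parent scope."""
    roles = set()
    parts = scope.strip("/").split("/")
    for i in range(1, len(parts) + 1):
        parent = "/" + "/".join(parts[:i])
        roles.update(r for p, r in assignments.get(parent, {}).items() if p == principal)
    return roles

# Reader assigned at the management group is effective down at the resource group:
print(effective_roles("ops-group", "/mg/90days/sub-1/rg-90daysofdevops"))  # {'Reader'}
# Owner assigned at the resource group does NOT flow upward to the subscription:
print(effective_roles("michael", "/mg/90days/sub-1"))  # set()
```

This is also why assigning roles to groups at the highest sensible scope keeps the number of assignments manageable.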

If we go back and look at the "90DaysOfDevOps" resource group we created and check the Access Control (IAM) blade within it, you can see we have a list of Contributors and a custom User Access Administrator, and we do have a list of Owners (but I cannot show this).


We can also check the roles we have assigned here, whether they are built-in roles and which category they fall under.


We can also use the Check access tab if we want to check an account against this resource group, to make sure that an account we wish to grant access has the correct permissions, or to check whether a user has too much access.


### Microsoft Defender for Cloud

- Microsoft Defender for Cloud (formerly known as Azure Security Center) provides insight into the security of the entire Azure environment.
- A single dashboard for visibility into the overall security health of all Azure and non-Azure resources (via Azure Arc), plus security hardening guidance.
- Paid plans for protected resource types (e.g. Servers, App Service, SQL, Storage, Containers, Key Vault).

I have switched to another subscription to view the Azure Security Center, and you can see here, based on very few resources, that I have some recommendations in one place.


I have gone out and I have purchased www.90DaysOfDevOps.com and I would like to add this custom domain name to my Azure Active Directory.


With that now in place, we can create a new user on our new Active Directory domain.


Now we want to collect all of our new 90DaysOfDevOps users in one group. We can create a group as per the below; notice that I am using "Dynamic User", which means Azure AD will query user accounts and add them dynamically, versus "Assigned", where you manually add users to your group.


There are lots of options when it comes to creating your query; I plan to simply match on the user principal name and make sure that the name contains @90DaysOfDevOps.com.


Because we have already created our user account for michael.cade@90DaysOfDevOps.com, we can validate that the rule is working. For comparison, I have also added another account associated with a different domain, and you can see that because of this rule that account will not land in this group.


I have since added a new user, user1@90DaysOfDevOps.com, and if we go and check the group we can see our members.

|

If we have this requirement x100, then we are not going to want to do all of this in the console; we will want to take advantage of either the bulk options to create, invite and delete users, or look into PowerShell to achieve this at scale with automation.
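As a sketch of that automated approach, the loop below generates the kind of bulk user list you would feed to a script or a bulk-create upload. The column names and domain here are illustrative assumptions, not the exact template Azure expects:

```python
import csv
import io

# Generate 100 user rows for a bulk create/invite workflow.
# Column names are illustrative assumptions; check the template your tool expects.
def bulk_user_rows(count, domain="90daysofdevops.com"):
    for i in range(1, count + 1):
        yield {
            "userPrincipalName": f"user{i}@{domain}",
            "displayName": f"User {i}",
            "blockSignIn": "No",
        }

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["userPrincipalName", "displayName", "blockSignIn"])
writer.writeheader()
writer.writerows(bulk_user_rows(100))
print(buf.getvalue().splitlines()[1])  # first data row
# user1@90daysofdevops.com,User 1,No
```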
Now we can go to our resource group and specify that, on the 90DaysOfDevOps resource group, we want the Owner to be the group we just created.


We can equally go in here and deny assignments access to our resource group as well.

Now, if we log in to the Azure Portal with our new user account, you can see that we only have access to our 90DaysOfDevOps resource group and not the others seen in previous pictures, because we do not have the access.


The above is great if this is a user that has access to resources inside your Azure portal. Not every user needs to be aware of the portal, but to check access we can use the [Apps Portal](https://myapps.microsoft.com/); this is a single sign-on portal for us to test.


You can customise this portal with your branding, and this might be something we come back to later on.

## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)

Days/day31.md

---
title: "#90DaysOfDevOps - Microsoft Azure Compute Models - Day 31"
published: false
description: 90DaysOfDevOps - Microsoft Azure Compute Models
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049040
---

## Microsoft Azure Compute Models

Following on from covering the basics of security models within Microsoft Azure yesterday, today we are going to look into the various compute services available to us in Azure.

### Service Availability Options

This section is close to my heart given my role in data management. As with on-premises, it is critical to ensure the availability of your services.

- High Availability (protection within a region)
- Disaster Recovery (protection between regions)
- Backup (recovery from a point in time)

Microsoft deploys multiple regions within a geopolitical boundary.

Azure offers two concepts for service availability: sets and zones.

Availability Sets - Provide resiliency within a datacentre.

Availability Zones - Provide resiliency between datacentres within a region.
### Virtual Machines

Most likely the starting point for anyone in the public cloud.

- Provides a VM from a variety of series and sizes, each with different capabilities (sometimes an overwhelming choice). [Sizes for Virtual Machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes)
- There are many different options and focuses for VMs, from high-performance and low-latency to high-memory options.
- We also have a burstable VM type, found under the B-Series. This is great for workloads that have a low CPU requirement for the most part but occasionally, maybe once a month, require a performance spike.
- Virtual machines are placed on a virtual network that can provide connectivity to any network.
- Windows and Linux guest OS support.
- There are also Azure-tuned kernels for specific Linux distributions. [Azure Tuned Kernels](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros#azure-tuned-kernels)
### Templating

I have mentioned before that everything behind or underneath Microsoft Azure is JSON.

There are several different management portals and consoles we can use to create our resources, but the preferred route is going to be via JSON templates.

Idempotent deployments in incremental or complete mode - i.e. a repeatable desired state.

There is a large selection of templates, and you can export the definitions of deployed resources. I like to compare this templating feature to something like AWS CloudFormation, or Terraform as a multi-cloud option. We will cover Terraform more in the Infrastructure as Code section.
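Since everything resolves to JSON, even a resource definition can be sketched as plain data. Below is a heavily trimmed, illustrative ARM-style template skeleton built in Python; the field names follow the ARM template layout, but the values (location, apiVersion, SKU) are placeholders for illustration only:

```python
import json

# Heavily trimmed ARM-style template skeleton. Field names follow the ARM
# template layout; the values are placeholders for illustration only.
template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {"storageName": {"type": "string"}},
    "resources": [
        {
            "type": "Microsoft.Storage/storageAccounts",
            "apiVersion": "2021-09-01",
            "name": "[parameters('storageName')]",
            "location": "uksouth",
            "sku": {"name": "Standard_LRS"},
            "kind": "StorageV2",
        }
    ],
}

# The whole definition round-trips through JSON, which is what gets deployed.
print(json.dumps(template, indent=2)[:60], "...")
```

Deploying the same template twice yields the same resources, which is the idempotent, repeatable desired state described above.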
### Scaling

Automatic scaling is a major feature of the public cloud: being able to spin down resources you are not using, or spin them up when you need them.

In Azure, we have something called Virtual Machine Scale Sets (VMSS) for IaaS. This enables the automatic creation and scaling of VMs from a gold-standard image, based on schedules and metrics.

This is ideal for update windows, so that you can update your images and roll them out with the least impact.

Other services, such as Azure App Service, have auto-scaling built in.
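A metric-based scale rule of the kind a scale set uses boils down to comparing a metric against thresholds and adjusting the instance count within limits. Here is a toy Python sketch; the thresholds and instance limits are made-up values, not Azure defaults:

```python
# Toy autoscale decision like a VMSS metric rule: scale out when average CPU
# is high, scale in when it is low. Thresholds and limits are made-up values.

def desired_instances(current: int, avg_cpu: float,
                      scale_out_at=75.0, scale_in_at=25.0,
                      minimum=2, maximum=10) -> int:
    if avg_cpu > scale_out_at:
        return min(current + 1, maximum)   # add an instance, capped at maximum
    if avg_cpu < scale_in_at:
        return max(current - 1, minimum)   # remove one, floored at minimum
    return current                         # within the band: no change

print(desired_instances(3, 90.0))  # 4  (scale out)
print(desired_instances(3, 10.0))  # 2  (scale in)
print(desired_instances(3, 50.0))  # 3  (steady)
```

Schedule-based rules work the same way, except the trigger is a time window rather than a metric.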
### Containers

We have not yet covered containers as a use case, or how and why they might be needed on our DevOps learning journey, but we do need to mention that Azure has some specific container-focused services.

Azure Kubernetes Service (AKS) - Provides a managed Kubernetes solution; no need to worry about the control plane or management of the underpinning cluster. More on Kubernetes later on.

Azure Container Instances - Containers as a Service with per-second billing. Run an image and integrate it with your virtual network, with no need for container orchestration.

Service Fabric - Has many capabilities, including orchestration for container instances.

Azure also has the Container Registry, which provides a private registry for Docker images, Helm charts, OCI artifacts and images. More on this again when we reach the containers section.

We should also mention that a lot of these container services may indeed leverage containers under the hood, but this is abstracted away from your requirement to manage them.

We find similar container-focused services in all the other public clouds.
### Application Services

- Azure App Service provides an application-hosting solution and an easy method to establish services.
- Automatic deployment and scaling.
- Supports Windows and Linux-based solutions.
- Services run in an App Service Plan, which has a type and size.
- A number of different services, including web apps, API apps and mobile apps.
- Support for deployment slots for reliable testing and promotion.
### Serverless Computing

Serverless, for me, is an exciting next step that I am extremely interested in learning more about.

The goal with serverless is that we only pay for the runtime of the function; we do not have to have virtual machines or PaaS applications running all the time. We simply run our function when we need it, and then it goes away.

Azure Functions - Provides serverless code. If we remember back to our first look into the public cloud, we will remember the abstraction layer of management; with serverless functions you are only going to be managing the code.

Event-driven with massive scale; I have a plan to build something when I get some hands-on time here, hopefully later on.

Provides input and output bindings to many Azure and third-party services.

Supports many different programming languages (C#, NodeJS, Python, PHP, batch, bash, Golang and Rust, or any executable).

Azure Event Grid enables logic to be triggered from services and events.

Azure Logic Apps provides graphical-based workflow and integration.

We can also look at Azure Batch, which can run large-scale jobs on both Windows and Linux nodes with consistent management and scheduling.
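To illustrate the trigger-plus-bindings idea without the real Azure Functions SDK, here is a toy sketch in plain Python: the "function" is just code, and a made-up host wires an input event to it and routes its return value to an output binding. Every name here is invented for illustration:

```python
# Toy model of serverless triggers and bindings. The host class is made up —
# it is NOT the Azure Functions SDK. The platform owns everything except the
# function body, which is the only code we manage.

def resize_request_handler(event: dict) -> dict:
    """Our 'function': transform an incoming event into an output message."""
    return {"blob": event["blob"], "status": "queued-for-resize"}

class ToyFunctionHost:
    def __init__(self):
        self.output_queue = []            # stands in for an output binding

    def fire(self, handler, event):       # stands in for an event trigger
        self.output_queue.append(handler(event))

host = ToyFunctionHost()
host.fire(resize_request_handler, {"blob": "photos/cat.png"})
print(host.output_queue)
# [{'blob': 'photos/cat.png', 'status': 'queued-for-resize'}]
```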
## Resources
- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)
See you on [Day 32](day32.md)

Days/day32.md

---
title: "#90DaysOfDevOps - Microsoft Azure Storage Models - Day 32"
published: false
description: 90DaysOfDevOps - Microsoft Azure Storage Models
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048775
---

## Microsoft Azure Storage Models

### Storage Services

- Azure storage services are provided by storage accounts.
- Storage accounts are primarily accessed via a REST API.
- A storage account must have a unique name that is part of a DNS name: `<Storage Account name>.core.windows.net`
- Various replication and encryption options.
- Sits within a resource group.

We can create our storage account by simply searching for Storage Account in the search bar at the top of the Azure Portal.


We can then run through the steps to create our storage account, remembering that this name needs to be unique and must be all lower case, with no spaces, but can include numbers.


We can also choose the level of redundancy we would like for our storage account and anything we store in it. The further down the list, the more expensive the option, but also the wider the spread of your data.

Even the default redundancy option gives us three copies of our data.

[Azure Storage Redundancy](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy)

A summary of the above link is down below:

- **Locally-redundant storage** - replicates your data three times within a single data centre in the primary region.

- **Geo-redundant storage** - copies your data synchronously three times within a single physical location in the primary region using LRS, and then copies it asynchronously to a single physical location in the secondary region.

- **Zone-redundant storage** - replicates your Azure Storage data synchronously across three Azure availability zones in the primary region.

- **Geo-zone-redundant storage** - combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three Azure availability zones in the primary region and is also replicated to a second geographic region for protection from regional disasters.


Just moving back up to performance options: we have Standard and Premium to choose from. We have chosen Standard in our walkthrough, but Premium gives you some specific options.


Then in the drop-down, you can see we have these three options to choose from.


There are lots more advanced options available for your storage account, but for now we do not need to get into these areas. These options are around encryption and data protection.

### Managed Disks

Storage access can be achieved in a few different ways.

Authenticated access via:

- A shared key for full control.
- Shared Access Signature for delegated, granular access.
- Azure Active Directory (where available).

Public access:

- Public access can also be granted to enable anonymous access, including via HTTP.
- An example of this could be to host basic content and files in a block blob so a browser can view and download this data.

If you are accessing your storage from another Azure service, traffic stays within Azure.

When it comes to storage performance, we have two different types:

- **Standard** - Maximum number of IOPS
- **Premium** - Guaranteed number of IOPS

IOPS => Input/Output operations per second.

There is also a difference between unmanaged and managed disks to consider when choosing the right storage for the task at hand.
### Virtual Machine Storage

- Virtual machine OS disks are typically stored on persistent storage.
- Some stateless workloads do not require persistent storage, and reduced latency is a larger benefit.
- There are VMs that support ephemeral OS-managed disks that are created on the node-local storage.
- These can also be used with VM Scale Sets.

Managed Disks are durable block storage that can be used with Azure virtual machines. You can have Ultra Disk Storage, Premium SSD, Standard SSD or Standard HDD. They also carry some characteristics:

- Snapshot and image support
- Simple movement between SKUs
- Better availability when combined with availability sets
- Billed based on disk size, not on consumed storage
## Archive Storage
|
||||
## Archive Storage
|
||||
|
||||
- **Cool Tier** - A cool tier of storage is available to block and append blobs.
  - Lower Storage cost
  - Higher transaction cost.
- **Archive Tier** - Archive storage is available for block BLOBs.
  - This is configured on a per-BLOB basis.
  - Cheaper cost, Longer Data retrieval latency.
  - Same Data Durability as regular Azure Storage.
  - Custom Data tiering can be enabled as required.
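
Since tiering is configured per blob, it can be sketched with the Azure CLI like this (the storage account, container and blob names here are placeholders):

```shell
# Move a single block blob to the Archive tier (illustrative names)
az storage blob set-tier \
  --account-name mystorageaccount \
  --container-name mycontainer \
  --name backup-2022.tar.gz \
  --tier Archive

# Rehydrate later by setting it back to Cool or Hot;
# retrieval from Archive can take hours, per the latency note above
az storage blob set-tier \
  --account-name mystorageaccount \
  --container-name mycontainer \
  --name backup-2022.tar.gz \
  --tier Cool
```
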

### File Sharing

From the above creation of our storage account, we can now create file shares.



This will provide SMB 2.1 and 3.0 file shares in Azure.

Usable within Azure and externally via SMB3, with port 445 open to the internet.

Provides shared file storage in Azure.

Can be mapped using standard SMB clients in addition to the REST API.

You might also notice [Azure NetApp Files](https://vzilla.co.uk/vzilla-blog/azure-netapp-files-how) (SMB and NFS)

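Since these shares speak standard SMB, mapping one from a Linux client could look like the sketch below (the storage account name, share name and key variable are placeholders; `cifs-utils` is assumed to be installed):

```shell
# Mount an Azure file share over SMB3 (illustrative names)
sudo mkdir -p /mnt/myshare
sudo mount -t cifs \
  //mystorageaccount.file.core.windows.net/myshare /mnt/myshare \
  -o vers=3.0,username=mystorageaccount,password="$STORAGE_ACCOUNT_KEY",serverino
```

On Windows the equivalent would be a `net use` mapping to the same UNC path over port 445.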
### Caching & Media Services

The Azure Content Delivery Network provides a cache of static web content with locations throughout the world.

Azure Media Services provides media transcoding technologies in addition to playback services.

## Microsoft Azure Database Models

Back on [Day 28](day28.md), we covered various service options. One of these was PaaS (Platform as a Service), where you abstract a large amount of the infrastructure and operating system away and you are left with control of the application, or in this case the database models.

### Relational Databases

Azure SQL Database provides a relational database as a service based on Microsoft SQL Server.

This is SQL running the latest SQL branch, with database compatibility levels available where a specific functionality version is required.

There are a few options for how this can be configured: we can provision a single database that provides one database in the instance, while an elastic pool enables multiple databases that share a pool of capacity and collectively scale.

These database instances can be accessed like regular SQL instances.

There are additional managed offerings for MySQL, PostgreSQL and MariaDB.

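A hedged sketch of provisioning a single Azure SQL Database from the CLI (the server name, admin credentials, region and service objective below are all placeholders):

```shell
# Create a logical SQL server, then a single database on it (illustrative values)
az sql server create \
  --resource-group 90DaysOfDevOps \
  --name my90days-sqlserver \
  --location eastus \
  --admin-user sqladmin \
  --admin-password 'ChangeMe-Str0ng!'

az sql db create \
  --resource-group 90DaysOfDevOps \
  --server my90days-sqlserver \
  --name db1 \
  --service-objective S0
```

For the elastic pool option mentioned above, `az sql elastic-pool create` would be used and the database attached to the pool instead of a fixed service objective.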


### NoSQL Solutions

Azure Cosmos DB is a schema-agnostic NoSQL implementation.

99.99% SLA

Globally distributed database with single-digit latencies at the 99th percentile anywhere in the world with automatic homing.

Partition key leveraged for the partitioning/sharding/distribution of data.

Supports various data models (documents, key-value, graph, column-friendly)

Supports various APIs (DocumentDB SQL, MongoDB, Azure Table Storage and Gremlin)



Various consistency models are available, based around the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem).



### Caching

Without getting into the weeds about caching systems such as Redis, I wanted to include that Microsoft Azure has a service called Azure Cache for Redis.

Azure Cache for Redis provides an in-memory data store based on the Redis software.

- It is an implementation of the open-source Redis Cache.
- A hosted, secure Redis cache instance.
- Different tiers are available.
- The application must be updated to leverage the cache.
- Aimed at applications that have high read requirements compared to writes.
- Key-Value store based.

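Standing up one of these hosted cache instances could be sketched as below (the name, region, SKU and size are placeholders chosen for illustration):

```shell
# Create a Basic-tier Azure Cache for Redis instance (illustrative values)
az redis create \
  --resource-group 90DaysOfDevOps \
  --name my90days-redis \
  --location eastus \
  --sku Basic \
  --vm-size c0

# Retrieve the access keys the application would use to connect
az redis list-keys \
  --resource-group 90DaysOfDevOps \
  --name my90days-redis
```

As the bullets note, the application itself must then be updated to read through the cache; provisioning alone does not change application behaviour.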


I appreciate the last few days have been a lot of note-taking and theory on Microsoft Azure, but I wanted to cover the building blocks before we get into the hands-on aspects of how these components come together and work.

We have one more bit of theory remaining around networking before we can get some scenario-based deployments of services up and running. We also want to take a look at some of the different ways we can interact with Microsoft Azure vs just using the portal that we have been using so far.

## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

See you on [Day 33](day33.md)
---
title: "#90DaysOfDevOps - Microsoft Azure Networking Models + Azure Management - Day 33"
published: false
description: 90DaysOfDevOps - Microsoft Azure Networking Models + Azure Management
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048706
---

## Microsoft Azure Networking Models + Azure Management

As it happens, today marks the anniversary of Microsoft Azure and its 12th birthday! (1st February 2022) Anyway, we are going to cover the networking models within Microsoft Azure and some of the management options for Azure. So far we have only used the Azure portal, but we have mentioned other areas that can be used to drive and create our resources within the platform.

## Azure Network Models

### Virtual Networks

- A virtual network is a construct created in Azure.
- A virtual network has one or more IP ranges assigned to it.
- Virtual networks live within a subscription within a region.
- Virtual subnets are created in the virtual network to break up the network range.
- Virtual machines are placed in virtual subnets.
- All virtual machines within a virtual network can communicate.
- 65,536 Private IPs per Virtual Network.
- Only pay for egress traffic from a region. (Data leaving the region)
- IPv4 & IPv6 Supported.
- IPv6 for public-facing and within virtual networks.

We can liken Azure Virtual Networks to AWS VPCs. However, there are some differences to note:

- In AWS a default VPC is created; that is not the case in Microsoft Azure, where you have to create your first virtual network to your requirements.
- All Virtual Machines by default in Azure have NAT access to the internet. No NAT Gateways as per AWS.
- In Microsoft Azure, there is no concept of Private or Public subnets.
- Public IPs are a resource that can be assigned to vNICs or Load Balancers.
- The Virtual Network and Subnets have their own ACLs, enabling subnet-level delegation.
- Subnets span Availability Zones, whereas in AWS you have subnets per Availability Zone.

We also have Virtual Network Peering. This enables virtual networks across tenants and regions to be connected using the Azure backbone. Peering is not transitive, but this can be addressed via Azure Firewall in the hub virtual network. Using gateway transit allows peered virtual networks to use the connectivity of the connected network; an example of this could be ExpressRoute to On-Premises.

### Access Control

- Azure utilises Network Security Groups; these are stateful.
- Enable rules to be created and then assigned to a network security group.
- Network security groups are applied to subnets or VMs.
- When applied to a subnet, it is still enforced at the Virtual Machine NIC; it is not an "Edge" device.



- Rules are combined in a Network Security Group.
- Based on the priority, flexible configurations are possible.
- A lower priority number means higher priority.
- Most logic is built with IP Addresses, but some tags and labels can also be used.

| Description      | Priority | Source Address     | Source Port | Destination Address | Destination Port | Action |
| ---------------- | -------- | ------------------ | ----------- | ------------------- | ---------------- | ------ |
| Inbound 443      | 1005     | \*                 | \*          | \*                  | 443              | Allow  |
| ILB              | 1010     | Azure LoadBalancer | \*          | \*                  | 10000            | Allow  |
| Deny All Inbound | 4000     | \*                 | \*          | \*                  | \*               | DENY   |

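A rule like the "Inbound 443" row above might be created with the Azure CLI as follows (the resource group and NSG names are placeholders for illustration):

```shell
# Allow inbound TCP 443 from anywhere at priority 1005 on an existing NSG
az network nsg rule create \
  --resource-group 90DaysOfDevOps \
  --nsg-name myNSG \
  --name Inbound443 \
  --priority 1005 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 443
```

The lower the `--priority` number, the earlier the rule is evaluated, which matches the priority ordering in the table.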
We also have Application Security Groups (ASGs).

- NSGs are focused on IP address ranges, which may be difficult to maintain for growing environments.
- ASGs enable real names (monikers) for different application roles to be defined (Webservers, DB servers, WebApp1 etc.)
- The Virtual Machine NIC is made a member of one or more ASGs.

The ASGs can then be used in rules that are part of Network Security Groups to control the flow of communication, and can still use NSG features like service tags.

| Action | Name               | Source     | Destination | Port         |
| ------ | ------------------ | ---------- | ----------- | ------------ |
| Allow  | AllowInternettoWeb | Internet   | WebServers  | 443(HTTPS)   |
| Allow  | AllowWebToApp      | WebServers | AppServers  | 443(HTTPS)   |
| Allow  | AllowAppToDB       | AppServers | DbServers   | 1443 (MSSQL) |
| Deny   | DenyAllinbound     | Any        | Any         | Any          |

### Load Balancing

Microsoft Azure has two separate first-party load balancing solutions (third-party options are also available in the Azure Marketplace). Both can operate with externally facing or internally facing endpoints.

- Load Balancer (Layer 4) supports hash-based distribution and port-forwarding.
- App Gateway (Layer 7) supports features such as SSL offload, cookie-based session affinity and URL-based content routing.

Also with the App Gateway, you can optionally use the Web Application Firewall component.

## Azure Management Tools

We have spent most of our theory time walking through the Azure Portal. I would suggest that when it comes to following a DevOps culture and process, a lot of these tasks, especially around provisioning, will be done via an API or a command-line tool. I wanted to touch on some of those other management tools that we have available to us, as we need to know this for when we are automating the provisioning of our Azure environments.

### Azure Portal

The Microsoft Azure Portal is a web-based console that provides an alternative to command-line tools. You can manage your subscriptions within the Azure Portal and build, manage, and monitor everything from a simple web app to complex cloud deployments. Another thing you will find within the portal are these breadcrumbs. JSON, as mentioned before, is the underpinning of all Azure Resources: it might be that you start in the Portal to understand the features, services and functionality, but then later understand the JSON underneath to incorporate into your automated workflows.



There is also the Azure Preview portal, which can be used to view and test new and upcoming services and enhancements.



### PowerShell

Before we get into Azure PowerShell it is worth introducing PowerShell first. PowerShell is a task automation and configuration management framework, a command-line shell and a scripting language. We might, dare I say it, liken this to what we covered in the Linux section around shell scripting. PowerShell was first found on the Windows OS, but it is now cross-platform.

Azure PowerShell is a set of cmdlets for managing Azure resources directly from the PowerShell command line.

We can see below that you can connect to your subscription using the PowerShell command `Connect-AzAccount`



Then if we wanted to find some specific commands associated with Azure VMs, we can run the following command. You could spend hours learning and understanding more about this PowerShell programming language.



There are some great quickstarts from Microsoft on getting started and provisioning services from PowerShell [here](https://docs.microsoft.com/en-us/powershell/azure/get-started-azureps?view=azps-7.1.0)

### Visual Studio Code

Like many, and as you have all seen, my go-to IDE is Visual Studio Code.

Visual Studio Code is a free source-code editor made by Microsoft for Windows, Linux and macOS.

You will see below that there are lots of integrations and tools built into Visual Studio Code that you can use to interact with Microsoft Azure and the services within.



### Cloud Shell

Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way you work.



You can see from the below that when we first launch Cloud Shell within the portal, we can choose between Bash and PowerShell.



To use the Cloud Shell you will have to provide a bit of storage in your subscription.

When you select to use the Cloud Shell it spins up a machine. These machines are temporary, but your files are persisted in two ways: through a disk image and a mounted file share.



- Cloud Shell runs on a temporary host provided on a per-session, per-user basis
- Cloud Shell times out after 20 minutes without interactive activity
- Cloud Shell requires an Azure file share to be mounted
- Cloud Shell uses the same Azure file share for both Bash and PowerShell
- Cloud Shell is assigned one machine per user account
- Cloud Shell persists $HOME using a 5-GB image held in your file share
- Permissions are set as a regular Linux user in Bash

The above was copied from [Cloud Shell Overview](https://docs.microsoft.com/en-us/azure/cloud-shell/overview)

### Azure CLI

Finally, I want to cover the Azure CLI. The Azure CLI can be installed on Windows, Linux and macOS. Once installed you can type `az` followed by other commands to create, update, delete and view Azure resources.

When I initially came into my Azure learning I was a little confused by there being both Azure PowerShell and the Azure CLI.

I would love some feedback from the community on this as well. But the way I see it is that Azure PowerShell is a module added to Windows PowerShell or PowerShell Core (also available on other OSs, but not all), whereas the Azure CLI is a cross-platform command-line program that connects to Azure and executes those commands.

Both of these options have different syntax, although from what I can see and what I have done, they can do very similar tasks.

For example, creating a virtual machine from PowerShell would use the `New-AzVM` cmdlet whereas the Azure CLI would use `az vm create`.

You saw previously that I have the Azure PowerShell module installed on my system, but I also have the Azure CLI installed, which can be called through PowerShell on my Windows machine.



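To make that comparison concrete, the Azure CLI side of the VM-creation example could be sketched like this (all names, image alias and user below are placeholders, not from the text):

```shell
# Azure CLI equivalent of a simple New-AzVM invocation (illustrative values)
az vm create \
  --resource-group 90DaysOfDevOps \
  --name vm90days \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```

The same outcome in Azure PowerShell would come from `New-AzVM` with its own parameter names, which is exactly the syntax difference described above.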
Azure PowerShell

- Cross-platform PowerShell module, runs on Windows, macOS, Linux
- Requires Windows PowerShell or PowerShell

If there is a reason you cannot use PowerShell in your environment but you can use cmd or bash, then the Azure CLI is going to be your choice.

Next up we take all the theories we have been through and create some scenarios and get hands-on in Azure.

## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

See you on [Day 34](day34.md)

---
title: "#90DaysOfDevOps - Microsoft Azure Hands-On Scenarios - Day 34"
published: false
description: 90DaysOfDevOps - Microsoft Azure Hands-On Scenarios
tags: "DevOps, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048763
---

## Microsoft Azure Hands-On Scenarios

The last 6 days have been focused on Microsoft Azure and the public cloud in general. A lot of this foundation had to contain a lot of theory to understand the building blocks of Azure, but this will also nicely translate to the other major cloud providers.

I mentioned at the very beginning about getting a foundational knowledge of the public cloud and choosing one provider to at least begin with. If you are dancing between different clouds, then I believe you can get lost quite easily, whereas by choosing one you get to understand the fundamentals, and when you have those it is quite easy to jump into the other clouds and accelerate your learning.

In this final session, I am going to be picking and choosing my hands-on scenarios from this page, which is a reference created by Microsoft and is used for preparation for the [AZ-104 Microsoft Azure Administrator](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/)

There are some here, such as Containers and Kubernetes, that we have not covered in any detail as of yet, so I don't want to jump in there just yet.

In previous posts, we have created most of Modules 1, 2 and 3.

### Virtual Networking

Following [Module 04](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_04-Implement_Virtual_Networking.html):

I went through the above and changed a few namings for #90DaysOfDevOps. I also instead of using the Cloud Shell went ahead and logged in with my new user created on previous days with the Azure CLI on my Windows machine.
|
||||
I went through the above and changed a few namings for #90DaysOfDevOps. I also instead of using the Cloud Shell went ahead and logged in with my new user created on previous days with the Azure CLI on my Windows machine.
|
||||
|
||||
You can do this using the `az login` which will open a browser and let you authenticate to your account.
|
||||
You can do this using the `az login` which will open a browser and let you authenticate to your account.
|
||||
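As a quick sketch of that login flow (these are standard Azure CLI commands, but they need a browser and your own subscription, and the subscription name below is a placeholder):

```shell
# log in interactively (opens a browser) and confirm which subscription is active
az login
az account show --output table

# optionally pin a specific subscription by name or ID (placeholder value)
az account set --subscription "90DaysOfDevOps"
```
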
I have then created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
(Cloud\01VirtualNetworking)

Please make sure you change the file location in the script to suit your environment.

At this first stage, we have no virtual network or virtual machines created in our environment, I only have a cloud shell storage location configured in my resource group.

I first of all run my [PowerShell script](Cloud/01VirtualNetworking/Module4_90DaysOfDevOps.ps1)

![]()

- Task 1: Create and configure a virtual network

![]()

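In the lab the virtual network is built by the PowerShell script; for reference, a rough Azure CLI equivalent looks like this (the resource group, names and address ranges are placeholders, and it assumes an authenticated `az` session):

```shell
# create a virtual network with a single subnet (placeholder names/ranges)
az network vnet create \
  --resource-group 90DaysOfDevOps \
  --name 90days-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name subnet0 \
  --subnet-prefix 10.0.0.0/24
```
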
- Task 2: Deploy virtual machines into the virtual network

![]()

- Task 3: Configure private and public IP addresses of Azure VMs

![]()

- Task 4: Configure network security groups

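As with the virtual network, the network security group step has a rough Azure CLI equivalent; the group, names, priority and port below are placeholders, and this also assumes an authenticated `az` session:

```shell
# create an NSG and allow inbound HTTP (placeholder names and values)
az network nsg create --resource-group 90DaysOfDevOps --name 90days-nsg
az network nsg rule create \
  --resource-group 90DaysOfDevOps \
  --nsg-name 90days-nsg \
  --name allow-http \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 80
```
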
![]()

### Network Traffic Management

Following [Module 06](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_06-Implement_Network_Traffic_Management.html):

Next walkthrough. From the last one, we have gone into our resource group and deleted our resources. If you had not set up the user account like me to only have access to that one resource group, you could follow the module, changing the name to `90Days*`; this will delete all resources and the resource group. This will be my process for each of the following labs.

For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
(Cloud\02TrafficManagement)

- Task 1: Provision the lab environment

- Task 3: Test transitivity of virtual network peering

For this, my 90DaysOfDevOps group did not have access to the Network Watcher because of permissions. I expect this is because Network Watchers are one of those resources that are not tied to a resource group, which is where our RBAC was scoped for this user. I added the East US Network Watcher contributor role to the 90DaysOfDevOps group.

![]()

![]()

^ This is expected since the two spoke virtual networks do not peer with each other (virtual network peering is not transitive).

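For reference, hub-and-spoke peerings like the lab's are created in each direction; a rough Azure CLI sketch (the vnet names are placeholders, and an authenticated session is required):

```shell
# peer the hub to a spoke, then repeat in the opposite direction
az network vnet peering create \
  --resource-group 90DaysOfDevOps \
  --name hub-to-spoke1 \
  --vnet-name hub-vnet \
  --remote-vnet spoke1-vnet \
  --allow-vnet-access
```
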
- Task 4: Configure routing in the hub and spoke topology

I had another issue here with my account not being able to run the script as my user within the group 90DaysOfDevOps, which I am unsure of, so I did jump back into my main admin account. The 90DaysOfDevOps group is an owner of everything in the 90DaysOfDevOps Resource Group, so I would love to understand why I cannot run a command inside the VM.

![]()

I then was able to go back into my michael.cade@90DaysOfDevOps.com account and continue this section. Here we are running the same test again but now with the result being reachable.

![]()

![]()

### Azure Storage

Following [Module 07](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_07-Manage_Azure_Storage.html):

For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
(Cloud\03Storage)

- Task 1: Provision the lab environment


|
||||

|
||||
|
||||
I was a little impatient waiting for this to be allowed but it did work eventually.
|
||||
I was a little impatient waiting for this to be allowed but it did work eventually.
|
||||
|
||||

|
||||
|
||||
- Task 5: Create and configure an Azure Files shares
|
||||
|
||||
On the run command, this would not work with michael.cade@90DaysOfDevOps.com so I used my elevated account.
|
||||
On the run command, this would not work with michael.cade@90DaysOfDevOps.com so I used my elevated account.
|
||||
|
||||

|
||||

|
||||
@ -150,6 +154,7 @@ On the run command, this would not work with michael.cade@90DaysOfDevOps.com so
|
||||

|
||||
|
||||
### Serverless (Implement Web Apps)
|
||||
|
||||
Following [Module 09a](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_09a-Implement_Web_Apps.html):
|
||||
|
||||
- Task 1: Create an Azure web app
|
||||
@ -178,15 +183,15 @@ This script I am using can be found in (Cloud/05Serverless)
|
||||
|
||||

|
||||
|
||||
This wraps up the section on Microsoft Azure and the public cloud in general. I will say that I had lots of fun attacking and working through these scenarios.
|
||||
This wraps up the section on Microsoft Azure and the public cloud in general. I will say that I had lots of fun attacking and working through these scenarios.
|
||||
|
||||
## Resources
|
||||
## Resources
|
||||
|
||||
- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
|
||||
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
|
||||
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
|
||||
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)
|
||||
|
||||
Next, we will be diving into version control systems, specifically around git and then also code repository overviews and we will be choosing GitHub as this is my preferred option.
|
||||
Next, we will be diving into version control systems, specifically around git and then also code repository overviews and we will be choosing GitHub as this is my preferred option.
|
||||
|
||||
See you on [Day 35](day35.md)
---
title: "#90DaysOfDevOps - The Big Picture: Git - Version Control - Day 35"
published: false
description: 90DaysOfDevOps - The Big Picture Git - Version Control
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049041
---

## The Big Picture: Git - Version Control

Before we get into Git, we need to understand what version control is and why we need it. In this opener for Git, we will take a look at what version control is, and the basics of Git.

### What is Version Control?

Git is not the only version control system, so here we want to cover what options and what methodologies are available around version control.

The most obvious and biggest benefit of version control is the ability to track a project's history. We can look back over this repository using `git log` and see that we have many commits and many comments and what has happened so far in the project. Don't worry, we will get into the commands later. Now think if this was an actual software project full of source code, with multiple people committing to our software at different times; the different authors and then reviewers are all logged here so that we know what has happened, when, by whom, and who reviewed.

![]()

Version control before it was cool would have been something like manually creating a copy of your version before you made changes. It might be that you also comment out old useless code with the just-in-case mentality.

![]()

I have started using version control over not just source code but pretty much anything that talks about projects like this (90DaysOfDevOps), because why would you not want that rollback and log of everything that has gone on?

However, a big disclaimer: **Version Control is not a Backup!**

Another benefit of version control is the ability to manage multiple versions of a project. Let's create an example: we have a free app that is available on all operating systems, and then we have a paid-for app also available on all operating systems. The majority of the code is shared between both applications. We could copy and paste our code each commit to each app, but that is going to be very messy, especially as you scale your development to more than just one person; mistakes will also be made.

The premium app is where we are going to have additional features, let's call them premium commits; the free edition will just contain the normal commits.

The way this is achieved in version control is through branching.

![]()

Branching allows for two code streams for the same app as we stated above. But we will still want new features that land in our free version of the source code to be in our premium, and to achieve this we have something called merging.

![]()

Now, this sounds easy, but merging can be complicated, because you could have a team working on the free edition and another team working on the premium paid-for version, and what if both change code that affects aspects of the overall code? Maybe a variable gets updated and breaks something. Then you have a conflict that breaks one of the features. Version control cannot fix the conflicts; that is down to you. But version control allows this to be easily managed.

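The free/premium story above can be sketched in a scratch repository; the directory name, file name and the inline demo identity are all placeholders:

```shell
# sketch: free edition on main, premium features on a branch, then a merge
mkdir -p app-demo && cd app-demo
git init -q -b main
git -c user.name="demo" -c user.email="demo@example.com" commit -q --allow-empty -m "base"
git branch premium                      # premium features will live here
echo "shared feature" > feature.txt     # a feature added to the free edition
git add feature.txt
git -c user.name="demo" -c user.email="demo@example.com" commit -q -m "add shared feature"
git switch -q premium
git merge -q main                       # bring the shared feature into premium
```

Because premium has not diverged here, the merge is a simple fast-forward; the conflicts described above only appear when both branches change the same lines.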
The primary reason, if you have not picked up on it so far, for version control in general is the ability to collaborate: the ability to share code amongst developers. And when I say code, as I said before, we are seeing more and more use cases for source control beyond code; maybe it's a joint presentation you are working on with a colleague, or a 90DaysOfDevOps challenge where you have the community offering their corrections and updates throughout the project.

Without version control, how did teams of software developers even handle this? I find it hard enough when I am working on my projects to keep track of things. I expect they would split out the code into each functional module. Maybe a little part of the puzzle then was bringing the pieces together, and then problems and issues before anything would get released.

With version control, we have a single source of truth. We might all still work on different modules, but it enables us to collaborate better.

![]()

Another thing to mention here is that it's not just developers that can benefit from version control; it's all members of the team having visibility, but also tools having awareness or leverage. Project management tools can be linked here, tracking the work. We might also have a build machine, for example Jenkins, which we will talk about in another module: a tool that builds and packages the system, automating the deployment tests and metrics.

### What is Git?

Git is a tool that tracks changes to source code or any file, or we could also say Git is an open-source distributed version control system.

There are many ways in which git can be used on our systems; most commonly, or at least for me, I have seen it at the command line, but we also have graphical user interfaces and tools like Visual Studio Code that have git-aware operations we can take advantage of.

Now we are going to run through a high-level overview before we even get Git installed on our local machine.

Let's take the folder we created earlier.

![]()

To use this folder with version control we first need to initiate this directory using the `git init` command. For now, just think that this command puts our directory as a repository in a database somewhere on our computer.

![]()

Now we can create some files and folders and our source code can begin, or maybe it already has and we have something in here already. We can use the `git add .` command which puts all files and folders in our directory into a snapshot, but we have not yet committed anything to that database. We are just saying all files with the `.` are ready to be added.

![]()

Then we want to go ahead and commit our files; we do this with the `git commit -m "My First Commit"` command. We can give a reason for our commit, and this is suggested so we know what has happened for each commit.

![]()

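Put together, the init/add/commit flow described above looks like this in a scratch directory (the directory, file and inline demo identity are placeholders):

```shell
# sketch: the first commit flow, end to end
mkdir -p git-demo && cd git-demo
git init                                   # turn the folder into a repository
echo "hello 90DaysOfDevOps" > readme.md    # some content to track
git add .                                  # stage everything in the directory
git -c user.name="demo" -c user.email="demo@example.com" \
    commit -m "My First Commit"            # record the snapshot with a message
git log --oneline                          # the history now shows one commit
```
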
![]()

We can also check the status of our repository by using `git status`; this shows we have nothing to commit, and we can add a new file called sample code.ps1. If we then run the same `git status` command you will see that we have a file to be committed.

![]()

Add our new file using the `git add sample code.ps1` command and then we can run `git status` again and see our file is ready to be committed.

![]()

Another `git status` now shows everything is clean again.


|
||||
|
||||
We can then use the `git log` command which shows the latest changes and first commit.
|
||||
We can then use the `git log` command which shows the latest changes and first commit.
|
||||
|
||||

|
||||
|
||||
@ -103,38 +104,37 @@ If we wanted to see the changes between our commits i.e what files have been add
|
||||
|
||||

|
||||
|
||||
Which then displays what has changed in our case we added a new file.
|
||||
Which then displays what has changed in our case we added a new file.
|
||||
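A minimal sketch of that diff, again in a throwaway repository (names and the inline demo identity are placeholders):

```shell
# sketch: compare the last two commits
mkdir -p diff-demo && cd diff-demo && git init -q
echo "v1" > app.txt
git add . && git -c user.name="demo" -c user.email="demo@example.com" commit -q -m "first"
echo "v2" > app.txt
git add . && git -c user.name="demo" -c user.email="demo@example.com" commit -q -m "second"
git diff HEAD~1 HEAD           # line-by-line changes between the two commits
git diff HEAD~1 HEAD --stat    # or just a summary of which files changed
```
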
![]()

We can also, and we will go deeper into this later on, jump around our commits, i.e. we can go time travelling! By using our commit number we can use the `git checkout 709a` command to jump back in time without losing our new file.

![]()

But then equally we will want to move forward as well, and we can do this the same way with the commit number, or you can see here we are using the `git switch -` command to undo our operation.

![]()

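The time travel works in any repository; here is a sketch in a scratch repo where the first commit's hash is looked up rather than hard-coded (the `709a` above was just an example hash):

```shell
# sketch: check out an old commit, then return to the branch
mkdir -p travel-demo && cd travel-demo && git init -q -b main
git -c user.name="demo" -c user.email="demo@example.com" commit -q --allow-empty -m "first"
git -c user.name="demo" -c user.email="demo@example.com" commit -q --allow-empty -m "second"
first=$(git rev-list --max-parents=0 HEAD)   # hash of the very first commit
git checkout -q "$first"                     # detached HEAD at the old snapshot
git switch -q -                              # jump back to where we were
```
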
The TL;DR:

- Tracking a project's history
- Managing multiple versions of a project
- Sharing code amongst developers and a wider scope of teams and tools
- Coordinating teamwork
- Oh, and there is some time travel!

This might have seemed like a jump around, but hopefully you can see, without really knowing the commands used, the power and the big picture behind version control.

Next up we will be getting git installed and set up on your local machine and diving a little deeper into some other use cases and commands that we can achieve in Git.

## Resources

- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)

See you on [Day 36](day36.md)
---
title: "#90DaysOfDevOps - Installing & Configuring Git - Day 36"
published: false
description: 90DaysOfDevOps - Installing & Configuring Git
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048738
---

## Installing & Configuring Git

Git is an open source, cross-platform tool for version control. If you are like me, using Ubuntu or most Linux environments, you might find that you already have git installed, but we are going to run through the install and configuration.

Even if you already have git installed on your system, it is a good idea to make sure we are up to date.

### Installing Git

As already mentioned, Git is cross-platform. We will be running through Windows and Linux, but you can find macOS also listed [here](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)

For [Windows](https://git-scm.com/download/win) we can grab our installers from the official site.

You could also use `winget` on your Windows machine, think of this as your Windows Application Package Manager.

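For example (this assumes winget is available on your Windows machine; `Git.Git` is the package ID published by the Git project):

```shell
winget install --id Git.Git -e --source winget
```
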
|
||||
Before we install anything let's see what version we have on our Windows Machine. Open a PowerShell window and run `git --version`
|
||||
Before we install anything let's see what version we have on our Windows Machine. Open a PowerShell window and run `git --version`
|
||||
|
||||

|
||||
|
||||
We can also check our WSL Ubuntu version of Git as well.
|
||||
We can also check our WSL Ubuntu version of Git as well.
|
||||
|
||||

|
||||
|
||||
At the time of writing the latest Windows release is `2.35.1` so we have some updating to do there which I will run through. I expect the same for Linux.
|
||||
At the time of writing the latest Windows release is `2.35.1` so we have some updating to do there which I will run through. I expect the same for Linux.
|
||||
|
||||
I went ahead and downloaded the latest installer and ran through the wizard and will document that here. The important thing to note is that git will uninstall previous versions before installing the latest.
|
||||
I went ahead and downloaded the latest installer and ran through the wizard and will document that here. The important thing to note is that git will uninstall previous versions before installing the latest.
|
||||
|
||||
Meaning that the process shown below is also the same process for the most part as if you were installing from no git.
|
||||
Meaning that the process shown below is also the same process for the most part as if you were installing from no git.
|
||||
|
||||
It is a very simple installation. Once downloaded double click and get started. Read through the GNU license agreement. But remember this is free and open-source software.
|
||||
It is a very simple installation. Once downloaded double click and get started. Read through the GNU license agreement. But remember this is free and open-source software.
|
||||
|
||||

|
||||
|
||||
Now we can choose additional components that we would like to also install but also associate with git. On Windows, I always make sure I install Git Bash as this allows us to run bash scripts on Windows.
|
||||
Now we can choose additional components that we would like to also install but also associate with git. On Windows, I always make sure I install Git Bash as this allows us to run bash scripts on Windows.
|
||||
|
||||

|
||||
|
||||
We can then choose which SSH Executable we wish to use. IN leave this as the bundled OpenSSH that you might have seen in the Linux section.
|
||||
We can then choose which SSH Executable we wish to use. IN leave this as the bundled OpenSSH that you might have seen in the Linux section.
|
||||
|
||||

|
||||
|
||||
We then have experimental features that we may wish to enable, for me I don't need them so I don't enable them, you can always come back in through the installation and enable these later on.
|
||||
We then have experimental features that we may wish to enable, for me I don't need them so I don't enable them, you can always come back in through the installation and enable these later on.
|
||||
|
||||

|
||||
|
||||
Installation complete, we can now choose to open Git Bash and or the latest release notes.
|
||||
Installation complete, we can now choose to open Git Bash and or the latest release notes.
|
||||
|
||||

|
||||
|
||||
The final check is to take a look in our PowerShell window at what version of git we have now.
|
||||
The final check is to take a look in our PowerShell window at what version of git we have now.
|
||||
|
||||

|
||||
|
||||
Super simple stuff and now we are on the latest version. On our Linux machine, we seemed to be a little behind so we can also walk through that update process.
|
||||
Super simple stuff and now we are on the latest version. On our Linux machine, we seemed to be a little behind so we can also walk through that update process.
|
||||
|
||||
I simply run the `sudo apt-get install git` command.

![](Images/Day36_Git14.png)

You could also run the following, which will add the git PPA repository for software installations.

```
sudo add-apt-repository ppa:git-core/ppa -y
sudo apt-get update
sudo apt-get install git -y
git --version
```

### Configuring Git

When we first use git we have to define some settings:

- Name
- Email
- Default Editor
- Line Ending

This can be done at three levels:

- System = All users
- Global = All repositories of the current user
- Local = The current repository

Example:

`git config --global user.name "Michael Cade"`

`git config --global user.email "Michael.Cade@90DaysOfDevOPs.com"`

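As a sketch of how the levels interact (the names and paths here are examples only, assuming git is already installed):

```shell
# Values here are examples only.
# Global level = all repositories for the current user
git config --global user.name "Michael Cade"

# Local level = just the current repository, so it overrides the global value
mkdir demo-repo && cd demo-repo
git init
git config --local user.name "Work Account"

# With no level flag, git reads the most specific value available
git config user.name   # prints "Work Account" inside this repository
```

The same precedence applies to every setting: local beats global, which beats system.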
Your Operating System determines the default text editor. On my Ubuntu machine, without setting this, git uses nano. The below command will change this to Visual Studio Code.

`git config --global core.editor "code --wait"`

Now, if we want to see all git configurations, we can use the following command.

`git config --global -e`

![](Images/Day36_Git15.png)

On any machine this file will be named `.gitconfig`; on my Windows machine you will find it in your user account directory.

![](Images/Day36_Git16.png)

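For reference, the file that `git config --global -e` opens is plain INI-style text. A `.gitconfig` matching the settings above might look something like this (values are examples):

```
[user]
	name = Michael Cade
	email = Michael.Cade@90DaysOfDevOPs.com
[core]
	editor = code --wait
```

You can edit it by hand just as easily as through `git config` commands.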
### Git Theory

I mentioned in yesterday's post that there are other version control types, and we can split these into two categories: Client-Server and Distributed.

### Client-Server Version Control

Before git was around, Client-Server was the de facto method for version control. An example of this is [Apache Subversion](https://subversion.apache.org/), an open source version control system founded in 2000.

In this model of Client-Server version control, the developer's first step is to download the source code, the actual files, from the server. This doesn't remove conflicts, but it does reduce the complexity of the conflicts and how to resolve them.

![](Images/Day36_Git17.png)

Now, for example, let's say we have two developers working on the same files, and one wins the race and commits or uploads their file back to the server first with their new changes. When the second developer goes to update, they have a conflict.

![](Images/Day36_Git18.png)

So now the second dev needs to pull down the first dev's code change, check it against their own changes, and then commit once those conflicts have been settled.

![](Images/Day36_Git19.png)

### Distributed Version Control

Git is not the only distributed version control system, but it is very much the de facto standard.

Some of the major benefits of Git are:

- Fast
- Smart
- Flexible
- Safe & Secure

Unlike the Client-Server version control model, each developer downloads the entire source repository, meaning everything: the history of commits, all the branches etc.

![](Images/Day36_Git20.png)

## Resources

- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)

See you on [Day 37](day37.md)

Days/day37.md

---
title: "#90DaysOfDevOps - Gitting to know Git - Day 37"
published: false
description: 90DaysOfDevOps - Gitting to know Git
tags: "DevOps, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048707
---

## Gitting to know Git

Apologies for the terrible puns in the title and throughout. I am surely not the first person to turn Git into a dad joke!

In the last two posts, we learnt about version control systems and some of the fundamental workflows of git as a version control system ([Day 35](day35.md)). Then we got git installed on our system, updated and configured. We also went a little deeper into the theory of the Client-Server version control system versus Git, which is a distributed version control system ([Day 36](day36.md)).

Now we are going to run through some of the commands and use cases that we will all commonly see with git.

### Where to git help with git?

There are going to be times when you just cannot remember, or just don't know, the command you need to get things done with git. You are going to need help.

Google or any search engine is likely to be your first port of call when searching for help.

Secondly, the next place to look is the official git site and its documentation: [git-scm.com/docs](http://git-scm.com/docs). Here you will find not only a solid reference for all the commands available but also lots of different resources.

![](Images/Day37_Git1.png)

We can also access this same documentation from the terminal, which is super useful if you are without connectivity. If we choose the `git add` command, for example, we can run `git add --help` and we see the manual below.

![](Images/Day37_Git2.png)

We can also use `git add -h` in the shell, which gives us a summary of the options we have available.

![](Images/Day37_Git3.png)

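To summarise the help options in one place (these assume a standard git install):

```shell
git help         # overview of the most commonly used git commands
git add -h       # short summary of a command's options, printed in the shell
git add --help   # full manual page for the command (equivalent to `git help add`)
```

The `-h` form is handy when you only need to check a flag, while `--help` gives the full manual.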
### Myths surrounding Git

"Git has no access control" - You can empower a leader to maintain source code.

"Git is too heavy" - Git can provide shallow repositories, which means a reduced amount of history if you have large projects.

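A shallow repository in practice looks like this (`$REPO_URL` is a placeholder for any repository with a long history):

```shell
# Fetch only the most recent commit rather than the full history
git clone --depth 1 "$REPO_URL" shallow-copy
cd shallow-copy
git rev-list --count HEAD   # prints 1: only the latest commit was fetched
```

For a large project this can cut clone time and disk usage dramatically, at the cost of local history.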
### Real shortcomings

Git is not ideal for binary files. It is great for source code but not great for executable files or videos, for example.

Git is not user-friendly; the fact that we have to spend time talking about the commands and functions of the tool is probably a key sign of that.

Overall though, git is hard to learn but easy to use.

### The git ecosystem

I want to briefly cover the ecosystem around git. I won't deep dive into these areas, but I think it's important to note them here at a high level.

Almost all modern development tools support Git.

- Developer tools - We have already mentioned Visual Studio Code, but you will also find git plugins and integrations in Sublime Text and other text editors and IDEs.

- Team tools - Tools like Jenkins from a CI/CD point of view, Slack as a messaging framework and Jira for project management and issue tracking.

- Cloud Providers - All the large cloud providers support git: Microsoft Azure, Amazon AWS, and Google Cloud Platform.

- Git-Based services - Then we have GitHub, GitLab and BitBucket, which we will cover in more detail later on. I have heard these services described as the social network for code!

### The Git Cheatsheet

We have not covered most of these commands yet, but having looked at some cheat sheets available online I wanted to document some of the git commands and their purpose. We don't need to remember them all; with more hands-on practice and use, you will pick up at least the git basics.

I have taken these from [Atlassian](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet), but writing them down and reading the descriptions is a good way to get to know what the commands do, as well as getting hands-on in everyday tasks.

### Git Basics

| Command       | Example                     | Description                                                                                                                   |
| ------------- | --------------------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| git init      | `git init <directory>`      | Create an empty git repository in the specified directory.                                                                      |
| git clone     | `git clone <repo>`          | Clone repository located at <repo> onto local machine.                                                                          |
| git config    | `git config user.name`      | Define author name to be used for all commits in current repository. `system`, `global`, `local` flag to set config options.    |
| git add       | `git add <directory>`       | Stage all changes in <directory> for the next commit. We can also add <files> and <.> for everything.                           |
| git commit -m | `git commit -m "<message>"` | Commit the staged snapshot, use <message> to detail what is being committed.                                                    |
| git status    | `git status`                | List files that are staged, unstaged and untracked.                                                                             |
| git log       | `git log`                   | Display all commit history using the default format. There are additional options with this command.                            |
| git diff      | `git diff`                  | Show unstaged changes between your index and working directory.                                                                 |

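The basics table above as a minimal local walkthrough (the folder, file and message names are just examples, and the commands assume your name and email are already configured):

```shell
mkdir my-project && cd my-project
git init                       # create an empty repository
echo "hello" > readme.md
git status                     # readme.md is listed as untracked
git add readme.md              # stage the change
git commit -m "first commit"   # commit the staged snapshot
git log --oneline              # the history now shows one commit
```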
### Git Undoing Changes

| Command    | Example               | Description                                                                                                                           |
| ---------- | --------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| git revert | `git revert <commit>` | Create a new commit that undoes all of the changes made in <commit>, then apply it to the current branch.                                |
| git reset  | `git reset <file>`    | Remove <file> from the staging area, but leave the working directory unchanged. This unstages a file without overwriting any changes.    |
| git clean  | `git clean -n`        | Shows which files would be removed from the working directory. Use `-f` in place of `-n` to execute the clean.                           |

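A quick sketch of `git reset <file>` and `git clean` together (file names are examples; run this only inside a throwaway repository that already has a commit):

```shell
echo "temp" > scratch.txt
git add scratch.txt
git reset scratch.txt   # unstage the file; the file itself is untouched
git clean -n            # dry run: reports "Would remove scratch.txt"
git clean -f            # actually delete the untracked file
```

The `-n` dry run is a good habit before `-f`, since clean deletes files permanently.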
### Git Rewriting History

| Command    | Example              | Description                                                                                                                                |
| ---------- | -------------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| git commit | `git commit --amend` | Replace the last commit with the staged changes and the last commit combined. Use with nothing staged to edit the last commit’s message.      |
| git rebase | `git rebase <base>`  | Rebase the current branch onto <base>. <base> can be a commit ID, branch name, a tag, or a relative reference to HEAD.                        |
| git reflog | `git reflog`         | Show a log of changes to the local repository’s HEAD. Add --relative-date flag to show date info or --all to show all refs.                   |

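`git commit --amend` in action, for the common "forgot a file" case (file names and messages are examples; run in a throwaway repo with identity configured):

```shell
echo "v1" > feature.txt
git add feature.txt
git commit -m "add feature"

echo "notes" > forgotten-file.txt
git add forgotten-file.txt
git commit --amend -m "add feature"   # replaces the previous commit rather than adding a second one

git log --oneline -1                  # the single amended commit now contains both files
```

Because amend rewrites history, avoid it on commits you have already pushed and shared.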
### Git Branches

| Command      | Example                    | Description                                                                                                     |
| ------------ | -------------------------- | ----------------------------------------------------------------------------------------------------------------- |
| git branch   | `git branch`               | List all of the branches in your repo. Add a <branch> argument to create a new branch with the name <branch>.      |
| git checkout | `git checkout -b <branch>` | Create and check out a new branch named <branch>. Drop the -b flag to checkout an existing branch.                 |
| git merge    | `git merge <branch>`       | Merge <branch> into the current branch.                                                                            |

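A minimal branch-and-merge loop (branch and file names are examples; `main` is assumed as the default branch, older setups may use `master`):

```shell
git checkout -b new-feature     # create and switch to a new branch
echo "change" > feature.txt
git add feature.txt
git commit -m "work on feature"
git checkout main               # switch back to the main branch
git merge new-feature           # fast-forward merge brings the commit in
git branch                      # lists both branches; * marks the current one
```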
### Git Remote Repositories

| Command        | Example                       | Description                                                                                                                           |
| -------------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| git remote add | `git remote add <name> <url>` | Create a new connection to a remote repo. After adding a remote, you can use <name> as a shortcut for <url> in other commands.        |
| git fetch      | `git fetch <remote> <branch>` | Fetches a specific <branch> from the repo. Leave off <branch> to fetch all remote refs.                                               |
| git pull       | `git pull <remote>`           | Fetch the specified remote’s copy of current branch and immediately merge it into the local copy.                                     |
| git push       | `git push <remote> <branch>`  | Push the branch to <remote>, along with necessary commits and objects. Creates named branch in the remote repo if it doesn’t exist.   |

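A remote can be any reachable repository, so a local bare repo can stand in for a server to try these out (paths and the `main` branch name are assumptions for the sketch; run inside a repo that has at least one commit):

```shell
git init --bare ../central.git   # a bare repo acts as our "server"
git remote add origin ../central.git
git push origin main             # publish the branch (creates it on the remote)
git fetch origin                 # download remote refs without merging anything
git pull origin main             # fetch and merge in one step
```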
### Git Diff

| Command           | Example             | Description                                                             |
| ----------------- | ------------------- | ----------------------------------------------------------------------- |
| git diff HEAD     | `git diff HEAD`     | Show the difference between the working directory and the last commit.  |
| git diff --cached | `git diff --cached` | Show the difference between staged changes and the last commit.         |

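The three diff views side by side (file name and contents are examples; run in a throwaway repo with identity configured):

```shell
echo "one" > notes.txt
git add notes.txt
git commit -m "add notes"

echo "two" >> notes.txt
git diff            # the unstaged edit appears here
git add notes.txt
git diff            # now empty: nothing is unstaged
git diff --cached   # the staged edit appears here instead
git diff HEAD       # everything (staged + unstaged) vs the last commit
```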
### Git Config

| Command                                               | Example                                                 | Description                                                                                                                                     |
| ----------------------------------------------------- | ------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| git config --global user.name <name>                  | `git config --global user.name <name>`                  | Define the author name to be used for all commits by the current user.                                                                          |
| git config --global user.email <email>                | `git config --global user.email <email>`                | Define author email to be used for all commits by the current user.                                                                             |
| git config --global alias.<alias-name> <git-command>  | `git config --global alias.<alias-name> <git-command>`  | Create a shortcut for a git command.                                                                                                            |
| git config --system core.editor <editor>              | `git config --system core.editor <editor>`              | Set the text editor to be used by commands for all users on the machine. The <editor> arg should be the command that launches the desired editor. |
| git config --global --edit                            | `git config --global --edit`                            | Open the global configuration file in a text editor for manual editing.                                                                         |

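The alias row is worth trying out; for example (the alias name `st` is just a common convention):

```shell
# Define `git st` as a shortcut for `git status`
git config --global alias.st status
git st   # behaves exactly like `git status`, extra flags pass through too
```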
### Git Rebase

| Command              | Example                | Description                                                                                                                                   |
| -------------------- | ---------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- |
| git rebase -i <base> | `git rebase -i <base>` | Interactively rebase current branch onto <base>. Launches editor to enter commands for how each commit will be transferred to the new base.    |

### Git Pull

| Command                    | Example                      | Description                                                                                                                                     |
| -------------------------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| git pull --rebase <remote> | `git pull --rebase <remote>` | Fetch the remote’s copy of the current branch and rebase it into the local copy. Uses git rebase instead of merge to integrate the branches.      |

### Git Reset

| Command                   | Example                     | Description                                                                                                                                     |
| ------------------------- | --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| git reset                 | `git reset`                 | Reset the staging area to match the most recent commit but leave the working directory unchanged.                                                 |
| git reset --hard          | `git reset --hard`          | Reset staging area and working directory to match most recent commit and overwrites all changes in the working directory.                         |
| git reset <commit>        | `git reset <commit>`        | Move the current branch tip backwards to <commit>, reset the staging area to match, but leave the working directory alone.                        |
| git reset --hard <commit> | `git reset --hard <commit>` | Same as previous, but resets both the staging area & working directory to match. Deletes uncommitted changes, and all commits after <commit>.     |

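The difference between a plain (mixed) reset and a hard reset, using `HEAD~1` as the target (destructive; only try this in a throwaway repo with at least two commits):

```shell
git reset HEAD~1          # undo the last commit, keeping its changes in the working directory
git reset --hard HEAD~1   # undo the last commit AND discard its changes entirely
```

Note that `--hard` still leaves genuinely untracked files alone; it only discards tracked changes.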
### Git Push

| Command                   | Example                     | Description                                                                                                                                      |
| ------------------------- | --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| git push <remote> --force | `git push <remote> --force` | Forces the git push even if it results in a non-fast-forward merge. Do not use the --force flag unless you’re sure you know what you’re doing.      |
| git push <remote> --all   | `git push <remote> --all`   | Push all of your local branches to the specified remote.                                                                                            |
| git push <remote> --tags  | `git push <remote> --tags`  | Tags aren’t automatically pushed when you push a branch or use the --all flag. The --tags flag sends all of your local tags to the remote repo.     |

## Resources

- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)

See you on [Day 38](day38.md)
---
title: "#90DaysOfDevOps - Staging & Changing - Day 38"
published: false
description: 90DaysOfDevOps - Staging & Changing
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049042
---
## Staging & Changing

We have already covered some of the basics, but putting things into a walkthrough makes it easier to learn and understand how and why we are doing it this way. Before we get into any git-based services such as GitHub, git has plenty of power that we can take advantage of on our local workstation.


We are going to take the project folder we created at the start of the git session and walk through some of the simple steps we can take with git. We created a folder on our local machine and we initialised it with the `git init` command.



We can also see that now we have initialised the folder, we have a hidden folder in our directory.



This is where the details of the git repository are stored as well as the information regarding our branches and commits.

### Staging Files

We then start working in our empty folder and maybe we add some source code in the first days of work. We create our readme.md file and we can see that file in the directory; next we check `git status` and it knows about the new readme.md file, but we have not committed the file yet.



We can stage our readme.md file with the `git add README.md` command, then we can check `git status` again to see the file staged and ready to be committed.



Next up we want to commit this, our first commit or our first snapshot of our project. We can do this by using the `git commit -m "Meaningful message"` command so that we can easily see what has changed for each commit. Also, notice the yellow cross changes now to a green tick. This is something I have within my terminal with the theme I use, something we covered in the Linux section.


### Committing Changes

We are most likely going to want to add more files or even change the files we have in our directory. We have already done our first commit above, but now we are going to add more details and more files.


We could repeat our process from before: create or edit our file, `git add .` to add all files to the staging area, then `git commit -m "meaningful message"`, and this would work just fine. But to be able to offer a meaningful message on commit of what has changed, you might not want to write something out like `git commit -m "Well, I changed some code because it did not work and when I fixed that I also added something new to the readme.md to ensure everyone knew about the user experience and then I made a tea."` This would work as well, though you would probably make it more descriptive; the preferred way here is to add the message with a text editor.


If we run `git commit` after running `git add`, it will open our default text editor, which in my case here is nano. I added some changes to the file, ran `git status` to show what is and is not staged, then used `git add` to add the file to the staging area, and finally ran `git commit`, which opened nano.



When nano opens you can then add your short and long description and then save the file.


### Committing Best Practices

There is a balance here between when to commit and committing often. We do not want to wait until the project is finished before committing; each commit should be meaningful, and commits should not couple unrelated tasks together. If you have a bug fix and a typo, make sure they are two separate commits as a best practice.


Make the commit message mean something.


In terms of wording, the team or yourself should stick to the same wording style for each commit.

### Skipping the Staging Area

Do we always have to stage our changes before committing them?


The answer is no, we can skip the staging area, but don't see this as a shortcut; you have to be 100% sure that you are not going to need that staged snapshot to roll back to, as it is a risky thing to do.


### Removing Files

What about removing files from our project? Maybe we have another file in our directory that we have committed, but the project no longer needs or uses it; as a best practice, we should remove it.


Just because we remove the file from the directory, git is still aware of this file and we also need to remove it from the repository. You can see the workflow for this below.



That could be a bit of a pain to remember or deal with if you have a large project with many moving files and folders. Instead, we can do this with the single command `git rm oldcode.ps1`.


### Renaming or Moving Files

Within our operating system, we can rename and move our files, and we will no doubt need to do this from time to time with our projects. Similar to removing, though, this is a two-step process: we change our files on our OS and then we have to make sure the changes are staged correctly. The steps are as follows:



However, like removing files from the operating system and then the git repository, we can perform this rename using a git command too.


### Ignoring Files

We may have a requirement to ignore files or folders within our project that we might only use locally, or that would just be wasted space if shared with the overall project; a good example of this could be logs. The same goes for secrets that you do not want to share in public or across teams.


We can ignore files by adding folders or files to the `.gitignore` file in our project directory.



You can then open the `.gitignore` file and see that we have the logs/ directory present. But we could also add additional files and folders here to ignore.



We can then run `git status` and see what has happened.



There are also times when you might need to go back and ignore files and folders; maybe you did want to share the logs folder but then later realised that you didn't. You will have to use `git rm --cached <file>` to remove previously tracked files and folders from the staging area if you now want to ignore them.

### Short Status

We have been using `git status` a lot to understand what is in our staging area and what is not; it's a very comprehensive command with lots of detail. Most of the time you will just want to know what has been modified or what is new. We can use `git status -s` for a short version of this detail. I would usually set an alias on my system to use `git status -s` over the more detailed command.



In tomorrow's post we will continue to look at short examples of these common git commands.

## Resources
- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)

See you on [Day 39](day39.md)
---
title: "#90DaysOfDevOps - Viewing, unstaging, discarding & restoring - Day 39"
published: false
description: "90DaysOfDevOps - Viewing, unstaging, discarding & restoring"
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048827
---
## Viewing, unstaging, discarding & restoring

Continuing from where we finished yesterday with some of the commands we have in git and how to leverage git with your projects. Remember we have not touched GitHub or any other git-based service yet; this is all to help you keep control of your projects locally for the moment, but it will all become useful when we start to integrate with those tools.


### Viewing the Staged and Unstaged Changes


It is good practice to make sure you view the staged and unstaged code before committing. We can do this by running the `git diff --staged` command.



This then shows us all the changes we have made and all the new files we have added or deleted.


Changes in the modified files are indicated with `---` or `+++`; you can see below that we just added `+add some text below`, which means they are new lines.



We can also run `git diff` to compare our staging area with our working directory. Let's make some changes to our newly added file code.txt and add some lines of text.



If we then run `git diff`, we can compare and see the output below.


### Visual Diff Tools

For me, the above is quite confusing, so I would much rather use a visual tool.


To name a few visual diff tools:

- KDiff3
- P4Merge
- WinMerge (Windows Only)
- VSCode


To set this in git, we run the following command: `git config --global diff.tool vscode`


We are going to run the above and we are going to set some parameters when we launch VScode.



We can also check our configuration with `git config --global -e`



We can then use `git difftool` to open our diff visual tool.



This opens our VScode editor on the diff page and compares the two; we have only modified one file, going from nothing to a line of code on the right side.



I find this method much easier for tracking changes, and it is similar to what we will see when we look into git-based services such as GitHub.


We can also use `git difftool --staged` to compare staged files with committed files.



Then we can cycle through our changed files before we commit.



I am using VScode as my IDE and, like most IDEs, it has this functionality built in. It is very rare that you would need to run these commands from the terminal, although they are helpful if you don't have an IDE installed for some reason.

### Viewing the History

We previously touched on `git log`, which provides us with a comprehensive view of all commits we have made in our repository.



Each commit has its own hexadecimal string, unique to the repository. Here you can see which branch we are working on, and also the author, date and commit message.


We also have `git log --oneline`, which gives us a much smaller version of the hexadecimal string that we can use in other `diff` commands. We also only have the one-line description or commit message.



We can reverse this to start with the first commit by running `git log --oneline --reverse`, and now we see our first commit at the top of our page.


### Viewing a Commit

Being able to look at the commit message is great if you have been conscientious about following best practices and have added meaningful commit messages; however, there is also the `git show` command, which allows us to inspect and view a commit.


We can use `git log --oneline --reverse` to get a list of our commits, and then we can take one of those and run `git show <commit ID>`.



The output of that command will look like the below, with the detail of the commit, the author and what changed.



We can also use `git show HEAD~1`, where 1 is how many steps back from the current version we want to go.


This is great if you want some detail on your files, but what if we want to list all the files in a tree for the whole snapshot directory? We can achieve this with the `git ls-tree HEAD~1` command, again going back one snapshot from the last commit. We can see below that we have two blobs, which indicate files, whereas a tree would indicate a directory. You can also see commits and tags in this information.



We can then use the above to drill in and see the contents of our files (blobs) using the `git show` command.



Then the contents of that specific version of the file will be shown.


### Unstaging Files

There will be a time when you have perhaps used `git add .` but there are files you do not wish to commit to that snapshot just yet. In the example below, I have added newfile.txt to my staging area but I am not ready to commit this file, so I am going to use `git restore --staged newfile.txt` to undo the `git add` step.



We can also do the same for modified files such as main.js and unstage the change; see above we have a green M for modified, and below we are unstaging those changes.



I have found this command quite useful during the 90DaysOfDevOps as I sometimes work ahead of the days where I feel I want to make notes for the following day but I don't want to commit and push to the public GitHub repository.

### Discarding Local Changes

Sometimes we might make changes but we are not happy with them and we want to throw them away. We are going to use the `git restore` command again, which is able to restore files from our snapshots or previous versions. We can run `git restore .` against our directory and we will restore everything from our snapshot, but notice that our untracked file is still present: there is no previously tracked file called newfile.txt.



Now, to remove newfile.txt or any untracked files, we can use `git clean`. Run on its own, it will just give us a warning.



Or, if we know the consequences, then we might want to run `git clean -fd` to force the removal of all untracked files and directories.



### Restoring a File to an Earlier Version


As we have alluded to throughout, a big portion of what git can help with is being able to restore copies of your files from your snapshots (this is not a backup, but it is a very fast restore point). My advice is that you also save copies of your code in other locations using a backup solution.


As an example, let's go and delete the most important file in our directory. Notice we are using Unix-based commands to remove this from the directory, not git commands.



Now we have no readme.md in our working directory. We could have used `git rm readme.md` and this would then be reflected in our git database. Let's also delete it from there to simulate it being removed completely.



Let's now commit this with a message and prove that we no longer have anything in our working directory or staging area.



Mistakes were made and we now need this file back!


We could undo the last commit (for example with `git reset`), but what if the deletion was a while back? We can use our `git log` command to find our commits, and we find that our file is in an earlier commit; but we don't want all of the commits since then to be undone, so we can use the command `git restore --source=HEAD~1 README.md` to specifically find the file and restore it from our snapshot.


You can see that using this process we now have the file back in our working directory.



We now have a new untracked file and we can use the commands previously mentioned to track, stage and commit our files and changes.


### Rebase vs Merge


This seems to be the biggest headache when it comes to git: when to use rebase and when to use merge on your git repositories.


The first thing to know is that both `git rebase` and `git merge` solve the same problem: both integrate changes from one branch into another branch. However, they do this in different ways.


Let's start with a new feature in a new dedicated branch, while the main branch continues with new commits.



The easy option here is to merge the main branch into the feature branch with `git checkout feature` followed by `git merge main`.



Merging is easy because it is non-destructive: the existing branches are not changed in any way. However, this also means that the feature branch will have an extraneous merge commit every time you need to incorporate upstream changes. If main is very busy or active, this can pollute the feature branch's history.

As an alternate option, we can rebase the feature branch onto the main branch using
|
||||
As an alternate option, we can rebase the feature branch onto the main branch using
|
||||
|
||||
```
|
||||
git checkout feature
|
||||
git rebase main
|
||||
```
|
||||
```
|
||||
|
||||
This moves the feature branch (the entire feature branch) effectively incorporating all of the new commits in the main. But, instead of using a merge commit, rebasing re-writes the project history by creating brand new commits for each commit in the original branch.
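The same scenario as the merge sketch above, but rebased; again a hedged, scratch-repository sketch (invented names, git 2.28+ assumed for `git init -b`):

```shell
# Sketch: rebasing replays the feature commits on top of main,
# giving a linear history with no merge commit
set -e
cd "$(mktemp -d)"
git init -q -b main .
git config user.email "demo@example.com"
git config user.name "Demo"

echo base > app.txt && git add . && git commit -qm "initial commit"
git checkout -qb feature
echo feature > feature.txt && git add . && git commit -qm "feature work"
git checkout -q main
echo extra >> app.txt && git commit -qam "main moves on"

git checkout -q feature
git rebase -q main            # "feature work" becomes a brand new commit on top
git log --oneline             # linear: feature work, main moves on, initial commit
```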



The biggest benefit of rebasing is a much cleaner project history. It also eliminates unnecessary merge commits, and as you compare the last two images, you can follow an arguably much cleaner, linear project history.

Although it's still not a foregone conclusion, choosing the cleaner history also comes with tradeoffs. If you do not follow [the golden rule of rebasing](https://www.atlassian.com/git/tutorials/merging-vs-rebasing#the-golden-rule-of-rebasing), re-writing project history can be potentially catastrophic for your collaboration workflow. And, less importantly, rebasing loses the context provided by a merge commit: you can't see when upstream changes were incorporated into the feature.

## Resources

- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)
- [Exploring the Git command line – A getting started guide](https://veducate.co.uk/exploring-the-git-command-line/)

See you on [Day 40](day40.md)

Days/day40.md

---
title: "#90DaysOfDevOps - Social Network for code - Day 40"
published: false
description: 90DaysOfDevOps - Social Network for code
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049044
---

## Social Network for code

Exploring GitHub | GitLab | BitBucket

Today I want to cover some of the git-based services that we have likely all heard of and expect we also use daily.

We will then use some of our prior session knowledge to move copies of our data to each of the main services.

I called this section "Social Network for Code", so let me explain why.

### GitHub

The most common, at least for me, is GitHub. GitHub is a web-based hosting service for git. It is most commonly used by software developers to store their code, providing Source Code Management with the git version control features as well as a lot of additional features. It allows teams or open contributors to easily communicate and provides a social aspect to coding (hence the social networking title). Since 2018 GitHub has been part of Microsoft.

GitHub has been around for quite some time and was founded in 2007/2008, with over 40 million users on the platform today.

GitHub Main Features

- Code Repository
- Pull Requests
- Project Management toolset - Issues
- CI / CD Pipeline - GitHub Actions

In terms of pricing, GitHub has different levels of pricing for its users. More can be found on [Pricing](https://github.com/pricing)

For this, we will cover the free tier.

I am going to be using my already created GitHub account during this walkthrough; if you do not have an account then on the opening GitHub page there is a sign-up option and some easy steps to get set up.

### GitHub opening page

When you first log in to your GitHub account you get a page containing a lot of widgets giving you options of where and what you would like to see or do. First up we have the "All Activity" feed; this is going to give you a look into what is happening with your repositories or activity in general associated with your organisation or account.



Next, we have our Code Repositories, either our own or repositories that we have interacted with recently. We can also quickly create new repositories or search repositories.



We then have our recent activity; for me these are issues and pull requests that I have created or contributed to recently.



Over on the right side of the page, we have some referrals for repositories that we might be interested in, most likely based on your recent activity or own projects.



To be honest, I am very rarely on the home page that we just saw and described, although I now see that the feed could be really useful to help interact with the community a little better on certain projects.

Next up, if we want to head into our GitHub profile, we can navigate to the top right corner where, on your profile image, there is a drop-down which allows you to navigate through your account. From here, to access your profile, select "Your Profile".



Next, your profile page will appear. By default, unless you change your configuration, you are not going to see what I have; I have added some functionality that shows my recent blog posts over on [vZilla](https://vzilla.co.uk) and also my latest videos on my [YouTube](https://m.youtube.com/c/MichaelCade1) channel.

You are not going to be spending much time looking at your profile, but this is a good profile page to share around your network so they can see the cool projects you are working on.



We can then drill down into the building block of GitHub, the repositories. Here you are going to see your repositories and if you have private repositories they are also going to be shown in this long list.



As the repository is so important to GitHub let me choose a pretty busy one of late and run through some of the core functionality that we can use here on top of everything I am already using when it comes to editing our "code" in git on my local system.

First of all, from the previous window, I have selected the 90DaysOfDevOps repository and we get to see this view. You can see from this view we have a lot of information: we have our main code structure in the middle showing the files and folders that are stored in our repository. We have our readme.md being displayed down at the bottom. Over to the right of the page, we have an about section where the repository has a description and purpose. Then we have a lot of information underneath this showing how many people have starred, forked, and watched the project.



If we scroll down a little further you will also see that we have Releases; these are from the golang part of the challenge. We do not have any packages in our project, and we have our contributors listed here. (Thank you, community, for assisting with my spelling and fact-checking.) We then have the languages used; again these are from different sections in the challenge.



At the top of the page you are going to see a list of tabs. These may vary and they can be modified to only show the ones you require. You will see here that I am not using all of these and I should remove them to make sure my whole repository is tidy.

First up we had the code tab which we just discussed, but these tabs are always available when navigating through a repository, which is super useful so we can jump between sections quickly and easily. Next, we have the issues tab.

Issues let you track your work on GitHub, where development happens. In this specific repository you can see I have some issues focused on adding diagrams or typos but also we have an issue stating a need or requirement for a Chinese version of the repository.

If this was a code repository then this is a great place to raise concerns or issues with the maintainers, but remember to be mindful and detailed about what you are reporting, and give as much detail as possible.



The next tab is Pull Requests. Pull requests let you tell others about changes you've pushed to a branch in a repository. This is where someone may have forked your repository and made changes such as bug fixes or feature enhancements, or, in a lot of cases in this repository, just typo fixes.

We will cover forking later on.



I believe the next tab, Discussions, is quite new, but I thought for a project like #90DaysOfDevOps this could help guide the content journey and also help the community as they walk through their learning journey. I have created some discussion groups for each section of the challenge so people can jump in and discuss.



The Actions tab is going to enable you to build, test and deploy code and a lot more right from within GitHub. GitHub Actions will be something we cover in the CI/CD section of the challenge, but this is where we can set some configuration to automate steps for us.

On my main GitHub profile, I am using GitHub Actions to fetch the latest blog posts and YouTube videos to keep things up to date on that home screen.



I mentioned above how GitHub is not just a source code repository but also a project management tool. The Projects tab enables us to build out project tables and kanban-type boards so that we can link issues and PRs to better collaborate on the project and have visibility of those tasks.



I know that issues seem like a good place to log feature requests, and they are, but the wiki page allows a comprehensive roadmap for the project to be outlined with its current status; in general it lets you better document your project, be it troubleshooting or how-to type content.



Not so applicable to this project, but the Security tab is there to make sure that contributors know how to deal with certain tasks. We can define a policy here, but also code scanning add-ons to make sure your code, for example, does not contain secret environment variables.



For me the insights tab is great, it provides so much information about the repository from how much activity has been going on down to commits and issues, but it also reports on traffic to the repository. You can see a list on the left side that allows you to go into great detail about metrics on the repository.



Finally, we have the Settings tab. This is where we can get into the details of how we run our repository; I am currently the only maintainer of the repository, but we could share this responsibility here. We can also define integrations and other such tasks here.



This was a super quick overview of GitHub; I think there are some other areas I have mentioned that need explaining in a little more detail. As mentioned, GitHub houses millions of repositories; mostly these hold source code, and they can be publicly or privately accessible.

### Forking

I am going to get more into Open-Source in the session tomorrow, but a big part of any code repository is the ability to collaborate with the community. Let's think of the scenario: I want a copy of a repository because I want to make some changes to it; maybe I want to fix a bug, or maybe I want to change something to use it for a use case that was not the intended one of the original maintainer of the code. This is what we would call forking a repository. A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project.

Let me head back to the opening page after login and see one of those suggested repositories.



If we click on that repository we are going to get the same look as we have just walked through on the 90DaysOfDevOps repository.



If we notice below, we have 3 options: watch, fork and star.

- Watch - Updates when things happen to the repository.
- Fork - a copy of a repository.
- Star - "I think your project is cool"



Given our scenario of wanting a copy of this repository to work on, we are going to hit the fork option. If you are a member of multiple organisations then you will have to choose where the fork will take place; I am going to choose my profile.

Now we have our copy of the repository that we can freely work on and change as we see fit. This would be the start of the pull request process that we mentioned briefly before but we will cover it in more detail tomorrow.
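A common follow-up to forking is adding a second remote, conventionally named `upstream`, that points back at the original repository so we can pull in its new commits later. Here is a sketch using local directories standing in for the GitHub URLs (all repository names and commit messages are invented):

```shell
# Sketch: keeping a fork in sync with the original ("upstream") repository,
# with local paths standing in for GitHub URLs
set -e
work=$(mktemp -d) && cd "$work"

git init -q -b main original               # stands in for the original project
git -C original config user.email "demo@example.com"
git -C original config user.name "Demo"
echo v1 > original/readme.md
git -C original add . && git -C original commit -qm "upstream commit"

git clone -q --bare original fork.git      # stands in for clicking "Fork" on GitHub
git clone -q fork.git local                # clone OUR fork to the local machine
cd local
git remote add upstream "$work/original"   # second remote pointing at the original
git remote -v                              # origin -> fork.git, upstream -> original

git -C "$work/original" commit -q --allow-empty -m "new upstream work"

git fetch -q upstream                      # pull the new upstream commits...
git merge -q --ff-only upstream/main       # ...into our local main branch
git log --oneline
```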



Ok, I hear you say, but how do I make changes to this repository and code if it's on a website? Well, you can go through and edit on the website, but it's not going to be the same as using your favourite IDE on your local system with your favourite colour theme. For us to get a copy of this repository on our local machine, we will perform a clone of the repository. This will allow us to work on things locally and then push our changes back into our forked copy of the repository.

We have several options when it comes to getting a copy of this code as you can see below.

There is a local version available of GitHub Desktop which gives you a visual desktop application to track changes and push and pull changes between local and GitHub.

For this little demo, I am going to use the HTTPS URL we see on there.


|
||||
|
||||
Now on our local machine, I am going to navigate to a directory I am happy to download this repository to and then run `git clone url`



Now we could open it in VS Code to make some changes.



Let's now make some changes. I want to change all those links and replace them with something else.



Now if we check back on GitHub and find our readme.md in that repository, you should be able to see a few changes that I made to the file.



At this stage, this might be complete and we might be happy with our change, as we are the only people going to use our new change. But maybe it was a bug fix, and if that is the case then we will want to contribute via a Pull Request to notify the original repository maintainers of our change and see if they accept our changes.

We can do this by using the contribute button highlighted below. I will cover more on this tomorrow when we look into Open-Source workflows.



I have spent a long time looking through GitHub, and I hear some of you cry, "but what about the other options!"

Well, there are, and I am going to find some resources that cover the basics for some of those as well. You are going to come across GitLab and BitBucket amongst others in your travels, and whilst they are git-based services they have their differences.

You will also come across hosted options. Most commonly here I have seen GitLab as a hosted version vs GitHub Enterprise (I don't believe there is a free hosted GitHub).

## Resources

- [Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners](https://www.youtube.com/watch?v=8aV5AxJrHDg)
- [BitBucket Tutorials Playlist](https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5)
- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)

See you on [Day 41](day41.md)

---
title: "#90DaysOfDevOps - The Open Source Workflow - Day 41"
published: false
description: 90DaysOfDevOps - The Open Source Workflow
tags: "DevOps, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048806
---

## The Open Source Workflow

Hopefully, through the last 7 sections of Git, we have a better understanding of what git is and then how a git-based service such as GitHub integrates with git to provide a source code repository but also a way in which the wider community can collaborate on code and projects together.

When we went through the GitHub fundamentals we went through the process of forking a random project and making a change to our local repository. Here we want to go one step further and contribute to an open-source project. Remember that contributing doesn't need to be bug fixes or coding features but it could also be documentation. Every little helps and it also allows you to get hands-on with some of the git functionality we have covered.

## Fork a Project

The first thing we have to do is find a project we can contribute to. I have recently been presenting on the [Kanister Project](https://github.com/kanisterio/kanister) and I would like to add my presentations, which are now on YouTube, to the main readme.md file in the project.

First of all, we need to fork the project. Let's run through that process. I am going to navigate to the link shared above and fork the repository.



We now have our copy of the whole repository.



For reference, in the readme.md file the original presentations listed are just these two, so we need to fix this with our process.



## Clone to a local machine

Now we have our fork, we can bring that down to our local machine and then start making our edits to the files. Using the code button on our repo we can grab the URL and then use `git clone url` in the directory where we wish to place the repository.



## Make our changes

We have the project locally, so we can open VS Code, or an IDE or text editor of our choice, to add our modifications.



The readme.md file is written in Markdown, and because I am modifying someone else's project I am going to follow the existing project formatting to add our content.



## Test your changes

As a best practice we must test our changes. This makes total sense for a code change to an application, where you would want to ensure that the application still functions after the change; equally, we must make sure that documentation is formatted correctly and looks right.

In VS Code we can add a lot of plugins; one of these gives us the ability to preview Markdown pages.


@ -59,13 +60,13 @@ We do not have the authentication to push our changes directly back to the Kanis


Now we go back to GitHub to check the changes once more and then contribute back to the master project.

Looks good.



Now we can go back to the top of our forked repository for Kanister and we can see that we are 1 commit ahead of the kanisterio:master branch.


@ -73,54 +74,54 @@ Next, we hit that contribute button highlighted above. We see the option to "Ope


## Open a pull request

There is quite a bit going on in this next image. Top left, you can see we are now in the original (master) repository. Then you can see what we are comparing: the original master and our forked repository. We then have a create pull request button, which we will come back to shortly. We have our single commit, but if there were more changes you might have multiple commits here. Then we have the changes we have made in the readme.md file.



We have reviewed the above changes and we are ready to create a pull request by hitting the green button.

Then, depending on how the maintainer of a project has set up the Pull Request functionality on their repository, you may or may not have a template that gives you pointers on what the maintainer wants to see.

This is again where you want to write a meaningful description of what you have done: clear and concise, but with enough detail. You can see I have made a simple change overview and I have ticked documentation.



## Create a pull request

We are now ready to create our pull request. After hitting "Create Pull Request" at the top of the page you will get a summary of your pull request.



Scrolling down you are likely to see some automation taking place. In this instance, we require a review and some checks are taking place. We can see that Travis CI is in progress and a build has started; this will check our update, making sure that before anything is merged we are not breaking things with our additions.



Another thing to note here: the red in the screenshot above can look a little daunting, as if you have made mistakes! Don't worry, you have not broken anything. My biggest tip here is that this process is there to help you and the maintainers of the project. If you have made a mistake, at least in my experience, the maintainer will contact you and advise on what to do next.

This pull request is now public for everyone to see: [added Kanister presentation/resource #1237](https://github.com/kanisterio/kanister/pull/1237)

I am going to publish this before the merge and pull requests are accepted, so maybe we can offer a little prize for anyone who is still following along and can add a picture of the successful PR?

1. Fork this repository to your own GitHub account
2. Add your picture and possibly text
3. Push the changes to your forked repository
4. Create a PR that I will see and approve.
5. I will think of some sort of prize

This then wraps up our look into Git and GitHub. Next, we are diving into containers, starting with a big-picture look at how and why containers, plus a look into virtualisation and how we got here.

## Resources

- [Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners](https://www.youtube.com/watch?v=8aV5AxJrHDg)
- [BitBucket Tutorials Playlist](https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5)
- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)

See you on [Day 42](day42.md)

127 Days/day42.md
@ -1,137 +1,138 @@

---
title: "#90DaysOfDevOps - The Big Picture: Containers - Day 42"
published: false
description: 90DaysOfDevOps - The Big Picture Containers
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048826
---

## The Big Picture: Containers

We are now starting the next section, and this section is going to be focused on containers; in particular, we are going to look into Docker, getting into some of the key areas to understand more about containers.

I will also be trying to get some hands-on here to create a container that we can use during this section, but also in future sections later in the challenge.

As always, this first post is going to be focused on the big picture of how we got here and what it all means.

#History of platforms and application development
#do we want to talk about Virtualisation & Containerisation

### Why another way to run applications?

The first thing we have to look at is why we need another way to run our software or applications. Well, choice is great: we can run our applications in many different forms. We might see applications deployed on physical hardware with an operating system and a single application; we might see virtual machines or cloud-based IaaS instances running our application, which then integrates with a database, again in a VM or as a PaaS offering in the public cloud. Or we might see our applications running in containers.

None of the above options is wrong or right; they each have their reasons to exist, and I also strongly believe that none of them is going away. I have seen a lot of content that pitches Containers vs Virtual Machines, and there really should not be an argument: that is more like an apples vs pears comparison, where both are fruit (ways to run our applications) but they are not the same.

I would also say that if you were starting out and developing a new application, you should lean towards containers, simply because, as we will see later, it's about efficiency, speed and size. But that also comes with a price: if you have no idea about containers, then it's going to be a learning curve to understand the why and get into that mindset. If you have developed your applications a particular way, or you are not in a greenfield environment, then you might have more pain points to deal with before even considering containers.

We have many different choices when it comes to downloading a given piece of software: there is a variety of different operating systems we might be using, and specific instructions for what we need to do to install our applications.



More and more recently, I am finding that applications that might once have needed a full server OS, a VM, or a physical or cloud instance are now releasing container-based versions of their software. I find this interesting as it opens the world of containers, and then Kubernetes, to everyone, not just application developers.



As you can probably tell, and as I have said before, I am not going to advocate that containers are the answer whatever the question! But I would like to discuss how this is another option for us to be aware of when we deploy our applications.



We have had container technology for a long time, so why has it become popular over the last 10 years, and I would say even more so in the last 5? We have had containers for decades. It comes down to the challenge of containers, or should I say images as well: how we distribute our software. If we only have container technology, we still have many of the same problems we've had with software management.

If we think about Docker as a tool, the reason it took off is the ecosystem of images that are easy to find and use, and simple to get on your systems and get up and running. A major part of this is consistency across the entire space, for all the different challenges we face with software. It doesn't matter if it's MongoDB or nodeJS: the process to get either of those up and running is the same, and the process to stop either of those is the same. All of these issues will still exist, but the nice thing is that when we bring good container and image technology together, we have a single set of tools to help us tackle all of these different problems. Some of those issues are listed below:

- We first have to find software on the internet.
- We then have to download this software.
- Do we trust the source?
- Do we then need a license? Which License?
- Is it compatible with different platforms?
- What is the package? binary? Executable? Package manager?
- How do we configure the software?
- Dependencies? Did the overall download have us covered or do we need them as well?
- Dependencies of Dependencies?
- How do we start the application?
- How do we stop the application?
- Will it auto-restart?
- Start on boot?
- Resource conflicts?
- Conflicting libraries?
- Port Conflicts
- Security for the software?
- Software updates?
- How can I remove the software?

We can split the above into three areas of software complexity, and containers and images help with each of these.

| Distribution | Installation  | Operation          |
| ------------ | ------------- | ------------------ |
| Find         | Install       | Start              |
| Download     | Configuration | Security           |
| License      | Uninstall     | Ports              |
| Package      | Dependencies  | Resource Conflicts |
| Trust        | Platform      | Auto-Restart       |
| Find         | Libraries     | Updates            |

Containers and images are going to help us remove some of these challenges that we have with software and applications.

At a high level, we could move installation and operation into the same list: images are going to help us from a distribution point of view, and containers help with installation and operations.

Ok, that probably sounds great and exciting, but we still need to understand what a container is, and now that I have mentioned images, let's cover those areas next.

Another thing you might have seen a lot when we talk about containers for software development is the analogy of shipping containers, which are used to ship various goods across the seas using large vessels.



What does this have to do with our topic of containers? Think about the code that software developers write: how can we ship that particular code from one machine to another?

Building on what we touched on before about software distribution, installation and operations, let's turn this into an environment picture. We have hardware and an operating system where you will run multiple applications. For example, nodejs has certain dependencies and needs certain libraries. If you then want to install MySQL, it needs its own required libraries and dependencies. Every software application has its own libraries and dependencies. We might be massively lucky and not have any conflicts between our applications, where specific libraries and dependencies clash and cause issues, but the more applications there are, the greater the risk of conflicts. And it is not just about that one deployment where everything fits: your software applications are going to be updated, and updates can also introduce these conflicts.



Containers can help solve this problem. Containers help **build** your application, **ship** the application, and **deploy** and **scale** these applications with ease, independently. Let's look at the architecture: you will have hardware and an operating system, and on top of it a container engine like Docker, which we will cover later. The container engine software helps create containers that package the libraries and dependencies along with the application, so that you can move a container seamlessly from one machine to another without worrying about the underlying dependencies, since everything the application needs to run is packaged as a container that you can move.



### The advantages of these containers

- Containers help package all the dependencies within the container and isolate it.
- It is easy to manage the containers.
- The ability to move from one system to another.
- Containers help package the software and you can easily ship it without any duplicated effort.
- Containers are easily scalable.

Using containers you can scale independent containers and use a load balancer or a service that helps split the traffic, so you can scale the applications horizontally. Containers offer a lot of flexibility and ease in how you manage your applications.

### What is a container?

When we run applications on our computer, such as the web browser or VS Code you are using to read this post, the application runs as what is known as a process. On our laptops or systems, we tend to run multiple applications, or as we said, processes. When we open a new application or click on an application icon, this is an application we would like to run; sometimes this application might be a service that we just want to run in the background. Our operating system is full of services running in the background, providing you with the user experience you get from your system.

That application icon represents a link to an executable somewhere on your file system; the operating system then loads that executable into memory. Interestingly, that executable is sometimes referred to as an image when we're talking about a process.

Containers are processes. A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Containerised software will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences for instance between development and staging.
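You can see the "application = process" idea directly in a shell; here `sleep` is just a stand-in for any application, and we ask the operating system about the process it runs as:

```shell
# Launch a throwaway "application" in the background, inspect the process
# the OS created for it, then stop the application by ending the process.
sleep 30 &
app_pid=$!
ps -p "$app_pid" -o pid=,comm=   # shows the process ID and command name
kill "$app_pid"                  # stopping the app == terminating the process
```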

I mentioned images in the last section, when it came to how the combination of containers and images made containers popular in our ecosystem.

### What is an Image?

A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Container images become containers at runtime.
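To make that definition concrete, container images are commonly described with a Dockerfile. The one below is a minimal, hypothetical sketch for a small Node.js app (the file names `package.json` and `server.js` are illustrative, not from any project discussed here); each instruction adds one of the ingredients the definition lists: runtime, system libraries, dependencies, code and settings.

```dockerfile
# Hypothetical image for a small Node.js application.
# Runtime and system libraries come from the base image:
FROM node:18-alpine
WORKDIR /app
# Application dependencies:
COPY package.json .
RUN npm install
# The application code itself:
COPY . .
# Settings: how the container starts as a process at runtime:
CMD ["node", "server.js"]
```

Building this (`docker build -t myapp .`) produces the image; `docker run myapp` is the moment the image becomes a running container.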
## Resources
- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
- [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s)
- [Introduction to Container By Red Hat](https://www.redhat.com/en/topics/containers)
See you on [Day 43](day43.md)
---
title: "#90DaysOfDevOps - What is Docker & Getting installed - Day 43"
published: false
description: 90DaysOfDevOps - What is Docker & Getting installed
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048739
---
## What is Docker & Getting installed
In the previous post, I mentioned Docker at least once, and that is because Docker was instrumental in making containers popular, even though they have been around for such a long time.

We are going to be using and explaining docker here but we should also mention the [Open Container Initiative (OCI)](https://www.opencontainers.org/) which is an industry standards organization that encourages innovation while avoiding the danger of vendor lock-in. Thanks to the OCI, we have a choice when choosing a container toolchain, including Docker, [CRI-O](https://cri-o.io/), [Podman](http://podman.io/), [LXC](https://linuxcontainers.org/), and others.
Docker is a software framework for building, running, and managing containers. The term "docker" may refer to either the tools (the commands and a daemon) or the Dockerfile file format.
We are going to be using Docker Personal here which is free (for education and learning). This includes all the essentials that we need to cover to get a good foundation of knowledge of containers and tooling.
It is probably worth breaking down some of the "docker" tools that we will be using and what they are used for. The term Docker can refer to the Docker project overall, which is a platform for devs and admins to develop, ship and run applications. It might also be a reference to the Docker daemon process running on the host, which manages images and containers and is also called Docker Engine.

### Docker Engine
Docker Engine is an open-source containerization technology for building and containerizing your applications. Docker Engine acts as a client-server application with:
The above was taken from the official Docker documentation and the specific [Docker Engine Overview](https://docs.docker.com/engine/)
### Docker Desktop
Docker Desktop is available for both Windows and macOS systems: an easy-to-install, lightweight Docker development environment. It is a native application that leverages the virtualisation capabilities of the host operating system.

It’s the best solution if you want to build, debug, test, package, and ship Dockerized applications on Windows or macOS.
On Windows, we can also take advantage of WSL2 and Microsoft Hyper-V. We will cover some of the WSL2 benefits as we go through.
Because of the integration with the hypervisor capabilities of the host operating system, Docker provides the ability to run your containers with Linux operating systems.

### Docker Compose
Docker Compose is a tool that allows you to run more complex apps over multiple containers, with the benefit of being able to use a single file and command to spin up your application.

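As a rough sketch of what that single file looks like (the service and image choices below are purely illustrative, not part of this walkthrough):

```
version: "3.9"
services:
  # A hypothetical web front end
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  # A hypothetical backing database
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
```

A single `docker-compose up -d` would then bring up both containers together.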
### Docker Hub
A centralised resource for working with Docker and its components, most commonly known as a registry for hosting Docker images. There are also a lot of additional services here, such as automation, GitHub integration and security scanning.

### Dockerfile
A Dockerfile is a text file that contains the commands you would normally execute manually to build a Docker image. Docker can build images automatically by reading the instructions in our Dockerfile.

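As a minimal sketch of the idea (the base image and package chosen here are just examples):

```
# Start from an official base image
FROM ubuntu:20.04

# Steps you would otherwise run by hand become repeatable build instructions
RUN apt-get update && apt-get install -y curl

# Default command when a container starts from this image
CMD ["curl", "--version"]
```

Running `docker build -t my-image .` in the folder containing this Dockerfile would then produce the image.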
## Installing Docker Desktop
The [Docker documentation](https://docs.docker.com/engine/install/) is amazing, and if you are only just diving in then you should take a look and have a read-through. We will be using Docker Desktop on Windows with WSL2. I had already run through the installation on the machine we are using here.


Before you go ahead and install, take note of the system requirements: [Install Docker Desktop on Windows](https://docs.docker.com/desktop/windows/install/). If you are using macOS, including the M1-based CPU architecture, you can also take a look at [Install Docker Desktop on macOS](https://docs.docker.com/desktop/mac/install/).

I will run through the Docker Desktop installation for Windows on another Windows Machine and log the process down below.
## Resources
- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
---
title: "#90DaysOfDevOps - Docker Images & Hands-On with Docker Desktop - Day 44"
published: false
description: 90DaysOfDevOps - Docker Images & Hands-On with Docker Desktop
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048708
---
## Docker Images & Hands-On with Docker Desktop
We now have Docker Desktop installed on our system. (If you are running Linux then you still have options, just without the GUI; Docker does work on Linux: [Install Docker Engine on Ubuntu](https://docs.docker.com/engine/install/ubuntu/), with other distributions also available.)

In this post, we are going to get started with deploying some images into our environment. A recap on what a Docker Image is - A Docker image is a file used to execute code in a Docker container. Docker images act as a set of instructions to build a Docker container, like a template. Docker images also act as the starting point when using Docker.
Now is a good time to go and create your account on [DockerHub](https://hub.docker.com/)

DockerHub is a centralised resource for working with Docker and its components, most commonly known as a registry for hosting Docker images. There are also a lot of additional services here, such as automation, GitHub integration and security scanning.

If you scroll down once logged in, you are going to see a list of container images; you might see database images for MySQL, hello-world etc. Think of these as great baseline images: you might just need a database image, and you are best off using the official one, which means you don't need to create your own.


We can drill deeper into the view of available images and search across categories, operating systems and architectures. The one thing I highlight below is the Official Image badge; this should give you peace of mind about the origin of the container image.


We can also search for a specific image. For example, WordPress might be a good base image that we want; we can search for that at the top and find all container images related to WordPress. Notice below that we also have verified publishers.

- Official Image - Docker Official images are a curated set of Docker open source and "drop-in" solution repositories.
- Verified Publisher - High-quality Docker content from verified publishers. These products are published and maintained directly by a commercial entity.

### Exploring Docker Desktop
We have Docker Desktop installed on our system, and if you open it (unless you already had it installed) I expect you will see something similar to the image below. As you can see, we have no containers running, but our Docker engine is running.


Because this was not a fresh install for me, I do have some images already downloaded and available on my system. You will likely see nothing in here.

Under Remote Repositories is where you will find any container images you have stored in your Docker Hub account. You can see from the below that I do not have any images.


We can also check this on our DockerHub site and confirm that we have no repositories.


Next, we have the Volumes tab. If you have containers that require persistence, this is where we can add those volumes on your local file system or a shared file system.


At the time of writing, there is also a Dev Environments tab; this is going to help you collaborate with your team instead of moving between different git branches. We won't be covering this.


Going back to the first tab, you can see that there is a command we can run to start a getting-started container. Let's run `docker run -d -p 80:80 docker/getting-started` in our terminal.


If we go and check our docker desktop window again, we are going to see that we have a running container.

You might have noticed that I am using WSL2 and for you to be able to use that you will need to make sure this is enabled in the settings.

If we now go and check our Images tab again, you should now see an in-use image called docker/getting-started.

Back in the Containers/Apps tab, click on your running container. You are going to see the logs by default, and along the top you have some options to choose from. In our case, I am pretty confident that this is a web page running in this container, so we are going to choose "Open in Browser".


When we hit that button, sure enough a web page should open against your localhost and display something similar to the below.

This container also has some more detail on our containers and images.

We have now run our first container. Nothing too scary just yet. What about if we wanted to pull one of the container images down from DockerHub? Maybe there is a `hello world` docker container we could use.
I went ahead and stopped the getting-started container, not that it was taking up any massive amount of resources, but for tidiness as we walk through some more steps.

Back in our terminal let's go ahead and run `docker run hello-world` and see what happens.
You can see we did not have the image locally, so we pulled it down, and then we got a message that is written into the container image, with some information on what it did to get up and running and some links to reference points.


However, if we go and look in Docker Desktop now, we have no running containers, but we do have an exited container that delivered the hello-world message, meaning it came up, delivered the message and then terminated.


And for the last time, let's just go and check the images tab and see that we have a new hello-world image locally on our system, meaning that if we run the `docker run hello-world` command again in our terminal we would not have to pull anything unless a version changes.

The message from the hello-world container set down the challenge of running something a little more ambitious.
Challenge Accepted!
In running `docker run -it ubuntu bash` in our terminal, we are going to run a containerised version of Ubuntu (well, not a full copy of the operating system). You can find out more about this particular image on [DockerHub](https://hub.docker.com/_/ubuntu)

You can see below when we run the command we now have an interactive prompt (`-it`) and we have a bash shell into our container.

We have a bash shell, but we don't have much more, which is why this container image is less than 30MB.


But we can still use this image: we can install software using our apt package manager, and we can update and upgrade our container image as well.


Or maybe we want to install some software into our container. I have chosen a really bad example here, as Pinta is an image editor and it's over 200MB, but hopefully you get where I am going with this. This would increase the size of our container considerably, but still, we are going to be in the MBs and not the GBs.


I hope that gives you an overview of Docker Desktop and the not-so-scary world of containers when you break it down with simple use cases. We still need to cover networking, security and the other options we have versus just downloading container images and using them like this. By the end of the section, we want to have made something, uploaded it to our DockerHub repository and be able to deploy it.

## Resources
- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
- [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s)
- [WSL 2 with Docker getting started](https://www.youtube.com/watch?v=5RQbdMn04Oc)
See you on [Day 45](day45.md)
---
title: "#90DaysOfDevOps - Docker Compose - Day 46"
published: false
description: 90DaysOfDevOps - Docker Compose
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048740
---
## Docker Compose
The ability to run one container is great if you have a self-contained image with everything you need for your single use case. Where things get interesting is when you are looking to build an application across multiple container images. For example, if I had a website front end that required a backend database, I could put everything in one container, but it would be better and more efficient to give the database its own container.

This is where Docker Compose comes in: a tool that allows you to run more complex apps over multiple containers, with the benefit of being able to use a single file and command to spin up your application. The example I am going to walk through in this post is from the [Docker QuickStart sample apps (Quickstart: Compose and WordPress)](https://docs.docker.com/samples/wordpress/).

In this first example we are going to:
- Use Docker compose to bring up WordPress and a separate MySQL instance.
- Use a YAML file which will be called `docker-compose.yml`
- Build the project
- Configure WordPress via a Browser
- Shutdown and Clean up
### Install Docker Compose
As mentioned, Docker Compose is a tool. If you are on macOS or Windows, Compose is included in your Docker Desktop installation. However, you might want to run your containers on a Windows Server host or a Linux server, in which case you can install it using these instructions: [Install Docker Compose](https://docs.docker.com/compose/install/)

To confirm we have `docker-compose` installed on our system, we can open a terminal and simply type that command.


### Docker-Compose.yml (YAML)
The next thing to talk about is the docker-compose.yml which you can find in the container folder of the repository. But more importantly, we need to discuss YAML, in general, a little.
YAML could almost have its own session, as you are going to find it in so many different places. But for the most part:

"YAML is a human-friendly data serialization language for all programming languages."
It is commonly used for configuration files and in some applications where data is being stored or transmitted. You have no doubt come across XML files that tend to serve the same configuration purpose. YAML provides a minimal syntax but is aimed at those same use cases.

YAML Ain't Markup Language (YAML) is a serialisation language that has steadily increased in popularity over the last few years. The object serialisation abilities make it a viable replacement for languages like JSON.
The YAML acronym was shorthand for Yet Another Markup Language. But the maintainers renamed it to YAML Ain't Markup Language to place more emphasis on its data-oriented features.
Anyway, back to the docker-compose.yml file. This is a configuration file describing what we want to happen when multiple containers are deployed on our single system.

Straight from the tutorial linked above you can see the contents of the file looks like this:
```
version: "3.9"

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wordpress_data:/var/www/html
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}
  wordpress_data: {}
```

We declare a version, and then a large part of this docker-compose.yml file is made up of our services: we have a db service and a WordPress service. You can see each of those has an image defined with a version tag associated. We are now also introducing state into our configuration, unlike in our first walkthroughs: we are going to create volumes so we can store our databases there.

We then have some environment variables, such as passwords and usernames. These files can get very complicated, but the YAML configuration file simplifies what they look like overall.

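As a side note, hard-coding credentials like this is fine for a lab, but Compose also supports variable substitution, so values can live in a `.env` file next to the docker-compose.yml (the variable name here is illustrative):

```
# .env file alongside docker-compose.yml
DB_PASSWORD=wordpress
```

The compose file would then reference it as `MYSQL_PASSWORD: ${DB_PASSWORD}` instead of the literal value.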
### Build the project
Next up we can head back into our terminal and we can use some commands with our docker-compose tool. Navigate to your directory, where your docker-compose.yml file is located.
From the terminal, we can simply run `docker-compose up -d` this will start the process of pulling those images and standing up your multi-container application.
The `-d` in this command means detached mode: the containers run in the background, and you get your terminal back.


If we now run the `docker ps` command, you can see we have 2 containers running, one being WordPress and the other being MySQL.

Next, we can validate that we have WordPress up and running by opening a browser and going to `http://localhost:8000` and you should see the WordPress set-up page.
|
||||
Next, we can validate that we have WordPress up and running by opening a browser and going to `http://localhost:8000` and you should see the WordPress set-up page.
|
||||
|
||||


We can run through the setup of WordPress, and then we can start building our website as we see fit in the console below.



If we then open a new tab and navigate to that same address as before, `http://localhost:8000`, we will now see a simple default theme with our site title "90DaysOfDevOps" and a sample post.



Before we make any changes, open Docker Desktop and navigate to the Volumes tab, where you will see two volumes associated with our containers, one for WordPress and one for the database.



My current WordPress theme is "Twenty Twenty-Two" and I want to change this to "Twenty Twenty". Back in the dashboard we can make those changes.



I am also going to add a new post to my site, and below you can see the latest version of our new site.



### Clean Up or not

If we were now to use the command `docker-compose down`, this would bring down our containers but leave our volumes in place.



We can confirm in Docker Desktop that our volumes are still there.



If we then want to bring things back up, we can issue the `docker-compose up -d` command from within the same directory, and we have our application back up and running.



We then navigate in our browser to that same address of `http://localhost:8000` and notice that our new post and our theme change are still in place.



If we want to get rid of the containers and those volumes, then issuing `docker-compose down --volumes` will also destroy the volumes.



Now when we use `docker-compose up -d` again, we will be starting up again, but the images will still be local on our system, so you won't need to re-pull them from the DockerHub repository.

When I started diving into docker-compose and its capabilities, I was confused about where it sits alongside container orchestration tools such as Kubernetes. Everything we have done in this short demo is focused on one host: we have WordPress and the database running on the local desktop machine. We don't have multiple virtual machines or multiple physical machines, and we also can't easily scale the requirements of our application up and down.

Our next section is going to cover Kubernetes, but we have a few more days of containers in general first.

This is also a great resource for samples of docker-compose applications with multiple integrations. [Awesome-Compose](https://github.com/docker/awesome-compose)

In the above repository, there is a great example which will deploy Elasticsearch, Logstash, and Kibana (ELK) in a single-node configuration.

I have uploaded the files to the [Containers folder](/Days/Containers/elasticsearch-logstash-kibana/). When you have this folder locally, navigate there and you can simply use `docker-compose up -d`.



We can then check we have those running containers with `docker ps`.



Now we can open a browser for each of the containers:



To remove everything we can use the `docker-compose down` command.

## Resources

- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
---
title: "#90DaysOfDevOps - Docker Networking & Security - Day 47"
published: false
description: 90DaysOfDevOps - Docker Networking & Security
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049078
---

## Docker Networking & Security

So far in this container section we have made things happen, but we have not looked at how things work behind the scenes from a networking point of view, and we have not touched on security; that is the plan for this session.

### Docker Networking Basics

Open a terminal and type the command `docker network`. This is the main command for configuring and managing container networks.

From the below, you can see how we can use the command and all of the sub-commands available. We can create new networks, list existing ones, and inspect and remove networks.



Let's take a look at the networks we have out of the box since our installation, using the `docker network list` command.

Each network gets a unique ID and NAME. Each network is also associated with a single driver. Notice that the "bridge" network and the "host" network have the same name as their respective drivers.



Next, we can take a deeper look into our networks with the `docker network inspect` command.

Running `docker network inspect bridge` gives me all the configuration details of that specific network. This includes name, ID, drivers, connected containers and, as you can see, quite a lot more.



### Docker: Bridge Networking

As you have seen above, a standard installation of Docker Desktop gives us a pre-built network called `bridge`. If you look back at the `docker network list` output, you will see that the network called bridge is associated with the `bridge` driver. Just because they have the same name doesn't mean they are the same thing. Connected, but not the same thing.

The output above also shows that the bridge network is scoped locally. This means that the network only exists on this Docker host. This is true of all networks using the bridge driver - the bridge driver provides single-host networking.

All networks created with the bridge driver are based on a Linux bridge (a.k.a. a virtual switch).

### Connect a Container

By default, the bridge network is assigned to new containers, meaning that unless you specify a network, all containers will be connected to the bridge network.

Let's create a new container with the command `docker run -dt ubuntu sleep infinity`

The sleep command above is just going to keep the container running in the background so we can mess around with it.



If we then check our bridge network with `docker network inspect bridge`, you will see that we have a container matching what we have just deployed, because we did not specify a network.



We can also dive into the container using `docker exec -it 3a99af449ca2 bash` (you will have to use `docker ps` to get your own container ID).

From here our image doesn't have anything to ping, so we need to run the following command: `apt-get update && apt-get install -y iputils-ping`. Then ping an external-facing address: `ping -c5 www.90daysofdevops.com`



To clean this up we can run `docker stop 3a99af449ca2` (again using `docker ps` to find your container ID); this will stop our container.

### Configure NAT for external connectivity

In this step, we'll start a new NGINX container and map port 8080 on the Docker host to port 80 inside of the container, for example with something like `docker run --name web1 -d -p 8080:80 nginx`. This means that traffic that hits the Docker host on port 8080 will be passed on to port 80 inside the container.

Review the container status and port mappings by running `docker ps`

The top line shows the new web1 container running NGINX. Take note of the command the container is running as well as the port mapping - `0.0.0.0:8080->80/tcp` maps port 8080 on all host interfaces to port 80 inside the web1 container. This port mapping is what effectively makes the container's web service accessible from external sources (via the Docker host's IP address on port 8080).

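For comparison with the docker-compose files we used on the previous day, the same publish rule could be expressed in compose form. This is an illustrative sketch only, not part of the original lab:

```yaml
services:
  web1:
    image: nginx
    ports:
      # host port 8080 -> container port 80, equivalent to `-p 8080:80`
      - "8080:80"
```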
Now we need the IP address of our actual host. We can get this by going into our WSL terminal and using the `ip addr` command.



Then we can take this IP, open a browser, and head to `http://172.25.218.154:8080/` (your IP might be different). This confirms that NGINX is accessible.



I have taken these instructions from this site from way back at DockerCon 2017, but they are still relevant today. However, the rest of the walkthrough goes into Docker Swarm and I am not going to be looking into that here. [Docker Networking - DockerCon 2017](https://github.com/docker/labs/tree/master/dockercon-us-2017/docker-networking)

### Securing your containers

Containers provide a more secure environment for your workloads than a full server configuration. They offer the ability to break up your applications into much smaller, loosely coupled components, each isolated from one another, which helps reduce the attack surface overall.

But they are not immune from hackers that are looking to exploit systems. We still need to understand the security pitfalls of the technology and maintain best practices.

### Move away from root permission

All of the containers we have deployed have been using root permission for the process within the container. This means they have full administrative access to your container and host environments. For these walkthroughs we knew these systems were not going to be up and running for long, but you saw how easy it was to get up and running.

We can add a few steps to our process to make non-root users our preferred best practice. When creating our Dockerfile we can create user accounts. You can find this example in the containers folder in the repository.

```
# Use the official Ubuntu 18.04 as base
FROM ubuntu:18.04
RUN apt-get update && apt-get upgrade -y
# Create a non-root user and make it the default user for the image
RUN groupadd --gid 1000 basicuser \
    && useradd --uid 1000 --gid basicuser --create-home basicuser
USER basicuser
```

However, this method doesn’t address the underlying security flaw of the image itself.

### Private Registry

Another area to consider: we have leaned heavily on the public registry, DockerHub. With a private registry of container images set up by your organisation, you can host it where you wish, or use a managed service; all in all, this gives you complete control of the images available to you and your team.

DockerHub is great to give you a baseline, but it's only going to be providing you with a basic service where you have to put a lot of trust in the image publisher.

### Lean & Clean

I have mentioned this throughout, although it is not strictly related to security: the size of your container can also affect security in terms of attack surface. If you have resources you do not use in your application, then you do not need them in your container.

This is also my major concern with pulling `latest` images, because that can bring a lot of bloat to your images as well. DockerHub does show the compressed size for each of the images in a repository.

`docker image ls` is a great command to see the size of your images.



## Resources

- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)

---
title: "#90DaysOfDevOps - Alternatives to Docker - Day 48"
published: false
description: 90DaysOfDevOps - Alternatives to Docker
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048807
---

## Alternatives to Docker

I did say at the very beginning of this section that we were going to be using Docker, simply because resource-wise there is so much available and the community is very big, but also because this is really where the push to make containers popular came from. I would encourage you to go and watch some of the history around Docker and how it came to be; I found it very useful.

But as I have alluded to, there are other alternatives to Docker. If we think about what Docker is and what we have covered, it is a platform for developing, testing, deploying, and managing applications.

I want to highlight a few alternatives to Docker that you might, or will in the future, see out in the wild.

### Podman

What is Podman? Podman is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. Containers can either be run as root or in rootless mode.

I am going to be looking at this from a Windows point of view, but know that, like Docker, on Linux there is no requirement for virtualisation, as it uses the underlying OS, which is something we cannot do in the Windows world.

Podman can be run under WSL2, although it is not as sleek as the experience with Docker Desktop. There is also a Windows remote client where you can connect to a Linux VM where your containers will run.

My Ubuntu on WSL2 is the 20.04 release. Following the next steps will enable you to install Podman on your WSL instance.

```Shell
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /" |
sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
```

Add the GPG key:

```Shell
curl -L "https://download.opensuse.org/repositories/devel:/kubic:\
/libcontainers:/stable/xUbuntu_20.04/Release.key" | sudo apt-key add -
```

Run a system update and upgrade with the `sudo apt-get update && sudo apt-get upgrade` command. Finally, we can install Podman using `sudo apt install podman`.

We can now use a lot of the same commands we have been using for Docker; note that we do not have that nice Docker Desktop UI. You can see below I used `podman images` and had nothing after installation, then I used `podman pull ubuntu` to pull down the ubuntu container image.



We can then run our Ubuntu image using `podman run -dit ubuntu`, and `podman ps` to see our running image.



To then get into that container, we can run `podman attach dazzling_darwin`; your container name will most likely be different.



If you are moving from Docker to Podman, it is also common to change your config file to have `alias docker=podman`; that way any command you run with docker will use Podman.

### LXC

LXC is a containerisation engine that enables users to create multiple isolated Linux container environments. Unlike Docker, LXC acts as a hypervisor for creating multiple Linux machines with separate system files and networking features. It was around before Docker and then made a short comeback due to Docker's shortcomings.

LXC is, though, as lightweight as Docker, and easily deployed.

### Containerd

A standalone container runtime. Containerd brings simplicity and robustness, as well as, of course, portability. Containerd was formerly a tool that ran as part of the Docker container services, until Docker decided to graduate its components into standalone components.

A project in the Cloud Native Computing Foundation, placing it in the same class as other CNCF projects such as Kubernetes and Prometheus.

### Other Docker tooling

We could also mention tooling and options around Rancher and VirtualBox, but we can cover them in more detail another time.

[**Gradle**](https://gradle.org/)

- Build scans allow teams to collaboratively debug their scripts and track the history of all builds.
- Execution options give teams the ability to continuously build so that whenever changes are inputted, the task is automatically executed.
- The custom repository layout gives teams the ability to treat any file directory structure as an artefact repository.

[**Packer**](https://packer.io/)

- Ability to create multiple machine images in parallel to save developer time and increase efficiency.
- Teams can easily debug builds using Packer’s debugger, which inspects failures and allows teams to try out solutions before restarting builds.
- Support for many platforms via plugins so teams can customize their builds.

[**Logspout**](https://github.com/gliderlabs/logspout)

- Logging tool - The tool’s customizability allows teams to ship the same logs to multiple destinations.
- Teams can easily manage their files because the tool only requires access to the Docker socket.

- Create teams and assign roles and permissions to team members.
- Know what is running in each environment using the tool’s dashboard.

## Resources

- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
- [Podman | Daemonless Docker | Getting Started with Podman](https://www.youtube.com/watch?v=Za2BqzeZjBk)
- [LXC - Guide to building an LXC Lab](https://www.youtube.com/watch?v=cqOtksmsxfg)

See you on [Day 49](day49.md)

---
title: "#90DaysOfDevOps - The Big Picture: Kubernetes - Day 49"
published: false
description: 90DaysOfDevOps - The Big Picture Kubernetes
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049049
---

## The Big Picture: Kubernetes

In the last section we covered containers. Containers fall short when it comes to scale and orchestration alone; the best we can do is use docker-compose to bring up multiple containers together. Kubernetes, which is a container orchestrator, gives us the ability to scale up and down in an automated way, or based on the load on your applications and services.

As a platform, Kubernetes offers the ability to orchestrate containers according to your requirements and desired state. We are going to cover Kubernetes in this section as it is growing rapidly as the next wave of infrastructure. I would also suggest that, from a DevOps perspective, Kubernetes is just one platform that you will need to have a basic understanding of; you will also need to understand bare metal, virtualisation and most likely cloud-based services as well. Kubernetes is just another option to run our applications.

### What is Container Orchestration?

I have mentioned Kubernetes and I have mentioned container orchestration. Kubernetes is the technology, whereas container orchestration is the concept or the process behind the technology. Kubernetes is not the only container orchestration platform; we also have Docker Swarm, HashiCorp Nomad and others. But Kubernetes is going from strength to strength, so I want to cover Kubernetes, while noting that it is not the only option out there.

### What is Kubernetes?
|
||||
|
||||
The first thing you should read if you are new to Kubernetes is the official documentation, My experience of really deep diving into Kubernetes a little over a year ago was that this is going to be a steep learning curve. Coming from a virtualisation and storage background I was thinking about how daunting this felt.
|
||||
The first thing you should read if you are new to Kubernetes is the official documentation, My experience of really deep diving into Kubernetes a little over a year ago was that this is going to be a steep learning curve. Coming from a virtualisation and storage background I was thinking about how daunting this felt.
|
||||
|
||||
But the community, free learning resources and documentation are amazing. [Kubernetes.io](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/)
|
||||
But the community, free learning resources and documentation are amazing. [Kubernetes.io](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/)
|
||||
|
||||
*Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.*
|
||||
_Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available._
|
||||
|
||||
Important things to note from the above quote, Kubernetes is Open-Source with a rich history that goes back to Google who donated the project to the Cloud Native Computing Foundation (CNCF) and it has now been progressed by the open-source community as well as large enterprise vendors contributing to making Kubernetes what it is today.
|
||||
Important things to note from the above quote, Kubernetes is Open-Source with a rich history that goes back to Google who donated the project to the Cloud Native Computing Foundation (CNCF) and it has now been progressed by the open-source community as well as large enterprise vendors contributing to making Kubernetes what it is today.
|
||||
|
||||
I mentioned above that containers are great and in the previous section, we spoke about how containers and container images have changed and accelerated the adoption of cloud-native systems. But containers alone are not going to give you the production-ready experience you need from your application. Kubernetes gives us the following:
|
||||
I mentioned above that containers are great and in the previous section, we spoke about how containers and container images have changed and accelerated the adoption of cloud-native systems. But containers alone are not going to give you the production-ready experience you need from your application. Kubernetes gives us the following:
|
||||
|
||||
- **Service discovery and load balancing** Kubernetes can expose a container using the DNS name or using their IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment is stable.

- **Self-healing** Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.

- **Secret and configuration management** Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
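
To make that last point concrete, here is a minimal sketch of a Secret manifest; the name `app-secret` and the key/value are invented for this example:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret # hypothetical name for this example
type: Opaque
stringData:
  # stringData accepts plain text; the API server stores it base64-encoded
  DB_PASSWORD: example-password
```

A Pod can then reference this value via a `secretKeyRef` environment variable or a mounted volume, and it can be updated without rebuilding the container image.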

Kubernetes provides you with a framework to run distributed systems resiliently.

Container Orchestration manages the deployment, placement, and lifecycle of containers.

It also has many other responsibilities:

- Cluster management federates hosts into one target.
- Schedule management distributes containers across nodes through the scheduler.
- Service discovery knows where containers are located and distributes client requests across them.
- Replication ensures that the right number of nodes and containers are available for the requested workload.
- Health management detects and replaces unhealthy containers and nodes.

### Main Kubernetes Components

Kubernetes is a container orchestrator to provision, manage, and scale apps. You can use it to manage the lifecycle of containerized apps in a cluster of nodes, which is a collection of worker machines such as VMs or physical machines.

The key paradigm of Kubernetes is its declarative model. You provide the state that you want and Kubernetes does the rest.

### Node

#### Control Plane

Every Kubernetes cluster requires a Control Plane node. The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events.



#### Worker Node

A worker machine that runs Kubernetes workloads. It can be a physical (bare metal) machine or a virtual machine (VM). Each node can host one or more pods. Kubernetes nodes are managed by a control plane.



There are other node types but I won't be covering them here.

#### kubelet

An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.

The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.



#### kube-proxy

kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.

kube-proxy uses the operating system packet filtering layer if there is one and it's available. Otherwise, kube-proxy forwards the traffic itself.



#### Container runtime

The container runtime is the software that is responsible for running containers.

Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).

A cluster is a group of nodes, where a node can be a physical machine or a virtual machine. Each of the nodes will have the container runtime (Docker) and will also be running a kubelet service, an agent that takes in the commands from the control plane (more on that later), and a proxy that is used to proxy connections to the Pods from another component (Services, which we will see later).

Our control plane, which can be made highly available, will contain some unique roles compared to the worker nodes. The most important is the kube API server; this is where any communication will take place to get information or push information to our Kubernetes cluster.

#### Kube API-Server

The Kubernetes API server validates and configures data for the API objects, which include pods, services, replication controllers, and others. The API server services REST operations and provides the frontend to the cluster's shared state through which all other components interact.

#### Scheduler

The Kubernetes scheduler is a control plane process which assigns Pods to Nodes. The scheduler determines which Nodes are valid placements for each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node.

#### Controller Manager

The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In applications of robotics and automation, a control loop is a non-terminating loop that regulates the state of the system. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state.

#### etcd

Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data.



#### kubectl

To manage this from a CLI point of view we have kubectl; kubectl interacts with the API server.

The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

A Pod is a group of containers that form a logical application.

### Deployments

- You can just decide to run Pods, but when they die, they die.
- A Deployment will enable your pod to run continuously.
- Deployments allow you to update a running app without downtime.
- Deployments also specify a strategy to restart Pods when they die.
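
A minimal sketch of a Deployment that expresses those bullets; all names and the image tag are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment # illustrative name
spec:
  replicas: 3 # desired number of Pods, kept running continuously
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate # update the running app without downtime
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # example image
          ports:
            - containerPort: 80
```

If a Pod from this Deployment dies, the Deployment (via its ReplicaSet) starts a replacement to get back to three replicas.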
### ReplicaSets

- The Deployment can also create the ReplicaSet.
- A ReplicaSet ensures your app has the desired number of Pods.
- ReplicaSets will create and scale Pods based on the Deployment.
- Deployments, ReplicaSets, and Pods are not exclusive but can be.

### StatefulSets

- Does your app require you to keep information about its state?
- A database needs state.
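
As a sketch of how a stateful workload differs, a StatefulSet can claim stable per-Pod storage through `volumeClaimTemplates`; the names, image and sizes here are invented for the example:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db # illustrative name
spec:
  serviceName: db # headless Service that gives each Pod a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:15 # example database image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data # each Pod gets its own PersistentVolumeClaim
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Pods are named predictably (`db-0`, `db-1`, …) and keep their claim across rescheduling, which is what a database needs.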
### DaemonSets

- DaemonSets are for continuous processes.
- They run one Pod per Node.
- Each new node added to the cluster gets a pod started.
- Useful for background tasks such as monitoring and log collection.
- Each pod has a unique, persistent identifier that the controller maintains over any rescheduling.



### Services

- A single endpoint to access Pods.
- A unified way to route traffic to a cluster and eventually to a list of Pods.
- By using a Service, Pods can be brought up and down without affecting anything.

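A minimal Service sketch; the name, selector and ports are invented for the example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service # illustrative name
spec:
  selector:
    app: web # routes to any Pod carrying this label
  ports:
    - port: 80 # the Service's own port
      targetPort: 80 # the Pod port traffic is forwarded to
  type: ClusterIP # default: one stable virtual IP inside the cluster
```

Because the Service matches Pods by label, Pods can come and go behind it without clients noticing. Setting `clusterIP: None` instead would make it headless, as in the Elasticsearch Service earlier in this repository.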
This is just a quick overview and notes around the fundamental building blocks of Kubernetes.



### What we will cover in the series on Kubernetes

- Kubernetes Architecture
- Kubectl Commands
- Kubernetes YAML
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistent Storage
- Stateful Apps

## Resources

- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)

---
title: "#90DaysOfDevOps - Choosing your Kubernetes platform - Day 50"
published: false
description: 90DaysOfDevOps - Choosing your Kubernetes platform
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049046
---

## Choosing your Kubernetes platform

I wanted to use this session to break down some of the platforms, or maybe distributions is a better term to use here. One thing that has been a challenge in the Kubernetes world is removing complexity.

Kubernetes the hard way walks through how to build out from nothing to a full-blown functional Kubernetes cluster. That is the extreme; more and more, at least the people I am speaking to are wanting to remove that complexity and run a managed Kubernetes cluster. The issue there is that it costs more money, but the benefit could be that with a managed service you do not need to know the underpinning node architecture or what is happening from a Control Plane node point of view, since generally you do not have access to this.

Then we have the local development distributions that enable us to use our systems and run a local version of Kubernetes, so developers can have a full working environment to run their apps in the platform they are intended for.

The general basis of all of these concepts is that they are all a flavour of Kubernetes, which means we should be able to freely migrate and move our workloads where we need them to suit our requirements.

A lot of our choice will also depend on what investments have been made. I mentioned the developer experience as well, but some of those local Kubernetes environments that run on our laptops are great for getting to grips with the technology without spending any money.

### Bare-Metal Clusters

An option for many could be running your Linux OS straight onto several physical servers to create a cluster; it could also be Windows, but I have not heard much about the adoption rate around Windows, containers and Kubernetes. If you are a business and you have made a CAPEX decision to buy your physical servers, then this might be how you go when building out your Kubernetes cluster. The management and admin side here means you are going to have to build and manage everything yourself from the ground up.

### Virtualisation

Whether for test and learning environments or enterprise-ready Kubernetes clusters, virtualisation is a great way to go: typically you spin up virtual machines to act as your nodes and then cluster those together. You get the underpinning architecture, efficiency and speed of virtualisation, as well as leveraging that existing spend. VMware, for example, offers a great solution for both virtual machines and Kubernetes in various flavours.

My first ever Kubernetes cluster was built based on virtualisation, using Microsoft Hyper-V on an old server I had which was capable of running a few VMs as my nodes.

### Local Desktop options

There are several options when it comes to running a local Kubernetes cluster on your desktop or laptop. This, as previously said, gives developers the ability to see what their app will look like without having to have multiple costly or complex clusters. Personally, this has been one that I have used a lot, and in particular I have been using minikube. It has some great functionality and add-ons which change the way you get something up and running.

### Kubernetes Managed Services

I have mentioned virtualisation, and this can be achieved with hypervisors locally, but we know from previous sections we could also leverage VMs in the public cloud to act as our nodes. What I am talking about here with Kubernetes managed services are the offerings we see from the large hyperscalers, but also from MSPs, removing layers of management and control away from the end user. This could be removing the control plane from the end user, which is what happens with Amazon EKS, Microsoft AKS and Google Kubernetes Engine (GKE).

### Overwhelming choice

I mean, the choice is great, but there is a point where things become overwhelming, and this is not an in-depth look into all options within each category listed above. On top of the above, we also have OpenShift from Red Hat, and this option can be run across the options above in all the major cloud providers; today it probably gives the best overall usability to the admins regardless of where clusters are deployed.

So where do you start from a learning perspective? As I said, I started with the virtualisation route, but that was because I had access to a physical server which I could use for the purpose; since then, I no longer have this option.

My actual advice now would be to use Minikube as a first option, or Kind (Kubernetes in Docker), but Minikube gives us some additional benefits which almost abstract the complexity away: we can just use add-ons and get things built out quickly, and we can then blow it away when we are finished. We can run multiple clusters, and we can run it almost anywhere, cross-platform and hardware agnostic.

I have been through a bit of a journey with my learning around Kubernetes, so I am going to leave the platform choice and specifics here and list the options I have tried, which gave me a better understanding of Kubernetes the platform and where it can run. What I might do with the below blog posts is take another look at these, update them and bring them into here vs them being linked as blog posts.

- [Kubernetes playground – How to choose your platform](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1)
- [Kubernetes playground – Setting up your cluster](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2)
- [Getting started with CIVO Cloud](https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud)
- [Minikube - Kubernetes Demo Environment For Everyone](https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone)

### What we will cover in the series on Kubernetes

- Kubernetes Architecture
- Kubectl Commands
- Kubernetes YAML
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistent Storage
- Stateful Apps

## Resources

- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)

See you on [Day 51](day51.md)
Days/day51.md

---
title: "#90DaysOfDevOps - Deploying your first Kubernetes Cluster - Day 51"
published: false
description: 90DaysOfDevOps - Deploying your first Kubernetes Cluster
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048778
---

## Deploying your first Kubernetes Cluster

In this post we are going to get a Kubernetes cluster up and running on our local machine using minikube. This will give us a baseline Kubernetes cluster for the rest of the Kubernetes section, although we will also look at deploying a Kubernetes cluster in VirtualBox later on. The reason for choosing this method vs spinning up a managed Kubernetes cluster in the public cloud is that the latter is going to cost money even with the free tier; I shared some blogs in the previous section, though, if you would like to spin up that environment: [Day 50](day50.md).

### What is Minikube?

> “minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. We proudly focus on helping application developers and new Kubernetes users.”

You might not fit into the above, but I have found minikube is a great little tool if you just want to test something out in a Kubernetes fashion. You can easily deploy an app, and they have some amazing add-ons which I will also cover.

To begin with, regardless of your workstation OS, you can run minikube. First, head over to the [project page here](https://minikube.sigs.k8s.io/docs/start/). The first option you have is choosing your installation method. I did not use this method, but you might choose to vs my way (my way is coming up).

As mentioned below, it states that you need to have a “Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware”; this is where minikube will run, and the easy option, which I am using unless stated otherwise in the repository, is Docker. You can install Docker on your system using the following [link](https://docs.docker.com/get-docker/).

### My way of installing minikube and other prereqs…

I have been using arkade for some time now to get all those Kubernetes tools and CLIs. You can see the installation steps on this [GitHub repository](https://github.com/alexellis/arkade) for getting started with arkade. I have also mentioned this in other blog posts where I needed something installed. The simplicity of just hitting `arkade get` and then seeing if your tool or CLI is available is handy. In the Linux section we spoke about package managers and the process for getting our software; you can think about arkade as that marketplace for all your apps and CLIs for Kubernetes. A very handy little tool to have on your systems, written in Golang and cross-platform.



As part of the long list of available apps within arkade, minikube is one of them.


|
||||
|
||||
We will also need kubectl as part of our tooling so you can also get this via arkade or I believe that the minikube documentation brings this down as part of the curl commands mentioned above. We will cover more on kubectl later on in the post.
### Getting a Kubernetes cluster up and running
For this particular section I want to cover the options available to us when it comes to getting a Kubernetes cluster up and running on your local machine. We could simply run the following command and it would spin up a cluster for you to use.
minikube is used on the command line, and simply put, once you have everything installed you can run `minikube start` to deploy your first Kubernetes cluster. You will see below that the Docker driver is the default for where we will be running our nested virtualisation node. I mentioned the other options available at the start of the post; those options help when you want to expand what this local Kubernetes cluster needs to look like.
A single Minikube cluster in this instance is going to consist of a single Docker container, which will have the control plane node and worker node in one instance. Whereas typically you would separate those nodes out. Something we will cover in the next section, where we look at home-lab-type Kubernetes environments that are a little closer to production architecture.

I have mentioned this a few times now: I really like minikube because of the addons available; the ability to deploy a cluster with a simple command, including all the required addons from the start, really helps me deploy the same required setup every time.
Below you can see a list of those addons; I generally use the `csi-hostpath-driver` and the `volumesnapshots` addons, but you can see the long list below. Sure, these addons can generally be deployed using Helm, something we will cover later on in the Kubernetes section, but this makes things much simpler.

I am also defining some additional configuration in our project: the apiserver is set to 6443 instead of a random API port, I set the container runtime to containerd (Docker is the default and CRI-O is also available), and I am also setting a specific Kubernetes version.

Now we are ready to deploy our first Kubernetes cluster using minikube. I mentioned before, though, that you will also need `kubectl` to interact with your cluster. You can get kubectl installed using arkade with the command `arkade get kubectl`.

Or you can download it cross-platform from the following:
- [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux)
- [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos)
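On Linux, for instance, the documented curl install from the kubectl install page linked above looks like this:

```shell
# Download the latest stable kubectl release for linux/amd64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install it and confirm the client version
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```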
### What is kubectl?
We now have our minikube Kubernetes cluster up and running. I have asked you to install Minikube and explained at least what it does, but I have not really explained what kubectl is and what it does.
kubectl is a CLI that allows you to interact with Kubernetes clusters. We are using it here to interact with our minikube cluster, but we would also use kubectl to interact with our enterprise clusters across the public cloud.
We use kubectl to deploy applications and to inspect and manage cluster resources. A much better [Overview of kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) can be found in the official Kubernetes documentation.
kubectl interacts with the API server found on the Control Plane node, which we briefly covered in an earlier post.
### kubectl cheat sheet
Along with the official documentation, I have also found myself with this page open all the time when looking for kubectl commands. [Unofficial Kubernetes](https://unofficial-kubernetes.readthedocs.io/en/latest/)

| Listing Resources        |                                            |
| ------------------------ | ------------------------------------------ |
| kubectl get nodes        | List all nodes in cluster                  |
| kubectl get namespaces   | List all namespaces in cluster             |
| kubectl get pods         | List all pods in default namespace cluster |
| kubectl get pods -n name | List all pods in "name" namespace          |

| Creating Resources            |                                            |
| ----------------------------- | ------------------------------------------ |
| kubectl create namespace name | Create a namespace called "name"           |
| kubectl create -f [filename]  | Create a resource from a JSON or YAML file |

| Editing Resources            |                   |
| ---------------------------- | ----------------- |
| kubectl edit svc/servicename | To edit a service |

| More detail on Resources |                                                        |
| ------------------------ | ------------------------------------------------------ |
| kubectl describe nodes   | Display the state of any number of resources in detail |

| Delete Resources   |                                                  |
| ------------------ | ------------------------------------------------ |
| kubectl delete pod | Remove resources, this can be from stdin or file |
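Stringing a few of the cheat sheet commands together, a quick namespace round trip looks like this (`demo` is just a placeholder name):

```shell
kubectl create namespace demo
kubectl get pods -n demo
kubectl delete namespace demo
```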
You will find yourself wanting to know the short names for some of the kubectl resources and flags; for example, `-n` is the short flag for `--namespace`, which makes commands easier to type, and if you are scripting anything you end up with much tidier code.

| Short name | Full name                  |
| ---------- | -------------------------- |
| csr        | certificatesigningrequests |
| cs         | componentstatuses          |
| cm         | configmaps                 |
| ds         | daemonsets                 |
| deploy     | deployments                |
| ep         | endpoints                  |
| ev         | events                     |
| hpa        | horizontalpodautoscalers   |
| ing        | ingresses                  |
| limits     | limitranges                |
| ns         | namespaces                 |
| no         | nodes                      |
| pvc        | persistentvolumeclaims     |
| pv         | persistentvolumes          |
| po         | pods                       |
| pdb        | poddisruptionbudgets       |
| psp        | podsecuritypolicies        |
| rs         | replicasets                |
| rc         | replicationcontrollers     |
| quota      | resourcequotas             |
| sa         | serviceaccounts            |
| svc        | services                   |
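As an example, these two commands are equivalent; one uses full names and the other uses the `po` short name plus the `-n` flag:

```shell
kubectl get pods --namespace kube-system
kubectl get po -n kube-system
```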
The final thing to add here is that I created another project around minikube to help me quickly spin up demo environments to display data services and protect those workloads with Kasten K10. [Project Pace](https://github.com/MichaelCade/project_pace) can be found there and I would love your feedback or interaction; it also includes some automated ways of deploying your minikube clusters and creating different data services applications.
Next up, we will get into deploying multiple nodes into virtual machines using VirtualBox, but we are going to hit the easy button there like we did in the Linux section, where we used vagrant to quickly spin up the machines and deploy our software how we want it.
I added this list to the post yesterday; these are walkthrough blogs I have done around deploying different Kubernetes clusters.
- [Kubernetes playground – How to choose your platform](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1)
- [Kubernetes playground – Setting up your cluster](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2)
- [Getting started with CIVO Cloud](https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud)
- [Minikube - Kubernetes Demo Environment For Everyone](https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone)

### What we will cover in the series on Kubernetes

We have started covering some of the topics mentioned below, but we are going to get more hands on tomorrow with our second cluster deployment, and then we can start deploying applications into our clusters.

- Kubernetes Architecture
- Kubectl Commands
- Kubernetes YAML
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistent Storage
- Stateful Apps
## Resources
If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)
See you on [Day 52](day52.md)
102
Days/day52.md
---
title: "#90DaysOfDevOps - Setting up a multinode Kubernetes Cluster - Day 52"
published: false
description: 90DaysOfDevOps - Setting up a multinode Kubernetes Cluster
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049050
---
## Setting up a multinode Kubernetes Cluster
I wanted this title to be "Setting up a multinode Kubernetes cluster with Vagrant" but thought it might be a little too long!
In the session yesterday we used a cool project to deploy our first Kubernetes cluster and get a little hands on with the most important CLI tool you will come across when using Kubernetes (kubectl).
Here we are going to use VirtualBox as our base, but as mentioned the last time we spoke about Vagrant back in the Linux section, we can really use any supported hypervisor or virtualisation tool. It was [Day 14](day14.md) when we went through and deployed an Ubuntu machine for the Linux section.
### A quick recap on Vagrant
Vagrant is a CLI utility that manages the lifecycle of your virtual machines. We can use vagrant to spin virtual machines up and down across many different platforms including vSphere, Hyper-V, VirtualBox and also Docker. It does have other providers, but we will stick with VirtualBox here, so we are good to go.
I am going to be using this [blog and repository](https://devopscube.com/kubernetes-cluster-vagrant/) as a baseline to walk through the configuration. I would however advise that if this is your first time deploying a Kubernetes cluster then maybe also look into how you would do this manually, so that at least you know what that looks like. Although I will say that these Day 0 operations and efforts are being made more efficient with every release of Kubernetes. I liken this very much to the days of VMware and ESX, where you would need at least a day to deploy 3 ESX servers; now we can have that up and running in an hour. We are heading in that direction when it comes to Kubernetes.
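The day-to-day vagrant lifecycle commands we will lean on look like this (all standard vagrant subcommands, run from the directory containing the Vagrantfile):

```shell
vagrant up        # create and provision the machines
vagrant status    # show the state of each machine
vagrant halt      # power the machines off
vagrant destroy   # remove them entirely
```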
### Kubernetes Lab environment
I have uploaded to the [Kubernetes folder](Kubernetes) the vagrantfile that we will be using to build out our environment. Grab this and navigate to that directory in your terminal. I am again using Windows, so I will be using PowerShell to perform my workstation commands with vagrant. If you do not have vagrant then you can use arkade; we covered this yesterday when installing minikube and other tools. A simple command, `arkade get vagrant`, should see you download and install the latest version of vagrant.
When you are in your directory then you can simply run `vagrant up` and if all is configured correctly then you should see the following kick off in your terminal.

In the terminal you are going to see a number of steps taking place, but in the meantime let's take a look at what we are actually building here.

From the above you can see that we are going to build out 3 virtual machines; we will have a control plane node and then two worker nodes. If you head back to [Day 49](day49.md) you will see more description of the areas we see in the image.
Also in the image we indicate that our kubectl access will come from outside of the cluster and hit the kube apiserver, when in fact, as part of the vagrant provisioning, we are deploying kubectl on each of these nodes so that we can access the cluster from within each of our nodes.
The process of building out this lab could take anything from 5 minutes to 30 minutes depending on your setup.
I am going to cover the scripts shortly as well, but you will notice if you look into the vagrant file that we are calling on 3 scripts as part of the deployment, and this is really where the cluster is created. We have seen how easy it is to use vagrant to deploy our virtual machines and OS installations using vagrant boxes, but having the ability to run a shell script as part of the deployment process is where it gets quite interesting for automating these lab build outs.
Once complete we can then SSH to one of our nodes; `vagrant ssh master` from the terminal should get you access. The default username and password is `vagrant/vagrant`.
You can also use `vagrant ssh node01` and `vagrant ssh node02` to gain access to the worker nodes should you wish.

Now that we are in one of the above nodes in our new cluster, we can issue `kubectl get nodes` to show our 3 node cluster and its status.

At this point we have a running 3 node cluster, with 1 control plane node and 2 worker nodes.
### Vagrantfile and Shell Script walkthrough
If we take a look at our vagrantfile, you will see that we are defining a number of worker nodes, networking IP addresses for the bridged network within VirtualBox, and then some naming. Another thing you will notice is that we are also calling upon some scripts that we want to run on specific hosts.
```
NUM_WORKER_NODES=2
IP_NW="10.0.0."
IP_START=10

# ... (the middle of the Vagrantfile is elided in this diff)
    end
  end
end
```

Let's break down those scripts that are being run. We have three scripts listed in the above Vagrantfile to run on specific nodes.
`master.vm.provision "shell", path: "scripts/common.sh"`
This script above is going to focus on getting the nodes ready; it is going to be run on all 3 of our nodes and it will remove any existing Docker components and reinstall Docker and containerd, as well as kubeadm, kubelet and kubectl. This script will also update existing software packages on the system.
`master.vm.provision "shell", path: "scripts/master.sh"`
The master.sh script will only run on the control plane node; this script is going to create the Kubernetes cluster using kubeadm commands. It will also prepare the config context for access to this cluster, which we will cover next.
`node.vm.provision "shell", path: "scripts/node.sh"`
This is simply going to take the config created by the master and join our nodes to the Kubernetes cluster. This join process again uses kubeadm and another script, which can be found in the config folder.
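Under the hood, a kubeadm-based join like the one node.sh performs is generated on the control plane. A generic sketch (not the repo's exact script; the angle-bracketed values are placeholders):

```shell
# On the control plane node: print a fresh join command
kubeadm token create --print-join-command

# On each worker: run the printed command, which takes the form
# sudo kubeadm join <control-plane-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>
```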
### Access to the Kubernetes cluster
Now we have two clusters deployed: the minikube cluster that we deployed in the previous section, and the new 3 node cluster we just deployed to VirtualBox.
The config file, which you will also have access to on the machine you ran vagrant from, contains the details of how we can gain access to our cluster from our workstation.
Before we show that, let me touch on context.

By default, the Kubernetes CLI client (kubectl) uses `C:\Users\username\.kube\config` to store the Kubernetes cluster details such as endpoint and credentials. If you have deployed a cluster, you will be able to see this file in that location. But if you have so far been using maybe the master node to run all of your kubectl commands via SSH or other methods, then this post will hopefully help you get to grips with connecting from your workstation.
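kubectl's context commands let you see and switch which cluster that file points at (context names vary per setup; `minikube` is just an example):

```shell
kubectl config get-contexts
kubectl config current-context
kubectl config use-context minikube   # example context name
```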
We then need to grab the kubeconfig file from the cluster (we can also get this from our config file once deployed); grab the contents of this file either via SCP or just open a console session to your master node and copy it to the local Windows machine.

We then want to take a copy of that config file and move it to our `$HOME/.kube/config` location.

Now from your local workstation you will be able to run `kubectl cluster-info` and `kubectl get nodes` to validate that you have access to your cluster.

This not only allows for connectivity and control from your Windows machine, but it also allows us to do some port forwarding to access certain services from our Windows machine.
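Port forwarding from the workstation is a one-liner; the service name and ports here are placeholders:

```shell
kubectl port-forward svc/my-service 8080:80
# then browse http://localhost:8080 on the workstation
```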
If you are interested in how you would manage multiple clusters on your workstation then I have a more detailed walkthrough [here](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-6).
I have added this list of walkthrough blogs I have done around deploying different Kubernetes clusters.
- [Kubernetes playground – How to choose your platform](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1)
- [Kubernetes playground – Setting up your cluster](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2)
- [Getting started with CIVO Cloud](https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud)
- [Minikube - Kubernetes Demo Environment For Everyone](https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone)

### What we will cover in the series on Kubernetes

We have started covering some of these mentioned below but we are going to get more hands on tomorrow with our second cluster deployment then we can start deploying applications into our clusters.
|
||||
We have started covering some of these mentioned below but we are going to get more hands on tomorrow with our second cluster deployment then we can start deploying applications into our clusters.
|
||||
|
||||
- Kubernetes Architecture
|
||||
- Kubectl Commands
|
||||
- Kubernetes YAML
|
||||
- Kubernetes Ingress
|
||||
- Kubernetes Architecture
|
||||
- Kubectl Commands
|
||||
- Kubernetes YAML
|
||||
- Kubernetes Ingress
|
||||
- Kubernetes Services
|
||||
- Helm Package Manager
|
||||
- Persistant Storage
|
||||
- Stateful Apps
|
||||
- Helm Package Manager
|
||||
- Persistant Storage
|
||||
- Stateful Apps
|
||||
|
||||
## Resources
|
||||
## Resources
|
||||
|
||||
If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.
|
||||
If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.
|
||||
|
||||
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
|
||||
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)
|
||||
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
|
||||
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)
|
||||
|
||||
See you on [Day 53](day53.md)
|
||||
See you on [Day 53](day53.md)
|
||||
|
@@ -1,63 +1,64 @@

---
title: "#90DaysOfDevOps - Rancher Overview - Hands On - Day 53"
published: false
description: 90DaysOfDevOps - Rancher Overview - Hands On
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048742
---

## Rancher Overview - Hands On

In this section we are going to take a look at Rancher. So far everything we have done has been in the CLI using kubectl, but there are a few really good UIs and multi-cluster management tools that give our operations teams good visibility into our cluster management.

Rancher, according to their [site](https://rancher.com/), is:

> Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.

Rancher enables us to deploy production-grade Kubernetes clusters from pretty much any location and then provides centralised authentication, access control and observability. I mentioned in a previous section that there is an almost overwhelming choice when it comes to Kubernetes and where you should or could run it; looking at Rancher, it really doesn't matter where your clusters are.

### Deploy Rancher

The first thing we need to do is deploy Rancher on our local workstation. There are a few ways and locations you can choose for this step; I want to use my local workstation and run Rancher as a Docker container. Running the command below will pull down a container image and give us access to the Rancher UI.

Other Rancher deployment methods are available: [Rancher Quick-Start-Guide](https://rancher.com/docs/rancher/v2.6/en/quick-start-guide/deployment/)

`sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher`

As you can see in Docker Desktop, we have a running Rancher container.



### Accessing Rancher UI

With the above container running, we should be able to navigate to it via a web page. `https://localhost` will bring up a login page as per below.



Follow the instructions below to get the required password. Because I am on Windows, I chose to use bash for Windows since the grep command is required.



We can then take the above password and log in; the next page is where we can define a new password.



Once we have done the above we will be logged in and we can see our opening screen. As part of the Rancher deployment we will also see a local K3s cluster provisioned.



### A quick tour of Rancher

The first thing for us to look at is our locally deployed K3s cluster. You can see below that we get a good visual of what is happening inside our cluster. This is the default deployment and we have not yet deployed anything to this cluster. You can see it is made up of 1 node and has 5 deployments. You can also see that there are some stats on pods, cores and memory.



On the left-hand menu we also have an Apps & Marketplace tab. This allows us to choose applications we would like to run on our clusters; as mentioned previously, Rancher gives us the capability of running or managing a number of different clusters. With the marketplace we can deploy our applications very easily.



Another thing to mention is that if you need access to any cluster being managed by Rancher, in the top right you have the ability to open a kubectl shell to the selected cluster.



@@ -65,21 +66,21 @@ Another thing to mention is that if you did need to get access to any cluster be

Over the past two sessions we have created a minikube cluster locally and we have used Vagrant with VirtualBox to create a 3-node Kubernetes cluster. With Rancher we can also create clusters. In the [Rancher Folder](Kubernetes/Rancher) you will find additional Vagrantfiles that will build out the same 3 nodes but without the steps for creating our Kubernetes cluster (we want Rancher to do this for us).

We do however want Docker installed and the OS updated, so you will still see the `common.sh` script being run on each of our nodes. This will also install kubeadm, kubectl etc., but it will not run the kubeadm commands to create and join our nodes into a cluster.
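I haven't reproduced the repo's `common.sh` here, but as a rough sketch of what a node bootstrap script like this typically does (package names assume Ubuntu and are illustrative, not the repo's exact script):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Update the OS and install Docker (assumed Ubuntu package names)
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y docker.io apt-transport-https curl

# Install kubeadm, kubectl and kubelet from the Kubernetes apt repository
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubeadm kubectl kubelet

# Note: no `kubeadm init` / `kubeadm join` here - Rancher bootstraps the cluster
```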

We can navigate to our Vagrant folder location and simply run `vagrant up`, and this will begin the process of creating our 3 VMs in VirtualBox.



Now that we have our nodes or VMs in place and ready, we can use Rancher to create our new Kubernetes cluster. The first screen to create your cluster gives you some options as to where your cluster is, i.e. are you using the public cloud managed Kubernetes services, vSphere or something else.



We will be choosing "custom" as we are not using one of the integrated platforms. The opening page is where you define your cluster name (it says local below but you cannot use local; our cluster is called vagrant). You can define Kubernetes versions here, network providers and some other configuration options to get your Kubernetes cluster up and running.



The next page is going to give you the registration code that needs to be run on each of your nodes, with the appropriate services to be enabled: etcd, controlplane and worker. For our master node we want etcd and controlplane, so the command can be seen below.



@@ -87,11 +88,11 @@ The next page is going to give you the registration code that needs to be ran on

```
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.6.3 --server https://10.0.0.1 --token mpq8cbjjwrj88z4xmf7blqxcfmwdsmq92bmwjpphdkklfckk5hfwc2 --ca-checksum a81944423cbfeeb92be0784edebba1af799735ebc30ba8cbe5cc5f996094f30b --etcd --controlplane
```

If networking is configured correctly then you should pretty quickly see the following in your Rancher dashboard, indicating that the first master node is now being registered and the cluster is being created.



We can then repeat the registration process for each of the worker nodes with the following command, and after some time you will have your cluster up and running with the ability to leverage the marketplace to deploy your applications.

```
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.6.3 --server https://10.0.0.1 --token mpq8cbjjwrj88z4xmf7blqxcfmwdsmq92bmwjpphdkklfckk5hfwc2 --ca-checksum a81944423cbfeeb92be0784edebba1af799735ebc30ba8cbe5cc5f996094f30b --worker
```

@@ -99,30 +100,30 @@ sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kube



Over the last 3 sessions we have used a few different ways to get up and running with a Kubernetes cluster. Over the remaining days we are going to look at the application side of the platform, arguably the most important. We will look into services and being able to provision and use our service in Kubernetes.

I have been told since that the requirements around bootstrapping Rancher nodes require those VMs to have 4GB of RAM or they will crash-loop. I have since updated this, as our worker nodes had 2GB.

### What we will cover in the series on Kubernetes

We have started covering some of these mentioned below, but we are going to get more hands-on tomorrow with our second cluster deployment, then we can start deploying applications into our clusters.

- Kubernetes Architecture
- Kubectl Commands
- Kubernetes YAML
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistent Storage
- Stateful Apps

## Resources

If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.

- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)

See you on [Day 54](day54.md)

103
Days/day54.md

@@ -1,5 +1,5 @@

---
title: "#90DaysOfDevOps - Kubernetes Application Deployment - Day 54"
published: false
description: 90DaysOfDevOps - Kubernetes Application Deployment
tags: "devops, 90daysofdevops, learning"

@@ -7,31 +7,32 @@ cover_image: null

canonical_url: null
id: 1048764
---

## Kubernetes Application Deployment

Now we finally get to deploying some applications into our clusters; some would say this is the reason Kubernetes exists, for application delivery.

The idea here is that we can take our container images and deploy these as pods into our Kubernetes cluster, taking advantage of Kubernetes as a container orchestrator.

### Deploying Apps into Kubernetes

There are several ways in which we can deploy our applications into our Kubernetes cluster; we will cover two of the most common approaches, which are YAML files and Helm charts.

We will be using our minikube cluster for these application deployments. We will be walking through some of the previously mentioned components or building blocks of Kubernetes.

All through this section and the Container section we have discussed images and the benefits of Kubernetes and how we can handle scale quite easily on this platform.

In this first step we are simply going to create a stateless application within our minikube cluster. We will be using the de facto standard stateless application in our first demonstration, `nginx`. We will configure a Deployment, which will provide us with our pods, and then we will also create a Service, which will allow us to navigate to the simple web server hosted by the nginx pod. All of this will be contained in a namespace.



### Creating the YAML

In the first demo we want to define everything we do with YAML. We could have a whole section on YAML, but I am going to skim over this and leave some resources at the end that will cover YAML in more detail.

We could create the following as one YAML file, or we could break this down for each aspect of our application, i.e. this could be separate files for namespace, deployment and service creation, but in the file below we separate these by using `---` in one file. You can find this file located [here](Kubernetes) (File name:- nginx-stateless-demo.yaml)

```yaml
apiVersion: v1
kind: Namespace
metadata:
@@ -74,7 +75,8 @@ spec:
      port: 80
      targetPort: 80
```

### Checking our cluster

Before we deploy anything we should just make sure that we have no existing namespace called `nginx`. We can do this by running the `kubectl get namespace` command, and as you can see below we do not have a namespace called `nginx`.

@@ -82,13 +84,13 @@ Before we deploy anything we should just make sure that we have no existing name

### Time to deploy our App

Now we are ready to deploy our application to our minikube cluster. This same process will work on any other Kubernetes cluster.

We need to navigate to our YAML file location and then we can run `kubectl create -f nginx-stateless-demo.yaml`, after which you will see that 3 objects have been created: a namespace, a deployment and a service.



Let's run the command again to see the available namespaces in our cluster, `kubectl get namespace`, and you can now see that we have our new namespace.



@@ -96,70 +98,69 @@ If we then check our namespace for pods using `kubectl get pods -n nginx` you wi



We can also check that our service is created by running `kubectl get service -n nginx`.



Finally we can then go and check our deployment; the deployment is where and how we keep our desired configuration.



The above takes a few commands that are worth knowing, but you can also use `kubectl get all -n nginx` to see everything we deployed with that one YAML file.



You will notice in the above that we also have a ReplicaSet; in our deployment we define how many replicas of our image we would like to deploy. This was set to 1 initially, but if we wanted to quickly scale our application then we can do this in several ways.

We can edit our deployment using `kubectl edit deployment nginx-deployment -n nginx`, which will open a text editor within your terminal and enable you to modify your deployment.



Upon saving the above in your text editor within the terminal, if there were no issues and the correct formatting was used, then you should see additional pods deployed in your namespace.



We can also change the number of replicas using kubectl and `kubectl scale deployment nginx-deployment --replicas=10 -n nginx`.



We can equally use this method to scale our application back down to 1 again if we wish, using either method. I used the edit option, but you can also use the scale command above.



Hopefully here you can see the use case: not only are things super fast to spin up and down, but we also have the ability to quickly scale our applications up and down. If this was a web server we could scale up during busy times and down when load is quiet.

### Exposing our app

But how do we access our web server?

If you look above at our service you will see there is no External IP available, so we cannot just open a web browser and expect it to be there magically. For access we have a few options.

**ClusterIP** - The IP you do see is a ClusterIP. This is on an internal network on the cluster. Only things within the cluster can reach this IP.

**NodePort** - Exposes the service on the same port on each of the selected nodes in the cluster using NAT.

**LoadBalancer** - Creates an external load balancer in the current cloud. We are using minikube, but if you have built your own Kubernetes cluster, i.e. what we did in VirtualBox, you would need to deploy a load balancer such as MetalLB into your cluster to provide this functionality.

**Port-Forward** - We also have the ability to port forward, which allows you to access and interact with internal Kubernetes cluster processes from your localhost. Really this option is only for testing and fault finding.
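For comparison, a NodePort exposure can also be written declaratively as a Service manifest; a sketch, assuming the deployment's pods are labelled `app: nginx`:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx
spec:
  type: NodePort
  selector:
    app: nginx # must match the pod template labels in the deployment
  ports:
    - port: 80 # port the service exposes inside the cluster
      targetPort: 80 # container port on the pods
      nodePort: 30080 # optional; must fall in the 30000-32767 range if set
```

Applying this with `kubectl apply -f` achieves the same result as running `kubectl expose` with `--type=NodePort`.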

We now have a few options to choose from. Minikube has some limitations, or differences I should say, compared to a full-blown Kubernetes cluster.

We could simply run the following command to port forward our access using our local workstation.

`kubectl port-forward deployment/nginx-deployment -n nginx 8090:80`



Note that when you run the above command, this terminal is now unusable as it is acting as your port forward to your local machine and port.



We are now going to run through how we can expose our application specifically with Minikube. We can also use minikube to create a URL to connect to a service. [More details](https://minikube.sigs.k8s.io/docs/commands/service/)

First of all we will delete our service using `kubectl delete service nginx-service -n nginx`

Next we are going to create a new service using `kubectl expose deployment nginx-deployment --name nginx-service --namespace nginx --port=80 --type=NodePort`. Notice here we are going to use `expose` and change the type to NodePort.
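With the service recreated as a NodePort, minikube can hand us a reachable URL directly (service and namespace names as created above):

```shell
# Ask minikube for a URL that routes to the NodePort service
minikube service nginx-service --namespace nginx --url

# Or open it straight in the default browser
minikube service nginx-service --namespace nginx
```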



@@ -171,7 +172,7 @@ Open a browser or control and click on the link in your terminal.



### Helm

Helm is another way in which we can deploy our applications. Known as "The package manager for Kubernetes", you can find out more [here](https://helm.sh/)

@@ -183,7 +184,7 @@ It is super simple to get Helm up and running or installed. Simply. You can find

Or you can use an installer script; the benefit here is that the latest version of helm will be downloaded and installed.

```shell
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3

chmod 700 get_helm.sh
```

@@ -193,30 +194,30 @@

Finally, there is also the option to use a package manager: Homebrew for macOS, Chocolatey for Windows, apt on Ubuntu/Debian, snap and pkg are also available.

Helm so far seems to be the go-to way to get different test applications downloaded and installed in your cluster.

A good resource to link here would be [ArtifactHUB](https://artifacthub.io/), which is a resource to find, install and publish Kubernetes packages. I will also give a shout out to [KubeApps](https://kubeapps.com/), which is a UI to display helm charts.
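As a sketch of the typical Helm workflow (the release name and namespace here are illustrative; `bitnami/nginx` is a widely used public chart, though treat the repo URL as an assumption):

```shell
# Add a chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart as a named release into its own namespace
helm install my-nginx bitnami/nginx --namespace nginx-helm --create-namespace

# Inspect and remove the release when finished
helm list --namespace nginx-helm
helm uninstall my-nginx --namespace nginx-helm
```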
|
||||
|
||||
### What we will cover in the series on Kubernetes
|
||||
### What we will cover in the series on Kubernetes
|
||||
|
||||
We have started covering some of the topics mentioned below, but we are going to get more hands-on tomorrow with our second cluster deployment, and then we can start deploying applications into our clusters.

- Kubernetes Architecture
- Kubectl Commands
- Kubernetes YAML
- Kubernetes Ingress
- Kubernetes Services
- Helm Package Manager
- Persistent Storage
- Stateful Apps

## Resources

If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.

- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)

See you on [Day 55](day55.md)

Days/day55.md
---
title: "#90DaysOfDevOps - State and Ingress in Kubernetes - Day 55"
published: false
description: 90DaysOfDevOps - State and Ingress in Kubernetes
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048779
---

## State and Ingress in Kubernetes

In this closing section of Kubernetes, we are going to take a look at State and Ingress.

Everything we have covered so far has been stateless. A stateless application does not care which network it is using and does not need any permanent storage. Stateful apps, databases for example, need pods that can reach each other through a unique identity that does not change (hostnames, IPs, etc.). Examples of stateful applications include MySQL clusters, Redis, Kafka, MongoDB and others: basically, any application that stores data.

### Stateful Application

StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that Kubernetes maintains regardless of where they are scheduled. The state information and other resilient data for any given StatefulSet Pod is maintained in persistent disk storage associated with the StatefulSet.

### Deployment vs StatefulSet

- Replicating stateful applications is more difficult.
- Replicating our pods in a deployment (stateless application) is identical and interchangeable.
- Pods are created in random order with random hashes.
- One Service that load balances to any Pod.

When it comes to StatefulSets or stateful applications, the above is more difficult:

- Pods cannot be created or deleted at the same time.
- Pods can't be randomly addressed.
- Replica Pods are not identical.

Something you will see in our demonstration shortly is that each pod has its own identity. With a stateless application you will see random names, for example `app-7469bbb6d7-9mhxd`, whereas a stateful application would be more aligned to `mongo-0`, and when scaled it will create a new pod called `mongo-1`.

These pods are created from the same specification, but they are not interchangeable. Each StatefulSet pod keeps a persistent identifier across any re-scheduling. This is necessary because with stateful workloads such as a database, where we require reading and writing, we cannot have two pods writing at the same time with no awareness of each other, as this will give us data inconsistency. We need to ensure that only one of our pods is writing to the database at any given time, although we can have multiple pods reading that data.

Each pod in a StatefulSet has access to its own persistent volume and a replica copy of the database to read from, which is continuously updated from the master. It's also interesting to note that each pod stores its pod state in this persistent volume; if `mongo-0` dies, then when a new one is provisioned it will take over the pod state stored in storage.
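To make this concrete, here is a minimal sketch of a StatefulSet paired with a headless Service. This is not the exact manifest used in the demo below; the names, image and sizes are illustrative. The `serviceName` field and the headless Service give each replica its stable DNS name, and `volumeClaimTemplates` gives each replica its own PersistentVolumeClaim (`data-mongo-0`, `data-mongo-1`, and so on):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None          # headless: pods get stable DNS names like mongo-0.mongo
  selector:
    app: mongo
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo       # must reference the headless Service above
  replicas: 2
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4     # illustrative image tag
          ports:
            - containerPort: 27017
  volumeClaimTemplates:        # each replica gets its own PVC: data-mongo-0, data-mongo-1...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```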

TL;DR: StatefulSets vs Deployments

- Predictable pod name = `mongo-0`
- Fixed individual DNS name
- Pod Identity - Retain State, Retain Role
- Replicating stateful apps is complex
- There are lots of things you must do:
  - Configure cloning and data synchronisation.
  - Make remote shared storage available.
  - Management & backup

### Persistent Volumes | Claims | StorageClass

How do we persist data in Kubernetes?

We mentioned above that with a stateful application we have to store the state somewhere, and this is where the need for a volume comes in; Kubernetes does not provide persistence out of the box.

We require a storage layer that does not depend on the pod lifecycle. This storage should be available and accessible from all of our Kubernetes nodes. The storage should also be outside of the Kubernetes cluster to be able to survive even if the Kubernetes cluster crashes.

### Persistent Volume

- A cluster resource (like CPU and RAM) to store data.
- Created via a YAML file.
- Needs actual physical storage (NAS).
- External integration to your Kubernetes cluster.
- You can have different types of storage available in your cluster.
- PVs are not namespaced.
- Local storage is available, but it would be specific to one node in the cluster.
- Database persistence should use remote storage (NAS).
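A minimal PersistentVolume sketch, assuming an NFS export on a NAS; the server address, path and capacity here are illustrative assumptions:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo            # PVs are cluster-scoped, so no namespace
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:                     # remote storage backend, e.g. a NAS export (assumed values)
    server: 10.0.0.10
    path: /exports/data
```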
### Persistent Volume Claim

A persistent volume can exist and be available, but unless it is claimed by an application it is not being used.

- Created via a YAML file.
- The Persistent Volume Claim is used in the pod configuration (volumes attribute).
- The volume is mounted into the pod.
- Pods can have multiple different volume types (ConfigMap, Secret, PVC).
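As a sketch, a claim and a pod that mounts it; the names and namespace are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
  namespace: demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data               # the volume is mounted into the pod
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-claim      # references the PVC above
```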
Another way to think of PVs and PVCs:

PVs are created by the Kubernetes admin.
PVCs are created by the user or application developer.

We also have two other types of volumes that we will not get into detail on, but they are worth mentioning:

### ConfigMaps | Secrets

- Configuration file for your pod.
- Certificate file for your pod.

### StorageClass

- Created via a YAML file.
- Provisions Persistent Volumes dynamically when a PVC claims it.
- Each storage backend has its own provisioner.
- The storage backend is defined in YAML (via the provisioner attribute).
- Abstracts the underlying storage provider.
- Defines parameters for that storage.
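As a sketch, a StorageClass similar to the one the minikube csi-hostpath addon provides later in this walkthrough; the provisioner value is the hostpath CSI driver's name, and the other fields vary by backend:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # mark it as the default
provisioner: hostpath.csi.k8s.io   # each storage backend has its own provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```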
### Walkthrough time

In the session yesterday we walked through creating a stateless application; here we want to do the same, but we want to use our minikube cluster to deploy a stateful workload.

A recap of the minikube command we are using to have the capability and addons to use persistence: `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p mc-demo --kubernetes-version=1.21.2`

This command uses the csi-hostpath-driver, which is what gives us our StorageClass, something I will show later.

The build-out of the application looks like this:

You can find the YAML configuration file for this application in the repository: [pacman-stateful-demo.yaml](Kubernetes)

### StorageClass Configuration

There is one more step we should run before we start deploying our application, and that is to make sure that our StorageClass (csi-hostpath-sc) is the default one. We can check this by running `kubectl get storageclass`, but out of the box the minikube cluster shows the standard StorageClass as default, so we have to change that with the following commands.

This first command will make our csi-hostpath-sc StorageClass the default.

`kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'`

This command will remove the default annotation from the standard StorageClass.

`kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'`

We start with no pacman namespace in our cluster. `kubectl get namespace`

We will then deploy our YAML file with `kubectl create -f pacman-stateful-demo.yaml`. You can see from this command that we are creating a number of objects within our Kubernetes cluster.

We now have our newly created namespace.

You can then see from the next image and the command `kubectl get all -n pacman` that we have a number of things happening inside our namespace. We have pods running our NodeJS web front end, and we have mongo running our backend database. There are services for both pacman and mongo to access those pods. We have a deployment for pacman and a StatefulSet for mongo.

We also have our persistent volume and persistent volume claim: running `kubectl get pv` will give us our non-namespaced persistent volumes, and running `kubectl get pvc -n pacman` will give us our namespaced persistent volume claims.

### Playing the game | I mean accessing our mission-critical application

Because we are using minikube, as mentioned in the stateless application, we have a few hurdles to get over when it comes to accessing our application. If, however, we had access to ingress or a load balancer within our cluster, the service is set up to automatically get an IP from that to gain access externally (you can see this above in the image of all components in the pacman namespace).

For this demo we are going to use the port-forward method to access our application. Open a new terminal and run `kubectl port-forward svc/pacman 9090:80 -n pacman`; opening a browser, we will now have access to our application. If you are running this in AWS or specific locations then this will also report on the cloud and zone, as well as the host, which equals your pod within Kubernetes; again, you can look back and see this pod name in our screenshots above.

Now we can go and create a high score, which will then be stored in our database.

OK, great, we have a high score, but what happens if we go and delete our `mongo-0` pod? By running `kubectl delete pod mongo-0 -n pacman` I can delete it, and if you are still in the app you will see that the high score is unavailable, at least for a few seconds.

Now if I go back to my game I can create a new game and see my high scores. The only way you can truly believe me on this, though, is if you give it a try and share your high scores on social media!

With the deployment we can scale this up using the commands that we covered in the previous sections.

### Ingress explained

Before we wrap things up with Kubernetes, I also wanted to touch on a huge aspect of Kubernetes, and that is ingress.

### What is ingress?

So far in our examples we have used port-forward, or specific commands within minikube, to gain access to our applications, but this is not going to work in production. We are going to want a better way of accessing our applications at scale, with multiple users.

We also spoke about NodePort being an option, but again this should only be for test purposes.

Ingress gives us a better way of exposing our applications; it allows us to define routing rules within our Kubernetes cluster.

With ingress, we forward requests to the internal service of our application.

### When do you need ingress?

If you are using a cloud provider's managed Kubernetes offering, it will most likely have its own ingress option for your cluster, or provide you with its own load balancer option. You don't have to implement this yourself; that's one of the benefits of managed Kubernetes.

If you are running your own cluster then you will need to configure an entrypoint.

### Configure Ingress on Minikube

On my particular running cluster, called mc-demo, I can run the following command to get ingress enabled on my cluster.

`minikube --profile='mc-demo' addons enable ingress`

If we check our namespaces now, you will see that we have a new ingress-nginx namespace.

Now we must create our ingress YAML configuration to hit our Pacman service. I have added this file to the repository: [pacman-ingress.yaml](Kubernetes)
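As a sketch of what such an ingress manifest might contain (assuming the nginx ingress class and the pacman Service on port 80; the actual file in the repository may differ):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pacman
  namespace: pacman
spec:
  ingressClassName: nginx    # assumes the nginx ingress controller enabled above
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pacman   # routes traffic to the internal pacman Service
                port:
                  number: 80
```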
We can then create this in our ingress namespace with `kubectl create -f pacman-ingress.yaml`

Then if we run `kubectl get ingress -n pacman`

I am then told that, because we are using minikube running on WSL2 in Windows, we have to create the minikube tunnel using `minikube tunnel --profile=mc-demo`.

But I am still not able to gain access to 192.168.49.2 and play my pacman game.

If anyone has or can get this working on Windows and WSL, I would appreciate the feedback. I will raise an issue on the repository for this and come back to it once I have time and a fix.

UPDATE: I feel like this blog helps identify the possible cause of this not working with WSL: [Configuring Ingress to run Minikube on WSL2 using Docker runtime](https://hellokube.dev/posts/configure-minikube-ingress-on-wsl2/)

## Resources

If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.

- [Kubernetes StatefulSet simply explained](https://www.youtube.com/watch?v=pPQKAR1pA9U)
- [Kubernetes Volumes explained](https://www.youtube.com/watch?v=0swOh5C3OVM)
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)

This wraps up our Kubernetes section. There is so much additional content we could cover on Kubernetes, and 7 days gives us foundational knowledge, but there are people running through [100DaysOfKubernetes](https://100daysofkubernetes.io/overview.html) where you can get really into the weeds.

Next up we are going to be taking a look at Infrastructure as Code and the important role it plays from a DevOps perspective.

See you on [Day 56](day56.md)

Days/day56.md
---
title: "#90DaysOfDevOps - The Big Picture: IaC - Day 56"
published: false
description: 90DaysOfDevOps - The Big Picture IaC
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048709
---

## The Big Picture: IaC

Humans make mistakes! Automation is the way to go!

How do you build your systems today?

What would be your plan if you were to lose everything today: physical machines, virtual machines, cloud VMs, cloud PaaS, etc.?

How long would it take you to replace everything?

Infrastructure as Code provides a solution to do this, whilst also letting us test it. We should not confuse this with backup and recovery; in terms of your infrastructure, environments and platforms, we should be able to spin them up and treat them as cattle rather than pets.

The TL;DR is that we can use code to rebuild our entire environment.

If we also remember, from the start we said that DevOps in general is a way to break down barriers in order to deliver systems into production safely and rapidly.

Infrastructure as Code helps us deliver those systems. We have spoken about a lot of processes and tools; IaC brings us more tools to become familiar with to enable this part of the process.

We are going to concentrate on Infrastructure as Code in this section. You might also hear it called infrastructure from code or configuration as code, but I think the best-known term is Infrastructure as Code.
### Pets vs Cattle

If we take a look at the pre-DevOps days, when we had the requirement to build a new application, we would need to prepare our servers manually for the most part.

- Deploy VMs | Physical Servers and install the operating system
- Configure networking
- Create routing tables
- Install software and updates
- Configure software
- Install database
This would be a manual process performed by Systems Administrators. The bigger the application the more resource and servers required the more manual effort it would take to bring up those systems. This would take a huge amount of human effort and time but also as a business you would have to pay for that resource to build out this environment. As I opened the section with "Humans make mistakes! Automation is the way to go!"
|
||||
|
||||
Ongoing from the above initial setup phase you then have maintenance of these servers.
|
||||
Ongoing from the above initial setup phase you then have maintenance of these servers.
|
||||
|
||||
- Update versions
|
||||
- Deploy new releases
|
||||
- Data Management
|
||||
- Recovery of Applications
|
||||
- Add, Remove and Scale Servers
|
||||
- Update versions
|
||||
- Deploy new releases
|
||||
- Data Management
|
||||
- Recovery of Applications
|
||||
- Add, Remove and Scale Servers
|
||||
- Network Configuration
|
||||
|
||||
Add the complexity of multiple test and dev environments.
|
||||
Add the complexity of multiple test and dev environments.
|
||||

This is where Infrastructure as Code comes in. The above was very much a time when we would look after those servers as if they were pets; people even gave their servers pet names, or at least named them something, because they were going to be around for a while, hopefully part of the "family" for a while.

With Infrastructure as Code we have the ability to automate all of these tasks end to end. Infrastructure as Code is a concept, and there are tools that carry out this automated provisioning of infrastructure. At this point, if something bad happens to a server you throw it away and spin up a new one. The process is automated and the server is exactly as defined in code. At this point we don't care what the servers are called; they are there in the field serving their purpose until they are no longer in the field, and we have another to replace them, either because of a failure or because we updated part or all of our application.

This can be used on almost all platforms: virtualisation, cloud-based workloads and also cloud-native infrastructure such as Kubernetes and containers.

### Infrastructure Provisioning

Not all IaC tools cover all of the below. You will find that the tool we are going to be using during this section only really covers the first two areas below. Terraform is that tool, and it allows us to start from nothing, define in code what our infrastructure should look like, and then deploy it. It will also enable us to manage that infrastructure and even initially deploy an application, but at that point it is going to lose track of the application, which is where the next section comes in; something like Ansible as a configuration management tool might work better on that front.

Without jumping ahead, tools like Chef, Puppet and Ansible are best suited to deal with the initial application setup and then to manage those applications and their configuration.

Initial installation & configuration of software

- Spinning up new servers
- Network configuration
- Creating load balancers
- Configuration on infrastructure level

### Configuration of provisioned infrastructure

- Installing applications on servers
- Preparing the servers to deploy your application

### Deployment of Application

- Deploy and manage the application
- Maintain phase
- Software updates
- Reconfiguration

### Difference of IaC tools

Declarative vs procedural

Procedural

- Step-by-step instructions
- Create a server > Add a server > Make this change

Declarative

- Declare the end result
- e.g. 2 servers
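
To make the declarative idea concrete, here is a minimal Terraform sketch (the resource type, AMI ID and names are illustrative placeholders, not taken from this challenge). We only declare the end result, "2 servers", and the tool works out the steps:

```
# Declarative: state the desired end result, not the steps.
# Terraform compares this declaration against the current state
# and figures out what to create, change or destroy itself.
resource "aws_instance" "web" {
  count         = 2                       # the end result we declare: 2 servers
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"
}
```

Running the same configuration again would change nothing, because the declared end state already matches reality.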

Mutable (pets) vs Immutable (cattle)

Mutable

- Change instead of replace
- Generally long-lived

Immutable

- Replace instead of change
- Possibly short-lived

This is really why we have lots of different options for Infrastructure as Code: there is no one tool to rule them all.

We are going to be mostly using Terraform and getting hands-on, as this is the best way to start seeing the benefits of Infrastructure as Code in action. Getting hands-on is also the best way to pick up the skills, as you are going to be writing code.

Next up, we will start looking into Terraform with a 101 before we get hands-on using it.

## Resources

I have listed a lot of resources below, and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with your resources and I will be happy to review and add them to the list.

- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
- [Terraform Simple Projects](https://terraform.joshuajebaraj.com/)
- [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks)
- [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform)

---
title: "#90DaysOfDevOps - An intro to Terraform - Day 57"
published: false
description: 90DaysOfDevOps - An intro to Terraform
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048710
---

## An intro to Terraform

"Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently"

The above quote is from HashiCorp, the company behind Terraform.

"Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files."

HashiCorp have a great resource in [HashiCorp Learn](https://learn.hashicorp.com/terraform?utm_source=terraform_io&utm_content=terraform_io_hero), which covers all of their products and gives some great walkthrough demos when you are trying to achieve something with Infrastructure as Code.

All cloud providers and on-prem platforms generally give us access to management consoles which enable us to create our resources via a UI. These platforms also generally provide CLI or API access to create the same resources, and with an API we have the ability to provision fast.

Infrastructure as Code allows us to hook into those APIs to deploy our resources in a desired state.

Below are some other tools, though this list is neither exclusive nor exhaustive. If you have other tools then please share via a PR.

| Cloud Specific                  | Cloud Agnostic |
| ------------------------------- | -------------- |
| AWS CloudFormation              | Terraform      |
| Azure Resource Manager          | Pulumi         |
| Google Cloud Deployment Manager |                |

This is another reason why we are using Terraform: we want to be agnostic to the clouds and platforms that we wish to use for our demos, but also in general.

## Terraform Overview

Terraform is a provisioning-focused tool; it is a CLI that gives us the capability to provision complex infrastructure environments. With Terraform we can define complex infrastructure requirements that exist locally or remotely (cloud). Terraform not only enables us to build things initially but also to maintain and update those resources for their lifetime.

We are going to cover the high level here, but for more details and loads of resources you can head to [terraform.io](https://www.terraform.io/)

### Write

Terraform allows us to create declarative configuration files that will build our environments. The files are written using the HashiCorp Configuration Language (HCL), which allows for concise descriptions of resources using blocks, arguments, and expressions. We will of course be looking into these in detail when deploying VMs, containers, and within Kubernetes.
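
As a hedged sketch of those three constructs (the values, the `var.instance_type` and `var.environment` variables, and the AMI ID below are illustrative assumptions, not from a real deployment): a block has a type and labels, arguments assign values inside it, and expressions compute or reference values:

```
# A block: type "resource" with two labels (resource type and local name)
resource "aws_instance" "example" {
  # Arguments: name = value pairs inside the block
  ami           = "ami-0123456789abcdef0" # placeholder literal value
  instance_type = var.instance_type       # an expression referencing an input variable

  tags = {
    Name = "web-${var.environment}"       # an expression using string interpolation
  }
}
```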

### Plan

The ability to check that the above configuration files are going to deploy what we want to see, using specific functions of the Terraform CLI to test that plan before deploying or changing anything. Remember, Terraform is a continued tool for your infrastructure: if you would like to change an aspect of your infrastructure, you should do that via Terraform so that it is all captured in code.

### Apply

Obviously, once you are happy, you can go ahead and apply this configuration to the platform of your choice.

Another thing to mention is that there are also modules available. This is similar to container images in that these modules have been created and shared in public, so you do not have to create them again and again; you just reuse the best practice of deploying a specific infrastructure resource the same way everywhere. You can find the modules available [here](https://registry.terraform.io/browse/modules)

The Terraform workflow looks like this: (_taken from the terraform site_)

### Terraform vs Vagrant

During this challenge we have used Vagrant, which happens to be another HashiCorp open-source tool, one that concentrates on development environments.

- Vagrant is a tool focused on managing development environments
- Terraform is a tool for building infrastructure

A great comparison of the two tools can be found on the official [Hashicorp site](https://www.vagrantup.com/intro/vs/terraform)

## Terraform Installation

There is really not much to the installation of Terraform.

Terraform is cross-platform, and you can see below on my Linux machine that we have several options to download and install the CLI.

Using `arkade` to install Terraform: arkade is a handy little tool for getting your required tools, apps and CLIs onto your system. A simple `arkade get terraform` will allow for an update of Terraform if available, or the same command will install the Terraform CLI.

We are going to get into more around HCL and then also start using Terraform to create some infrastructure resources on various different platforms.

## Resources

I have listed a lot of resources below, and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with your resources and I will be happy to review and add them to the list.

- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
- [Terraform Simple Projects](https://terraform.joshuajebaraj.com/)
- [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks)
- [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform)

---
title: "#90DaysOfDevOps - HashiCorp Configuration Language (HCL) - Day 58"
published: false
description: 90DaysOfDevOps - HashiCorp Configuration Language (HCL)
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048741
---

## HashiCorp Configuration Language (HCL)

Before we start making stuff with Terraform we have to dive a little into HashiCorp Configuration Language (HCL). So far during our challenge we have looked at a few different scripting and programming languages, and here is another one. We touched on the [Go programming language](day07.md), then [bash scripts](day19.md), and we even touched on a little Python when it came to [network automation](day27.md).

Now we must cover HashiCorp Configuration Language (HCL). If this is the first time you are seeing the language it might look a little daunting, but it's quite simple and very powerful.

As we move through this section, we are going to be using examples that we can run locally on our system regardless of which OS you are using. We will be using VirtualBox, albeit not the infrastructure platform you would usually be using with Terraform. However, running this locally is free and will allow us to achieve what we are looking for in this post. We could also extend this post's concepts to Docker or Kubernetes as well.

In general though, you would or should be using Terraform to deploy your infrastructure in the public cloud (AWS, Google, Microsoft Azure) but also in your virtualisation environments such as VMware, Microsoft Hyper-V and Nutanix AHV. In the public cloud Terraform allows us to do a lot more than just automated Virtual Machine deployment; we can create all the required infrastructure such as PaaS workloads and all of the required networking assets such as VPCs and Security Groups.

There are two important aspects to Terraform: we have the code, which we are going to get into in this post, and then we also have the state. Both of these together could be called the Terraform core. We then have the environment we wish to speak to and deploy into, which is executed using Terraform providers, briefly mentioned in the last session; we have an AWS provider, we have Azure providers, etc. There are hundreds.

### Basic Terraform Usage

Let's take a look at a Terraform `.tf` file to see how they are made up. The first example we will walk through will in fact be code to deploy resources to AWS; this would also require the AWS CLI to be installed on your system and configured for your account.

### Providers

In the `terraform` block we define the providers that this configuration requires; in this case, the AWS provider:

```
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
```

We might also add in a region here to determine which AWS region we would like to provision to. We can do this by adding the following:

```
provider "aws" {
  region = "eu-west-1" # example value, pick the region that suits you
}
```

### Terraform Resources

- Another important component of a Terraform config file, which describes one or more infrastructure objects such as an EC2 instance, a load balancer or a VPC.
- A resource block declares a resource of a given type ("aws_instance") with a given local name ("90daysofdevops").
- The resource type and name together serve as an identifier for a given resource.

```
resource "aws_instance" "90daysofdevops" {
  ami           = data.aws_ami.instance_id.id
  instance_type = "t2.micro" # example instance type
  user_data     = <<-EOF
                  #!/bin/bash
                  sudo yum update -y
                  sudo yum install -y httpd
                  sudo systemctl start httpd
                  sudo systemctl enable httpd
                  EOF
}
```

You can see from the above that we are also running a `yum` update and installing `httpd` into our EC2 instance.

If we now look at the complete main.tf file, it might look something like this.

```
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "eu-west-1" # example value, pick the region that suits you
}

resource "aws_instance" "90daysofdevops" {
  ami           = data.aws_ami.instance_id.id
  instance_type = "t2.micro" # example instance type
  user_data     = <<-EOF
                  #!/bin/bash
                  sudo yum update -y
                  sudo yum install -y httpd
                  sudo systemctl start httpd
                  sudo systemctl enable httpd
                  EOF
}
```

The above code will go and deploy a very simple web server as an EC2 instance in AWS. The great thing about this, and any other configuration like it, is that we can repeat it and we will get the same output every single time. Other than the chance that I have messed up the code, there is no human interaction with the above.

We can take a look at a super simple example, one that you will likely never use, but let's humour it anyway. Like with all good scripting and programming languages, we should start with a hello-world scenario.

```
terraform {
  # no providers are required for this hello-world example
}

output "hello_world" {
  value = "Hello, 90DaysOfDevOps from Terraform"
}
```

You will find this file in the IAC folder under hello-world, but out of the box this is not simply going to work; there are some commands we need to run in order to use our Terraform code.

In your terminal, navigate to the folder where the main.tf has been created. This could be from this repository, or you could create a new one using the code above.

When in that folder we are going to run `terraform init`.

We need to perform this in any directory before we run any Terraform code. Initialising a configuration directory downloads and installs the providers defined in the configuration; in this case we have no providers, but in the example above this would download the AWS provider for this configuration.

The next command will be `terraform plan`.

The `terraform plan` command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.

You can simply see below that with our hello-world example we are going to see an output; if this were an AWS EC2 instance, we would see all the steps that we would be creating.

At this point we have initialised our repository and we have our providers downloaded where required. We have run a test walkthrough to make sure this is what we want to see, so now we can run and deploy our code.

`terraform apply` allows us to do this. There is a built-in safety measure to this command: it will again give you a plan view of what is going to happen, which warrants a response from you to say yes to continue.

When we type in yes at the "enter a value" prompt, our code is deployed. Obviously not that exciting, but you can see we have the output that we defined in our code.

Now, we have not deployed anything; we have not added, changed or destroyed anything, but if we had then we would see that indicated in the above as well. If, however, we had deployed something and we wanted to get rid of everything we deployed, we can use the `terraform destroy` command. Again, this has that safety where you have to type yes, although you can use `--auto-approve` on the end of your `apply` and `destroy` commands to bypass that manual intervention. But I would advise only using this shortcut when learning and testing, as everything will disappear, sometimes faster than it was built.

From this there are really four commands we have covered from the Terraform CLI:

- `terraform init` = get your project folder ready with providers
- `terraform plan` = show what is going to be created or changed during the next command, based on our code
- `terraform apply` = go and deploy the resources defined in our code
- `terraform destroy` = destroy the resources we have created in our project
We also covered two important aspects of our code files:

- providers = how Terraform speaks to the end platform via APIs
- resources = what it is we want to deploy with code
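As a minimal sketch of how these two aspects fit together — using the `hashicorp/random` provider rather than the repo's own example — a single `.tf` file might look like this:

```
terraform {
  required_providers {
    # provider: the plugin Terraform downloads during init and uses to talk to the platform
    random = {
      source = "hashicorp/random"
    }
  }
}

# resource: what we want Terraform to create
resource "random_pet" "demo" {
  length = 2
}

# output: printed after terraform apply completes
output "hello_world" {
  value = "Hello, ${random_pet.demo.id}"
}
```

Running `terraform init`, `terraform plan`, `terraform apply` and then `terraform destroy` against this one file exercises all four commands above.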
Another thing to note: when running `terraform init`, take a look at the tree of the folder before and after to see what happens and where we store providers and modules.

### Terraform state
We also need to be aware of the state file that is created inside our directory; for this hello world example our state file is simple. This is a JSON file which is the representation of the world according to Terraform. The state will happily show off your sensitive data, so be careful, and as a best practice add your `.tfstate` files to your `.gitignore` file before uploading to GitHub.

By default the state file lives inside the same directory as your project code, but it can also be stored remotely. In a production environment this is likely going to be a shared location such as an S3 bucket.
Another option could be Terraform Cloud; this is a paid-for managed service (free for up to 5 users).
The pros for storing state in a remote location are that we get:

- Sensitive data encrypted
- Collaboration
- Automation

However, it could bring increased complexity.
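A remote state configuration is a `backend` block in your project. The sketch below assumes a pre-existing S3 bucket; the bucket and key names are hypothetical:

```
terraform {
  backend "s3" {
    bucket  = "my-terraform-state"            # hypothetical; the bucket must already exist
    key     = "hello-world/terraform.tfstate" # path to the state object within the bucket
    region  = "us-east-1"
    encrypt = true                            # encrypt the (sensitive) state at rest
  }
}
```

After adding or changing a backend block, `terraform init` must be run again so Terraform can migrate the state to the new location.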
```json
{
  "version": 4,
  "terraform_version": "1.1.6",
  ...
}
```
## Resources

I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR and I will be happy to review and add them to the list.

- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
- [Terraform Simple Projects](https://terraform.joshuajebaraj.com/)
- [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks)
- [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform)
---
title: "#90DaysOfDevOps - Create a VM with Terraform & Variables - Day 59"
published: false
description: 90DaysOfDevOps - Create a VM with Terraform & Variables
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049051
---

## Create a VM with Terraform & Variables
In this session we are going to be creating a VM, or two VMs, using Terraform inside VirtualBox. This is not the norm; VirtualBox is a workstation virtualisation option and this would not really be a use case for Terraform, but I am currently 36,000ft in the air and, as much as I have deployed public cloud resources this high in the clouds, it is much faster to do this locally on my laptop.

This is purely for demo purposes, but the concept is the same: we are going to have our desired state configuration code and then run that against the VirtualBox provider. In the past we have used Vagrant here, and I covered the differences between Vagrant and Terraform at the beginning of the section.

### Create virtual machine in VirtualBox

The first thing we are going to do is create a new folder called virtualbox, then create a virtualbox.tf file; this is going to be where we define our resources. The code below, which can be found in the VirtualBox folder as virtualbox.tf, is going to create 2 VMs in VirtualBox.

You can find more about the community virtualbox provider [here](https://registry.terraform.io/providers/terra-farm/virtualbox/latest/docs/resources/vm).
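The virtualbox.tf file is along these lines — a sketch based on the terra-farm provider's registry docs, so the box URL, version pin and host-only interface name are assumptions and may differ from the repo's file:

```
terraform {
  required_providers {
    virtualbox = {
      source  = "terra-farm/virtualbox"
      version = "0.2.2-alpha.1"   # assumed version pin
    }
  }
}

# two identical VMs, named node-01 and node-02
resource "virtualbox_vm" "node" {
  count  = 2
  name   = format("node-%02d", count.index + 1)
  image  = "https://app.vagrantup.com/ubuntu/boxes/bionic64/versions/1.0.282/providers/virtualbox.box"
  cpus   = 2
  memory = "512 mib"

  network_adapter {
    type           = "hostonly"
    host_interface = "vboxnet1"   # assumed host-only interface name
  }
}

output "IPAddr" {
  value = element(virtualbox_vm.node.*.network_adapter.0.ipv4_address, 1)
}
```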
Now that we have our code defined, we can perform `terraform init` on our folder to download the provider for VirtualBox.

Obviously you will also need to have VirtualBox installed on your system. We can then run `terraform plan` to see what our code will create for us, followed by `terraform apply`; the image below shows the completed process.

In VirtualBox you will now see your 2 virtual machines.
### Change configuration

Let's add another node to our deployment. We can simply change the count line to show our newly desired number of nodes. When we run `terraform apply` it will look something like the below.

Once complete, in VirtualBox you can see we now have 3 nodes up and running.

When we are finished, we can clear this up using `terraform destroy` and our machines will be removed.
### Variables & Outputs

We did mention outputs when we ran our hello-world example in the last session, but we can get into more detail here.

There are also many other variables that we can use, and there are a few different ways in which we can define them:
- We can manually enter our variables with the `terraform plan` or `terraform apply` command
- We can define them in the .tf file within a variable block
- We can use environment variables within our system using the `TF_VAR_NAME` format
- My preference is to use a terraform.tfvars file in our project folder
- There is a `*.auto.tfvars` file option
- Or we can define them when we run `terraform plan` or `terraform apply` with the `-var` or `-var-file` flags

Starting from the bottom and moving up is the order of precedence in which the variables are defined.

We have also mentioned that the state file will contain sensitive information. We can define our sensitive information as a variable, and we can mark it as being sensitive.
```
variable "some_resource" {
  description = "something sensitive"
  type        = string
  sensitive   = true
}
```
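To sketch the tfvars and environment-variable options (the variable name here is hypothetical), a terraform.tfvars file in the project folder is loaded automatically; the same value can also come from the environment or a CLI flag:

```
# terraform.tfvars -- loaded automatically by terraform plan / terraform apply
num_nodes = 3

# equivalent environment variable:
#   export TF_VAR_num_nodes=3
# equivalent CLI flag, which takes precedence over the file-based values:
#   terraform apply -var="num_nodes=3"
```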
## Resources

I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR and I will be happy to review and add them to the list.

- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
- [Terraform Simple Projects](https://terraform.joshuajebaraj.com/)
- [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks)
- [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform)
@ -91,7 +91,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich
|
||||
### Kubernetes
|
||||
|
||||
- [✔️] ☸ 49 > [The Big Picture: Kubernetes](Days/day49.md)
|
||||
- [✔️] ☸ 50 > [Choosing your Kubernetes platform ](Days/day50.md)
|
||||
- [✔️] ☸ 50 > [Choosing your Kubernetes platform](Days/day50.md)
|
||||
- [✔️] ☸ 51 > [Deploying your first Kubernetes Cluster](Days/day51.md)
|
||||
- [✔️] ☸ 52 > [Setting up a multinode Kubernetes Cluster](Days/day52.md)
|
||||
- [✔️] ☸ 53 > [Rancher Overview - Hands On](Days/day53.md)
|
||||
@ -101,7 +101,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich
|
||||
### Learn Infrastructure as Code
|
||||
|
||||
- [✔️] 🤖 56 > [The Big Picture: IaC](Days/day56.md)
|
||||
- [✔️] 🤖 57 > [An intro to Terraform ](Days/day57.md)
|
||||
- [✔️] 🤖 57 > [An intro to Terraform](Days/day57.md)
|
||||
- [✔️] 🤖 58 > [HashiCorp Configuration Language (HCL)](Days/day58.md)
|
||||
- [✔️] 🤖 59 > [Create a VM with Terraform & Variables](Days/day59.md)
|
||||
- [✔️] 🤖 60 > [Docker Containers, Provisioners & Modules](Days/day60.md)
|
||||
|
- [What is DevOps? - TechWorld with Nana](https://www.youtube.com/watch?v=0yWAtQ6wYNM)
- [What is DevOps? - GitHub YouTube](https://www.youtube.com/watch?v=kBV8gPVZNEE)
- [What is DevOps? - IBM YouTube](https://www.youtube.com/watch?v=UbtB4sMaaNM)
- [What is DevOps? - AWS](https://aws.amazon.com/devops/what-is-devops/)
- [What is DevOps? - Microsoft](https://docs.microsoft.com/en-us/devops/what-is-devops)

If you have made it this far, you will know whether this is where you want to be. See you on [Day 3](day03.md).
### Resources:

- [DevOps for Developers – Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU)
- [Techworld with Nana - DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s)
- [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk)

If you have made it this far, you will know whether this is where you want to be.
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|
@ -153,9 +153,9 @@ Cons
|
||||
## Resources
|
||||
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
|
||||
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|
@ -94,9 +94,9 @@ This wraps up the Infrastructure as code section and next we move on to that lit
|
||||
## Resources
|
||||
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
|
||||
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|
@ -62,7 +62,7 @@ My advice is to watch all of the below and hopefully you also picked something u
|
||||
- [What is DevOps? - TechWorld with Nana](https://www.youtube.com/watch?v=0yWAtQ6wYNM)
|
||||
- [What is DevOps? - GitHub YouTube](https://www.youtube.com/watch?v=kBV8gPVZNEE)
|
||||
- [What is DevOps? - IBM YouTube](https://www.youtube.com/watch?v=UbtB4sMaaNM)
|
||||
- [What is DevOps? - AWS ](https://aws.amazon.com/devops/what-is-devops/)
|
||||
- [What is DevOps? - AWS](https://aws.amazon.com/devops/what-is-devops/)
|
||||
- [What is DevOps? - Microsoft](https://docs.microsoft.com/en-us/devops/what-is-devops)
|
||||
|
||||
If you made it this far then you will know if this is where you want to be or not. See you on [Day 3](day03.md).
|
||||
|
@ -77,7 +77,7 @@ This last bit was a bit of a recap for me on Day 3 but think this actually makes
|
||||
### Resources:
|
||||
|
||||
- [DevOps for Developers – Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU)
|
||||
- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps? ](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s)
|
||||
- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s)
|
||||
- [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk)
|
||||
|
||||
If you made it this far then you will know if this is where you want to be or not.
|
||||
|
@ -112,9 +112,9 @@ Next up we will start looking into Terraform with a 101 before we get some hands
|
||||
## Resources
|
||||
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
|
||||
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|
@ -87,9 +87,9 @@ We are going to get into more around HCL and then also start using Terraform to
|
||||
## Resources
|
||||
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
|
||||
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|
@ -219,9 +219,9 @@ The pros for storing state in a remote location is that we get:
|
||||
## Resources
|
||||
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
|
||||
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|
@ -115,9 +115,9 @@ variable "some resource" {
|
||||
## Resources
|
||||
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
|
||||
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|
@ -179,9 +179,9 @@ We are breaking down our infrastructure into components, components are known he
|
||||
## Resources
|
||||
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
|
||||
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|
@ -153,9 +153,9 @@ Cons
|
||||
## Resources
|
||||
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
|
||||
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|
@ -94,9 +94,9 @@ This wraps up the Infrastructure as code section and next we move on to that lit
|
||||
## Resources
|
||||
I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
|
||||
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
|
||||
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
|
||||
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
|
||||
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
|
||||
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
|
||||
|