Merge branch 'MichaelCade:main' into main

This commit is contained in:
Chris Williams 2023-02-05 19:00:13 -05:00 committed by GitHub
commit 55f8bcd254
19 changed files with 1390 additions and 100 deletions

View File

@ -1,24 +0,0 @@
name: Add contributors
on:
  schedule:
    - cron: '0 12 * * *'
  # push:
  #   branches:
  #     - master
jobs:
  add-contributors:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: BobAnkh/add-contributors@master
        with:
          REPO_NAME: 'MichaelCade/90DaysOfDevOps'
          CONTRIBUTOR: '### Other Contributors'
          COLUMN_PER_ROW: '6'
          ACCESS_TOKEN: ${{secrets.GITHUB_TOKEN}}
          IMG_WIDTH: '100'
          FONT_SIZE: '14'
          PATH: '/Contributors.md'
          COMMIT_MESSAGE: 'docs(Contributors): update contributors'
          AVATAR_SHAPE: 'round'

View File

@ -84,7 +84,7 @@ If we create an additional file called `samplecode.ps1`, the status would become
![](Images/Day35_Git10.png)
Add our new file using the `git add samplecode.ps1` command and then we can run `git status` again and see our file is ready to be committed.
![](Images/Day35_Git11.png)

View File

@ -44,7 +44,7 @@ Now we can choose additional components that we would like to also install but a
![](Images/Day36_Git4.png)
We can then choose which SSH Executable we wish to use. I leave this as the bundled OpenSSH that you might have seen in the Linux section.
![](Images/Day36_Git5.png)

2023.md
View File

@ -16,14 +16,14 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
## List of Topics
| Topic | Author | Date | Twitter Handle |
| -------------------------------------- | ----------------------------------- | ------------------- | ----------------------------------------------------------------------------------------------- |
| DevSecOps | Michael Cade | 1st Jan - 6th Jan | [@MichaelCade1](https://twitter.com/MichaelCade1) |
| Secure Coding | Prateek Jain | 7th Jan - 13th Jan | [@PrateekJainDev](https://twitter.com/PrateekJainDev) |
| Continuous Build, Integration, Testing | Anton Sankov and Svetlomir Balevski | 14th Jan - 20th Jan | [@a_sankov](https://twitter.com/a_sankov) |
| Continuous Delivery & Deployment | Anton Sankov | 21st Jan - 27th Jan | [@a_sankov](https://twitter.com/a_sankov) |
| Runtime Defence & Monitoring | Ben Hirschberg | 28th Jan - 3rd Feb | [@slashben81](https://twitter.com/slashben81) |
| Secrets Management | Bryan Krausen | 4th Feb - 10th Feb | [@btkrausen](https://twitter.com/btkrausen) |
| Python | Rishab Kumar | 11th Feb - 17th Feb | [@rishabk7](https://twitter.com/rishabk7) |
| AWS | Chris Williams | 18th Feb - 24th Feb | [@mistwire](https://twitter.com/mistwire) |
| OpenShift | Dean Lewis | 25th Feb - 3rd Mar | [@saintdle](https://twitter.com/saintdle) |
@ -52,27 +52,27 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
- [✔️] ⌨️ 10 > [Software Composition Analysis Overview](2023/day10.md)
- [✔️] ⌨️ 11 > [SCA Implementation with OWASP Dependency Check](2023/day11.md)
- [✔️] ⌨️ 12 > [Secure Coding Practices](2023/day12.md)
- [✔️] ⌨️ 13 > [Additional Secure Coding Practices](2023/day13.md)
### Continuous Build, Integration, Testing
- [✔️] 🐧 14 > [Container Image Scanning](2023/day14.md)
- [✔️] 🐧 15 > [Container Image Scanning Advanced](2023/day15.md)
- [✔️] 🐧 16 > [Fuzzing](2023/day16.md)
- [✔️] 🐧 17 > [Fuzzing Advanced](2023/day17.md)
- [✔️] 🐧 18 > [DAST](2023/day18.md)
- [✔️] 🐧 19 > [IAST](2023/day19.md)
- [✔️] 🐧 20 > [Practical Lab on IAST and DAST](2023/day20.md)
### Continuous Delivery & Deployment
- [✔️] 🌐 21 > [Continuous Image Repository Scan](2023/day21.md)
- [✔️] 🌐 22 > [Continuous Image Repository Scan - Container Registries](2023/day22.md)
- [✔️] 🌐 23 > [Artifacts Scan](2023/day23.md)
- [✔️] 🌐 24 > [Signing](2023/day24.md)
- [✔️] 🌐 25 > [Systems Vulnerability Scanning](2023/day25.md)
- [✔️] 🌐 26 > [Containers Vulnerability Scanning](2023/day26.md)
- [✔️] 🌐 27 > [Network Vulnerability Scan](2023/day27.md)
### Runtime Defence & Monitoring

View File

@ -84,4 +84,6 @@ jobs:
- [Hadolint GitHub](https://github.com/hadolint/hadolint)
- [Hadolint Online](https://hadolint.github.io/hadolint/)
- [Top 20 Dockerfile best practices](https://sysdig.com/blog/dockerfile-best-practices/)
Next up we will be starting our **Continuous Build, Integration, Testing** with [Day 14](day14.md) covering Container Image Scanning from [Anton Sankov](https://twitter.com/a_sankov).

View File

@ -228,3 +228,6 @@ It is between 0 and 10.
<https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity>
<https://www.aquasec.com/cloud-native-academy/supply-chain-security/sbom/>
On [Day 16](day16.md) we will take a look into "Fuzzing" or Fuzz Testing.

View File

@ -1,7 +1,7 @@
# Fuzzing
Fuzzing, also known as "fuzz testing," is a software testing technique that involves providing invalid, unexpected, or random data as input to a computer program.
The goal of fuzzing is to identify security vulnerabilities and other bugs in the program by causing it to crash or exhibit unintended behaviour.
Fuzzing can be performed manually or by using a testing library/framework to craft the inputs for us.
@ -32,13 +32,13 @@ However, in more complex systems such fail points may not be obvious, and may be
This is where fuzzing comes in handy.
The Go Fuzzing library (part of the standard language library since Go 1.18) generates many inputs for a test case and then, based on the coverage and the results, determines which inputs are "interesting".
If we write a fuzz test for this function what will happen is:
1. The fuzzing library will start providing random strings starting from smaller strings and increasing their size.
2. Once the library provides a string of length 4 it will notice a change in the test-coverage (`if (len(s) == 4)` is now `true`) and will continue to generate inputs with this length.
3. Once the library provides a string of length 4 that starts with `f` it will notice another change in the test-coverage (`if s[0] == "f"` is now `true`) and will continue to generate inputs that start with `f`.
4. The same thing will repeat for `u` and the double `z`.
5. Once it provides `fuzz` as input the function will panic and the test will fail.
6. We have _fuzzed_ successfully!
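For reference, a minimal sketch of what this could look like in Go is below. The `DontPanic` function and its fuzz target are illustrative reconstructions based on the steps above, not the exact code from the post:
```go
package fuzzdemo

import "testing"

// DontPanic is an illustrative function that only panics on the exact input "fuzz".
func DontPanic(s string) {
	if len(s) == 4 {
		if s[0] == 'f' {
			if s[1] == 'u' {
				if s[2] == 'z' {
					if s[3] == 'z' {
						panic("error: wrong input")
					}
				}
			}
		}
	}
}

// FuzzDontPanic is the fuzz target; run it with `go test -fuzz=FuzzDontPanic`.
func FuzzDontPanic(f *testing.F) {
	f.Add("seed") // seed corpus entry to start from
	f.Fuzz(func(t *testing.T, s string) {
		DontPanic(s) // the test fails as soon as an input makes DontPanic panic
	})
}
```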
@ -56,7 +56,7 @@ Fuzzing is a useful technique, but there are situations in which it might not be
For example, if the input that fails our code is too specific and there are no clues to help, the fuzzing library might not be able to guess it.
If we change the example code from the previous paragraph to something like this:
```go
func DontPanic(s input) {

View File

@ -1,26 +1,242 @@
# Fuzzing Advanced
Yesterday we learned what fuzzing is and how to write fuzz tests (unit tests with fuzzy inputs).
However, fuzz testing goes beyond just unit testing.
We can use this methodology to test our web application by fuzzing the requests sent to our server.
Today, we will take a practical approach to fuzzy testing a web server.
Different tools can help us do this.
Such tools are [Burp Intruder](https://portswigger.net/burp/documentation/desktop/tools/intruder) and [SmartBear](https://smartbear.com/).
However, these are proprietary tools that require a paid license to use them.
That is why for our demonstration today we are going to use a simple open-source CLI written in Go that was inspired by Burp Intruder and provides similar functionality.
It is called [httpfuzz](https://github.com/JonCooperWorks/httpfuzz).
## Getting started
This tool is quite simple.
We provide it with a request template (in which we have defined placeholders for the fuzzy data) and a wordlist (the fuzzy data), and `httpfuzz` renders the requests and sends them to our server.
First, we need to define a template for our requests.
Create a file named `request.txt` with the following content:
```text
POST / HTTP/1.1
Content-Type: application/json
User-Agent: PostmanRuntime/7.26.3
Accept: */*
Cache-Control: no-cache
Host: localhost:8000
Accept-Encoding: gzip, deflate
Connection: close
Content-Length: 35
{
"name": "`S9`",
}
```
This is a valid HTTP `POST` request to the `/` route with JSON body.
The "\`" symbol in the body defines a placeholder that will be substituted with the data we provide.
`httpfuzz` can also fuzz the headers, path, and URL params.
Next, we need to provide a wordlist of inputs that will be placed in the request.
Create a file named `data.txt` with the following content:
```text
SOME_NAME
Mozilla/5.0 (Linux; Android 7.0; SM-G930VC Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36
```
In this file, we defined two inputs that will be substituted inside the body.
In a real-world scenario, you should put much more data here for proper fuzz testing.
Now that we have our template and our inputs, let's run the tool.
Unfortunately, this tool is not distributed as a binary, so we will have to build it from source.
Clone the repo and run:
```shell
go build -o httpfuzz cmd/httpfuzz.go
```
(this requires a recent version of Go installed on your machine).
Now that we have the binary let's run it:
```shell
./httpfuzz \
--wordlist data.txt \
--seed-request request.txt \
--target-header User-Agent \
--target-param fuzz \
--delay-ms 50 \
--skip-cert-verify \
--proxy-url http://localhost:8080
```
- `httpfuzz` is the binary we are invoking.
- `--wordlist data.txt` is the file with inputs we provided.
- `--seed-request request.txt` is the request template.
- `--target-header User-Agent` tells `httpfuzz` to use the provided inputs in the place of the `User-Agent` header.
- `--target-param fuzz` tells `httpfuzz` to use the provided inputs as values for the `fuzz` URL parameter.
- `--delay-ms 50` tells `httpfuzz` to wait 50 ms between the requests.
- `--skip-cert-verify` tells `httpfuzz` to not do any TLS verification.
- `--proxy-url http://localhost:8080` tells `httpfuzz` where our HTTP server is.
We have 2 inputs and 3 places to place them (in the body, the `User-Agent` header, and the `fuzz` parameter).
This means that `httpfuzz` will generate 6 requests and send them to our server.
Let's run it and see what happens.
I wrote a simple web server that logs all requests so that we can see what is coming into our server:
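A minimal sketch of such a logging server (assuming it listens on port 8000 and reads the `name` field from the JSON body in the template above; this is not the exact server used here) could look like this:
```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type payload struct {
	Name string `json:"name"`
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		var p payload
		// The fuzzer may send bodies that are not valid JSON, so decoding errors are ignored.
		_ = json.NewDecoder(r.Body).Decode(&p)
		fmt.Println("-----")
		fmt.Printf("Got request to http://%s%s\n", r.Host, r.URL.RequestURI())
		fmt.Printf("User-Agent header = [%s]\n", r.UserAgent())
		fmt.Printf("Name = %s\n", p.Name)
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```
With the server running, we invoke `httpfuzz`: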
```shell
$ ./httpfuzz \
--wordlist data.txt \
--seed-request request.txt \
--target-header User-Agent \
--target-param fuzz \
--delay-ms 50 \
--skip-cert-verify \
--proxy-url http://localhost:8080 \
httpfuzz: httpfuzz.go:164: Sending 6 requests
```
and the server logs:
```text
-----
Got request to http://localhost:8000/
User-Agent header = [SOME_NAME]
Name = S9
-----
Got request to http://localhost:8000/?fuzz=SOME_NAME
User-Agent header = [PostmanRuntime/7.26.3]
Name = S9
-----
Got request to http://localhost:8000/
User-Agent header = [PostmanRuntime/7.26.3]
Name = SOME_NAME
-----
Got request to http://localhost:8000/
User-Agent header = [Mozilla/5.0 (Linux; Android 7.0; SM-G930VC Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36]
Name = S9
-----
Got request to http://localhost:8000/?fuzz=Mozilla%2F5.0+%28Linux%3B+Android+7.0%3B+SM-G930VC+Build%2FNRD90M%3B+wv%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Version%2F4.083+Mobile+Safari%2F537.36
User-Agent header = [PostmanRuntime/7.26.3]
Name = S9
-----
Got request to http://localhost:8000/
User-Agent header = [PostmanRuntime/7.26.3]
Name = Mozilla/5.0 (Linux; Android 7.0; SM-G930VC Build/NRD90M; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/58.0.3029.83 Mobile Safari/537.36
```
We see that we have received 6 HTTP requests.
Two of them have a value from our values file for the `User-Agent` header, and 4 have the default header from the template.
Two of them have a value from our values file for the `fuzz` query parameter, and 4 do not have this parameter at all.
Two of them have a value from our values file for the `Name` body property, and 4 have the default value (`S9`) from the template.
A slight improvement of the tool could be to make different permutations of these requests (for example, a request that has both `?fuzz=` and `User-Agent` as values from the values file).
Notice how `httpfuzz` does not give us any information about the outcome of the requests.
To figure that out, we need to either set up some sort of monitoring for our server or write a `httpfuzz` plugin that will process the results in a way that is meaningful for us.
Let's do that.
To write a custom plugin, we need to implement the [`Listener`](https://github.com/JonCooperWorks/httpfuzz/blob/master/plugin.go#L13) interface:
```go
// Listener must be implemented by a plugin to users to hook the request - response transaction.
// The Listen method will be run in its own goroutine, so plugins cannot block the rest of the program, however panics can take down the entire process.
type Listener interface {
	Listen(results <-chan *Result)
}
```
```go
package main

import (
	"log"

	"github.com/joncooperworks/httpfuzz"
)

type logResponseCodePlugin struct {
	logger *log.Logger
}

func (b *logResponseCodePlugin) Listen(results <-chan *httpfuzz.Result) {
	for result := range results {
		b.logger.Printf("Got %d response from the server\n", result.Response.StatusCode)
	}
}

// New returns a logResponseCodePlugin plugin that simply logs the response code of the response.
func New(logger *log.Logger) (httpfuzz.Listener, error) {
	return &logResponseCodePlugin{logger: logger}, nil
}
```
Now we need to build our plugin first:
```shell
go build -buildmode=plugin -o log exampleplugins/log/log.go
```
and then we can plug it into `httpfuzz` via the `--post-request` flag:
```shell
$ ./httpfuzz \
--wordlist data.txt \
--seed-request request.txt \
--target-header User-Agent \
--target-param fuzz \
--delay-ms 50 \
--skip-cert-verify \
--proxy-url http://localhost:8080 \
--post-request log
httpfuzz: httpfuzz.go:164: Sending 6 requests
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
httpfuzz: log.go:15: Got 200 response from the server
```
Voila!
Now we can at least see what the response code from the server was.
Of course, we can write much more sophisticated plugins that output much more data, but for the purpose of this exercise, that is enough.
## Summary
Fuzzing is a really powerful testing technique that goes way beyond unit testing.
Fuzzing can be extremely useful for testing HTTP servers by substituting parts of valid HTTP requests with data that could potentially expose vulnerabilities or deficiencies in our server.
There are many tools that can help us in fuzzy testing our web applications, both free and paid ones.
## Resources
[OWASP: Fuzzing](https://owasp.org/www-community/Fuzzing)
[OWASP: Fuzz Vectors](https://owasp.org/www-project-web-security-testing-guide/v41/6-Appendix/C-Fuzz_Vectors)
[Hacking HTTP with HTTPfuzz](https://medium.com/swlh/hacking-http-with-httpfuzz-67cfd061b616)
[Fuzzing the Stack for Fun and Profit at DefCamp 2019](https://www.youtube.com/watch?v=qCMfrbpuCBk&list=PLnwq8gv9MEKiUOgrM7wble1YRsrqRzHKq&index=33)
[HTTP Fuzzing Scan with SmartBear](https://support.smartbear.com/readyapi/docs/security/scans/types/fuzzing-http.html)
[Fuzzing Session: Finding Bugs and Vulnerabilities Automatically](https://youtu.be/DSJePjhBN5E)
[Fuzzing the CNCF Landscape](https://youtu.be/zIyIZxAZLzo)

View File

@ -1,33 +1,26 @@
# DAST
DAST, or Dynamic Application Security Testing, is a technique that is used to evaluate the security of an application by simulating attacks from external sources.
The idea is to automate black-box penetration testing as much as possible.
It can be used for picking the low-hanging fruit, sparing a real human's time, and additionally for generating traffic to other security tools (e.g. IAST).
Nevertheless, it is an essential component of the SSDLC, as it helps organizations uncover potential vulnerabilities early in the development process, before the application is deployed to production. By conducting DAST testing, organizations can prevent security incidents and protect their data and assets from being compromised by attackers.
## Tools
There are various open-source tools available for conducting DAST, such as ZAP, Burp Suite, and Arachni. These tools can simulate different types of attacks on the application, such as SQL injection, cross-site scripting, and other common vulnerabilities. For example, if an application is vulnerable to SQL injection, a DAST tool can send a malicious SQL query to the application, such as ' OR 1=1 --, and evaluate its response to determine if it is vulnerable. If the application is vulnerable, it may return all records from the database, indicating that the SQL injection attack was successful.
As some of the tests can be quite invasive (for example, they may include `DROP TABLE` or something similar), or at least put a good amount of test data into the databases, or even DoS the app,
__DAST tools should never run against a production environment!!!__
All tools offer authentication into the application, and this could lead to a compromise of production credentials. Also, when running authenticated scans against the testing environment, use suitable roles (if an RBAC model exists for the application, of course); e.g. DAST should not use a role that can delete or modify other users, because this way the whole environment can become unusable.
As with other testing methodologies, it is necessary to analyze the scope so that unneeded targets are not scanned.
## Usage
A common error is scanning compensating security controls (e.g. a WAF) instead of the real application. DAST is at its core an application security testing tool and should be used against actual applications, not against security mitigations. As it uses fairly standardized attacks, external controls can block the attacking traffic and thereby hide potentially exploitable flaws (by definition, an adversary would eventually be able to bypass such measures).
Actual scans are quite slow, so sometimes they should be run outside of the DevOps pipeline. A good example is running them nightly or during the weekend. Some of the simpler tools (ZAP, Arachni, …) can be used in pipelines, but often, due to the nature of the scan, they can slow down the whole development process.
Once the DAST testing is complete, the results are analyzed to identify any vulnerabilities that were discovered. The organization can then take appropriate remediation steps to address the vulnerabilities and improve the overall security of the application. This may involve fixing the underlying code, implementing additional security controls, such as input validation and filtering, or both.
In conclusion, the use of DAST in the SSDLC is essential for ensuring the security of an application. By conducting DAST testing and identifying vulnerabilities early in the development process, organizations can prevent security incidents and protect their assets from potential threats. Open-source tools, such as ZAP, Burp Suite, and Arachni, can be used to conduct DAST testing and help organizations improve their overall security posture.
As with all other tools that are part of the DevSecOps pipeline, DAST should not be the only scanner in place, and as with all others, it is not a substitute for a penetration test and good development practices.
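As a simple illustration, a passive ZAP baseline scan can be run from Docker roughly like this (the target URL below is a placeholder and must point at a test environment, never production):
```shell
# Run ZAP's baseline (passive) scan against a test target and write an HTML report.
# The image tag and target URL are illustrative.
docker run --rm -v $(pwd):/zap/wrk -t owasp/zap2docker-stable \
  zap-baseline.py -t http://test-app.example.com:8080 -r zap-baseline-report.html
```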
## Some useful links and open-source tools:
- https://github.com/zaproxy/zaproxy
- https://www.arachni-scanner.com/
- https://owasp.org/www-project-devsecops-guideline/latest/02b-Dynamic-Application-Security-Testing

View File

@ -0,0 +1,33 @@
# IAST (Interactive Application Security Testing)
IAST is a type of security testing tool that is designed to identify vulnerabilities in web applications and help developers fix them. It works by injecting a small agent into the application's runtime environment and monitoring its behaviour in real-time. This allows IAST tools to identify vulnerabilities as they occur, rather than relying on static analysis or simulated attacks.
IAST works through software instrumentation, or the use of instruments to monitor an application as it runs and gather information about what it does and how it performs. IAST solutions instrument applications by deploying agents and sensors in running applications and continuously analyzing all application interactions initiated by manual tests, automated tests, or a combination of both to identify vulnerabilities in real time.
The IAST agent runs inside the application and monitors for known attack patterns. As it is part of the application, it can monitor traffic between different components (both in classic MVC deployments and in microservices deployments).
## For IAST to be used, there are a few prerequisites.
- Application should be instrumented (inject the agent).
- Traffic should be generated - via manual or automated tests. Another possible approach is via DAST tools (OWASP ZAP can be used for example).
## Advantages
One of the main advantages of IAST tools is that they can provide detailed and accurate information about vulnerabilities and how to fix them. This can save developers a lot of time and effort, as they don't have to manually search for vulnerabilities or try to reproduce them in a testing environment. IAST tools can also identify vulnerabilities that might be missed by other testing methods, such as those that require user interaction or are triggered under certain conditions. Testing time depends on the tests used (as IAST is not a standalone system), and with faster tests (automated tests) it can be included in CI/CD pipelines. It can be used to detect different kinds of vulnerabilities, and due to the nature of the tools (they look at "real" traffic only), false positive/negative findings are relatively rare compared to other testing types.
IAST can be used in two flavours: as a typical testing tool and as real-time protection (in this case it is called RASP, Runtime Application Self-Protection). Both work on the same principles and can be used together.
## There are several disadvantages of the technology as well:
- It is a relatively new technology, so there is not a lot of knowledge and experience, both for security teams and for tool builders (open-source or commercial).
- The solution cannot be used alone - something (or someone) should generate traffic patterns. It is important that all possible endpoints are queried during the tests.
- Findings are based on traffic. This is especially true if used for testing alone - if there is no traffic to a portion of the app / site, it will not be tested, so no findings will be generated.
- Due to the need to instrument the app, it can be fairly complex, especially compared to source scanning tools (SAST or SCA).
There are several different IAST tools available, each with its own features and capabilities.
## Some common features of IAST tools include:
- Real-time monitoring: IAST tools monitor the application's behaviour in real-time, allowing them to identify vulnerabilities as they occur.
- Vulnerability identification: IAST tools can identify a wide range of vulnerabilities, including injection attacks, cross-site scripting (XSS), and cross-site request forgery (CSRF).
- Remediation guidance: IAST tools often provide detailed information about how to fix identified vulnerabilities, including code snippets and recommendations for secure coding practices.
- Integration with other tools: IAST tools can often be integrated with other security testing tools, such as static code analysis or penetration testing tools, to provide a more comprehensive view of an application's security.
IAST tools can be a valuable addition to a developer's toolkit, as they can help identify and fix vulnerabilities in real-time, saving time and effort. If you are a developer and are interested in using an IAST tool, there are many options available, so it is important to research and compare different tools to find the one that best fits your needs.
## Tool example
There are almost no open-source tools on the market. An example is the commercial tool Contrast Community Edition (CE): a fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java and .NET only.
It can be found here: https://www.contrastsecurity.com/contrast-community-edition

View File

@ -0,0 +1,153 @@
# IAST and DAST in conjunction - lab time
After learning what IAST and DAST are it's time to get our hands dirty and perform an exercise in which we use these processes to find vulnerabilities in real applications.
**NOTE:** There are no open-source IAST implementations, so we will have to use a commercial solution.
Don't worry, there is a free-tier, so you will be able to follow the lab without paying anything.
This lab is based on this [repo](https://github.com/rstatsinger/contrast-java-webgoat-docker).
It contains a vulnerable Java application to be tested and exploited, Docker and Docker Compose for easy setup and [Contrast Community Edition](https://www.contrastsecurity.com/contrast-community-edition?utm_campaign=ContrastCommunityEdition&utm_source=GitHub&utm_medium=WebGoatLab) for IAST solution.
## Prerequisites
- [Docker](https://www.docker.com/products/docker-desktop/)
- [Docker Compose](https://docs.docker.com/compose/)
- Contrast CE account. Sign up for free [here](https://www.contrastsecurity.com/contrast-community-edition?utm_campaign=ContrastCommunityEdition&utm_source=GitHub&utm_medium=WebGoatLab).
**NOTE:** The authors of this article and of the 90 Days of DevOps program are in no way associated or affiliated with Contrast Security.
We are using this commercial solution, because there is not an open-source one, and because this one has a free-tier that does not require paying or providing a credit card.
1. As there are no open-source IAST implementations, we will use a commercial one with some free licenses. For this purpose, you will need the
IAST solution from here - <https://github.com/rstatsinger/contrast-java-webgoat-docker>. You need Docker and Docker Compose installed in a Mac or Linux environment (this lab is tested on Mint). Please follow the README to create an account in Contrast.
## Getting started
To start, clone the [repository](https://github.com/rstatsinger/contrast-java-webgoat-docker).
Get your credentials from Contrast Security.
Click on your name in the top-right corner -> `Organization Settings` -> `Agent`.
Get the values for `Agent Username`, `Agent Service Key` and `API Key`.
Replace these values in the `.env.template` file in the newly cloned repository.
**NOTE:** These values are secret.
Do not commit them to Git.
It's best to put the `.env.template` under `.gitignore` so that you don't commit these values by mistake.
## Running the vulnerable application
To run the vulnerable application, run:
```sh
./run.sh
```
or
```sh
docker compose up
```
Once ready, the application UI will be accessible on <http://localhost:8080/WebGoat>.
## Do some damage
Now that we have a vulnerable application let's try to exploit it.
1. Install ZAP Proxy from [here](https://www.zaproxy.org/download/)
An easy way to do that is via a DAST scanner.
One such scanner is [ZAP Proxy](https://www.zaproxy.org/).
It is a free and open-source web app scanner.
2. Install `zap-cli` from [here](https://github.com/Grunny/zap-cli)
Next, install `zap-cli`.
`zap-cli` is an open-source CLI for ZAP Proxy.
3. Run ZAP proxy
Run ZAP Proxy from its installed location.
In Linux Mint it is by default in `/opt/zaproxy`.
In MacOS it is in `Applications`.
4. Set env variables for `ZAP_API_KEY` and `ZAP_PORT`
Get these values from ZAP Proxy.
Go to `Options...` -> `API` to get the API Key.
Go to `Options...` -> `Network` -> `Local Servers/Proxies` to configure and obtain the port.
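For example (the values below are placeholders; use the API key and port from your own ZAP instance):
```sh
export ZAP_API_KEY=<your-zap-api-key>
export ZAP_PORT=8080
```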
5. Run several commands with `zap-cli`
For example:
```sh
zap-cli quick-scan -s all --ajax-spider -r http://127.0.0.1:8080/WebGoat/login.mvc
```
Alternatively, you can follow the instructions in the [repo](https://github.com/rstatsinger/contrast-java-webgoat-docker/blob/master/Lab-WebGoat.pdf)
to cause some damage to the vulnerable application.
6. Observe findings in Contrast
Either way, if you go to the **Vulnerabilities** tab for your application in Contrast you should be able to see that Contrast detected the vulnerabilities
and is warning you to take some action.
## Bonus: Image Scanning
We saw how an IAST solution helped us detect attacks by observing the behaviour of the application.
Let's see whether we could have done something to prevent these attacks in the first place.
The vulnerable application we used for this demo was packaged as a container.
Let's scan this container via the `grype` scanner we learned about in Days [14](day14.md) and [15](day15.md) and see the results.
```sh
$ grype contrast-java-webgoat-docker-webgoat
✔ Vulnerability DB [no update available]
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [316 packages]
✔ Scanned image [374 vulnerabilities]
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
apt 1.8.2.3 deb CVE-2011-3374 Negligible
axis 1.4 java-archive GHSA-55w9-c3g2-4rrh Medium
axis 1.4 java-archive GHSA-96jq-75wh-2658 Medium
bash 5.0-4 deb CVE-2019-18276 Negligible
bash 5.0-4 (won't fix) deb CVE-2022-3715 High
bsdutils 1:2.33.1-0.1 deb CVE-2022-0563 Negligible
bsdutils 1:2.33.1-0.1 (won't fix) deb CVE-2021-37600 Low
commons-beanutils 1.8.3 java-archive CVE-2014-0114 High
commons-beanutils 1.8.3 java-archive CVE-2019-10086 High
commons-beanutils 1.8.3 1.9.2 java-archive GHSA-p66x-2cv9-qq3v High
commons-beanutils 1.8.3 1.9.4 java-archive GHSA-6phf-73q6-gh87 High
commons-collections 3.2.1 java-archive CVE-2015-6420 High
commons-collections 3.2.1 3.2.2 java-archive GHSA-6hgm-866r-3cjv High
commons-collections 3.2.1 3.2.2 java-archive GHSA-fjq5-5j5f-mvxh Critical
commons-fileupload 1.3.1 java-archive CVE-2016-1000031 Critical
commons-fileupload 1.3.1 java-archive CVE-2016-3092 High
commons-fileupload 1.3.1 1.3.2 java-archive GHSA-fvm3-cfvj-gxqq High
commons-fileupload 1.3.1 1.3.3 java-archive GHSA-7x9j-7223-rg5m Critical
commons-io 2.4 java-archive CVE-2021-29425 Medium
commons-io 2.4 2.7 java-archive GHSA-gwrp-pvrq-jmwv Medium
coreutils 8.30-3 deb CVE-2017-18018 Negligible
coreutils 8.30-3 (won't fix) deb CVE-2016-2781 Low
curl 7.64.0-4+deb10u3 deb CVE-2021-22922 Negligible
curl 7.64.0-4+deb10u3 deb CVE-2021-22923 Negligible
<truncated>
```
As we can see, this image is full of vulnerabilities.
If we dive into each one we will see we have vulnerabilities like RCE (Remote Code Execution), SQL Injection, XML External Entity Vulnerability, etc.
## Week Summary
IAST and DAST are important methods that can help us find vulnerabilities in our application via monitoring its behaviour.
This is done once the application is already deployed.
Container Image Scanning can help us find vulnerabilities in our application based on the libraries that are present inside the container.
Image Scanning and IAST/DAST are not mutually-exclusive.
They both have their place in a Secure SDLC and can help us find different problems before the attackers do.

View File

@ -0,0 +1,230 @@
# Continuous Image Repository Scan
In [Day 14](day14.md), we learned what container image scanning is and why it's important.
We also learned about tools like Grype and Trivy that help us scan our container images.
However, in modern SDLCs, a DevSecOps engineer would rarely scan container images by hand, e.g., they would not be running Grype and Trivy locally and looking at every single vulnerability.
Instead, they would have the image scanning configured as part of the CI/CD pipeline.
This way, they would be sure that all the images that are being built by the pipelines are also scanned by the image scanner.
These results could then be sent to another system, where the DevSecOps engineers could look at them and take some action depending on the result.
A sample CI/CD pipeline could look like this:
0. _Developer pushes code_
1. Lint the code
2. Build the code
3. Test the code
4. Build the artifacts (container images, helm charts, etc.)
5. Scan the artifacts
6. (Optional) Send the scan results somewhere
7. (Optional) Verify the scan results and fail the pipeline if the verification fails
8. Push the artifacts to a repository
A failure in the scan or verify steps (steps 5 and 7) would mean that our container will not be pushed to our repository, and we cannot use the code we submitted.
Today, we are going to take a look at how we can set up such a pipeline and what would be a sensible configuration for one.
## Setting up a CI/CD pipeline with Grype
Let's take a look at the [Grype](https://github.com/anchore/grype) scanner.
Grype is an open-source scanner maintained by the company [Anchore](https://anchore.com/).
### Scanning an image with Grype
Scanning a container image with Grype is as simple as running:
```shell
grype <IMAGE>
```
For example, if we want to scan the `ubuntu:20.04` image, we can run:
```shell
$ grype ubuntu:20.04
✔ Vulnerability DB [no update available]
✔ Pulled image
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [92 packages]
✔ Scanned image [19 vulnerabilities]
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
coreutils 8.30-3ubuntu2 deb CVE-2016-2781 Low
gpgv 2.2.19-3ubuntu2.2 deb CVE-2022-3219 Low
libc-bin 2.31-0ubuntu9.9 deb CVE-2016-20013 Negligible
libc6 2.31-0ubuntu9.9 deb CVE-2016-20013 Negligible
libncurses6 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
libncurses6 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
libncursesw6 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
libncursesw6 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
libpcre3 2:8.39-12ubuntu0.1 deb CVE-2017-11164 Negligible
libsystemd0 245.4-4ubuntu3.19 deb CVE-2022-3821 Medium
libtinfo6 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
libtinfo6 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
libudev1 245.4-4ubuntu3.19 deb CVE-2022-3821 Medium
login 1:4.8.1-1ubuntu5.20.04.4 deb CVE-2013-4235 Low
ncurses-base 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
ncurses-base 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
ncurses-bin 6.2-0ubuntu2 deb CVE-2021-39537 Negligible
ncurses-bin 6.2-0ubuntu2 deb CVE-2022-29458 Negligible
passwd 1:4.8.1-1ubuntu5.20.04.4 deb CVE-2013-4235 Low
```
Of course, you already know that because we did it on [Day 14](day14.md).
However, this command will only output the vulnerabilities and exit with a success code.
So if this were in a CI/CD pipeline, the pipeline would be successful even if we have many vulnerabilities.
The person running the pipeline would have to open it, see the logs and manually determine whether the results are OK.
This is tedious and error prone.
Let's see how we can enforce some rules for the results that come out of the scan.
### Enforcing rules for the scanned images
As we already established, just scanning the image does not do much except for giving us visibility into the number of vulnerabilities we have inside the image.
But what if we want to enforce a set of rules for our container images?
For example, a good rule would be "an image should not have critical vulnerabilities" or "an image should not have vulnerabilities with available fixes."
Fortunately for us, this is also something that Grype supports out of the box.
We can use the `--fail-on <SEVERITY>` flag to tell Grype to exit with a non-zero exit code if, during the scan, it found vulnerabilities with a severity higher or equal to the one we specified.
This will fail our pipeline, and the engineer would have to look at the results and fix something in order to make it pass.
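As a rough sketch of how this could be wired into a pipeline (a hypothetical GitHub Actions job; the image name and Grype install step are illustrative, not taken from this lab), it might look like this:
```yaml
jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build the container image
        run: docker build -t my-app:${{ github.sha }} .
      - name: Install Grype
        run: curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin
      - name: Scan the image and fail on critical vulnerabilities
        run: grype my-app:${{ github.sha }} --fail-on critical
```
If the last step exits with a non-zero code, the job fails and the image is never pushed.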
Let's try it out.
We are going to use the `springio/petclinic:latest` image, which we already found has many vulnerabilities.
You can go back to [Day 14](day14.md) or scan it yourself to see how much exactly.
We want to fail the pipeline if the image has `CRITICAL` vulnerabilities.
We are going to run the scan like this:
```shell
$ grype springio/petclinic:latest --fail-on critical
✔ Vulnerability DB [no update available]
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [212 packages]
✔ Scanned image [168 vulnerabilities]
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
spring-core 5.3.6 java-archive CVE-2016-1000027 Critical
spring-core 5.3.6 java-archive CVE-2022-22965 Critical
...
1 error occurred:
* discovered vulnerabilities at or above the severity threshold
$ echo $?
1
```
We see two things here:
- apart from the results, Grype also outputted an error that is telling us that this scan violated the rule we had defined (no CRITICAL vulnerabilities)
- Grype exited with exit code 1, which indicates failure.
If this were a CI pipeline, it would have failed.
When this happens, we will be blocked from merging our code and pushing our container to the registry.
This means that we need to take some action to fix the failure so that we can finish our task and push our change.
Let's see what our options are.
### Fixing the pipeline
Once we encounter a vulnerability that is preventing us from publishing our container, we have a few ways we can go depending on the vulnerability.
#### 1. The vulnerability has a fix
The best-case scenario is when this vulnerability is already fixed in a newer version of the library we depend on.
One such vulnerability is this one:
```text
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
snakeyaml 1.27 1.31 java-archive GHSA-3mc7-4q67-w48m High
```
This is a `High` severity vulnerability.
It's coming from the Java package `snakeyaml`, version `1.27`.
Grype is telling us that this vulnerability is fixed in version `1.31` of the same library.
In this case, we can just upgrade the version of this library in our `pom.xml` or `build.gradle` file,
test our code to make sure nothing breaks with the new version,
and submit the code again.
This will build a new version of our container, re-scan it, and hopefully, this time, the vulnerability will not come up, and our scan will be successful.
#### 2. The vulnerability does not have a fix, but it's not dangerous
Sometimes a vulnerability we encounter will not have a fix available.
These are so-called zero-day vulnerabilities that are disclosed before a fix is available.
We can see two of those in the initial scan results:
```text
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
spring-core 5.3.6 java-archive CVE-2016-1000027 Critical
spring-core 5.3.6 java-archive CVE-2022-22965 Critical
```
When we encounter such a vulnerability, we need to evaluate how severe it is and calculate the risk of releasing our software with that vulnerability in it.
We can determine that the vulnerability does not constitute any danger to our software and its consumers.
One such case might be when a vulnerability requires physical access to the servers to be exploited.
If we are sure that our physical servers are secure enough and an attacker cannot get access to them, we can safely ignore this vulnerability.
In this case, we can tell Grype to ignore this vulnerability and not fail the scan because of it.
We can do this via the `grype.yaml` configuration file, where we can list vulnerabilities we want to ignore:
```yaml
ignore:
  # This is the full set of supported rule fields:
  - vulnerability: CVE-2016-1000027
    fix-state: unknown
    package:
      name: spring-core
      version: 5.3.6
      type: java-archive
  # We can list as many of these as we want
  - vulnerability: CVE-2022-22965
  # Or list whole packages which we want to ignore
  - package:
      type: gem
```
Putting this in our configuration file and re-running the scan will make our pipeline green.
However, it is crucial that we keep track of this file and not ignore vulnerabilities that have a fix.
For example, when a fix for this vulnerability is released, it's best we upgrade our dependency and remove this vulnerability from our application.
That way, we will ensure that our application is as secure as possible and there are no vulnerabilities that can turn out to be more severe than we initially thought.
#### 3. Vulnerability does not have a fix, and IT IS dangerous
The worst-case scenario is if we encounter a vulnerability that does not have a fix, and it is indeed dangerous, and there is a possibility to be exploited.
In that case, there is no right move.
The best thing we can do is sit down with our security team and come up with an action plan.
We might decide it's best to do nothing while the vulnerability is fixed.
We might decide to manually patch some stuff so that we remove at least some part of the danger.
It really depends on the situation.
Sometimes, a zero-day vulnerability is already in your application that is deployed.
In that case, freezing deploys won't help because your app is already vulnerable.
That was the case with the Log4Shell vulnerability that was discovered in late 2021 but has been present in Log4j since 2013.
Luckily, there was a fix available within hours, but next time we might not be this lucky.
## Summary
As we already learned in [Day 14](day14.md), scanning your container images for vulnerabilities is important as it can give you valuable insights about
the security posture of your images.
Today we learned that it's even better to have it as part of your CI/CD pipeline and to enforce some basic rules about what vulnerabilities you have inside your images.
Finally, we discussed the steps we can take when we find a vulnerability.
Tomorrow we are going to take a look at container registries that enable this scanning out of the box and also at scanning other types of artifacts.
See you on [Day 22](day22.md).

View File

@ -0,0 +1,77 @@
# Continuous Image Repository Scan - Container Registries
Yesterday we learned how to integrate container image vulnerability scanning into our CI/CD pipelines.
Today, we are going to take a look at how to enforce that our images are scanned on another level - the container registry.
There are container registries that will automatically scan your container images once you push them.
This ensures that we will have visibility into the number of vulnerabilities for every container image produced by our team.
Let's take a look at a few different registries that provide this capability and how we can use it.
## Docker Hub
[Docker Hub](https://hub.docker.com/) is the first container registry.
It was built by the team that created Docker and is still very popular today.
Docker Hub has an automatic vulnerability scanner, powered by [Snyk](https://snyk.io/).
This means that, if enabled, when you push an image to Docker Hub it will be automatically scanned and the results will be visible to you in the UI.
You can learn more about how to enable and use this feature from the Docker Hub [docs](https://docs.docker.com/docker-hub/vulnerability-scanning/).
**NOTE:** This feature is not free.
In order to use it you need to have a subscription.
## Harbor
[Harbor](https://goharbor.io/) is an open-source container registry.
Originally developed in VMware, it is now part of the CNCF.
It supports image scanning via [Trivy](https://github.com/aquasecurity/trivy) and/or [Clair](https://github.com/quay/clair).
This is configured during installation.
(Even if you don't enable image scanning during installation, it can always be configured afterwards).
For more info, check out the [docs](https://goharbor.io/docs/2.0.0/administration/vulnerability-scanning/).
## AWS ECR
[AWS ECR](https://aws.amazon.com/ecr/) also supports [image scanning via Clair](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning-basic.html).
## Azure Container Registry
[Azure Container Registry](https://azure.microsoft.com/en-us/products/container-registry) supports [image scanning via Qualys](https://azure.microsoft.com/en-us/updates/vulnerability-scanning-for-images-in-azure-container-registry-is-now-generally-available/).
## GCP
[GCP Container Registry](https://cloud.google.com/container-registry) also supports [automatic image scanning](https://cloud.google.com/container-analysis/docs/automated-scanning-howto).
## Policy Enforcement
Just scanning the images and having the results visible in your registry is a nice thing to have,
but it would be even better if we had a way to enforce some standards for these images.
In [Day 14](day14.md) we saw how to make `grype` fail a scan if an image has vulnerabilities above a certain severity.
Something like this can also be enforced on the container registry level.
For example, [Harbor](https://goharbor.io/) has the **Prevent vulnerable images from running** option, which, when enabled, does not allow you to pull an image that has vulnerabilities above a certain severity.
If you cannot pull the image, you cannot run it, so this is a good rule to have if you don't want to be running vulnerable images.
Of course, a rule like that can effectively prevent you from deploying something to your environment, so you need to use it carefully.
You can read more about this option and how to enable it in Harbor [here](https://goharbor.io/docs/2.3.0/working-with-projects/project-configuration/).
For more granular control and for unblocking deployments you can configure a [per-project CVE allowlist](https://goharbor.io/docs/2.3.0/working-with-projects/project-configuration/configure-project-allowlist/).
This will allow certain images to run even though they have vulnerabilities.
However, these vulnerabilities would be manually curated and allow-listed by the repo admin.
## Summary
Scanning your container images and having visibility into the number of vulnerabilities inside them is critical for a secure SDLC.
One place to do that is your CI pipeline (as seen in [Day 21](day21.md)).
Another place is your container registry (as seen today).
Both are good options, both have their pros and cons.
It is up to the DevSecOps architect to decide which approach works better for them and their threat model.

View File

@ -0,0 +1,161 @@
# Artifacts Scan
In the previous two days we learned why and how to scan container images.
However, usually our infrastructure consists of more than just container images.
Yes, our services will run as containers, but around them we can also have other artifacts like:
- Kubernetes manifests
- Helm templates
- Terraform code
For maximum security, you would be scanning all the artifacts that you use for your environment, not only your container images.
The reason for that is that even if you have the most secure Docker images with no CVEs,
if you run them on insecure infrastructure with a bad Kubernetes configuration,
your environment will still not be secure.
**Each system is as secure as its weakest link.**
Today we are going to take a look at different tools for scanning artifacts other than container images.
## Kubernetes manifests
Scanning Kubernetes manifests can expose misconfigurations and security bad practices like:
- running containers as root
- running containers with no resource limits
- giving too many and too powerful capabilities to the containers
- hardcoding secrets in the templates, etc.
All of these are part of the security posture of our Kubernetes workloads, and having a bad security posture is just as bad as having a bad posture in real life.
One popular open-source tool for scanning Kubernetes manifests is [KubeSec](https://kubesec.io/).
It outputs a list of misconfigurations.
For example, this Kubernetes manifest taken from their docs has a lot of misconfigurations like missing memory limits, running as root, etc.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: kubesec-demo
spec:
containers:
- name: kubesec-demo
image: gcr.io/google-samples/node-hello:1.0
securityContext:
runAsNonRoot: false
```
Let's scan it and look at the results.
```shell
$ kubesec scan kubesec-test.yaml
[
{
"object": "Pod/kubesec-demo.default",
"valid": true,
"message": "Passed with a score of 0 points",
"score": 0,
"scoring": {
"advise": [
{
"selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"",
"reason": "Seccomp profiles set minimum privilege and secure against unknown threats"
},
{
"selector": ".spec .serviceAccountName",
"reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege"
},
{
"selector": "containers[] .securityContext .runAsNonRoot == true",
"reason": "Force the running image to run as a non-root user to ensure least privilege"
},
{
"selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"",
"reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY"
},
{
"selector": "containers[] .resources .requests .memory",
"reason": "Enforcing memory requests aids a fair balancing of resources across the cluster"
},
{
"selector": "containers[] .securityContext .runAsUser -gt 10000",
"reason": "Run as a high-UID user to avoid conflicts with the host's user table"
},
{
"selector": "containers[] .resources .limits .cpu",
"reason": "Enforcing CPU limits prevents DOS via resource exhaustion"
},
{
"selector": "containers[] .resources .requests .cpu",
"reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster"
},
{
"selector": "containers[] .securityContext .readOnlyRootFilesystem == true",
"reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost"
},
{
"selector": "containers[] .securityContext .capabilities .drop",
"reason": "Reducing kernel capabilities available to a container limits its attack surface"
},
{
"selector": "containers[] .resources .limits .memory",
"reason": "Enforcing memory limits prevents DOS via resource exhaustion"
},
{
"selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")",
"reason": "Drop all capabilities and add only those required to reduce syscall attack surface"
}
]
}
}
]
```
As we can see, it produced 12 warnings about things in this manifest that we would want to change.
Each warning has an explanation telling us WHY we need to fix it.
### Others
Other such tools include [kube-bench](https://github.com/aquasecurity/kube-bench), [kubeaudit](https://github.com/Shopify/kubeaudit) and [kube-score](https://github.com/zegl/kube-score).
They work in the same or similar manner.
You give them a resource to analyze and they output a list of things to fix.
They can be used in a CI setup.
Some of them can also be used as [Kubernetes validating webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), and can block resources from being created if they violate a policy.
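For example, a minimal run of one of them, kube-score, against a single manifest file (assuming the tool is installed locally and `my-deployment.yaml` is a manifest you want to check) would look roughly like this:

```console
# analyse the manifest and print a list of findings with suggested fixes
$ kube-score score my-deployment.yaml
```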
## Helm templates
[Helm](https://helm.sh/) templates are basically templated Kubernetes resources that can be reused and configured with different values.
There are some tools like [Snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-kubernetes-configuration-files/scan-and-fix-security-issues-in-helm-charts) that have *some* support for scanning Helm templates for misconfigurations the same way we are scanning Kubernetes resources.
However, the best way to approach this problem is to just scan the final templated version of your Helm charts.
E.g. use the `helm template` command to substitute the templated values with actual ones and then scan the result with the tools listed above.
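A minimal sketch of that workflow (assuming a chart in `./my-chart` and `kubesec` installed locally):

```console
# render the chart into plain Kubernetes manifests
$ helm template my-release ./my-chart > rendered.yaml

# scan the rendered manifests, just like any other Kubernetes YAML
$ kubesec scan rendered.yaml
```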
## Terraform
The most popular tool for scanning misconfigurations in Terraform code is [tfsec](https://github.com/aquasecurity/tfsec).
It uses static analysis to spot potential issues in your code.
It supports multiple cloud providers and points out issues specific to the one you are using.
For example, it has checks for [using the default VPC in AWS](https://aquasecurity.github.io/tfsec/v1.28.1/checks/aws/ec2/no-default-vpc/),
[hardcoding secrets in the EC2 user data](https://aquasecurity.github.io/tfsec/v1.28.1/checks/aws/ec2/no-secrets-in-launch-template-user-data/),
or [allowing public access to your ECR container images](https://aquasecurity.github.io/tfsec/v1.28.1/checks/aws/ecr/no-public-access/).
It allows you to enable/disable checks and to ignore warnings via inline comments.
It also allows you to define your own policies via [Rego](https://www.openpolicyagent.org/docs/latest/policy-language/).
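A minimal sketch of running it (assuming a recent tfsec version; the check ID is just an example taken from the checks linked above):

```console
# scan all Terraform code in the current directory
$ tfsec .

# exclude a specific check by its ID
$ tfsec . --exclude aws-ec2-no-default-vpc
```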
## Summary
A Secure SDLC would include scanning of all artifacts that end up in our production environment, not just the container images.
Today we learned how to scan non-container artifacts like Kubernetes manifests, Helm charts and Terraform code.
The tools we looked at are free and open-source and can be integrated into any workflow or CI pipeline.

View File

@ -0,0 +1,147 @@
# Signing
The process of signing involves... well, signing an artifact with a key, and later verifying that this artifact has not been tampered with.
An "artifact" in this scenario can be anything:
- [code](https://venafi.com/machine-identity-basics/what-is-code-signing/#item-1)
- [git commit](https://docs.github.com/en/authentication/managing-commit-signature-verification/signing-commits)
- [container images](https://docs.sigstore.dev/cosign/overview/)
Signing and verifying the signature ensures that the artifact (container) we pulled from the registry is the same one that we pushed.
This secures us against supply chain and man-in-the-middle attacks where we download something different than what we wanted.
The CI workflow would look like this:
0. Developer pushes code to Git
1. CI builds the code into a container
2. **CI signs the container with our private key**
3. CI pushes the signed container to our registry
And then when we want to deploy this image:
1. Pull the image
2. **Verify the signature with our public key**
1. If signature does not match, fail the deploy - image is probably compromised
3. If signature does match, proceed with the deploy
This workflow is based on public-private key cryptography.
When you sign something with your private key, everyone that has access to your public key can verify that this was signed by you.
And since the public key is... well, public, that means everyone.
## The danger of NOT signing your images
If you are not signing your container images, there is the danger that someone will replace an image in your repository with another image that is malicious.
For example, you can push the `my-repo/my-image:1.0.0` image to your repository, but image tags, even versioned ones (like `1.0.0`), are mutable.
So an attacker that has access to your repo can push another image, tag it the same way, and this way overwrite your image.
Then, when you go and deploy this image, the image that gets deployed is the one the attacker forged.
This will probably be a malicious one.
For example, one that has malware, steals data, or uses your infrastructure for mining cryptocurrencies.
This problem can be solved by signing your images, because when you sign an image, you can later verify that what you pull is what you uploaded in the first place.
So let's take a look at how we can do this via a tool called [cosign](https://docs.sigstore.dev/cosign/overview/).
## Signing container images
First, download the tool, following the instructions for your OS [here](https://docs.sigstore.dev/cosign/installation/).
Generate a key-pair if you don't have one:
```console
cosign generate-key-pair
```
This will output two files in the current folder:
- `cosign.key` - your private key.
DO NOT SHARE WITH ANYONE.
- `cosign.pub` - your public key.
Share with whoever needs it.
We can use the private key to sign an image:
```console
$ cosign sign --key cosign.key asankov/signed
Enter password for private key:
Pushing signature to: index.docker.io/asankov/signed
```
This command signed the `asankov/signed` container image and pushed the signature to the container repo.
## Verifying signatures
Now that we have signed the image, let's verify the signature.
For that, we need our public key:
```console
$ cosign verify --key=cosign.pub asankov/signed | jq
Verification for index.docker.io/asankov/signed:latest --
The following checks were performed on each of these signatures:
- The cosign claims were validated
- The signatures were verified against the specified public key
[
{
"critical": {
"identity": {
"docker-reference": "index.docker.io/asankov/signed"
},
"image": {
"docker-manifest-digest": "sha256:93d62c92b70efc512379cf89317eaf41b8ce6cba84a5e69507a95a7f15708506"
},
"type": "cosign container image signature"
},
"optional": null
}
]
```
The output of this command showed us that the image is signed by the key we expected.
Since we are the only ones that have access to our private key, this means that no one except us could have pushed this image and signature to the container repo.
Hence, the contents of this image have not been tampered with since we pushed it.
Let's try to verify an image that we have NOT signed.
```console
$ cosign verify --key=cosign.pub asankov/not-signed
Error: no matching signatures:
main.go:62: error during command execution: no matching signatures:
```
Just as expected, `cosign` could not verify the signature of this image (because there was not one).
In this example, this image (`asankov/not-signed`) is not signed at all, but we would have gotten the same error if someone had signed this image with a different key than the one we are using to verify it.
### Verifying signatures in Kubernetes
In the previous example, we were verifying the signatures by hand.
However, that is good only for demo purposes or for playing around with the tool.
In a real-world scenario, you would want this verification to be done automatically at the time of deploy.
Fortunately, there are many `cosign` integrations for doing that.
For example, if we are using Kubernetes, we can deploy a validating webhook that will audit all new deployments and verify that the container images used by them are signed.
For Kubernetes you can choose from 3 existing integrations - [Gatekeeper](https://github.com/sigstore/cosign-gatekeeper-provider), [Kyverno](https://kyverno.io/docs/writing-policies/verify-images/) or [Connaisseur](https://github.com/sse-secure-systems/connaisseur#what-is-connaisseur).
You can choose one of the three depending on your preference, or if you are already using them for something else.
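For illustration, a rough sketch of what such a policy could look like with Kyverno (the exact field names can differ between Kyverno versions, and the public key is the `cosign.pub` we generated earlier):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce # block Pods that fail verification
  rules:
    - name: verify-cosign-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        - imageReferences:
            - "docker.io/asankov/*" # only verify images from this repo
          attestors:
            - entries:
                - keys:
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      <contents of cosign.pub>
                      -----END PUBLIC KEY-----
```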
## Dangers to be aware of
As with everything else, signing images is not a silver bullet and will not solve all your security problems.
There is still the problem that your private keys might leak, in which case anyone can sign anything and it will still pass your signature check.
However, integrating signing into your workflow adds yet another layer of defence and one more hoop for attackers to jump through.
## Summary
Signing artifacts prevents supply-chain and man-in-the-middle attacks, by allowing you to verify the integrity of your artifacts.
[Sigstore](https://sigstore.dev/) and [cosign](https://docs.sigstore.dev/cosign/overview/) are useful tools to sign your artifacts and they come with many integrations to choose from.

View File

@ -0,0 +1,84 @@
# Systems Vulnerability Scanning
## What is systems vulnerability scanning?
Vulnerability scanning is the process of scanning a network or system to identify any existing security vulnerabilities.
It is a proactive measure used to detect any weaknesses that an attacker may exploit to gain unauthorised access to a system or network.
Vulnerability scanning can be either manual or automated.
It can involve scanning for known vulnerabilities, analysing the configuration of a system or network, or using an automated tool to detect any possible vulnerabilities.
## How do you perform a vulnerability scan?
A vulnerability scan is typically performed with specialised software that searches for known weaknesses and security issues in the system.
The scan typically looks for missing patches, known malware, open ports, weak passwords, and other security risks.
Once the scan is complete, the results are analysed to determine which areas of the system need to be addressed to improve its overall security.
## What are the types of vulnerability scans?
There are two main types of vulnerability scan: unauthenticated and authenticated.
Unauthenticated scans are conducted without any credentials and, as such, can only provide limited information about potential vulnerabilities.
This type of scan helps identify low-hanging fruit, such as unpatched systems or open ports.
Authenticated scans, on the other hand, are conducted with administrative credentials.
This allows the scanning tool to provide much more comprehensive information about potential vulnerabilities, including those that may not be easily exploitable.
In the next two days we are going to take a look at container and network vulnerability scanning, which are more specific subsets of systems vulnerability scanning.
## Why are vulnerability scans important?
Vulnerabilities are widespread across organisations of all sizes.
New ones are discovered constantly or can be introduced due to system changes.
Criminal hackers use automated tools to identify and exploit known vulnerabilities and access unsecured systems, networks or data.
Exploiting vulnerabilities with automated tools is simple: attacks are cheap, easy to run and indiscriminate, so every Internet-facing organisation is at risk.
All it takes is one vulnerability for an attacker to access your network.
This is why applying patches to fix these security vulnerabilities is essential.
Updating your software, firmware and operating systems to the newest versions will help protect your organisation from potential vulnerabilities.
Worse, most intrusions are not discovered until it is too late: the global median dwell time between the start of a cyber intrusion and its identification is 24 days.
## What does a vulnerability scan test?
Automated vulnerability scanning tools scan for open ports and detect common services running on those ports.
They identify any configuration issues or other vulnerabilities on those services and look at whether best practice is being followed, such as using TLSv1.2 or higher and strong cipher suites.
A vulnerability scanning report is then generated to highlight the items that have been identified.
By acting on these findings, an organisation can improve its security posture.
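As an illustration, here is roughly what such checks look like with nmap (the hostname is a placeholder; only scan systems you are authorised to test):

```console
# detect open ports and the services/versions running behind them
$ nmap -sV scan-target.example.com

# enumerate the TLS versions and cipher suites offered on port 443
$ nmap --script ssl-enum-ciphers -p 443 scan-target.example.com
```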
## Who conducts vulnerability scans?
IT departments usually undertake vulnerability scanning if they have the expertise and software to do so, or they can call on a third-party security service provider.
Vulnerability scans are also performed by attackers who scour the Internet to find entry points into systems and networks.
Many companies have bug bounty programs that allow ethical hackers to report vulnerabilities and get paid for doing so.
Usually bug bounty programs have boundaries, e.g. they define what is allowed and what is not.
Participating in bug bounty programs must be done responsibly.
Hacking is a crime, and if you are caught you cannot just claim that you did it for good, or that you were not going to exploit your findings.
## How often should you conduct a vulnerability scan?
Vulnerability scans should be performed regularly so you can detect new vulnerabilities quickly and take appropriate action.
This will help identify your security weaknesses and the extent to which you are open to attack.
## Penetration testing
Penetration testing is the next step after vulnerability scanning.
In penetration testing, professional ethical hackers combine the results of automated scans with their expertise to reveal vulnerabilities that may not be identified by scans alone.
Penetration testers will also consider your environment (a significant factor in determining a vulnerability's true severity) and upgrade or downgrade the score as appropriate.
A scan can detect something that is a vulnerability but cannot be actively exploited because of the way it is incorporated into our system.
This makes the vulnerability a low-priority one, because why fix something that presents no danger to you?
If an issue comes up in penetration testing, that means it is exploitable and probably high priority - if the penetration testers managed to exploit it, so will the attackers.

View File

@ -0,0 +1,129 @@
# Containers Vulnerability Scanning
[Yesterday](day25.md) we learned that vulnerability scanning is the process of scanning a network or system to identify any existing security vulnerabilities.
We also learned that Containers Vulnerability Scanning is a subset of Systems Vulnerability Scanning, e.g. we are only scanning the "containers" part of our system.
In [Day 14](day14.md) we learned what container image vulnerability scanning is and how it makes us more secure.
Then in [Day 15](day15.md) we learned more about that and on Days [21](day21.md) and [22](day22.md) we learned how to integrate the scanning process into our CI/CD pipelines
so that it is automatic and enforced.
Today, we are going to look at other techniques of scanning and securing containers.
Vulnerability scanning is important, but is not a silver bullet and not a guarantee that you are secure.
There are a few reasons for that.
First, image scanning only shows you the list of _known_ vulnerabilities.
There might be many vulnerabilities which have not been discovered, but are still there and could be exploited.
Second, the security of our deployments depends not only on the image and number of vulnerabilities, but also on the way we deploy that image.
For example, if we deploy an insecure application on the open internet where everyone has access to it, or leave the default SSH port and password on our VM,
then it does not matter whether our container has vulnerabilities or not, because the attackers will use the other holes in our system to get in.
That is why today we are going to take a look at a few other aspects of containers vulnerability scanning.
## Host Security
Containers run on hosts.
Docker containers run on hosts that have the Docker Daemon installed.
The same is true for containerd, podman, cri-o, and other container runtimes.
If your host is not secured, and someone manages to break into it, they will probably have access to your containers and be able to start, stop, modify them, etc.
That is why it's important to secure the host and secure it well.
Securing VMs is a deep topic I will not go into today, but the most basic things you can do are:
- limit the visibility of the machine on the public network
- if possible use a Load Balancer to access your containers, and make the host machine not visible on the public internet
- close all unnecessary ports
- use strong passwords for SSH and RDP
At the bottom of the article I will link 2 articles from AWS and VMware about VM security.
## Network Security
Network security is another deep topic, which we will look into in better detail [tomorrow](day27.md).
At a minimum, you should not have network exposure you don't need.
E.g. if Container A does not need to make network calls to Container B, it should not be able to make those calls in the first place.
In Docker you can define [different network drivers](https://docs.docker.com/network/) that can help you with this.
In Kubernetes there are [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) that limit which container has access to what.
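As a starting point, a minimal "default deny all ingress" policy in Kubernetes looks roughly like this (the namespace is a placeholder); traffic then has to be explicitly allowed with additional policies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-namespace
spec:
  podSelector: {} # applies to all Pods in the namespace
  policyTypes:
    - Ingress     # no ingress rules are defined, so all ingress is denied
```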
## Security misconfiguration
When working with containers, there are a few security misconfigurations you can make that put you in danger of being hacked.
### Capabilities
One such thing is giving your container excessive capabilities.
[Linux capabilities](https://man7.org/linux/man-pages/man7/capabilities.7.html) determine what syscalls your container can execute.
The best practice is to be aware of the capabilities your containers need and assign only those.
That way you will be sure that a leftover capability that was never needed cannot be abused by an attacker.
In practice, it is hard to know what capabilities exactly your containers need, because that involves complex monitoring of your container over time.
Even the developers that wrote the code are probably not aware of exactly what capabilities are needed to perform the actions that their code is doing.
That is so, because capabilities are a low-level construct and developers usually write higher-level code.
However, it is good to know which capabilities you should avoid assigning to your containers, because they are too overpowered and give the container too many permissions.
One such capability is `CAP_SYS_ADMIN` which is way overpowered and can do a lot of things.
Even the Linux docs of this capability warn you that you should not be using this capability if you can avoid it.
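As a sketch, with Docker this can be controlled when starting the container (the image name and capability are just examples):

```console
# drop all capabilities, then add back only the one the workload actually needs
$ docker run --cap-drop ALL --cap-add NET_BIND_SERVICE my-image
```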
### Running as root
Running containers as root is a really bad practice and it should be avoided as much as possible.
Of course, there might be situations in which you _must_ run containers as root.
One such example are the core components of Kubernetes, which run as root containers, because they need to have a lot of privileges on the host.
However, if you are running a simple web server, or something similar, you should not need to run the container as root.
Running a container as root basically means that you are throwing away all the isolation containers give you, as a root container has almost full control over the host.
A lot of container runtime vulnerabilities are only applicable if containers are running as root.
Tools like [falco](https://github.com/falcosecurity/falco) and [kube-bench](https://github.com/aquasecurity/kube-bench) will warn you if you are running containers as root, so that you can take action and change that.
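A minimal sketch of enforcing this in a Kubernetes Pod spec (the image name and UID are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-demo
spec:
  containers:
    - name: app
      image: my-repo/my-image:1.0.0
      securityContext:
        runAsNonRoot: true # the container is refused to start if its image runs as root
        runAsUser: 10001   # run as a high, non-root UID
```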
### Resource limits
Not defining resource limits for your containers can lead to a DDoS attack that brings down your whole infrastructure.
When you are being DDoS-ed, the workload starts consuming more memory and CPU.
If that workload is a container with no limits, at some point it will drain all the available resources from the host and there will be none left for the other containers on that host.
At some point, the whole host might go down, which will lead to more pressure on your other hosts and can have a domino effect on your whole infra.
If you have sensible limits for your container, it will consume them, but the orchestrator will not give it more.
At some point, the container will die due to lack of resources, but nothing else will happen.
Your host and other containers will be safe.
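A minimal sketch of such limits in a Kubernetes Pod spec (the image and the numbers are placeholders; pick values that make sense for your workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-demo
spec:
  containers:
    - name: app
      image: my-repo/my-image:1.0.0
      resources:
        requests:        # what the scheduler reserves for the container
          memory: "128Mi"
          cpu: "250m"
        limits:          # the hard cap the container cannot exceed
          memory: "256Mi"
          cpu: "500m"
```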
## Summary
Containers Vulnerability Scanning is more than just scanning for CVEs.
It includes things like proper configuration, host security, network configuration, etc.
There is not one tool that can help with this, but there are open source solutions that you can combine to achieve the desired results.
Most of these lessons are useful no matter the orchestrator you are using.
You can be using Kubernetes, OpenShift, AWS ECS, Docker Compose, VMs with Docker, etc.
The basics are the same, and you should adapt them to the platform you are using.
Some orchestrators give you more features than others.
For example, Kubernetes has [dynamic admission controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) that let you define custom checks for your resources.
As far as I am aware, Docker Compose does not have something like this, but if you know what you want to achieve it should not be difficult to write your own.
## Resources
[This article](https://sysdig.com/blog/container-security-best-practices/) by Sysdig contains many best practices for containers vulnerability scanning.
Some of them like container image scanning and Infrastructure-as-Code scanning we already mentioned in previous days.
It also includes other useful things like [Host scanning](https://sysdig.com/blog/vulnerability-assessment/#host), [real-time logging and monitoring](https://sysdig.com/blog/container-security-best-practices/#13) and [security misconfigurations](https://sysdig.com/blog/container-security-best-practices/#11).
More on VM security:
<https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security.html>
<https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-60025A18-8FCF-42D4-8E7A-BB6E14708787.html>

View File

@ -0,0 +1,84 @@
# Network Vulnerability Scan
On [Day 25](day25.md) we learned that vulnerability scanning is the process of scanning a network or system to identify any existing security vulnerabilities.
We also learned that Network Vulnerability Scanning is a subset of Systems Vulnerability Scanning, e.g. we are only scanning the network part of our system.
Today we are going to dive deeper into what Network Vulnerability Scanning is and how we can do it.
## Network Vulnerability Scanning
**Network vulnerability scanning** is the process of identifying weaknesses on a network that are potential targets for exploitation by threat actors.
Once upon a time, before the cloud, network security was easy (sort of, good security is never easy).
You build a huge firewall around your data center, allow traffic only to the proper entrypoints and assume that everything that manages to get inside is legitimate.
This approach has one huge flaw - if an attacker manages to get through the wall, there are no more lines of defence to stop them.
Nowadays, such an approach would work even less.
With the cloud and microservices architecture, the number of actors in a network has grown exponentially.
This requires us to change our mindset and adopt new processes and tools in building secure systems.
One such process is **Network Vulnerability Scanning**.
The tool that does that is called **Network Vulnerability Scanner**.
## How does network vulnerability scanning work?
Vulnerability scanning software relies on a database of known vulnerabilities and automated tests for them.
A scanner would scan a wide range of devices and hosts on your networks, identifying the device type and operating system, and probing for relevant vulnerabilities.
A scan may be purely network-based, conducted from the wider internet (external scan) or from inside your local intranet (internal scan).
It may be a deep inspection that is possible when the scanner has been provided with credentials to authenticate itself as a legitimate user of the host or device.
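For illustration, an internal scan might look roughly like this (the subnet is a placeholder; only scan networks you are authorised to test):

```console
# discover hosts on an internal subnet and the services they expose
$ nmap -sV 10.0.0.0/24
```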
## Vulnerability management
After a scan has been performed and has found vulnerabilities, the next step is to address them.
This is the vulnerability management phase.
A vulnerability could be marked as a false positive, e.g. the scanner reported something that is not true.
It could be acknowledged and then assessed by the security team.
Many vulnerabilities can be addressed by patching, but not all.
A cost/benefit analysis should be part of the process because not all vulnerabilities are security risks in every environment, and there may be business reasons why you can't install a given patch.
It would be useful if the scanner reports alternative means to remediate the vulnerability (e.g., disabling a service or blocking a port via firewall).
## Caveats
Similar to container image vulnerability scanning, network vulnerability scanning tests your system for _known_ vulnerabilities.
So it will not find anything that has not already been reported.
Also, it will not protect you from something like exposing your admin panel to the internet and using the default password.
(Although I would assume that some network scanners are smart enough to test for well-known endpoints that should not be exposed.)
At the end of the day, it's up to you to know your system, and to know the way to test it, and protect it.
Tools only go so far.
## Network Scanners
Here is a list of network scanners that can be used for that purpose.
**NOTE:** The tools on this list are not free and open-source, but most of them have free trials, which you can use to evaluate them.
- [Intruder Network Vulnerability Scanner](https://www.intruder.io/network-vulnerability-scanner)
- [SecPod SanerNow Vulnerability Management](https://www.secpod.com/vulnerability-management/)
- [ManageEngine Vulnerability Manager Plus](https://www.manageengine.com/vulnerability-management/)
- [Domotz](https://www.domotz.com/features/network-security.php)
- [Microsoft Defender for Endpoint](https://www.microsoft.com/en-us/security/business/endpoint-security/microsoft-defender-endpoint)
- [Rapid7 InsightVM](https://www.rapid7.com/products/insightvm/)
## Summary
As with all the security processes we talked about in the previous days, network scanning is not a silver bullet.
Utilizing a network scanner would not make you secure if you are not taking care of the other aspects of systems security.
Also, using a tool like a network scanner does not mean that you don't need a security team.
Quite the opposite: a good Secure SDLC starts with enabling the security team to run that kind of tool against the system.
Then they would also be responsible for triaging the results and working with the relevant teams that need to fix the vulnerabilities.
That will be done by either patching the system, closing a hole that is not necessary, or re-architecting the system in a more secure manner.
## Resources
<https://www.comparitech.com/net-admin/free-network-vulnerability-scanners/>
<https://www.rapid7.com/solutions/network-vulnerability-scanner/>

View File

@ -32,6 +32,8 @@ The two images below will take you to the 2022 and 2023 edition of the learning
</p>
</a>
From this year we have built a website for the 90DaysOfDevOps challenge :rocket: :technologist: - [Link for website](https://www.90daysofdevops.com/#/2023)
The quickest way to get in touch is going to be via Twitter, my handle is [@MichaelCade1](https://twitter.com/MichaelCade1)