Merge branch 'main' into day-86

Commit e2b73f59df by Michael Cade, 2023-03-27 09:55:36 +01:00, committed by GitHub
37 changed files with 1624 additions and 2 deletions

.vscode/settings.json (vendored, new file)

@ -0,0 +1,5 @@
{
"githubPullRequests.ignoredPullRequestBranches": [
"main"
]
}


@ -156,8 +156,8 @@ Or contact us via Twitter, my handle is [@MichaelCade1](https://twitter.com/Mich
### Engineering for Day 2 Ops
- [] 👷🏻‍♀️ 84 > [Writing an API - What is an API?](2023/day84.md)
- [] 👷🏻‍♀️ 85 > [Queues, Queue workers and Tasks (Asynchronous architecture)](2023/day85.md)
- [] 👷🏻‍♀️ 86 > [Designing for Resilience, Redundancy and Reliability](2023/day86.md)
- [] 👷🏻‍♀️ 87 > [](2023/day87.md)
- [] 👷🏻‍♀️ 88 > [](2023/day88.md)


@ -0,0 +1,44 @@
# Getting started
We expect you already have a working Kubernetes cluster set up and available, with kubectl and helm configured.
I like using [Civo](https://www.civo.com/) for this as it is easy to set up and run clusters.
The code is available in this folder to build/push your own images if you wish - there are no instructions for this.
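If you do want to build and push your own images, the `build.sh` script included in this commit shows the pattern used for the async images; a sketch with your own registry substituted in (`<your-repo>` is a placeholder):

```shell
docker build ./async/generator/ -f async/generator/Dockerfile -t <your-repo>/generator:async && docker push <your-repo>/generator:async
```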
## Start the Database
```shell
kubectl apply -f database/mysql.yaml
```
## Deploy day 1 - sync
```shell
kubectl apply -f synchronous/k8s.yaml
```
Check your logs
```shell
kubectl logs deploy/generator
kubectl logs deploy/requestor
```
## Deploy NATS
```shell
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats
```
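You can optionally check that NATS is up before continuing; exact pod names depend on the chart version, but the statefulset is named after the release:

```shell
kubectl get pods
# expect something like my-nats-0 in a Running state
```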
## Deploy day 2 - async
```shell
kubectl apply -f async/k8s.yaml
```
Check your logs
```shell
kubectl logs deploy/generator
kubectl logs deploy/requestor
```
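Based on the generator code in this folder, a healthy deployment logs a line for each stored string, along the lines of (the 64-character value will differ each time):

```
Random string <64-character-random-string> has been inserted into the database
```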


@ -0,0 +1,17 @@
# Set the base image to use (go.mod declares go 1.20, so the toolchain must be at least 1.20)
FROM golang:1.20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy the source code into the container
COPY . .
# Build the Go application
RUN go build -o main .
# Expose the port that the application will run on
EXPOSE 8080
# Define the command that will run when the container starts
CMD ["/app/main"]


@ -0,0 +1,17 @@
module main

go 1.20

require (
	github.com/go-sql-driver/mysql v1.7.0
	github.com/nats-io/nats.go v1.24.0
)

require (
	github.com/golang/protobuf v1.5.3 // indirect
	github.com/nats-io/nats-server/v2 v2.9.15 // indirect
	github.com/nats-io/nkeys v0.3.0 // indirect
	github.com/nats-io/nuid v1.0.1 // indirect
	golang.org/x/crypto v0.6.0 // indirect
	google.golang.org/protobuf v1.30.0 // indirect
)


@ -0,0 +1,32 @@
github.com/go-sql-driver/mysql v1.7.0 h1:ueSltNNllEqE3qcWBTD0iQd3IpL/6U+mJxLkazJ7YPc=
github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/klauspost/compress v1.16.0 h1:iULayQNOReoYUe+1qtKOqw9CwJv3aNQu8ivo7lw1HU4=
github.com/minio/highwayhash v1.0.2 h1:Aak5U0nElisjDCfPSG79Tgzkn2gl66NxOMspRrKnA/g=
github.com/nats-io/jwt/v2 v2.3.0 h1:z2mA1a7tIf5ShggOFlR1oBPgd6hGqcDYsISxZByUzdI=
github.com/nats-io/nats-server/v2 v2.9.15 h1:MuwEJheIwpvFgqvbs20W8Ish2azcygjf4Z0liVu2I4c=
github.com/nats-io/nats-server/v2 v2.9.15/go.mod h1:QlCTy115fqpx4KSOPFIxSV7DdI6OxtZsGOL1JLdeRlE=
github.com/nats-io/nats.go v1.24.0 h1:CRiD8L5GOQu/DcfkmgBcTTIQORMwizF+rPk6T0RaHVQ=
github.com/nats-io/nats.go v1.24.0/go.mod h1:dVQF+BK3SzUZpwyzHedXsvH3EO38aVKuOPkkHlv5hXA=
github.com/nats-io/nkeys v0.3.0 h1:cgM5tL53EvYRU+2YLXIK0G2mJtK12Ft9oeooSZMA2G8=
github.com/nats-io/nkeys v0.3.0/go.mod h1:gvUNGjVcM2IPr5rCsRsC6Wb3Hr2CQAm08dsxtV6A5y4=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
golang.org/x/crypto v0.0.0-20210314154223-e6e6c4f2bb5b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.6.0 h1:qfktjS5LUO+fFKeJXZ+ikTRijMmljikvG68fpMMruSc=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=


@ -0,0 +1,100 @@
package main

import (
	"database/sql"
	"fmt"
	_ "github.com/go-sql-driver/mysql"
	nats "github.com/nats-io/nats.go"
	"math/rand"
	"time"
)

func generateAndStoreString() (string, error) {
	// Connect to the database
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return "", err
	}
	defer db.Close()
	// Define a string of characters to use
	characters := "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
	// Generate a random string of length 64
	randomString := make([]byte, 64)
	for i := range randomString {
		randomString[i] = characters[rand.Intn(len(characters))]
	}
	// Insert the random string into the database
	_, err = db.Exec("INSERT INTO generator_async(random_string) VALUES(?)", string(randomString))
	if err != nil {
		return "", err
	}
	fmt.Printf("Random string %s has been inserted into the database\n", string(randomString))
	return string(randomString), nil
}

func main() {
	err := createGeneratordb()
	if err != nil {
		panic(err.Error())
	}
	nc, err := nats.Connect("nats://my-nats:4222")
	if err != nil {
		panic(err.Error())
	}
	defer nc.Close()
	// Subscribe to the generator queue: when a message comes in, call
	// generateAndStoreString(), put the string on the reply queue and
	// also add a message onto the confirmation queue
	nc.Subscribe("generator", func(msg *nats.Msg) {
		s, err := generateAndStoreString()
		if err != nil {
			fmt.Println(err)
			return
		}
		nc.Publish("generator_reply", []byte(s))
		nc.Publish("confirmation", []byte(s))
	})
	// Subscribe to the confirmation reply queue: when a message comes in,
	// mark the string as received by the requestor
	nc.Subscribe("confirmation_reply", func(msg *nats.Msg) {
		stringReceived(string(msg.Data))
	})
	for {
		time.Sleep(1 * time.Second)
	}
}

func createGeneratordb() error {
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return err
	}
	defer db.Close()
	// try to create a table for us
	_, err = db.Exec("CREATE TABLE IF NOT EXISTS generator_async(random_string VARCHAR(100), seen BOOLEAN, requested BOOLEAN)")
	return err
}

func stringReceived(input string) {
	// Connect to the database
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer db.Close()
	_, err = db.Exec("UPDATE generator_async SET requested = true WHERE random_string = ?", input)
	if err != nil {
		fmt.Println(err)
	}
}


@ -0,0 +1,69 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: requestor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: requestor
  template:
    metadata:
      labels:
        app: requestor
    spec:
      containers:
        - name: requestor
          image: heyal/requestor:async
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: requestor-service
spec:
  selector:
    app: requestor
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: generator
  template:
    metadata:
      labels:
        app: generator
    spec:
      containers:
        - name: generator
          image: heyal/generator:async
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: generator-service
spec:
  selector:
    app: generator
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP


@ -0,0 +1,17 @@
# Set the base image to use (go.mod declares go 1.20, so the toolchain must be at least 1.20)
FROM golang:1.20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy the source code into the container
COPY . .
# Build the Go application
RUN go build -o main .
# Expose the port that the application will run on
EXPOSE 8080
# Define the command that will run when the container starts
CMD ["/app/main"]


@ -0,0 +1,17 @@
module main

go 1.20

require (
	github.com/go-sql-driver/mysql v1.7.0
	github.com/nats-io/nats.go v1.24.0
)

require (
	github.com/golang/protobuf v1.5.3 // indirect
	github.com/nats-io/nats-server/v2 v2.9.15 // indirect
	github.com/nats-io/nkeys v0.3.0 // indirect
	github.com/nats-io/nuid v1.0.1 // indirect
	golang.org/x/crypto v0.6.0 // indirect
	google.golang.org/protobuf v1.30.0 // indirect
)


@ -0,0 +1,32 @@
github.com/go-sql-driver/mysql v1.7.0 h1:ueSltNNllEqE3qcWBTD0iQd3IpL/6U+mJxLkazJ7YPc=
github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/klauspost/compress v1.16.0 h1:iULayQNOReoYUe+1qtKOqw9CwJv3aNQu8ivo7lw1HU4=
github.com/minio/highwayhash v1.0.2 h1:Aak5U0nElisjDCfPSG79Tgzkn2gl66NxOMspRrKnA/g=
github.com/nats-io/jwt/v2 v2.3.0 h1:z2mA1a7tIf5ShggOFlR1oBPgd6hGqcDYsISxZByUzdI=
github.com/nats-io/nats-server/v2 v2.9.15 h1:MuwEJheIwpvFgqvbs20W8Ish2azcygjf4Z0liVu2I4c=
github.com/nats-io/nats-server/v2 v2.9.15/go.mod h1:QlCTy115fqpx4KSOPFIxSV7DdI6OxtZsGOL1JLdeRlE=
github.com/nats-io/nats.go v1.24.0 h1:CRiD8L5GOQu/DcfkmgBcTTIQORMwizF+rPk6T0RaHVQ=
github.com/nats-io/nats.go v1.24.0/go.mod h1:dVQF+BK3SzUZpwyzHedXsvH3EO38aVKuOPkkHlv5hXA=
github.com/nats-io/nkeys v0.3.0 h1:cgM5tL53EvYRU+2YLXIK0G2mJtK12Ft9oeooSZMA2G8=
github.com/nats-io/nkeys v0.3.0/go.mod h1:gvUNGjVcM2IPr5rCsRsC6Wb3Hr2CQAm08dsxtV6A5y4=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
golang.org/x/crypto v0.0.0-20210314154223-e6e6c4f2bb5b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.6.0 h1:qfktjS5LUO+fFKeJXZ+ikTRijMmljikvG68fpMMruSc=
golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
google.golang.org/protobuf v1.30.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=


@ -0,0 +1,108 @@
package main

import (
	"database/sql"
	"errors"
	"fmt"
	_ "github.com/go-sql-driver/mysql"
	nats "github.com/nats-io/nats.go"
	"time"
)

func storeString(input string) error {
	// Connect to the database
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return err
	}
	defer db.Close()
	// Insert the random string into the database
	_, err = db.Exec("INSERT INTO requestor_async(random_string) VALUES(?)", input)
	if err != nil {
		return err
	}
	fmt.Printf("Random string %s has been inserted into the database\n", input)
	return nil
}

func getStringFromDB(input string) error {
	// see if the string exists in the db, if so return nil
	// if not, return an error
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return err
	}
	defer db.Close()
	result, err := db.Query("SELECT * FROM requestor_async WHERE random_string = ?", input)
	if err != nil {
		return err
	}
	defer result.Close()
	for result.Next() {
		var randomString string
		err = result.Scan(&randomString)
		if err != nil {
			return err
		}
		if randomString == input {
			return nil
		}
	}
	return errors.New("string not found")
}

func main() {
	err := createRequestordb()
	if err != nil {
		panic(err.Error())
	}
	// set up a goroutine loop asking the generator for a string every minute
	nc, err := nats.Connect("nats://my-nats:4222")
	if err != nil {
		panic(err.Error())
	}
	defer nc.Close()
	ticker := time.NewTicker(60 * time.Second)
	quit := make(chan struct{})
	go func() {
		for {
			select {
			case <-ticker.C:
				nc.Publish("generator", []byte(""))
			case <-quit:
				ticker.Stop()
				return
			}
		}
	}()
	// when a string comes back on the reply queue, save it in the DB
	nc.Subscribe("generator_reply", func(msg *nats.Msg) {
		err := storeString(string(msg.Data))
		if err != nil {
			fmt.Println(err)
		}
	})
	// when a confirmation request comes in, check the DB for the string
	// and reply so the generator can mark it as received
	nc.Subscribe("confirmation", func(msg *nats.Msg) {
		err := getStringFromDB(string(msg.Data))
		if err != nil {
			fmt.Println(err)
			return
		}
		nc.Publish("confirmation_reply", []byte(string(msg.Data)))
	})
	for {
		time.Sleep(10 * time.Second)
	}
}

func createRequestordb() error {
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return err
	}
	defer db.Close()
	// try to create a table for us
	_, err = db.Exec("CREATE TABLE IF NOT EXISTS requestor_async(random_string VARCHAR(100))")
	return err
}


@ -0,0 +1,2 @@
docker build ./async/requestor/ -f async/requestor/Dockerfile -t heyal/requestor:async && docker push heyal/requestor:async
docker build ./async/generator/ -f async/generator/Dockerfile -t heyal/generator:async && docker push heyal/generator:async


@ -0,0 +1,77 @@
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            # Use secret in real usage
            - name: MYSQL_ROOT_PASSWORD
              value: password
            - name: MYSQL_DATABASE
              value: example
            - name: MYSQL_USER
              value: example
            - name: MYSQL_PASSWORD
              value: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
# https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/


@ -0,0 +1,17 @@
# Set the base image to use (go.mod declares go 1.20, so the toolchain must be at least 1.20)
FROM golang:1.20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy the source code into the container
COPY . .
# Build the Go application
RUN go build -o main .
# Expose the port that the application will run on
EXPOSE 8080
# Define the command that will run when the container starts
CMD ["/app/main"]


@ -0,0 +1,5 @@
module main
go 1.20
require github.com/go-sql-driver/mysql v1.7.0


@ -0,0 +1,2 @@
github.com/go-sql-driver/mysql v1.7.0 h1:ueSltNNllEqE3qcWBTD0iQd3IpL/6U+mJxLkazJ7YPc=
github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=


@ -0,0 +1,139 @@
package main

import (
	"database/sql"
	"fmt"
	_ "github.com/go-sql-driver/mysql"
	"math/rand"
	"net/http"
	"time"
)

func generateAndStoreString() (string, error) {
	// Connect to the database
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return "", err
	}
	defer db.Close()
	// Define a string of characters to use
	characters := "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
	// Generate a random string of length 64
	randomString := make([]byte, 64)
	for i := range randomString {
		randomString[i] = characters[rand.Intn(len(characters))]
	}
	// Insert the random string into the database
	_, err = db.Exec("INSERT INTO generator(random_string) VALUES(?)", string(randomString))
	if err != nil {
		return "", err
	}
	fmt.Printf("Random string %s has been inserted into the database\n", string(randomString))
	return string(randomString), nil
}

func main() {
	// Create a new HTTP server
	server := &http.Server{
		Addr: ":8080",
	}
	err := createGeneratordb()
	if err != nil {
		panic(err.Error())
	}
	// Every 60 seconds, check which strings the requestor has confirmed
	ticker := time.NewTicker(60 * time.Second)
	quit := make(chan struct{})
	go func() {
		for {
			select {
			case <-ticker.C:
				checkStringReceived()
			case <-quit:
				ticker.Stop()
				return
			}
		}
	}()
	// Handle requests to /new
	http.HandleFunc("/new", func(w http.ResponseWriter, r *http.Request) {
		// Generate a random string
		randomString, err := generateAndStoreString()
		if err != nil {
			http.Error(w, "unable to generate and save random string", http.StatusInternalServerError)
			return
		}
		fmt.Printf("random string: %s\n", randomString)
		w.Write([]byte(randomString))
	})
	// Start the server
	fmt.Println("Listening on port 8080")
	err = server.ListenAndServe()
	if err != nil {
		panic(err.Error())
	}
}

func createGeneratordb() error {
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return err
	}
	defer db.Close()
	// try to create a table for us
	_, err = db.Exec("CREATE TABLE IF NOT EXISTS generator(random_string VARCHAR(100), seen BOOLEAN)")
	return err
}

func checkStringReceived() {
	// get a list of strings from the database that don't have the "seen" bool set to true,
	// loop over them and call the requestor's 'check' endpoint; if we get a 200, set seen to true
	// Connect to the database
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer db.Close()
	// Get the strings that have not yet been marked as seen
	results, err := db.Query("SELECT random_string FROM generator WHERE seen IS NOT true")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer results.Close()
	// loop over results
	for results.Next() {
		var randomString string
		err = results.Scan(&randomString)
		if err != nil {
			fmt.Println(err)
			continue
		}
		// make a call to the requestor's 'check' endpoint
		// if we get a 200 then set seen to true
		r, err := http.Get("http://requestor-service:8080/check/" + randomString)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if r.StatusCode == 200 {
			_, err = db.Exec("UPDATE generator SET seen = true WHERE random_string = ?", randomString)
			if err != nil {
				fmt.Println(err)
			}
		} else {
			fmt.Printf("Random string has not been received by the requestor: %s\n", randomString)
		}
	}
}


@ -0,0 +1,69 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: requestor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: requestor
  template:
    metadata:
      labels:
        app: requestor
    spec:
      containers:
        - name: requestor
          image: heyal/requestor:sync
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: requestor-service
spec:
  selector:
    app: requestor
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: generator
  template:
    metadata:
      labels:
        app: generator
    spec:
      containers:
        - name: generator
          image: heyal/generator:sync
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: generator-service
spec:
  selector:
    app: generator
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  type: ClusterIP


@ -0,0 +1,17 @@
# Set the base image to use (go.mod declares go 1.20, so the toolchain must be at least 1.20)
FROM golang:1.20-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy the source code into the container
COPY . .
# Build the Go application
RUN go build -o main .
# Expose the port that the application will run on
EXPOSE 8080
# Define the command that will run when the container starts
CMD ["/app/main"]


@ -0,0 +1,5 @@
module main
go 1.20
require github.com/go-sql-driver/mysql v1.7.0


@ -0,0 +1,2 @@
github.com/go-sql-driver/mysql v1.7.0 h1:ueSltNNllEqE3qcWBTD0iQd3IpL/6U+mJxLkazJ7YPc=
github.com/go-sql-driver/mysql v1.7.0/go.mod h1:OXbVy3sEdcQ2Doequ6Z5BW6fXNQTmx+9S1MCJN5yJMI=


@ -0,0 +1,134 @@
package main

import (
	"database/sql"
	"errors"
	"fmt"
	_ "github.com/go-sql-driver/mysql"
	"io"
	"net/http"
	"strings"
	"time"
)

func storeString(input string) error {
	// Connect to the database
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return err
	}
	defer db.Close()
	// Insert the random string into the database
	_, err = db.Exec("INSERT INTO requestor(random_string) VALUES(?)", input)
	if err != nil {
		return err
	}
	fmt.Printf("Random string %s has been inserted into the database\n", input)
	return nil
}

func getStringFromDB(input string) error {
	// see if the string exists in the db, if so return nil
	// if not, return an error
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return err
	}
	defer db.Close()
	result, err := db.Query("SELECT * FROM requestor WHERE random_string = ?", input)
	if err != nil {
		return err
	}
	defer result.Close()
	for result.Next() {
		var randomString string
		err = result.Scan(&randomString)
		if err != nil {
			return err
		}
		if randomString == input {
			return nil
		}
	}
	return errors.New("string not found")
}

func getStringFromGenerator() {
	// make a request to the generator
	// save the string to the db
	r, err := http.Get("http://generator-service:8080/new")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer r.Body.Close()
	body, err := io.ReadAll(r.Body)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("body from generator: %s\n", string(body))
	if err := storeString(string(body)); err != nil {
		fmt.Println(err)
	}
}

func main() {
	// set up a goroutine loop calling the generator every minute, saving the result in the DB
	ticker := time.NewTicker(60 * time.Second)
	quit := make(chan struct{})
	go func() {
		for {
			select {
			case <-ticker.C:
				getStringFromGenerator()
			case <-quit:
				ticker.Stop()
				return
			}
		}
	}()
	// Create a new HTTP server
	server := &http.Server{
		Addr: ":8080",
	}
	err := createRequestordb()
	if err != nil {
		panic(err.Error())
	}
	// Handle requests to /check/<STRING>
	http.HandleFunc("/check/", func(w http.ResponseWriter, r *http.Request) {
		// get the value after /check/ from the url
		id := strings.TrimPrefix(r.URL.Path, "/check/")
		// check if it exists in the db
		err := getStringFromDB(id)
		if err != nil {
			http.Error(w, "string not found", http.StatusNotFound)
			return
		}
		fmt.Fprint(w, "string found")
	})
	// Start the server
	fmt.Println("Listening on port 8080")
	err = server.ListenAndServe()
	if err != nil {
		panic(err.Error())
	}
}

func createRequestordb() error {
	db, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/mysql")
	if err != nil {
		return err
	}
	defer db.Close()
	// try to create a table for us
	_, err = db.Exec("CREATE TABLE IF NOT EXISTS requestor(random_string VARCHAR(100))")
	return err
}


@ -71,6 +71,16 @@ Auditing Kubernetes configuration is simple and there are multiple tools you can
We will see the simple way to scan our cluster with Kubescape:

**Note**: a kubescape installation error on macOS M1 and M2 chips has been fixed; see the [kubescape](https://github.com/kubescape/kubescape) repository.

```bash
curl -s https://raw.githubusercontent.com/kubescape/kubescape/master/install.sh | /bin/bash
kubescape scan --enable-host-scan --verbose
```


@ -67,4 +67,6 @@ Cilium is a Container Networking Interface that leverages eBPF to optimize packe
### Conclusion
A service mesh is a powerful application networking layer that provides traffic management, observability, and security. We will explore more in the next 6 days of #90DaysOfDevOps!
Want to get deeper into Service Mesh? Head over to [#70DaysofServiceMesh](https://github.com/distributethe6ix/70DaysOfServiceMesh)!
See you in [Day 78](day78.md).


@ -229,4 +229,6 @@ Let's label our default namespace with the *istio-injection=enabled* label. This
### Conclusion
I decided to jump into getting a service mesh up and online. It's easy enough if you have the right pieces in place, like a Kubernetes cluster and a load-balancer service. Using the demo profile, you can have Istiod and the Ingress/Egress gateways deployed. Deploy a sample app with a service definition, and you can expose it via the Ingress-Gateway and route to it using a virtual service.
Want to get deeper into Service Mesh? Head over to [#70DaysofServiceMesh](https://github.com/distributethe6ix/70DaysOfServiceMesh)!
See you on [Day 79](day79.md) and beyond of #90DaysofServiceMesh


@ -66,4 +66,6 @@ Governance and Oversight | Istio Community | Linkered Community | AWS | Hashicor
### Conclusion
Service Meshes have come a long way in terms of capabilities and the environments they support. Istio appears to be the most feature-complete service mesh, providing a balance of platform support, customizability, and extensibility, and is the most production-ready. Linkerd trails right behind with a lighter-weight approach, and is mostly complete as a service mesh. AppMesh is mostly feature-filled but specific to the AWS ecosystem. Consul is a great contender to Istio and Linkerd. The Cilium CNI is taking the approach of using eBPF and climbing up the networking stack to address Service Mesh capabilities, but it has a lot of catching up to do.
Want to get deeper into Service Mesh? Head over to [#70DaysofServiceMesh](https://github.com/distributethe6ix/70DaysOfServiceMesh)!
See you on [Day 80](day80.md) of #70DaysOfServiceMesh!


@ -334,4 +334,6 @@ I briefly covered several traffic management components that allow requests to f
And I got to show you all of this in action!
Want to get deeper into Service Mesh Traffic Engineering? Head over to [#70DaysofServiceMesh](https://github.com/distributethe6ix/70DaysOfServiceMesh)!
See you on [Day 81](day81.md) and beyond! :smile:


@ -201,4 +201,6 @@ Go ahead and end the Kiali dashboard process with *ctrl+c*.
### Conclusion
I've explored a few of the tools that let us observe services in our mesh and better understand how our applications are performing, or if there are any issues.
Want to get deeper into Service Mesh Observability? Head over to [#70DaysofServiceMesh](https://github.com/distributethe6ix/70DaysOfServiceMesh)!
See you on [Day 82](day82.md)


@ -0,0 +1,289 @@
## Day 82 - Securing your microservices
> **Tutorial**
>> *Let's secure our microservices*
Security is an extensive area when it comes to microservices. I could easily spend an entire 70 days on security alone, but I want to get to the specifics of testing this out in a lab environment with a Service Mesh.
Before I do, what security concerns am I addressing with a Service Mesh?
- Identity
- Policy
- Authorization
- Authentication
- Encryption
- Data Loss Prevention
There's plenty to dig into with these, but what specifically about a Service Mesh helps us achieve them?
The sidecar, an ingress gateway, a node-level proxy, and the service mesh control plane interacting with the Kubernetes layer.
As a security operator, I may issue policy configurations, or authentication configurations to the Istio control plane which in turn provides this to the Kubernetes API to turn these into runtime configurations for pods and other resources.
In Kubernetes, the CNI layer may be able to provide a limited amount of network policy and encryption. Looking at a service mesh, encryption can be provided through mutual-TLS, or mTLS for service-to-service communication, and this same layer can provide a mechanism for Authentication using strong identities in SPIFFE ID format. Layer 7 Authorization is another capability of a service mesh. We can authorize certain services to perform actions (HTTP operations) against other services.
mTLS is used to authenticate peers in both directions; more on mTLS and TLS in later days.
To simplify this, Authentication is about having keys to unlock and enter through the door, and Authorization is about what you are allowed to do/touch, once you're in. Defence in Depth.
Let's review what Istio offers and proceed to configure some of this. We will explore some of these in greater detail in future days.
### Istio Peer Authentication and mTLS
One of the key aspects of the Istio service mesh is its ability to issue and manage identity for workloads that are deployed into the mesh. To put it into perspective, if all services have a sidecar and are issued an identity (their own identity) from the Istiod control plane, a new ability to trust and verify services now exists. This is how Peer Authentication is achieved using mTLS. I plan to go into lots more detail in future modules.
In Istio, Peer Authentication must be configured for services and can be scoped to workloads, namespaces, or the entire mesh.
There are three modes; I'll explain them briefly and then we'll get to configuring!
* PERMISSIVE: for when you have plaintext AND encrypted traffic. Migration-oriented
* STRICT: Only mTLS enabled workloads
* DISABLE: No mTLS at all.
We can also take care of End-user Auth using JWT (JSON Web Tokens) but I'll explore this later.
### Configuring Istio Peer AuthN and Strict mTLS
Let's get to configuring our environment with Peer Authentication and verify.
We already have our environment ready to go so we just need to deploy another sample app that won't have the sidecar, and we also need to turn up an Authentication policy.
Let's deploy a new namespace called sleep and deploy a sleeper pod to it.
```
kubectl create ns sleep
```
```
kubectl get ns
```
```
cd istio-1.16.1
kubectl apply -f samples/sleep/sleep.yaml -n sleep
```
Let's test to make sure the sleeper pod can communicate with the bookinfo app!
This command simply execs into the sleep pod (found by the "app=sleep" label in the sleep namespace) and proceeds to curl productpage in the default namespace.
The status code should be 200!
```
kubectl exec "$(kubectl get pod -l app=sleep -n sleep -o jsonpath={.items..metadata.name})" -c sleep -n sleep -- curl productpage.default.svc.cluster.local:9080 -s -o /dev/null -w "%{http_code}\n"
```
```
200
```
Let's apply our PeerAuthentication
```
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF
```
Now, if we re-run the curl command as I did previously (I just up-arrowed but you can copy and paste from above), it will fail with exit code 56. This strongly implies that peer-authentication in STRICT mode disallows non-mTLS workloads from communicating with mTLS workloads.
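The failed check looks something like this (the exit-code message is surfaced by kubectl; output abbreviated):

```
kubectl exec "$(kubectl get pod -l app=sleep -n sleep -o jsonpath={.items..metadata.name})" -c sleep -n sleep -- curl productpage.default.svc.cluster.local:9080 -s -o /dev/null -w "%{http_code}\n"
command terminated with exit code 56
```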
As I mentioned previously, I'll expand further in future days.
### Istio Layer 7 Authorization
Authorization is a very interesting area of security, mainly because we can granularly control who can perform which actions against another resource. More specifically, what HTTP operations can one service perform against another? Can a service *GET* data from another service using HTTP operations?
I'll briefly explore a simple Authorization policy that allows GET requests but disallows DELETE requests against a resource. This is a highly granular approach to Zero-trust.
A consideration when moving towards production is to use a tool like Kiali (or others) to create a service mapping and understand the various HTTP calls made between services. This will allow you to establish incremental policies.
The one key element here is that Envoy (or Proxyv2 for Istio), as a sidecar, is the policy enforcement point for all Authorization Policies. Policies are evaluated by the authorization engine that authorizes requests at runtime. The destination sidecar, or Envoy, is responsible for evaluating this.
Let's get to configuring!
### Configuring L7 Authorization Policies
In order to get a policy up and running, we first need to deny all HTTP-based operations.
Also, the flow of the request looks like this:
![security](images/Day82-1.png)
The lock-icon is indicative of the fact that mTLS is enabled and ready to go.
Let's DENY ALL
```
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: default
spec: {}
EOF
```
Now if you curl bookinfo's product page from our sleep pod, it will fail!
```
kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl productpage.default.svc.cluster.local:9080 -s -o /dev/null -w "%{http_code}\n"
```
You'll see a 403, HTTP 403 = Forbidden...or Not Authorized
Here's my output:
```
marinow@marinos-air ~ % kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl productpage.default.svc.cluster.local:9080 -s -o /dev/null -w "%{http_code}\n"
200
```
```
marinow@marinos-air ~ % kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: default
spec: {}
EOF
authorizationpolicy.security.istio.io/allow-nothing created
```
```
marinow@marinos-air ~ % kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl productpage.default.svc.cluster.local:9080 -s -o /dev/null -w "%{http_code}\n"
403
marinow@marinos-air ~ %
```
To fix this, we can apply a policy for each service in the request path to allow for service-to-service communication.
We need four authorization policies:
1. From the user client to productpage
2. From productpage to details
3. From productpage to reviews
4. From reviews to ratings
If we look at the configuration below, the AuthZ Policy has a few key elements:
1. The apiVersion, which specifies that this is an Istio resource/CRD
2. The spec, which states which resource will receive the inbound HTTP request, and the method, which is GET
3. The action that can take place against the above resource, which is *ALLOW*
This allows an external client to run an HTTP GET operation against product-page. All other types of requests will be denied.
Let's apply it.
```
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "get-productpage"
  namespace: default
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
    - to:
        - operation:
            methods: ["GET"]
EOF
```
The curl command run previously should return a 200 success code.
```
kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl productpage.default.svc.cluster.local:9080 -s -o /dev/null -w "%{http_code}\n"
```
But if we change the resource from productpage.default.svc.cluster.local:9080 to ratings.default.svc.cluster.local:9080, and curl to it, it should return a 403.
```
kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl ratings.default.svc.cluster.local:9080 -s -o /dev/null -w "%{http_code}\n"
```
We can even see the result of applying just one AuthZ policy:
![security](images/Day82-2.png)
I am incrementally allowing services to trust only the services that they need to communicate with.
Let's apply the rest of the policies:
This one will allow productpage to make HTTP GET requests to the *reviews* service.
```
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "get-reviews"
  namespace: default
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/default/sa/bookinfo-productpage"]
      to:
        - operation:
            methods: ["GET"]
EOF
```
There is a *rules* section which specifies the source of the request through the *principals* key. Here we specify the service account of bookinfo-productpage, its identity.
This policy allows the *reviews* service to get data from the *ratings* service
```
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "get-ratings"
  namespace: default
spec:
  selector:
    matchLabels:
      app: ratings
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/default/sa/bookinfo-reviews"]
      to:
        - operation:
            methods: ["GET"]
EOF
```
This policy will allow *productpage* to get details from the *details* service.
```
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: "get-details"
  namespace: default
spec:
  selector:
    matchLabels:
      app: details
  action: ALLOW
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/default/sa/bookinfo-productpage"]
      to:
        - operation:
            methods: ["GET"]
EOF
```
If all has been deployed correctly, accessing bookinfo should provide a successful result.
*If you've set up a local host dns record, you should be able to go to bookinfo.io/productpage to see this working*
Seeing this in action:
![security](images/Day82-3.png)
Now we know our AuthZ policies are working.
On Day 82 (plus several days), I dug into Authentication with mTLS and Authorization with Authorization Policies. This just scratches the surface and we absolutely need to dig deeper. Want to get deeper into Service Mesh Security? Head over to [#70DaysofServiceMesh](https://github.com/distributethe6ix/70DaysOfServiceMesh)!
See you on [Day 83](day83.md)


@ -0,0 +1,266 @@
## Day 83 - Sidecar or Sidecar-less? Enter Ambient Mesh
> **Tutorial**
>> *Let's investigate sidecar-less with Ambient Mesh*
### Enter Ambient Mesh
For Day 83 (and the final day of Service Mesh for #90DaysOfDevOps), I decided to provide some background on Istio Ambient Mesh, offering another approach to onboarding a service mesh.
Istio has been around for a while and has been significantly battle-tested against various conditions, scenarios, configurations, and use-cases.
Design patterns have emerged where the sidecar might inhibit performance of applications, and through experimentation and a constant feedback loop, Ambient Mesh was born. Ambient Mesh was a joint collaboration between [Solo.io](https://Solo.io) and Google. You can read about the announcement [here](https://istio.io/latest/blog/2022/introducing-ambient-mesh/).
Ambient Mesh is still a part of Istio and as of this moment, is in Alpha under Istio 1.17. When you download istioctl, there is an Ambient Mesh profile.
### The Benefits of Ambient Mesh
1. Reduced resource consumption: Sidecar proxies can consume a significant amount of resources, particularly in terms of memory and CPU. Eliminating the need for sidecar proxies allows us to reduce the resource overhead associated with your service mesh.
2. Simplified deployment: Without sidecar proxies, the deployment process becomes more straightforward, making it easier to manage and maintain your service mesh. You no longer need to worry about injecting sidecar proxies into your application containers or maintaining their configurations.
3. Faster startup times: Sidecar proxies can add latency to the startup time of your services, as they need to be initialized before your applications can start processing traffic. A sidecar-less approach can help improve startup times and reduce overall latency.
4. Lower maintenance: Sidecar proxies require regular updates, configuration management, and maintenance. A sidecar-less approach can simplify your operational tasks and reduce the maintenance burden.
5. Easier experimentation: A sidecar-less approach like Ambient Mesh allows you to experiment with service mesh architectures without making significant changes to your existing applications. This lowers the barrier to entry and allows you to more easily evaluate the benefits of a service mesh.
6. Faster time to Zero-Trust: Ambient Mesh deploys the necessary key components to employ mTLS, Authentication, L4 Authorization and L7 Authorization
7. Still Istio: Istio Ambient Mesh still has all the same CRDs present, like Virtual Services, Gateways, Destination Rules, Service Entries, Authentication, and Authorization Policies.
8. Sidecar AND Sidecar-less: With Istio in Ambient mode, you can still deploy sidecars to the resources that need them, and still have communication between sidecar and sidecar-less workloads.
### Ambient Mesh Architecture
There are several main components that drive Ambient Mesh today:
- **Istiod**: Which is the control plane for the Istio service mesh and is the point of configuration, and certificate management.
- **The L4 Ztunnel**: This is strictly responsible for handling mTLS for communicating pods through ztunnel pods on each node. The ztunnel pods form mTLS between each other. The ztunnel is responsible for L4 Authorization as well. This is a `daemonset` in Kubernetes.
- **The Istio CNI Node**: Responsible for directing traffic from a workload pod to the ztunnel. This is a `daemonset` in Kubernetes.
- **The WayPoint Proxy**: The L7 proxy which provides the same functionality as the sidecar, except deployed at the destination of a request to process things like L7 Authorization Policies. This leverages the Gateway API resource.
- **HBONE**: The tunnel protocol which uses TCP Port 15008 to form tunnels between ztunnels on different nodes and between ztunnels and Waypoint Proxies.
This diagram from this [Ambient Security Blog](https://istio.io/latest/blog/2022/ambient-security/) provides a good representation of the architecture.
![Ambient Mesh](https://istio.io/latest/blog/2022/ambient-security/ambient-layers.png)
### ztunnels and Rust
To enhance the experience and performance of Ambient Mesh, the ztunnel which previously used a slimmed down Envoy instance, now uses Rust. Read more over at this [Rust-Based Ztunnel blog](https://istio.io/latest/blog/2023/rust-based-ztunnel/)
### Deploying Ambient Mesh
Before we get started, you want to make sure you have a basic Kubernetes cluster. *K3s will not work today.*
I'd recommend KinD or Minikube so you have fewer restrictions, and the setup is pretty easy.
**Note** You can actually run this on ARM or x86, so this will run well on your ARM-based Mac. In my setup I'm using x86-based Ubuntu 22.04.
Let's first download the binary; you can select your flavour here: https://github.com/istio/istio/releases/tag/1.18.0-alpha.0
```
wget https://github.com/istio/istio/releases/download/1.18.0-alpha.0/istio-1.18.0-alpha.0-linux-amd64.tar.gz
```
And then uncompress the file and change to the directory so you'll have access to the istioctl binary:
```
tar -xf istio-1.18.0-alpha.0-linux-amd64.tar.gz istio-1.18.0-alpha.0/
```
```
cd istio-1.18.0-alpha.0/
```
Finally, let's make istioctl executable everywhere
```
export PATH=$PWD/bin:$PATH
```
Next, let's create your KinD cluster (assuming you already have Docker ready to go)
```
kind create cluster --config=- <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: ambient
nodes:
  - role: control-plane
  - role: worker
  - role: worker
EOF
```
Which should produce the following output once completed:
```
Creating cluster "ambient" ...
✓ Ensuring node image (kindest/node:v1.25.3) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-ambient"
You can now use your cluster with:
kubectl cluster-info --context kind-ambient
Thanks for using kind! 😊
```
If you'd like, feel free to verify that the cluster is there by running
```
kubectl get nodes -o wide
```
If all looks good, let's install Istio with the Ambient Profile
```
istioctl install --set profile=ambient --skip-confirmation
```
Here is the output from my terminal assuming everything works:
```
marino@mwlinux02:~$ istioctl install --set profile=ambient --skip-confirmation
✔ Istio core installed
✔ Istiod installed
✔ CNI installed
✔ Ingress gateways installed
✔ Ztunnel installed
✔ Installation complete
Making this installation the default for injection and validation.
```
Let's verify that all the key components have been deployed:
```
kubectl get pods -n istio-system
```
And you should see:
```
marino@mwlinux02:~$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-cni-node-2wxwt 1/1 Running 0 40s
istio-cni-node-6cptk 1/1 Running 0 40s
istio-cni-node-khvbq 1/1 Running 0 40s
istio-ingressgateway-cb4d6fd7-mcgtz 1/1 Running 0 40s
istiod-7596fdbb4c-8tfz8 1/1 Running 0 46s
ztunnel-frxrc 1/1 Running 0 47s
ztunnel-lq4gk 1/1 Running 0 47s
ztunnel-qbz6w 1/1 Running 0 47s
```
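As noted in the architecture section, istio-cni-node and ztunnel run as daemonsets (one pod per node), which you can confirm with:

```
kubectl get daemonsets -n istio-system
```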
Now, we can go ahead and deploy our app. Make sure you're still in the `istio-1.18.0-alpha.0/` directory.
```
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
```
And proceed to deploy two additional pods, `sleep` and `notsleep`
```
kubectl apply -f samples/sleep/sleep.yaml
kubectl apply -f samples/sleep/notsleep.yaml
```
We also need to expose our app via the Istio Ingress Gateway. Let's quickly review the file:
```
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  # The selector matches the ingress gateway pod labels.
  # If you installed Istio using Helm following the standard documentation, this would be "istio=ingress"
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
    - "*"
  gateways:
    - bookinfo-gateway
  http:
    - match:
        - uri:
            exact: /productpage
        - uri:
            prefix: /static
        - uri:
            exact: /login
        - uri:
            exact: /logout
        - uri:
            prefix: /api/v1/products
      route:
        - destination:
            host: productpage
            port:
              number: 9080
```
If you need a refresher on the Gateway and Virtual Services resources, check out [Day80](day80.md)!
The gateway resource here is listening on *any* host, for port 80 with the HTTP protocol, and is abstracting the Istio Ingress Gateway. The Virtual Service resource is directing requests to the appropriate Kubernetes Service and its port, based on the URI path.
Let's apply it
```
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
```
Next we can test basic connectivity:
```
kubectl exec deploy/sleep -- curl -s http://istio-ingressgateway.istio-system/productpage | grep -o "<title>.*</title>"
```
```
kubectl exec deploy/sleep -- curl -s http://productpage:9080/ | grep -o "<title>.*</title>"
```
```
kubectl exec deploy/notsleep -- curl -s http://productpage:9080/ | grep -o "<title>.*</title>"
```
Which will all produce the result:
```
<title>Simple Bookstore App</title>
```
So we know we can access our apps, but even with Istio installed, we still need to enable the mesh. So....
FINALLY, we're going to throw our services into the ambient-enabled mesh. Notice how we label the namespace with `dataplane-mode=ambient` versus using the normal `istio-injection=enabled` label.
```
kubectl label namespace default istio.io/dataplane-mode=ambient
```
This label effectively tells the `ztunnel` to capture the identity of an outbound workload and act on its behalf for mTLS, while also telling the `istio-cni-node` pods to route traffic towards the ztunnel.
You can re-run the `curl` commands above if you'd like but, I'd recommend just running this looped-curl, and we can check the logs:
```
for i in {1..10}; do kubectl exec deploy/notsleep -- curl -s http://productpage:9080/ | grep -o "<title>.*</title>"; sleep 1; done
```
Now, let's review the ztunnel logs:
```
kubectl logs -n istio-system -l app=ztunnel
```
And the output:
```
2023-03-25T15:21:34.772090Z INFO inbound{id=50c102c520ed8af1a79e89b9dc38c4ba peer_ip=10.244.1.4 peer_id=spiffe://cluster.local/ns/default/sa/notsleep}: ztunnel::proxy::inbound: got CONNECT request to 10.244.2.8:9080
2023-03-25T15:21:29.935241Z INFO outbound{id=a9056d62a14941a70613094ac981c5e6}: ztunnel::proxy::outbound: proxy to 10.244.2.8:9080 using HBONE via 10.244.2.8:15008 type Direct
```
The output is very interesting because you can see that `notsleep` is using the HBONE protocol on port `15008` to tunnel via the ztunnel over to `productpage`. If you run `kubectl get pods -o wide` you will see that the IPs are owned by the notsleep and productpage pods. So NOW you've got a bit of exposure to Ambient Mesh, go explore more!
### There is so much more to Ambient Mesh
This module has gotten pretty long, but I encourage you to dig deeper into Ambient Mesh and see how you can potentially use it for things like Edge Computing for low-resource environments. If you want some more guided labs, check out [Solo Academy](https://academy.solo.io)!!!
### The end of the Service Mesh section for #90DaysofDevOps
We have certainly reached the end of the Service Mesh section but, I encourage y'all to check out [#70DaysofServiceMesh](https://github.com/distributethe6ix/70DaysOfServiceMesh) to get even deeper and get ultra meshy :smile:!
See you on [Day 84](day84.md) and beyond! :smile:!


@ -0,0 +1,58 @@
# Writing an API - What is an API?
The acronym API stands for "application programming interface". What does this really mean, though? It's a way of
controlling an application programmatically. So when you use a website that displays some data to you (like Twitter),
there will be an action taken by the interface to get data from or send data to the application (the Twitter backend in
this example) - this is done programmatically in the background by code running in the user interface.
In the example given above we looked at a public API; however, the vast majority of APIs are private. One
request to the public Twitter API will likely cause a cascade of interactions between programs in the backend. These
could be to save the tweet text into a datastore, to update the number of likes or views a tweet has, or to take an image
that has been uploaded and resize it for a better viewing experience.
We build programs with APIs that other people can call so that we can expose program logic to other developers, teams
and our customers or suppliers. They are a predefined way of sharing information. For example, we can define an API
using the [openapi specification](https://swagger.io/resources/open-api/), which is used
for [RESTful](https://en.wikipedia.org/wiki/Representational_state_transfer) API design. This API specification forms a
contract that we can fulfil. For example, if you make an API request to me and pass a specific set of content, such as a
date range, I will respond with a specific set of data. Therefore you can reliably expect to receive data of a certain
type when calling my API.
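To give a flavour of what such a contract looks like, here is a minimal, illustrative OpenAPI fragment for the `/new` endpoint we build below (this file is not part of the repository):

```yaml
openapi: 3.0.3
info:
  title: generator
  version: "1.0.0"
paths:
  /new:
    get:
      summary: Generate, store and return a new random string
      responses:
        "200":
          description: The newly generated random string
          content:
            text/plain:
              schema:
                type: string
```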
We are going to build up an example set of applications that communicate using an API for this section of the learning
journey to illustrate the topics and give you a hands-on look at how things can be done.
Design:
2 programs that communicate bi-directionally; every minute or so one application will request a random string from the
other, and once one is received it will store this string in a database for future use.
The random string generator will generate a random string when requested and save it into a database; the application
will then ask the first program for confirmation that it received the string, and store this information against the
string in the database.
The applications will be called:
- generator
- requestor
This may sound like a silly example, but it allows us to quickly look into the various tasks involved with building,
deploying, monitoring and owning a service that runs in production. There are bi-directional failure modes, as each
application needs something from the other to complete successfully, and things we can monitor such as API call rates -
we can see if one application stops running.
We now need to decide what our API interfaces should look like. We have 2 API calls that will be used to communicate
here. Firstly, the `requestor` will call the `generator` and ask for a string. This is likely going to be an API call
without any additional content other than making a request for a string. Secondly, the `generator` will start to ask
the `requestor` for confirmation that it received and stored the string; in this case we need to pass a parameter to
the `requestor`, which will be the string we are interested in knowing about.
The `generator` will use a URL path of `/new` to serve up random strings.
The `requestor` is going to use URL paths to receive string information from the `generator` to check the status, so we
will set up a URL of `/strings/<STRING>` where <STRING> is the string of interest.
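A minimal sketch of these two handlers using Go's built-in HTTP server (illustrative only; in the real `day2-ops-code` programs the two handlers live in separate services and do more work):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// generator side: serve a newly generated string on /new
	http.HandleFunc("/new", func(w http.ResponseWriter, r *http.Request) {
		// a real implementation generates the string and stores it in the database
		fmt.Fprint(w, "a-random-string")
	})
	// requestor side: report whether a given string was received on /strings/<STRING>
	http.HandleFunc("/strings/", func(w http.ResponseWriter, r *http.Request) {
		s := strings.TrimPrefix(r.URL.Path, "/strings/")
		// a real implementation looks the string up in the database
		fmt.Fprintf(w, "checking for %s\n", s)
	})
	http.ListenAndServe(":8080", nil)
}
```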
## Building the API
There is a folder in the GitHub repository under the 2023 section called `day2-ops-code` and we will be using this
folder to store our code for this, and future, sections of the learning journey.
We are using Golang's built-in HTTP server to serve up our endpoints and asynchronous goroutines to handle the
checks. Every 60 seconds we will look into the generator's database, get all the strings which we don't have
confirmation for, and then call the requestor's endpoint to check if each string is there.


@ -0,0 +1,62 @@
# Queues, Queue workers and Tasks (Asynchronous architecture)
Yesterday we looked at how we can use HTTP-based APIs for communication between services. This works well until you need
to scale, release a new version, or one of your services goes down. Then we start to see the calling service fail because
its dependency is not working as expected. We have tightly coupled our two services; one can't work without the other.
There are many ways to solve this problem. A light-touch approach for existing applications is to use something called a
circuit breaker to buffer failures and retry until we get a successful response. This is explained well in this blog
by [Martin Fowler](https://martinfowler.com/bliki/CircuitBreaker.html). However, this is synchronous; if we were to wrap
our calls in a circuit breaker we would start to block processes and our user could see a slowdown in response times.
Additionally, we can't scale our applications using this approach. The way that the code is currently written, every
instance of our `generator` API would be asking the `requestor` for confirmation of receiving the string. This won't
scale well when we move to having 2, 5, 10, or 100 instances running. We would quickly see the `requestor` being
overwhelmed with requests from the 100 generator applications.
There is a way to solve these problems, which is to use Queues. This is a shift in thinking to using an asynchronous
approach to solving our problem. This can work well when the responses don't need to be immediate between applications.
In this case it doesn't matter if we add some delay in the requests between the applications; as long as the data
eventually flows between them, we are happy.
![Queues, producers and Consumers](./images/day84-queues.png)
(https://dashbird.io/knowledge-base/well-architected/queue-and-asynchronous-processing/)
In the drawing above we can see how we can add a Queue in between our applications, and the Queue stores the intent of
the message across the bridge. If the Consumer were to fail and stop processing messages then, providing our Queue software
has sufficient storage, the messages to the consumer would still "succeed" as far as the producer is concerned.
This is a powerful pattern that isolates components of a system from each other and limits the blast radius of failure.
It does however add complexity. The Consumer and Producer must share a common understanding of the shape of a message,
otherwise the Consumer will be unable to act on the message.
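As a concrete (illustrative) example of such a shared contract, both sides could compile against the same message type; the demo apps in this commit simply pass raw strings, so this is just to make the idea tangible:

```go
package messages

import "time"

// StringMessage is a hypothetical shared contract between producer and consumer.
// If the producer changes this shape without the consumer updating too,
// the consumer can no longer act on the messages it receives.
type StringMessage struct {
	Value     string    `json:"value"`
	CreatedAt time.Time `json:"created_at"`
}
```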
We are going to implement a Queue in between our two applications in our data flows.
By the end of the section we will have the following data flows
Requestor (asks for a random string) → Queue → Generator (gets the message, generates a string and passes it back to the
Requestor on another Queue) → Requestor (gets the string back on a queue and stores it, then sends a message to the
queue saying it has received it) → Queue → Generator (marks that the message was received)
The last section here, where the Generator needs to know if the Requestor got the message is a simplification of a real
process where an application may pass back an enriched data record or some further information but it allows us to have
a simple two way communication.
Can you see how by the end of this section we will be able to stop, scale, deploy or modify each of the two components
without the other needing to know?
## Modifying our application
We are now going to modify our app to fit the model described above. Where we previously made a HTTP api call to our
partner app we are now going to place a message on a named queue and then subscribe to the corresponding response
queues.
Finally, we are going to stop using HTTP endpoints to listen for requests; we are now subscribing to queues and waiting
for messages before we perform any work.
I have picked [NATS.io](https://nats.io/) as the queue technology as I have worked with it previously and it's very easy
to set up and use.
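A minimal sketch of the publish/subscribe pattern we are about to use, based on the nats.go client and the subject names from the code in this commit (error handling trimmed for brevity):

```go
package main

import (
	"fmt"
	"time"

	nats "github.com/nats-io/nats.go"
)

func main() {
	// connect to the NATS server installed via the helm chart in this folder's README
	nc, err := nats.Connect("nats://my-nats:4222")
	if err != nil {
		panic(err)
	}
	defer nc.Close()
	// the generator side reacts to messages instead of serving HTTP requests
	nc.Subscribe("generator", func(msg *nats.Msg) {
		// do the work, then put the result on a reply subject
		nc.Publish("generator_reply", []byte("a-new-random-string"))
	})
	// the requestor side receives the reply whenever the generator gets to it
	nc.Subscribe("generator_reply", func(msg *nats.Msg) {
		fmt.Printf("received: %s\n", string(msg.Data))
	})
	// ask for work without needing the generator to be up right now
	nc.Publish("generator", []byte(""))
	time.Sleep(time.Second) // give the asynchronous handlers a moment in this demo
}
```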
Head over to the 2023/day2-ops-code/README.md to see how to deploy day 2's application.
It sends messages to a queue and then waits for a response. The response is a message on a different queue. The message
contains the data that we are interested in.

BIN 2023/images/Day82-1.png (new binary file, 137 KiB; not shown)
BIN 2023/images/Day82-2.png (new binary file, 66 KiB; not shown)
BIN 2023/images/Day82-3.png (new binary file, 108 KiB; not shown)
BIN (new binary file, 34 KiB; filename not shown)