Merge pull request #369 from distributethe6ix/main

This commit is contained in:
Michael Cade 2023-03-21 16:29:49 +00:00 committed by GitHub
commit 841efcf554
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
5 changed files with 407 additions and 1 deletions


@@ -1,4 +1,4 @@
## 2 - Install and Test a Service Mesh
## Day 78 - Install and Test a Service Mesh
> **Tutorial**
> *Let's install our first service mesh.*


@@ -0,0 +1,69 @@
## Day 79 - Comparing Different Service Meshes
> **Informational**
> *A comparison of Istio, Linkerd, Consul, AWS App Mesh and Cilium*
### Service Mesh Offerings
There are PLENTY of service mesh offerings out there. Some are highly proprietary while others are very open.
Here are offerings you should definitely look into:
Service Mesh | Open Source or Proprietary | Notes |
---|---|---|
Istio | Open Source | Widely adopted and abstracted
Linkerd | Open Source | Built by Buoyant
Consul | Open Source | Owned by HashiCorp, cloud offering available
Kuma | Open Source | Maintained by Kong
Traefik Mesh | Open Source | Specialized proxy
Open Service Mesh | Open Source | By Microsoft
Gloo Mesh | Proprietary | Built by Solo.io on top of Istio
AWS App Mesh | Proprietary | AWS-specific services
OpenShift Service Mesh | Proprietary | Built by Red Hat, based on Istio
Tanzu Service Mesh | Proprietary | SaaS based on Istio, built by VMware
Anthos Service Mesh | Proprietary | SaaS based on Istio, built by Google
Buoyant Cloud | Proprietary | SaaS based on Linkerd
Cilium Service Mesh | Open Source | Originally a CNI
I'll quickly recap some of the key options I'll compare. This recap is taken from Day 1.
#### Istio
Istio is an open-source service mesh built by Google, IBM, and Lyft, and is actively developed and maintained by companies such as Solo.io. It is based on the Envoy proxy, which it deploys using the sidecar pattern. Istio offers a high degree of customization and extensibility with advanced traffic routing, observability, and security for microservices. A new sidecar-less mode of operation, called Ambient Mesh, was launched in 2022.
#### AppMesh
AppMesh is a service mesh implementation that is proprietary to AWS and focuses primarily on applications deployed to various AWS services such as ECS, EKS, and EC2. Its tight-knit integration into the AWS ecosystem allows for quick onboarding of services into the mesh.
#### Consul
Consul is a service mesh offering from HashiCorp that also provides traffic routing, observability, and security, much like Istio does.
#### Linkerd
Linkerd is a lightweight, open-source service mesh. Similar to Istio, it provides traffic management, observability, and security using a similar architecture. Linkerd adopts the sidecar pattern using a Rust-based proxy.
#### Cilium
Cilium is a Container Networking Interface (CNI) that leverages eBPF to optimize packet processing in the Linux kernel. It offers some Service Mesh capabilities and doesn't use the sidecar model; instead, it deploys a per-node instance of Envoy for any Layer 7 processing of requests.
### Comparison Table
Feature | Istio | Linkerd | AppMesh | Consul | Cilium |
---|---|---|---|---|---|
Current Version | 1.16.1 | 2.12 | N/A (it's AWS :D ) | 1.14.3 | 1.12
Project Creators | Google, Lyft, IBM, Solo | Buoyant | AWS | HashiCorp | Isovalent
Service Proxy | Envoy, Rust-Proxy (experimental) | Linkerd2-proxy | Envoy | Interchangeable, Envoy default | Per-node Envoy
Ingress Capabilities | Yes, via the Istio Ingress Gateway | No; BYO | Yes, via AWS | Envoy | Cilium-based Ingress
Traffic Management (Load Balancing, Traffic Split) | Yes | Yes | Yes | Yes | Yes, but manual Envoy config required for traffic splits
Resiliency Capabilities (Circuit Breaking, Retries/Timeouts, Faults, Delays) | Yes | Yes, no Circuit Breaking or Delays | Yes, no Faults or Delays | Yes, no Faults or Delays | Circuit Breaking, Retries and Timeouts require manual Envoy configuration; no other resiliency capabilities
Monitoring | Access Logs, Kiali, Jaeger/Zipkin, Grafana, Prometheus, LETS, OTEL | LETS, Prometheus, Grafana, OTEL | AWS X-Ray and CloudWatch provide these | Datadog, Jaeger, Zipkin, OpenTracing, OTEL, Honeycomb | Hubble, OTEL, Prometheus, Grafana
Security Capabilities (mTLS, External CA) | Yes | Yes | Yes | Yes | Yes, with WireGuard
Getting Started | Yes | Yes | Yes | Yes | Yes
Production Ready | Yes | Yes | Yes | Yes | Yes
Key Features | Sidecar and sidecar-less, Wasm extensibility, VM support, multi-cloud support, data plane extensions | Simplistic and non-invasive | Highly focused and tight integration into the AWS ecosystem | Tight integration into the Nomad and HashiCorp ecosystem | Usage of eBPF for enhanced packet processing, Cilium control plane used to manage the Service Mesh, no sidecars
Limitations | Complex, learning curve | Strictly K8s, additional config for BYO Ingress | Limited to just AWS services | Storage tied to Consul and not K8s | Not a complete Service Mesh, requires manual configuration
Protocol Support (TCP, HTTP 1.1 & 2, gRPC) | Yes | Yes | Yes | Yes | Yes
Sidecar Modes | Sidecar and sidecar-less | Sidecar | Sidecar | Sidecar | No sidecar
CNI Redirection | Istio CNI plugin | linkerd-cni | ProxyConfiguration required | Consul CNI | eBPF kernel processing
Platform Support | K8s and VMs | K8s | EC2, EKS, ECS, Fargate, K8s on EC2 | K8s, Nomad, ECS, Lambda, VMs | K8s, VMs, Nomad
Multi-cluster Mesh | Yes | Yes | Yes, AWS only | Yes | Yes
Governance and Oversight | Istio Community | Linkerd Community | AWS | HashiCorp | Cilium Community
### Conclusion
Service Meshes have come a long way in terms of capabilities and the environments they support. Istio appears to be the most feature-complete service mesh, providing a balance of platform support, customizability, and extensibility, and is the most production-ready. Linkerd trails right behind with a lighter-weight approach, and is mostly complete as a service mesh. AppMesh is mostly feature-filled but specific to the AWS ecosystem. Consul is a great contender to Istio and Linkerd. The Cilium CNI is taking the approach of using eBPF and climbing up the networking stack to address Service Mesh capabilities, but it has a lot of catching up to do.
See you on Day 4 of #70DaysOfServiceMesh!


@@ -0,0 +1,337 @@
## Day 80 - Traffic Engineering Basics
> **Tutorial**
> *Let's test out traffic routing and shifting*
### Reviewing Key Traffic Management Concepts
HEY YOU MADE IT THIS FAR! Let's keep going :smile:!!!
I'm going to review some of these concepts very briefly and if you'd like the expanded approach, check out #70DaysofServiceMesh
Traffic management is an important topic in the world of microservices communication because you have not one or two services, but thousands making requests to each other. In the world of physical networking, network devices can be used for flow control and packet routing, but because our networks have grown to accommodate microservices communication, manually creating the path for each service to connect does not scale well.
Kubernetes has done quite a lot to simplify networking for microservices through technologies like CNI, Ingress and, more recently, the Gateway API. There are other challenges around traffic routing that can be solved with custom-tailored solutions.
Some of the key areas to address with traffic management are:
- Request Routing
- Traffic Splitting
- Traffic Shifting
- Releasing (new versions of your app)
- Traffic mirroring
- Load-balancing
Traffic, or requests, will always enter the Service Mesh through some Ingress, such as the Istio Ingress Gateway. Once in the mesh, a request might need to make its way through multiple services before a final response is formed. Each of the microservices will have a sidecar to process the request and return a response. But we also need to know how each of these services gets to other services, and what to do when these inbound requests come in.
Client ---> Bookinfo ----> | ProductPage ---> Reviews ---> Ratings |
In the flow above, the client makes a request to Bookinfo (via a DNS name), which is then translated into a request towards the first service in the path, ProductPage, which then needs to elicit a response from Reviews, and Reviews from Ratings.
Let's explore the components that make this happen, briefly, and revisit these in the future.
#### Istio Ingress Gateway
As mentioned previously, the Istio Ingress Gateway is the entry point for requests coming into the mesh. The Istio Ingress Gateway is deployed with both a Deployment and a Service of type LoadBalancer. This is advantageous because you can create a Gateway resource that listens on certain ports (say, HTTP on port 80) for requests to hostnames like istio.io, and you can do this for multiple hosts. This is important because you can reuse the same gateway by creating multiple Gateway resources against it, which saves you from procuring a load balancer per service.
Interestingly enough, any service you expose through the Istio Ingress Gateway can keep its Service type as ClusterIP; we don't connect to the service directly, we go through the Ingress Gateway. This also adds a layer of security with TLS.
You configure an Istio Ingress Gateway resource, and then an associated Virtual Service to route to your services.
Istio's Ingress Gateway uses the Proxyv2 image, which is an Envoy proxy purpose-built for Istio.
The Gateway configuration we used previously...
```
cat istio-1.16.1/samples/bookinfo/networking/bookinfo-gateway.yaml
```
```
marinow@mwm1mbp networking % cat bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
```
The output shows us a few key pieces of information:
- The name of the Gateway resource
- The specific Istio Ingress gateway we use, using the label-selector mechanism
- The wildcard denoted by an asterisk specifies the host we are listening on, basically any host
- The port number, port 80
- The protocol which is HTTP
*Istio Ingress Gateway: I will listen for requests coming into any DNS hostname directed to port 80 using the HTTP protocol.*
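To show how a Virtual Service attaches to this gateway, here is a trimmed sketch of the kind of VirtualService that binds to it (this is the `bookinfo` entry you'll see in `kubectl get vs` later); it assumes the Bookinfo productpage service on port 9080 and abbreviates the matched paths:
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"                      # match any host arriving through the gateway
  gateways:
  - bookinfo-gateway         # bind this routing rule to the gateway above
  http:
  - match:
    - uri:
        exact: /productpage  # sketch: the full sample matches more paths than this
    route:
    - destination:
        host: productpage
        port:
          number: 9080       # Bookinfo's productpage port
```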
#### Sidecars
Sidecars are an important traffic management tool as they live right alongside each microservice and proxy requests on its behalf.
The sidecars behave in the same manner as the Istio Ingress Gateway, and will receive and process requests, and provide responses appropriately (as they are configured to). The sidecars also play a huge role with observability and security, which I'll explore later.
Istio's sidecar uses the Proxyv2 image, which is an Envoy proxy purpose-built for Istio.
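For context on how those sidecars actually get there: the usual approach is automatic injection, where Istio's mutating webhook adds the proxy container to any Pod created in a labelled namespace. A minimal sketch, assuming you want injection in the `default` namespace:
```
apiVersion: v1
kind: Namespace
metadata:
  name: default
  labels:
    istio-injection: enabled   # Istio's webhook injects the sidecar into new Pods in this namespace
```
Existing Pods only pick up the sidecar after they are recreated, for example via a rollout restart.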
#### Virtual Services
Virtual Services are the "how do we get there" set of rules for Istio. I view them as routing rules for requests: if a request comes in destined for a particular service, route it according to these rules.
Here is an example of a Virtual Service definition we'll be using for each of the microservices in our environment:
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
spec:
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
spec:
  hosts:
  - details
  http:
  - route:
    - destination:
        host: details
        subset: v1
---
```
There are 4 Virtual Service configurations present; each one's "host" field corresponds to the microservice and its Kubernetes Service resource.
The protocol, HTTP, is specified, along with the destination and a *subset*, which translates to a particular microservice via a label affixed to it. This is important for distinguishing multiple versions of the same resource. We also need another resource to help with that distinction: the Destination Rule.
#### Destination Rules
While Virtual Services point us to the service and host entries where our services live, Destination Rules provide a granular action list: what happens when a request arrives at this destination?
Destination Rules allow us to specify multiple versions of a service based on the back-end pods using Subsets. This is referenced by the Virtual Service resource to establish which available services can be routed to.
This might be useful for dark launches and canary releasing so you can split traffic to different versions.
Looking at the Destination Rule resource for the Reviews service, we can see the 3 different subsets for the 3 different versions. Notice that the labels actually correspond with the Deployment resources for each version of Reviews.
This is how we know how to route to each version. VS tells us *where*, DR tells us *how* and *what to do*.
```
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```
#### Service Entries
I'll dive into this later, but Service Entries provide a mechanism for internal services to know how to route to external services, like a Database, or Git repo, or Object storage, for example. This can all be controlled using Service Entries.
This can be used to control outbound requests as well, but that requires a bit of control-plane rework. More on this later.
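As a taste of what that looks like, here is a minimal ServiceEntry sketch that registers a hypothetical external API (api.example.com is a placeholder) so the sidecars know how to reach it:
```
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-api
spec:
  hosts:
  - api.example.com          # hypothetical external dependency
  location: MESH_EXTERNAL    # the workload lives outside the mesh
  resolution: DNS            # resolve the host via DNS
  ports:
  - number: 443
    name: https
    protocol: TLS
```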
### Setting up Request Routing
We are using the same environment from Day 2. Go back and review it if you have not set up Istio.
To set up request routing to Reviews-v1, we need to have a destination rule and virtual service configuration in place.
Let's apply them:
#### Destination Rule (make sure you are in the right directory)
```
cd istio-1.16.1
kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
```
#### Virtual Service
```
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
```
Let's verify that the resources were created:
```
kubectl get vs && kubectl get dr
```
AND THE RESULT
```
marinow@mwm1mbp istio-1.16.1 % kubectl get vs && kubectl get dr
NAME          GATEWAYS               HOSTS             AGE
bookinfo      ["bookinfo-gateway"]   ["*"]             4d15h
productpage                          ["productpage"]   14h
reviews                              ["reviews"]       14h
ratings                              ["ratings"]       14h
details                              ["details"]       14h
NAME          HOST          AGE
productpage   productpage   96s
reviews       reviews       96s
ratings       ratings       96s
details       details       96s
```
Now, if I head over to my browser (I have a localhost DNS entry), I can get to bookinfo.io/productpage. If I hit refresh a few times, only the **Reviews-v1** service is hit.
![ServiceMesh](images/Day80-1.png)
This is because I configured my virtual service resource to only route to **v1** of Reviews as seen in the configuration below.
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
---
```
Now I'll update the configuration to route to **v2** IF, and ONLY IF, I pass along a request header with the string "jason" as the end-user. Otherwise, my requests will continue to go to **v1**.
Before I update it, let's look at it:
```
cat samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
```
Notice the match field and what follows below. The first route field is indented because it sits under a *match* condition.
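Header matching isn't limited to `exact`, either: Istio's match conditions also accept `prefix` and `regex` string matches, and can match on the URI among other request attributes. A hedged sketch (the header value and URI pattern here are made up purely for illustration):
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:                   # entries in this list are OR'd together
    - headers:
        end-user:
          prefix: test-      # any end-user starting with "test-"
    - uri:
        regex: ^/beta/.*     # or any request path under /beta/
    route:
    - destination:
        host: reviews
        subset: v2
  - route:                   # everything else falls through to v1
    - destination:
        host: reviews
        subset: v1
```
That sketch is just for reference; we'll stick with the exact-match version from the sample file.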
Now I'll apply it:
```
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
```
And we can test by logging in via the website and entering jason as the user.
![ServiceMesh](images/Day80-2.png)
So now we know our Destination Rule works with our Virtual Service Configuration.
Let's shift some traffic!
### Setting up Traffic Shifting
To begin, we need to remove our previous virtual service configuration that routes using the *jason* header.
```
kubectl delete -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
```
Next, I'll quickly review the traffic-shifting we'll do.
```
cat samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 70
    - destination:
        host: reviews
        subset: v3
      weight: 30
```
Each destination points to a subset: v1 points to Reviews-v1, while v3 points to Reviews-v3. We can apply this reviews VirtualService resource, and it will split the traffic, with 70% of requests going to v1 and v3 receiving the remaining 30%.
Let's apply the config and test:
```
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-50-v3.yaml
```
Now we can test this using a curl command in a for-loop. The for-loop runs 10 times, making requests to the product page; I've used grep to narrow the output down to either v1 or v3 so we can witness the 70/30 split, with Reviews-v1 getting roughly 70% of the requests:
```
for i in {1..10}; do curl -s http://bookinfo.io/productpage | grep "reviews-v"; done
```
AND THE RESULT:
```
<u>reviews-v3-6dc9897554-8pgtx</u>
<u>reviews-v1-9c6bb6658-lvzsr</u>
<u>reviews-v1-9c6bb6658-lvzsr</u>
<u>reviews-v1-9c6bb6658-lvzsr</u>
<u>reviews-v3-6dc9897554-8pgtx</u>
<u>reviews-v3-6dc9897554-8pgtx</u>
<u>reviews-v1-9c6bb6658-lvzsr</u>
<u>reviews-v1-9c6bb6658-lvzsr</u>
<u>reviews-v1-9c6bb6658-lvzsr</u>
<u>reviews-v1-9c6bb6658-lvzsr</u>
```
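If the split looks healthy, the logical next step would be to shift everything to v3. I'm not applying it here, but a minimal sketch of that final state would look like this (with a single destination the weight can even be omitted):
```
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v3
      weight: 100            # all traffic now lands on Reviews-v3
```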
### Conclusion
Day 4 of #70DaysOfServiceMesh scratches the surface of traffic management capabilities. We'll explore more in future modules.
I briefly covered several traffic management components that allow requests to flow within a Service Mesh:
- Istio Ingress Gateway
- Sidecar
- Virtual Services
- Destination Rules
- Service Entries
And I got to show you all of this in action!
See you on Day 5 and beyond! :smile:!

BIN 2023/images/Day80-1.png (new binary file, 259 KiB)

BIN 2023/images/Day80-2.png (new binary file, 240 KiB)