What Is a Service Mesh?
A service mesh is an additional infrastructure layer added on top of an application’s services to manage and organize the communication between them.
To illustrate, take the example of a stock-checking system where users enter queries to determine product availability. Behind the scenes, several microservices are involved, such as those responsible for fetching, searching, and displaying the data. Now imagine a promotion that drives high demand, with many customers querying the system simultaneously. Without a mechanism to handle this load or apply throttling, both the database and the microservices would be strained. This is where a service mesh comes in to alleviate these challenges.
The Need for a Service Mesh
As we move from a monolithic application to a microservice-based infrastructure, the number of microservices grows, and so does the complexity of the communication between them. Managing this infrastructure effectively is where a service mesh becomes necessary.
Applications with low traffic or only a handful of microservices can handle inter-service communication within their own processes. As that communication becomes more complex, however, a dedicated layer such as a service mesh is required to handle it efficiently.
The Infrastructure of a Service Mesh
Sidecar Proxy
In a service mesh, all the communication-related logic is encapsulated in a proxy that runs alongside the actual service, which is why it is called a “sidecar proxy.” These sidecar proxies, one per service instance and separate from the service code, together form the mesh network.
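To make the idea concrete, here is a minimal Go sketch of what a sidecar does at its core: it sits in front of the service and forwards traffic to it. The port numbers and the plain HTTP reverse proxy are illustrative assumptions, not the implementation of any particular mesh, whose proxies (such as Envoy) do far more.

```go
// sidecar.go: a minimal sketch of a sidecar proxy. It runs next to the
// application and forwards inbound traffic to it; this is the point where a
// real mesh proxy would also apply mTLS, retries, and metrics.
// The port numbers are illustrative assumptions, not mesh defaults.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// The actual service runs next to the proxy, reachable on localhost.
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	// All inbound traffic enters through the sidecar instead of the app.
	log.Println("sidecar listening on :15001, forwarding to", app)
	log.Fatal(http.ListenAndServe(":15001", proxy))
}
```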
Data Plane
The data plane is the part of the service mesh that carries the actual network traffic and facilitates communication between service instances. The sidecar proxies collectively make up the data plane.
Control Plane
The control plane is the architectural component responsible for generating and distributing the configuration that controls the behavior of the data plane.
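As a rough illustration of that role, the sketch below serves a desired configuration that sidecars could fetch and apply. The Config fields, the port, and the /config endpoint are invented for illustration; real control planes such as Istio’s istiod push configuration to proxies through the xDS APIs.

```go
// controlplane.go: a toy sketch of the control plane's job, which is to turn
// operator intent into proxy configuration and hand it to the data plane.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Config is the behavior we want every sidecar proxy to apply.
// The fields are illustrative assumptions, not a real mesh schema.
type Config struct {
	Route      string `json:"route"`       // e.g. "stock-db-service"
	Retries    int    `json:"retries"`     // retry attempts per request
	TimeoutMS  int    `json:"timeout_ms"`  // per-attempt timeout
	MTLSStrict bool   `json:"mtls_strict"` // require mutual TLS
}

func main() {
	desired := Config{Route: "stock-db-service", Retries: 3, TimeoutMS: 500, MTLSStrict: true}

	// Sidecar proxies fetch (or are pushed) this configuration and apply it.
	http.HandleFunc("/config", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(desired)
	})
	log.Fatal(http.ListenAndServe(":15010", nil))
}
```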
Service Mesh Features
Load Balancing
A service mesh offers layer 7 (application-level) load balancing, distributing requests efficiently across multiple instances of a service.
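The sketch below shows the essence of layer 7 load balancing: the proxy chooses a backend per HTTP request rather than per connection. The backend addresses and the round-robin policy are assumptions for illustration; meshes typically offer several balancing policies.

```go
// lb.go: a minimal sketch of layer 7 load balancing at the proxy.
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

func main() {
	// Instances of the same service; the addresses are illustrative assumptions.
	addrs := []string{"http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"}

	proxies := make([]*httputil.ReverseProxy, 0, len(addrs))
	for _, a := range addrs {
		u, err := url.Parse(a)
		if err != nil {
			log.Fatal(err)
		}
		proxies = append(proxies, httputil.NewSingleHostReverseProxy(u))
	}

	var next uint64
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Layer 7: the choice happens per HTTP request, not per TCP connection.
		i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
		proxies[i].ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":15001", handler))
}
```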
Encryption
A service mesh can encrypt and decrypt requests and responses on behalf of the services, typically using mutual TLS, which offloads that work from the services and improves security. It also improves performance by reusing existing connections instead of establishing new ones, avoiding the cost of repeated handshakes.
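As a sketch of what the sidecar handles on the service’s behalf, the following Go client initiates TLS with a workload certificate and keeps idle connections open so repeated requests reuse the already-encrypted connection instead of paying for a new handshake. The certificate paths and host name are illustrative assumptions; in a real mesh the certificates are provisioned and rotated automatically.

```go
// tlsclient.go: a sketch of outbound encryption plus connection reuse,
// done by the proxy so the application itself can stay plain HTTP.
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"time"
)

func main() {
	// Client certificate identifies this workload to the peer's sidecar.
	// Paths are illustrative assumptions.
	cert, err := tls.LoadX509KeyPair("/etc/mesh/certs/workload.crt", "/etc/mesh/certs/workload.key")
	if err != nil {
		log.Fatal(err)
	}

	transport := &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}},
		// Keep idle connections around so repeated requests skip the
		// TLS handshake and reuse the already-encrypted connection.
		MaxIdleConnsPerHost: 16,
		IdleConnTimeout:     90 * time.Second,
	}
	client := &http.Client{Transport: transport, Timeout: 2 * time.Second}

	resp, err := client.Get("https://stock-db-service:8443/items/42")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println("status:", resp.Status)
}
```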
Authentication and Authorization
A service mesh can authenticate and authorize requests, both from inside and outside the mesh, ensuring that only valid requests reach the services.
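A rough sketch of this check at the proxy follows, assuming a toy token-to-identity lookup in place of the mTLS- or JWT-derived identity a real mesh would use; the token values and service names are invented for illustration.

```go
// authz.go: a sketch of authentication and authorization at the sidecar,
// so the service only ever sees requests that already passed the checks.
package main

import (
	"log"
	"net/http"
)

// callers maps bearer tokens to workload identities (toy stand-in for mTLS/JWT).
var callers = map[string]string{"token-ui": "stock-ui-service"}

// allowed lists which identities may call this service.
var allowed = map[string]bool{"stock-ui-service": true}

func authnAuthz(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		identity, ok := callers[r.Header.Get("Authorization")]
		if !ok {
			http.Error(w, "unauthenticated", http.StatusUnauthorized)
			return
		}
		if !allowed[identity] {
			http.Error(w, "forbidden", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r) // valid request: hand it to the service
	})
}

func main() {
	service := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("stock data\n"))
	})
	log.Fatal(http.ListenAndServe(":15001", authnAuthz(service)))
}
```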
Supports the Circuit Breaker Pattern
When an instance becomes unhealthy, the service mesh can isolate it from the pool of healthy instances, give it time to recover, and then add it back to the pool.
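A minimal circuit-breaker sketch follows: after a run of failures the caller stops sending traffic to the instance for a cooldown period, then lets a trial request through before treating it as healthy again. The threshold and cooldown values are illustrative assumptions.

```go
// breaker.go: a minimal circuit breaker sketch.
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var ErrOpen = errors.New("circuit open: instance isolated")

type Breaker struct {
	mu        sync.Mutex
	failures  int
	openUntil time.Time
	threshold int           // consecutive failures before opening
	cooldown  time.Duration // how long the instance stays isolated
}

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if time.Now().Before(b.openUntil) {
		b.mu.Unlock()
		return ErrOpen // instance is currently isolated from the pool
	}
	b.mu.Unlock()

	err := fn() // the actual request to the instance

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.threshold {
			b.openUntil = time.Now().Add(b.cooldown) // isolate the instance
			b.failures = 0
		}
		return err
	}
	b.failures = 0 // a success puts the instance back in rotation
	return nil
}

func main() {
	b := &Breaker{threshold: 3, cooldown: 5 * time.Second}
	for i := 0; i < 5; i++ {
		err := b.Call(func() error { return errors.New("backend timeout") })
		fmt.Println("attempt", i, "->", err)
	}
}
```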
Observability
A service mesh offers insight into the health and behavior of services. The control plane can extract valuable data such as service health, traffic volume, latency, and access logs.
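One way to picture this: the sketch below wraps a handler so that every request is timed and produces an access-log line, without the service itself changing. The log format is an assumption; real meshes expose such data as metrics and traces consumed by the control plane and monitoring tools.

```go
// observe.go: a sketch of telemetry collected at the sidecar.
package main

import (
	"log"
	"net/http"
	"time"
)

// statusRecorder captures the status code the handler writes.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

func observe(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, r)
		// One access-log line per request: method, path, status, latency.
		log.Printf("access method=%s path=%s status=%d latency=%s",
			r.Method, r.URL.Path, rec.status, time.Since(start))
	})
}

func main() {
	service := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok\n"))
	})
	log.Fatal(http.ListenAndServe(":15001", observe(service)))
}
```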
How It Works
The configuration generated by the control plane is distributed to the data plane, that is, to the sidecar proxies that handle communication between the services.
- Using the stock-checking example and the infrastructure described above: when a request is made from one service to another (say, from the user-interface service to the service that handles the database), the service mesh first validates the request. It then checks whether the target service is available, that is, whether any instances of it are running.
- If the service is available, the request is passed through.
- If the service is not available, the service mesh will retry the request after a specific time interval.
If a specific instance is being throttled, the service mesh load-balances requests away from it or applies retry logic to reduce failures, as sketched below.
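Here is a sketch of that retry behavior, assuming a hypothetical stock-db-service endpoint and illustrative attempt counts and backoff values; a real mesh applies this in the proxy, outside the application code.

```go
// retry.go: a sketch of retrying a failed or unavailable upstream call.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func getWithRetry(url string, attempts int, wait time.Duration) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // request passed through successfully
		}
		if err == nil {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		} else {
			lastErr = err // service unreachable: no healthy instance answered
		}
		time.Sleep(wait)
		wait *= 2 // back off before the next retry
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	resp, err := getWithRetry("http://stock-db-service:8080/items/42", 3, 200*time.Millisecond)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```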
How a Service Mesh Enhances Application Performance
Requests to microservices are routed through the proxies instead of going directly to the microservices themselves. Network connectivity issues therefore surface at the proxies, where they can be detected and addressed, while the microservices stay untouched and operational. As a result, developers can focus on improving the microservices and application functionality rather than writing communication and request-handling code.
A service mesh also collects performance metrics from service-to-service communication. Over time, this data can be used to establish new traffic rules and optimize inter-service communication, leading to a more efficient and reliable infrastructure for service requests.
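As a small illustration of turning collected data into a rule, the sketch below computes a high percentile of observed latencies and derives a per-request timeout from it. The sample values and the 1.5x headroom factor are assumptions, not a prescription from any mesh.

```go
// tune.go: a sketch of deriving a new traffic rule from collected telemetry.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns an approximate p-th percentile (0-100) of the samples.
func percentile(samples []time.Duration, p float64) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(float64(len(sorted)-1) * p / 100.0)
	return sorted[idx]
}

func main() {
	// Latencies the mesh observed for a service over some window (illustrative).
	observed := []time.Duration{
		40 * time.Millisecond, 55 * time.Millisecond, 48 * time.Millisecond,
		62 * time.Millisecond, 300 * time.Millisecond, 51 * time.Millisecond,
	}

	p99 := percentile(observed, 99)
	timeout := time.Duration(float64(p99) * 1.5) // leave headroom above the tail

	fmt.Println("observed p99:", p99)
	fmt.Println("proposed per-request timeout rule:", timeout)
}
```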
Here is a list of some service mesh products and services available in the market:
- Istio
- Tetrate
- VMware Tanzu Service Mesh
- F5 Nginx Service Mesh
- Google Anthos Service Mesh
Conclusion
In conclusion, a service mesh is an additional infrastructure layer that enables efficient communication between the services of a microservice-based application. It offers features such as load balancing, encryption, authentication and authorization, circuit breaking, and observability. By routing traffic through proxies, a service mesh keeps microservices reliable while developers focus on improving the application. Popular service mesh products include Istio, Tetrate, VMware Tanzu Service Mesh, F5 Nginx Service Mesh, and Google Anthos Service Mesh.