EN_Net_Services

somaz edited this page Mar 30, 2026 · 1 revision

Network: Services & Proxies

8. Service Mesh vs API Gateway

Service Mesh

Service Mesh is an infrastructure layer for managing communication between services in distributed applications. In a microservices architecture, an application is composed of several small services that communicate with each other to form a cohesive application. Since this inter-service communication happens over a network, an infrastructure to manage it is necessary.

Service Mesh abstracts and manages the communication between these services, essentially providing the network infrastructure for inter-service communication. It offers functionalities like distributed tracing, security, logging, and load balancing, which help handle communication between services safely and efficiently.

Implementation of Service Mesh often uses the sidecar pattern, deploying a sidecar container on each service instance and managing communication through it. This container typically consists of a proxy or agent provided by the Service Mesh solution. Popular Service Mesh solutions in Kubernetes environments include Istio and Linkerd.

Key Features of Service Mesh
  • Distributed Tracing: Facilitates rapid response to issues by tracing and analyzing communication between services.
  • Security: Enhances security through traffic encryption, authentication, authorization, and access control.
  • Logging: Logs details of inter-service communication for troubleshooting.
  • Load Balancing: Distributes traffic among multiple service instances, ensuring stable service delivery.

Major Service Mesh solutions include Istio, Linkerd, and Consul, which simplify the implementation and operation of a Service Mesh.
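The sidecar pattern described above can be sketched in plain Python. This is a hypothetical illustration, not how Istio or Linkerd are implemented (they deploy a real proxy such as Envoy in a separate container): the `OrderService` and `Sidecar` names are made up for the example.

```python
import time
import uuid

class OrderService:
    """Plain business logic; knows nothing about the mesh."""
    def handle(self, request: dict) -> dict:
        return {"status": "ok", "order_id": request["order_id"]}

class Sidecar:
    """Intercepts traffic to one service instance, adding tracing and
    logging so the service code stays free of networking concerns."""
    def __init__(self, service):
        self.service = service
        self.log = []

    def call(self, request: dict) -> dict:
        # Attach a trace ID if the caller did not propagate one.
        trace_id = request.setdefault("trace_id", str(uuid.uuid4()))
        start = time.monotonic()
        response = self.service.handle(request)      # forward to the app
        elapsed = time.monotonic() - start
        self.log.append((trace_id, elapsed))         # distributed-tracing data
        return response

proxy = Sidecar(OrderService())
result = proxy.call({"order_id": 42})
print(result["status"])   # ok
```

The point of the pattern is visible here: tracing and logging live entirely in the sidecar, so every service in the mesh gets them without changing its own code.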

API Gateway

API Gateway serves as a single point of entry in a microservices architecture, exposing multiple backend services as one API. When clients send HTTP requests to the API Gateway, it invokes internal services, processes the results, and returns them to the clients.

Main Features of API Gateway
  • Authentication and Authorization: Verifies client requests and performs user authentication and authorization checks, ensuring secure API access.
  • Load Balancing: Distributes requests across multiple services, improving service availability and load distribution.
  • Caching: Utilizes cache for repetitive requests, reducing the load on backend services.
  • Logging and Monitoring: Records and monitors client requests and service responses, enabling quick response to issues.
  • API Management: Manages APIs and their versions, allowing deployment of new API versions or removal of old ones.

API Gateway is an essential component in microservices architecture, offering various functionalities and flexibility for efficient application management and operation.

Differences between Service Mesh and API Gateway

API Gateway and Service Mesh are both tools for managing communication in distributed applications but differ in their purpose and methods of implementation.

API Gateway acts as a server that mediates communication between clients and backend services. Clients send requests to the backend services via the API Gateway, which performs necessary authentication, authorization, and logging before forwarding these requests. API Gateway provides a unified entry point, simplifying client-to-backend communication and enhancing security and monitoring.

Conversely, Service Mesh manages communication between services within distributed applications. Each service instance has a sidecar container to facilitate inter-service communication. Service Mesh offers functionalities like distributed tracing, security, logging, and load balancing to ensure safe and efficient communication between services.

Thus, API Gateway mediates communication between clients and backend services, managing and protecting externally accessed services. In contrast, Service Mesh handles communication within distributed applications, safeguarding and stabilizing the operation of these applications.


9. What is Reverse Proxy?

Reverse Proxy is a type of server that sits in front of a web server and forwards client (e.g. web browser) requests to the web server. The main difference between reverse proxy and forward proxy is the direction of service. A forward proxy acts as a gateway between users and the vast resources of the Internet, while a reverse proxy acts as a gateway between the Internet and a smaller group of servers.

Features of Reverse Proxy

  • Load Balancing: Improves speed and stability by distributing client requests across multiple servers.
  • Global Server Load Balancing: This involves improving user response time by routing client requests to the closest server based on the client's geographic location.
  • SSL Encryption: Centralizes SSL certificate management in one place instead of managing it on individual servers.
  • Caching Content: Reduces origin server load by serving cached copies of static and dynamic content directly from the proxy.
  • Compression: Compresses the file before sending it to the client to reduce response bandwidth.
  • Security and Anonymity: Provides security by hiding the characteristics and origin of the backend server.

Example reverse proxy tool: Nginx

In addition to being used as a web server, load balancer, HTTP cache, and mail proxy, Nginx is one of the most popular reverse proxy servers in the world. It is well known for its high performance, stability, rich feature set, simple configuration, and low resource consumption.

http {
    upstream backend {
        server backend1.example.com weight=5;
        server backend2.example.com;
        server backend3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

  • upstream backend defines a group of backend servers with different weights (when a weight is not specified, it defaults to 1). This lets Nginx distribute load across the servers, sending proportionally more traffic to servers with higher weights.
  • The proxy_pass directive passes the request to the upstream server group.
  • The proxy_set_header directive modifies the request headers passed to the backend server. This may include passing the actual client IP address and other details to the backend.
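The effect of those weights can be sketched in Python. With weights 5, 1, and 1, backend1 receives five requests for every one sent to each of the others. This naive version simply expands the weights into a repeating cycle; nginx itself uses a smoother weighted round-robin algorithm, so this only models the long-run proportions, not the exact request order.

```python
import itertools

servers = [("backend1.example.com", 5),
           ("backend2.example.com", 1),
           ("backend3.example.com", 1)]

# Build one "round" containing each server repeated by its weight,
# then cycle through it forever.
rotation = itertools.cycle(
    [host for host, weight in servers for _ in range(weight)]
)

# One full round is 5 + 1 + 1 = 7 requests.
first_round = [next(rotation) for _ in range(7)]
print(first_round.count("backend1.example.com"))   # 5
print(first_round.count("backend2.example.com"))   # 1
```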
