Envoy Proxy is a high-performance, extensible, and widely adopted edge and service proxy. As modern server software, Envoy plays a pivotal role in managing network traffic, enhancing web performance, and bolstering security. It offers numerous benefits such as improved service-to-service communication, advanced load balancing, and extensive observability features.
In this article, we will talk about what Envoy Proxy is, how it operates, its key features, and the advantages it brings to the table. Understanding Envoy is essential for web server administrators and webmasters who aim to optimize their network infrastructure and improve overall web server performance.
Let’s dive in!
Key Takeaways
- Envoy Proxy is a modern, high-performance edge and service proxy designed for cloud-native applications. It excels in microservices architectures, as an edge proxy, within service mesh implementations, and as an API gateway.
- Envoy Proxy is designed for high performance and scalability, handling a high volume of network traffic with minimal resource usage. Its dynamic configuration APIs allow it to adapt to changes in real-time, making it ideal for dynamic environments.
- Envoy Proxy provides a range of security features, including support for Transport Layer Security (TLS) and Mutual TLS (mTLS), integration with external authentication services, and rate limiting features to protect against denial-of-service attacks.
- While Envoy, NGINX, and HAProxy all provide powerful capabilities for managing network traffic, each has its strengths and weaknesses. The best choice depends on your specific needs and the architecture of your system.
- Envoy Proxy can be installed in a few steps, for example by running the official Docker image, and is configured through a YAML file that can be customized to your network and workloads, giving you a high degree of flexibility and powerful capabilities.
What is Envoy Proxy?
Envoy Proxy is a high-performance, open-source edge and service proxy designed for cloud-native applications. It was originally built by Lyft to handle their microservices architecture and has since been adopted by many other organizations and projects. Envoy is often used as a component in service mesh architectures, such as Istio and Consul, but it can also be used standalone.
Envoy Proxy operates at the network level and provides a unified platform to manage, control, and monitor network traffic between services. It supports a wide range of features including advanced load balancing, service discovery, health checking, and more. It is also protocol agnostic, meaning it can handle any type of network traffic, though it has advanced features for HTTP/2, HTTP/3 and gRPC.
Brief History and Development of Envoy Proxy
Envoy Proxy was developed and open-sourced by the ride-sharing company Lyft in 2016. The goal was to create a robust and modern proxy that could handle the complexities of Lyft’s growing microservices architecture. Matt Klein, a software engineer at Lyft, was the primary developer behind Envoy.
Since its release, Envoy has gained significant popularity in the cloud-native ecosystem. It was accepted into the Cloud Native Computing Foundation (CNCF) as an incubating project in 2017 and graduated to a top-level project in 2018, indicating its wide adoption and healthy community.
The development of Envoy has been driven by the needs of modern, distributed systems. It has been designed from the ground up to handle the performance and reliability requirements of large-scale, microservice-based architectures. Today, it is used by many large tech companies including Google, Airbnb, and Dropbox, among others.
Key Features of Envoy Proxy
Envoy Proxy is packed with a range of features that make it a standout choice for managing network traffic in modern, distributed systems. Here are some of the key features:
- Advanced Load Balancing: Envoy supports a variety of load balancing strategies, including round robin, least request, and random. It also supports more advanced features like zone-aware load balancing, canary releases, and traffic shifting (see the cluster sketch after this list).
- HTTP/2 and gRPC Support: Envoy is designed with first-class support for HTTP/2 and gRPC, which are increasingly used in modern applications. This includes features like HTTP/2 gRPC bridging, which allows Envoy to translate between the protocols.
- HTTP/3 Support: Since version 1.19.0, Envoy supports HTTP/3 both downstream and upstream (initially as an alpha feature) and can translate between any combination of HTTP/1.1, HTTP/2, and HTTP/3 in either direction.
- Observability: Envoy provides detailed metrics and logging, and it integrates with popular monitoring tools like Prometheus and Datadog. It also supports distributed tracing, which can be crucial for debugging in microservices architectures.
- Dynamic Configuration: Envoy’s configuration can be dynamically updated via an API, without the need to restart the proxy. This is particularly useful in dynamic environments where services are frequently scaling up and down or moving around.
- Resilience and Health Checks: Envoy includes features to improve the resilience of your system, such as circuit breakers, retries, and rate limiting. It can also actively check the health of your services and react accordingly.
- Security: Envoy supports TLS and mTLS for secure communication between services. It also integrates with external authentication services.
- Designed for Microservices: Unlike many traditional proxies, Envoy was designed from the ground up for microservices architectures. It understands the complexities and challenges of these environments and provides features to address them.
- Built-In Observability: Envoy’s built-in observability features are more extensive than many other proxies. This can reduce the need for additional monitoring tools and make it easier to understand what’s happening in your system.
- Active Health Checking: Unlike some other proxies, Envoy can actively check the health of your services and adjust its routing accordingly. This can improve the reliability of your system.
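To make the load-balancing options above concrete, here is a minimal sketch of a static cluster that uses the least-request policy. It would sit under the static_resources section of a full configuration (like the one shown later in this article); the cluster name, hostnames, and port are placeholders.

```yaml
# Hypothetical cluster definition illustrating Envoy's load-balancing settings.
clusters:
- name: product_catalog            # placeholder cluster name
  connect_timeout: 1s
  type: STRICT_DNS                 # resolve hostnames via DNS and balance across the results
  lb_policy: LEAST_REQUEST         # alternatives include ROUND_ROBIN and RANDOM
  load_assignment:
    cluster_name: product_catalog
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: catalog-1.internal, port_value: 8080 }
      - endpoint:
          address:
            socket_address: { address: catalog-2.internal, port_value: 8080 }
```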
How Does Envoy Proxy Work?
At its core, Envoy Proxy operates as a communication bus and a universal data plane designed for large microservice mesh architectures. The architecture of Envoy is based on an extensible platform that provides a set of APIs for adding filters to the proxy. These filters can be used to customize the behavior of Envoy, allowing it to handle different types of network traffic, perform various tasks, and integrate with other systems. Envoy’s architecture is modular and allows for a high degree of flexibility and customization, making it adaptable to a wide range of use cases.
Envoy has been designed with a strong focus on HTTP/2 and gRPC, two protocols that are becoming increasingly important in modern web development. HTTP/2 is a major revision of the HTTP protocol that provides significant performance improvements, while gRPC is a high-performance, open-source RPC framework that runs on top of HTTP/2. Because Envoy terminates connections on both sides, it can accept HTTP/1.1 from clients and speak HTTP/2 (including gRPC) to upstream services, and vice versa. This means that even if parts of your system still use HTTP/1.1, the hops that pass through Envoy can benefit from the performance improvements of these newer protocols.
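As a sketch of how this protocol translation is expressed in configuration, the hypothetical cluster below forces HTTP/2 on connections to the upstream service (which is also what gRPC requires), regardless of the protocol the client used to reach Envoy. The cluster name, hostname, and port are placeholders.

```yaml
# Hypothetical cluster that always speaks HTTP/2 to its upstream endpoints.
clusters:
- name: grpc_backend
  connect_timeout: 1s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http2_protocol_options: {}   # use HTTP/2 upstream even if the client sent HTTP/1.1
  load_assignment:
    cluster_name: grpc_backend
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: grpc.internal, port_value: 50051 }
```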
HTTP/3 is the next major version of the HTTP protocol and is supported by Envoy Proxy. It’s designed to improve upon the performance characteristics of HTTP/2 by changing the transport layer from TCP to QUIC, a UDP-based protocol that provides built-in support for features like connection multiplexing and stream prioritization. Because QUIC runs over UDP, enabling HTTP/3 downstream requires a UDP listener configured for QUIC, typically run alongside a TCP listener that continues to serve HTTP/1.1 and HTTP/2. While HTTP/3 support in Envoy is deemed ready for production use in controlled environments, it’s still under active development for broader internet use. Envoy can also advertise HTTP/3 to clients (for example via the Alt-Svc mechanism), so connections can move to HTTP/3 when possible and applications can benefit from its performance improvements without being rewritten to support the new protocol.
Service discovery and load balancing are other key aspects of Envoy’s operation. In a microservices architecture, services are often dynamically scaled and moved around, which can make it difficult to know where to send requests. Envoy solves this problem by integrating with service discovery systems, which keep track of where services are running. Once Envoy knows where the services are, it can use its advanced load balancing features to distribute requests between them. This includes standard techniques like round-robin and least-requests, as well as more advanced strategies like weighted distribution and traffic shifting for canary releases. This combination of service discovery and load balancing allows Envoy to effectively manage network traffic in dynamic, distributed environments.
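As a rough illustration of traffic shifting, the hypothetical route below splits requests between two versions of a service by weight, which is the basic building block of a canary release. It would sit inside a virtual host’s routes list; the cluster names and percentages are examples only.

```yaml
# Hypothetical route entry sending 90% of traffic to the stable version and 10% to a canary.
routes:
- match: { prefix: "/" }
  route:
    weighted_clusters:
      clusters:
      - name: checkout_v1
        weight: 90    # stable version keeps most of the traffic
      - name: checkout_v2
        weight: 10    # canary receives a small share
```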
Use Cases for Envoy Proxy
Envoy Proxy is a versatile tool that can be used in a variety of scenarios to manage network traffic, enhance performance, and improve security. Here are some examples of where Envoy Proxy is particularly useful:
Microservices
In a microservices architecture, an application is broken down into a collection of loosely coupled services. Each service is a small, independent unit that performs a specific function. Envoy Proxy is particularly useful in such environments. For instance, consider an e-commerce application composed of services like User Management, Product Catalog, Shopping Cart, and Payment Processing. These services need to communicate with each other over the network, and that’s where Envoy comes in. It can handle service discovery, ensuring that each service knows where to find the others. It can also manage load balancing, distributing network traffic to prevent any single service from becoming a bottleneck. Additionally, Envoy’s capabilities like rate limiting (controlling the number of requests a service can handle) and circuit breaking (preventing network congestion during service failures) contribute to the robustness and resilience of the microservices ecosystem.
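To make the circuit-breaking idea more concrete, a cluster can declare thresholds that cap how much outstanding work Envoy will send it; once a threshold is hit, additional requests fail fast instead of piling up behind a struggling service. The service name, address, and limits below are illustrative, not recommendations.

```yaml
# Hypothetical cluster with circuit-breaker thresholds.
clusters:
- name: payment_service
  connect_timeout: 1s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  circuit_breakers:
    thresholds:
    - priority: DEFAULT
      max_connections: 100       # concurrent upstream connections
      max_pending_requests: 50   # requests queued while waiting for a connection
      max_requests: 200          # concurrent requests (relevant for HTTP/2)
      max_retries: 3             # concurrent retries
  load_assignment:
    cluster_name: payment_service
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: payments.internal, port_value: 8443 }
```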
Edge Proxy
As an edge proxy, Envoy acts as the gateway between your network and the broader internet, managing all incoming traffic. For example, in a web application, all requests from users’ browsers would first hit Envoy. Envoy can then handle tasks like TLS termination, which involves decrypting incoming requests and encrypting outgoing responses, offloading this computational load from the application servers. It can also manage authentication, ensuring that only authorized requests reach your services. Additionally, Envoy’s rate limiting capabilities can protect your services from being overwhelmed by too many requests at once.
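Here is a minimal sketch of TLS termination on an edge listener, assuming a certificate and private key already exist at the paths shown; the paths, port, and cluster name are placeholders. Traffic is decrypted at the listener and forwarded as plain HTTP to the configured cluster.

```yaml
# Hypothetical edge listener that terminates TLS before routing to the backends.
listeners:
- name: https_listener
  address:
    socket_address: { address: 0.0.0.0, port_value: 443 }
  filter_chains:
  - transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
        common_tls_context:
          tls_certificates:
          - certificate_chain: { filename: /etc/envoy/certs/server.crt }
            private_key: { filename: /etc/envoy/certs/server.key }
    filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: edge_https
        route_config:
          name: edge_routes
          virtual_hosts:
          - name: all_domains
            domains: ["*"]
            routes:
            - match: { prefix: "/" }
              route: { cluster: service_cluster }
        http_filters:
        - name: envoy.filters.http.router
```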
Service Mesh
In a service mesh architecture, each service in the application is paired with an instance of Envoy, forming a “sidecar” proxy. For example, in a cloud-native application running on Kubernetes, each service’s pod would include both the service itself and an Envoy sidecar. The sidecar intercepts all inbound and outbound network traffic, allowing the service to offload many networking concerns to Envoy. This includes service discovery, ensuring the service knows where to send its outbound requests. It also includes retries and timeouts, adding resilience to the service by automatically retrying failed requests and preventing requests from hanging indefinitely.
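The route fragment below sketches how retries and timeouts are typically expressed. In a real service mesh these values are usually pushed to the sidecar by the control plane rather than written by hand; the cluster name, timeout values, and retry conditions are illustrative.

```yaml
# Hypothetical route with an overall timeout and an automatic retry policy.
routes:
- match: { prefix: "/" }
  route:
    cluster: inventory_service
    timeout: 2s                    # upper bound for the whole request, including retries
    retry_policy:
      retry_on: "5xx,connect-failure,reset"
      num_retries: 3
      per_try_timeout: 0.5s        # each individual attempt must finish within this time
```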
API Gateway
As an API gateway, Envoy serves as the single point of entry for all external API traffic. For example, in a Software-as-a-Service (SaaS) application, third-party developers might interact with your API to build their own applications. All their requests would first go through Envoy. Envoy can then handle tasks like rate limiting, ensuring that no developer can overwhelm your services with too many requests. It can also manage authentication, ensuring that only authorized developers can access your API. Additionally, Envoy can handle API versioning, routing requests to the correct version of your services based on the API version specified in the request.
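Below is a sketch of version-based routing at the gateway: requests are matched on their path prefix and sent to a different cluster per API version. The domain, prefixes, and cluster names are placeholders; this fragment would sit inside a route configuration.

```yaml
# Hypothetical virtual host routing API versions to separate clusters.
virtual_hosts:
- name: public_api
  domains: ["api.example.com"]
  routes:
  - match: { prefix: "/v2/" }
    route: { cluster: api_v2 }
  - match: { prefix: "/v1/" }
    route: { cluster: api_v1 }
  - match: { prefix: "/" }         # unversioned requests fall back to the current version
    route: { cluster: api_v2 }
```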
Performance and Scalability of Envoy Proxy
Envoy Proxy is designed for high performance and scalability, making it an excellent choice for large, distributed systems. It’s built in C++, allowing it to handle a high volume of network traffic with minimal resource usage. Envoy’s architecture is event-driven, which means it can handle thousands of concurrent connections on a single thread. This makes it highly efficient, as it can serve many requests without the need for a large number of threads or processes.
In terms of scalability, Envoy shines in dynamic environments where services are frequently scaling up and down. Its dynamic configuration APIs allow it to adapt to changes in real-time, without the need for restarts. This is particularly useful in a microservices architecture or a service mesh, where the number and location of services can change frequently. Envoy’s load balancing features also contribute to its scalability, as they allow it to distribute traffic evenly across services, preventing any single service from becoming a bottleneck.
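To give a sense of what dynamic configuration looks like, the hypothetical bootstrap fragment below tells Envoy to fetch listeners and clusters from a management server over gRPC using the aggregated xDS (ADS) stream. The xds_cluster name and the control-plane address are placeholders for whatever control plane you run.

```yaml
# Hypothetical bootstrap: listeners and clusters are delivered by an xDS control plane.
dynamic_resources:
  ads_config:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc: { cluster_name: xds_cluster }
  lds_config: { ads: {} }          # listeners come from the ADS stream
  cds_config: { ads: {} }          # clusters come from the ADS stream

static_resources:
  clusters:
  - name: xds_cluster              # bootstrap cluster pointing at the management server
    connect_timeout: 1s
    type: STRICT_DNS
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}   # xDS is served over gRPC, which requires HTTP/2
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: control-plane.internal, port_value: 18000 }
```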
Security Features of Envoy Proxy
Security is a critical concern in network communication, and Envoy Proxy provides a range of features to help secure your system. One of the key security features of Envoy is its support for Transport Layer Security (TLS) and Mutual TLS (mTLS). TLS encrypts the communication between services, preventing eavesdropping and tampering. mTLS goes a step further by requiring both the client and the server to authenticate each other, providing an additional layer of security.
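As a sketch of the server side of mTLS, the downstream TLS context below presents a server certificate and additionally requires clients to present a certificate signed by a trusted CA; the file paths are placeholders. This block attaches to a listener’s filter chain as its transport_socket.

```yaml
# Hypothetical downstream TLS context enforcing mutual TLS.
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
    require_client_certificate: true               # reject clients without a certificate
    common_tls_context:
      tls_certificates:
      - certificate_chain: { filename: /etc/envoy/certs/server.crt }
        private_key: { filename: /etc/envoy/certs/server.key }
      validation_context:
        trusted_ca: { filename: /etc/envoy/certs/ca.crt }   # CA used to verify client certificates
```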
Envoy also integrates with external authentication services, allowing you to implement custom authentication logic. This can be used to ensure that only authorized clients can access your services. Additionally, Envoy’s rate limiting features can help protect your services from denial-of-service (DoS) attacks by limiting the number of requests a client can make in a certain period.
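One concrete form of rate limiting is Envoy’s local rate limit HTTP filter, which is backed by a token bucket. The sketch below allows roughly 100 requests per second per Envoy instance; the numbers are arbitrary, and the filter sits in the http_filters chain ahead of the router.

```yaml
# Hypothetical local rate limit filter: a bucket of 100 tokens refilled once per second.
http_filters:
- name: envoy.filters.http.local_ratelimit
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
    stat_prefix: http_local_rate_limiter
    token_bucket:
      max_tokens: 100
      tokens_per_fill: 100
      fill_interval: 1s
    filter_enabled:
      runtime_key: local_rate_limit_enabled
      default_value: { numerator: 100, denominator: HUNDRED }   # apply to every request
    filter_enforced:
      runtime_key: local_rate_limit_enforced
      default_value: { numerator: 100, denominator: HUNDRED }   # enforce, not just report
- name: envoy.filters.http.router
```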
Furthermore, Envoy has built-in support for the JWT (JSON Web Token) and OAuth2 protocols, which are widely used for securing APIs. These features, combined with Envoy’s robust performance and scalability, make it a powerful tool for secure network communication.
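For JWT specifically, Envoy’s jwt_authn HTTP filter can validate tokens against a provider’s JSON Web Key Set. In the sketch below, the issuer, audience, and JWKS URL are placeholders, and jwks_cluster would have to be defined separately as a cluster pointing at the authentication server.

```yaml
# Hypothetical JWT authentication filter: requests must carry a valid token from the provider.
http_filters:
- name: envoy.filters.http.jwt_authn
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.jwt_authn.v3.JwtAuthentication
    providers:
      example_provider:
        issuer: https://auth.example.com
        audiences: ["my-api"]
        remote_jwks:
          http_uri:
            uri: https://auth.example.com/.well-known/jwks.json
            cluster: jwks_cluster      # cluster for the auth server, defined elsewhere
            timeout: 5s
          cache_duration: 300s
    rules:
    - match: { prefix: "/" }
      requires: { provider_name: example_provider }
- name: envoy.filters.http.router
```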
Envoy Proxy vs Competitors
When comparing Envoy Proxy with other popular server software like NGINX and HAProxy, it’s important to consider the specific use cases and features of each tool.
| Name | Best Used For | Advantages | Disadvantages |
|---|---|---|---|
| Envoy Proxy | Cloud-native applications, microservices architectures, service mesh implementations | Advanced load balancing, dynamic configuration, built-in observability, first-class support for HTTP/2 and gRPC | Can be complex to configure and manage due to its extensive feature set and flexibility |
| NGINX | Web server, reverse proxy, load balancer, serving static content | High performance, stability, rich feature set, simple configuration, low resource consumption | May not offer the same level of functionality as Envoy for microservices or service mesh architectures |
| HAProxy | Load balancing HTTP and TCP servers | Reliable, high performance, features like rate limiting, request routing, and health checking | Lacks built-in support for service discovery, does not have the same level of support for gRPC as Envoy |
While all three tools are powerful and capable, the best choice depends on your specific needs and the architecture of your system. Envoy shines in dynamic, distributed environments and offers a wide range of advanced features, but it can be more complex to manage. NGINX is a versatile and reliable tool that’s easy to configure, but it may not offer the same level of functionality for microservices or service mesh architectures. HAProxy is a high-performance load balancer, but it lacks some of the features of Envoy and NGINX.
NGINX
NGINX is a popular web server, reverse proxy, and load balancer. It’s known for its high performance, stability, rich feature set, simple configuration, and low resource consumption. NGINX supports a wide range of protocols, including HTTP, HTTPS, SMTP, POP3, and IMAP. It also offers features like serving static content, reverse proxying, caching, load balancing, and media streaming. However, NGINX was not originally designed for microservices or service mesh architectures, and while it can be used in these scenarios, it may not offer the same level of functionality as Envoy in these areas.
HAProxy
HAProxy is a reliable, high-performance TCP/HTTP load balancer. It’s used by many high-profile websites to improve performance and reliability. HAProxy excels at load balancing HTTP and TCP servers, and it offers features like rate limiting, request routing, and health checking. However, HAProxy does not have built-in support for service discovery, which can make it less suitable for dynamic environments where services are frequently scaling up and down. Also, while HAProxy does support HTTP/2, it does not have the same level of support for gRPC as Envoy.
Setting Up Envoy Proxy
Here’s a step-by-step guide on how to install and configure Envoy Proxy on Ubuntu and CentOS. Please note that you should have root or sudo access to run these commands.
Ubuntu
Update the package lists for upgrades and new package installations:
sudo apt-get update
Install the necessary software to manage the repository:
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
Add the official GPG key of the Docker repository:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add the Docker repository:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Update the package database with the Docker packages:
sudo apt-get update
Install Docker:
sudo apt-get install -y docker-ce
Pull the Envoy Docker image:
docker pull envoyproxy/envoy:v1.26.2
Run the Envoy Docker container:
docker run -d -p 9901:9901 -p 10000:10000 envoyproxy/envoy:v1.26.2
CentOS
Update the package lists for upgrades and new package installations:
sudo yum update -y
Install the necessary dependencies:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository:
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker:
sudo yum install -y docker-ce
Start and enable Docker:
sudo systemctl start docker
sudo systemctl enable docker
Pull the Envoy Docker image:
docker pull envoyproxy/envoy:v1.26.2
Run the Envoy Docker container:
docker run -d -p 9901:9901 -p 10000:10000 envoyproxy/envoy:v1.26.2
This will install Envoy and start it running on port 10000 for incoming HTTP connections and port 9901 for administration purposes. You can now configure Envoy by editing its configuration file, which is located at /etc/envoy/envoy.yaml inside the container; when running Envoy in Docker, the usual approach is to mount your own configuration file over that path (for example with docker run -v "$(pwd)/envoy.yaml:/etc/envoy/envoy.yaml").
Please note that this guide assumes that Docker is the chosen runtime for Envoy. Envoy can also be built and installed from source, or run with other runtimes like Podman.
Configuring Envoy Proxy
Once you’ve installed Envoy Proxy, the next step is to configure it. Envoy’s configuration is defined using a YAML file. Here’s an example of a basic configuration file:
```yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_cluster }
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: service_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: service_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 12345
```
This configuration sets up a single listener on port 10000, which routes all incoming HTTP requests to a local service running on port 12345.
Here are a few recommended settings to consider:
- Access Log Path: The access_log_path under the admin section is where Envoy will write access logs. You may want to change this to a different location based on your logging strategy.
- Admin Address: The address under the admin section is where Envoy’s admin interface will be available. You may want to change the address to a different IP or the port_value to a different port if the defaults conflict with other services on your system.
- Listener Address and Port: The address and port_value under the listeners section define where Envoy will listen for incoming connections. You may want to change these based on your network configuration.
- Cluster Configuration: The clusters section is where you define the backends that Envoy will route traffic to. You can define multiple clusters for different services, and you can control the load balancing policy and other settings for each cluster.
Envoy’s configuration is very flexible, so you can add, remove, or modify settings to suit your specific needs. Be sure to check the official Envoy documentation for more details on the available configuration options.
Conclusion
In this article, we’ve talked in-depth about Envoy Proxy, a modern, high-performance edge and service proxy designed for cloud-native applications. We’ve also compared Envoy with other popular server software like NGINX and HAProxy, highlighting its unique strengths and potential areas of improvement.
For web server administrators and webmasters, understanding and leveraging Envoy Proxy can be a game-changer. It can help manage network traffic more effectively, enhance performance, and improve security in your system. Whether you’re working with a microservices architecture, managing a service mesh, or simply looking for a robust edge proxy or API gateway, Envoy Proxy offers a compelling solution.
As with any technology, the best way to truly understand Envoy Proxy is to get hands-on. So, consider setting up a test environment, experimenting with its features, and exploring how it can benefit your specific use case.
For a comprehensive review and comparison of the most popular proxy server software available, be sure to check out this guide.
Hope you found this article helpful.
If you have any questions or comments, please feel free to leave them below.
FAQ
- What is the role of Envoy in a service mesh architecture?
In a service mesh architecture, Envoy acts as a “sidecar” proxy, managing all inbound and outbound network traffic for each service. This allows the service to offload many networking concerns to Envoy, such as service discovery, load balancing, retries, timeouts, and more. This can greatly simplify the development and operation of services in a service mesh.
- How does Envoy handle load balancing?
Envoy supports a variety of load balancing strategies, including round robin, least request, and random. It also supports more advanced features like zone-aware load balancing, canary releases, and traffic shifting. This allows Envoy to distribute network traffic evenly across services, preventing any single service from becoming a bottleneck.
- What security features does Envoy provide?
Envoy provides a range of security features, including support for Transport Layer Security (TLS) and Mutual TLS (mTLS) for secure communication between services. It also integrates with external authentication services, allowing you to implement custom authentication logic. Additionally, Envoy’s rate limiting features can help protect your services from denial-of-service attacks.
- How does Envoy compare to other server software like NGINX and HAProxy?
Envoy, NGINX, and HAProxy all provide powerful capabilities for managing network traffic, but they each have their strengths and weaknesses. Envoy shines in dynamic, distributed environments and offers a wide range of advanced features. NGINX is a versatile and reliable tool that’s easy to configure, while HAProxy is a high-performance load balancer. The best choice depends on your specific needs and the architecture of your system.
- What is the performance impact of using Envoy?
Envoy is designed for high performance and can handle a high volume of network traffic with minimal resource usage. Its event-driven architecture allows it to handle thousands of concurrent connections on a single thread. However, as with any proxy, there is some overhead associated with the additional network hop. The impact of this overhead will depend on your specific use case and network conditions.