Why You Need an API Gateway For Microservices

Emre Savcı (Guest Author)
Published in Ambassador Labs · Jul 6, 2023

An API gateway is one of the most critical components in a microservice architecture because it acts as a single entry point for all client requests.

By handling routine operations such as authentication, routing, and caching at the gateway level, API gateways can improve a microservices-based application’s performance, security, and manageability.

In this article, we will discuss the key reasons why an API gateway is essential for microservices, and then we will explore one of the most popular API gateways, Edge Stack, and learn its benefits and how to use it.

What is an API Gateway?

An API gateway is a layer between the many services that make up the application and the external clients in a microservices architecture. It is in charge of forwarding client requests to the proper services and combining the responses received from those services into a single response which is then returned to the client.

Benefits of an API Gateway

Here are some key benefits of using an API gateway for microservices-based applications:

  • A higher level of security: API gateways can help enforce security policies and guard against common threats such as denial-of-service attacks and cross-site scripting.
  • Increased adaptability: Since API gateways can support a vast number of protocols, formats, and languages, integrating with a wide range of client applications and services becomes simpler.
  • Improved manageability: API gateways can act as the application’s central point of management, making it simpler to enable features like rate limiting and quotas and to monitor and troubleshoot problems.
  • Better scaling: API gateways can help distribute load across the many services, making it simpler to scale the application to accommodate changing demand.
  • Performance gains: By handling routine operations such as authentication, routing, and caching at the gateway level, API gateways enhance the application’s overall performance.

Why should you use an API Gateway for microservices?

An application that lives in a microservices environment requires a great deal of supporting functionality to operate, and much of that functionality is needed by every service that handles client requests. For example, a service that handles requests from your front-end application must configure its CORS policies. Now imagine doing this for every other service that handles front-end requests. That would be time-consuming and error-prone, right? That’s where API gateways come in: they centralize and manage this entire process.
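As an illustration, Edge Stack lets you attach a CORS policy to a Mapping at the gateway rather than in each service. The sketch below uses hypothetical service names and origins, and the CORS field names follow the Mapping attributes as documented; check the docs for your Edge Stack version:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: frontend-api
spec:
  prefix: /api/
  service: api-service          # hypothetical backend service
  cors:
    origins: ["https://app.example.com"]   # hypothetical front-end origin
    methods: ["GET", "POST", "OPTIONS"]
    headers: ["Content-Type", "Authorization"]
    credentials: true
```

With a policy like this at the gateway, none of the individual backend services need their own CORS handling.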

API gateways are also essential for application-independent operations. One of the most important parts of serving your users is handling and routing incoming traffic, and API gateways play a crucial role in traffic distribution. Imagine that you want to upgrade a service to a new version. To minimize the impact of this upgrade, you can use an API gateway to route a small percentage of the traffic to the new version while sending most of the traffic to the old version, or to fall back to the stable version if an error occurs in the new one.
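A canary rollout like the one described above can be sketched with two Mappings sharing the same prefix, where the `weight` field sends a small share of traffic to the new version (the service names here are hypothetical):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: my-service-canary
spec:
  prefix: /my-service/
  service: my-service-v2   # new version, receives ~10% of traffic
  weight: 10
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: my-service-stable
spec:
  prefix: /my-service/
  service: my-service-v1   # stable version receives the remainder
```

Raising the `weight` gradually shifts traffic to the new version; deleting the canary Mapping rolls everything back to the stable one.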

An API gateway is a great place to implement cross-cutting concerns for microservices. Find below some API gateway features that target these concerns:

  • Load balancing: API gateways can automatically distribute incoming requests across multiple instances of a service using a variety of load-balancing algorithms. This can improve the performance and availability of your services and save you from having to implement your own load-balancing logic in your application code.
  • Observability: API gateways can integrate with observability tools like Prometheus and Zipkin to provide metrics and tracing information about your services. This can help you to monitor the performance and the health of your services, and it can save you from having to instrument your application code with custom metrics and tracing logic.
  • TLS termination: API gateways can terminate TLS connections and forward unencrypted requests to your services. This saves you from implementing TLS in your application code and makes it easier to manage TLS certificates and encryption settings.
  • Service discovery: API gateways can automatically discover and configure the routes to your services based on the Kubernetes service definitions. This can save you from manually configuring and managing the routes to your services. It also makes it easier to deploy and update your services.
  • Rate limiting: API gateways enable you to enforce rate limits on your microservices at the edge of your network rather than within each individual service. This makes it easier to manage and update your rate-limiting rules and offloads the computational burden of enforcing those rules from your microservices.
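As a sketch of the rate-limiting point above, Edge Stack provides a RateLimit custom resource. The exact syntax varies by version, so treat the field values below as assumptions and consult the docs; a matching label must also be attached to the relevant Mapping via its `labels` attribute:

```yaml
apiVersion: getambassador.io/v3alpha1
kind: RateLimit
metadata:
  name: backend-rate-limit
spec:
  domain: ambassador
  limits:
  - pattern: [{generic_key: backend}]   # matches requests labeled "backend"
    rate: 10
    unit: minute
```

Requests beyond 10 per minute for the matching label group would then be rejected at the edge, before they ever reach a microservice.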

In conclusion, API gateways reduce application development time by taking responsibility for many everyday tasks and let us manage traffic for healthier services. Now let’s look at an actual API gateway and its usage.

How to use the Edge Stack API gateway

Envoy is a widely adopted, high-performance distributed proxy written in C++ with a small memory footprint, designed for both large and small microservice architectures. It offers first-class support for HTTP/2 and gRPC, as well as advanced features like automatic retries, circuit breaking, global rate limiting, and request shadowing.

Built on top of the Envoy proxy, Edge Stack is an API gateway and Kubernetes ingress controller that offers a scalable, secure, and simple-to-use solution for managing and exposing APIs on Kubernetes.

Step 1: Configure Edge Stack API Gateway

Edge Stack API Gateway is typically deployed on Kubernetes and is often configured using Kubernetes CRDs. Once configured, businesses can deploy their APIs on Kubernetes and expose them through Edge Stack.

Edge Stack includes a command-line interface (CLI), a web-based administrative interface, a set of Kubernetes annotations, and a RESTful API. Using these tools, you can define the routes, policies, and configurations needed to expose your microservices through Edge Stack.

You can easily install Edge Stack to your Kubernetes cluster by following the Helm installation method.
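For reference, the Helm installation typically looks like the following. The repository URL and chart name are taken from the Edge Stack documentation at the time of writing; the release name and namespace are up to you, and newer versions may additionally require applying the CRD manifest and a license token first:

```shell
# Add the Ambassador Helm repository
helm repo add datawire https://app.getambassador.io
helm repo update

# Install Edge Stack into its own namespace
helm install edge-stack datawire/edge-stack \
  --namespace ambassador --create-namespace
```

After the release is installed, verify that the `edge-stack` pods in the `ambassador` namespace reach the Running state before proceeding.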

Step 2: Set up Edge Stack’s CRDs

After installation, you should set up the custom resources Edge Stack provides. Rather than rely on Kubernetes annotations, Edge Stack’s custom resource definitions (CRDs) provide the flexibility to optimally control your traffic while offering the same benefits as other native Kubernetes objects such as CLI compatibility, security, API services, and RBAC.

Here are some explanations of Edge Stack custom resources:

  • Listener: The Listener custom resource defines port and protocol information used by Envoy to listen for incoming requests.
  • Mapping: The Mapping custom resource defines the mappings between incoming requests and the appropriate Kubernetes services, and it is mapped to the Envoy route configuration. This includes the hostname, the path, and the method that should be matched, and the service that should be used to handle the request.
  • Host: The Host resource defines a domain in Edge Stack. You can add your TLS configuration to the Host resources. Host resources are associated with Mapping resources.
  • Filter: Edge Stack contains a set of built-in filters that make it easier to protect your applications. For instance, instead of going through the time-consuming process of implementing JWT validation manually (token extraction, parsing, authorization), you can simply create a JWT Filter and protect your applications.
  • TLSContext: The TLSContext resource allows you to configure TLS for your services.
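To tie the Host and TLS points together, a minimal Host that serves a domain with a certificate stored in a Kubernetes TLS secret might look like this (the hostname and secret name are hypothetical):

```yaml
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
  name: example-host
  namespace: ambassador
spec:
  hostname: api.example.com
  tlsSecret:
    name: api-example-tls   # Kubernetes secret holding the cert and key
```

With this in place, Edge Stack terminates TLS for `api.example.com` and routes requests to whatever Mappings are associated with that Host.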

Step 3: Routing traffic with Edge Stack API Gateway

Now let’s actually use Edge Stack to route traffic to a sample microservice-based application.

As an example, I will create a custom-service deployment:

kubectl create deployment custom-service --image=kennethreitz/httpbin
kubectl expose deployment custom-service --port=80 --target-port=80

And now, we can create a configuration file that can be used to configure Listener and Mapping resources in Edge Stack to route incoming requests to another service in Kubernetes:

kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: my-service-backend
  namespace: ambassador
spec:
  prefix: /custom-service/
  host: custom-service
  service: custom-service.default
EOF

This configuration defines a Mapping resource with the prefix set to /custom-service/, which means that it will match requests with a URL starting with /custom-service/. The service field is set to custom-service.default, which is the name of the Kubernetes service to which the route should forward requests. When this configuration is applied to a Kubernetes cluster, Edge Stack will create the necessary Envoy route rules to forward requests that match the prefix to the specified service. The following configuration creates a Listener resource for Envoy to listen for network requests:

kubectl apply -f - <<EOF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: edge-stack-listener-8080
  namespace: ambassador
spec:
  port: 8080
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: edge-stack-listener-8443
  namespace: ambassador
spec:
  port: 8443
  protocol: HTTPS
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL
EOF

For testing purposes, I’ve forwarded the Edge Stack service port to my localhost:

kubectl port-forward services/edge-stack -n ambassador 8443:443

And now, we can test it by sending a request:

curl https://localhost:8443/custom-service/headers -H 'host: custom-service' -v -k

We should see our service’s response as shown below:

> GET /custom-service/headers HTTP/1.1
> Host: custom-service
> User-Agent: curl/7.77.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< server: envoy
< date: Mon, 03 Jul 2023 13:44:07 GMT
< content-type: application/json
< content-length: 245
< access-control-allow-origin: *
< access-control-allow-credentials: true
< x-envoy-upstream-service-time: 77
<
{
  "headers": {
    "Accept": "*/*",
    "Host": "custom-service",
    "User-Agent": "curl/7.77.0",
    "X-Envoy-Expected-Rq-Timeout-Ms": "3000",
    "X-Envoy-Internal": "true",
    "X-Envoy-Original-Path": "/custom-service/headers"
  }
}

As shown in this example, creating a gateway for your services using Edge Stack is very straightforward.

Read this article to learn more about configuring and deploying Edge Stack as an API gateway for your microservice-based application.

Conclusion

In this article, you learned about API gateways, why they are essential for microservice-based applications, and how Edge Stack, a Kubernetes-native API gateway, delivers scalability, security, and simplicity. Try Edge Stack API Gateway today!


Sr. Software Engineer @Trendyol & Couchbase Ambassador | Interested in Go, Kubernetes, Istio, CNCF, Scalability. Open Source Contributor.