Deploying Edge Stack as an API Gateway

Kayode Adeniyi
Published in Ambassador Labs · 9 min read · Apr 7, 2023


Edge Stack is a powerful API gateway and Ingress controller that simplifies microservices management. Built on top of the Envoy proxy, it provides advanced features such as rate limiting, authentication, and observability while remaining highly scalable and flexible. Whether you’re just starting with microservices or already running a complex architecture, Edge Stack is an excellent choice.

This tutorial will guide you through setting up Edge Stack with a Node.js server in a Docker container in your local environment. By following these steps, you’ll learn how to leverage Edge Stack to simplify your microservices management while gaining hands-on experience with Docker and Kubernetes.

Prerequisites

To get the most out of this tutorial, you’ll need:

Project Architecture

Crafting an architecture for the application you intend to build is vital because it helps you understand the application’s complexity and workflow, map out the integral components, and see how they’ll communicate with each other. Keeping this in mind, I created an architecture diagram to help you visualize the workflow of the application we will build in this article.

Project Setup

This section focuses on setting up the environment in which our server will run. First, we will install Node.js, Docker, and Docker Compose on our system. We added Docker to this setup because it isolates the whole application, making it easy to run on any operating system with Docker installed.

Installing Node.js: To set up Node.js, we first fetch its installation package and install it using the apt-get install command. Run the commands below in your terminal to do this.

curl -fsSL https://deb.nodesource.com/setup_19.x | sudo -E bash - && \
sudo apt-get install -y nodejs

If you are not on a Debian-based operating system, check the installation guide that matches yours here.

Installing Docker and Docker Compose: To set up Docker’s apt repository on Linux, run the following commands. Not using Linux? Then follow these installation instructions.

sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Installing the Docker Engine: To install the Docker Engine on Linux, run the following commands:

sudo chmod a+r /etc/apt/keyrings/docker.gpg
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Installing Docker Compose: Run this command sudo apt install docker-compose to install Docker Compose on your machine.

Let’s Code

In this section, we will create a Node.js server and expose an endpoint that delivers an array of data inside HTTP responses. After creating the server, we will configure Edge Stack’s routes and run both services using Docker Compose. The complete GitHub source code is available here.

The first step is to set up a Node project. To do this, run the npm init command and fill out the prompts. After completing this step, you will have a package.json file inside your project directory that contains descriptive and functional metadata about the project.

Next, we will run the npm install express nodemon command to install Express and Nodemon. For context, Express is a back-end web application framework for building RESTful APIs, while Nodemon is a command-line tool that speeds up the development of Node projects by restarting the application whenever the code changes.
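At this point, your package.json should look roughly like the sketch below. The name, description, and version numbers shown here are placeholders; yours will depend on your answers to the npm init prompts and on the package versions available when you install.

{
  "name": "edge-stack-demo",
  "version": "1.0.0",
  "description": "Node.js content service fronted by Edge Stack",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "express": "^4.18.2",
    "nodemon": "^2.0.22"
  }
}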

After installing Express and Nodemon, we can now build the server-side logic and create an entry point to the server. To do this, go to your root directory, create a file named app.js, and paste the following code snippet into it.

// app.js
const express = require("express");
const app = express();
const content = require("./route");

// Mount the content route and start the server
app.use("/v1/content", content);
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server started on port ${PORT}`);
});

In the app.js file, we imported Express and a route file (which we will create in the coming steps), mounted the route at /v1/content, and started the server on port 3000 (or the port set in the PORT environment variable).

Now, let’s create a controller that will handle the logic and serve responses to incoming requests. Create a controller.js file in your root directory and paste the following code.

// controller.js
const content = [
  { id: 1, name: "John Doe", age: 32 },
  { id: 2, name: "Jane Doe", age: 30 },
  { id: 3, name: "Jim Smith", age: 35 },
  { id: 4, name: "Jim Smith", age: 40 }
];

const getContent = async (req, res) => {
  console.log("Fetching static content from array of users.");

  // Return a 404 if the array is empty
  if (content.length === 0) {
    console.log("No content to be fetched");
    return res.status(404).json({
      message: "No content to be fetched",
      content: content
    });
  }

  console.log("Content fetched.", content);

  return res.status(200).json({
    message: "content fetched",
    content: content
  });
};

module.exports = { getContent };

In this file, we created an array of JSON objects named content. Then we created a function called getContent that sends the array of JSON objects in the response, and exported it so it can be imported into other files.
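If you’d like to sanity-check the controller before wiring up the routes, a minimal sketch like the one below calls getContent directly with a mock response object. Note that quick-check.js and the mock are purely hypothetical and not part of the tutorial’s repository.

// quick-check.js (hypothetical, for illustration only)
const { getContent } = require("./controller");

// A bare-bones stand-in for Express's res object:
// status() records the code and returns the object so .json() can be chained.
const mockRes = {
  status(code) {
    this.statusCode = code;
    return this;
  },
  json(body) {
    console.log(this.statusCode, JSON.stringify(body, null, 2));
    return this;
  }
};

// Should log 200 and the array of users defined in controller.js
getContent({}, mockRes);

Running node quick-check.js should print the 200 status and the four users, confirming the controller behaves as expected before any HTTP plumbing is involved.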

Now, let’s create a file named route.js, import the controller we built in the previous step, and configure it to be called when the /fetch suffix is appended to the endpoint.

// route.js
const express = require("express");
const router = express.Router();
const { getContent } = require("./controller");

// Call getContent for GET requests to /fetch
router.get("/fetch", getContent);

module.exports = router;

We are ready to test our server! So, open the package.json file and override the scripts object with the following code block.

"scripts": {
  "test": "nodemon app.js",
  "start": "node app.js"
},

Save all your file changes, and then start the server by running the npm test command. Nodemon will start the app and log that the server is running on port 3000.

Let’s go ahead and test the endpoint by opening our browser and pasting the URL below into the address bar.

http://localhost:3000/v1/content/fetch

After loading the URL in your browser, you will see the fetched content logged in your terminal and the JSON response in the browser window.
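If you prefer checking from the terminal instead of the browser, a small Node.js sketch using the built-in http module works too. Here, check-endpoint.js is a hypothetical helper, not part of the tutorial’s repository.

// check-endpoint.js (hypothetical helper)
// Fetches the content endpoint and prints the status code and parsed JSON body.
const http = require("http");

http
  .get("http://localhost:3000/v1/content/fetch", (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => {
      console.log("Status:", res.statusCode);
      console.log("Body:", JSON.parse(body));
    });
  })
  .on("error", (err) => console.error("Request failed:", err.message));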

Configuring the Edge Stack API Gateway

Now, we will configure Edge Stack to act as an API gateway for our content service. We will create a route file for each service that tells Ambassador how to route requests to that specific service. These route files are also known as Mappings.

To create a Mapping for our content service, create a folder named ambassador inside the backend folder, create a file named content.yaml inside it, and paste the YAML configuration below into it.

# backend/ambassador/content.yaml
---
apiVersion: getambassador.io/v2
kind: Mapping
name: get_content
prefix: /v1/content/
rewrite: ''
service: content:3000

For context, here’s a brief explanation of the YAML configuration file above:

  • apiVersion: The version of the Ambassador API used for this resource
  • kind: The type of resource being defined (here, a Mapping)
  • name: The name of the route
  • prefix: The part of the URL path that Edge Stack matches before routing the request
  • rewrite: How the matched prefix is rewritten before the request is forwarded to the backend service (an empty string leaves the original path unchanged)
  • service: The service (and port) that traffic is routed to

Now that our Node server and Edge Stack Mapping are ready, we will use docker-compose to run our application.

Configuring the docker-compose environment

In the docker-compose file, we will spin up two containers: one for Edge Stack and the other for the Node server. The Ambassador container will run on port 8080 and mount the Mappings into the container to complete Ambassador’s configuration, while the Node server will run on port 3000.

Dockerize the Node server: Before we configure docker-compose, we need to dockerize the Node server. To do this, we need a Dockerfile. So, create a Dockerfile in your root directory and paste the following code.

# Use a lightweight Node base image
FROM node:14-alpine
WORKDIR /usr/app
# Copy package.json first so dependency installation is cached
COPY package.json .
RUN npm install
# Copy the rest of the application code
COPY . .

In this Dockerfile, we used a Node Alpine base image for the environment. Then we created a working directory and copied the package.json file into it. Finally, we ran the install command and copied the rest of the code into the directory. Now we are ready to create the docker-compose environment for our application.

Writing the docker-compose file: In your project’s root directory, create a docker-compose.yaml file and paste the YAML configuration below into it.

version: "3"

services:
ambassador:
image: datawire/ambassador:1.10.0
ports:
- 8080:8080
volumes:
- ./backend/ambassador:/ambassador/ambassador-config
environment:
- AMBASSADOR_NO_KUBEWATCH=no_kubewatch

content:
build: backend/content/
container_name: content
command: node app.js
hostname: content:3000
restart: always
ports:
- 3000:3000
environment:
PORT: 3000

To improve workflow efficiency and reduce errors, we will create a Makefile and configure it to run our Docker environment. To do this, create a Makefile in your project’s root directory and paste the code below.

# Makefile

.PHONY: start status stop

start:
	docker-compose up -d --build

status:
	docker ps

stop:
	docker-compose down

So far, we’ve built and dockerized the Node server, configured the Ambassador Edge Stack API Gateway, and created a docker-compose environment to run the application. In the next section, we will test the whole setup.

Testing the content service

To test our service, run the command below from your project’s root directory to spin up the environment.

sudo make start

After a successful run, docker-compose will build the image and start both containers.

To confirm that the containers are running, you can list them by running this make command.

sudo make status

If the containers spun up successfully, docker ps will list the ambassador and content containers along with the ports they expose.

As the docker ps output shows, the containers are up on their respective ports and ready to work. Now let’s test the endpoint exposed on the content service, just like we did in the previous step when we set up the server. The only difference is that this time the request is directed to the Edge Stack API gateway on port 8080, which routes it to the content service, instead of hitting the service directly on port 3000. So, hit the following endpoint in your browser’s address bar:

http://localhost:8080/v1/content/fetch

You should see the same JSON response in your browser as before.
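As a quick terminal check, the same kind of Node.js sketch used earlier can confirm that the request now flows through the gateway on port 8080. Again, check-gateway.js is a hypothetical helper and not part of the tutorial’s repository.

// check-gateway.js (hypothetical helper)
// Hits the content endpoint through Edge Stack on port 8080 instead of port 3000.
const http = require("http");

http
  .get("http://localhost:8080/v1/content/fetch", (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => {
      const parsed = JSON.parse(body);
      console.log("Status via gateway:", res.statusCode);
      console.log("Users returned:", parsed.content.length);
    });
  })
  .on("error", (err) => console.error("Gateway request failed:", err.message));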

Additionally, to debug and improve the setup, check the logs of the Ambassador container for in-depth information about the API requests. To do this, get the container ID of the Ambassador container by running the make status command, copy the ID, and use it in the command below:

sudo docker logs -f dae0ccec1f43

Running the command above streams the Ambassador access logs, which contain detailed information about the successful GET request: the request method, status code, status flags (if any), and the service and port to which Edge Stack forwarded the request. You can visit Edge Stack’s official documentation to learn more about Ambassador logs.

Conclusion

This article guided you through configuring a Docker environment for a Node server and showed you how to configure Edge Stack to route traffic to it. Finally, it showed you how to trace errors using Ambassador’s Envoy-based logging.

Edge Stack accelerates Kubernetes development by providing easy route configuration, scalability, and logging for better debugging. I recommend Kubernetes development teams use it as an API gateway for their cloud-native applications.

Visit the Edge Stack website or the documentation to learn more. Good luck learning! 😇
