https://github.com/nanit/api-gateway-example — Just clone and run make test to start TDD’ing your NGINX configuration.
Micro-Services are definitely the present and future of software architecture. The idea existed in the past, but nowadays it is safe to say it is considered a superior methodology for building and deploying software. The introduction of container technologies made Micro-Services more accessible than ever, since the cost of operations has dropped significantly with their help. The specifics are still interpreted differently by different tech teams: the size, responsibility, persistent storage access and other attributes of each Micro-Service may differ between organizations, but the core idea of decomposing a monolith into a set of smaller, independent components is shared across them.
Our NGINX Gateway
Nanit’s backend is also composed of a few dozen Micro-Services. We have a single NGINX gateway that routes every API request to the appropriate service. A request to /login, for example, is proxied to our authentication API, while a request to /messages is proxied to our inbox API. These are simple examples, but in reality things tend to become much more complicated when you deal with a production-scale system, for two main reasons:
- More and more services join the party, and you find yourself juggling new routes and new services in NGINX, trying not to break things as you add them to your configuration. Things can get even messier, since new location blocks you add to your NGINX configuration may actually override old ones. The rule-set by which NGINX chooses the location block to process a request makes it almost impossible to track your gateway routing as its configuration grows.
- There are subtle differences in how NGINX passes the request path to your Micro-Service. It depends on whether or not you have a path after the domain in your proxy_pass directive. You can read more about it here, but the end result is that you want a way to verify that the path you expect your Micro-Service to receive is indeed the one passed by NGINX.
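To illustrate the proxy_pass subtlety described above (the service names and port here are hypothetical): without a URI after the upstream host, NGINX forwards the original request path untouched; with a URI, the part of the path that matched the location prefix is replaced by that URI.

```nginx
# Without a URI after the host: the original path is passed as-is.
# GET /login  ->  the authentication service receives /login
location /login {
    proxy_pass http://authentication:3000;
}

# With a URI after the host (here "/"): the matched prefix is replaced.
# GET /messages/unread  ->  the inbox service receives /unread
location /messages/ {
    proxy_pass http://inbox:3000/;
}
```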
Since this gateway is extremely crucial for us, we wanted a solid way to validate that it works as we expect before deploying changes to production. We wanted to make sure that a request to /login is routed to the authentication API with the /login path. Later, when we added the /messages location to the gateway, we wanted to make sure that both /login and /messages are still routed to their proper services with their proper paths. Unfortunately, NGINX has no built-in way to test these kinds of things.
Tell Me How!
You can find the full working example here:
Just clone it and run make test to see the tests pass.
The basic idea is a docker-compose in which our NGINX gateway is linked to a set of Micro Services.
Each Micro-Service is a dummy web service which returns a JSON containing two values: its name and the URL path it received from the gateway.
By examining the returned JSON we can verify that HTTP requests to our NGINX gateway are proxied correctly to the different Micro Services.
The following schema shows how a test works:
This is what the docker-compose.yml looks like:
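A minimal sketch of such a compose file (the service names match the description below; build paths, ports and compose version are assumptions, not the repo's exact file):

```yaml
version: "2"
services:
  gateway:                     # the NGINX gateway under test
    build: ./gateway
    ports:
      - "8080:80"
    links:
      - authentication
      - inbox
  authentication:              # dummy service, echoes its name and path
    build: ./app
    environment:
      SERVICE_NAME: authentication
  inbox:
    build: ./app
    environment:
      SERVICE_NAME: inbox
  tester:                      # test suite that queries the gateway
    build: ./tester
    links:
      - gateway
```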
There are 3 main components here:
First: the authentication and inbox Micro-Services, built as dummy apps. These just return the SERVICE_NAME environment variable they were started with, along with the path they received from NGINX when it proxied the request to them.
We chose Ruby to implement these fake services, but you can choose whichever language you please.
The implementation is pretty simple and includes only a few lines:
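A sketch of such a dummy service as a Rack-style app, using only Ruby's standard library (the repo's actual implementation may use a framework; the structure here is an assumption):

```ruby
require 'json'

# A Rack-compatible dummy service: every request is answered with the
# service's name (from the SERVICE_NAME env var set in docker-compose)
# and the path the gateway passed to it.
DummyService = lambda do |env|
  body = {
    service: ENV.fetch('SERVICE_NAME', 'unknown'),
    path: env['PATH_INFO']
  }.to_json
  [200, { 'Content-Type' => 'application/json' }, [body]]
end

# In the repo this would be served by any Rack server, e.g. via a
# config.ru containing `run DummyService`, started with `rackup`.
```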
Second: the NGINX API gateway. This is the same gateway you will later deploy to production. It includes an nginx.conf file, and each service sets its routes in a designated conf file at app/services/service_name.conf. This is an example of what authentication.conf looks like:
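A minimal version of such a file might be (the upstream port is an assumption):

```nginx
# app/services/authentication.conf
# Route /login to the authentication service.
location /login {
    proxy_pass http://authentication:3000;
}
```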
A simple proxy from /login to our authentication service. It assumes that authentication is a DNS name that resolves inside our cluster.
Third: the tester. This one sends HTTP requests to the gateway and verifies that the expected service and path are returned for each call. We chose Ruby’s RSpec for this purpose, but again, any other test framework would work here.
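The core check can be sketched like this (the method and key names are assumptions; the repo wraps the idea in RSpec examples). The HTTP call is injected so the logic is shown without a live gateway:

```ruby
require 'json'

# Given one expected route and an `http_get` callable that fetches a
# path from the gateway and returns the JSON body, verify the dummy
# service echoed back the expected service name and path.
def route_ok?(route, http_get)
  response = JSON.parse(http_get.call(route['path']))
  response['service'] == route['service'] &&
    response['path'] == route['expected_path']
end

# In the repo, the spec iterates over expected_routes.yml and asserts
# route_ok? for each entry, with http_get doing a real Net::HTTP GET
# against the gateway container.
```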
The gateway route specs are formatted in YAML so you can easily add and change routes without touching the code itself:
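An expected_routes.yml along these lines (the key names are assumptions):

```yaml
# Each entry: the path sent to the gateway, the service that should
# receive the request, and the path that service should see.
- path: /login
  service: authentication
  expected_path: /login
- path: /messages
  service: inbox
  expected_path: /messages
```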
Now you can TDD your NGINX configuration:
- Add a new route to expected_routes.yml
- Run make test and see the tests fail
- Add the appropriate NGINX configuration
- Run make test again and (hopefully) everything goes green
Isn’t that great?
For a long time we felt our NGINX gateway was the weak spot in our infrastructure. All services were independently tested, but what is that worth if the gateway that routes requests to them fails?
Today, we feel very comfortable deploying it. Every Pull Request is automatically tested by Jenkins, which alerts us if any route breaks for some reason. When a new configuration is merged to master, we can be sure it works as we expect.
There’s no better feeling than another boring and expected deployment to a crucial part of your infrastructure :)