Lately I’ve been developing a lot of bots, and the de-facto standard for exposing a local server is ngrok. However, the free version hands out a hostname that expires every 2 hours, which makes it a pain to reconfigure. So rather than shelling out the few $$ for the paid, supported version, I built my own.

The goal is to create a publicly accessible endpoint that forwards all HTTP traffic to a client installed on a local environment.

┌────────────────────────┐     ┌────────────────┐     ┌──────────────────┐
│     Local machine      │     │  Relay server  │     │   Other system   │
│    ───────────────     │     │ ────────────── │     │   ─────────────  │
│                        │     │                │     │                  │
│  App ◄── Rlay client ──┼─────┼►  Rlay server ◄┼─────┼─ External system │
│                        │     │                │     │                  │
└────────────────────────┘     └────────────────┘     └──────────────────┘

It is composed of two components:

  1. A server, that you deploy on a machine accessible to the internet, which will receive the HTTP calls, transfer them to the client, then forward the responses.
  2. A client, that you run on your local machine. It connects to a) the distant server, and b) whatever app you want to expose, and proxies the calls received by the server to that app.

Since rlay’s client connects to the server, you don’t need to open any ports on your local machine to make a local webapp available to the outside world. This makes it easy to develop applications where another system needs to call a webhook on yours.

The server is deployed behind TLS, so it effectively creates a secure tunnel between the local machine and the server.

Rlay’s code is on GitHub, along with a Kubernetes chart; the client and server are available on npmjs, and a Docker image is published to Docker Hub.

how it works

The code is pretty simple, I invite you to have a look!

The server exposes an HTTP endpoint that catches all traffic, plus a socket.io server. Whenever an HTTP request is received from a remote service, it packages it as a message and sends it to any socket connected to it. It also registers a one-time listener to receive the response, and keeps the remote service hanging while the client processes the request.

The client connects to the server using socket.io. After a handshake that validates the password, it listens to events coming from the server. Whenever one is received, it relays the HTTP call locally, using the same path, headers, and body that it received. If something responds (hopefully…), it packages the response and sends it back to the server.

When the server receives that response, it forwards it to the remote service.

deployment

The recommended deployment is on any Kubernetes cluster you have lying around. A Helm chart is available in the server’s directory; it includes an ingress and a certificate. Why? You don’t have one!?

setup

On the server: deploy rlay either with Docker, npm, or Kubernetes. Set the environment variable RLAY_PASSWORD to whatever password you want, and RLAY_PORT to the port to listen on. Rlay will not start if a password is not provided.

On your local machine: npm install -g rlay, then set the environment variable RLAY_PASSWORD to your rlay password, and RLAY_HOST to your rlay server’s hostname, including the protocol (e.g. https://myrlayserver.mydomain.com).

using

  • Start your local dev server, say it’s a webapp listening on port 8080.
  • Configure whatever remote service needs to call it to point to your rlay server (e.g. https://myrlayserver.mydomain.com)
  • Start the rlay client: rlay --port 8080

From then on, the HTTP calls made by the remote service will be forwarded to your local environment.