Alex Ellis
Do you ever find yourself wishing that you could get Ingress into your local or private Kubernetes clusters? Perhaps it’s during development, a CI job with KinD, or a customer demo.
It wasn’t long after creating the first version of inlets that my interest turned to Kubernetes. Could the operator pattern help me bring Ingress and TCP LoadBalancers to clusters hidden behind firewalls, HTTP Proxies and NAT?
The answer was yes, but if you delete and re-create your cluster many times in a day or week, there may be a better fit for you.
Let’s first recap how the operator works.
I got the first proof of concept working on 5 Oct 2019, using Equinix Metal (née Packet) for the hosting. It watched for Services of type LoadBalancer, then provisioned a cloud instance using an API token, installed the tunnel server software there, and ran a client Pod in the local cluster.
This was actually recorded on the last day of my vacation; that's how badly I wanted to see this problem fixed:
Since then, support has been added for around a dozen clouds, including AWS EC2, GCP, Linode, Vultr, DigitalOcean, Azure and others.
Installing the inlets-operator brings LoadBalancers to any Kubernetes cluster. Why would you want that?
The Kubernetes Operator encodes the knowledge and experience of a human operator into code. For inlets, it took my knowledge of creating a cloud instance, installing the inlets tunnel server software, then running a client pod in my local cluster.
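For example, a Service of type LoadBalancer like the hypothetical one below is all it takes to trigger that flow; the operator fills in the EXTERNAL-IP field with the tunnel server's public address, just as a managed cloud would:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
  - port: 8080
    targetPort: 8080
EOF

Within a minute or so, kubectl get svc my-api would show the tunnel server's public IP under EXTERNAL-IP.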
Ivan Velichko, an SRE at Booking.com, created an animation to show exactly what happens, and in what order.
Read more about Operators in Ivan's blog post.
The operator is ideal for a single user, with a single long-lived cluster. That could be your Raspberry Pi, a private data center or a local K3d, KinD or minikube cluster.
The IP will go with you, and because the client runs as a Pod, it will reconnect whenever there's an interruption in connectivity, like when you head to your local cafe.
inlets-operator brings a Service LoadBalancer with public IP to any Kubernetes cluster i.e. minikube/k3d/KinD/kubeadm
I set up @openfaascloud on my laptop at home, when I got to a coffee shop it reconnected with the same public IP from @digitalocean😱https://t.co/PanfWfMRlT pic.twitter.com/hHCeMRW7z2
— Alex Ellis (@alexellisuk) October 18, 2019
But here are three scenarios where the operator may fall short:
1. You delete and re-create your cluster many times per day or week, so a new VM is provisioned and a new public IP is issued every time
2. You run short-lived clusters in CI, for instance with KinD, where waiting on VM provisioning for every job adds time and cost
3. You forget to delete the LoadBalancer service or uninstall the operator before deleting the cluster, leaving the external VM behind
The third issue isn't specific to inlets: if you delete the LoadBalancer service, or delete the operator, then any external VMs will be cleaned up and removed. But it turns out that some people are too lazy to do that, and at times I may also be included in that group.
There is a simple workaround for this problem.
The inletsctl tool uses the same code as inlets-operator to provision VMs, so we can use that for the first step.
Install the tool:
# sudo is optional and is used to move the binary to /usr/local/bin/
curl -SLfs https://inletsctl.inlets.dev | sudo sh
Then explore the options and providers with inletsctl create --help. The key options you'll need are --provider, --region and --access-token-file.
inletsctl create \
--access-token-file ~/Downloads/do-access-token \
--provider digitalocean \
--region lon1
Using provider: digitalocean
Requesting host: upbeat-jackson5 in lon1, from digitalocean
2021/07/08 10:42:23 Provisioning host with DigitalOcean
Host: 253982495, status:
[1/500] Host: 253982495, status: new
..
[11/500] Host: 253982495, status: active
Note the output with its sample connection command, IP address and auth token for the tunnel server.
inlets Pro TCP (0.8.3) server summary:
IP: 165.227.232.164
Auth-token: DP4bepIxuNXbjbtXWsu6aSkEE9r5cvMta56le2ajP7l9ajJpAgEcFxBTWSlR2PdB
Command:
# Obtain a license at https://inlets.dev
# Store it at $HOME/.inlets/LICENSE or use --help for more options
export LICENSE="$HOME/.inlets/LICENSE"
# Give a single value or comma-separated
export PORTS="8000"
# Where to route traffic from the inlets server
export UPSTREAM="localhost"
inlets-pro tcp client --url "wss://165.227.232.164:8123" \
--token "DP4bepIxuNXbjbtXWsu6aSkEE9r5cvMta56le2ajP7l9ajJpAgEcFxBTWSlR2PdB" \
--upstream $UPSTREAM \
--ports $PORTS
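If you'd like to sanity-check the tunnel before involving Kubernetes, you can run the printed command as-is from your own machine, pointing it at a throwaway local server. This assumes your license really is saved at $HOME/.inlets/LICENSE:

# Serve something on the port the tunnel will expose
python3 -m http.server 8000 &

export PORTS="8000"
export UPSTREAM="localhost"
inlets-pro tcp client --url "wss://165.227.232.164:8123" \
  --token "DP4bepIxuNXbjbtXWsu6aSkEE9r5cvMta56le2ajP7l9ajJpAgEcFxBTWSlR2PdB" \
  --upstream $UPSTREAM \
  --ports $PORTS &

# Then, from any machine:
curl -i http://165.227.232.164:8000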
Let’s imagine you’ve deployed Nginx to your cluster, and that’s what you want to expose.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF
Now create a private ClusterIP service for the Deployment, so that it can be accessed from within the cluster:
kubectl expose deployment nginx-1 --port=80 --type=ClusterIP
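Before connecting a tunnel, you can check that the service exists and responds from inside the cluster; the curlimages/curl image here is just one convenient way to do that:

kubectl get svc/nginx-1

# Optional: call the service from a temporary Pod inside the cluster
kubectl run tmp-curl --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s -i http://nginx-1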
Then deploy a tunnel client that forwards traffic to the nginx-1 service on port 80.
inlets-pro has two helm charts, which can be used to run either a client or a server as a Pod within your cluster.
You can write your own YAML manually for an inlets-pro client (a sketch of that option follows below), or deploy the chart for the client.
First create a secret for your inlets-pro license key:
kubectl create secret generic -n default \
inlets-license --from-file license=$HOME/.inlets/LICENSE
Then create a secret for the auth token:
kubectl create secret generic -n default \
nginx-1-tunnel-token \
--from-literal token=DP4bepIxuNXbjbtXWsu6aSkEE9r5cvMta56le2ajP7l9ajJpAgEcFxBTWSlR2PdB
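If you'd rather write the YAML yourself, a minimal sketch of the client Deployment might look like the below, mounting both secrets as files. The image tag and the --token-file/--license-file flags are assumptions worth verifying against inlets-pro tcp client --help:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1-tunnel
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-1-tunnel
  template:
    metadata:
      labels:
        app: nginx-1-tunnel
    spec:
      containers:
      - name: inlets-pro
        image: ghcr.io/inlets/inlets-pro:0.8.3
        args:
        - "tcp"
        - "client"
        - "--url=wss://165.227.232.164:8123"
        - "--token-file=/var/secrets/token/token"
        - "--license-file=/var/secrets/license/license"
        - "--upstream=nginx-1"
        - "--ports=80"
        volumeMounts:
        - name: token
          mountPath: /var/secrets/token
        - name: license
          mountPath: /var/secrets/license
      volumes:
      - name: token
        secret:
          secretName: nginx-1-tunnel-token
      - name: license
        secret:
          secretName: inlets-license
EOF

Otherwise, the helm chart takes care of all of this for you: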
helm repo add inlets-pro https://inlets.github.io/inlets-pro/charts/
helm repo update
helm upgrade --install \
--namespace default \
--set autoTLS=true \
--set ports=80 \
--set upstream=nginx-1 \
--set url=wss://165.227.232.164:8123 \
--set tokenSecretName=nginx-1-tunnel-token \
--set fullnameOverride="nginx-1-tunnel" \
nginx-1-tunnel \
inlets-pro/inlets-pro-client
The key fields you need to set are:
- ports - a comma-separated list of TCP ports to expose on the remote server
- upstream - the DNS name of the service to forward traffic to, which must be accessible within the cluster; in this instance it's our ClusterIP

The other fields correspond to the name of the tunnel or are defaults.
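To see everything else the chart accepts, you can dump its default values:

helm show values inlets-pro/inlets-pro-client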
You'll see a deployment created by the helm chart, with the name you specified in fullnameOverride:
kubectl get deploy/nginx-1-tunnel
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-1-tunnel 1/1 1 1 39s
And you can check the logs for more details:
kubectl logs deploy/nginx-1-tunnel
Then try to access your Nginx service via the public IP of your inlets tunnel server:
curl -i http://165.227.232.164:80
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Thu, 08 Jul 2021 10:03:09 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT
Connection: keep-alive
ETag: "5c0692e1-264"
Accept-Ranges: bytes
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
Let's say that you wanted to expose Traefik or ingress-nginx. How does that compare?
Create a secret for the auth token:
kubectl create secret generic -n default \
traefik-tunnel-token \
--from-literal token=DP4bepIxuNXbjbtXWsu6aSkEE9r5cvMta56le2ajP7l9ajJpAgEcFxBTWSlR2PdB
Just follow the steps from before, then change the helm install to the following:
helm repo add inlets-pro https://inlets.github.io/inlets-pro/charts/
helm repo update
helm upgrade --install \
--namespace kube-system \
--set autoTLS=true \
--set ports="80\,443" \
--set upstream=traefik \
--set url=wss://165.227.232.164:8123 \
--set tokenSecretName=traefik-tunnel-token \
--set fullnameOverride="traefik-tunnel" \
traefik-tunnel \
inlets-pro/inlets-pro-client
What changed?
- The namespace is now kube-system, to match where K3s installs Traefik by default
- The upstream is now the traefik service
- The ports now include 443 as well as 80
There's also a dashboard port on 8080, which you can add to the list if you wish.
Finally, the name is updated. You can install the helm chart for the inlets client or server as many times as you like.
If you close the lid on your laptop and open it again in a coffee shop, connected to their captive WiFi portal, your IP address goes with you. It will work just the same there, or on the other side of the world after a 12-hour flight to San Francisco.
I showed you how to expose a single HTTP service, but TCP services like MongoDB or PostgreSQL are also supported.
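For instance, assuming you had provisioned a separate exit server for it and created a matching token secret, a hypothetical postgresql service on port 5432 could be exposed with the same chart (the service name, token secret and server IP here are placeholders):

helm upgrade --install \
  --namespace default \
  --set autoTLS=true \
  --set ports=5432 \
  --set upstream=postgresql \
  --set url=wss://<your-exit-server-ip>:8123 \
  --set tokenSecretName=postgresql-tunnel-token \
  --set fullnameOverride="postgresql-tunnel" \
  postgresql-tunnel \
  inlets-pro/inlets-pro-client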
For a production configuration, you are more likely to want to expose an IngressController or an Istio Gateway. In this way, you pay for just a single exit server, created with inletsctl or the operator, and you can enable TLS encryption for any traffic you serve.
An IngressController can also be used to set authentication for your endpoints and for testing OAuth2 workflows.
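As a sketch of that shape, once Traefik or ingress-nginx is tunnelled, you define Ingress records as normal; the hostname and TLS secret below are assumptions, and you'd point DNS for the host at the exit server's IP:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-1
spec:
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-1
            port:
              number: 80
  tls:
  - hosts:
    - nginx.example.com
    secretName: nginx-example-com-tls
EOF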
We looked at how the operator pattern encodes my operational experience of inlets into code, and where it falls short in a few scenarios. Then I showed you how to create a tunnel server manually, and how to deploy an inlets client using its helm chart or your own YAML.
Miles Kane uses inlets Pro to get access to his own services running on a HA K3s cluster built with Raspberry Pis.
@alexellisuk 3 nodes @Raspberry_Pi @HAProxy @keepalived 3 nodes @kubernetesio #k3sup #master #embedded #etcd 3 nodes workers. @nginx #Kubernetes #ingress and @inletsdev #pro #operator pic.twitter.com/FNTMIDClpC
— milsman2 (@milsman2) July 21, 2021
The exit-servers can also be hosted within a public Kubernetes cluster, which might be a good option for a large team, or for part of a SaaS product that needs to expose endpoints dynamically.
You can get a copy of inlets Pro for personal or business use in the store.