Do you ever find yourself wishing that you could get Ingress into your local or private Kubernetes clusters? Perhaps it’s during development, a CI job with KinD, or a customer demo.
It wasn’t long after creating the first version of inlets that my interest turned to Kubernetes. Could the operator pattern help me bring Ingress and TCP LoadBalancers to clusters hidden behind firewalls, HTTP Proxies and NAT?
The answer was yes, but if you delete and re-create your cluster many times in a day or week, there may be a better fit for you.
Let’s first recap how the operator works.
I got the first proof of concept working on 5 Oct 2019, using Equinix Metal (née Packet) for the hosting. It watched for Services of type LoadBalancer, then provisioned a cloud instance with an API token, started a tunnel server on it, and connected a client Pod from the local cluster.
This was actually recorded on the last day of a vacation; that's how badly I wanted to see this problem fixed:
Since then, support has been added for around a dozen clouds, including AWS EC2, GCP, Linode, Vultr, DigitalOcean, Azure and others.
Installing the inlets-operator brings LoadBalancers to any Kubernetes cluster. Why would you want that?
- You’re deploying to public cloud and want a similar test environment
- You self-host services with a Raspberry Pi and K3s or in a homelab
- You have an on-premises Kubernetes cluster and want others to access services / endpoints
The workflow explained
The Kubernetes Operator encodes the knowledge and experience of a human operator into code. For inlets, it took my knowledge of creating a cloud instance, installing the inlets tunnel server software, then running a client pod in my local cluster.
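For example, the operator acts on any Service of type LoadBalancer. A minimal Service like the sketch below (the name and selector are illustrative) is enough to trigger it to provision a VM and patch the tunnel's public IP back into the Service's status:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-1
spec:
  # inlets-operator watches for this Service type
  type: LoadBalancer
  selector:
    app: nginx-1
  ports:
  - port: 80
    targetPort: 80
```

Once applied, `kubectl get svc nginx-1` would show the EXTERNAL-IP move from `<pending>` to the public IP of the newly created VM.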
Ivan Velichko, an SRE at Booking.com, created an animation to show exactly what happens and in what order.
Read more about Operators in Ivan’s blog post
Where operators fall short
The operator is ideal for a single user, with a single long-lived cluster. That could be your Raspberry Pi, a private data center or a local K3d, KinD or minikube cluster.
The IP will go with you, and because the client runs as a Pod, it will restart whenever there’s an interruption in traffic, like going to your local cafe.
But here are three scenarios where the operator may fall short:
- First, if dozens of team members all use the inlets-operator, a lot of VMs could be created, and they could be hard to manage centrally.
- Second, the operator requires an access token for your cloud provider in order to provision hosts.
- Third, if you delete your cluster, the external resources cannot be cleaned up.
The third issue isn't specific to inlets. If you delete the LoadBalancer Service, or delete the operator itself, then any external VMs will be cleaned up and removed. But it turns out that some people are too lazy to do that, and at times I may also be included in that group.
There is a simple workaround to this problem:
- Create a VM and collect its connection details
- Share or store the details of the VM
- Run the client with YAML or the helm chart
The inletsctl tool uses the same code as inlets-operator to provision VMs, so we can use that for the first step.
Install the tool:
```bash
# sudo is optional and is used to move the binary to /usr/local/bin/
curl -SLfs https://inletsctl.inlets.dev | sudo sh
```
Then explore the options and providers with `inletsctl create --help`. The key options you'll need are shown below:
```bash
inletsctl create \
  --access-token-file ~/Downloads/do-access-token \
  --provider digitalocean \
  --region lon1
```

```
Using provider: digitalocean
Requesting host: upbeat-jackson5 in lon1, from digitalocean
2021/07/08 10:42:23 Provisioning host with DigitalOcean
Host: 253982495, status:
[1/500] Host: 253982495, status: new
[11/500] Host: 253982495, status: active
```
Note the output with its sample connection command, IP address and auth token for the tunnel server:

```bash
inlets PRO TCP (0.8.3) server summary:

# Obtain a license at https://inlets.dev
# Store it at $HOME/.inlets/LICENSE or use --help for more options

# Give a single value or comma-separated
export PORTS="8000"

# Where to route traffic from the inlets server
export UPSTREAM="localhost"

inlets-pro tcp client --url "wss://126.96.36.199:8123" \
  --token "DP4bepIxuNXbjbtXWsu6aSkEE9r5cvMta56le2ajP7l9ajJpAgEcFxBTWSlR2PdB" \
  --upstream $UPSTREAM \
  --ports $PORTS
```
Let’s imagine you’ve deployed Nginx to your cluster, and that’s what you want to expose.
```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
EOF
```
Now create a ClusterIP service for the Deployment, so that it can be accessed within the cluster:

```bash
kubectl expose deployment nginx-1 --port=80 --type=ClusterIP
```
Then deploy a tunnel client that forwards traffic to the `nginx-1` service on port 80.
inlets-pro has two helm charts, which can be used to run either a client or a server as Pods within your cluster. You can write your own YAML manually for an inlets-pro client, or deploy the chart for the client.
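If you'd rather write the YAML yourself, a client Deployment can be as simple as the sketch below. The image tag is an assumption (pin it to the latest inlets-pro release), and the secret names mirror those used with the helm chart later on:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1-tunnel
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-1-tunnel
  template:
    metadata:
      labels:
        app: nginx-1-tunnel
    spec:
      containers:
      - name: client
        # image tag is illustrative - check the inlets-pro releases page
        image: ghcr.io/inlets/inlets-pro:0.8.3
        command: ["inlets-pro"]
        args:
        - "tcp"
        - "client"
        - "--url=wss://188.8.131.52:8123"
        - "--upstream=nginx-1"
        - "--ports=80"
        # $(VAR) is expanded by Kubernetes from the env entries below
        - "--token=$(TOKEN)"
        - "--license=$(LICENSE)"
        env:
        - name: TOKEN
          valueFrom:
            secretKeyRef:
              name: nginx-1-tunnel-token
              key: token
        - name: LICENSE
          valueFrom:
            secretKeyRef:
              name: inlets-license
              key: license
```

The helm chart below generates broadly equivalent YAML for you, so writing it by hand is mainly useful when you want full control over the Pod spec.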
First create a secret for your inlets-pro license key:

```bash
kubectl create secret generic -n default \
  inlets-license --from-file license=$HOME/.inlets/LICENSE
```

Then create a secret for the auth token, using the value printed by inletsctl:

```bash
export TOKEN="DP4bepIxuNXbjbtXWsu6aSkEE9r5cvMta56le2ajP7l9ajJpAgEcFxBTWSlR2PdB"

kubectl create secret generic -n default \
  nginx-1-tunnel-token --from-literal token="$TOKEN"
```
Now clone the inlets-pro repository and install the client chart:

```bash
git clone https://github.com/inlets/inlets-pro
cd inlets-pro

helm upgrade --install nginx-1-tunnel \
  ./chart/inlets-pro-client \
  --namespace default \
  --set autoTLS=true \
  --set ports=80 \
  --set upstream=nginx-1 \
  --set url=wss://188.8.131.52:8123 \
  --set tokenSecretName=nginx-1-tunnel-token \
  --set fullnameOverride="nginx-1-tunnel"
```
The key fields you need to set are:

- `ports` - a comma-separated list of TCP ports to expose on the remote server
- `upstream` - the DNS name of the service to forward traffic to, accessible within the cluster; in this instance it's our ClusterIP service, `nginx-1`

The other fields correspond to the name of the tunnel or are defaults.
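When you want to expose more than one port, it can be easier to keep the values in a file and pass it with `-f`, which also avoids escaping the comma that `--set` would otherwise split on. A sketch, assuming the client chart from the inlets-pro repository:

```yaml
# values-nginx-1.yaml - illustrative values for the inlets-pro client chart
autoTLS: true
ports: "80,443"
upstream: nginx-1
url: wss://188.8.131.52:8123
tokenSecretName: nginx-1-tunnel-token
fullnameOverride: nginx-1-tunnel
```

Then install with `helm upgrade --install nginx-1-tunnel ./chart/inlets-pro-client -f values-nginx-1.yaml`.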
You'll see a deployment created by the helm chart, with the name you specified via fullnameOverride:

```bash
kubectl get deploy/nginx-1-tunnel
```

```
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nginx-1-tunnel   1/1     1            1           39s
```
And you can check the logs for more details:
```bash
kubectl logs deploy/nginx-1-tunnel
```
Then try to access your Nginx service via the public IP of your inlets tunnel server:

```bash
curl -i http://184.108.40.206:80
```

```
HTTP/1.1 200 OK
Date: Thu, 08 Jul 2021 10:03:09 GMT
Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT

<title>Welcome to nginx!</title>
```
Production use and travel
If you close the lid on your laptop, open it again in a coffee shop and connect to their captive WiFi portal, your IP address will go with you. It will work just the same there, or on the other side of the world after a 12-hour flight to San Francisco.
I showed you how to expose a single HTTP service, but TCP services such as MongoDB or PostgreSQL are also supported.
For a production configuration, you are more likely to want to expose an IngressController or an Istio Gateway. That way, you pay for just a single exit server created with inletsctl or the operator, and you should make sure that TLS encryption is enabled for any traffic you serve.
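As an example, if you run ingress-nginx inside the cluster, a single tunnel can forward both ports 80 and 443 to its controller Service. This is a sketch, assuming the controller Service is named `ingress-nginx-controller` in the `ingress-nginx` namespace, and that a matching token secret exists there; note the escaped comma, which stops helm from splitting the `--set` value:

```bash
helm upgrade --install ingress-tunnel \
  ./chart/inlets-pro-client \
  --namespace ingress-nginx \
  --set autoTLS=true \
  --set ports="80\,443" \
  --set upstream=ingress-nginx-controller \
  --set url=wss://188.8.131.52:8123 \
  --set tokenSecretName=ingress-tunnel-token \
  --set fullnameOverride="ingress-tunnel"
```

From there, Ingress records and cert-manager can take care of per-domain routing and TLS, while the tunnel only ever carries encrypted traffic.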
An IngressController can also be used to set authentication for your endpoints and for testing OAuth2 workflows.
We looked at how the operator pattern encodes my operational experience of inlets into code, and where it falls short in one or two scenarios. Then I showed you how to create a tunnel server manually and deploy an inlets client with the helm chart or your own YAML.
Maartje uses inlets PRO to host dozens of side-projects and told me that she saved hundreds of dollars per year. Apparently the savings went on to fund her Raspberry Pi cluster!
The exit-servers can also be hosted within a public Kubernetes cluster, which might be a good option for a large team, or for part of a SaaS product that needs to expose endpoints dynamically.
You can get a copy of inlets PRO for personal or business use in the store.