deor

expose a local k3s server, without exposing your home network

Locally running services on k8s (or k3s, in my case) can be annoying to expose publicly nowadays, because ISPs usually provision dynamic rotating IPs instead of static ones. My ISP offered to provision a static IP for a monthly fee, which I was planning to take them up on until I remembered that services like ngrok exist.

While ngrok is cool, something open source and more controllable would be nicer for a homelab. I ran into this problem several months ago and never ended up solving it because it was too much work at the time.

While scrolling twitter I ran into a cool startup called Fyra Labs that makes a variety of open source software. They made Chisel Operator (github here, star it!), a Kubernetes load balancer controller that allocates IPs and creates load balancer services which can be accessed externally via chisel tunnels.

Setting it up was very easy. I did run into one issue caused by traefik and Chisel Operator both relying on servicelb. Because the cluster was fresh, I just re-installed k3s on my nodes with the --disable traefik flag. I later found out that I could have simply disabled servicelb for traefik instead and it would have worked.
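If you want to go the same route, re-provisioning k3s without traefik is a one-liner (the --disable flags are documented by k3s; adjust for however you originally installed it):

```shell
# Re-install k3s with the bundled traefik ingress disabled.
# Alternatively, --disable servicelb keeps traefik but removes the
# built-in klipper load balancer that conflicts with Chisel Operator.
curl -sfL https://get.k3s.io | sh -s - --disable traefik
```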

The setup process only took a couple steps:

  1. Installing the chisel operator.
  2. Choosing an external host for an ExitNode, which is where the chisel server will run and where clients will connect. You can choose a provider like DigitalOcean, AWS, or Linode, or you can self host a chisel server instance anywhere you want. I chose DigitalOcean because I already had an account and wanted an easy setup. If you are using a provider, you also need to set up an ExitNodeProvisioner, which automatically creates the server instances.
  3. Creating the LoadBalancer service to expose the deployment via the exit node set up above.

A more detailed walkthrough of the steps above:

Installing the chisel operator

kubectl apply -k https://github.com/FyraLabs/chisel-operator

That's it.
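To sanity-check the install, you can look for the operator's pod (the exact namespace depends on the kustomization, so a cluster-wide grep is the safest bet):

```shell
# Find the operator's pod wherever the kustomization placed it
kubectl get pods --all-namespaces | grep chisel
```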

Setting up an ExitNodeProvisioner

I used DigitalOcean, but AWS and Linode are also supported. Alternatively, you can learn how to self host your own ExitNode here.
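If you self host instead, you register the server with an ExitNode resource rather than a provisioner. A minimal sketch (field names based on the chisel-operator examples; the host and port values here are placeholders for your own server):

```yaml
apiVersion: chisel-operator.io/v1
kind: ExitNode
metadata:
  name: my-exit-node
  namespace: default
spec:
  # Public IP or hostname of the machine running your chisel server
  host: "203.0.113.10"
  # Port the chisel server listens on
  port: 9090
```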

For DigitalOcean, go to the API section of your dashboard and create a read/write API key for the provisioner to use when automatically creating droplets.

apiVersion: chisel-operator.io/v1
kind: ExitNodeProvisioner
metadata:
  name: digitalocean-provisioner
  namespace: default
spec:
  DigitalOcean:
    auth: digitalocean-auth
    region: nyc1
---
apiVersion: v1
kind: Secret
metadata:
  name: digitalocean-auth
  namespace: default
type: Opaque
stringData:
  DIGITALOCEAN_TOKEN: 'YOUR_TOKEN_HERE'

Creating the LoadBalancer service to expose a deployment

This is how you can expose a simple whoami deployment via an exit node that the operator automatically provisions on DigitalOcean. After applying this, check the DigitalOcean dashboard: you will find the droplet that was provisioned there, and you can access the service via its public IP.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: containous/whoami
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  annotations:
    chisel-operator.io/exit-node-provisioner: "digitalocean-provisioner"
spec:
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
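
Once applied, you can watch the service pick up the droplet's public IP and test it (standard kubectl commands; the IP is whatever DigitalOcean assigns):

```shell
# The EXTERNAL-IP column switches from <pending> to the droplet's IP
kubectl get svc whoami --watch

# Then hit the exit node's public IP from anywhere
curl http://<EXTERNAL-IP>/
```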

And that's it! Shoutout to korewaChino from Fyra Labs for creating the operator, and for helping quickly merge a PR bumping the Ubuntu version for the DigitalOcean droplet!