Bind9 DNS in K3S, but not to replace CoreDNS

One of the issues that I had with my previous home server, ss, was that I struggled to get Bind9 deployed for internal hostname resolution. I ended up resorting to running Bind9 on bare metal, which didn't feel right given the setup.

Since deploying whismur, I have finally figured out the magic trick to getting Bind9 running in K3S with port 53 exposed.


Gitlab Deployment

Motivation has eluded me while my social life, work life, financial commitments and personal infrastructure have consumed so much of my time. I kept some local notes in Obsidian for a while, but a large data loss event nuked a few blog posts from this site's local copy.

I was updating the home page and adding a better-organized menu for the fun of it, and figured I should put up a new post, seeing that it's been almost five months.

My new home server, whismur, now has two K3S nodes. I wanted a better pipeline for deploying manifests, so it now runs Flux CD, and I publish my Kubernetes manifests as OCI repositories in Gitlab. Flux CD is linked up with the Gitlab agent and deploys them to my home server.
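
For context, the Flux side of this (leaving aside the Gitlab agent wiring) boils down to an OCIRepository source pointing at the Gitlab registry plus a Kustomization that applies it. This is just a sketch, assuming the Flux source and kustomize controllers are already installed; the URL, names and intervals below are illustrative rather than copied from my actual setup:

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: OCIRepository
metadata:
  name: dns
  namespace: flux-system
spec:
  interval: 5m
  # illustrative registry path, not necessarily the real one
  url: oci://registry.gitlab.com/hxme/kubes/dns
  ref:
    tag: latest
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: dns
  namespace: flux-system
spec:
  interval: 10m
  prune: true
  sourceRef:
    kind: OCIRepository
    name: dns
  path: ./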

That said, you can find the deployment for my home DNS here: https://gitlab.com/hxme/kubes/dns/

The only issue with this deployment is that it requires a statically set Load Balancer IP. This is not something that I have been motivated to fix, but feel free to send an MR.

Magic

The magic is to expose port 53 with spec.template.spec.containers.ports.containerPort, and then deploy a LoadBalancer Service that maps port 53 to it.

My deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dns
  namespace: dns
  labels:
    app: dns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dns
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: dns
    spec:
      containers:
      - name: dns
        image: registry.gitlab.com/dxcker/bind:latest
        ##imagePullPolicy: Always
        #command: [ "watch", "-n5", "date" ]
        command: [ "/usr/sbin/named", "-g", "-c", "/config/named.conf", "-u", "named" ]
        ports:
        - containerPort: 53
          name: dns-port
        volumeMounts:
          - name: bind
            mountPath: /config
      volumes:
      - name: bind
        configMap:
          name: bind
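
The container reads everything from a ConfigMap called bind mounted at /config. The real one lives in the repo linked above; a stripped-down sketch of the shape it needs to take, with an entirely made-up home.example zone, looks something like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: bind
  namespace: dns
data:
  # minimal named.conf; paths and options depend on the image you use
  named.conf: |
    options {
      directory "/var/cache/bind";
      listen-on { any; };
      allow-query { any; };
    };
    zone "home.example" {
      type master;
      file "/config/db.home.example";
    };
  # illustrative zone data, not my actual records
  db.home.example: |
    $TTL 300
    @    IN SOA ns.home.example. admin.home.example. ( 1 3600 600 86400 300 )
         IN NS  ns.home.example.
    ns   IN A   10.40.0.140
    nas  IN A   10.40.0.141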

My loadbalancer.yaml file:

apiVersion: v1
kind: Service
metadata:
  name: dns
  namespace: dns
spec:
  selector:
    app: dns
  ports:
    - protocol: TCP
      port: 53
      targetPort: 53
      name: dns-tcp
    - protocol: UDP
      port: 53
      targetPort: 53
      name: dns-udp
  #clusterIP: 10.0.171.239
  type: LoadBalancer
# statically set load balancer IP (see the caveats below)
status:
  loadBalancer:
    ingress:
    - ip: 10.40.0.140

Caveats

Like I said, this is dependent on a statically set IP, and it technically forces DNS to resolve on only one node.

Cool Thing

The really cool thing is that you don’t really need to worry about master/slave here.

I thought that a master/slave combo would cause major issues for me. Then I realized that it doesn't matter if they're all masters: a config change can force a new deployment, so in theory no master will ever be out of date.
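
To be clear about the mechanism: editing a ConfigMap in place doesn't restart pods on its own, so "a config change forces a new deployment" needs a little help. Since the manifests are built with kustomize anyway, one way to get that behaviour (sketched here, not necessarily what my repo does) is a configMapGenerator instead of a hand-written ConfigMap like the sketch above. It appends a content hash to the ConfigMap's name, so every change to the config rolls the Deployment that references it:

# kustomization.yaml (sketch): generate the "bind" ConfigMap from files so
# any change to named.conf produces a new hashed ConfigMap name, which in
# turn rolls the dns Deployment that mounts it.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: dns
resources:
  - deployment.yaml
  - loadbalancer.yaml
configMapGenerator:
  - name: bind
    files:
      - named.conf
      # hypothetical zone file; match whatever named.conf references
      - db.home.example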

If you resolve the issue with the static IP, then you can infinitely scale horizontally.


Crappy post, but hopefully it helps someone someday.


Later edit: I removed the load balancer IP and it works just fine. No issues seem to have come up from doing this.