
VPN Accessible Docker Containers

Live journal of locking Docker containers down behind my VPN. Spoiler alert: not great.

I was trying to avoid host-level networking, but you're better off configuring Wireguard on the host and mapping container ports to it.

Genuinely, don't try to run Wireguard in Docker. It's kernel-level.


If you're here looking to configure a Wireguard network for containers to communicate over, you should know exactly what end result I opted for.

This does not cluster networking across Docker hosts. For that you should use Swarm.

This simply provides me with a VPN that I can connect to in order to access sensitive containers such as my monitoring stack. It gives me the ability to go to <internal_ip>:<port>, or to map somehostname.com to an IP locally and connect to a reverse proxy bound to the Wireguard interface's IP.

That is to say: this doesn't allow for any kind of clustering or Docker internal network exposure. It just allows me to hide Loki behind a VPN. It still carries the dependency of knowing that "Loki is on host A".


Few services really need to be exposed to the internet. Most businesses want to use a VPN for accessing their services, and tbh I wanted to use my VPN to keep my monitoring from being web accessible.

Why Keep Monitoring from being Web Accessible

I'm a little security-paranoid and didn't really want to leave node_exporter exposed to the internet, which was my previous tactic. I always figured that if people want to see how much RAM I have then so be it, considering that most microservices leave a health check publicly accessible.

Unfortunately it's something that can be used to plan a DDoS attack and to self-assess the attack's success while it's happening. The gaming community is pretty toxic and my servers are somewhat susceptible to DDoS.

There's also the issue that game servers run with RCON passwords or server passwords in the startup line. This means those passwords sit in plaintext in the process list, and exposing that via something like process-exporter or cAdvisor could become a major security issue in the future.

While we're at it: leaving Loki world-accessible? Yeah, probably not.

There's also the concern that vulnerabilities in these services could introduce RCEs, and monitoring services aren't patched hourly or daily.

That said, a bunch of containers still need to talk, so how do we achieve this?

Foreword on Kubernetes

You don’t need to do this with Kubernetes.

The complexity of this deployment took a nosedive when I realized that my future Kubernetes migration would have solved all of this anyway. That explains why it's taken me so long to do this: I just dropped Kubernetes for Docker and I'm having buyer's remorse.

More Context

This was set up for bare-metal Docker hosts in the data center. Each host is its own silo and there's zero scope for an internal network to be deployed, as much as people keep telling me that I should.

My goal is to build these servers out solely as standalone units and keep the configurations minimal and automated.

Warning

Apparently linuxserver/wireguard fucks with your bare metal's configuration.

I forgot that Wireguard is kernel-space and installed their container anyway. It may have handled some of this configuration for me already.

How to Configure

I did up a draw.io diagram, but it didn't really explain how this works all that well.

Here’s the rough breakdown:

This setup is significantly simpler than what I was testing before I got to this point.

Install Wireguard on your bare-metal host. For Debian:

apt -y install wireguard wireguard-tools
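
The config below needs a key pair on each end. Assuming the standard wg tooling from the package above, generating one looks roughly like this (the output file names are just examples):

umask 077
wg genkey | tee privatekey | wg pubkey > publickey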

Save yourself some time and configure using wg-quick:

> cat wg0.conf 
[Interface]
PrivateKey = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa=
Address = 10.0.0.2/24
ListenPort = 51820

[Peer]
PublicKey = bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb=
AllowedIPs = 10.0.0.1/32, 10.0.0.0/24, 10.1.0.0/16
Endpoint = 1.2.3.4:51820
PersistentKeepalive = 25

In the above, 10.1.0.0/16 is the theoretical subnet of my home's LAN. I connect into this VPN via OPNSense and I couldn't really be bothered to NAT things too much.

Bring the Wireguard VPN up:

wg-quick up /path/to/wg0.conf
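
If you keep the config at /etc/wireguard/wg0.conf, you can instead let systemd bring it up on boot, since wireguard-tools ships a wg-quick@ unit:

systemctl enable --now wg-quick@wg0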

Check to make sure peers have connected:

> wg show
interface: wg0
  public key: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa=
  private key: (hidden)
  listening port: 51820

peer: bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb=
  endpoint: 1.2.3.4:51820
  allowed ips: 10.0.0.1/32, 10.0.0.0/24, 10.1.0.0/16
  latest handshake: 1 minute, 4 seconds ago
  transfer: 6.92 MiB received, 14.63 MiB sent
  persistent keepalive: every 25 seconds

You can see on the remote server that there's been a recent handshake (on an initial setup it should say seconds) and that traffic has flowed in both directions. You can also confirm this locally (via OPNSense -> VPN -> Wireguard -> Status).
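
A quick sanity check from the Docker host, assuming 10.0.0.1 is the OPNSense end of the tunnel:

ping -c 3 10.0.0.1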

And that’s really all she wrote on setting up the Wireguard VPN.

Now you're probably wanting to access some containers privately. I personally have Let's Encrypt wildcard certificates being handled on this server, so YMMV, but I leverage those certificates in a reverse proxy container and access my containers that way.

> cat reverse-proxy/docker-compose.yaml 
services:

  reverse-proxy:
    container_name: reverse-proxy
    image: registry.gitlab.com/dxcker/reverse-proxy:latest
    ports:
      - "4.3.2.1:80:80"
      - "4.3.2.1:443:443"
    volumes:
      # relative host paths need the ./ prefix so compose treats them as bind mounts
      - ./data/pub-rp/config:/config
      - ./data/pub-rp/client_confs:/extra
      - /opt/ssl/data/archive:/certs
    networks:
      - pubproxy
    restart: always

  vpn-reverse-proxy:
    container_name: vpn-reverse-proxy
    image: registry.gitlab.com/dxcker/reverse-proxy:latest
    volumes:
      - public:/var/www/html
      - ./data/vpn-rp/config:/config
      - ./data/vpn-rp/client_confs:/extra
      - ../ssl/data/archive:/certs
    ports:
      - "10.0.0.2:80:80"
      - "10.0.0.2:443:443"
    networks:
      - privproxy
    restart: always

# "public" is used as a named volume above, so it has to be declared here
volumes:
  public:

networks:
  pubproxy:
    name: pubproxy
  privproxy:
    name: privproxy

When you roll out a service, you need to attach it to pubproxy or privproxy.

# cat monitoring/docker-compose.yaml 
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    volumes:
      - ./data/uptime-kuma/data:/app/data
    restart: unless-stopped
    networks:
      - privproxy

# privproxy already exists (created by the reverse-proxy stack), so mark it external
networks:
  privproxy:
    external: true
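
Order matters here: the proxy stack has to come up first so the shared networks exist before anything tries to join them. A rough sketch, assuming the file layout from the cat commands above:

docker compose -f reverse-proxy/docker-compose.yaml up -d
docker compose -f monitoring/docker-compose.yaml up -d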

Then you can map an FQDN to the container name in the reverse proxy, with Docker's internal DNS resolving that container name:

> cat reverse-proxy/data/vpn-rp/client_confs/hxst.conf 
<VirtualHost *:443>
	ServerAdmin admin@somefqdn.com
	ServerName  uptime.somefqdn.com

	ErrorLog /dev/stderr
	CustomLog /dev/stdout combined

	SSLEngine on
	SSLCertificateFile /certs/somefqdn.com/fullchain1.pem
	SSLCertificateKeyFile /certs/somefqdn.com/privkey1.pem
	SSLCipherSuite HIGH:!aNULL:!MD5
	SSLProtocol All -SSLv2 -SSLv3 -TLSv1 -TLSv1.1

	# Reverse proxy (ProxyRequests stays Off; On would turn this into an open forward proxy)
	ProxyRequests Off
	ProxyPass / http://uptime-kuma:3001/
	ProxyPassReverse / http://uptime-kuma:3001/
	ProxyPreserveHost On

	# Set the headers to forward client information
	RequestHeader set X-Real-IP %{REMOTE_ADDR}s
	RequestHeader set X-Forwarded-Host uptime.somefqdn.com
	RequestHeader set X-Forwarded-For %{X_FORWARDED_FOR}s
	RequestHeader set X-Forwarded-Proto https

	# Rewrite cookie domains and disable backend keepalive
	ProxyPassReverseCookieDomain uptime-kuma uptime.somefqdn.com
	SetEnv proxy-nokeepalive 1
</VirtualHost>
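
The reverse-proxy image above is my own, so I can't say what a stock image enables; if you're rolling your own Apache-based proxy, the modules this vhost leans on would need enabling (Debian-style layout assumed):

a2enmod ssl headers proxy proxy_http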

My home network has OPNSense handling all of the routing, firewalls, etc. I connect OPNSense to the Wireguard VPN (shown connected earlier), then on my local machine I set up a record in my hosts file to map uptime.somefqdn.com to 10.0.0.2.
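
That hosts entry is a single line (the file path varies by OS; this is the Linux/macOS one):

# /etc/hosts
10.0.0.2    uptime.somefqdn.com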

With all of the above up and running, I can access Uptime Kuma via my VPN IP but not my public IP.

If you want to expose a service publicly, the steps are identical except you attach it to pubproxy instead of privproxy.

DNS

You could absolutely run a DNS service for your VPN. It could be as simple as this:

# cat bind/docker-compose.yaml 
services:
  bind9:
    image: registry.gitlab.com/dxcker/bind:latest
    container_name: bind
    ports:
      - "10.0.0.2:53:53/tcp"
      - "10.0.0.2:53:53/udp"
    volumes:
      - ./data/config:/config
      - ./data/zones:/zones
      - ./data/cache:/var/cache/bind
      - ./data/keys:/keys
    restart: unless-stopped

You would then need to update your own configuration to query this DNS server. Because you don't need to worry about authoritative servers in this configuration, you can use whatever you want as an FQDN.
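
For example, a minimal zone file dropped into the /zones mount above might look like the sketch below. The names are placeholders, and how the zone gets wired into named.conf depends on what lives under the /config mount of my bind image:

; zones/somefqdn.com.zone
$TTL 300
@       IN  SOA ns1.somefqdn.com. admin.somefqdn.com. (
                2024010101 ; serial
                3600       ; refresh
                600        ; retry
                86400      ; expire
                300 )      ; negative cache TTL
        IN  NS  ns1.somefqdn.com.
ns1     IN  A   10.0.0.2
uptime  IN  A   10.0.0.2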

My only recommendation is to use a real, registered FQDN and set up SSL certificates for it. For example, you might operate from example.com and run internal infrastructure on int.example.com.

My personal preference is that example.com carries the public presence, while example.net is for internal use. example.net routes all traffic to example.com, but because it's a real, registered domain I can set up nameservers and thus get wildcard SSL certificates from LE.

I then use example.net for all internal communications and example.com for all external. All of it has valid SSL certs, which makes life with modern browsers way easier.