OpenTofu Docker Host
Finally, a fantastic method of managing Docker containers without a control plane.
I have kubes now and cringe at the mere thought of doing this.
A while ago, I replaced ss with whismur and instantly regretted my decision. ss (super server or silent server) is a purpose-built whitebox gaming server with all-Noctua cooling, whereas whismur was a DL380 with mini turbines for fans. I could hear Flux CD updating its manifests from the street.
Both of these servers aim to provide me with storage at home. There are a few 12TB or larger disks (I forget the exact sizes), and the box acts as the central location for all of my personal data: photos, phone backups, mail, home automation, etc.
When I dropped back to ss, I moved over to my new standard ZFS build. I restored snapshots a few times while testing between K3s and Docker, and for home I just wanted the simple Docker approach. This entirely broke my automations because I was heavily invested in Flux, but I think I’ve finally found a solution that I can store in git without a bunch of complicated automations to deploy these containers.
Learning the Provider
As you probably know, we’re abandoning “Terraform” for “OpenTofu” due to the licensing drama. The provider can be found at their new provider registry: [kreuzwerker/docker](https://search.opentofu.org/provider/kreuzwerker/docker/latest).
Personally, Terraform was still all the rage when I last used it professionally, so I had to quickly re-tool into Tofu. It took me all of half a second to figure out that terraform was replaced by tofu, and everything else seems to be the same.
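In practice the re-tooling is a one-word swap; the day-to-day commands are otherwise identical:

```shell
tofu init    # replaces: terraform init
tofu plan    # replaces: terraform plan
tofu apply   # replaces: terraform apply
```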
Using the provider is super simple. You just add the provider blocks to your Tofu directory and it “just works” based on the SSH configuration of your machine.
```hcl
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "3.6.0"
    }
  }
}

provider "docker" {
  host     = "ssh://192.168.0.1"
  ssh_opts = ["-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null"]
}
```
I did a quick test with an image and container as per the example. The Ubuntu container exited immediately and I’m too lazy to figure that out just yet, so I tested Nginx instead. It worked perfectly:
```hcl
# Pulls the image
resource "docker_image" "nginx" {
  name = "nginx:latest"
}

# Create a container
resource "docker_container" "foo" {
  image = docker_image.nginx.image_id
  name  = "foo"
}
```
The provider used my SSH configuration and even played nice with the ssh agent. No need to make crazy custom configurations!
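For illustration, the sort of ~/.ssh/config entry it happily picked up looks like this (the user and key path here are made up):

```
Host 192.168.0.1
  User deploy
  IdentityFile ~/.ssh/id_ed25519
  AddKeysToAgent yes
```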
GitLab Automation (Security/Setup)
I’m not setting this up to be compliant with any stringent security policies.
From a cloud context, you might be managing build and deploy permissions via something like IAM, and you might be distributing infrastructure responsibilities across teams. You might also be employing a least-privilege model.
This flies in the face of all of that; however, you can adapt it to align more closely with your existing cloud security.
The best way to achieve this would be to run your Docker hosts as user services, and configure SSH access or runners accordingly. You would have to manage the system users and their services - which is out of scope here - however this would provide a security model pretty comparable to IAM credentials.
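As a rough sketch - untested here, and assuming the rootless extras that ship with Docker CE are installed - the per-user setup looks something like:

```shell
# Dedicated service user whose services keep running without a login session
sudo useradd -m svc-docker
sudo loginctl enable-linger svc-docker

# Then, as svc-docker: install and start rootless Docker
dockerd-rootless-setuptool.sh install
systemctl --user enable --now docker

# The socket then lives at $XDG_RUNTIME_DIR/docker.sock
```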
The entire time I did this though, I longed for RBAC and namespaces. So if you’re considering your security to this extent, it might be time for Kubernetes.
GitLab Automation - General
I have a personal preference towards using [GitLab’s OpenTofu component](https://gitlab.com/components/opentofu#job-templates). Historically, GitLab shipped a modified Terraform binary that posted the Terraform plan to a merge request, making DevOps life slightly simpler.
There’s no shame in using third-party pipelines when that third party is the company you’re leasing compute from. Sure, things like EPEL and RPMFusion cause a raised eyebrow, but you wouldn’t scoff at seeing extras added to your apt sources. Hopefully this contextualizes my point…
Irrespective of your choice between SSH and sockets, you’re going to need GitLab Runners.
Here’s a simple Docker Compose file using volume mounts for configuration (the mount point suits my storage strategy; adapt it to your context):
```yaml
# cat docker-compose.yaml
services:
  gitlab-hxme-runner:
    image: gitlab/gitlab-runner:latest
    restart: always
    volumes:
      - /dpool/services/runners/data/gitlab-hxme-runner/config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
```
You should adapt the socket path to whatever works best for you. If you’re running this as a user service, you’ll likely find the socket at $XDG_RUNTIME_DIR/docker.sock.
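For instance, in a rootless setup the socket mount above might become (assuming XDG_RUNTIME_DIR is set in the environment where compose runs):

```yaml
    volumes:
      - ${XDG_RUNTIME_DIR}/docker.sock:/var/run/docker.sock
```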
The easiest way to build this out for home is to just bring up a GitLab Runner and provide it access to your Docker socket as above. I personally have GitLab Runners running in the data center, so I have a mix of runners both internal and external to my home network. I’ll need Docker deployments to run at home, so I’ll need to leverage job tagging to ensure jobs run at the correct location.
Put simply, I created a new runner and tagged it “hxme-docker”. It’ll pick up any job, but a job tagged “hxme-docker” will run exclusively on this runner.
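In a pipeline, routing a job to that runner is just a matter of tagging it (the job name here is hypothetical):

```yaml
deploy-at-home:
  tags: ["hxme-docker"]
  script:
    - docker info
```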
You can bring this up without configuration and then configure it ex post facto. Bring the stack up, then open a shell in the runner container:
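```shell
docker compose up -d
docker compose exec gitlab-hxme-runner /bin/bash
```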
With a shell open, you’ll need to manually define the runner.
```shell
gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token YOUR_TOKEN \
  --executor docker \
  --docker-image docker:latest \
  --description "Runs docker deployments from OpenTofu" \
  --tag-list "hxme-docker" \
  --run-untagged=true \
  --locked=false
```
I immediately saw “You’ve registered a new runner” on the register screen in GitLab. So it straight up works.
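If you’d rather confirm from inside the container, the runner CLI can check connectivity too:

```shell
gitlab-runner verify
gitlab-runner list
```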
GitLab Automation - SSH
One deploy method is SSH, which unfortunately added some complications.
My main frustration is that the documentation for the OpenTofu deploy was not really aligned with your average cloud operations deployment. Yes, some teams go above and beyond, but almost every team I’ve worked on has remained outdated until after I arrived. This documentation would not have been well adopted by most engineers I’ve worked with, put it that way.
Bypassing my frustrations, you’re going to need to use the job templates they ship instead of the full pipeline. You’ll also need to extend from two places: one job template coming from GitLab’s component, and a self-defined block to provide the SSH configuration.
You need to configure a CI/CD variable with your SSH private key. You might be better off using “file” type variables/secrets, but I haven’t learned that yet.
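For what it’s worth, a file-type variable holds a path to a temporary file containing the secret rather than the secret itself, so the agent setup would become something like this (untested; SSH_PRIVATE_KEY_FILE is a hypothetical variable name):

```shell
eval "$(ssh-agent -s)"
chmod 600 "$SSH_PRIVATE_KEY_FILE"
ssh-add "$SSH_PRIVATE_KEY_FILE"
```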
The .gitlab-ci.yml file contents below worked a treat for me:
```yaml
include:
  - component: $CI_SERVER_FQDN/components/opentofu/job-templates@2.5.0
    inputs:
      root_dir: './tf'

.pre:
  tags: ["hxme-docker"]
  before_script:
    - eval "$(ssh-agent -s)"
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n" > ~/.ssh/config

plan:
  extends: [.opentofu:plan, .pre]

apply:
  extends: [.opentofu:apply, .pre]
```
Then in tf/provider.tf you can provide the following blocks to deploy an Nginx
container named ‘foo’:
```hcl
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "3.6.0"
    }
  }
}

provider "docker" {
  host     = "ssh://user@host"
  ssh_opts = ["-o", "StrictHostKeyChecking=no", "-o", "UserKnownHostsFile=/dev/null"]
}

# Pulls the image
resource "docker_image" "nginx" {
  name = "nginx:latest"
}

# Create a container
resource "docker_container" "foo" {
  image = docker_image.nginx.image_id
  name  = "foo"
}
```
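From here, a real service is mostly a matter of fleshing out the container resource. A sketch of what that might look like - the port mapping and host path are made up, so check the provider docs for the full argument list:

```hcl
resource "docker_container" "web" {
  image   = docker_image.nginx.image_id
  name    = "web"
  restart = "unless-stopped"

  # Publish container port 80 on host port 8080 (hypothetical mapping)
  ports {
    internal = 80
    external = 8080
  }

  # Bind-mount configuration from the host (hypothetical path)
  volumes {
    host_path      = "/dpool/services/nginx/conf"
    container_path = "/etc/nginx/conf.d"
  }
}
```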
While it somewhat flies in the face of some attack-vector advice, the above will work just fine for a basic, understandable deployment.
It straight up worked for me: https://gitlab.com/hxme/docker/dns/-/jobs/10259300634
GitLab Automation - Socket
This is going to be the easiest way to handle things, in my opinion. The security stance is questionable, but I’m ultimately not fussed for this home lab. If you’re looking to simplify some Docker stuff at home, this might help you a bit.
You want to make sure that you’re in the state from two headings ago: runners running and configured on your target host.
You want to update your /dpool/services/runners/data/gitlab-hxme-runner/config/config.toml file so the runner is privileged and mounts the Docker socket:
```toml
concurrent = 1
check_interval = 0
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Runs docker deployments from OpenTofu"
  url = "https://gitlab.com/"
  id = ID that was there
  token = "yourtoken"
  token_obtained_at = 2025-06-05T03:24:58Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker"
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "docker:latest"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache", "/var/run/docker.sock:/var/run/docker.sock"]
    shm_size = 0
    network_mtu = 0
```
Note the “privileged” and “volumes” lines: privileged has been set to true (false by default), and volumes has the Docker socket specified.
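The runner should pick up config.toml changes on its own; if it doesn’t, a restart is cheap:

```shell
docker compose restart gitlab-hxme-runner
```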
In your ./tf/provider.tf file, you want to change the provider to access the
socket and not an SSH host:
```hcl
provider "docker" {
  host = "unix:///var/run/docker.sock"
}
```
Just to be a completionist, you can remove the SSH setup from .pre in
.gitlab-ci.yml:
```yaml
include:
  - component: $CI_SERVER_FQDN/components/opentofu/job-templates@2.5.0
    inputs:
      root_dir: './tf'

.pre:
  tags: ["hxme-docker"]

plan:
  extends: [.opentofu:plan, .pre]

apply:
  extends: [.opentofu:apply, .pre]
```
With these changes committed, it should “just work”: https://gitlab.com/hxme/docker/dns/-/pipelines/1854421488
This Doesn’t Scale
This does not scale all that well. It’s extremely niche and only really suited to specific scenarios.
For me, it reduced some clutter and allowed me to version control some Docker Compose files. It also lets me approach things in a logical manner that’s familiar to me and similar to my DevOps workflows.
I do not recommend this for businesses. There are much better options that offer more robust management, better security, and actual scaling.
That said, it’s great fun at home!