I’ve since retired this cluster in favor of running everything in Pterodactyl. I don’t have documentation on this because it’s all fairly simple.
Pterodactyl seems to be half-heartedly maintained. The core system is fine, but the eggs leave a lot to be desired.
My little Ark server project has come to a close, and it has left me feeling a tad dead inside. It has been a while since I last automated this, and this project dragged up a lot of that PTSD. Unless you enjoy staying up til 4am babying Docker containers to avoid crashes from resource contention, I’d avoid running your own cluster. It’s not even worth learning. Pay someone else to do it.
My hardware is a DL360 with dual E5s, 256GB RAM, and 3TB of SSD in RAID6, with roughly $1.5k invested. Colo fees are ~$80/pm with 24/7 access. The DL360 runs Proxmox with a tricky bridged network arrangement to avoid running a physical firewall, and I can VPN into OPNsense to gain access to the (virtual) ‘local’ network.
The Ark ‘cluster’ is provisioned with 128GB of RAM and 12 vCPUs. This VM is a glorified Docker host, and I push configurations out to it using Docker Compose. I originally looked at building a Kubernetes cluster out of VMs and using tagging to force certain pods onto specific nodes; however, I ran into issues using Rook.io, and I don’t have a dedicated SAN at the DC to accommodate the shared saves directory required by Ark. This is why Docker Compose was chosen: it lets me easily define how I want my containers to be provisioned, and I can easily deploy via Gitlab CICD/SSH.
The infrastructure is extremely simplistic because you can’t autoscale Ark servers without risking players’ profiles. While save data is kept in a shared directory, the game retains a customizable save period in memory and you can’t plug a messaging framework into it, making real clustering impossible. Thankfully these servers handle heavy network loads quite well, and I have to begrudgingly admit: Wildcard’s custom UDP-based implementation of TCP probably helps with this.
The primary thing to remember with an Ark cluster is that only the saves need to be shared, as sharing any other files can potentially lead to difficulty bringing up some of your other servers. It’s speculated that this is either an open file limit beyond what we’ve configured in testing, or an odd file lock/contention issue with sharing files. Nevertheless, the easiest way to get around this is to give each Ark container its own game data, and only share what you need to share.
The main directories that you want to concern yourself with are:
```
ShooterGame/Saved/
ShooterGame/Saved/Config
ShooterGame/Content/Mods/
```
Luckily, Docker allows mounts-in-mounts: it sorts your mount points by destination path length and mounts them in ascending order. This means that you can mount the Config mountpoint inside of the mounted Saved mountpoint. Getting too meta for you yet?
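As a quick sketch of the mechanism (the host paths here are hypothetical, purely to show the ordering), the shorter destination path is mounted first, so the nested mount lands cleanly inside it:

```yaml
volumes:
  # Mounted first: shorter destination path (the shared Saved directory).
  - /opt/ark/game/shared/saves:/data/game/ShooterGame/Saved
  # Mounted second: lands inside the mount above.
  - /opt/ark/game/arkti/config:/data/game/ShooterGame/Saved/Config
```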
Another important note on the file structure is that you should run the `ShooterGame` binary as a non-root user and disable write permissions over the `ShooterGame/Saved/Config` directory. For example’s sake, let’s say that I run the `ShooterGame` binary as the user ‘ark’. If the ‘ark’ user is given write permissions over the `ShooterGame/Saved/Config` directory, then it is given permission to delete files inside that directory. The binary does not overwrite the configuration files inside of this directory; it deletes and repopulates them. I think this might be recent behaviour, as it never used to do this.
Luckily, this is a very simple fix:

```bash
chown -R root:root ShooterGame/Saved/Config
chmod -R 755 ShooterGame/Saved/Config
```

This will stop your GUS and Game config files from being overwritten by the binary, which now always clobbers those settings for some messed up reason. I’ve mentioned this in the Wildcard Discord when staff were chatting in #general and no one replied to me; and I am not contacting their support.
With all of this in mind, my recommended way of running Ark servers is to have a directory containing your mountpoints for all of your Ark containers. Allow each Ark container to manage its own set of binaries and library files, and have a central `Saved` directory that’s shared between containers. In docker-compose, this looks something like this:
```yaml
version: '3'
services:
  ark-ti:
    image: registry.gitlab.com/dxcker/ark:latest
    ports:
      - "27010:27010/udp"
      - "7777:7777/udp"
      - "7778:7778/udp"
      - "27030:27030/tcp"
    volumes:
      - /opt/ark/game/arkti:/data/game
      - /opt/ark/game/shared/saves:/data/game/ShooterGame/Saved
```
Moving on from files, we also have another Gotcha! with the ports. You have four ports to concern yourself with: the ‘steam’ port for allowing Steam to query the server and allow connections, the two ‘game’ ports (which I’ll get to in a second), and the ‘rcon’ port, which I highly recommend restricting (but I don’t because yolo).
Typically speaking, Steam ports follow the regex pattern `270[0-9]{2}`, with people typically using the range 27020-27029. You can, however, set this to any port that is unused on your system.
The RCON port can also be whatever you want; however, I follow the convention of `${STEAM_PORT} + 20` so that the rules flow nicely in my firewall’s UI.
Finally, we have the game ports. This can trip people up: if you specify `7777` as the game port (as is standard), Ark will use both `7777` and `7778`. Furthermore, these are the ports on which Ark supposedly uses its UDP-based implementation of TCP; not that it really matters to you running the server, I just find it interesting. It’s just worth remembering that if you use game port N, you need to also forward N+1.
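If it helps, here’s a minimal bash sketch of those conventions (the numbering scheme is just the one described above, not a hard rule):

```bash
#!/bin/bash
# Derive the full port set for server number $1 (0-indexed).
i=$1
QUERY_PORT=$((27010 + i))       # 'steam' query port, in the 270xx space
RCON_PORT=$((QUERY_PORT + 20))  # my ${STEAM_PORT} + 20 convention
GAME_PORT=$((7777 + i * 2))     # Ark also binds GAME_PORT + 1
echo "query=${QUERY_PORT} rcon=${RCON_PORT} game=${GAME_PORT}-$((GAME_PORT + 1))"
```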
Essentially I want to deploy everything from Gitlab CICD because it’s all bash inside of yaml. Baml? Yash? Eh.
The easiest way to handle the automation of Ark servers is to create a directory structure from the start. Create a directory for each map that you would like to run (basic bash brace expansion makes this quick), then create the shared saves/config directory:
```bash
mkdir -p /opt/ark/game/{arkab,arkva,etc}
mkdir -p /opt/ark/game/shared/saves/Config/
```
I would recommend not applying the aforementioned file permissions over the Ark directory just yet; save that for later. You want to run your cluster first, allow it to populate all of its directories, and then restrict it after.
The next thing that I would recommend is downloading SteamCMD to a shared directory as well, as it just makes your life easier. You can see the installation instructions here; however, I’ll include a script for quickly getting the job done:
```bash
mkdir -p /opt/steam # Protip: -p suppresses a non-zero $? in the event that the directory already exists...
yum install glibc.i686 libstdc++.i686 ncurses-libs.i686 -y
curl -sqL "https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz" | tar zxvf - -C /opt/steam/
```
I hate myself for not having converted my Ark Docker image to Debian but it’s not worth the time right now. Moving on.
There isn’t much more in the way of ‘setup’. Make sure that you have Docker installed, and get ready to begin planning your automation.
Start off by building your `docker-compose.yaml` file. You can use this as a reference point. The basics here are:
```yaml
version: '3'
services:
  ark-ti:
    image: registry.gitlab.com/dxcker/ark:latest
    ports:
      - "27010:27010/udp"
      - "7777:7777/udp"
      - "7778:7778/udp"
      - "27030:27030/tcp"
    volumes:
      - /opt/ark/game/arkti:/data/game
      - /opt/ark/game/shared/saves:/data/game/ShooterGame/Saved
      - /opt/ark/game/shared/mods:/data/game/ShooterGame/Content/Mods/
      - /opt/ark/scripts:/data/scripts
      - /opt/steam:/data/steam
    environment:
      - RCON_PASSWORD=REPLACE_RCON_PASSWORD
      - CLUSTER_ID=REPLACE_CLUSTER_ID
      - ARK_MAP=TheIsland
      - GAME_PORT=7777
      - QUERY_PORT=27010
      - RCON_PORT=27030
      - SAVE_DIR_NAME=arkti
```
This will create a container using the image provided (you can sub mine out, because it’s not documented or intended for public use), exposing the ports specified, mapping the volumes shown, and setting the environment variables I have declared.
The reason we run this using environment variables is that we want the startup commands to be exactly the same, with the exception of the ports, the map and `AltSaveDirectoryName`. It’s also cheaper and easier to use a single Docker image which can change its behavior based on a variable, as opposed to building 14 different images (one for each map).
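For illustration, here’s what a hypothetical second service might look like (Scorched Earth, following the port conventions above; this block isn’t pulled from my real compose file). Same image, different variables:

```yaml
  ark-se:
    image: registry.gitlab.com/dxcker/ark:latest
    ports:
      - "27012:27012/udp"
      - "7781:7781/udp"
      - "7782:7782/udp"
      - "27032:27032/tcp"
    volumes:
      - /opt/ark/game/arkse:/data/game
      - /opt/ark/game/shared/saves:/data/game/ShooterGame/Saved
      - /opt/ark/game/shared/mods:/data/game/ShooterGame/Content/Mods/
      - /opt/ark/scripts:/data/scripts
      - /opt/steam:/data/steam
    environment:
      - RCON_PASSWORD=REPLACE_RCON_PASSWORD
      - CLUSTER_ID=REPLACE_CLUSTER_ID
      - ARK_MAP=ScorchedEarth_P
      - GAME_PORT=7781
      - QUERY_PORT=27012
      - RCON_PORT=27032
      - SAVE_DIR_NAME=arkse
```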
My exact startup script can be found at https://gitlab.com/hxst/servers/ark/-/blob/main/scripts/start.sh, however the general gist is that you’re going to need a non-root user, and you want to really enforce those file permissions if you want to avoid sleepless nights. The aforementioned link also has the SteamCMD command for downloading Workshop content direct to your server; however, Wildcard has officially said that they will no longer support this method of adding mods to your server.
TL;DR on the mod stuff is that Ark requires it in “Format A” but SteamCMD downloads it in “Format B”. There’s a Perl script floating around Github which handles the conversion, but Wildcard refuses to implement it into the game. If you ask Wildcard to fix this, they will send you a link to Ark Server Manager and tell you to code it yourself. I’ll code it myself one day, but I have no motivation to do so as I don’t run a modded cluster yet/anymore.
The important parts of the startup script:

```bash
MAP=${ARK_MAP}
SESSION_NAME="My Cluster ${MAP}"

# Validate/update the Ark server files (Steam app 376030).
/data/steam/steamcmd.sh +force_install_dir /data/game/ +login anonymous +app_update 376030 validate +quit

# Lay the version-controlled .custom configs over the live ones.
cp /data/game/ShooterGame/Saved/Config/LinuxServer/Game.ini.custom /data/game/ShooterGame/Saved/Config/LinuxServer/Game.ini
cp /data/game/ShooterGame/Saved/Config/LinuxServer/GameUserSettings.ini.custom /data/game/ShooterGame/Saved/Config/LinuxServer/GameUserSettings.ini

# Lock the config directory down so the binary can't delete and repopulate it.
chown -R root:root /data/game/ShooterGame/Saved/Config/LinuxServer
chmod -R 755 /data/game/ShooterGame/Saved/Config/LinuxServer

# Launch as the non-root 'ark' user.
su ark -c "/data/game/ShooterGame/Binaries/Linux/ShooterGameServer ${MAP}?listen?Port=${GAME_PORT}?QueryPort=${QUERY_PORT}?RCONEnabled=True?RCONPort=${RCON_PORT}?ServerAdminPassword=${RCON_PASSWORD}?AltSaveDirectoryName=${SAVE_DIR_NAME}?SessionName=\"${SESSION_NAME}\"?PreventDownloadSurvivors=False?PreventDownloadItems=False?PreventDownloadDinos=False?PreventUploadSurvivors=False?PreventUploadItems=False?PreventUploadDinos=False?ShowFloatingDamageText=true?PreventOfflinePvP=false?PreventOfflinePvPInterval=300 -NoTransferFromFiltering -clusterid=${CLUSTER_ID} -server -crossplay"
```
To break this down: SteamCMD validates and updates the game files, the .custom configs are copied over the live ones, the Config directory gets locked down, and the binary is launched as the non-root ‘ark’ user with all of the cluster parameters baked into the startup string.
Essentially, this is all that you need to start your Ark cluster.
You can see how I do it here. I don’t really want to write too much about this.
The crux that you need to care about is SSH access into your Docker host, and having rsync installed there. I find a locally compiled version of rcon.c handy for performing server-side commands like `SaveWorld` and `Shutdown`, because it seems to handle Ark’s strange sockets well. I wrote my own RCON client in Python, however I struggled handling the packets that it sent back. It would appear that Ark has improved how they handle RCON commands too, because it no longer returns non-space whitespace characters, nor forces rcon.c to set a non-zero `$?` (which breaks for loops).
`alert.sh` is:

```bash
for i in `seq 27030 27041` ; do echo $i ; /opt/ark/rcon -PREPLACE_RCON_PASSWORD -aMY_PUBLIC_IP -p${i} "$@" ; done
```
This lets me run commands like `Broadcast` to alert players to server shutdowns, `SaveWorld` to force a world save prior to shutdown, and finally `Shutdown` to cleanly shut the Ark servers down.
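A typical shutdown sequence with it looks something like this (the messages are just examples):

```bash
./alert.sh "Broadcast Server restarting in 5 minutes!"
sleep 300
./alert.sh "SaveWorld"
./alert.sh "Shutdown"
```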
One thing from my Gitlab CICD: Gitlab appears to no longer support `sleep` commands in scripts and just skips past them. I need to bench test my ‘delay’, and my pipelines are a bit not-great at the time of writing, because I think I wrote them at like 3am during an outage caused by the aforementioned config rewrites. The way to get around this with Gitlab CI is to run a job after a delay; however, as of writing, I haven’t pushed this to my own repos yet.
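For reference, a delayed job in `.gitlab-ci.yml` looks roughly like this sketch (not pulled from my repo; `when: delayed` with `start_in` is the Gitlab feature in question):

```yaml
warn-players:
  stage: deploy
  script:
    - ./alert.sh "Broadcast Server restarting in 5 minutes!"

shutdown-servers:
  stage: deploy
  when: delayed      # Gitlab starts this job itself after the timer...
  start_in: 5 minutes
  script:            # ...so no sleep is needed inside the script.
    - ./alert.sh "SaveWorld"
    - ./alert.sh "Shutdown"
```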
Anyway, the crux….
Because we use mountpoints as I described earlier, we can just update the files and let the start script handle the patching.
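In practice, the deploy step can be as small as this sketch (the ‘deploy’ user is a placeholder, and the paths follow the layout described above):

```bash
# Push the repo's configs and scripts to the Docker host, then bring the
# containers up; each container's start script patches the configs on boot.
rsync -av configs/ deploy@dodo:/opt/ark/game/shared/saves/Config/LinuxServer/
rsync -av scripts/ deploy@dodo:/opt/ark/scripts/
scp docker-compose.yaml deploy@dodo:/opt/ark/
ssh deploy@dodo "cd /opt/ark && docker-compose up -d"
```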
So let’s talk about my own repo because it’s probably the best educational tool for running this PITA of a cluster.
I use this repo to power the oceania.gg servers. Link to my repo is here: https://gitlab.com/hxst/servers/ark
The configs you need to care about are Game.ini and GameUserSettings.ini. I recommend running up a single Ark container, letting the binary write its own versions of these, and then editing them as needed. You can use this wiki article for reference when configuring your cluster.
The docker-compose file should also be stored in your repo to make editing it easier. Use this to define your containers and their variables, and push it out to the remote server before running it.
The https://gitlab.com/hxst/servers/ark/-/tree/main/scripts directory has the ‘start’ script, which I push to another mountpoint. There’s a trick here: dxcker/ark has its own `start.sh` file which is a sort-of placeholder. I seem to have messed this up, but it’s meant to [COPY start.sh /data/scripts/start.sh](https://gitlab.com/dxcker/ark/-/blob/main/Dockerfile#L16), which I will need to fix later. Essentially, by mounting that scripts directory over the top of the image’s scripts directory, we can change the startup behavior without changing the entrypoint. This lets you either use the default start script or set your own. Self five.
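The intent, going by the Dockerfile link above and the COMMAND column in the `docker ps` output further down, is roughly this (my reconstruction, not copied verbatim from the Dockerfile):

```dockerfile
# Bake a default placeholder start script into the image...
COPY start.sh /data/scripts/start.sh
# ...and always launch through that path (shell form, hence "/bin/sh -c"),
# so bind-mounting /data/scripts from the host swaps the startup behavior.
ENTRYPOINT /data/scripts/start.sh
```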
You’ll notice that I import an SSH key via STDIN here. This hack allows me to store the private key as a CICD variable in Gitlab, mask it so it doesn’t appear in the job’s logs, but still use it to connect into the Docker host without a password. It’s not the most secure thing in the world; however, it’s how things need to be done in this context. I considered using JIT SSH keys for this, but it’s a lot of work to simply shift the problem from that Gitlab repo to somewhere else.
Adding the keyscan to known hosts is necessary to comply with strict host key checking without changing the `.ssh/config` to disable it. You could avoid this by running a `shell` executor; however, I try to avoid those. Another way to avoid this line would be to have strict host key checking disabled in dxcker/alxpne, which I may do one day. It’s also worth noting that I run this job on a private runner hosted in my own ‘lan’, which is why I can get away with using local IPs. Gitlab runners are super easy to set up, and I recommend using them over shared runners if you can.
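Pieced together, the pattern looks roughly like this in the job (`$SSH_PRIVATE_KEY` and `$DOCKER_HOST` are hypothetical stand-ins for my masked CICD variables):

```yaml
deploy:
  stage: deploy
  before_script:
    # Load the masked key into an agent via stdin; it never touches disk.
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    # Satisfy strict host key checking without editing .ssh/config.
    - mkdir -p ~/.ssh
    - ssh-keyscan -H "$DOCKER_HOST" >> ~/.ssh/known_hosts
  script:
    - ssh "root@$DOCKER_HOST" "cd /opt/ark && docker-compose up -d"
```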
Finally, you can see here how I approached using Gitlab variables inside of the Gitlab job to set environment variables for the containers. It feels a little bit janky; however, it works, and it doesn’t expose those values to the job’s logs, thanks again to masking.
By the end of all this, I have a nice Ark cluster running on my ‘dodo’ host, which I can easily manage without the need for interfaces like ASM:
```
root@dodo:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f46fc79f916a registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7777-7778->7777-7778/udp, 0.0.0.0:27010->27010/udp, 0.0.0.0:27030->27030/tcp ark_ark-ti_1
8266a016ea42 registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7799-7800->7799-7800/udp, 0.0.0.0:27021->27021/udp, 0.0.0.0:27041->27041/tcp ark_ark-fo_1
7be8d01df58f registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7787-7788->7787-7788/udp, 0.0.0.0:27015->27015/udp, 0.0.0.0:27035->27035/tcp ark_ark-ex_1
e4fc39b02f47 registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7797-7798->7797-7798/udp, 0.0.0.0:27020->27020/udp, 0.0.0.0:27040->27040/tcp ark_ark-li_1
77a9fc89001f registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7795-7796->7795-7796/udp, 0.0.0.0:27019->27019/udp, 0.0.0.0:27039->27039/tcp ark_ark-ci_1
e2cd078d4bca registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7791-7792->7791-7792/udp, 0.0.0.0:27017->27017/udp, 0.0.0.0:27037->27037/tcp ark_ark-g1_1
8b2c3721b333 registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7779-7780->7779-7780/udp, 0.0.0.0:27011->27011/udp, 0.0.0.0:27031->27031/tcp ark_ark-tc_1
858a1270cea8 registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7785-7786->7785-7786/udp, 0.0.0.0:27014->27014/udp, 0.0.0.0:27034->27034/tcp ark_ark-ab_1
98c7c4f57898 registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7781-7782->7781-7782/udp, 0.0.0.0:27012->27012/udp, 0.0.0.0:27032->27032/tcp ark_ark-se_1
21555ba30840 registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7793-7794->7793-7794/udp, 0.0.0.0:27018->27018/udp, 0.0.0.0:27038->27038/tcp ark_ark-g2_1
c2d6418d2967 registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7789-7790->7789-7790/udp, 0.0.0.0:27016->27016/udp, 0.0.0.0:27036->27036/tcp ark_ark-va_1
2df22d5160b2 registry.gitlab.com/dxcker/ark:latest "/bin/sh -c /data/sc…" 38 hours ago Up 38 hours 0.0.0.0:7783-7784->7783-7784/udp, 0.0.0.0:27013->27013/udp, 0.0.0.0:27033->27033/tcp ark_ark-ra_1
root@dodo:~# df -h | grep sdb
/dev/sdb1 392G 251G 121G 68% /
root@dodo:~# free -m
      total  used   free  shared  buff/cache  available
Mem:  90526  84860  792   1       4873        4754
Swap: 33741  533    33208
root@dodo:~# top
top - 15:18:22 up 7 days, 6:44, 1 user, load average: 11.34, 11.53, 11.29
Tasks: 241 total, 7 running, 234 sleeping, 0 stopped, 0 zombie
%Cpu(s): 61.3 us, 11.8 sy, 0.0 ni, 26.6 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
MiB Mem : 90526.6 total, 790.5 free, 84861.9 used, 4874.2 buff/cache
MiB Swap: 33742.0 total, 33208.6 free, 533.4 used. 4753.5 avail Mem
```
You can see that I could benefit from actually taking this host down to increase the RAM, lol. I also have 12 cores provisioned, so that load is absolutely phenomenal (and consistent!). I ran these commands with eight players connected to The Island and no one playing other Arks or doing bosses, both of which would cause usage to spike.
Another thing that I should probably note for the Googlers out there:
```
root@dodo:~# docker logs ark_ark-va_1 | tail
[S_API FAIL] SteamAPI_Init() failed; SteamAPI_IsSteamRunning() failed.
Setting breakpad minidump AppID = 346110
```
This is absolutely benign. Don’t stress yourself out trying to fix this; it’s not needed for Ark to run. As long as you see ‘Setting breakpad’, you’ll know that you have around a 15 minute wait for your server to appear in the server browser.
This is another error message which is benign and safe to ignore: `Work thread 'CHTTPClientThreadPool:2' is marked exited, but we could not immediately join prior to deleting -- proceeding without join`
Later edit: I’ve thrown up a Docker image to perform RCON commands via Gitlab CICD jobs: https://gitlab.com/dxcker/rcon
It’s rudimentary, but it should simplify these build scripts if you need it.