[Image: Is this even possible?]

Update on progress is here!

Background:

I wanted to set up a personal VPN server. Why, you ask? Several reasons:

  • When traveling, I wanted access to my Netflix, HBO, Hulu subscriptions.
  • I wanted to remove as many externally available port forwards as possible.
  • It's cool to be able to access the home network as if sitting on the couch.

I had a few options: run a WireGuard server on a Raspberry Pi, a Linux server, or another persistent host, or run it on a NAS.

There are a few pros and cons to each of these options, but after thinking it over, I settled on seeing if I could get it running on the NAS.

The following is a fairly detailed discussion of how I ended up building this, but it assumes you know how to access the SSH interface of the NAS, are comfortable with Linux, and have some familiarity with Docker.

I'm also not going into the specifics of port forwarding. That's an exercise left to the reader.

Challenges:

The NAS I'm using is a PR4100 by Western Digital. I love it for several reasons, but a primary one is its Docker engine. The first problem is that the Docker instance is old… I mean old, circa 2015! (1.7.0)

The second challenge is that the PR4100's base OS is a highly customized Linux build. It uses Debian's package manager, and I could have started hacking around, but I didn't want to touch the OS. The NAS gets semi-regular updates directly from Western Digital, and I was too nervous about changing anything that might interfere with that.

WireGuard itself has a couple of features that make it incredibly powerful yet problematic for this scenario:

…WireGuard lives inside the Linux kernel, which means that secure networking can be very high-speed

WireGuard securely encapsulates IP packets over UDP. You add a WireGuard interface, configure it with your private key and your peers' public keys, and then you send packets across it.

So it's both fast and straightforward. But I didn't want to touch the NAS OS or install anything into the kernel… hmm, OK, so we're stuck running in userspace. Is this even possible?
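For contrast, here's roughly what that kernel-mode flow looks like on a normal Linux box (a sketch, assuming wireguard-tools and a kernel with the WireGuard module, which is exactly what I couldn't use on the NAS):

```shell
# Kernel-mode WireGuard setup (requires root and the wg kernel module).
# The interface name, port, addresses, and key files are illustrative.
ip link add dev wg0 type wireguard
wg set wg0 listen-port 32000 private-key ./privatekey \
    peer <peer_public_key> allowed-ips 10.1.10.2/32
ip addr add 10.1.10.1/24 dev wg0
ip link set wg0 up
```

The userspace approach below replaces only the first line's kernel device with a TUN interface driven by Boringtun; the wg/ip configuration steps stay the same.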

Cloudflare to the rescue:

[Image: Odd name, great implementation!]

It's called BoringTun: a userspace implementation of the WireGuard® protocol, written in Rust.

This isn't just good; it's brilliant. Rust as a language is memory-safe and fast. It's never going to be as fast as a kernel integration, but that level of speed isn't necessary; ~10–20 Mbps would be more than enough for my use cases.

Build:

My first challenge was getting my hands on the Boringtun binary to run it inside a container. From the install instructions:

You can install this project using cargo:

cargo install boringtun

Hmm… OK, well, I'd like to keep this container as small as possible, and installing cargo (and its dependencies) isn't going to keep the whole container anywhere under 200MB!

Docker to the rescue, let's use a simple base image and go from there. Here's my build Dockerfile:

FROM lsiobase/ubuntu:bionic
RUN \
 echo "***** install cargo ****" && \
 apt-get update && \
 apt-get install -y cargo && \
 echo "***** install boringtun via cargo ****" && \
 cargo install boringtun

This installs Ubuntu and cargo, then builds Boringtun; afterwards we can copy the binary out of the container for later use:

docker build -t boringtun:build -f ./Dockerfile_build ./
docker create --name=devops boringtun:build
docker start devops
docker cp devops:/root/.cargo/bin/boringtun $build/boringtun/bin/

The nice thing about a NAS is you have loads of persistent storage! $build in the above is mapped to: /mnt/HD/HD_a2/my/build/directory/
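As an aside: this build-then-copy dance is exactly what modern Docker's multi-stage builds automate. That feature needs Docker 17.05+, so it's not an option on the NAS's 1.7 engine, but on a newer host the same result is a single Dockerfile (a sketch, untested on the PR4100):

```dockerfile
# Stage 1: build boringtun with cargo (same base image as above)
FROM lsiobase/ubuntu:bionic AS build
RUN apt-get update && \
    apt-get install -y cargo && \
    cargo install boringtun

# Stage 2: copy only the compiled binary into a clean image
FROM lsiobase/ubuntu:bionic
COPY --from=build /root/.cargo/bin/boringtun /data/boringtun
```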

Container:

Now that we've built and saved the binary, we can start making the final vpn_server container. Here's my Dockerfile:

FROM lsiobase/ubuntu:bionic
# set wg privilege level
ENV WG_SUDO=1
# add local files
COPY bin/ /data/
COPY root/ /
RUN \
 echo "***** install wireguard tools ****" && \
 apt-get update && \
 apt-get install -y wireguard-tools -o APT::Install-Suggests=0 -o APT::Install-Recommends=0  && \
 echo "***** install ip mgmt tools *****" && \
 apt-get install -y iproute2 iptables
# ports and volumes
EXPOSE 32000/udp
VOLUME /config

Some things to note here:

  • The WG_SUDO=1 environment variable is important so Boringtun will start with the correct privileges (rather than drop them on startup)
  • The Boringtun binary we extracted earlier should be in the local bin/ directory, which the Dockerfile copies to /data inside the container.
  • EXPOSE declares the UDP listen port, and VOLUME the config mount point.

Now let's build and start the container with the correct parameters:

docker build -t boringtun:prod -f ./Dockerfile_prod ./
docker create --name=vpn_server \
 --cap-add=NET_ADMIN \
 --device=/dev/net/tun:/dev/net/tun \
 -e PUID=1000 -e PGID=1000 \
 -e TZ=America/Los_Angeles \
 -p 32000:32000/udp \
 -v $build/config/boringtun/:/config \
 --restart always boringtun:prod
docker start vpn_server

This creates the container, grants the relevant capability (NET_ADMIN), and mounts the required TUN device node from the host (/dev/net/tun).
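If the container later refuses to create the interface, it's usually because one of those two grants is missing. A quick sanity check from a shell inside the container (a hypothetical diagnostic, not part of the build):

```shell
# The TUN device node must exist inside the container; it comes from the
# --device=/dev/net/tun mapping on the docker create line above.
test -c /dev/net/tun && echo "tun device present" || echo "tun device missing"
```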

At this point, I was able to connect from the iOS client (https://apps.apple.com/us/app/wireguard/id1441195209), yet couldn't get any traffic back. It took me what felt like an eternity to figure out why:

INSIDE the container I had to run:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

This allows traffic to come in through the wg0 interface and be NAT'd correctly on the way out of the container's eth0 interface.
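For anyone hitting the same wall, two things worth checking inside the container (a diagnostic sketch; Docker normally enables forwarding on its bridge network, but forwarding is the other half of making MASQUERADE useful):

```shell
# Packets arriving on wg0 can only be relayed to eth0 if forwarding is on;
# a working setup should report: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward

# Confirm the MASQUERADE rule actually landed in the NAT table
iptables -t nat -L POSTROUTING -n -v
```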

Getting it all running:

So now we've got a correctly configured container that will properly route traffic. The last step is to bring up the interface and assign an IP/subnet.

I'm using an awesome image from linuxserver.io: lsiobase/ubuntu:bionic

If you've not used a container from these folks, I can't endorse them strongly enough: fantastic, small, quality builds. They also take advantage of s6, so keeping your process running is trivial. Here's the run script for this container:

#!/usr/bin/with-contenv bash
exec /config/boringtun.sh

…and you probably want to see the associated shell script too:

#! /bin/bash
declare -a pids
waitPids() {
    # Wait on the first tracked pid, then drop it from the array,
    # until none remain (otherwise this loop would never terminate).
    while [ ${#pids[@]} -ne 0 ]; do
        wait "${pids[0]}"
        pids=("${pids[@]:1}")
    done
}
addPid() {
    local desc=$1
    local pid=$2
    echo "$desc -- $pid"
    pids=("${pids[@]}" "$pid")
}
printf "Starting Wireguard userspace (boringtun)\n"
/data/boringtun -f wg0 &
addPid "boringtun startup" $!
# This gives Boringtun some time to properly startup
while [[ $(wg showconf wg0 2>&1) == *"Protocol not supported"  ]]; do
 sleep 1
done
# Load the configuration into the wireguard interface
printf "Setting config: \
        $(wg setconf wg0 /config/boringtun.conf)\n"
# Config VPN network on the server side, and sets the .1 address.
printf "Setting network interface options: \
        $(ip addr add 10.1.10.1/24 dev wg0)\n"
# This brings up the wireguard interface inside the container 
printf "Bringing interface up: \
        $(ip link set wg0 up)\n"
# Enable the iptables rule to support traffic redirection
printf "Configuring iptables: \
        $(iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE)\n";
# Display the running configs
printf "Active network interfaces:\n$(ip addr)\n"
printf "Active network route table:\n$(ip route)\n"
printf "Active Wireguard config:\n$(wg show wg0 2>&1)\n"
waitPids

If you're interested in the genesis of that waitPids function, check out this.
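To see the pattern in isolation: the script backgrounds a process, records its PID with addPid, and waitPids then blocks until every recorded PID has exited. A minimal, standalone demo (hypothetical, not part of the container):

```shell
#!/usr/bin/env bash
# Track background children by PID and wait for all of them to finish.
declare -a pids

addPid() {
    local desc=$1
    local pid=$2
    echo "$desc -- $pid"
    pids=("${pids[@]}" "$pid")
}

waitPids() {
    # Wait on the first tracked pid, then drop it, until none remain.
    while [ ${#pids[@]} -ne 0 ]; do
        wait "${pids[0]}"
        pids=("${pids[@]:1}")
    done
}

sleep 0.2 & addPid "short sleep" $!
sleep 0.4 & addPid "longer sleep" $!
waitPids
echo "all children exited"
```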

I'm not going to go into the details of s6 for this write-up; I'll leave that to you. But it might be useful to see the directory structure of root/ that s6 expects, used in the above Dockerfile and copied to /:

root
  ->etc
      ->services.d
              ->boringtun
                      -> run

WireGuard config:

I'd suggest you take a moment to look over the awesome WireGuard quickstart tutorial: https://www.wireguard.com/quickstart/, but the specifics of setting up WireGuard are fairly simple. Here's my boringtun.conf file:

[Interface]
ListenPort = 32000
PrivateKey = <super_secret_private key>
[Peer]
PublicKey = iWCu/UNSuf8AXry7ltiL+aNJQcAyXHs8lR5S0dMReX8=
AllowedIPs = 10.1.10.2/32
[Peer]
PublicKey = tXQiY09jNURYPtUy16E2jAYERZX3+PNZ1XEMKP2WC0I=
AllowedIPs = 10.1.10.3/32
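The key material above comes from the standard WireGuard tooling (already present in the container via wireguard-tools). A quick sketch of generating a keypair for the server or a new peer; the file names are arbitrary:

```shell
# Keep the private key unreadable by other users
umask 077
# Generate a private key and derive its public key from it
wg genkey | tee peer.key | wg pubkey > peer.pub
# The public key is the value that goes in a [Peer] PublicKey line
cat peer.pub
```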

The configuration of the server itself is done in the above shell script, with the config file referenced via /config inside the container.
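For completeness, the matching config on a client (e.g. the iOS app) would look roughly like this. Everything here is illustrative (the endpoint hostname in particular) except the 10.1.10.2 address, which must match the AllowedIPs line the server holds for that peer; AllowedIPs = 0.0.0.0/0 on the client is what routes all traffic over the VPN:

```
[Interface]
PrivateKey = <client_private_key>
Address = 10.1.10.2/32

[Peer]
PublicKey = <server_public_key>
Endpoint = my.home.example.com:32000
AllowedIPs = 0.0.0.0/0
```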

This is probably clear as mud at this point. But to help, here's my container state when everything is running correctly:

[Image: container IP configuration]

Conclusion:

So how big did the container end up being?:

[Image: Image size]

~166MB, not too shabby! But more importantly, what sort of throughput am I seeing on a WiFi network outside the home, with all client traffic routing over the VPN?

[Image: Speedy!!]

Damn! Almost 60 Mbps… that's plenty for streaming away from home!